Sunday, August 04, 2024

 

Global transformation through climate-friendly energy production, AI, and bioeconomy: G20 science academies recommend measures for a sustainable economy




Leopoldina

The United Nations’ (UN) 2030 Agenda sets out clear objectives for a globally sustainable society. Scientific knowledge plays an important role in this transformation. In advance of the G20 Summit on 18 and 19 November 2024 in Rio de Janeiro, Brazil, the G20 science academies (Science20), including the German National Academy of Sciences Leopoldina, have published the joint statement “Science for Global Transformation”. In the statement, the academies recommend specific measures to advance the UN’s Sustainable Development Goals in areas including the energy transition, artificial intelligence, bioeconomy, health, and social justice.

“Basic research and scientific innovation can help foster sustainable and resilient societies. International cooperation is particularly important for dealing with global challenges such as climate change and AI,” says Professor (ETHZ) Dr Gerald Haug, President of the German National Academy of Sciences Leopoldina. “A particular success is the inclusion of a CO2 price in the measures recommended by the academies, as this is an important market-based tool to help reduce emissions. An affordable and clean energy system remains the foundation for a sustainable economy. We can see that science already offers numerous solutions in this area, such as innovative forms of electricity generation from renewable sources, the development of effective means of energy storage, as well as carbon capture and storage technologies.”

In their statement, the science academies of the 20 leading industrialised and emerging countries stress the need for low-emission forms of energy such as solar and wind energy. Recommended measures to help achieve climate neutrality include the use of biofuels, sustainable hydrogen, and energy storage, as well as the establishment of closed-loop recycling processes for the materials used in sustainable energy systems. The G20 science academies also voice their support for the introduction of market-based control instruments such as a global CO2 price.

The statement emphasises that the potential offered by artificial intelligence should be used, but also stresses the importance of creating international framework conditions for this technology. All nations should use AI in a fair and transparent manner. Investment in data infrastructure and high-performance computing centres is also necessary. Furthermore, it is important to enable citizens to make informed decisions about AI and to be aware of its potential, limits, and possible risks.

The bioeconomy has the potential to reconcile economic growth and environmental protection. The science academies’ statement explains how investments in research and infrastructure for a sustainable bioeconomy, together with international cooperation and the integration of local knowledge, can help humanity deal with the challenges posed by climate change and the loss of biodiversity, as well as challenges relating to poverty and health.

The statement also contains recommendations on health and social justice, for example the One Health approach that takes an integrated, unifying view of the health of humans, animals, and ecosystems. In addition to global access to vaccinations, global information exchange, health monitoring, and the development of antimicrobial substances and measures to tackle antimicrobial resistance (AMR), the statement also urges greater focus on mental illnesses, as mental health still receives insufficient attention in many countries.

The statement was prepared under the leadership of the Brazilian Academy of Sciences (Academia Brasileira de Ciências) and with the participation of Leopoldina members. On Tuesday, 30 July 2024, the Brazilian Academy of Sciences officially presented the statement to the Brazilian G20 Presidency.

The joint statement can be downloaded here: www.leopoldina.org/s20

The Leaders’ Summit of the 20 major industrialised and emerging countries (G20) in Rio de Janeiro, Brazil, on 18 and 19 November 2024 is the eighth summit to which the scientific community is contributing through the dialogue forum “Science20”. The scientific advice process was launched for the G20 summit in 2017 as part of the German G20 Presidency. Under the leadership of the Leopoldina, the national science academies of the G20 states developed recommendations to improve global health. The science academies have also accompanied the G7 summits for more than ten years.

The Leopoldina on X: www.twitter.com/leopoldina

About the German National Academy of Sciences Leopoldina
As the National Academy of Sciences, the Leopoldina provides independent and science-based policy advice on socially relevant questions. For this purpose, the Academy produces interdisciplinary statements based on scientific findings. These publications outline potential courses of action; the ultimate decisions are for the democratically elected government to make. The experts who write the statements work on a voluntary basis and are unbiased. The Leopoldina represents the German scientific community in international committees, including, for example, the scientific consultation for the annual G7 and G20 summits. It has around 1,700 members from over 30 countries and combines expertise from almost all areas of research. The Leopoldina was founded in 1652 and was appointed the German National Academy of Sciences in 2008. The Leopoldina is an independent scientific academy and is dedicated to the common good.






Shared awareness could lead to greener, more ethical, and more useful smart machines


The EMERGE project proposes collaborative shared awareness as a framework for coordination between artificial systems and humans that is more reliable, energy-efficient, and ethically tractable than artificial general intelligence



Da Vinci Labs

Image: Collaborative Shared Awareness (Credit: EMERGE Project)





AIs and robots could be deployed in our everyday work and life, from fully automated vehicles to delivery robots and AI assistants, either by building increasingly capable agents that can perform many tasks or by building simpler, narrower agents designed for specific tasks.

The former is most often in the spotlight: AI systems incrementally trained to acquire generalised competencies across many domains, culminating in the eventual development of an artificial general intelligence, which is ultimately linked with the possibility - or fear - of artificial entities gaining consciousness.

This raises several concerns. An automated face recognition system may be acceptable for assisting in border control or asylum request processing when it works within defined boundaries and under appropriately strict security criteria. Endowing a domain-general AI, one also capable of speaking and of taking health, educational or military decisions, with such capacities is a threat. Besides that, operating a general-purpose system incurs significant energy and emission costs, as evidenced by generative architectures such as large language models.

The alternative vision is defended by researchers from the EMERGE project consortium in their recent publication in the journal Advanced Intelligent Systems. The authors argue that when orchestrating numerous simultaneous or sequential actions across different specialised AI systems, the presence of consciousness as a private integrative mechanism within each system is neither essential nor sufficient.

They propose that specialised AI systems tailored to specific tasks can be more reliable, energy-efficient, ethically tractable, and overall more effective than a general intelligence. Instead, these systems mainly raise a problem of effective coordination between different systems and humans, for which, they argue, simpler ways of sharing awareness are sufficient.

“What is needed is the capacity for selectively sharing relevant states with other AI systems in order to facilitate coordination and cooperation - or a collaborative shared awareness for short. As the word ‘awareness’ is sometimes used as a synonym for consciousness, it is important to stress that collaborative awareness is significantly different from consciousness,” says Ophelia Deroy, Professor of Philosophy and Neuroscience at the Ludwig Maximilian University in Munich, Germany.

First, shared awareness is not a private state, by definition. If a swarm of bots has a shared awareness of the whole factory floor, this shared awareness is not reducible to the representation of space that each individual agent has; it is an emergent property. Second, shared awareness can be merely transient, with states shared only when there is a need to coordinate individual goals or cooperate on a common goal. Third, shared awareness can be selective regarding which states are relevant to share with others. And while the dominant views of consciousness hold that it is integrated or unified, shared awareness can be partitioned across different agents: one system may need to share spatial information with another system, its energy levels with its controller, and other aspects such as its confidence with further systems or their users.
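To make these properties concrete, here is a minimal, purely illustrative Python sketch of agents that share only selected parts of their state, transiently and with different recipients. It is not the EMERGE framework itself; the class names, state keys, and message format are assumptions made for illustration only.

```python
# Illustrative sketch only: selective, transient state sharing between
# specialised agents. All names and structures are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    # Private internal state; never exposed wholesale.
    state: dict = field(default_factory=dict)

    def share(self, keys, recipient):
        """Share only the states deemed relevant to this recipient."""
        relevant = {k: v for k, v in self.state.items() if k in keys}
        recipient.receive(self.name, relevant)

    def receive(self, sender, shared):
        # Each recipient holds only its own partition of the shared view.
        self.state.setdefault("shared", {})[sender] = shared


vehicle = Agent("vehicle", {"position": (3, 7), "battery": 0.42, "confidence": 0.9})
planner = Agent("planner")

# Spatial information goes to the planner; battery level or confidence
# could instead be shared with a controller or a human operator.
vehicle.share({"position"}, planner)
print(planner.state["shared"])  # {'vehicle': {'position': (3, 7)}}
```

In this toy picture, the shared view is an emergent, partitioned record held by the recipients rather than a single unified state inside any one agent.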

“Shared awareness makes artificial agents easier for human operators to monitor and control. It also enables systems to work better together, even if they were designed by different companies. Shared awareness could help autonomous vehicles avoid collisions, logistics robots coordinate the delivery of packages, or AI systems analyse a complex patient medical history to come up with useful treatment recommendations,” says Sabine Hauert, Professor of Swarm Engineering at the University of Bristol, UK.

About the EMERGE project

The EMERGE project will deliver a new philosophical, mathematical, and technological framework to demonstrate, both theoretically and experimentally, how a collaborative awareness – a representation of shared existence, environment and goals – can arise from the interactions of elemental artificial entities. Learn more: https://eic-emerge.eu/

An international effort to define intelligence, consciousness, and more: creating consensus definitions for diverse intelligent systems




Cortical Labs
Figure 1: Initial key terms, most applicable fields, and core approach toward consensus. (A) Proposed key terms to define. (B) Proposed specific fields in which the nomenclature guide will be most applicable; others may also find this work useful. (C) A mixed-method approach with a modified Delphi method, described in detail below. (Credit: Cortical Labs)




A call for collaboration is underway to define the language used in all AI-related spaces, with a focus on 'diverse intelligent systems' that include artificial intelligence (AI), large language models (LLMs), and biological intelligences. The study was led by Cortical Labs - the biological computing startup that created 'Dishbrain' - and brought together scientists, ethicists, and researchers from the United Kingdom, Canada, the USA, the EU, Australia, and Singapore. Together they have proposed a pathway forward to unify the language in this rapidly growing and controversial area.

Whether progress is silicon-based, such as the use of large language models, or achieved through synthetic biology methods, such as the development of organoids, a community-based approach to seeking consensus on nomenclature is now vital.

Commenting on the collaboration, Brett Kagan, Chief Scientific Officer at Cortical Labs, said: “We’re extremely excited about collaborating with the industry on a study that is both timely and essential. Ultimately, the purpose of this collaboration is to create a critical field guide for researchers, across a broad range of fields, who are engaged in the development of diverse generally intelligent systems. In what is a rapidly evolving space, such a guide doesn’t yet exist.”

Other scientists involved in the effort, such as Professor Ge Wang from RPI, USA, share similar excitement: “Currently, multimodal multitask foundation models are under rapid development via digital communication, but this approach is subject to major limitations. In this big picture, our proposed efforts will be instrumental.”

Toward a nomenclature consensus

The language used to describe specific phenomena in scientific and public discourse is complex and can be highly contentious for emerging science and technology. Rapidly growing fields aiming to create generally intelligent systems are controversial, with disagreements, confusion, and ambiguity pervading discussions around the semantics used to describe this myriad of technologies.

Even 15 years ago, for example, at least 71 distinct definitions of “intelligence” had been identified. The diverse technologies and disciplines that contribute toward the shared goal of creating generally intelligent systems further amplify the disparity in the definitions used for any given concept. Today it is increasingly impractical for researchers to explicitly re-define, in each paper, every term that could be considered ambiguous, imprecise, interchangeable, or seldom formally defined.

A common language is needed to recognise, predict, manipulate, and build cognitive (or pseudo-cognitive) systems in unconventional embodiments that do not share straightforward aspects of structure or origin story with conventional natural species. Previous work proposing nomenclature guidelines is generally highly field-specific and developed by selected experts, with little opportunity for broader community engagement.

It is for this reason that researchers and scientists in related fields are being invited to collaborate on, broadly agree upon, and adopt nomenclature for this field. The purpose of the work is to provide utility and nuance to the discussion and offer authors an option to use language explicitly, unambiguously, and consistently, insofar as rapidly emerging fields will allow, through the adoption of nomenclature adhering to a theory-agnostic standard.

The collaboration will seek to define a non-exhaustive list of key terms (Figure 1A). The study to establish nomenclature consensus will be most applicable to a number of specific fields (Figure 1B) including, but not limited to, artificial intelligence, autonomous systems, consciousness research, machine learning, organoid intelligence and robotics. However, other fields beyond those listed may also derive value from the approach toward consensus.

The collaboration will be carried out using a mixed method approach with a modified Delphi method (Figure 1C). This approach entails an initial round with pre-selected open-ended questions (i), strategic refinement and categorisation (ii), and collaborative consultation (iii) in an iterative manner (ii and iii) until a suitable level of consensus is achieved (iv). If consensus is not reached on any specific terms, a weighted majority voting system will be implemented to reach a conclusion (v).
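As a purely illustrative sketch of the fallback step (v), the following Python snippet shows one way a weighted majority vote over candidate definitions could be tallied. The weighting scheme, ballot format, and function name are assumptions for illustration, not the study's actual procedure.

```python
# Illustrative sketch only: a hypothetical weighted majority vote of the kind
# that might resolve a term left without consensus after the Delphi rounds.
from collections import defaultdict


def weighted_majority(ballots):
    """Pick the candidate definition with the largest total weight.

    ballots: list of (candidate_definition, weight) pairs for one term.
    """
    totals = defaultdict(float)
    for definition, weight in ballots:
        totals[definition] += weight
    return max(totals, key=totals.get)


# Hypothetical example: three weighted votes on one contested term.
ballots = [("definition A", 1.0), ("definition B", 2.5), ("definition A", 2.0)]
print(weighted_majority(ballots))  # "definition A" wins with total weight 3.0
```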

To participate, interested collaborators can register at: https://corticallabs.com/nomenclature.html
