Sunday, August 04, 2024

SPACE

The rotation of a nearby star stuns astronomers



University of Helsinki
Image: A nearby star, V889 Herculis, rotates fastest at a latitude of about 40 degrees. (Credit: Jani Närhi, University of Helsinki)





Astronomers from the University of Helsinki have found that the rotational profile of a nearby star, V889 Herculis, differs considerably from that of the Sun. The observation provides insights into fundamental stellar astrophysics and helps in understanding the activity of the Sun, its spot structures and eruptions.

The Sun rotates fastest at the equator, and its rotation rate slows down at higher latitudes, reaching its slowest at the polar regions. But the nearby Sun-like star V889 Herculis, some 115 light years away in the constellation of Hercules, rotates fastest at a latitude of about 40 degrees, while both the equator and the polar regions rotate more slowly.

A similar rotational profile has not been observed for any other star. The result is stunning because stellar rotation has been considered a well-understood fundamental physical parameter, yet such a profile has not been predicted even in computer simulations.

– We applied a newly developed statistical technique to the data of a familiar star that has been studied at the University of Helsinki for years. We did not expect to see such anomalies in stellar rotation. The anomalies in the rotational profile of V889 Herculis indicate that our understanding of stellar dynamics and magnetic dynamos is insufficient, explains researcher Mikko Tuomi, who coordinated the research.

Dynamics of a ball of plasma

The target star V889 Herculis is much like a young Sun, telling a story about the history and evolution of the Sun. Tuomi emphasises that it is crucial to understand stellar astrophysics in order to, for instance, predict activity-induced phenomena on the Solar surface, such as spots and eruptions.

Stars are spherical structures in which matter is in the state of plasma, consisting of charged particles. They are dynamical objects that hang in a balance between the pressure generated by nuclear reactions in their cores and their own gravity. Unlike many planets, they have no solid surfaces.

Stellar rotation is not constant across latitudes, an effect known as differential rotation. It is caused by hot plasma rising to the star's surface via a phenomenon called convection, which in turn affects the local rotation rate. This is because angular momentum must be conserved, and convection occurs perpendicular to the rotational axis near the equator whereas it is parallel to the axis near the poles.
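For context, solar-like differential rotation is often summarized with a simple latitude-dependent law. This is a conventional parameterization used for the Sun, not the specific model used in this study:

```latex
% Conventional solar-like differential rotation law (illustrative parameterization)
\Omega(\theta) = \Omega_{\mathrm{eq}}\left(1 - \alpha \sin^{2}\theta\right)
```

Here Omega_eq is the equatorial angular velocity, theta is the latitude, and a positive alpha describes solar-like shear, fastest at the equator. A profile that peaks near 40 degrees, as reported for V889 Herculis, cannot be reproduced by this simple form.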

However, many factors such as stellar mass, age, chemical composition, rotation period, and magnetic field have effects on the rotation and give rise to variations in the differential rotation profiles.

A statistical method for determining rotational profile

Thomas Hackman, a docent of astronomy who participated in the research, explains that until now the Sun has been the only star for which studying the rotational profile has been possible.

– Stellar differential rotation is a very crucial factor that has an effect on the magnetic activity of stars. The method we have developed opens a new window into the inner workings of other stars.

The astronomers at the Department of Particle Physics and Astrophysics of the University of Helsinki determined the rotational profiles of two nearby young stars by applying new statistical modelling to long-baseline brightness observations. They modelled the periodic variations in the observations by accounting for the differences in apparent spot movement at different latitudes. The spot movement then enabled them to estimate the rotational profiles of the stars.
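As a rough illustration of the underlying idea, and not of the authors' actual largest-spot statistics method, the sketch below assumes a simple differential rotation law and shows how starspots at different latitudes would imprint slightly different periods on a light curve. All parameter values and names are hypothetical.

```python
import numpy as np

# Illustrative only: a solar-like differential rotation law.
# omega_eq: equatorial angular velocity (rad/day), alpha: shear parameter.
def omega(lat_deg, omega_eq=2 * np.pi / 1.5, alpha=0.1):
    """Angular velocity at a given latitude (degrees)."""
    lat = np.radians(lat_deg)
    return omega_eq * (1.0 - alpha * np.sin(lat) ** 2)

def spot_period(lat_deg):
    """Rotation period (days) a spot at this latitude would imprint on the light curve."""
    return 2 * np.pi / omega(lat_deg)

# A spot at a different latitude shows up as a slightly different photometric period.
for lat in (0, 20, 40, 60, 80):
    print(f"latitude {lat:2d} deg -> apparent period {spot_period(lat):.3f} days")
```

Tracking how the apparent period of the dominant spots changes over many rotation cycles is what, in essence, lets long-baseline photometry constrain the rotational profile.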

The second of the target stars, LQ Hydrae in the constellation of Hydra, was found to rotate much like a rigid body: the rotation appeared unchanged from the equator to the poles, indicating that the latitudinal differences are very small.

Observations from the Fairborn Observatory

The researchers base their results on observations of the target stars from the Fairborn Observatory. The brightnesses of the stars have been monitored with robotic telescopes for around 30 years, which provides insights into the behaviour of the stars over a long period of time.

Tuomi appreciates the work of senior astronomer Gregory Henry of Tennessee State University, United States, who leads the Fairborn observational campaign.

– For many years, Greg's project has been extremely valuable in understanding the behaviour of nearby stars. Whether the motivation is to study the rotation and properties of young, active stars or to understand the nature of stars with planets, the observations from Fairborn Observatory have been absolutely crucial. It is amazing that even in the era of great space-based observatories we can obtain fundamental information on the stellar astrophysics with small 40cm ground-based telescopes.

The target stars V889 Herculis and LQ Hydrae are both roughly 50 million years old and in many respects resemble the young Sun. They both rotate very rapidly, with rotation periods of only about one and a half days. For this reason, the long-baseline brightness observations contain many rotational cycles. The stars were selected as targets because they have been observed for decades and because both have been studied actively at the University of Helsinki.

Original article: Mikko Tuomi, Jyri J. Lehtinen, Gregory W. Henry, Thomas Hackman. Characterising the stellar differential rotation based on largest-spot statistics from ground-based photometry. A&A, 2024. DOI: https://doi.org/10.1051/0004-6361/202449861

Further information

Researcher Mikko Tuomi, University of Helsinki, tel +358 40 500 7454, mikko.tuomi@helsinki.fi

University Researcher Thomas Hackman, University of Helsinki, thomas.hackman@helsinki.fi

Postdoctoral Researcher Jyri Lehtinen, University of Turku (FINCA) and University of Helsinki, jyri.j.lehtinen@helsinki.fi


What no one has seen before – simulation of gravitational waves from failing warp drive



University of Potsdam




Warp drives are staples of science fiction and could, in principle, propel spaceships faster than the speed of light. Unfortunately, there are many problems with constructing them in practice, such as the requirement for an exotic type of matter with negative energy. Other issues with the warp drive metric include the difficulty, for those on board, of actually controlling and deactivating the bubble.

This new research is the result of a collaboration between specialists in gravitational physics at Queen Mary University of London, the University of Potsdam, the Max Planck Institute (MPI) for Gravitational Physics in Potsdam and Cardiff University. Whilst it doesn't claim to have cracked the warp drive code, it explores the theoretical consequences of a warp drive “containment failure” using numerical simulations. Dr Katy Clough of Queen Mary University of London, the first author of the study explains: “Even though warp drives are purely theoretical, they have a well-defined description in Einstein’s theory of General Relativity, and so numerical simulations allow us to explore the impact they might have on spacetime in the form of gravitational waves.”

The results are fascinating. The collapsing warp drive generates a distinct burst of gravitational waves, a ripple in spacetime that could be detectable by gravitational wave detectors that normally target black hole and neutron star mergers. Unlike the chirps from merging astrophysical objects, this signal would be a short, high-frequency burst, and so current detectors wouldn't pick it up. However, future higher-frequency instruments might, and although no such instruments have yet been funded, the technology to build them exists. This raises the possibility of using these signals to search for evidence of warp drive technology, even if we can't build one ourselves.

Prof Tim Dietrich from the University of Potsdam comments: “For me, the most important aspect of the study is the novelty of accurately modelling the dynamics of negative energy spacetimes, and the possibility of extending the techniques to physical situations that can help us better understand the evolution and origin of our universe, or the processes at the centre of black holes.”

Warp speed may be a long way off, but this research already pushes the boundaries of our understanding of exotic spacetimes and gravitational waves. The researchers plan to investigate how the signal changes with different warp drive models.

Link to Publication: Clough, Katy, Tim Dietrich, and Sebastian Khan. 2024. What No One Has Seen before: Gravitational Waveforms from Warp Drive Collapse. The Open Journal of Astrophysics 7 (July). https://doi.org/10.33232/001c.121868

Link to Press Release of Queen Mary University of London: https://www.qmul.ac.uk/media/news/2024/se/new-study-simulates-gravitational-waves-from-failing-warp-drive.html

Link to AEI Research Highlight: https://www.aei.mpg.de/1171367/what-no-one-has-seen-before-new-study-simulates-gravitational-waves-from-failing-warp-drive

Contact:
Prof. Dr. Tim Dietrich, Institute of Physics and Astronomy and Max Planck Institute for Gravitational Physics (Albert Einstein Institute) Potsdam
Tel.: +49 331 977-230160
E-Mail: tim.dietrich@uni-potsdam.de

Researchers discover graphene flakes in lunar soil sample




Science China Press
Image: Structural and compositional characterization of graphene flakes in the CE-5 lunar soil sample. (a) Laser scanning confocal microscopy image and height distribution. (b) Backscattered electron SEM image and (c) Raman spectra corresponding to different areas. (d) TEM image, Cs-corrected HAADF-STEM image, and the corresponding EELS Fe L-edge spectra for different areas. (e) Cs-corrected HRTEM images. (f) HAADF-STEM image. (g) EDS elemental maps showing spatial distributions of the elements. (h) HRTEM images of the corresponding regions marked in (f). (Credit: ©Science China Press)




A recent study, published in National Science Review, revealed the existence of naturally formed few-layer graphene, carbon atoms arranged in a thin, stacked-layer structure, in a lunar soil sample.

The team, led by professors Meng Zou, Wei Zhang and senior engineer Xiujuan Li from Jilin University and Wencai Ren from the Chinese Academy of Sciences’ Institute of Metal Research, analyzed an olive-shaped sample of lunar soil, about 2.9 mm by 1.6 mm, retrieved by the Chang’e 5 mission in 2020.

According to the team, scientists generally believe that some 1.9 percent of interstellar carbon exists in the form of graphene, with its shape and structure determined by the process of its formation.

Using a special spectrometer, researchers found an iron compound that is closely related to the formation of graphene in a carbon-rich section of the sample. They then used advanced microscopic and mapping technologies to confirm that the carbon content in the sample comprised “flakes” that have two to seven layers of graphene.

The team proposed that the few-layer graphene may have formed during volcanic activity in the early stages of the moon’s existence, catalyzed by the solar wind, which can stir up lunar soil, and by iron-containing minerals that helped transform the structure of the carbon atoms. They added that impact processes from meteorites, which create high-temperature and high-pressure environments, may also have led to the formation of graphene.

On Earth, graphene is becoming a star in materials science due to its special optical, electrical and mechanical properties. The team believes their study could help develop ways to produce the material inexpensively and expand its use.

“The mineral-catalyzed formation of natural graphene sheds light on the development of low-cost scalable synthesis techniques of high-quality graphene,” the paper said. “Therefore, a new lunar exploration program may be promoted, and some forthcoming breakthroughs can be expected.”

###

See the article:

Discovery of natural few-layer graphene on the Moon

https://doi.org/10.1093/nsr/nwae211

First full 2-D spectral image of aurora borealis from a hyperspectral camera



Successful acquisition of auroral spectral images



Peer-Reviewed Publication

National Institutes of Natural Sciences

Figure 1: Color differences in the aurora borealis observed with the new instrument. High-energy electrons make the aurora glow at lower altitudes, producing purple light. (Credit: National Institute for Fusion Science)




Auroras are natural luminous phenomena caused by electrons precipitating from space and interacting with the upper atmosphere. Most of the observed light consists of emission lines of neutral or ionized nitrogen and oxygen atoms and molecular emission bands, and the color is determined by the transition energy levels and by molecular vibrations and rotations. Auroras show a variety of characteristic colors, such as green and red, but there are multiple theories about the emission processes by which these colors appear in different types of auroras; to understand them, the light must be broken down into its spectrum. Comprehensive spectral observations, in both time and space, are needed to study auroral emission processes and colors in detail.

In a complementary line of research, the National Institute for Fusion Science (NIFS) has been observing the emission of light from plasma in a magnetic field in the Large Helical Device (LHD). Various systems have been developed to measure the spectrum of light emitted from the plasma, and the processes of energy transport and atomic and molecular emission have been studied. By applying this technology and knowledge to auroral observations, we can contribute to the understanding of auroral luminescence and to the study of the energy production process of the electrons that give rise to it.

Conventional aurora observation uses optical filters to obtain images of specific colors, which has the disadvantage of a limited acquisition wavelength range and low spectral resolution. A hyperspectral camera, on the other hand, has the advantage of obtaining the spatial distribution of the spectrum with high wavelength resolution. In 2018 we started a plan to develop a high-sensitivity hyperspectral camera by combining a lens spectrometer and an EMCCD camera, which had been used in the LHD, with an image-sweep optical system using galvanometer mirrors.

It took five years from the planning stage to develop a highly sensitive system capable of measuring auroras at 1 kR (1 kilo-Rayleigh). In May 2023, the system was installed at KEOPS at the Swedish Space Corporation's Esrange Space Center in Kiruna, Sweden, which is located just below the auroral belt and can observe auroras frequently. The system succeeded in acquiring hyperspectral images of the auroras, that is, two-dimensional images of them broken down by wavelength. Observations began in September 2023, and the data have been acquired remotely from Japan.

Auroral emission intensities and observation positions were calibrated based on the positions of stars obtained after installation, and the data will be made publicly available and ready to use. Using observation data from an auroral break-up that occurred on October 20, 2023, we clarified what kind of data can be obtained with this system. In the process, we estimated the energy of the incoming electrons from the intensity ratio of light at different wavelengths, which led to the publication of this paper.

Figure 1 shows the difference in the color of the aurora when electrons arrive at low energies and speeds and when they arrive at high energies and speeds. When the electrons are slow, they emit strong red light at high altitudes. On the other hand, when the electrons are fast, they penetrate to lower altitudes and emit a strong green or purple light. Figure 2 is a two-dimensional image of auroras resolved into each color (wavelength) observed with the state-of-the-art hyperspectral camera. The different distribution by color was observed because the elements that produce the light differ according to the height at which the light is generated. Thus, we have succeeded in developing a device that can obtain two-dimensional images of the various colors produced by the aurora borealis.

From the ratio of the intensity of the red light (630 nm) to the purple light (427.8 nm), we can determine the energy of the incoming electrons that caused the aurora. Using the hyperspectral camera (HySCAI), which is capable of fine spectroscopy of light, the energy of the incoming electrons during the auroral break-up observed at this time was estimated to be 1600 electron volts (an energy equivalent to the voltage of about 1000 dry-cell batteries). There were no major discrepancies with previously known values, indicating that the observations were valid. HySCAI is expected to contribute to solving important auroral questions such as the distribution of precipitating electrons, their relationship to auroral color, and the mechanism of auroral emission.
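The dry-cell comparison is simple arithmetic, assuming a nominal 1.5 V per cell: an electron accelerated through one volt gains one electron volt, so

```latex
\frac{1600\ \mathrm{V}}{1.5\ \mathrm{V}/\text{cell}} \approx 1.1 \times 10^{3}\ \text{cells} \approx 1000\ \text{dry-cell batteries.}
```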

For the first time, a detailed spatial distribution of color (a two-dimensional image), a hyperspectral image of the aurora borealis, has been obtained. Many previous auroral studies have used a system in which light is selected by a filter that passes only certain wavelengths. This system compensates for the disadvantage of observing only a limited number of wavelengths. By observing detailed changes in the spectrum, it will contribute to the advancement of auroral research. On the other hand, the system will also provide insight into energy transport due to the interaction between charged particles and waves in a magnetic field, which is also attracting attention in fusion plasmas. It is expected that this interdisciplinary study will be advanced in cooperation with universities and research institutes in Japan and abroad, and will contribute to the development of worldwide aurora research.

Scientists pin down the origins of the moon’s tenuous atmosphere



The barely-there lunar atmosphere is likely the product of meteorite impacts over billions of years, a new study finds.



Massachusetts Institute of Technology



While the moon lacks any breathable air, it does host a barely-there atmosphere. Since the 1980s, astronomers have observed a very thin layer of atoms bouncing over the moon’s surface. This delicate atmosphere — technically known as an “exosphere” — is likely a product of some kind of space weathering. But exactly what those processes might be has been difficult to pin down with any certainty.

Now, scientists at MIT and the University of Chicago say they have identified the main process that formed the moon’s atmosphere and continues to sustain it today. In a study appearing in Science Advances, the team reports that the lunar atmosphere is primarily a product of “impact vaporization.”

In their study, the researchers analyzed samples of lunar soil collected by astronauts during NASA’s Apollo missions. Their analysis suggests that over the moon’s 4.5-billion-year history its surface has been continuously bombarded, first by massive meteorites, then more recently, by smaller, dust-sized “micrometeoroids.” These constant impacts have kicked up the lunar soil, vaporizing certain atoms on contact and lofting the particles into the air. Some atoms are ejected into space, while others remain suspended over the moon, forming a tenuous atmosphere that is constantly replenished as meteorites continue to pelt the surface.

The researchers found that impact vaporization is the main process by which the moon has generated and sustained its extremely thin atmosphere over billions of years. 

“We give a definitive answer that meteorite impact vaporization is the dominant process that creates the lunar atmosphere,” says the study’s lead author, Nicole Nie, an assistant professor in MIT’s Department of Earth, Atmospheric, and Planetary Sciences. “The moon is close to 4.5 billion years old, and through that time the surface has been continuously bombarded by meteorites. We show that eventually, a thin atmosphere reaches a steady state because it’s being continuously replenished by small impacts all over the moon.” 

Nie’s co-authors are Nicolas Dauphas, Zhe Zhang, and Timo Hopp at the University of Chicago, and Menelaos Sarantos at NASA Goddard Space Flight Center.

Weathering’s roles

In 2013, NASA sent an orbiter around the moon to do some detailed atmospheric reconnaissance. The Lunar Atmosphere and Dust Environment Explorer (LADEE, pronounced "laddie") was tasked with remotely gathering information about the moon’s thin atmosphere, surface conditions, and any environmental influences on the lunar dust. 

LADEE’s mission was designed to determine the origins of the moon’s atmosphere. Scientists hoped that the probe’s remote measurements of soil and atmospheric composition might correlate with certain space weathering processes that could then explain how the moon’s atmosphere came to be. 

Researchers suspect that two space weathering processes play a role in shaping the lunar atmosphere: impact vaporization and “ion sputtering” — a phenomenon involving solar wind, which carries energetic charged particles from the sun through space. When these particles hit the moon’s surface, they can transfer their energy to the atoms in the soil and send those atoms sputtering and flying into the air.  

“Based on LADEE’s data, it seemed both processes are playing a role,” Nie says. “For instance, it showed that during meteorite showers, you see more atoms in the atmosphere, meaning impacts have an effect. But it also showed that when the moon is shielded from the sun, such as during an eclipse, there are also changes in the atmosphere’s atoms, meaning the sun also has an impact. So, the results were not clear or quantitative.”

Answers in the soil

To more precisely pin down the lunar atmosphere’s origins, Nie looked to samples of lunar soil collected by astronauts throughout NASA’s Apollo missions. She and her colleagues at the University of Chicago acquired 10 samples of lunar soil, each measuring about 100 milligrams — a tiny amount that she estimates would fit into a single raindrop. 

Nie sought to first isolate two elements from each sample: potassium and rubidium. Both elements are “volatile,” meaning that they are easily vaporized by impacts and ion sputtering. Each element exists in the form of several isotopes. An isotope is a variation of an element that has the same number of protons but a different number of neutrons. For instance, potassium can exist as one of three isotopes, each one having one more neutron than, and therefore being slightly heavier than, the last. Similarly, there are two isotopes of rubidium.

The team reasoned that if the moon’s atmosphere consists of atoms that have been vaporized and suspended in the air, lighter isotopes of those atoms should be more easily lofted, while heavier isotopes would be more likely to settle back in the soil. Furthermore, scientists predict that impact vaporization, and ion sputtering, should result in very different isotopic proportions in the soil. The specific ratio of light to heavy isotopes that remain in the soil, for both potassium and rubidium, should then reveal the main process contributing to the lunar atmosphere’s origins.
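Isotopic shifts of this kind are conventionally reported in delta notation, the deviation of a sample's isotope ratio from a reference standard, expressed in per mil (parts per thousand). For potassium, for example, the standard geochemical convention is (this is general convention, not a formula quoted from the paper):

```latex
% Per-mil deviation of the sample's 41K/39K ratio from a reference standard
\delta^{41}\mathrm{K} = \left(\frac{\left(^{41}\mathrm{K}/^{39}\mathrm{K}\right)_{\mathrm{sample}}}{\left(^{41}\mathrm{K}/^{39}\mathrm{K}\right)_{\mathrm{standard}}} - 1\right) \times 1000
```

A positive value indicates enrichment in the heavy isotope, which is the kind of signature the team looked for in the Apollo soils.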

With all that in mind, Nie analyzed the Apollo samples by first crushing the soils into a fine powder, then dissolving the powders in acids to purify and isolate solutions containing potassium and rubidium. She then passed these solutions through a mass spectrometer to measure the various isotopes of both potassium and rubidium in each sample. 

In the end, the team found that the soils contained mostly heavy isotopes of both potassium and rubidium. The researchers were able to quantify the ratio of heavy to light isotopes of both potassium and rubidium, and by comparing both elements, they found that impact vaporization was most likely the dominant process by which atoms are vaporized and lofted to form the moon’s atmosphere. 

“With impact vaporization, most of the atoms would stay in the lunar atmosphere, whereas with ion sputtering, a lot of atoms would be ejected into space,” Nie says. “From our study, we now can quantify the role of both processes, to say that the relative contribution of impact vaporization versus ion sputtering is about 70:30 or larger.” In other words, 70 percent or more of the moon’s atmosphere is a product of meteorite impacts, whereas the remaining 30 percent is a consequence of the solar wind. 

“The discovery of such a subtle effect is remarkable, thanks to the innovative idea of combining potassium and rubidium isotope measurements along with careful, quantitative modeling,” says Justin Hu, a postdoc who studies lunar soils at Cambridge University, who was not involved in the study. “This discovery goes beyond understanding the moon’s history, as such processes could occur and might be more significant on other moons and asteroids, which are the focus of many planned return missions.”

“Without these Apollo samples, we would not be able to get precise data and measure quantitatively to understand things in more detail,” Nie says. “It’s important for us to bring samples back from the moon and other planetary bodies, so we can draw clearer pictures of the solar system’s formation and evolution.”

This work was supported, in part, by NASA and the National Science Foundation.

###

Written by Jennifer Chu, MIT News

AI ‘hallucinations’ tackled by University of Bristol researchers





University of Bristol





Significant strides in addressing the issue of AI 'hallucinations' and improving the reliability of anomaly detection algorithms in Critical National Infrastructures (CNI) have been made by scientists based in Bristol’s School of Computer Science.

Recent advances in Artificial Intelligence (AI) have highlighted the technology's potential in anomaly detection, particularly within sensor and actuator data for CNIs. However, these AI algorithms often require extensive training times and struggle to pinpoint specific components in an anomalous state. Furthermore, AI's decision-making processes are frequently opaque, leading to concerns about trust and accountability.

To help combat this, the team brought in a number of measures to boost efficiency including:

  1. Enhanced Anomaly Detection: Researchers employed two cutting-edge anomaly detection algorithms with significantly shorter training times and faster detection capabilities, while maintaining comparable efficiency rates. These algorithms were tested using a dataset from the operational water treatment testbed, SWaT, at the Singapore University of Technology and Design.
  2. Explainable AI (XAI) Integration: To enhance transparency and trust, the team integrated eXplainable AI (XAI) models with the anomaly detectors. This approach allows for better interpretation of AI decisions, enabling human operators to understand and verify AI recommendations before making critical decisions. The effectiveness of various XAI models was also evaluated, providing insights into which models best aid human understanding.
  3. Human-Centric Decision Making: The research emphasizes the importance of human oversight in AI-driven decision-making processes. By explaining AI recommendations to human operators, the team aims to ensure that AI acts as a decision-support tool rather than an unquestioned oracle. This methodology introduces accountability, as human operators make the final decisions based on AI insights, policy, rules, and regulations.
  4. Scoring System Development: A meaningful scoring system is being developed to measure the perceived correctness and confidence of the AI's explanations. This score aims to assist human operators in gauging the reliability of AI-driven insights.

These advancements not only improve the efficiency and reliability of AI systems in CNIs but also ensure that human operators remain integral to the decision-making process, enhancing overall accountability and trust.
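A minimal sketch of the detect-then-explain workflow described in points 1 to 3 above, using a generic off-the-shelf anomaly detector and a simple perturbation-based attribution. This is an illustration of the general approach, not the WaXAI implementation; the sensor names and values are hypothetical.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical sensor readings from a water-treatment testbed (rows = time steps).
sensor_names = ["flow_rate", "tank_level", "valve_position", "pump_current"]
normal_data = rng.normal(loc=[5.0, 0.8, 0.5, 2.0], scale=0.1, size=(500, 4))

# 1. Train a fast anomaly detector on normal operation only.
detector = IsolationForest(random_state=0).fit(normal_data)

# 2. Score a new observation; lower scores are more anomalous.
observation = np.array([[5.0, 0.8, 0.5, 6.0]])  # pump_current is far out of range
score = detector.decision_function(observation)[0]

# 3. Crude explanation: replace each feature with its normal mean and see how much
#    the anomaly score recovers. Large recovery = that sensor drove the alarm.
baseline = normal_data.mean(axis=0)
contributions = {}
for i, name in enumerate(sensor_names):
    patched = observation.copy()
    patched[0, i] = baseline[i]
    contributions[name] = detector.decision_function(patched)[0] - score

print(f"anomaly score: {score:.3f} (more negative = more anomalous)")
for name, delta in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {name:15s} contribution to alarm: {delta:+.3f}")
# A human operator reviews this ranking before acting, keeping the final decision human.
```

The per-sensor ranking stands in for the richer explanations that dedicated XAI models provide; the point is that the operator sees why the alarm fired, not just that it fired.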

Dr Sarad Venugopalan, co-author of the study, explained: “Humans learn by repetition over a longer period of time and work for shorter hours without being error prone. This is why, in some cases, we use machines that can carry out the same tasks in a fraction of the time and at a reduced error rate.

“However, this automation, involving cyber and physical components, and subsequent use of AI to solve some of the issues brought by the automation, is treated as a black box.  This is detrimental because it is the personnel using the AI recommendation that is held accountable for the decisions made by them, and not the AI itself.

“In our work, we use explainable AI, to increase transparency and trust, so the personnel using the AI is informed why the AI made the recommendation (for our domain use case) before a decision is made."

This research is part of the MSc thesis of Mathuros Kornkamon, under the supervision of Dr Sridhar Adepu.

Dr Adepu added: “This work discovers how WaXAI is revolutionizing anomaly detection in industrial systems with explainable AI. By integrating XAI, human operators gain clear insights and enhanced confidence to handle security incidents in critical infrastructure."

The paper won Best Paper Award at the 10th ACM Cyber-Physical System Security Workshop at the ACM ASIACCS2024 conference.

Paper:

‘WaXAI: Explainable Anomaly Detection in Industrial Control Systems and Water Systems’ by Mathuros Kornkamon, Sarad Venugopalan and Sridhar Adepu in Proceedings of the 10th ACM Cyber-Physical System Security Workshop, pp. 3-15. 2024


 

Global transformation through climate-friendly energy production, AI, and bioeconomy: G20 science academies recommend measures for a sustainable economy




Leopoldina

The United Nations’ (UN) Agenda 2030 sets out clear objectives for a globally sustainable society. Scientific knowledge plays an important role in this transformation. In advance of the G20 Summit on 18 and 19 November 2024 in Rio de Janeiro/Brazil, the G20 science academies (Science20), including the German National Academy of Sciences Leopoldina, have published the joint statement “Science for Global Transformation”. In the statement the academies recommend specific measures to advance the UN’s Sustainable Development Goals in areas including energy transition, artificial intelligence, bioeconomy, health, and social justice.

“Basic research and scientific innovation can help foster sustainable and resilient societies. International cooperation is particularly important for dealing with global challenges such as climate change and AI,” says Professor (ETHZ) Dr Gerald Haug, President of the German National Academy of Sciences Leopoldina. “A particular success is the inclusion of a CO2 price in the measures recommended by the academies, as this is an important market-based tool to help reduce emissions. An affordable and clean energy system remains the foundation for a sustainable economy. We can see that science already offers numerous solutions in this area, such as innovative forms of electricity generation from renewable sources, the development of effective means of energy storage, as well as carbon capture and storage technologies.”

In their statement, the science academies of the 20 leading industrialised and emerging countries stress the need for low-emission forms of energy such as solar and wind energy. Recommended measures to help achieve climate neutrality include the use of biofuels, sustainable hydrogen, energy storage, and establishing closed-loop recycling processes for the materials used in sustainable energy systems. The G20 science academies also voice their support for the introduction of market-based controlling instruments such as a global CO2 price.

The statement emphasises that the potential offered by artificial intelligence should be used, but also stresses the importance of creating international framework conditions for this technology. All nations should use AI in a fair and transparent manner. Investment in data infrastructure and high-performance computing centres is also necessary. Furthermore, it is important to enable citizens to make informed decisions about AI and to be aware of its potential, limits, and possible risks.

The bioeconomy has the potential to reconcile economic growth and environmental protection. The science academies’ statement explains how investments in research and infrastructure for a sustainable bioeconomy, as well as international cooperation and the integration of local knowledge, can help humanity deal with the challenges posed by climate change and the loss of biodiversity, as well as challenges relating to poverty and health.

The statement also contains recommendations on health and social justice, for example the One Health approach that takes an integrated, unifying view of the health of humans, animals, and ecosystems. In addition to global access to vaccinations, global information exchange, health monitoring, and the development of antimicrobial substances and measures to tackle antimicrobial resistance (AMR), the statement also urges greater focus on mental illnesses, as mental health still receives insufficient attention in many countries.

The statement was prepared under the leadership of the Brazilian Academy of Sciences (Academia Brasileira de Ciências) and with the participation of Leopoldina members. On Tuesday, 30 July 2024 the Brazilian Academy of Sciences officially presented the statement to the Brazilian G20 Presidency.

The joint statement can be downloaded here: www.leopoldina.org/s20

The Leaders’ Summit of the 20 major industrialised and emerging countries (G20) in Rio de Janeiro/Brazil on 18 and 19 November 2024 is the eighth summit to which the scientific community is contributing through the dialogue forum “Science20”. The scientific advice process was launched for the G20 summit in 2017 as part of the German G20 Presidency. Under the leadership of the Leopoldina, the national science academies of the G20 states developed recommendations to improve global health. The G7 summits have also been accompanied by the science academies for more than ten years.

The Leopoldina on X: www.twitter.com/leopoldina

About the German National Academy of Sciences Leopoldina
As the National Academy of Sciences, the Leopoldina provides independent and science-based policy advice on socially relevant questions. For this purpose the Academy produces interdisciplinary statements based on scientific findings. Potential courses of action are given in these publications. The ultimate decisions are for the democratically elected government to make. The experts that write the statements work on a voluntary basis and are unbiased. The Leopoldina represents the German scientific community in international committees, including, for example, the scientific consultation for the annual G7 and G20 summits. It has around 1,700 members from over 30 countries and combines expertise from almost all areas of research. The Leopoldina was founded in 1652 and was appointed the German National Academy of Sciences in 2008. The Leopoldina is an independent scientific academy and is dedicated to the common good.






Shared awareness could lead to greener, more ethical, and useful smart machines


The EMERGE project proposes a collaborative shared awareness as a more reliable, energy-efficient, and ethically tractable framework for the coordination between artificial systems and humans than an artificial general intelligence



Da Vinci Labs

Image: Collaborative shared awareness. Shared awareness makes artificial agents easier for human operators to monitor and control. It also enables systems to work better together, even if they were designed by different companies. Shared awareness could help autonomous vehicles avoid collisions, logistics robots coordinate the delivery of packages, or AI systems analyse complex patient medical histories to come up with useful treatment recommendations. (Credit: EMERGE Project)





The future deployment of AIs and robots in our everyday work and life, from fully automated vehicles to delivery robots and AI assistants, could proceed either by building increasingly capable agents that can perform many tasks, or by building simpler, narrower agents designed for specific tasks.

The former is most often in the spotlight: AI systems incrementally trained to acquire generalised competencies across many domains, with the eventual development of an artificial general intelligence, which is ultimately linked with the possibility, or fear, of artificial entities gaining consciousness.

This raises several concerns. An automated face recognition system may be acceptable for assisting in border control or asylum request processing when it works within defined boundaries and with appropriately high security criteria. Endowing a domain-general AI, one also capable of speaking and of taking health, educational or military decisions, with such capacities is a threat. Besides that, operating a general-purpose system incurs significant energy and emission costs, as evidenced by generative architectures such as large language models.

The alternative vision is defended by researchers from the EMERGE project consortium in their recent publication in the journal Advanced Intelligent Systems. The authors argue that when orchestrating numerous simultaneous or sequential actions across different specialised AI systems, the presence of consciousness as a private integrative mechanism within each system is neither essential nor sufficient.

They propose that specialised AI systems tailored to specific tasks can be more reliable, energy-efficient, ethically tractable, and overall more effective than a general intelligence. Such systems mostly raise a problem of effective coordination between different systems and humans, for which, the authors argue, simpler ways of sharing awareness are sufficient.

“What is needed is the capacity for selectively sharing relevant states with other AI systems in order to facilitate coordination and cooperation - or a collaborative shared awareness for short. As the word ‘awareness’ is sometimes used as a synonym for consciousness, it is important to stress that collaborative awareness is significantly different from consciousness,“ says Ophelia Deroy, Professor of Philosophy and Neuroscience at the Ludwig Maximilian University in Munich, Germany.

First, shared awareness is not a private state, by definition. If a swarm of bots has a shared awareness of the whole factory floor, this shared awareness is not reducible to the representation of space that each individual agent has; it is an emergent property. Second, shared awareness can be transient, with states shared only when there is a need to coordinate individual goals or cooperate on a common goal. Third, shared awareness can be selective regarding which states are relevant to share with others. And while the dominant views of consciousness hold that it is integrated or unified, shared awareness can be partitioned across different agents: one system may need to share spatial information with another system, energy levels with its controller, and other aspects, such as its confidence, with other systems or their users.
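As a loose illustration of selective, partitioned state sharing, the toy sketch below shows agents publishing only the fields relevant to a given coordination task to a shared board. This is an illustration of the concept, not an EMERGE design; all names and values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class SharedBoard:
    """A transient, shared view assembled from whatever each agent chooses to publish."""
    entries: dict = field(default_factory=dict)

    def publish(self, agent_id, **selected_state):
        # Each agent shares only the fields relevant to the current coordination task.
        self.entries.setdefault(agent_id, {}).update(selected_state)

    def view(self, *fields):
        # Other agents (or a human operator) read a partitioned slice of the shared state.
        return {aid: {k: v for k, v in st.items() if k in fields}
                for aid, st in self.entries.items()}

board = SharedBoard()
# A delivery robot shares its position with peers, and its battery level with its controller.
board.publish("robot_A", position=(12.4, 3.1))
board.publish("robot_A", battery=0.35)
board.publish("robot_B", position=(11.9, 4.0), confidence=0.8)

print(board.view("position"))                # spatial info for collision avoidance
print(board.view("battery", "confidence"))   # operational info for monitoring
```

The shared view is not stored inside any single agent, it exists only while agents publish to it, and different readers see different partitions of it, mirroring the three properties described above.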

“Shared awareness makes artificial agents easier for human operators to monitor and control. It also enables systems to work better together, even if they were designed by different companies. Shared awareness could help autonomous vehicles avoid collisions, logistics robots coordinate the delivery of packages, or AI systems analyse complex patient medical histories to come up with useful treatment recommendations,” says Sabine Hauert, Professor of Swarm Engineering at the University of Bristol, UK.

About the EMERGE project

The EMERGE project will deliver a new philosophical, mathematical, and technological framework to demonstrate, both theoretically and experimentally, how a collaborative awareness – a representation of shared existence, environment and goals – can arise from the interactions of elemental artificial entities. Learn more: https://eic-emerge.eu/

An international effort to define intelligence, consciousness and more: efforts to create consensus definitions for diverse intelligent systems




Cortical Labs
Figure 1: Initial key terms, most applicable fields, and core approach toward consensus. (A) Proposed key terms to define. (B) Proposed most applicable specific fields in which the nomenclature guide will be used; however, others may also find this work useful. (C) A mixed-method approach with a modified Delphi method. This approach entails an initial round with pre-selected open-ended questions (i), strategic refinement and categorization (ii), and collaborative consultation (iii) in an iterative manner (ii and iii) until a suitable level of consensus is achieved (iv). If consensus is not reached on any specific terms, a weighted majority voting system will be implemented to reach a conclusion (v). (Credit: Cortical Labs)




A call for collaboration to define the language in all AI-related spaces, with a focus on 'diverse intelligent systems' that include AI (artificial intelligence), LLMs (large language models) and biological intelligences, is underway. The study was led by Cortical Labs, the biological computing startup that created 'Dishbrain', and brought together scientists, ethicists and researchers from the United Kingdom, Canada, the USA, the EU, Australia and Singapore. Together they have proposed a pathway forward to unify the language in this rapidly growing and controversial area.

Whether progress is silicon-based such as the use of large language models, or through synthetic biology methods such as the development of organoids, a clear need for a community-based approach to seeking consensus on nomenclature is now vital. 

Commenting on the collaboration, Brett Kagan, Chief Scientific Officer at Cortical Labs, said: “We’re extremely excited about collaborating with the industry on a study that is both timely and essential. Ultimately, the purpose of this collaboration is to create a critical field guide for researchers, across a broad range of fields, who are engaged in the development of diverse generally intelligent systems. In what is a rapidly evolving space, such a guide doesn’t yet exist.”

Other scientists involved in the effort, such as Professor Ge Wang from RPI, USA share similar excitement: “Currently, multimodal multitask foundation models are under rapid development via digital communication but this approach is subject to major limitations. In this big picture, our proposed efforts will be instrumental.”

Toward a nomenclature consensus

Language used to describe specific phenomena for scientific and public discourse is complex and can be highly contentious for emerging science and technology. Rapidly growing fields aiming to create generally intelligent systems are controversial with disagreements, confusion, and ambiguity pervading discussions around the semantics used to describe this myriad of technologies. 

Even 15 years ago, for example, at least 71 distinct definitions of “intelligence” had been identified. The diverse technologies and disciplines that contribute toward the shared goal of creating generally intelligent systems further amplify disparate definitions used for any given concept. Today it is increasingly impractical for researchers to explicitly re-define every term that could be considered ambiguous, imprecise, interchangeable or seldom formally defined in each paper. 

A common language is needed to recognise, predict, manipulate, and build cognitive (or pseudo-cognitive) systems in unconventional embodiments that do not share straightforward aspects of structure or origin story with conventional natural species. Previous work proposing nomenclature guidelines has generally been highly field-specific and developed by selected experts, with little opportunity for broader community engagement.

It’s for this reason that researchers and scientists in related fields are being invited to collaborate, broadly agree upon, and adopt nomenclature for this field. The purpose of the work is to provide utility and nuance to the discussion and offer authors an option to use language explicitly, unambiguously, and consistently, insofar as rapidly emerging fields will allow, through the adoption of nomenclature adhering to a theory-agnostic standard.

The collaboration will seek to define a non-exhaustive list of key terms (Figure 1A). The study to establish nomenclature consensus will be most applicable to a number of specific fields (Figure 1B) including, but not limited to, artificial intelligence, autonomous systems, consciousness research, machine learning, organoid intelligence and robotics. However, other fields beyond those listed may also derive value from the approach toward consensus.

The collaboration will be carried out using a mixed method approach with a modified Delphi method (Figure 1C). This approach entails an initial round with pre-selected open-ended questions (i), strategic refinement and categorisation (ii), and collaborative consultation (iii) in an iterative manner (ii and iii) until a suitable level of consensus is achieved (iv). If consensus is not reached on any specific terms, a weighted majority voting system will be implemented to reach a conclusion (v).
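If consensus stalls, the fallback described above is a weighted majority vote. The sketch below shows how such a tally might work; the weighting scheme, names and values are hypothetical placeholders, not part of the published protocol.

```python
from collections import defaultdict

def weighted_majority(votes, weights):
    """votes: {participant: chosen_definition}; weights: {participant: weight}."""
    tally = defaultdict(float)
    for participant, choice in votes.items():
        tally[choice] += weights.get(participant, 1.0)
    # The definition with the largest total weight is adopted.
    return max(tally.items(), key=lambda kv: kv[1])

votes = {"expert_1": "definition_A", "expert_2": "definition_B",
         "expert_3": "definition_A", "expert_4": "definition_B"}
# Weights could reflect, for example, field expertise assessed in earlier Delphi rounds.
weights = {"expert_1": 1.0, "expert_2": 0.5, "expert_3": 1.0, "expert_4": 1.2}

winner, total = weighted_majority(votes, weights)
print(f"adopted: {winner} (total weight {total})")
```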

To participate, interested collaborators can register at: https://corticallabs.com/nomenclature.html

 

When the last pit is closed



Research project focuses on places of abandonment, loss and renewal



Goethe University Frankfurt






FRANKFURT. Utopian narratives and stories about a region’s future are the focus of “Waste/Land/Futures: intergenerational relations in places of abandonment and renewal across Europe” – a project the Volkswagen Foundation is funding with €1.6 million over the next four years. The project involves social scientists from various disciplines in Austria, Romania, Great Britain and Germany, i.e. the very countries where the regions under scrutiny are located. The sociological and cultural-anthropological study’s scientific coordinator is Dr. Anamaria Depner, who works at Goethe University Frankfurt’s Institute for Social Pedagogy and Adult Education.

One thing the regions studied have in common is demographic decline, caused by low birth rates, an ageing population and an imbalance between the number of people leaving the region and those coming to stay. While the causes of this can partly be found in structural change “below ground”, such as in Glasgow or the Saarland, where coal mining has ceased, other reasons lie “above ground”. Examples of the latter are the Romanian Danube Delta, where tourism booms in summer but comes to a standstill in winter, and the former open-cast mining area of Eisenerz in Austria. “These places are often the subject of conversations about the past, about decay and loss. But they also hold great potential for developing utopian visions of the future of European regions and the people who live there – and therefore for the future of Europe,” Depner explains.

To date, there has been little scientific research into the situation in the four regions. This is now set to change. However, far from researching “from the outside”, one of the team’s preferred methods is “participant observation”. Other co-creative methods are also employed, including Participatory Action Research (PAR), in which participants are involved in every step of the research process. The aim of such analyses is not to formulate instructions. Instead, following an initial evaluation, the researchers conduct interviews with families, including about how relationships between generations, and to places and locations, have changed and will continue to change. Literary texts, photographs and theater events will be developed with local residents as well as family members who have moved away, providing the basis for ideas for the future. At a later stage, the communities at the four different locations, each of which is accompanied by a different team of researchers, could also enter into discussions with each other.

Five researchers, including early career scientists, are part of the project. The enthusiasm of the four co-investigators, which was already evident in the extensive preliminary research, convinced the Volkswagen Foundation. The project is part of the foundation’s “Challenges and Potentials for Europe” program; Goethe University Frankfurt is involved in five of the program’s projects, more than any other university in Germany. The program’s scientific coordinator is Dr. Ewa Palenga-Möllenbeck from Goethe University’s Institute of Sociology. A total of 21 international projects will present their research at a conference held at Herrenhausen Palace in Hanover on Wednesday, September 4, 2024.

Study finds many cocoa products contaminated by heavy metals



George Washington University






Dark chocolate lovers may want to limit their consumption to an ounce a day to stay on the safe side, according to the authors

WASHINGTON (July 24, 2024) - A new study from George Washington University found that a disquieting percentage of cocoa products sold in the U.S. contain heavy metals at levels exceeding guidelines, with organic products showing higher concentrations.

GW researchers analyzed 72 consumer cocoa products, including dark chocolate, every other year over an eight-year period for contamination with lead, cadmium, and arsenic, heavy metals that pose a significant health hazard in sufficient amounts.

“We all love chocolate but it’s important to indulge with moderation as with other foods that contain heavy metals including large fish like tuna and unwashed brown rice,” said Leigh Frame, director of integrative medicine and associate professor of clinical research and leadership at the GW School of Medicine and Health Sciences. “While it's not practical to avoid heavy metals in your food entirely, you must be cautious of what you are eating and how much.”

The study was led by Leigh Frame and its lead author, Jacob Hands, a medical student researcher in the Frame-Corr Lab at the GW School of Medicine and Health Sciences.

The researchers used maximum allowable dose levels as thresholds to assess the extent of heavy metal contamination in an array of chocolate products found on grocery store shelves.

 

Key Findings: 

  • 43% of the products studied exceeded the maximum allowable dose level for lead.
  • 35% of the products studied exceeded the maximum allowable dose level for cadmium.
  • None of the products exceeded the maximum allowable dose level for arsenic.
  • Surprisingly, organic labeled products showed higher levels of both lead and cadmium compared to non-organic products. 

 

Key Takeaways: 

For the average consumer, consuming a single serving of these cocoa products may not pose significant health risks based on the median concentrations found. However, consuming multiple servings or combining consumption with other sources of heavy metals could lead to exposures that exceed the maximum allowable dose level. 

 

Foods with high lead levels may include animal foods that can bioaccumulate heavy metals (shellfish, organ meats) and foods or herbal supplements grown in contaminated soil and/or imported from countries with less regulation (e.g. China, Nigeria, India, Egypt). For cadmium, the main concerns are the same, with the addition of some seaweeds, especially hijiki seaweed. Consumers should be aware of potential cumulative exposure risks, particularly with cocoa products labeled organic, as they may have higher heavy metal concentrations. A serving of dark chocolate is typically one ounce and has generally been suggested to have benefits for cardiovascular health, cognitive performance, and chronic inflammation. However, the research is limited, and concerns about heavy metals have yet to be taken into account.
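As a back-of-the-envelope illustration of the cumulative-exposure point, the sketch below compares daily intake against a dose threshold. All numbers are hypothetical placeholders, not values reported in the study.

```python
# Hypothetical figures for illustration only; the study reports its own concentrations.
serving_oz = 1.0                 # typical dark-chocolate serving, in ounces
grams_per_oz = 28.35
lead_ug_per_g = 0.010            # placeholder lead concentration (micrograms per gram)
max_allowable_dose_ug = 0.5      # placeholder maximum allowable dose level (micrograms/day)

def daily_lead_exposure(servings_per_day):
    """Total daily lead intake from chocolate alone, in micrograms."""
    return servings_per_day * serving_oz * grams_per_oz * lead_ug_per_g

for servings in (1, 2, 4):
    exposure = daily_lead_exposure(servings)
    flag = "exceeds" if exposure > max_allowable_dose_ug else "below"
    print(f"{servings} serving(s): {exposure:.2f} ug/day ({flag} the placeholder limit)")
```

With these placeholder numbers a single serving stays under the limit while two or more servings exceed it, which is the qualitative pattern the authors warn about; other dietary sources of lead would add to the same daily total.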

The study, “A Multi-Year Heavy Metal Analysis of 72 Dark Chocolate and Cocoa Products in the USA” was published on July 31, 2024 in Frontiers in Nutrition.

 

-GW-

 


 

Beyond casualties: The enduring trauma of bereavement after armed conflicts



Max Planck Institute for Demographic Research
Image: Lifetime bereavement for the population alive in 2023, by birth cohort/age. The blue line shows the average number of conflict-related offspring deaths experienced over the life course by individuals at each age (per 1,000 individuals). The red line shows the same for the loss of parents. The purple line is the sum of the red and blue lines. Higher values indicate a higher prevalence of bereavement for individuals born in a given year who are alive in 2023. (Credit: MPIDR)



Armed conflicts claim more lives every day, and the number of people mourning those lost to conflict is greater still. What are the implications of a growing population of mourners, and how long will this grief persist in war-torn societies? In a recent study, researchers from the Max Planck Institute for Demographic Research (MPIDR), the CED – Centre for Demographic Studies, and the University of Washington examined the extent of conflict-related bereavement among immediate family members (parents and children) in a subset of countries experiencing high-intensity armed conflicts. The researchers also predicted how long and how intensely this bereavement is likely to persist in the population.

“When demographers study conflicts and wars, the focus is often how many people die, who the victims are, and how that affects things like life expectancy. The number of people who die in a conflict becomes a measure of its intensity,” explains Diego Alburez-Gutierrez, author and leader of the Kinship Inequalities Research Group at MPIDR. “But counting deaths alone overlooks a critical aspect of the human cost of war. For every person killed, there are relatives and friends who survive and grieve those deaths. These survivors are impacted by these traumatic experiences for the rest of their lives.”

The researchers aimed to quantify how common it was to experience the death of a child or a parent to war across 16 countries suffering the highest population loss due to conflicts from 1989 to 2023. They used data from the United Nations World Population Prospects, the Uppsala Conflict Data Program database, the United Nations Office for the Coordination of Humanitarian Affairs, and the B’Tselem project.

The enduring impact of war

The study highlights the most lethal conflicts of recent years: Syria, the State of Palestine, Afghanistan, and Ukraine. “We find that these populations will experience considerable levels of bereavement regardless of how these conflicts evolve in the future,” says Alburez-Gutierrez.

He draws two main conclusions: “If we focus only on deaths, we overlook a vast part of the population scarred by the loss of loved ones. The number of mourners far exceeds the number of fatalities,” he explains. For instance, on average, each conflict death leaves more than two relatives (parents and/or children) bereaved in Ukraine, more than 3.5 in the State of Palestine, and roughly four in Syria and in Afghanistan. By the end of 2023, an estimated one out of every 67 Palestinians had lost a child to the conflict over the course of their lives; the corresponding figures are one in every 20 individuals in Syria, one in every 65 in Afghanistan, and one in every 200 in Ukraine.

The projections for the future reveal another crucial finding: high levels of bereavement will remain even if all armed conflicts were to end immediately. “Looking ahead to 2050, even in a scenario in which there were no more conflict deaths after 2023, we estimate that one in every 142 Palestinians alive in 2050 will have experienced the death of a parent to conflict, and one in 200 the death of a child,” explains Emilio Zagheni, co-author and Director of the MPIDR. “In populations with high youth conflict mortality, such as in the State of Palestine, a significant number of bereaved parents aged 30 and older will carry the trauma of losing a child for the rest of their lives. In settings where combatant or older-age mortality is higher, such as in Ukraine, a large population of orphaned children will go through their lives with the scar of having lost a parent.”

"Longer and more lethal conflicts create larger populations of bereaved relatives. This has substantial negative impacts on the mental and physical health of survivors, reduces available emotional and economic support during critical life stages, and fosters adherence to extreme ideologies that hinder social and political reconciliation," adds Enrique Acosta, Co-author and researcher at the CED. "Our estimates of the population of individuals left bereaved by war can help design policies to support different groups of mourners based on their gender and age. Tailoring interventions to the needs of specific demographic groups is crucial for effective support."