Friday, June 13, 2025

 SPACE/COSMOS

Revealing the lives of planet-forming disks



New observations of 30 planet-forming disks reveal how gas and dust behave over time and shape the evolution of exoplanet systems




University of Arizona

Artist’s concept of a planet-forming disk 

image: 

Artist’s concept of a planet-forming disk, like the thirty studied for the ALMA AGE-PRO survey. The lifetime of the gas within the disk determines the timescale for planetary growth.


Credit: NSF/AUI/NSF NRAO/S.Dagnello




An international team of astronomers, including researchers at the University of Arizona Lunar and Planetary Laboratory, has unveiled groundbreaking findings about the disks of gas and dust surrounding nearby young stars, using the powerful Atacama Large Millimeter/submillimeter Array, or ALMA.

The findings, published in 12 papers in a focus issue of the Astrophysical Journal, are part of an ALMA large program called the ALMA Survey of Gas Evolution of PROtoplanetary Disks, or AGE-PRO. AGE-PRO observed 30 planet-forming disks around sunlike stars to measure gas disk mass at different ages. The study revealed that gas and dust components in these disks evolve at different rates.

Prior ALMA observations have examined the evolution of dust in disks; AGE-PRO, for the first time, traces the evolution of gas, providing the first measurements of gas disk masses and sizes across the lifetime of planet-forming disks, according to the project's principal investigator, Ke Zhang of the University of Wisconsin-Madison. 

"Now we have both, the gas and the dust," said Ilaria Pascucci, a professor at planetary sciences at the U of A and one of three AGE-PRO co-principal investigators. "Observing the gas is much more difficult because it takes much more observing time, and that's why we have to go for a large program like this one to obtain a statistically significant sample." 

A protoplanetary disk swirls around its host star for several million years as its gas and dust evolve and dissipate, setting the timescale for giant planets to form. The disk's initial mass and size, as well as its angular momentum, have a profound influence on the type of planet it could form – gas giants, icy giants or mini-Neptunes – and migration paths of planets. The lifetime of the gas within the disk determines the timescale for the growth of dust particles to an object the size of an asteroid, the formation of a planet and finally the planet's migration from where it was born. 

In one of the survey's most surprising findings, the team discovered that gas and dust in these disks are consumed at different rates, so the gas-to-dust mass ratio shifts as a disk evolves: unlike the dust, which tends to remain inside the disk over a longer time span, the gas disperses relatively quickly at first and then more slowly as the disk ages. In other words, planet-forming disks blow off more of their gas when they're young.

Zhang said the most surprising finding is that although most disks dissipate after a few million years, the ones that survive have more gas than expected. This would suggest that gaseous planets like Jupiter have less time to form than rocky planets. 

ALMA's unique sensitivity allowed researchers to use faint molecular lines, characteristic wavelengths in a light spectrum that act as "fingerprints" identifying different species of gas molecules, to study the cold gas in these disks. The first large-scale chemical survey of its kind, AGE-PRO targeted 30 planet-forming disks in three star-forming regions, ranging from 1 million to 6 million years in age: Ophiuchus (the youngest), Lupus (1-3 million years old) and Upper Scorpius (the oldest). Using ALMA, AGE-PRO obtained observations of key tracers of gas and dust masses in disks spanning crucial stages of their evolution, from their earliest formation to their eventual dispersal. This ALMA dataset will serve as a comprehensive legacy library of spectral line observations for a large sample of disks at different evolutionary stages.

Dingshan Deng, a graduate student at LPL who is the lead author on one of the papers, provided the data reduction – essentially, the image analysis needed to turn the raw radio signals into images of the disks – for the star-forming region in the constellation Lupus (Latin for "wolf").

"Thanks to these new and long observations, we now have the ability to estimate and trace the gas masses, not only for the brightest and better studied disks in that region, but also the smaller and fainter ones," he said. "Thanks to the discovery of gas tracers in many disks where it hadn't been seen before, we now have a well-studied sample covering a wide range of disk masses in the Lupus star-forming region." 

"It took years to figure out the proper data reduction approach and analysis to produce the images used in this paper for the gas masses and in many other papers of the collaboration," Pascucci added. 

Carbon monoxide is the most widely used chemical tracer in protoplanetary disks, but to thoroughly measure the mass of gas in a disk, additional molecular tracers are needed. AGE-PRO used N2H+, or diazenylium, an ion used as an indicator for nitrogen gas in interstellar clouds, as an additional gas tracer to significantly improve the accuracy of measurements. ALMA's detections were also set up to receive spectral light signatures from other molecules, including formaldehyde, methyl cyanide and several molecular species containing deuterium, a hydrogen isotope. 

"Another finding that surprised us was that the mass ratio between the gas and dust tends to be more consistent across disks of different masses than expected," Deng said. "In other words, different-size disks will share a similar gas-to-dust mass ratio, whereas the literature suggested that smaller disks might shed their gas faster."

Funding for this study was provided by the National Science Foundation, the European Research Council, the Alexander von Humboldt Foundation and FONDECYT (Chile), among other sources. For full funding information, see the research paper.


The AGE-PRO program observed 30 protoplanetary disks around sun-like stars to measure how gas disk mass changes with age. The top row illustrates the previously known trend: the fraction of young stars with disks declines over time. The AGE-PRO study, for the first time, shows that the median gas disk mass of the surviving disks also decreases with age. Disks younger than 1 million years typically have several Jupiter masses of gas, but this drops rapidly to below 1 Jupiter mass in older systems. Interestingly, the surviving disks in the 1–3 million and 2–6 million-year age ranges appear to maintain similar median gas masses.

Credit

Carolina Agurto-Gangas and the AGE-PRO collaboration

Earth-based telescopes offer a fresh look at cosmic dawn



Small telescopes in Chile are first on Earth to cut through the cosmic noise



Johns Hopkins University

CLASS Telescopes in Chile 

image: 

CLASS telescopes can detect cosmic microwave light signals from the Cosmic Dawn. 


Credit: Deniz Valle and Jullianna Couto





For the first time, scientists have used Earth-based telescopes to look back over 13 billion years to see how the first stars in the universe affect light emitted from the Big Bang. 

Using telescopes high in the Andes mountains of northern Chile, astrophysicists have measured this polarized microwave light to create a clearer picture of one of the least understood epochs in the history of the universe, the Cosmic Dawn.

“People thought this couldn’t be done from the ground. Astronomy is a technology-limited field, and microwave signals from the Cosmic Dawn are famously difficult to measure,” said Tobias Marriage, project leader and a Johns Hopkins professor of physics and astronomy. “Ground-based observations face additional challenges compared to space. Overcoming those obstacles makes this measurement a significant achievement.”

Cosmic microwaves are mere millimeters in wavelength and very faint. The signal from polarized microwave light is about a million times fainter. On Earth, broadcast radio waves, radar, and satellites can drown out their signal, while changes in the atmosphere, weather, and temperature can distort it. Even in perfect conditions, measuring this type of microwave requires extremely sensitive equipment.

Scientists from the U.S. National Science Foundation’s Cosmology Large Angular Scale Surveyor, or CLASS, project used telescopes uniquely designed to detect the fingerprints left by the first stars in the relic Big Bang light—a feat that previously had only been accomplished by technology deployed in space, such as the U.S. National Aeronautics and Space Administration Wilkinson Microwave Anisotropy Probe (WMAP) and European Space Agency Planck space telescopes.

The new research, led by Johns Hopkins University and the University of Chicago, was published today in The Astrophysical Journal.

By comparing the CLASS telescope data with the data from the Planck and WMAP space missions, the researchers identified interference and narrowed in on a common signal from the polarized microwave light. 
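
To see why comparing independent data sets helps, consider a minimal, purely illustrative Python sketch (this is not the CLASS analysis pipeline; the map names and noise levels below are invented): two noisy maps of the same sky share the cosmological signal but carry independent instrument noise, so cross-correlating them retains the common signal while the uncorrelated noise averages away.

    import numpy as np

    # Illustrative only: two measurements of the same sky signal with independent noise.
    # Cross-correlating them keeps the common signal power; the uncorrelated noise averages out.
    rng = np.random.default_rng(0)
    npix = 100_000
    sky = rng.normal(0.0, 1.0, npix)                 # common sky signal (arbitrary units)
    map_ground = sky + rng.normal(0.0, 5.0, npix)    # hypothetical noisier ground-based map
    map_space = sky + rng.normal(0.0, 3.0, npix)     # hypothetical satellite map

    auto_ground = np.mean(map_ground * map_ground)   # signal power + noise power (about 26)
    cross = np.mean(map_ground * map_space)          # approaches the common signal power (about 1)
    print(f"auto-power: {auto_ground:.1f}  cross-power: {cross:.1f}")

The actual analysis involves polarization maps and a far more careful treatment of systematics, but the underlying idea of isolating a signal common to independent measurements is the same.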

Polarization happens when light waves run into something and then scatter.

“When light hits the hood of your car and you see a glare, that’s polarization. To see clearly, you can put on polarized glasses to take away glare,” said first author Yunyang Li, who was a PhD student at Johns Hopkins and then a fellow at the University of Chicago during the research. “Using the new common signal, we can determine how much of what we’re seeing is cosmic glare from light bouncing off the hood of the Cosmic Dawn, so to speak.”

After the Big Bang, the universe was a fog of electrons so dense that light energy was unable to escape. As the universe expanded and cooled, protons captured the electrons to form neutral hydrogen atoms, and microwave light was then free to travel through the space in between. When the first stars formed during the Cosmic Dawn, their intense energy ripped electrons free from the hydrogen atoms. The research team measured the probability that a photon from the Big Bang encountered one of the freed electrons on its way through the cloud of ionized gas and skittered off course. 

The findings will help better define signals coming from the residual glow of the Big Bang, or the cosmic microwave background, and form a clearer picture of the early universe. 

“Measuring this reionization signal more precisely is an important frontier of cosmic microwave background research,” said Charles Bennett, a Bloomberg Distinguished Professor at Johns Hopkins who led the WMAP space mission. “For us, the universe is like a physics lab. Better measurements of the universe help to refine our understanding of dark matter and neutrinos, abundant but elusive particles that fill the universe. By analyzing additional CLASS data going forward, we hope to reach the highest possible precision that’s achievable.”

Building on research published last year that used the CLASS telescopes to map 75% of the night sky, the new results also help solidify the CLASS team’s approach.

"No other ground-based experiment can do what CLASS is doing," says Nigel Sharp, program director in the NSF Division of Astronomical Sciences which has supported the CLASS instrument and research team since 2010. "The CLASS team has greatly improved measurement of the cosmic microwave polarization signal and this impressive leap forward is a testament to the scientific value produced by NSF's long-term support."

The CLASS observatory operates in the Parque Astronómico Atacama in northern Chile under the auspices of the Agencia Nacional de Investigación y Desarrollo.  

Other collaborators are at Villanova University, the NASA Goddard Space Flight Center, the University of Chicago, the National Institute of Standards and Technology, the Argonne National Laboratory, the Los Alamos National Laboratory, the Harvard-Smithsonian Center for Astrophysics, the University of Oslo, Massachusetts Institute of Technology, and the University of British Columbia. Collaborators in Chile are at the Universidad de Chile, Pontificia Universidad Católica de Chile, Universidad de Concepción, and the Universidad Católica de la Santísima Concepción.

The observatory is funded by the National Science Foundation, Johns Hopkins, and private donors.

Solar Orbiter gets world-first views of the Sun’s poles




European Space Agency

EUI video SolarOrbiter Sun south pole 

video: 

Solar Orbiter zooms into the Sun’s south pole

From Earth, we always look towards the Sun's equator. This year, the ESA-led Solar Orbiter mission broke free of this ‘standard’ viewpoint by tilting its orbit to 17° – out of the ecliptic plane where the planets and all other Sun-watching spacecraft reside. Now for the first time ever, we can clearly see the Sun’s unexplored poles.  

This video starts with the Sun as viewed from Earth. The grey images were taken by the SWAP extreme ultraviolet telescope on ESA’s Proba-2 spacecraft. The dashed red-green lines show the solar latitudes and longitudes (Stonyhurst grid), while the solid yellow lines show the centre of Earth’s view.  

We then rotate to Solar Orbiter’s tilted view, shown in yellow, and zoom in to the Sun’s south pole. Solar Orbiter used its Extreme Ultraviolet Imager (EUI) instrument to take these images.  

What you see is million-degree charged gas moving in the Sun’s outer atmosphere, the corona. Every now and then, a bright jet or plume lights up this gas.  

On 23 March 2025, Solar Orbiter was viewing the Sun from an angle of 17° below the Sun’s equator. Each orbit around the Sun, the spacecraft swings between solar latitudes of -17° and +17°, so it can study both the Sun’s south and north poles, and everything in between.  

Solar Orbiter is a space mission of international collaboration between ESA and NASA. The Extreme Ultraviolet Imager (EUI) instrument is led by the Royal Observatory of Belgium (ROB). ESA’s Proba-2 is a space mission dedicated to the demonstration of innovative technologies. Its extreme ultraviolet telescope (SWAP) is led by the Royal Observatory of Belgium.


Credit: ESA & NASA/Solar Orbiter/EUI Team, D. Berghmans (ROB) & ESA/Royal Observatory of Belgium




Thanks to its newly tilted orbit around the Sun, the European Space Agency-led Solar Orbiter spacecraft is the first to image the Sun’s poles from outside the ecliptic plane. Solar Orbiter’s unique viewing angle will change our understanding of the Sun’s magnetic field, the solar cycle and the workings of space weather.

Any image you have ever seen of the Sun was taken from around the Sun’s equator. This is because Earth, the other planets, and all other modern spacecraft orbit the Sun within a flat disc called the ecliptic plane. By tilting its orbit out of this plane, Solar Orbiter reveals the Sun from a whole new angle.

The video titled 'EUI video SolarOrbiter Sun south pole' compares Solar Orbiter’s view (in yellow) with the one from Earth (grey), on 23 March 2025. At the time, Solar Orbiter was viewing the Sun from an angle of 17° below the solar equator, enough to directly see the Sun’s south pole. Over the coming years, the spacecraft will tilt its orbit even further, so the best views are yet to come.

“Today we reveal humankind’s first-ever views of the Sun’s pole,” says Prof. Carole Mundell, ESA's Director of Science. “The Sun is our nearest star, giver of life and potential disruptor of modern space and ground power systems, so it is imperative that we understand how it works and learn to predict its behaviour. These new unique views from our Solar Orbiter mission are the beginning of a new era of solar science.”

All eyes on the Sun’s south pole

 

The collage titled 'Collage_SolarOrbiter_FirstPolarObservations' shows the Sun’s south pole as recorded on 16–17 March 2025, when Solar Orbiter was viewing the Sun from an angle of 15° below the solar equator. This was the mission’s first high-angle observation campaign, a few days before reaching its current maximum viewing angle of 17°.

The images shown in the collage were taken by three of Solar Orbiter’s scientific instruments: the Polarimetric and Helioseismic Imager (PHI), the Extreme Ultraviolet Imager (EUI), and the Spectral Imaging of the Coronal Environment (SPICE) instrument. Click on the image to zoom in and see video versions of the data.

“We didn’t know what exactly to expect from these first observations – the Sun’s poles are literally terra incognita,” says Prof. Sami Solanki, who leads the PHI instrument team from the Max Planck Institute for Solar System Research (MPS) in Germany.

The instruments each observe the Sun in a different way. PHI images the Sun in visible light (top left of the collage) and maps the Sun’s surface magnetic field (top centre). EUI images the Sun in ultraviolet light (top right), revealing the million-degree charged gas in the Sun’s outer atmosphere, the corona. The SPICE instrument (bottom row) captures light coming from different temperatures of charged gas above the Sun’s surface, thereby revealing different layers of the Sun's atmosphere.

By comparing and analysing the complementary observations made by these three imaging instruments, we can learn about how material moves in the Sun’s outer layers. This may reveal unexpected patterns, such as polar vortices (swirling gas) similar to those seen around the poles of Venus and Saturn.

These groundbreaking new observations are also key to understanding the Sun’s magnetic field and why it flips roughly every 11 years, coinciding with a peak in solar activity. Current models of the 11-year solar cycle cannot yet predict exactly when and how powerfully the Sun will reach its most active state.

Messy magnetism at solar maximum 

One of the first scientific findings from Solar Orbiter’s polar observations is the discovery that at the south pole, the Sun’s magnetic field is currently a mess. While a normal magnet has a clear north and south pole, the PHI instrument’s magnetic field measurements show that both north and south polarity magnetic fields are present at the Sun’s south pole.  

This happens only for a short time during each solar cycle, at solar maximum, when the Sun’s magnetic field flips and is at its most active. After the field flip, a single polarity should slowly build up and take over the Sun’s poles. Five to six years from now, the Sun will reach its next solar minimum, during which its magnetic field is at its most orderly and the Sun displays its lowest levels of activity.

“How exactly this build-up occurs is still not fully understood, so Solar Orbiter has reached high latitudes at just the right time to follow the whole process from its unique and advantageous perspective,” notes Sami.

PHI’s view of the full Sun’s magnetic field puts these measurements in context (see 'PHI_south-pole-Bmap' and 'PHI_global-Bmap_20250211-20250429'). The darker the colour (red/blue), the stronger the magnetic field is along the line of sight from Solar Orbiter to the Sun.

The strongest magnetic fields are found in two bands on either side of the Sun’s equator. The dark red and dark blue regions highlight active regions, where the magnetic field is concentrated in sunspots on the Sun’s surface (photosphere).

Meanwhile, both the Sun’s south and north poles are speckled with red and blue patches. This demonstrates that at small scales, the Sun’s magnetic field has a complex and ever-changing structure.  

SPICE measures movement for the first time

Another interesting ‘first’ for Solar Orbiter comes from the SPICE instrument. Being an imaging spectrograph, SPICE measures the light (spectral lines) sent out by specific chemical elements – including hydrogen, carbon, oxygen, neon and magnesium – at known temperatures. For the last five years, SPICE has used this to reveal what happens in different layers above the Sun’s surface.

Now for the first time, the SPICE team has also managed to use precise tracking of spectral lines to measure how fast clumps of solar material are moving. This is known as a ‘Doppler measurement’, named after the same effect that makes passing ambulance sirens change pitch as they drive by.
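
The physics behind such a measurement is the non-relativistic Doppler relation: the line-of-sight velocity is roughly v = c × (λ_observed − λ_rest) / λ_rest. The short Python sketch below is purely illustrative; the function name and wavelength values are hypothetical examples, not SPICE calibration numbers.

    # Illustrative only: line-of-sight velocity from a measured spectral-line shift.
    C_KM_S = 299_792.458  # speed of light in km/s

    def line_of_sight_velocity(lambda_obs_nm, lambda_rest_nm):
        """Positive value = shift to longer wavelength = motion away from the observer."""
        return C_KM_S * (lambda_obs_nm - lambda_rest_nm) / lambda_rest_nm

    # A hypothetical ultraviolet line shifted by 0.005 nm from a 103.0 nm rest wavelength:
    print(f"{line_of_sight_velocity(103.005, 103.0):.1f} km/s")  # about 14.6 km/s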

The resulting velocity map reveals how solar material moves within a specific layer of the Sun. By comparing the SPICE Doppler and intensity maps, you can directly compare the location and movement of particles (carbon ions) in a thin layer called the ‘transition region’, where the Sun's temperature rapidly increases from 10 000 °C to hundreds of thousands of degrees.

The SPICE intensity map reveals the locations of clumps of carbon ions. The SPICE Doppler map uses blue and red to indicate how fast the carbon ions are moving towards and away from the Solar Orbiter spacecraft, respectively. Darker blue and red patches correspond to material flowing faster due to small plumes or jets.

Crucially, Doppler measurements can reveal how particles are flung out from the Sun in the form of solar wind. Uncovering how the Sun produces solar wind is one of Solar Orbiter’s key scientific goals.

“Doppler measurements of solar wind setting off from the Sun by current and past space missions have been hampered by the grazing view of the solar poles. Measurements from high latitudes, now possible with Solar Orbiter, will be a revolution in solar physics,” says SPICE team leader, Frédéric Auchère from the University of Paris-Saclay (France).

The best is yet to come

These are just the first observations made by Solar Orbiter from its newly inclined orbit, and much of this first set of data still awaits further analysis. The complete dataset of Solar Orbiter's first full ‘pole-to-pole' flight past the Sun is expected to arrive on Earth by October 2025. All ten of Solar Orbiter’s scientific instruments will collect unprecedented data in the years to come.

“This is just the first step of Solar Orbiter's 'stairway to heaven': in the coming years, the spacecraft will climb further out of the ecliptic plane for ever better views of the Sun's polar regions. These data will transform our understanding of the Sun’s magnetic field, the solar wind, and solar activity,” notes Daniel Müller, ESA’s Solar Orbiter project scientist.

Notes for editors

Solar Orbiter is the most complex scientific laboratory ever to study our life-giving star, taking images of the Sun from closer than any spacecraft before and being the first to look at its polar regions.

In February 2025, Solar Orbiter officially began the ‘high latitude’ part of its journey around the Sun by tilting its orbit to an angle of 17° with respect to the Sun’s equator. In contrast, the planets and all other Sun-observing spacecraft orbit in the ecliptic plane, tilted at most 7° from the solar equator.

The only exception to this is the ESA/NASA Ulysses mission (1990–2009), which flew over the Sun's poles but did not carry any imaging instruments. Solar Orbiter's observations will complement Ulysses’ by observing the poles for the first time with telescopes, in addition to a full suite of in-situ sensors, while flying much closer to the Sun. Additionally, Solar Orbiter will monitor changes at the poles throughout the solar cycle.

Solar Orbiter will continue to orbit around the Sun at this tilt angle until 24 December 2026, when its next flight past Venus will tilt its orbit to 24°. From 10 June 2029, the spacecraft will orbit the Sun at an angle of 33°. (Overview of Solar Orbiter's journey around the Sun.) 

Solar Orbiter is a space mission of international collaboration between ESA and NASA, operated by ESA. Solar Orbiter's Polarimetric and Helioseismic Imager (PHI) instrument is led by the Max Planck Institute for Solar System Research (MPS), Germany. The Extreme Ultraviolet Imager (EUI) instrument is led by the Royal Observatory of Belgium (ROB). The Spectral Imaging of the Coronal Environment (SPICE) instrument is a European-led facility instrument, led by the Institut d'Astrophysique Spatiale (IAS) in Paris, France.


  

Solar Orbiter's world-first views of the Sun's south pole

This collage shows Solar Orbiter's view of the Sun's south pole on 16–17 March 2025, from a viewing angle of around 15° below the solar equator. This was the mission’s first high-angle observation campaign, a few days before reaching its current maximum viewing angle of 17°.  

Until now, spacecraft (and ground-based telescopes) have never been able to clearly see the Sun's poles, because none ever reached further than 7° from the Sun's equator. (The ESA/NASA Ulysses mission (1990–2009) flew over the Sun's poles but did not carry any imaging instruments.) 

These data were recorded by three of Solar Orbiter’s scientific instruments: the Polarimetric and Helioseismic Imager (PHI), the Extreme Ultraviolet Imager (EUI), and the Spectral Imaging of the Coronal Environment (SPICE) instrument. The instruments each observe the Sun in a different way. 

PHI captures the visible light sent out by iron particles (617.3 nanometre wavelength, top left), revealing the Sun's surface (photosphere). PHI also maps the Sun’s surface magnetic field along the spacecraft's line of sight (top centre). In this map, blue indicates positive magnetic field, pointing towards the spacecraft, and red indicates negative magnetic field.  

EUI images the Sun in ultraviolet light (17.4 nanometre wavelength, top right), revealing the million-degree charged gas in the Sun’s outer atmosphere, the corona. This high-energy light is sent out by charged iron particles.  

The SPICE instrument (various wavelengths, bottom row) captures light coming from different layers above the Sun's surface, from the chromosphere right above the Sun's surface all the way to the Sun's corona. Each image captured by SPICE shows different temperatures of charged gas, at 10 000 °C, 32 000 °C, 320 000 °C, 630 000 °C and 1 000 000 °C. 

By comparing and analysing the complementary observations made by these three imaging instruments, we can learn about how material moves in the Sun’s outer layers. This may reveal unexpected patterns, such as polar vortices (swirling gas) similar to those seen around the poles of Venus and Saturn.  

These groundbreaking new observations are also key to understanding the Sun’s magnetic field and why it flips roughly every 11 years, coinciding with a peak in solar activity. Current models of the 11-year solar cycle cannot yet predict exactly when and how powerfully the Sun will reach its most active state.

Solar Orbiter is a space mission of international collaboration between ESA and NASA. Solar Orbiter's Polarimetric and Helioseismic Imager (PHI) instrument is led by the Max Planck Institute for Solar System Research (MPS), Germany. The Extreme Ultraviolet Imager (EUI) instrument is led by the Royal Observatory of Belgium (ROB). The Spectral Imaging of the Coronal Environment (SPICE) instrument is a European-led facility instrument, led by the Institut d'Astrophysique Spatiale (IAS) in Paris, France.   

Credit

ESA & NASA/Solar Orbiter/PHI, EUI and SPICE Teams



PHI sees mixed-up magnetism at the Sun's south pole


In 2025, Solar Orbiter became the first Sun-watching spacecraft ever to get a clear look at the Sun's poles. It discovered that at the south pole, the Sun’s magnetic field is currently a mess.

This image shows a magnetic field map from Solar Orbiter's Polarimetric and Helioseismic Imager (PHI) instrument, centred on the Sun's south pole. Blue indicates positive magnetic field, pointing towards the spacecraft, and red indicates negative magnetic field.  

There are clear blue and red patches visible right up to the Sun's south pole, indicating that there are different magnetic polarities present (north and south). This happens only for a short time during each solar cycle, at solar maximum, when the Sun’s magnetic field flips and is at its most active. After the field flip, a single magnetic polarity should slowly build up and take over the Sun’s poles.  

Solar Orbiter will be watching the Sun throughout its calming-down phase. Five to six years from now, the Sun will reach its next solar minimum, during which its magnetic field is at its most orderly and the Sun has the lowest levels of activity.

Solar Orbiter is a space mission of international collaboration between ESA and NASA. Solar Orbiter's Polarimetric and Helioseismic Imager (PHI) instrument is led by the Max Planck Institute for Solar System Research (MPS), Germany.  

Credit

ESA & NASA/Solar Orbiter/PHI Team, J. Hirzberger (MPS)


PHI_global-Bmap_20250211-20250429 [VIDEO]

PHI's pole-to-pole view of the Sun's magnetic field


This video shows a magnetic map of the Sun's surface, recorded by the ESA-led Solar Orbiter mission between 11 February and 29 April 2025. Thanks to its new and uniquely tilted orbit, the spacecraft got its first-ever clear views of the Sun's south and north poles in this period.

The darker the colour (red/blue), the stronger the magnetic field is along the line of sight from Solar Orbiter to the Sun. These maps were recorded by the mission's Polarimetric and Helioseismic Imager (PHI) instrument.

The strongest magnetic fields are found in two bands on either side of the Sun’s equator. The dark red and dark blue regions highlight active regions, where the magnetic field is concentrated in sunspots on the Sun’s surface.

Meanwhile, both the Sun’s south and north poles are speckled with red and blue patches. This demonstrates that at small scales, the Sun’s magnetic field has a complex and ever-changing structure. 

Typically, you would expect to see a single magnetic polarity (north/south) dominate at each pole. The fact that both polarities are visible right up to the poles is thanks to the Sun being at ‘solar maximum’, the phase of the solar cycle where the Sun's magnetic field flips.  

Over the next few years, Solar Orbiter will witness how the Sun's magnetic field calms down to a more ordered state.  

Solar Orbiter is a space mission of international collaboration between ESA and NASA. Solar Orbiter's Polarimetric and Helioseismic Imager (PHI) instrument is led by the Max Planck Institute for Solar System Research (MPS), Germany.

Credit

ESA & NASA/Solar Orbiter/PHI Team, J. Hirzberger (MPS)

SPICE sees the Sun's south pole

The Spectral Imaging of the Coronal Environment (SPICE) instrument on the ESA-led Solar Orbiter spacecraft got its first good look at the Sun's south pole in March 2025.

With this intensity map and the associated Doppler map, we compare two of SPICE's views of the Sun's south pole, both based on measurements of the light sent out by charged particles (ions) of carbon at a temperature of 32 000 °C. These ions live in the transition region, a thin layer around the Sun where the temperature rapidly increases from around 10 000 °C to hundreds of thousands of degrees.

This intensity map reveals the locations of clumps of carbon ions.

Solar Orbiter is a space mission of international collaboration between ESA and NASA. The Spectral Imaging of the Coronal Environment (SPICE) instrument is a European-led facility instrument, led by the Institut d'Astrophysique Spatiale (IAS) in Paris, France.

Credit

ESA & NASA/Solar Orbiter/SPICE Team, M. Janvier (ESA) & J. Plowman (SwRI)

SPICE sees the Sun's south pole


The Spectral Imaging of the Coronal Environment (SPICE) instrument on the ESA-led Solar Orbiter spacecraft got its first good look at the Sun's south pole in March 2025.  

With this Doppler map and the associated intensity map, we compare two of SPICE's views of the Sun's south pole, both based on measurements of the light sent out by charged particles (ions) of carbon at a temperature of 32 000 °C. These ions live in the transition region, a thin layer around the Sun where the temperature rapidly increases from around 10 000 °C to hundreds of thousands of degrees.

This velocity map uses blue and red to indicate how fast the carbon ions are moving towards and away from the Solar Orbiter spacecraft, respectively. Darker blue and red patches are related to plasma flowing faster due to small plumes or jets.

Solar Orbiter is a space mission of international collaboration between ESA and NASA. The Spectral Imaging of the Coronal Environment (SPICE) instrument is a European-led facility instrument, led by the Institut d'Astrophysique Spatiale (IAS) in Paris, France.   

Credit

ESA & NASA/Solar Orbiter/SPICE Team, M. Janvier (ESA) & J. Plowman (SwRI)


NASA’s CODEX captures unique views of Sun’s outer atmosphere



NASA/Goddard Space Flight Center
CODEX coronal streamers 

image: 

The Sun continuously radiates material in the form of the solar wind. The Sun’s magnetic field shapes this material, sometimes creating flowing, ray-like formations called coronal streamers. In this view from NASA’s CODEX instrument, large dark spots block much of the bright light from the Sun. Blocking this light allows the instrument's sensitive equipment to capture the faint light of the Sun’s outer atmosphere.


Credit: NASA/KASI/INAF/CODEX





Field of View Comparisons - SOHO's LASCO vs. CODEX 

NASA missions use coronagraphs to study the Sun in various ways, but that doesn’t mean they all see the same thing. Coronagraphs on the joint NASA-ESA Solar and Heliospheric Observatory (SOHO) mission look at visible light from the solar corona with both a wide field of view and a smaller one. The CODEX instrument’s field of view falls somewhere in between, and it observes blue light to understand temperature and speed variations in the background solar wind.

 
In this composite image of overlapping solar observations, the center and left panels show the field-of-view coverage of the different coronagraphs with overlays and are labeled with observation ranges in solar radii. The third panel shows a zoomed-in, color-coded portion of the larger CODEX image. It highlights the temperature ratios in that portion of the solar corona using CODEX 405.0 and 393.5 nm filters.

Credit

NASA’s Coronal Diagnostic Experiment, or CODEX, has just delivered its first images — and they’re stunning! Mounted on the exterior of the International Space Station, CODEX is a solar coronagraph designed to block out bright light from the Sun to reveal our star’s outer atmosphere, or corona.

This mission gives scientists an unprecedented look at solar dynamics right from low Earth orbit. Watch the video to see these amazing images and find out what makes CODEX so unique!

Credit

Video credit: NASA/Beth Anthony. Music credit: “Aglow and Just So – Instrumental” by Jay Price [PRS] via Universal Production Music. Sound effects: pixabay.com. Additional graphics: vecteezy.com.

Rice students develop an award-winning adaptive exercise harness for astronauts to use in space



Rice University

engineering students 

image: 

A team of Rice engineering students has designed an innovative space exercise harness that won this year's Technology Collaboration Center’s Wearables Workshop and University Challenge. Their design answered a challenge posed by the HumanWorks Lab and Life Science Labs at NASA and Johnson Space Center.


Credit: Rice University





In the reduced-gravity space environment, human muscles and bones atrophy faster than they do on Earth. To slow that process, astronauts need several hours of vigorous exercise each day they are on a space mission. This requirement for regular, rigorous exercise is expected to become more stringent in future manned space missions, which are likely to last longer, involve more challenging conditions and require astronauts to perform more demanding and complex spacewalks.

A team of Rice University students mentored by Vanessa Sanchez at the George R. Brown School of Engineering and Computing has designed an innovative space exercise harness that is comfortable, responsive and adaptable to new exercise modalities.

“Exercise harnesses that astronauts use now have notable limitations — they are uncomfortable and can cause chafing and bruising. So a primary goal of this challenge was to design an adaptive harness with better fit and comfort,” said Sanchez, assistant professor of mechanical engineering. “Our student-led team addressed this issue by adding pneumatic padding that offers a customized fit, distributes pressure over a large surface area to reduce discomfort or injuries and also seamlessly adapts to load shifts — all of which together improved astronauts’ performance.”

Current space-based exercise harnesses also rely on outdated technology. The team of undergraduates Emily Yao, Nikhil Ashri, Jose Noriega and Ben Bridges and graduate student Jack Kalicak had a secondary goal: modernizing the exercise harness with sensors that let astronauts customize their workouts using real-time data and feedback. The Rice researchers added two sensors to measure astronauts’ comfort and exercise performance: the first measures temperature and humidity changes during exercise, while the second measures load distribution at common pressure points such as the shoulders and hips.

“Taking the lead on the electrical hardware and software systems and working closely with the rest of the team to seamlessly integrate them gave me a great introduction to collaborative engineering and how different parts come together to build a unified system,” Ashri said.

As space missions get longer and more complex, astronauts are expected to need a variety of new exercise routines to counteract the negative effects of reduced or no gravity. Future spacecraft are also expected to be more compact, with tighter weight restrictions. Thus, an important goal of the challenge was to design a lighter harness that can be adapted to new exercise modalities in the future. The team did this by adding more modular attachments and additional attachment points to target more muscle groups and better balance the load distribution.

The Rice engineering students developed this new harness in response to a challenge posted by the HumanWorks Lab and Life Science Labs at NASA and Johnson Space Center for the 2025 Technology Collaboration Center’s (TCC) Wearables Workshop and University Challenge.

“The TCC University Challenge is a highly anticipated annual competition organized by TCC, a multi-institutional coalition that facilitates industry, government and academic collaborations to solve real-world problems,” Kalicak said. “It was exciting and enriching to participate and compete with more than a dozen teams from other universities around the country to develop novel design solutions to 11 real-life technical challenges identified by industry leaders like ExxonMobil and NASA.”

This spring, experts from diverse fields ranging from biotech and oil and gas to space gathered at the Johnson Space Center in Houston for the competition and chose this adaptive harness as the winner for the Best Challenge Response Award.

“This challenge gave us the freedom to innovate and explore possibilities beyond the current harness technology,” Yao said. “I’m especially proud of how our team worked together to build a working prototype that not only has real-world impact but also provides a foundation that NASA and space companies can build and iterate upon. This makes the entire experience incredibly rewarding. It’s moments like these that remind me why I love designing with and for people.”

“It was very fulfilling to watch these young engineers work together to find innovative and tangible solutions to real-world problems,” Sanchez said. “They did impressive work researching the challenge, exploring potential approaches and coming up with creative design solutions to address a challenge that NASA and other space agencies around the world face. This innovative adjustable exercise harness transforms how astronauts exercise in space and will significantly improve their health and safety during spaceflights.”

This project was funded by the National Science Foundation and Rice’s Office of Undergraduate Research and Inquiry.

- By Raji Natarajan, science writer for the George R. Brown School of Engineering and Computing

Why the moon shimmers with shiny glass beads




Washington University in St. Louis




The Apollo astronauts didn’t know what they’d find when they explored the surface of the moon, but they certainly didn’t expect to see drifts of tiny, bright orange glass beads glistening among the otherwise monochrome piles of rocks and dust.

The beads, each less than 1 mm across, formed some 3.3 to 3.6 billion years ago during volcanic eruptions on the surface of the then-young satellite. “They’re some of the most amazing extraterrestrial samples we have,” said Ryan Ogliore, an associate professor of physics in Arts & Sciences at Washington University in St. Louis, home to a large repository of lunar samples that were returned to Earth. “The beads are tiny, pristine capsules of the lunar interior.”

Using a variety of microscopic analysis techniques not available when the Apollo astronauts first returned samples from the moon, Ogliore and a team of researchers have been able to take a close look at the microscopic mineral deposits on the outside of lunar beads. The unprecedented view of the ancient lunar artifacts was published in Icarus. The investigation was led by Thomas Williams, Stephen Parman and Alberto Saal from Brown University.

The study relied, in part, on the NanoSIMS 50, an instrument at WashU that uses a high-energy ion beam to break apart small samples of material for analysis. WashU researchers have used the device for decades to study interplanetary dust particles, presolar grains in meteorites, and other small bits of debris from our solar system.

The study combined a variety of techniques — atom probe tomography, scanning electron microscopy, transmission electron microscopy and energy dispersive X-ray spectroscopy — at other institutions to get a closer look at the surface of the beads. “We’ve had these samples for 50 years, but we now have the technology to fully understand them,” Ogliore said. “Many of these instruments would have been unimaginable when the beads were first collected.”

As Ogliore explained, each glass bead tells its own story of the moon’s past. The beads — some shiny orange, some glossy black — formed when lunar volcanoes shot material from the interior to the surface, where each drop of lava solidified instantly in the cold vacuum that surrounds the moon. “The very existence of these beads tells us the moon had explosive eruptions, something like the fire fountains you can see in Hawaii today,” he said. Because of their origins, the beads have a color, shape and chemical composition unlike anything found on Earth.

Tiny minerals on the surface of the beads could react with oxygen and other components of Earth’s atmosphere. To avoid this possibility, the researchers extracted beads from deep within samples and kept them protected from air exposure through every step of the analysis. “Even with the advanced techniques we used, these were very difficult measurements to make,” Ogliore said.

The minerals (including zinc sulfides) and isotopic composition of the bead surfaces serve as probes into the different pressure, temperature and chemical environment of lunar eruptions 3.5 billion years ago. Analyses of orange and black lunar beads have shown that the style of volcanic eruptions changed over time. “It’s like reading the journal of an ancient lunar volcanologist,” Ogliore said.


Originally published on the Ampersand website

 

What’s really in our food? A global look at food composition databases and the gaps we need to fix



To build healthier food systems, we need better food data. New research shows where the gaps are—and how innovations like PTFI are helping to close them



The Alliance of Bioversity International and the International Center for Tropical Agriculture

Food market01 

image: 

A food market in Himachal Pradesh, India.


Credit: Neil Palmer (CIAT)





In today’s world, we hear a lot about what we should eat: more vegetables, less sugar and salt, and locally sourced, sustainable, nutrient-rich food. But there’s a fundamental question most people don’t think about: how do we actually know what is in our food? The answer lies in food composition databases (FCDBs), which are collections of data about the nutritional content of different foods, from macronutrients like protein and fat to vitamins, minerals and specialized biomolecules like antioxidants and phytochemicals.

But a new global review, published in Frontiers in Nutrition, reveals that many of these databases are outdated, inconsistent, or difficult to access altogether—especially in the places that need them most. 

The study, titled The state of food composition databases: Data attributes and FAIR data harmonization in the era of digital innovation, reviewed 101 FCDBs across 110 countries to assess their quality and usefulness. These databases are supposed to help everyone, from dietitians and researchers to governments and consumers, understand food diversity and improve food systems. But the review found that while most databases can be found online (meeting the “Findability” standard), they often fall short in key areas: 

  • Only 30% of databases were truly accessible—meaning users could retrieve and use the data. 

  • Just 69% were interoperable, or compatible with other systems. 

  • Only 43% met the standard for reusability, limiting their long-term value.

More troubling, the databases were not evenly spread across the world. While Europe, North America, and parts of Asia had well-developed food data systems, many countries in Africa, Central America, and Southeast Asia had outdated or incomplete data—or no database at all. 

Why does this matter? FCDBs play a vital role in public health, agriculture, and nutrition policy. Without accurate, up-to-date data, it’s impossible to make informed decisions about nutrient deficiencies in national populations; school feeding programs and dietary guidelines; crop breeding strategies for more nutritious foods; or labeling laws and food safety regulations.

The lack of coverage also poses a deeper threat: it hides the richness of local diets and traditional foods, particularly in Indigenous and rural communities. If those foods aren’t included in official databases, they risk being ignored in nutrition programs and policy discussions, and eventually no longer being cultivated at all, posing a threat to agricultural biodiversity.

What’s missing from today’s databases? 

The review outlined several serious gaps that limit the effectiveness of most food composition databases. Instead of analyzing local foods directly, many FCDBs borrow data from other countries. That’s a problem because nutrients can vary depending on climate, soil, cooking methods, and crop variety. Another challenge is that there’s no unified global system for naming foods, defining nutrients, or measuring content. Without standardization, it’s hard to compare or combine data across countries.

Moreover, across all 101 databases, only 38 food components were commonly reported—meaning that most databases track only basic information like calories and protein. Modern science shows food contains thousands of biomolecules that can affect health, but most FCDBs don’t include them.

Another limitation is that databases aren’t regularly updated: about 39% of the databases hadn’t been updated in more than five years (the databases in Ethiopia and Sri Lanka have not been updated since their creation, more than 50 years ago). That means they don’t reflect how food systems—and diets—are changing due to climate, migration, and new technologies. Maintaining high-quality FCDBs requires labs, experts, and funding—which many low- and middle-income countries lack. This contributes to a growing gap between regions with the most food data and those with the least.

While the paper points out these limitations, it also highlights what’s possible when food data systems are done right. The Periodic Table of Food Initiative (PTFI) is a groundbreaking effort managed by The American Heart Association and the Alliance of Bioversity and CIAT, and designed to overcome the very challenges most databases face. 

What sets PTFI apart? 

  • Unprecedented Molecular Detail: PTFI goes far beyond the 38 commonly tracked nutrients. Using advanced techniques like metabolomics and mass spectrometry, food is analyzed for over 30,000 biomolecules. 

  • Truly Global Scope: Unlike most databases focused on national diets, PTFI is profiling foods from every continent, with special attention to underrepresented and Indigenous foods that are often left out of traditional systems. 

  • 100% FAIR-Compliant: PTFI is designed to be Findable, Accessible, Interoperable, and Reusable—the gold standard for data sharing and transparency. 

  • Open and Standardized: All of PTFI’s data is freely available online, using globally accepted protocols so that anyone—from a government to a food startup—can use it. 

This study makes one thing clear: we can’t fix food systems if we don’t know what’s in our food. The current patchwork of food composition databases leaves too many people—and too many foods—out of the conversation. We need global collaboration, smarter technology, and above all, equity in data access and representation. Everyone, everywhere, deserves access to the kind of food knowledge that helps nourish people and the planet. Initiatives like PTFI aren’t just updating databases, they’re redefining how we understand food itself—as a complex, diverse, and dynamic source of health, culture, and resilience. 

  

FAIR Data Principles criteria for FCDBs. Bar graph illustrating the percentage of databases meeting the criteria for each principle


Presence of 13 food groups across the evaluated food composition databases, presented as the percentage of databases that include each food group.

A timeline illustrating the creation dates (blue dots) and most recent updates (red dots) for the 97 FCDBs analyzed. Lower graph: A timeline displaying the last update dates for the 97 FCDBs, categorized by database interface: Table (green), Web interface (purple), or both Table and Web interface (blue).

Credit

Frontiers in Nutrition

Racial differences in tumor collagen structure may impact cancer prognosis



A collagen-based optical marker associated with cancer spread varies by race, suggesting the need for race-specific approaches in cancer prognosis



SPIE--International Society for Optics and Photonics

collagen - 1000 

image: 

Researchers obtained second-harmonic-generation (SHG) images of more than 300 tumor samples. Their analysis found significant racial differences in a prognostic marker known as the forward-to-backward scattering ratio (F/B), which is associated with risk of tumor metastasis. 


Credit: T. Elias et al., doi 10.1117/1.BIOS.2.2.022703





In cancer care, accurate tools for predicting whether a tumor will spread (metastasize) can help patients receive the most appropriate treatments. But existing prediction methods don’t always work equally well for everyone. In particular, Black patients with breast or colon cancer often experience worse outcomes than White patients, despite receiving similar care. A new study from researchers at the University of Rochester, published in Biophotonics Discovery, suggests that differences in the structure of collagen—the main protein in connective tissue—may help explain part of this disparity.

The study focused on two types of cancer: invasive ductal carcinoma (a common form of breast cancer) and stage I colon adenocarcinoma. Researchers used a special imaging technique called second-harmonic generation (SHG) to examine collagen fibers in tumor samples from over 300 patients. This technique allowed them to measure how collagen is organized in and around tumors—information that could be used to predict the risk of metastasis.

Two features were of particular interest: the ratio of light scattered forward versus backward by the collagen fibers (known as the F/B ratio), and how varied the angles of the fibers were (fiber angle variability, or FAV). Prior studies have shown that these measurements are associated with the likelihood of metastasis.
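
As a rough illustration of what the F/B measurement boils down to, the short Python sketch below divides the mean SHG intensity recorded in the forward direction by the mean intensity recorded in the backward (epi) direction over a region of interest. This is a simplified sketch, not the authors' analysis code; the helper function and toy data are hypothetical.

    import numpy as np

    def forward_backward_ratio(forward_img, backward_img, roi_mask):
        """Mean forward / mean backward SHG intensity over a region of interest (illustrative helper)."""
        f = np.asarray(forward_img, dtype=float)[roi_mask].mean()
        b = np.asarray(backward_img, dtype=float)[roi_mask].mean()
        return f / b

    # Toy example: a 4x4 image pair and a mask marking a tumor-stroma interface region.
    forward = np.full((4, 4), 8.0)
    backward = np.full((4, 4), 2.0)
    mask = np.zeros((4, 4), dtype=bool)
    mask[1:3, 1:3] = True
    print(forward_backward_ratio(forward, backward, mask))  # 4.0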

In this study, researchers found that the F/B ratio differed significantly between Black and White patients. Among breast cancer patients, the F/B ratio in the tumor-stroma interface—a key region where cancer cells interact with surrounding tissue—was lower in Black patients. This pattern has previously been associated with a higher risk of metastasis. In colon cancer patients, Black individuals tended to have higher F/B ratios, which earlier research suggests may also indicate more aggressive tumor behavior.

Interestingly, the variability in fiber angle (FAV) did not differ by race in either type of cancer. This suggests that not all collagen-based features are influenced by racial differences.

The findings raise important questions about how race-related biological differences—possibly influenced by genetics, environment, or other factors—could affect cancer progression and the effectiveness of diagnostic tools. They also point to the need for more diverse patient representation in clinical trials. If prognostic tools are developed using mostly data from White patients, they may not work as well for others.

The researchers recommend that future clinical studies include more participants from underrepresented groups to ensure that new prediction methods are accurate for all patients. They also emphasize the importance of continuing to study how tumor biology may differ across racial lines to improve health outcomes for everyone.

 

For details, see the original Gold Open Access article by T. Elias et al., “Exploring racial differences in second-harmonic-generation–based prognostic indicators of metastasis in breast and colon cancer,” Biophoton. Discovery 2(2) 022703 (2025), doi 10.1117/1.BIOS.2.2.022703

 

Museomics highlights the importance of scientific museum collections



At a symposium in Paris, a University of São Paulo professor of zoology explains how new technologies allow for the use of degraded DNA from specimens preserved for decades, contributing to the advancement of scientific knowledge and conservation 




Fundação de Amparo à Pesquisa do Estado de São Paulo

Museomics highlights the importance of scientific museum collections 

image: 

Taran Grant is an associate curator of amphibians at the University of São Paulo’s Museum of Zoology (photo: Daniela Gennari/MZUSP)


Credit: Daniela Gennari/MZUSP





In 1831, Charles Darwin embarked on a five-year voyage to South America aboard the HMS Beagle, which was conducting hydrographic surveys. During the expedition, Darwin explored remote regions of the continent, collecting plants, animals, and fossils and recording detailed observations. These materials were fundamental to the development of his ideas on evolution by natural selection, which are a pillar of modern scientific development. Today, the collection Darwin gathered on his most famous voyage is in the care of the Natural History Museum in London, where it has been organized and preserved for two centuries.

Natural history museums have played a fundamental role in preserving scientific memory. However, many of these museums’ scientific collections have remained underutilized in recent decades: the emergence of sequencing techniques that require fresh tissue and intact DNA made historical collections seem irrelevant for genetic research. But this scenario is changing, driven primarily by museomics.

“Museomics can be defined as the application of molecular biology, genomics, and bioinformatics techniques to the study of specimens preserved in museum collections, involving the extraction, sequencing, and analysis of degraded DNA from historical samples [hDNA] from museums, enabling investigations into evolution, biodiversity, population genetics, phylogeny, taxonomy, and conservation,” said Taran Grant, who is a full professor at the Department of Zoology at the Institute of Biosciences and an associate curator of amphibians at the Museum of Zoology, both at the University of São Paulo (USP).

Grant is one of the speakers at the France-Brazil Museology Seminar, which began on June 12th at the Museum of Man in Paris. The seminar was organized by the National Museum of Natural History (MNHN) of France in partnership with FAPESP and the University of São Paulo (USP). It is part of the program for FAPESP Week France 2025.

Grant spoke about how museomics is revolutionizing the way scientific collections are used and valued. By enabling the extraction and analysis of DNA from historical specimens, it opens up possibilities for research in areas such as evolution, extinction, adaptation, environmental change, and biodiversity conservation. “With museomics, we can access genetic information from materials collected over a hundred years ago,” Grant said.

Grant says he started extracting and sequencing DNA from old museum samples in the 1990s but faced major technological limitations. The sequencing methods available at the time required long, well-preserved DNA fragments, which was a problem because the genetic material in museum specimens is often highly fragmented and degraded.

“DNA degradation is linked to the age of the sample and the conditions of the museum. In the alcohol used to preserve specimens, the problem is the water content. DNA’s worst enemy is water, which corrupts genetic material. In museums located in warmer regions with high temperatures and no air conditioning, evaporation is greater, so the alcohol must be changed more frequently,” Grant explained to Agência FAPESP.

Technological advances, particularly with the Illumina platform and other next-generation sequencing technologies, have made it possible to work with fragmented or degraded DNA, favoring the use of museum material. However, a new challenge has emerged: the amount of endogenous DNA – that is, authentic DNA from the organism – in tissue samples is extremely small. This makes the samples highly susceptible to contamination from environmental DNA or from handling. Therefore, controlled environments, such as clean room laboratories with sterile conditions, are required for the extraction and analysis of this material to prevent contamination and loss of original genetic information.
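
A rough sense of what "extremely small amounts of endogenous DNA" means in practice (this is an illustration, not a protocol from Grant's laboratory): after sequencing, reads are typically mapped against a reference genome, and the endogenous fraction is simply the share of reads that align to the target organism rather than to contaminants. A minimal sketch in Python, with entirely hypothetical read counts:

```python
# Minimal sketch: estimating the endogenous DNA fraction of historical (hDNA) samples.
# The counts below are hypothetical; in practice they come from mapping sequencing
# reads against a reference genome and counting how many align.

def endogenous_fraction(mapped_reads: int, total_reads: int) -> float:
    """Share of reads attributable to the target organism rather than contamination."""
    if total_reads <= 0:
        raise ValueError("total_reads must be positive")
    return mapped_reads / total_reads

# (sample name, reads mapped to the target genome, total reads) -- all invented
samples = [
    ("specimen_1905", 120_000, 8_500_000),    # old, heavily degraded specimen
    ("specimen_1978", 1_900_000, 9_200_000),  # younger, better-preserved specimen
]

for name, mapped, total in samples:
    frac = endogenous_fraction(mapped, total)
    note = "very low endogenous content; clean-room handling and deep sequencing needed" if frac < 0.05 else "usable"
    print(f"{name}: {frac:.1%} endogenous ({note})")
```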

With support from FAPESP, Grant and his colleagues set up a clean room at the Department of Zoology at USP. “As far as I know, it’s the only facility of its kind in Latin America dedicated to taxonomy studies. Without FAPESP’s support, this advancement in our research wouldn’t have been possible. This equipment allows us to develop museomics and bring natural history museums back to the epicenter of biodiversity studies,” he said.

DNA collections

“For two centuries, museums were the place where biodiversity research was conducted. Now, museomics allows us to study entire collections that previously had no genetic value. It is estimated that at least 3 billion specimens are preserved in museums worldwide, and now we can access the DNA of many of them,” said Grant.

The results are already starting to appear. For example, the current issue of the Bulletin of the American Museum of Natural History features a 78-page article by Grant, Mariana Lyra, Miguel Trefaut Rodrigues, Vanessa Kruth Verdade, and researchers from Germany and the United Kingdom.

In the article, the authors describe how they used museomics to answer a decades-old question about the classification of amphibians in the Atlantic Rainforest. By sequencing portions of genomes extracted from tiny amounts of ancient DNA preserved in museum specimens, the scientists reclassified a group of rocket frogs previously thought to be a single species into 12 species, three of them extinct, and proposed a new genus.

“We knew from acoustic and molecular data that there were more species, but we couldn’t compare them with specimens described in the past because there was no way to extract DNA from them. Now this is possible,” said Grant.

The field of taxonomy, which was previously paralyzed by a lack of reliable genetic data from preserved specimens, is moving forward again. With it, the ability to formulate effective conservation policies is advancing as well.

“Without knowing how many and which species exist, there’s no way to protect what’s at risk. Historical material is extremely important for resolving taxonomy, and without taxonomy, there’s no conservation. Because the paradigm for conservation is species, not populations or individuals. If we don’t know how many and which species exist, we can’t formulate conservation policies and strategies,” the USP professor pointed out.

“With advances in technology, we’ve gained a much more accurate picture of diversity over time, which informs not only conservation actions but also biodiversity-related policies.”

According to Grant, the work published in the Bulletin of the American Museum of Natural History began in 2018. It involved collaborating with collections in Brazil and abroad, particularly with the University of Potsdam in Germany.

“The first step was to gain experience with pioneering laboratories. My main collaborator in hDNA is Mariana Lyra, who helped set up our clean room laboratory. Today she works at New York University in Abu Dhabi, but maintains strong ties with Brazil. Her work in Potsdam was fundamental in ensuring the quality and credibility of our work,” he said.

Grant said that, in addition to scientific advancement, museomics represents a symbolic and practical revaluation of natural history museums. “For a long time, museums were seen as exhibition spaces or warehouses. But the true value of museums lies in their collections, which are invisible to the public but now have a new scientific role.”

Not only does museomics renew the role of natural history museums, it also poses new challenges in terms of preservation, infrastructure, and funding. “We need to think of specimens as sources of DNA, not just as specimens for morphological study. Conserving genetic material must be a priority from the beginning,” he said.

“This requires adequate infrastructure, such as climate control, to prevent the degradation of genetic material. We also need to start extracting and preserving tissue samples from the specimens already in our collections. A ten-year-old specimen will be 110 years old in a century. If we preserve the samples properly, we can halt the process of DNA degradation. And museomics is just getting started,” said Grant.

FAPESP’s participation in VivaTech is part of the FAPESP Week France program. For more information, visit: fapesp.br/week/2025/france.


 

Global mercury levels in rivers have doubled since Industrial Revolution




Tulane University




Mercury levels in the world's rivers have more than doubled since the pre-industrial era, according to new research from Tulane University that establishes the first known global baseline for riverine mercury pollution.

The study, published in Science Advances, developed a process-based model to simulate mercury transport in rivers and found that global rivers carried approximately 390 metric tons of mercury to oceans annually before 1850. Today, that figure has jumped to about 1,000 metric tons per year.
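
Taken at face value, those two figures imply the modern flux is roughly 2.6 times the pre-industrial one, so "more than doubled" is, if anything, conservative. A back-of-the-envelope check using only the numbers reported above:

```python
# Back-of-the-envelope check of the reported global riverine mercury fluxes.
pre_industrial_t_per_yr = 390   # global flux to oceans before 1850, as reported
modern_t_per_yr = 1_000         # approximate present-day flux, as reported

ratio = modern_t_per_yr / pre_industrial_t_per_yr
print(f"Modern flux is about {ratio:.1f}x the pre-industrial flux")            # ~2.6x
print(f"Added load: about {modern_t_per_yr - pre_industrial_t_per_yr} t/yr")   # ~610 t/yr
```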

The primary drivers of the increase are wastewater discharge, soil erosion, and mercury releases from industrial activities and mining, said lead author Yanxu Zhang, associate professor of Earth and environmental sciences at the Tulane School of Science and Engineering.

“Human activities have disrupted the global mercury cycle in every aspect,” Zhang said. “While previous studies have focused on mercury concentrations in the atmosphere, soils and seawater, they have largely overlooked rivers, a major pathway for mercury that has effectively become a pipeline for wastewater from both municipal and industrial sources.”

The findings have significant implications for human health and wildlife, as mercury compounds are potent neurotoxins that can accumulate in fish and pose health risks through consumption. The researchers noted that rivers near critical wildlife habitats, including major bird migration flyways in East Asia and North America, have experienced concerning increases in mercury levels.

"The establishment of a baseline for riverine mercury during the pre-industrial era can serve as a key reference point," said Zhang, noting that it could provide targets for international agreements like the Minamata Convention on Mercury, which aims to reduce global mercury pollution.

The Tulane research team includes postdoctoral associate Tengfei Yuan, collaborating with scientists Dong Peng from Nanjing University and Zeli Tan from the Pacific Northwest National Laboratory.

They created a detailed computer model called MOSART-Hg to simulate pre-industrial mercury transport from land to oceans through river systems. Their findings closely matched mercury concentrations found in dated sediment core samples from coastal areas around the world, validating their approach.
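
MOSART-Hg itself resolves hydrology, sediment dynamics and mercury chemistry in far more detail than can be shown here, but the basic idea of routing mercury loads through a river network toward the ocean can be illustrated with a toy sketch. Everything below (reach names, local loads, retention fractions) is invented for illustration and is not taken from the study:

```python
# Toy illustration of routing mercury through a river network to the ocean.
# This is NOT the MOSART-Hg model: the network, local loads, and retention
# fractions are invented to show the accumulate-and-attenuate idea only.

local_load_t = {            # hypothetical local mercury inputs, metric tons per year
    "headwater_a": 2.0,     # e.g. soil erosion
    "headwater_b": 1.5,     # e.g. mining releases
    "middle_reach": 3.0,    # e.g. wastewater discharge
    "outlet": 0.5,
}
downstream = {              # hypothetical topology: reach -> next reach (None = ocean)
    "headwater_a": "middle_reach",
    "headwater_b": "middle_reach",
    "middle_reach": "outlet",
    "outlet": None,
}
retention = {               # fraction of throughflow trapped in each reach, e.g. behind dams
    "headwater_a": 0.0,
    "headwater_b": 0.0,
    "middle_reach": 0.2,
    "outlet": 0.1,
}

def flux_to_ocean(loads, downstream, retention):
    """Route each reach's local load downstream, losing a fraction to sediment trapping per reach."""
    total = 0.0
    for reach, load in loads.items():
        remaining = load * (1.0 - retention[reach])   # trapping in the reach where the load enters
        node = downstream[reach]
        while node is not None:                       # follow the river down to the ocean
            remaining *= (1.0 - retention[node])
            node = downstream[node]
        total += remaining
    return total

print(f"Mercury delivered to the ocean: {flux_to_ocean(local_load_t, downstream, retention):.2f} t/yr")
```

Raising a retention fraction in this sketch (for example, to mimic a large dam trapping mercury-laden sediment) lowers the flux reaching the ocean, which is the same qualitative effect the study reports for the Mediterranean below.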

Regional patterns revealed that the most dramatic increases in mercury pollution occurred in North and South America, which together contribute 41% of the global growth in riverine mercury since 1850, followed by Southeast Asia (22%) and South Asia (19%).

The study identified artisanal and small-scale gold mining (ASGM) as a particularly significant contributor to mercury pollution in South America, Southeast Asia and parts of Africa. In the Amazon region, for instance, mercury levels have soared due to both increased soil erosion from deforestation and mercury releases from mining activities.

"The Amazon River's mercury budget now exceeds 200 metric tons per year, with three-quarters of this originating from human activities and primarily ASGM," Zhang said.

Industrial mercury releases were identified as the main driver of increases in regions like East Asia, where rivers in China contribute more than 70% of the regional mercury. The Yangtze River's mercury flux has more than doubled from pre-industrial levels.

Not all regions saw increases, however. The Mediterranean region experienced lower mercury levels compared to pre-industrial times, which researchers attribute to the construction of dams like the Aswan High Dam on the Nile River that trap mercury-laden sediment.

Riverine mercury concentrations could serve as a rapid-response metric to assess the effectiveness of mercury pollution governance as countries work to reduce mercury emissions and restore polluted environments, he said.

The study also involves international collaborators from the University of California San Diego, Beijing Forestry University, the U.S. Geological Survey and Géosciences Environnement Toulouse.