Saturday, October 12, 2024

 

Study probes how eating less can extend lifespan



Researchers at The Jackson Laboratory conducted a pivotal study of aging and lifespan, uncovering new details about how restrictive diets might extend life — and about their negative side effects.



Jackson Laboratory

Image: A graphic representing the power of genetic diversity in mice to study longevity and healthspan. (Credit: The Jackson Laboratory)




For nearly a century, laboratory studies have shown consistent results: eat less food, or eat less often, and an animal will live longer. But scientists have struggled to understand why these kinds of restrictive diets work to extend lifespan, and how to best implement them in humans. Now, in a long-awaited study to appear in the Oct. 9 issue of Nature, scientists at The Jackson Laboratory (JAX) and collaborators tracked the health of nearly one thousand mice on a variety of diets to make new inroads into these questions.

The study was designed to ensure that each mouse was genetically distinct, allowing the team to better represent the genetic diversity of the human population. That design makes the results more clinically relevant and elevates the study to one of the most significant investigations into aging and lifespan to date.

The study concluded that eating fewer calories had a greater impact on lifespan than periodic fasting, revealing that very-low-calorie diets generally extended the mice’s lifespan regardless of their body fat or glucose levels — both typically seen as markers of metabolic health and aging. Surprisingly, the mice that lived the longest on the restrictive diets were those that lost the least weight despite eating less. Animals that lost the most weight on these diets tended to have low energy, compromised immune and reproductive systems, and shorter lives.

“Our study really points to the importance of resilience,” said Gary Churchill, Karl Gunnar Johansson Chair and professor at JAX who led the study. “The most robust animals keep their weight on even in the face of stress and caloric restriction, and they are the ones that live the longest. It also suggests that a more moderate level of calorie restriction might be the way to balance long-term health and lifespan.”

Churchill and his colleagues assigned female mice to one of five diets: one in which the animals could freely eat any amount of food at any time, two in which the animals were provided only 60% or 80% of their baseline calories each day, and two in which the animals were not given any food for either one or two consecutive days each week but could eat as much as they wanted on the other days. The mice were then studied for the rest of their lives with periodic blood tests and extensive evaluation of their overall health.

Overall, mice on unrestricted diets lived for an average of 25 months, those on the intermittent fasting diets lived for an average of 28 months, those eating 80% of baseline lived for an average of 30 months, and those eating 60% of baseline lived for 34 months. But within each group, the range of lifespans was wide; mice eating the fewest calories, for example, had lifespans ranging from a few months to four and a half years.
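As a rough back-of-the-envelope illustration (the calculation below is ours, not the paper's, and uses only the group averages quoted above), those means translate into the following relative lifespan extensions:

    # Reported mean lifespans by diet group (months); the per-animal spread was wide.
    mean_lifespan = {
        "unrestricted": 25,
        "intermittent fasting": 28,
        "80% of baseline calories": 30,
        "60% of baseline calories": 34,
    }

    baseline = mean_lifespan["unrestricted"]
    for diet, months in mean_lifespan.items():
        extension = 100 * (months - baseline) / baseline
        print(f"{diet}: {months} months ({extension:+.0f}% vs. unrestricted)")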

When the researchers analyzed the rest of their data to try to explain this wide range, they found that genetic factors had a far greater impact on lifespan than diets, highlighting how underlying genetic features, yet to be identified, play a major role in how these diets would affect an individual person’s health trajectory. Moreover, they pinpointed genetically encoded resilience as a critical factor in lifespan; mice that naturally maintained their body weight, body fat percentage and immune cell health during periods of stress or low food intake, as well as those that did not lose body fat late in life, survived the longest.

“If you want to live a long time, there are things you can control within your lifetime such as diet, but really what you want is a very old grandmother,” Churchill said.

The study also cast doubt on traditional ideas about why certain diets can extend life in the first place. For example, factors like weight, body fat percentages, blood glucose levels and body temperature did not explain the link between cutting calories and living a longer life.  Instead, the study found that immune system health and traits related to red blood cells were more clearly connected to lifespan. Importantly, those findings mean that human studies of longevity – which often use metabolic measurements as markers for aging or youthfulness – may be overlooking more important aspects of healthy aging.

“While caloric restriction is generally good for lifespan, our data show that losing weight on caloric restriction is actually bad for lifespan,” Churchill explained. “So when we look at human trials of longevity drugs and see that people are losing weight and have better metabolic profiles, it turns out that might not be a good marker of their future lifespan at all.”

 

 

Consumer Food Insights Report highlights increasing use of food-ordering apps



Survey shows food spending per person has increased 15% since January 2022



Purdue University

Image: In the latest Consumer Food Insights Report, Joseph Balagtas, professor of agricultural economics at Purdue University and director of the Center for Food Demand Analysis and Sustainability, explored consumers’ online food-ordering app usage. (Credit: Purdue Agricultural Communications photo/Kate Jacobson)

WEST LAFAYETTE, Ind. — Around two-thirds of consumers have used a food-ordering app at least once for takeout, delivery or both, according to the September 2024 Consumer Food Insights Report (CFI). Over half have used an app for a delivery order. Of those who say they have used an app to order food, nearly half report using one for either delivery or takeout at least once a week.

The survey-based report out of Purdue University’s Center for Food Demand Analysis and Sustainability (CFDAS) assesses food spending, consumer satisfaction and values, support of agricultural and food policies, and trust in information sources. Purdue experts conducted and evaluated the survey, which included 1,200 consumers across the U.S.

“The COVID-19 pandemic changed the economy in many ways, particularly in the service economy,” said the report’s lead author, Joseph Balagtas, professor of agricultural economics at Purdue and director of CFDAS. 

Earlier this year, the U.S. Department of Agriculture (USDA) reported that spending on food-ordering apps for deliveries from full-service restaurants quadrupled between prepandemic months and 2022. The trend prompted the CFDAS team to partner with Valerie Kilders, assistant professor of agribusiness marketing at Purdue, to measure and evaluate consumer usage of the apps.

When ordering food online, 68% of consumers say they “sometimes,” “often” or “always” use discounts or promo codes.

Food purchased away from home is typically more costly than food prepared at home with groceries. Understandably, many consumers seek cost reductions when paying for the convenience of a prepared meal, Balagtas said. This is particularly true for consumers who spend the least on food. Half of them used discounts and promo codes “often” or “always” when ordering food online. 

The report breaks down per-person weekly food expenditure into three groups: thrifty (less than $50 a week), moderate ($50 to $85 a week) and liberal (more than $85 a week) spenders. “Consumers who spend the most on food tend to seek out discounts less frequently,” Balagtas said.
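As a minimal sketch (the function and example values are illustrative, not part of the CFDAS methodology), the report's spending bands can be applied to a survey response like this:

    def spending_group(weekly_per_person: float) -> str:
        """Classify per-person weekly food spending using the CFI report's bands."""
        if weekly_per_person < 50:
            return "thrifty"
        if weekly_per_person <= 85:
            return "moderate"
        return "liberal"

    # Illustrative values only
    for spend in (35.0, 62.50, 110.0):
        print(f"${spend:.2f}/week -> {spending_group(spend)}")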

The CFI survey also asked consumers about the additional fees associated with many food-ordering apps. Many attribute the fee to operating expenses of the service, whether it’s to cover fuel and time for delivery services or administration and maintenance of the app itself.

The survey further revealed that on average, consumers say they tip between 10% and 19% for a food delivery order. “Interestingly, 15% say they tip less than 10% of the total order, and 14% say they do not tip at all for this service,” Balagtas said. “We see little difference in the tipping percentages when disaggregating the responses by per-person weekly food spending.”

The sustainable food purchasing index remained unchanged from the CFI survey’s last assessment in June 2024. 

“Consumers continue to purchase food that they feel is safe and fits their tastes, budgets and nutritional needs,” said Elijah Bryant, a survey research analyst at CFDAS and a co-author of the report. And fewer of them currently buy or plan to buy foods with environmental and social sustainability in mind.

“Even though consumers may value the environmental impact and social responsibility of their food, when it comes to purchasing factors, more immediate priorities like food security, taste, economic factors and nutrition drive their decisions,” he said.

Since its inception in January 2022, the CFI survey has documented a gradual positive trend in per-person weekly food expenditures. In January 2022, the figure was around $72. Last month, consumers reported an average per-person weekly spending total of $83, a 15% increase.
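A quick arithmetic check of that figure, using the rounded averages quoted here:

    jan_2022 = 72.0   # reported average per-person weekly food spending, January 2022
    latest = 83.0     # reported average, September 2024
    print(f"{100 * (latest - jan_2022) / jan_2022:.0f}% increase")  # prints: 15% increase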

“Consumers are having to adjust their budgets to accommodate higher food prices to purchase the same groceries,” Bryant said. “Wage growth will be a key determinant in food purchasing behavior changes as food prices remain higher after inflation spiked in 2022.”

Based on the USDA’s questionnaire for measuring food insecurity, the CFDAS researchers estimate the national food insecurity rate to be 13%, unchanged from last month. The rate of food insecurity is highest among households that spend less than $50 on food per person per week.

“We have seen a clear correlation between income and food security in the past and see that many households that spend less on food are likely doing so due to income constraints,” Bryant said. Around 29% said they use free food resources, such as food banks, to supplement their diets. This shows the importance of these resources for people who struggle with food insecurity due to a lower food budget, he said.

Around 14% of thrifty food spenders adhere to either a vegetarian or vegan diet, relative to just 6% of moderate and liberal food spenders. Thrifty spenders also report growing their own food in either a home or community garden at a higher rate (32%) than moderate (24%) and liberal (21%) food spenders.

“We do not observe many substantial differences in the frequency of a variety of surveyed food behaviors between the spending groups,” Bryant said.

“However, we do observe thrifty food spenders choosing generic foods over brand-name foods more frequently than moderate and liberal spenders,” he said. In line with the larger share of vegans and vegetarians in the thrifty group, they are also more likely to choose plant-based proteins over animal proteins.

The Center for Food Demand Analysis and Sustainability is part of Purdue’s Next Moves in agriculture and food systems and uses innovative data analysis shared through user-friendly platforms to improve the food system. In addition to the Consumer Food Insights Report, the center offers a portfolio of online dashboards.

Writer: Steve Koppes

 

About Purdue Agriculture
Purdue University’s College of Agriculture is one of the world’s leading colleges of agricultural, food, life and natural resource sciences. The College is committed to: preparing students to make a difference in whatever careers they pursue; stretching the frontiers of science to discover solutions to some of our most pressing global, regional and local challenges; and, through Purdue Extension and other engagement programs, educating the people of Indiana, the nation and the world to improve their lives and livelihoods. To learn more about Purdue Agriculture, visit this site.

 

About Purdue University

Purdue University is a public research institution demonstrating excellence at scale. Ranked among top 10 public universities and with two colleges in the top four in the United States, Purdue discovers and disseminates knowledge with a quality and at a scale second to none. More than 105,000 students study at Purdue across modalities and locations, including nearly 50,000 in person on the West Lafayette campus. Committed to affordability and accessibility, Purdue’s main campus has frozen tuition 13 years in a row. See how Purdue never stops in the persistent pursuit of the next giant leap — including its first comprehensive urban campus in Indianapolis, the Mitch Daniels School of Business, Purdue Computes and the One Health initiative — at https://www.purdue.edu/president/strategic-initiatives

 





 

Another step towards decoding smell



Researchers from Bonn and Aachen elucidate the role of individual brain neurons in human odor perception




Universitätsklinikum Bonn

Image: (from left) Prof. Florian Mormann and Marcel Kehl are on the trail of the neuronal mechanisms of human odor perception. (Credit: University Hospital Bonn / Rolf Müller)




We often only realize how important our sense of smell is when it is no longer there: food hardly tastes good, or we no longer react to dangers such as the smell of smoke. Researchers at the University Hospital Bonn (UKB), the University of Bonn and the University of Aachen have investigated the neuronal mechanisms of human odor perception for the first time. Individual nerve cells in the brain recognize odors and react specifically to the smell, the image and the written word of an object, for example a banana. The results of this study close a long-standing knowledge gap between animal and human odor research and have now been published in the renowned journal Nature.

Imaging techniques such as functional magnetic resonance imaging (fMRI) have previously revealed which regions of the human brain are involved in olfactory perception. However, these methods do not allow the sense of smell to be investigated at the fundamental level of individual nerve cells. "Therefore, our understanding of odor processing at the cellular level is mainly based on animal studies, and it has not been clear to what extent these results can be transferred to humans," says co-corresponding author Prof. Florian Mormann from the Department of Epileptology at the UKB, who is also a member of the Transdisciplinary Research Area (TRA) "Life & Health" at the University of Bonn.

Nerve cells in the brain identify odors

Prof. Mormann's research group has now succeeded for the first time in recording the activity of individual nerve cells during smelling. This was only possible because the researchers worked together with patients from the Clinic for Epileptology at the UKB, one of the largest epilepsy centers in Europe, who had electrodes implanted in their brains for diagnostic purposes. They were presented with both pleasant and unpleasant scents, such as old fish. "We discovered that individual nerve cells in the human brain react to odors. Based on their activity, we were able to precisely predict which scent was being smelled," says first author Marcel Kehl, a doctoral student at the University of Bonn in Prof. Mormann's working group at the UKB. The measurements showed that different brain regions such as the primary olfactory cortex, anatomically known as the piriform cortex, and also certain areas of the medial temporal lobe, specifically the amygdala, the hippocampus and the entorhinal cortex, are involved in specific tasks. While the activity of nerve cells in the olfactory cortex most accurately predicted which scent was smelled, neuronal activity in the hippocampus was able to predict whether scents were correctly identified. Only nerve cells in the amygdala, a region involved in emotional processing, reacted differently depending on whether a scent was perceived as pleasant or unpleasant.
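The release does not spell out the decoding analysis, but the general idea — predicting odor identity from the firing rates of many recorded neurons with a cross-validated classifier — can be sketched as follows (all data, shapes and parameters below are invented placeholders):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Hypothetical data: firing rates of recorded neurons on each odor trial.
    rng = np.random.default_rng(0)
    n_trials, n_neurons, n_odors = 200, 60, 8
    X = rng.poisson(lam=5.0, size=(n_trials, n_neurons)).astype(float)  # spikes/s per neuron per trial
    y = rng.integers(0, n_odors, size=n_trials)                         # which odor was presented

    # Cross-validated decoding accuracy; chance level is 1 / n_odors.
    decoder = LogisticRegression(max_iter=1000)
    accuracy = cross_val_score(decoder, X, y, cv=5).mean()
    print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / n_odors:.2f})")

With real recordings, accuracy well above chance for odor identity would mirror the result described above for the olfactory cortex.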

Nerve cells react to the smell, image and name of the banana

In a next step, the researchers investigated the connection between the perception of scents and images. To do this, they presented the participants in the Bonn study with the matching images for each odor, for example the scent and later a photo of a banana, and examined the reaction of the neurons. Surprisingly, nerve cells in the primary olfactory cortex responded not only to scents, but also to images. "This suggests that the task of the human olfactory cortex goes far beyond the pure perception of odors," says co-corresponding author Prof. Marc Spehr from the Institute of Biology II at RWTH Aachen University.

The researchers discovered individual nerve cells that reacted specifically to the smell, the image and the written word of, for example, the banana. This discovery indicates that semantic information is processed early on in human olfactory processing. The results not only confirm decades of animal studies, but also show how different brain regions are involved in specific human odor processing functions. "This is an important contribution on the way to decoding the human olfactory code," says Prof. Mormann. "Further research in this area is necessary in order to one day develop olfactory aids that we can use in everyday life as naturally as glasses or hearing aids."

Funding: The study was funded by the German Research Foundation (DFG), the Federal Ministry of Education and Research (BMBF) and the state of North Rhine-Westphalia (NRW) as part of the iBehave project.

 

NYU Tandon School of Engineering study maps pedestrian crosswalks across entire cities, helping improve road safety and increase walkability



NYU Tandon School of Engineering




As pedestrian fatalities in the United States reach a 40-year high, a novel approach to measuring crosswalk lengths across entire cities could provide urban planners with crucial data to improve safety interventions. 

NYU Tandon School of Engineering researchers Marcel Moran and Debra F. Laefer published the first comprehensive, city-wide analysis of crosswalk distances in the Journal of the American Planning Association. Moran is an Urban Science Faculty Fellow at the Center for Urban Science + Progress (CUSP), and Laefer is a Professor of Civil and Urban Engineering and CUSP faculty member.

"In general, lots of important data related to cities’ pedestrian realm is analog (so it exists only in old diagrams and is not machine readable), is not comprehensive, or both," said lead author Moran, highlighting the gap this study fills. "We know that longer crosswalks pose increased safety risks to pedestrians, but rarely are cities sitting on up-to-date, comprehensive data about their own crosswalks. So even answering the question,‘what are the 100 longest crossings in our city?' is not easy. We want to change that.”

This study's unique contribution lies in its scale and methodology, potentially providing a powerful new tool for city planners to identify and address high-risk areas.

The team analyzed nearly 49,000 crossings in three diverse cities: a European city (Paris), a dense American city (San Francisco), and a less-dense, more car-centric American city (Irvine). To accomplish this, they employed a combination of data sources and techniques. 

"We combined crosswalk distance measurements from two different datasets," Laefer said. "The first is from OpenStreetMap, which comes from a community of users who have crowdsourced and built a map of the world."

However, OpenStreetMap data alone wasn't comprehensive enough. "If we had only used OpenStreetMap, we would have been left with a lot of crosswalks missing," Laefer explained. "So we also used satellite imagery tools to measure the remaining crosswalk distances."
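The paper's measurement pipeline isn't detailed here, but one basic ingredient — turning a crossing's two mapped curb endpoints into a length — can be sketched with the haversine formula (the coordinates below are invented, and real crosswalk geometries may have more than two vertices):

    from math import radians, sin, cos, asin, sqrt

    def haversine_ft(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in feet."""
        earth_radius_ft = 20_902_231  # mean Earth radius (~6,371 km) expressed in feet
        p1, p2 = radians(lat1), radians(lat2)
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
        return 2 * earth_radius_ft * asin(sqrt(a))

    # Hypothetical curb-to-curb endpoints of a single crosswalk
    print(f"{haversine_ft(37.77500, -122.41940, 37.77515, -122.41940):.0f} ft")  # roughly 55 ft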

Their technique revealed distinct patterns in each urban environment. According to the published paper, the average crosswalk lengths were approximately 26 feet in Paris (0.03% at 70 feet or longer), about 43 feet in San Francisco (4.4% at 70 feet or longer), and about 58 feet in Irvine (about 20% at 70 feet or longer). Crossings over 50 to 60 feet start to show a higher concentration of pedestrian collisions, according to Moran.

The study confirmed a significant correlation between crosswalk length and pedestrian safety in all three cities examined. Longer crosswalks were associated with higher probabilities of pedestrian-vehicle collisions, with each additional foot increasing collision likelihood by 0.8% to 2.11%. Crossings where recent collisions occurred were 15% to 43% longer than city averages. 
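To get a feel for what that per-foot effect implies, one can compound it across the gap between a short and a long crossing (a rough illustration only; it treats the quoted figures as relative per-foot increases, which may differ from the paper's statistical model):

    # Reported range: each additional foot of crossing length raises collision
    # likelihood by roughly 0.8% to 2.11% (treated here as a relative increase).
    extra_feet = 58 - 26   # Irvine vs. Paris average crosswalk lengths, in feet
    for per_foot in (0.008, 0.0211):
        relative_risk = (1 + per_foot) ** extra_feet
        print(f"{per_foot:.2%}/ft over {extra_feet} extra ft -> ~{relative_risk:.1f}x the collision likelihood")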

Moran sees this research as a powerful tool for city planners and policymakers. "The three cities we have mapped now have these datasets, and can evaluate different investments and make informed decisions in pedestrian infrastructure," he explained.

The potential for this research to inform public policy extends beyond these three cities. Moran and his team are planning to scale up their approach to the 100 largest cities in the United States, potentially creating a public resource for exploring crosswalk distances.

According to Moran, simple measures could significantly improve pedestrian safety on crosswalks. "Small low-tech ways to improve the pedestrian environment can really lead to safety benefits. These can include extending the sidewalks out from each side and putting pedestrian refuge islands in the middle," Moran noted.

This study is part of Moran's broader effort to improve urban transportation. He explains, "I'm trying to make urban transportation safer, more sustainable and more equitable. I use a variety of methods like mining data, satellite imagery and field collection to understand our streets, how they can change, and how those changes can lead to these improved outcomes."

 

 FREE LABOUR

Citizen scientists will be needed to meet global water quality goals



University College London
Image: Collecting data in the River Lea, Hackney Downs, London, surveying for freshwater invertebrates using the ‘Riverfly’ citizen science method, which is used across the UK by volunteers. Invertebrates are sensitive to changes in water quality, so they are a good indicator of pollution. (Credit: Dr Izzy Bishop, UCL)




Sustainable development goals for water quality will not be met without the involvement of citizen scientists, argue an international team led by a UCL researcher, in a new policy brief.

The policy brief and attached technical brief are published by Earthwatch Europe on behalf of the United Nations Environment Programme (UNEP)-coordinated World Water Quality Alliance that has supported citizen science projects in Kenya, Tanzania and Sierra Leone. The reports detail how policy makers can learn from examples where citizen scientists (non-professionals engaged in the scientific process, such as by collecting data) are already making valuable contributions.

The report authors focus on how to meet one of the UN’s Sustainable Development Goals around improving water quality, which the UN states is necessary for the health and prosperity of people and planet.

Lead author Dr Izzy Bishop (UCL Centre for Biodiversity & Environment Research, UCL Biosciences) said: “Progress towards meeting water quality targets remains dangerously off track. In order to meet global goals on water quality, we need more data to understand the problem and how we can tackle it.

“Locals who know the water and use the water are both a motivated and knowledgeable resource, so citizen science networks can enable them to provide large amounts of data and act as stewards of their local water bodies and sources.

“Citizen science has the potential to revolutionise the way we manage water resources to improve water quality.”

Earlier this year, the United Nations Environment Assembly resolved that there was a need for better collection of water quality data in order to strengthen water policies and improve the provision of clean water.

The report authors argue that improving water quality data will require governments and organisations to work collaboratively with locals who collect their own data, particularly where government monitoring is scarce, but also where there is government support for citizen science schemes.

Water quality improvement has a particularly high potential for citizen scientists to make an impact, as professionally collected data is often limited by a shortage of funding and infrastructure, while there are effective citizen science monitoring methods that can provide reliable data.

The authors write that the value of citizen science goes beyond the data collected, as there are other benefits pertaining to education of volunteers, increased community involvement, and greater potential for rapid response to water quality issues.

In presenting their report at a launch webinar this month, the team said that policy makers can learn from case studies where citizen science is already effectively contributing to water quality monitoring, and scale up the methods to be used more widely.

One positive example is in the Mara River basin in Tanzania and Kenya, where the World Water Quality Alliance has supported the governments of both countries to work with local water user associations, comprised of local citizens who rely on the river for drinking, cooking, washing, and fishing. People in the communities have been collecting data that has yielded useful insights into the river system, including how pollution from agriculture is impacting the water quality.

Another case study described in the technical report is from Sierra Leone, where the country has reported on progress towards the Sustainable Development Goal on water quality using a combination of government agency data and citizen science data that fills in the gaps, particularly for remote river tributaries that can be difficult to reach. In the Rokel River basin, citizen scientists have more than doubled the amount of data available, and locals are actively involved in developing the river’s management plan.

The report also describes successful case studies from countries with greater government resources, such as a Canadian open access data platform that enables Indigenous and non-Indigenous community groups to supply data and access training, with data covering 50,000 sites in rivers, lakes, streams and wetlands across the vast country. In the UK, the Catchment Systems Thinking Cooperative brings together stakeholders who are co-designing a consistent approach and standardising data collection methods, with links to government bodies to establish guidelines on the use of data.

Dr Steven Loiselle of Earthwatch Europe commented: “It is alarming how little information is available about the state of our rivers, lakes and groundwater, but fortunately, citizen scientists are well-placed to provide extensive data, in a low-cost and responsive manner.”

Stuart Warner, of the UN Environment Programme Global Environment Monitoring System for freshwater (GEMS/Water), said: “We are calling on policy makers across the globe to work with local communities to improve data collection that will help them to improve water quality using a readily available approach – citizen science.”

Citizen science is increasingly being incorporated into mainstream science and policy, but is still a developing field that requires frameworks and guidelines for its success, and opportunities for citizen scientists to improve their skills. At UCL, graduate students on the MSc Ecology & Data Science course can specialise in citizen science, to learn about its challenges, opportunities, and applications.

 GOOD NEWS

Study finds mercury pollution from human activities is declining



Models show that an unexpected reduction in human-driven emissions led to a 10 percent decline in atmospheric mercury concentrations



Massachusetts Institute of Technology





Cambridge, MA – MIT researchers have some good environmental news: Mercury emissions from human activity have been declining over the past two decades, despite global emissions inventories that indicate otherwise.

In a new study, the researchers analyzed measurements from all available monitoring stations in the Northern Hemisphere and found that atmospheric concentrations of mercury declined by about 10 percent between 2005 and 2020.

They used two separate modeling methods to determine what is driving that trend. Both techniques pointed to a decline in mercury emissions from human activity as the most likely cause.

Global inventories, on the other hand, have reported opposite trends. These inventories estimate atmospheric emissions using models that incorporate average emission rates of polluting activities and the scale of these activities worldwide.

“Our work shows that it is very important to learn from actual, on-the-ground data to try and improve our models and these emissions estimates. This is very relevant for policy because, if we are not able to accurately estimate past mercury emissions, how are we going to predict how mercury pollution will evolve in the future?” says Ari Feinberg, a former postdoc in the Institute for Data, Systems, and Society (IDSS) and lead author of the study.

The new results could help inform scientists who are embarking on a collaborative, global effort to evaluate pollution models and develop a more in-depth understanding of what drives global atmospheric concentrations of mercury.

However, due to a lack of data from global monitoring stations and limitations in the scientific understanding of mercury pollution, the researchers couldn’t pinpoint a definitive reason for the mismatch between the inventories and the recorded measurements.

“It seems like mercury emissions are moving in the right direction, and could continue to do so, which is heartening to see. But this was as far as we could get with mercury. We need to keep measuring and advancing the science,” adds co-author Noelle Selin, an MIT professor in the IDSS and the Department of Earth, Atmospheric and Planetary Sciences (EAPS).

Feinberg and Selin, his MIT postdoctoral advisor, are joined on the paper by an international team of researchers that contributed atmospheric mercury measurement data and statistical methods to the study. The research appears this week in the Proceedings of the National Academy of Sciences.

Mercury mismatch

The Minamata Convention is a global treaty that aims to cut human-caused emissions of mercury, a potent neurotoxin that enters the atmosphere from sources like coal-fired power plants and small-scale gold mining.

The treaty, which was signed in 2013 and went into force in 2017, is evaluated every five years. The first meeting of its conference of parties coincided with disheartening news reports that said global inventories of mercury emissions, compiled in part from information from national inventories, had increased despite international efforts to reduce them.

This was puzzling news for environmental scientists like Selin. Data from monitoring stations showed atmospheric mercury concentrations declining during the same period.

Bottom-up inventories combine emission factors, such as the amount of mercury that enters the atmosphere when coal mined in a certain region is burned, with estimates of pollution-causing activities, like how much of that coal is burned in power plants.

“The big question we wanted to answer was: What is actually happening to mercury in the atmosphere and what does that say about anthropogenic emissions over time?” Selin says.

Modeling mercury emissions is especially tricky. First, mercury is the only metal that is in liquid form at room temperature, so it has unique properties. Moreover, mercury that has been removed from the atmosphere by sinks like the ocean or land can be re-emitted later, making it hard to identify primary emission sources.

At the same time, mercury is more difficult to study in laboratory settings than many other air pollutants, especially due to its toxicity, so scientists have limited understanding of all chemical reactions mercury can undergo. There is also a much smaller network of mercury monitoring stations, compared to other polluting gases like methane and nitrous oxide.

“One of the challenges of our study was to come up with statistical methods that can address those data gaps, because available measurements come from different time periods and different measurement networks,” Feinberg says.

Multifaceted models

The researchers compiled data from 51 stations in the Northern Hemisphere. They used statistical techniques to aggregate data from nearby stations, which helped them overcome data gaps and evaluate regional trends.

By combining data from 11 regions, their analysis indicated that Northern Hemisphere atmospheric mercury concentrations declined by about 10 percent between 2005 and 2020.

Then the researchers used two modeling methods — biogeochemical box modeling and chemical transport modeling — to explore possible causes of that decline.  Box modeling was used to run hundreds of thousands of simulations to evaluate a wide array of emission scenarios. Chemical transport modeling is more computationally expensive but enables researchers to assess the impacts of meteorology and spatial variations on trends in selected scenarios.
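The study's models are far more detailed, but the spirit of a biogeochemical box model can be conveyed with a toy one-box version of the atmosphere driven by a declining anthropogenic emission scenario (every parameter below is a placeholder, not a value from the paper):

    import numpy as np

    # Toy atmospheric mercury box: d(atm)/dt = anthropogenic + legacy re-emission - atm / lifetime
    tau = 0.5        # atmospheric lifetime of mercury, years (placeholder)
    legacy = 2.0     # re-emission from ocean and land, kt/yr (placeholder, held constant)
    emissions = np.linspace(2.3, 1.7, 16)   # hypothetical declining anthropogenic emissions, 2005-2020 (kt/yr)

    atm = (emissions[0] + legacy) * tau     # start at steady state for the first year's emissions
    burden = []
    for E in emissions:
        for _ in range(12):                 # monthly forward-Euler steps for stability
            atm += ((E + legacy) - atm / tau) / 12
        burden.append(atm)

    print(f"simulated change in atmospheric burden: {100 * (burden[-1] / burden[0] - 1):+.0f}%")

The actual analysis ran hundreds of thousands of such scenarios, with more reservoirs and parameters, to see which emission histories are consistent with the observed decline of about 10 percent.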

For instance, they tested one hypothesis that there may be an additional environmental sink that is removing more mercury from the atmosphere than previously thought. The models would indicate the feasibility of an unknown sink of that magnitude.

“As we went through each hypothesis systematically, we were pretty surprised that we could really point to declines in anthropogenic emissions as being the most likely cause,” Selin says.

Their work underscores the importance of long-term mercury monitoring stations, Feinberg adds. Many stations the researchers evaluated are no longer operational because of a lack of funding.

While their analysis couldn’t zero in on exactly why the emissions inventories didn’t match up with actual data, they have a few hypotheses.

One possibility is that global inventories are missing key information from certain countries. For instance, the researchers resolved some discrepancies when they used a more detailed regional inventory from China. But there was still a gap between observations and estimates.

They also suspect the discrepancy might be the result of changes in two large sources of mercury that are particularly uncertain: emissions from small-scale gold mining and mercury-containing products.

Small-scale gold mining involves using mercury to extract gold from soil and is often performed in remote parts of developing countries, making it hard to estimate. Yet small-scale gold mining contributes about 40 percent of human-made emissions.

In addition, it’s difficult to determine how long it takes the pollutant to be released into the atmosphere from discarded products like thermometers or scientific equipment.

“We’re not there yet where we can really pinpoint which source is responsible for this discrepancy,” Feinberg says.

In the future, researchers from multiple countries, including MIT, will collaborate to study and improve the models they use to estimate and evaluate emissions. This research will be influential in helping that project move the needle on monitoring mercury, he says.

###

This research was funded by the Swiss National Science Foundation, the U.S. National Science Foundation, and the U.S. Environmental Protection Agency.

 


 GEOLOGICAL SEXISM

What's in a mineral name? Not very many women, U-M study finds



University of Michigan





ANN ARBOR—The mineral scottyite was named after Michael Scott.

Not the Michael Scott of the television series "The Office"—the Michael Scott who was the first CEO of the technology company Apple. Billionaire George Soros likewise has a mineral, called sorosite, named after him. There's rooseveltite, after Franklin Delano Roosevelt, and leifite, after Leif Erickson. Another mineral, dewindtite, was named after a budding 23-year-old geologist named Jean Charles Louis De Windt, who drowned while swimming in a quarry.

Notably absent from this list are any minerals named after women. In fact, of the minerals named after people, 94% have been named after men, according to a study led by recent University of Michigan doctoral graduate Chris Emproto. 

Emproto combed through nearly 6,000 mineral names and found that while 50.7% of all minerals are named after men, just 2.8% of all minerals are named after women. 

Emproto and his co-authors were interested in understanding how this proportion has changed over time and determining when gender parity could be expected in the earth sciences. 

"Gender doesn't pertain to how rocks and minerals form. In the absence of any systemic barriers, we would expect gender equity or gender demographics that are consistent. But we don't see that," he said.

Instead, the researchers found that growth in the proportion of women among new minerals named for people had stalled decades ago—even as more women were becoming scientists. Growth began to slow by the mid-1980s, and by the 2000s, that proportion had reached an equilibrium: only about 10% of minerals named after people in a given year were named for women. If current naming trends hold, women will never achieve gender parity in terms of new minerals named after them.

Emproto's results are published in the journal American Mineralogist.

"What’s interesting is that we slowly stopped making progress towards gender parity in new mineral names over the last 20 years or so, even though it feels like we’ve been making steady progress towards gender parity in the field itself," Emproto said. 

Minerals are most often named by the collectors and scientists who find them. Once the properties of the new mineral are determined, a proposal is submitted for approval to an organization called the International Mineralogical Association Commission on New Minerals, Nomenclature, and Classification. Only a few rules are imposed on new mineral names: reuse of an obsolete or discredited name should not happen within 50 years, and new names must not be too similar to existing names.

Minerals are often named after a specific property of the mineral, where it's found, or for arbitrary reasons: Emproto points out "olympite," named for the 1980 Olympic Games. Naming minerals after people is the largest category of arbitrary names. As of December 2022, a total of 3,294 minerals are named for people, representing 2,742 individuals. These people are "scientists, miners, engineers, mineral collectors, poets, politicians, philosophers, philanthropists, entrepreneurs and explorers from every inhabited continent," the researchers note. 

Just 167 of the people honored with a mineral name are women, and that count includes 23 minerals whose names honor a woman and a man jointly. No minerals in the dataset were named after multiple women.

To evaluate these naming conventions, the research team recorded and categorized all 5,901 minerals approved by the IMA or "grandfathered" into use as of December 2022 into a database. The team used a binary gender classification in the study but acknowledges that it does not capture the full gender diversity of the geosciences, Emproto said.

The first two minerals named after women were marialite and laurite, both discovered and named in 1866. Marialite was named for Maria vom Rath, the wife of the German mineralogist Gerhard vom Rath, who discovered the mineral. Friedrich Wöhler named laurite after Laura Rupe Joy, the wife of his close friend, the American chemist Charles Arad Joy. 

It wasn't until 1924 that a mineral was named after a female scientist: the radioactive mineral sklodowskite was named for Marie Skłodowska Curie. But her husband, Pierre Curie, beat her to the name game: the mineral curite was named after Pierre in 1921.

"There was steady progress from the 1950s to about 1985 or 1990, and then things started to trail off," Emproto said. "But the most interesting thing is that this increase in rate is mostly driven by just a few regions."

Minerals named after Russians account for 15% of all minerals named after people, but for nearly half—43%—of those named for women. The United States ranks second, yet Russia has nearly three times as many women with minerals named after them—72—as the U.S.

Researchers may expect the rate of minerals named after women to increase as more women become involved in the sciences, but that hasn't been the case. By 1985, women earned 26% of geosciences undergraduate degrees and 24% of geosciences graduate degrees. By 2017, those numbers had increased to 43% and 30%. Even so, the increase in the rate of minerals named after women plateaued in the mid-1980s, Emproto said.

The researchers projected how long it would take for men and women to be equally represented by new mineral names—that is, for newly named minerals to honor women and men in equal proportion—assuming that the rate of new mineral discoveries keeps increasing as it has since 1950. They found that gender parity would occur around the year 2266. At this rate, people would need to find more than 44,000 as-yet-undiscovered minerals—something researchers don't expect to happen.
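One way to get intuition for that scale: if, hypothetically, a constant fraction f of all future person-honoring names went to women, closing the cumulative gap would take roughly (men − women) / (2f − 1) additional names, which balloons as f approaches one-half. A minimal sketch with placeholder fractions (this is our simplification, not the paper's fitted projection):

    def names_needed_for_parity(men: int, women: int, f_women: float) -> float:
        """Additional person-honoring mineral names needed for cumulative parity,
        assuming a constant fraction f_women of future names honor women."""
        if f_women <= 0.5:
            return float("inf")   # at 50% or below, the existing gap never closes
        return (men - women) / (2 * f_women - 1)

    # Rough counts implied by the article: ~167 women among ~3,294 person-named minerals.
    men, women = 3294 - 167, 167
    for f in (0.53, 0.60, 0.75):
        print(f"f = {f:.2f}: ~{names_needed_for_parity(men, women, f):,.0f} more names needed")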

Alternatively, to reach equal representation on a year-on-year basis by the year 2057, the proportion of minerals named for women would need to increase by about 12% annually.

"It's not time to pat ourselves on the back. Not only do we have the unsurprising result that women are significantly outnumbered by men in a field that has nothing to do with gender, but the progress we have made came to a halt a while ago," Emproto said. "We will probably run out of minerals before we reach equity, and if we do achieve that gender equity, it will be at a time when there are very few minerals left to name."

Emproto's co-authors include Gabriela Farfan, Tyler Spano, Marko Bermanec, Mike Rumsey, Barbara Dutrow, Raquel Alonso-Perez, Jessica Riaño and U-M professor of earth and environmental sciences Adam Simon.