Friday, April 14, 2023

Living through high inflation increases home ownership

UC San Diego Rady School of Management study reveals that today’s high inflation will have a lasting impact on housing markets

Peer-Reviewed Publication

UNIVERSITY OF CALIFORNIA - SAN DIEGO


People who experience periods of high inflation are more likely to buy a home, according to a new study from the University of California San Diego’s Rady School of Management.

The paper, to be published in The Journal of Finance, draws on multiple sources of data showing that households exposed to high inflation are more likely to invest in real estate. The study suggests many homeowners buy because they are motivated to protect themselves from possible future price hikes.

The study is the first to reveal that personal experience with inflation is a driver of home ownership.

“We think one reason people choose to buy instead of rent is because they are worried about future inflation, which may drive up both rent and house prices,” said Alex Steiny Wellsjo, study co-author and assistant professor of economics and strategy at the Rady School. “People who have lived through high inflation in the past may expect higher inflation in the future, causing them to wish they were a homeowner. This is especially true if they can finance with a fixed-rate mortgage, further protecting them from future inflation.”

Wellsjo added that the implications of the high inflation people are currently experiencing around the world will have a lasting impact on housing markets.

“Our paper suggests that cohorts living through the current inflationary period will have a higher demand for housing for years to come,” she said.

To find out how people make home ownership decisions, Wellsjo and co-author Ulrike Malmendier, a professor with a joint appointment at the Haas School of Business and economics department at UC Berkeley, conducted a novel survey of 700 homeowners in six European countries (Austria, Germany, Ireland, Italy, Portugal and Spain).

Survey respondents were asked what they considered good reasons to buy a home, whether they had personally experienced high inflation, whether they were worried about future inflation, and whether inflation had influenced their own decision to buy a home.

Of those surveyed, 50% indicated that “real estate is a good investment if there is inflation.” People who had lived through high inflation were 21% more likely to be worried about inflation in the future and 74% more likely to say that inflation affected their own decision to buy a home.

The authors also used data from the European Central Bank’s Household Finance and Consumption Survey of 220,000 households across 22 European countries, which revealed that the effects of experienced inflation are large. For example, increasing a typical household’s experienced inflation from 2% to 5.4% would raise its likelihood of owning a home from 65% to 75%.
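
For readers who want to see what an "experienced inflation" measure could look like in practice, studies in this literature often summarise it as a weighted average of the annual inflation rates observed over a person's lifetime. The sketch below is a hypothetical Python illustration of that idea; the weighting scheme and the numbers are assumptions for illustration, not the specification used in the paper.

```python
# Minimal sketch (hypothetical, not the paper's specification): summarise a
# person's "experienced inflation" as a weighted average of the annual
# inflation rates observed over their lifetime, with more weight on recent years.

def experienced_inflation(annual_inflation, weight_slope=1.0):
    """annual_inflation: yearly inflation rates (%) from birth year to the
    survey year, oldest first. weight_slope controls how strongly recent
    years are over-weighted (illustrative parameter)."""
    n = len(annual_inflation)
    # Linearly increasing weights: the most recent year gets the largest weight.
    weights = [(k + 1) ** weight_slope for k in range(n)]
    total = sum(weights)
    return sum(w * pi for w, pi in zip(weights, annual_inflation)) / total

# Example: someone who lived through a decade of ~8% inflation early in life
# ends up with a higher lifetime measure than someone who only ever saw ~2%.
history_high = [8.0] * 10 + [2.0] * 30
history_low = [2.0] * 40
print(experienced_inflation(history_high))  # higher than the line below
print(experienced_inflation(history_low))
```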

Households’ exposure to past episodes of higher or lower inflation can help to explain differences in the composition of homeownership both within and across countries.

For example, in Germany and Austria, less than half of households own a home. But 85% or more own in Lithuania, Slovakia and Croatia, countries that have histories of high inflation. Similarly, only 57% own their home in France, which has had more price stability, but 82% do in neighboring Spain—a country with a long history of inflation.

“These households with similar demographics and in similar financial situations make systematically different tenure decisions,” write Wellsjo and Malmendier. “While financial institutions play an important role, as do house prices, housing supply and demographics, we show that economic histories experienced by potential homeowners, and especially inflation experiences, strongly predict investment in housing.”

The effect of personal experiences appears to be powerful and long-lasting enough to influence even the homeownership decisions of immigrants, who move to a new housing market yet still respond to the inflation they were exposed to in their home countries.

Using data from the American Community Survey, Wellsjo and Malmendier identified household heads who had immigrated to the U.S. They calculated each household head’s lifetime inflation experiences, both in the home country and in the U.S., and examined how those experiences affected home-purchasing decisions after immigration. Once again, they found that household heads who had experienced higher inflation over their lifetimes were more likely to be homeowners.

“We show that the relationship between prior inflation and home purchasing choices is not explained by housing market conditions, nor by indicators of current economic conditions or other economic experiences,” the authors write. “The impacts of experiencing high inflation have a long-lasting effect on home ownership.”

Gentle method allows for eco-friendly recycling of solar cells

Peer-Reviewed Publication

CHALMERS UNIVERSITY OF TECHNOLOGY

Image: Thin-film solar cells on roof tiles. Credit: Midsummer

A new method can efficiently recover precious metals from thin-film solar cells, according to new research from Chalmers University of Technology, Sweden. The method is also more environmentally friendly than previous recycling approaches and paves the way for more flexible and highly efficient solar cells.

Today there are two mainstream types of solar cells. The most common is silicon-based and accounts for 90 percent of the market. The other type, thin-film solar cells, uses three main sub-technologies, one of which is known as CIGS (Copper Indium Gallium Selenide) and consists of a layer of different metals, including indium and silver. Thin-film solar cells are by far the most effective of today’s commercially available technologies, and they can also be made bendable and adaptable, which means they can be used in many different areas.

The problem is that demand for indium and silver is high, and increased production is accompanied by a growing amount of production waste containing a mixture of valuable metals and hazardous substances. Being able to separate the sought-after metals from the other substances is therefore extremely valuable, both economically and environmentally, as they can be reused in new products.

“It is crucial to remove any contamination and recycle, so that the material becomes as clean as possible again. Until now, high heat and large amounts of chemicals have been needed to achieve this, which is an expensive process that is also not environmentally friendly”, says Ioanna Teknetzi, PhD student at the Department of Chemistry and Chemical Engineering, who together with Burcak Ebin and Stellan Holgersson published the new results in the journal Solar Energy Materials and Solar Cells.

Now their research shows that a more environmentally friendly recycling process can have the same outcome.

“We took into account both purity and environmentally friendly recycling conditions and studied how to separate the metals in the thin-film solar cells in acidic solutions through a much ‘kinder’ way of using a method called leaching. We also have to use chemicals, but nowhere near as much as with previous leaching methods. To check the purity of the recovered indium and silver, we also measured the concentrations of possible impurities and saw that optimisation can reduce these”, says Ioanna Teknetzi.

The researchers showed that it is possible to recover 100 percent of the silver and about 85 percent of the indium. The process takes place at room temperature without adding heat.

“It takes one day, which is slightly longer than traditional methods, but with our method, it becomes more cost-effective and better for the environment. Our hopes are that our research can be used as a reference to optimise the recycling process and pave the way for using the method on a larger scale in the future”, says Burcak Ebin.

 

The method

1. The film from the solar cell is analysed with respect to material, chemical composition, particle size and thickness. The solar cell is placed in a container with an acid solution at the desired temperature. Agitation is used to facilitate dissolution of metals in the acid solution. This process is called leaching.

2. Leaching effectiveness and chemical reactions are assessed by analysing samples taken at specific times during the leaching process. The different metals are leached at different times. This means that the process can be stopped before all the metals begin to dissolve, which in turn contributes to achieving higher purity.

3. When the leaching is complete, the desired metals are in the solution in the form of ions and can be easily purified to be reused in the manufacture of new solar cells.
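
For readers curious about the arithmetic behind the recovery figures, the yield at each sampling time is a simple mass balance: the mass of metal found in solution divided by the mass originally present in the film. The Python sketch below illustrates that bookkeeping with made-up numbers; it is not code or data from the study.

```python
# Minimal sketch of the mass-balance bookkeeping behind a leaching yield,
# using made-up numbers rather than measurements from the study.

def recovery_percent(concentration_mg_per_l, solution_volume_l, metal_in_feed_mg):
    """Fraction of a metal recovered into solution at a given sampling time.

    concentration_mg_per_l: measured ion concentration in the leachate (mg/L)
    solution_volume_l:      volume of acid solution in the container (L)
    metal_in_feed_mg:       mass of that metal initially present in the film (mg)
    """
    dissolved_mg = concentration_mg_per_l * solution_volume_l
    return 100.0 * dissolved_mg / metal_in_feed_mg

# Hypothetical sampling times during one day of leaching at room temperature.
samples = {"2 h": 1.2, "8 h": 3.0, "24 h": 4.3}   # silver concentration, mg/L
volume_l = 0.5                                     # acid solution volume
silver_in_feed_mg = 2.15                           # silver initially in the film

for time, conc in samples.items():
    print(time, f"{recovery_percent(conc, volume_l, silver_in_feed_mg):.0f}% Ag recovered")
```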

 

More about the study

The study, “Valuable metal recycling from thin film CIGS solar cells by leaching under mild conditions”, has been published in Solar Energy Materials and Solar Cells. The authors are Ioanna Teknetzi, Burcak Ebin and Stellan Holgersson at the Department of Chemistry and Chemical Engineering at Chalmers University of Technology. The study was carried out at Chalmers Materials Analysis Laboratory, CMAL, and the research has received funding from the Swedish Energy Agency.

 

For more information, please contact:

Ioanna Teknetzi, PhD student, Department of Chemistry and Chemical Engineering, Chalmers University of Technology, ioanna.teknetzi@chalmers.se

Dr. Burcak Ebin, researcher, Department of Chemistry and Chemical Engineering, Chalmers University of Technology, +46 31 772 17 29, burcak@chalmers.se

Dr. Stellan Holgersson, researcher, Department of Chemistry and Chemical Engineering, Chalmers University of Technology, +46 31 772 28 02, stehol@chalmers.se

 

 

Caption: Thin-film solar cells are highly efficient and can be made bendable and adaptable, meaning they can be used in a wide range of areas, such as here on roof tiles. Photo of solar cells: Midsummer

How did the Andes Mountains get so huge? A new geological research method may hold the answer


Peer-Reviewed Publication

UNIVERSITY OF COPENHAGEN - FACULTY OF SCIENCE

Image: Valentina Espinoza and Giampiero Iaffaldano. Credit: University of Copenhagen

How did the Andes – the world's longest mountain range – reach its enormous size? This is just one of the geological questions that a new method developed by researchers at the University of Copenhagen may be able to answer. With unprecedented precision, the method allows researchers to estimate how Earth's tectonic plates have changed speed over millions of years.

The Andes is Earth’s longest above-water mountain range. It spans 8,900 kilometres along South America’s western periphery, is up to 700 kilometres wide and in some places climbs nearly seven kilometres into the sky. But exactly how this colossal mountain range emerged from Earth's interior remains unclear to geologists.

University of Copenhagen researchers now offer a new hypothesis. Using a novel method developed by one of them, they closely studied the tectonic plate on which the range sits. Their findings shed new light on how the Andes came into being.

Tectonic plates cover Earth's surface like massive puzzle pieces. They shift a few centimetres each year, at about the same pace as our nails grow. From time to time, these plates can suddenly speed up or slow down, yet we know little about the fierce forces behind these events. The UCPH researchers arrived at estimates that are more precise than ever before, both with regard to how much and how often the plates changed velocity in the past.

The researchers' new calculations demonstrate that the South American plate suddenly and dramatically slowed down on two occasions over the past 15 million years, and that this may have contributed to the widening of the enormous mountain chain. The study’s results have been published in the journal Earth and Planetary Science Letters.

Remarkably, the two sudden slowdowns each followed a period in which the Andean range was under compression and rapidly growing taller:

“In the periods leading up to the two slowdowns, the plate immediately to the west, the Nazca Plate, plowed into the mountains and compressed them, causing them to grow taller. This result could indicate that part of the pre-existing range acted as a brake on both the Nazca and the South American plates. As the plates slowed down, the mountains instead grew wider,” explains first author and PhD student Valentina Espinoza of the Department of Geosciences and Natural Resource Management.

Mountains made the plate heavier

According to the new study, the South American plate slowed down by 13% during a period that occurred 10-14 million years ago, and 20% during another period 5-9 million years ago. In geologic time, these are very rapid and abrupt changes. According to the researchers, there are mainly two possible reasons for South America’s sudden slowdowns.

One could, as mentioned, be related to the extension of the Andes, where the pressure relaxed and the mountains grew wider. The researchers' hypothesis is that the interaction between the expansion of the mountains and the lower speed of the plate was due to a phenomenon called delamination. That is, a great deal of unstable material beneath the Andes tore free and sank into the mantle, causing major readjustments in the plate’s configuration.

This process caused the Andes to change shape and grow laterally. It was during these periods that the mountain chain expanded into Chile to the west and Argentina to the east. As the plate accumulated more mountain material and became heavier, the plate’s movement slowed.

"If this explanation is the right one, it tells us a lot about how this huge mountain range came to be. But there is still plenty that we don't know. Why did it get so big? At what speed did it form? How does the mountain range sustain itself? And will it eventually collapse?" says Valentina Espinoza.

According to the researchers, another possible explanation for the plate’s slowdown is a change in the pattern of heat flow from Earth's interior, known as convection, within the uppermost viscous layer of the mantle on which the tectonic plates float. Such a change would manifest itself as a change in the plate’s movement.

The researchers now have the information and tools to begin testing their hypotheses through modelling and experimentation.

May become a new standard model

The method for calculating changes in tectonic plate motion builds on previous work by associate professor and study co-author Giampiero Iaffaldano and Charles DeMets from 2016. What makes the method special is that it uses high-resolution geological data, typically employed only to calculate the motion of plates relative to one another, to calculate changes in the motion of plates relative to the planet itself. This provides estimates with unprecedented accuracy.
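
To make the distinction between relative and absolute motion concrete: in plate kinematics a plate's motion is described by an angular velocity (Euler) vector, and relative and absolute motions compose by simple vector addition. The short Python sketch below illustrates that relationship with invented numbers; it is a generic kinematic illustration, not the authors' method or data.

```python
import numpy as np

# Generic illustration (not the authors' code): plate motions are angular
# velocity (Euler) vectors, and they compose additively:
#   w(A relative to Earth) = w(A relative to B) + w(B relative to Earth)

def absolute_motion(w_A_rel_B, w_B_abs):
    """Absolute angular velocity of plate A, given its motion relative to
    plate B and the absolute motion of plate B (deg/Myr, Cartesian)."""
    return np.asarray(w_A_rel_B) + np.asarray(w_B_abs)

# Hypothetical numbers in degrees per million years (Cartesian components).
w_SAm_rel_Nazca = np.array([0.10, -0.45, -0.30])   # South America relative to Nazca
w_Nazca_abs     = np.array([-0.05, 0.30, 0.55])    # Nazca relative to the deep Earth

w_SAm_abs = absolute_motion(w_SAm_rel_Nazca, w_Nazca_abs)
speed = np.linalg.norm(w_SAm_abs)                  # magnitude of the rotation rate
print(w_SAm_abs, f"|w| = {speed:.2f} deg/Myr")

# A slowdown like the 13% drop inferred 10-14 million years ago would show up
# as a decrease in this magnitude between successive time windows.
```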

After testing the method with a combination of six other tectonic plates, the researchers believe that it could become a new standard method:

"This method can be used for all plates, as long as high-resolution data are available. My hope is that such method will be used to refine historic models of tectonic plates and thereby improve the chance of reconstructing geological phenomena that remain unclear to us," says Giampiero Iaffaldano, who concludes:

"If we can better understand the changes that have occurred in the motions of plates over time, we can have a chance at answering some of the greatest mysteries of our planet and its evolution. We still know so little about, for example: the temperature of Earth's interior, or about when plates began moving. Our method can most likely be used to find pieces for this great big puzzle."

FACT BOX: ABOUT THE METHOD

  • Tectonic plates change speed often, but high-resolution data is needed to identify their rapid changes over time spans of less than a couple million years.
     
  • The method developed by Giampiero Iaffaldano and Charles DeMets in 2016 differs from others in one key respect. Typically, high-resolution data are used only to calculate the relative motion of plates, i.e. their motion relative to other plates. Their method uses the same kind of data to calculate the absolute motion of plates, i.e. their motion relative to Earth itself. This yields far more accurate estimates than those currently obtained from hotspot volcanic chains.

FACT BOX: ABOUT PLATE TECTONICS

  • The theory of plate tectonics, first recognized in the 1960s, states that Earth is covered by an outer shell (the lithosphere) divided into a number of rigid plates that float on top of the upper part of Earth's mantle (the asthenosphere).
  • Observations show that plates come in all sorts of sizes, from the Pacific Plate, which covers an area of about 100 million square kilometres, to microplates roughly a hundred times smaller. Tectonic plates may comprise a continental portion, which can be up to 350 kilometres thick, and an oceanic portion, which rarely exceeds 100 kilometres in thickness.

Infant formulas promise too much

Not all infant formulas are equally nutritious

Peer-Reviewed Publication

NORWEGIAN UNIVERSITY OF SCIENCE AND TECHNOLOGY

Many infant formulas promise a lot. Several products claim that they help develop the brain, increase immunity and promote children's growth and development, among other things.

Now a research group led by Imperial College London has looked at whether these promises have any substance to them. The article has recently been published in BMJ.

“Most of the claims about the health-giving and nutritional properties of breast milk substitutes seem to be based on little or no evidence,” the research group says.

Claims surrounding these replacement milk products are controversial. They can give the impression that infant formulas are just as good as breast milk, and perhaps even better, without any scientific basis for the claim.

Many breastfeeding mums in Norway

The researchers examined products from 15 countries with different social and economic conditions. Norwegian data are also included.

Norway has a tradition of breastfeeding infants for a long time. Four out of five infants in Norway still receive breast milk when they are six months old, and only two per cent never receive any breast milk.

“Supportive social arrangements and long parental leave contribute to allowing many mothers in Norway to breastfeed,” says Melanie Rae Simpson, an associate professor at NTNU’s Department of Public Health and Nursing.

Simpson has contributed data to the new survey. She is happy about the social arrangements.

“Strict rules for marketing breast milk substitutes mean that advertising doesn’t influence how long women in Norway breastfeed,” says Simpson.

At the same time, some infant formulas make a lot of promises.

Norwegian claims maybe not so crazy, but could be better

“A relatively high proportion of the products available in Norway include one or more claims about being beneficial for health,” says Simpson.

But that doesn’t necessarily mean that the situation in Norway is that bad.

“With so many women who breastfeed, we don't have as many different types of infant formula in our grocery stores compared to some of the other countries in the study,” she says.

This means that a relatively high proportion of the products in Norway are sold in pharmacies.

“These are basically made for children with special needs,” Simpson points out.

The claims of these products are therefore often linked precisely to the special needs of children, but not always.

Norway has clear legislation to prevent undocumented claims from being used in connection with breast milk substitutes. Nevertheless, the documentation was characterized by the same challenges around transparency, independence from industry and scientific quality that the research group saw in the other countries.

608 out of 757 made claims

The research group examined the websites of the various companies that make infant formula. They also inspected the packaging of the products and checked all the health and nutrition claims against the documentation.

The research group found 41 different ingredients linked to these claims, but several companies also market their products without referring to specific ingredients.

The group examined a total of 757 products, and 608 of them included at least one of 31 different claims about nutrition and health.

Industry runs its own research

Only 161 of the 608 products referred to scientific research to support their claims, and only a small share of the cited studies, about 14 per cent, were clinical investigations carried out in humans.

Of these, the researchers found that 90 per cent carried a high risk of bias, either because the studies had received money from the industry or because the research was carried out by the industry itself.

Much of the so-called "research" consists of reviews, opinion pieces and other work that does not meet sufficiently high quality standards, such as studies on non-human species.

On average, the products included two claims. But the aggressiveness of the marketing varies greatly, from an average of one claim in Australia to as many as four claims in the USA.

Calls for stricter rules

The research group wants stricter rules, and quickly, both to better protect consumers and to prevent aggressive marketing from having unwanted consequences for children’s health.

The researchers are supported by Professor Nigel Rollins from the World Health Organization (WHO). He believes that self-regulation, where the industry itself largely runs the research on product effectiveness, is clearly not good enough. Regulatory authorities in the various countries should therefore consider whether they need to do something to improve conditions.

Products from Norway, Australia, Canada, Germany, India, Italy, Japan, Nigeria, Pakistan, Russia, Saudi Arabia, South Africa, Spain, Great Britain and the USA were included in the study.

Reference: Cheung K Y, Petrou L, Helfer B, Porubayeva E, Dolgikh E, Ali S, et al. Health and nutrition claims for infant formula: international cross sectional survey. BMJ 2023;380. doi:10.1136/bmj-2022-071075

Adaptations allow Antarctic icefish to see under the sea ice

Peer-Reviewed Publication

SMBE JOURNALS (MOLECULAR BIOLOGY AND EVOLUTION AND GENOME BIOLOGY AND EVOLUTION)

Image: The cover of the recent issue of Molecular Biology and Evolution in which the article appears, showing how icefish rhodopsin displays kinetic and spectral adaptation to the cold, dark seas of the Antarctic. Credit: Oxford University Press

Antarctica may seem like a desolate place, but it is home to some of the most unique lifeforms on the planet. Despite the fact that land temperatures average around -60°C and ocean temperatures hover near the freezing point of saltwater (-1.9°C), a number of species thrive in this frigid habitat. Antarctic icefishes (Cryonotothenioidea) are a prime example, exhibiting remarkable adaptations that allow them to survive in the icy waters surrounding the continent. For example, these fish have evolved special “antifreeze” glycoproteins that prevent the formation of ice in their cells. Some icefishes are “white-blooded” due to no longer making hemoglobin, and some have lost the inducible heat shock response, a nearly universal molecular response to high temperatures. Adding to this repertoire of changes, a recent study published in Molecular Biology and Evolution reveals the genetic mechanisms by which the visual systems of Antarctic icefishes have adapted to both the extreme cold and the unique lighting conditions under Antarctic sea ice.

A team of researchers, led by Gianni Castiglione (now at Vanderbilt University) and Belinda Chang (University of Toronto), set out to explore the impact of sub-zero temperatures on the function and evolution of the Antarctic icefish visual system. The authors focused on rhodopsin, a temperature-sensitive protein involved in vision under dim-light conditions. As noted by Castiglione, a key role for rhodopsin in cold adaptation was suggested by their previous research. “We had previously found cold adaptation in the rhodopsins of high-altitude catfishes from the Andes mountains, and this spurred us into investigating cold adaptation in rhodopsins from the Antarctic icefishes.”

Indeed, the authors observed evidence of positive selection and accelerated rates of evolution in rhodopsins among Antarctic icefishes. Taking a closer look at the specific sites identified as candidates for positive selection, Castiglione and coauthors found two amino acid variants that were absent from other vertebrates. These changes are predicted to have occurred during two key periods in Antarctic icefish history: the evolution of antifreeze glycoproteins and the onset of freezing polar conditions. This timing suggests that these variants were associated with icefish adaptation and speciation in response to climatic events.

To confirm the functional effects of these two amino acid variants, the researchers performed in vitro assays in which they created versions of rhodopsin containing each variant of interest. Both amino acid variants affected rhodopsin’s kinetic profile, lowering the activation energy required for return to a “dark” conformation and likely compensating for a cold-induced decrease in rhodopsin’s kinetic rate. In addition, one of the amino acid changes resulted in a shift in rhodopsin’s light absorbance toward longer wavelengths. This dual functional change came as a surprise to Castiglione and his co-authors. “We were surprised to see that icefish rhodopsin has evolved mutations that can alter both the kinetics and absorbance of rhodopsin simultaneously. We predict that this allows the icefish to adapt their vision to red-shifted wavelengths under sea ice and to cold temperatures through very few mutations.”
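
The compensation argument can be pictured with the standard Arrhenius relation between temperature, activation energy and reaction rate: cooling slows a thermally activated step, while a lower activation energy speeds it back up. The Python sketch below illustrates this trade-off with invented parameters, not the values measured in the study.

```python
import math

# Illustrative Arrhenius-type trade-off (made-up parameters, not the study's
# measured values): k = A * exp(-Ea / (R * T)). Cooling the water slows the
# reaction, but a lower activation energy Ea can push the rate back up.

R = 8.314  # gas constant, J/(mol*K)

def rate_constant(A, Ea_kJ_per_mol, T_kelvin):
    return A * math.exp(-Ea_kJ_per_mol * 1000.0 / (R * T_kelvin))

A = 1.0e12           # pre-exponential factor (arbitrary; cancels in the ratios)
T_temperate = 283.0  # ~10 C
T_antarctic = 271.0  # ~ -2 C, near the freezing point of seawater

k_warm    = rate_constant(A, 80.0, T_temperate)   # ancestral-like Ea, warm water
k_cold    = rate_constant(A, 80.0, T_antarctic)   # same Ea, Antarctic water
k_adapted = rate_constant(A, 77.0, T_antarctic)   # slightly lower Ea, Antarctic water

print(f"cooling alone slows the rate by a factor of {k_warm / k_cold:.1f}")
print(f"a modest drop in Ea recovers a factor of {k_adapted / k_cold:.1f}")
```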

Interestingly, the amino acid changes observed in the Antarctic icefishes were distinct from those conferring cold adaptation in the high-altitude catfishes previously studied by the team, suggesting multiple pathways to adaptation in this protein. To continue this line of study, Castiglione and his colleagues hope to investigate cold adaptation in the rhodopsins of other cold-dwelling fish lineages, including Arctic fishes. “Arctic fishes share many of the cold-adapted phenotypes found in the Antarctic icefishes, such as antifreeze proteins. However, this convergent evolution appears to have been accomplished through divergent molecular mechanisms. We suspect this may be the case in rhodopsin as well.”

Unfortunately, acquiring the data needed to conduct such an analysis may prove difficult. “A major obstacle to our research is the difficulty of collecting fishes from Antarctic and Arctic waters,” says Castiglione, “which limits us to publicly available datasets.” This task may become even more challenging in the future as these cold-adapted fish are increasingly affected by warming global temperatures. As Castiglione points out, “Climate change may alter the adaptive landscape of icefishes in the very near future, as sea ice continues to melt, forcing the icefish to very likely find themselves at an evolutionary ‘mismatch’ between their environment and their genetics.”

Microplastics can help dangerous bacteria to survive on Scottish beaches 

Reports and Proceedings

MICROBIOLOGY SOCIETY

It has been understood for some time that microplastics provide a protective environment (the so-called ‘plastisphere’) in which bacteria can survive in wastewater. For the first time, researchers at the University of Stirling, Scotland, have tracked how that could enable bacteria to survive the journey to the sea and make their way onto our beaches, where they can come into contact with humans.   

Lead researcher Rebecca Metcalf, supervised by Professor Richard Quilliam, subjected microplastics colonised by bacteria in wastewater to the different environments they would likely pass through on their way to our beaches. Metcalf and her team found that not only could bacteria such as E. coli survive the entire journey, but viable bacteria also persisted for seven days on the sand.

“The plastic is providing a substrate for transferring pathogens from wastewater, and through river water, estuary and seawater, and finally up onto the beaches where they are much more likely to come into contact with humans” explains Metcalf. “Other surfaces where bacteria colonise, such as seaweed, wouldn’t necessarily go through that transfer route.”   

Concerned by these findings, Metcalf wanted to see whether this survival was happening on real beaches in Scotland. The team collected polyethene and polystyrene plastic waste from 10 Scottish beaches and screened it for seven target bacteria that cause disease in humans. Alarmingly, these bacteria were present in virtually all of the samples, with some showing resistance to our most commonly used antibiotics.

This is worrying in light of sewage leaks and wastewater overflows onto our beaches: “We already have sewage ending up in the environment that contains harmful bacteria. But the plastics are transporting bacteria into places where they are more likely to come into contact with people” according to Metcalf.   

“We hope that our research will add to the growing body of evidence, support increasing public awareness and ultimately push towards legislative changes for plastic discharge to the environment.”

Research is still needed to fully understand the potential risk to those bathing at Britain’s beaches, as the likelihood of these pathogens causing disease in humans is unknown. The researchers urge the public to take care around plastic pollution but stress the importance of removing plastic from our beaches: “Don’t be afraid of taking part in a beach clean. It is vital that we remove plastics from our beaches and dispose of them correctly, but I would encourage the public to wash their hands or use gloves.”

New look at climate data shows substantially wetter rain and snow days ahead

Research shows that by the end of the century the biggest rain and snow days will be 20 to 30% wetter than they are today


Peer-Reviewed Publication

DOE/LAWRENCE BERKELEY NATIONAL LABORATORY

Image: Maps of LOCA2 estimates of how often a “once-in-a-century” day of rain or snow will occur under different climate change scenarios between now and 2100. Colors on the maps show how frequently researchers expect such an extreme precipitation event to occur, with the darkest brown indicating every 30 to 40 years. Credit: Dave Pierce/Scripps Institution of Oceanography

A key source of information underpinning the upcoming National Climate Assessment suggests that heavy precipitation days historically experienced once in a century by Americans could in the future be experienced on several occasions in a lifetime. 

Scientists at Scripps Institution of Oceanography at UC San Diego and the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) report that extremely intense days of rain or snow will be more frequent by the end of this century than previously thought – as often as once every 30 or 40 years in the Pacific Northwest and southeastern United States.

The conclusions come from analyzing a 30-terabyte data set that models temperature and precipitation at scales roughly the size of urban ZIP codes: six kilometers (3.9 miles). Researchers developed the data set, called Localized Constructed Analogs Version 2 (LOCA2), to provide climate information that is useful for local planners. In contrast, most existing advanced climate models look at regions that range from 50 to 250 kilometers (roughly 30 to 155 miles).

“With this data set, we’re able to look at the impacts of actual weather pattern changes across the United States at an extremely granular level,” said Dan Feldman, staff scientist at Berkeley Lab and the project’s principal investigator. “We see that there is a lot more extreme weather that is likely to happen in the future – and by looking at actual weather patterns, we show that changes in extreme precipitation will actually be more extreme than previously estimated. Land use managers and planners should expect more extremes, but location matters.” 

The LOCA2 data set updates a similar analysis conducted in 2016 in advance of the Fourth National Climate Assessment (NCA), which was released in 2018 by the U.S. Global Change Research Program. The NCA is intended to assist the U.S. government with planning for, mitigating, and adapting to changes in climate that will affect the country. The Fifth NCA is expected to be issued later this year.

LOCA2 projections cover the lower 48 states of the United States, southern Canada, and northern Mexico. The data set draws on more than 70 years of weather data and incorporates 27 updated climate models from the Coupled Model Intercomparison Project (CMIP6), the latest iteration of an international effort to simulate climate that includes the “coupling” of natural systems such as the ocean and atmosphere to understand how they will act in concert as climate changes. 

“We've spent a lot of effort improving the representation of extreme wet days, which is important for understanding both the likelihood of flooding and the availability of water for agricultural, commercial, and residential use,” said David Pierce, a scientist at Scripps Oceanography and the developer of LOCA and LOCA2.

The LOCA2 climate projections are available through the end of the century down to the daily level, and for three different greenhouse gas emissions scenarios known as SSPs, or Shared Socioeconomic Pathways. The three scenarios are a medium level of emissions that is slightly less than current levels (SSP 245), medium-high (SSP 370), and high, where emissions greatly increase (SSP 585). The data set is freely available for planners and decision makers to use. 

The projection reinforces what climate scientists have long predicted: Future weather events will become more extreme in a warming world. LOCA2 finds that the heaviest days of rain and snowfall across much of North America will likely release 20 to 30 percent more moisture than they do now. Much of the increased precipitation will occur in winter, potentially exacerbating flooding in regions such as the upper Midwest and the west coast. 

“The big picture is clear: it’s getting warmer and wetter,” Feldman said. “This research translates that bigger picture into more practical data for infrastructure and operations planning. With this more detailed look at local impacts, we can help local officials make better-informed decisions, such as how long to make an airport runway, how much resilience to include for constructing buildings or bridges, or where to put crops or culverts.”

The improved LOCA2 data set was created by better identifying and preserving past extreme weather events and by training models to more accurately reflect extremes in simulations of the future.

“We undertook a Herculean effort of personnel and computer time not just to produce a bunch of numbers, but to produce local projections that are relevant and useful,” Feldman said. “We do so by recognizing how heat waves and storms have occurred and will occur at the local level, and projecting those forward.”

Seasonal and regional predictions

While the data varies at the local level, researchers found substantial trends across the area covered by LOCA2 at the end of the century.  

Across most seasons, a major part of North America will see roughly the same number of precipitation days or fewer, roughly the same number of days with light and medium precipitation or fewer, and a large increase in the number of days with the most extreme precipitation (the top 1 percent and 0.1 percent of storms).

“People will be more affected by the really rare and most extreme events, because those are showing the biggest increase,” said Pierce, who is the lead author of the paper on extreme precipitation published in the Journal of Hydrometeorology. “The wettest day you would expect to see in five years, or 50 years, or 500 years – those extreme events are going to be substantially wetter, and that’s a really big issue, because it has implications for flooding and run-off.”
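
To illustrate the kind of statistic Pierce is describing, the Python sketch below estimates empirical N-year "return levels" from a synthetic century of daily precipitation and shows how they shift when the wettest days are made 25 percent wetter, in line with the reported 20 to 30 percent increases. It is a toy illustration with synthetic data, not the LOCA2 downscaling pipeline.

```python
import numpy as np

# Toy illustration (synthetic data, not the LOCA2 pipeline) of a return level:
# the daily precipitation amount exceeded, on average, once every N years.

def return_level(daily_precip_mm, return_period_years):
    """Empirical N-year daily return level, estimated from a long daily record."""
    exceedance_prob_per_day = 1.0 / (return_period_years * 365.25)
    return np.quantile(daily_precip_mm, 1.0 - exceedance_prob_per_day)

rng = np.random.default_rng(0)
n_days = int(100 * 365.25)   # a century of synthetic daily totals

# Synthetic "historical" precipitation (mm/day) and a "future" series whose
# wettest days (the top 1 percent) are 25% wetter.
historical = rng.gamma(shape=0.4, scale=6.0, size=n_days)
future = historical.copy()
wettest = future > np.quantile(future, 0.99)
future[wettest] *= 1.25

for rp in (5, 50):
    print(f"{rp}-year daily event: "
          f"{return_level(historical, rp):.0f} mm -> {return_level(future, rp):.0f} mm")
```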

Southern Canada and most of the United States will see increases in extreme precipitation days that occur primarily in winter. The wettest days of precipitation will increase by 20-30 percent, depending on the emissions scenario and how extreme the storm is.

Arizona, New Mexico, and northern Mexico can expect increases in extreme precipitation days that occur primarily in autumn. The wettest days of precipitation increase by 10-30 percent, depending on which emissions scenarios come to be and how extreme the storms are. While the region becomes drier overall, the number of days with extreme precipitation events still goes up, meaning the precipitation that does come will often do so in larger storms.

“It’s quite interesting that you see the same kind of pattern of fewer low- and medium-precipitation days and more extreme precipitation days across pretty much the entire country,” Pierce said. Knowing the changing character of precipitation and the frequency of extreme events is useful in two ways, Pierce added. “One is for building new infrastructure in the future, and one is for understanding impacts upon existing facilities already there.”

Funding for this research was provided by the Department of Defense and Department of Energy through the Strategic Environmental Research and Development Program (SERDP). The NASA High-End Computing Capability (HECC) Program provided resources supporting this work through the NASA Earth Exchange (NEX), Earth Science Division, and the NASA Advanced Supercomputing (NAS) Division at Ames Research Center.

###

Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 16 Nobel Prizes. Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and the universe. Scientists from around the world rely on the Lab’s facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of California for the U.S. Department of Energy’s Office of Science.

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.

One brain, multiple and simultaneous alternative decision strategies

Peer-Reviewed Publication

CHAMPALIMAUD CENTRE FOR THE UNKNOWN

Choosing a checkout line in a supermarket might seem like a no-brainer, but it can actually involve a complex series of cerebral computations. Maybe you count the number of shoppers in each line and pick the shortest, or estimate the number of items on each conveyor belt. Perhaps you quickly weigh up both shoppers and items and maybe even the apparent speed of the cashier... In fact, there are a multiplicity of strategies for solving this problem. 

So how does the brain know how to make decisions in situations like this where there are multiple possible strategies to choose from?

A study published today, April 13th, in the journal Nature Neuroscience provides a surprising answer to this question by showing that, rather than committing to a single strategy, the brain can compute multiple alternative decision strategies simultaneously. In the study, led by Fanny Cazettes and senior authors Zachary Mainen and Alfonso Renart at the Champalimaud Foundation in Lisbon, Portugal, the researchers performed a specially designed experiment using a kind of “virtual reality” setup for mice, in which the animals were tasked with searching for water in a virtual world.

Specifically, the authors designed a "virtual mouse world" containing the kind of foraging problem that animal brains have evolved to be good at, allowing them to study the complex decision strategies used by mice. Any given location in the virtual world could provide water unreliably and, at some point, would “dry up” and cease giving water altogether. The mice had to decide when to leave a given location and move to another in search of more water.

To solve the task optimally, the best strategy would be for the mice to learn to count the number of consecutive missed attempts to get water at a given site, and to switch locations when the number of consecutive misses was sufficiently large. But there were multiple alternative strategies for processing the series of successful and unsuccessful tries, including, for instance, calculating the difference between the number of successful and unsuccessful attempts. Each strategy combines misses and successful tries across time in a particular way, and thus has a signature time course – which is called the “decision variable” – that can be matched against the time course of brain activity patterns.
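
The two strategies described above can each be written as a simple update rule applied to the same sequence of outcomes, which is what gives each decision variable its own signature time course. The Python sketch below illustrates both rules; it is a schematic illustration, not the paper's analysis code.

```python
# Schematic illustration (not the paper's analysis code) of the two decision
# variables described above, computed over the same sequence of foraging
# outcomes. outcome = 1 for a successful water attempt, 0 for a miss.

def consecutive_misses(outcomes):
    """Optimal-style strategy: count misses in a row, reset on any success."""
    dv, trace = 0, []
    for o in outcomes:
        dv = 0 if o == 1 else dv + 1
        trace.append(dv)
    return trace

def success_minus_failure(outcomes):
    """Alternative strategy: running difference between successes and misses."""
    dv, trace = 0, []
    for o in outcomes:
        dv += 1 if o == 1 else -1
        trace.append(dv)
    return trace

outcomes = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]   # example sequence at one site
print(consecutive_misses(outcomes))          # [0, 0, 1, 0, 1, 2, 3, 0, 1, 2]
print(success_minus_failure(outcomes))       # [1, 2, 1, 2, 1, 0, -1, 0, -1, -2]

# A mouse following the first rule would leave the site once the count of
# consecutive misses crosses a threshold; the two traces have distinct time
# courses that can be compared against premotor cortex activity.
```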

The researchers recorded activity from large ensembles of individual brain cells in a part of the brain known as the premotor cortex while the mice performed the task. They then looked for combinations of the temporal profiles of activity of recorded premotor neurons which resembled the decision variables associated with the different strategies.

To the authors’ surprise, data showed that, while each mouse focused on their own strategy, their brains did not. Fanny Cazettes explains, “We found that, while activity in the premotor cortex reflected the computation that the mouse was actually using, it also reflected alternative decision variables useful for the same task, and even decision variables useful for other tasks.” Zach Mainen, one of the study’s senior authors, adds that, “Contrary to our experience in checkout lines, we found that the brain can actually perform several different counting strategies at the same time, which is reminiscent of the concept of superposition in quantum mechanics."

Although there is still much to be explored in this area, the study provides an important foundation for future research. “Our findings suggest the need for new ways of thinking about the core processes involved in decision-making and action selection. One of our next steps will be to investigate how the brain selects between different decision variables and how these decisions are translated into action,” says Fanny Cazettes.

What could be the usefulness of representing both used and unused strategies simultaneously? “This arrangement might facilitate cognitive flexibility and learning, because changing strategies only requires attending to the right precomputed decision variable, rather than having to construct it from scratch”, argues Alfonso Renart, the other senior author. “These findings have important implications for our understanding of how the brain processes and selects decision variables in complex environments. There could be implications for the development of more flexible and adaptable machine learning systems, which might be particularly useful in situations where there is a high degree of uncertainty or complexity”, concludes Zach Mainen.