Saturday, August 17, 2024

 CHANGING THE QUANTUM UNIVERSE

Large Hadron Collider pipe brings search for elusive magnetic monopole closer than ever




University of Nottingham





New research using a decommissioned section of the beam pipe from the Large Hadron Collider (LHC) at CERN has brought scientists closer than ever before to testing whether magnetic monopoles exist.

Scientists from the University of Nottingham, in collaboration with an international team, have revealed the most stringent constraints yet on the existence of magnetic monopoles, pushing the boundaries of what is known about these elusive particles. Their research has been published today in Physical Review Letters.

In particle physics, a magnetic monopole is a hypothetical elementary particle that is an isolated magnet with only one magnetic pole (a north pole without a south pole, or vice versa).

Oliver Gould, Dorothy Hodgkin Fellow at the School of Physics and Astronomy at the University of Nottingham and lead theorist for the study, said: “Could there be particles with only a single magnetic pole, either north or south? This intriguing possibility, championed by renowned physicists Pierre Curie, Paul Dirac, and Joseph Polchinski, has remained one of the most captivating mysteries in theoretical physics. Confirming their existence would be transformative for physics, yet to date experimental searches have come up empty-handed.”

The team focused their search on a decommissioned section of the beam pipe from the LHC at CERN, the European Organisation for Nuclear Research. Conducted by physicists from the Monopole and Exotics Detector at the LHC (MoEDAL) experiment, the study examined a beryllium beam pipe section that had been located at the particle collision point for the Compact Muon Solenoid (CMS) experiment. This pipe had endured radiation from billions of ultra-high-energy ion collisions occurring just centimetres away. 

"The proximity of the beam pipe to the collision point of ultra-relativistic heavy ions provides a unique opportunity to probe monopoles with unprecedentedly high magnetic charges," explained Aditya Upreti, a Ph.D. candidate who led the experimental analysis while working in Professor Ostrovskiy's MoEDAL group at the University of Alabama. "Since magnetic charge is conserved, the monopoles cannot decay and are expected to get trapped by the pipe's material, which allows us to reliably search for them with a device directly sensitive to magnetic charge". 

The researchers investigated the production of magnetic monopoles during heavy ion collisions at the LHC, which generated magnetic fields even stronger than those of rapidly spinning neutron stars. Such intense fields could lead to the spontaneous creation of magnetic monopoles through the Schwinger mechanism. 

Oliver added: “Despite being an old piece of pipe destined for disposal, our predictions indicated it might be the most promising place on Earth to find a magnetic monopole.”

The MoEDAL collaboration used a superconducting magnetometer to scan the beam pipe for signatures of trapped magnetic charge. Although they found no evidence of magnetic monopoles, their results exclude the existence of monopoles lighter than 80 GeV/c² (where c is the speed of light) and provide world-leading constraints for magnetic charges ranging from 2 to 45 base units.
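For context, the “base unit” of magnetic charge in such searches is conventionally the Dirac charge, the smallest magnetic charge compatible with Dirac's quantization condition; the release does not define the unit itself, so that identification is an assumption here. In Gaussian units the condition and the resulting base charge read:

e\,g = \frac{n\,\hbar c}{2}, \qquad n \in \mathbb{Z},
\qquad \text{so} \qquad
g_{\mathrm{D}} = \frac{\hbar c}{2e} = \frac{e}{2\alpha} \approx 68.5\,e .

On that convention, the largest charge probed here, 45 base units, corresponds to a magnetic charge roughly 3,000 times the elementary electric charge.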

The research team now plans to extend their search. Oliver concludes: “The beam pipe that we used was from the first run of the Large Hadron Collider, which was carried out before 2013 and at lower energies. Extending the study to a more recent run at higher energies could double our experimental reach. We are also now considering completely different search strategies for magnetic monopoles.”


 

Combining genetic diversity data with demographic information more reliably reveals extinction risks of natural populations




University of Helsinki
Image: Glanville fritillary butterfly. The researchers utilized the well-known ecological model system of the Glanville fritillary butterfly metapopulation in the Åland islands, SW Finland. (Credit: Marjo Saastamoinen)




Genetic diversity, a key pillar of biodiversity, is crucial for conservation. But can snapshot estimates of genetic diversity reliably indicate population extinction risk? New research shows that genome-wide genetic diversity is a strong predictor of extinction risk, but only when confounding demographic factors are accounted for.

As species face increasing environmental pressures, their populations often decline, leading to a loss of genetic diversity. This reduction in genetic variation can have serious consequences, including increased inbreeding and a diminished capacity to adapt to changing conditions. Genome-wide genetic diversity is often used as an indicator of species' vulnerability to extinction. However, recent studies have suggested that genetic diversity does not always predict population viability.

The collaborative research sought to clarify under what circumstances genetic diversity can accurately predict extinction risk. The findings suggest that while genetic diversity is indeed linked to extinction risk, the strength of this relationship varies depending on other factors such as population size and the potential for rescue through dispersal.

The importance of integrating demographic data

The study highlights the dangers of relying solely on genetic data to assess population viability. "Our research demonstrates that inferences about the role of genetic diversity in extinction risk must be informed by demographic and environmental data," explains Professor Marjo Saastamoinen, senior author of the paper.

 "For instance, we observed a strong negative relationship between genetic diversity and extinction risk, but this correlation was largely driven by underlying population size. Without accounting for demographic factors, we would have drawn misleading conclusions."

Dr. Michelle DiLeo, the leading author of the study, further cautions against oversimplification: "Had we focused only on genetic diversity, we might have incorrectly assumed its effects on extinction risk were uniform across different populations and environments. Conversely, ignoring the interactions between genetics and demographics would have led us to underestimate the importance of genetic diversity in explaining extinction risk. Our results suggest that both genetic diversity and demographic factors, such as population size, population trends and immigration, must be considered in conservation strategies.”

“Not all populations with low genetic diversity were doomed to extinction, as they were rescued by dispersal from other populations,” DiLeo continues.

Recommendations for conservation

Given that most species are data-deficient, the researchers emphasize the need for strategic data collection to inform conservation efforts. They recommend focusing on three key pieces of information: estimates of genome-wide or neutral genetic diversity, population size trends, and the potential for rescue via dispersal. Population size trends and population connectivity are already used in some global biodiversity frameworks, but more work is needed to integrate these metrics with genetic data for a comprehensive assessment of species vulnerability. The study also underscores the importance of maintaining connectivity among populations to mitigate the risks associated with low genetic diversity in the face of environmental change.

Utilizing long-term monitoring data of the Glanville fritillary butterfly

Assessing the relationship between genetic diversity and extinction risk in natural populations is not trivial, as it requires high-resolution, population-level genetic and demographic data. The researchers utilized the well-known ecological model system of the Glanville fritillary butterfly metapopulation in the Åland islands, SW Finland. For this system, the survival of thousands of local populations and overwintering larval nests has been systematically monitored for over 30 years, and these data have more recently been complemented with genetic data.

In this study, genetic data from 7,501 individuals were analyzed together with extinction data from 279 meadows and mortality data from 1,742 larval nests in the butterfly metapopulation. This allowed the researchers to assess the effect of genetic diversity on extinction rates across years and under different conditions, such as in small versus large populations, in populations with a longer history of decline, and when populations could not be rescued by dispersal from nearby sites.
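To make the role of confounding concrete, the following is a minimal, purely illustrative sketch, not the authors' analysis: the data are simulated, and the variable names and effect sizes are invented. It shows how the apparent effect of genetic diversity on extinction in a logistic model can change once demographic covariates such as population size and immigration are included.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000                                    # hypothetical local populations

log_pop_size = rng.normal(3.0, 1.0, n)      # demographic covariate
# Diversity tends to track population size; this correlation is what
# creates the confounding described in the article.
heterozygosity = 0.04 * log_pop_size + rng.normal(0.15, 0.03, n)
immigration = rng.binomial(1, 0.3, n)       # potential rescue via dispersal

# Simulated extinction risk: all three factors matter (coefficients invented).
logit = 1.0 - 4.0 * heterozygosity - 0.8 * log_pop_size - 1.0 * immigration
extinct = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

df = pd.DataFrame(dict(extinct=extinct, heterozygosity=heterozygosity,
                       log_pop_size=log_pop_size, immigration=immigration))

# Diversity alone versus diversity alongside demographic covariates.
naive = smf.logit("extinct ~ heterozygosity", df).fit(disp=False)
full = smf.logit("extinct ~ heterozygosity * log_pop_size + immigration",
                 df).fit(disp=False)
print("diversity effect, diversity-only model:",
      round(naive.params["heterozygosity"], 2))
print("diversity effect, adjusted model:",
      round(full.params["heterozygosity"], 2))

Because diversity and population size are correlated in the simulated data, the diversity-only model mixes the two effects, while the adjusted model separates them; that distinction is exactly what the authors argue conservation assessments need to draw.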

 

Researchers discover smarter way to recycle polyurethane



Researchers at Aarhus University have found a better method to recycle polyurethane foam. This is great news for the budding industry that aims to chemically recover the original components of the material – making their products cheaper and better.



Aarhus University

Image: Acidolysis graphical abstract. The graphic shows how the new combination of acidolysis and hydrolysis can recover up to 82 weight percent of the original material from flexible PUR foam – used in mattresses – as two separate fractions of diamines and polyols. The diamine is shown as TDA (toluene diamine). (Credit: Thomas Balle Bech, Aarhus University)


Polyurethane (PUR) is an indispensable plastic material used in mattresses, insulation in refrigerators and buildings, shoes, cars, airplanes, wind turbine blades, cables, and much more. It could be called a wonder material if it weren’t also an environmental and climate burden. Most of the PUR products discarded worldwide end up being incinerated or dumped in landfills.

This is problematic because the main components of the material are primarily extracted from fossil oil. And we’re talking about significant quantities. In 2022, the global market for PUR reached almost 26 million tons, and a forecast for 2030 predicts nearly 31.3 million tons, with about 60% being foam in various forms.

However, there is a small but growing industry that chemically breaks down (depolymerizes) PUR into its original main components, polyol and isocyanate, with the aim of reusing them as raw materials in new PUR products.

Still, there is a long way to go before their output can actually compete with “virgin” materials. It is expensive to separate and purify the desired elements.

Breaking down and separating in one go

This is where a research team from Aarhus University comes in with a smart idea. They base their method on what the companies in question already use, namely breaking down PUR foam with acid (acidolysis).

But the companies do not separate the broken-down PUR into polyol and isocyanate. This results in a mixture that cannot be directly recycled but requires their customers to use new recipes.

The AU researchers are not only able to break down PUR and separate the two main components – they can do it in one go. They heat flexible PUR foam to 220°C in a reactor with a bit of succinic acid (see fact box). Afterwards, they use a filter that catches one material and lets the other pass through.

It is the polyols that pass through, and they do so in a quality comparable to virgin polyol, making it possible to use them in new production of polyurethane. The solid part of the product mixture that is filtered out is transformed, in a simple hydrolysis process, into a so-called diamine, which is used in the production of isocyanates and thus PUR.

In this way, the researchers are able to recover up to 82 weight percent of the original material from flexible PUR foam – used in mattresses – as two separate fractions of diamines and polyols. The researchers have recently published their findings in the scientific journal Green Chemistry.

Enormous potential in the industry

"The method is easy to scale up," points out one of the study’s authors, Steffan Kvist Kristensen, assistant professor at the Interdisciplinary Nanoscience Center (iNANO) at Aarhus University.

He sees enormous potential for recycling the PUR foam waste at the factories that use it as raw material (slabstock) in their production.

"But the prospect of also handling PUR waste from consumers requires further development," he adds.

... but still a long way to go to a circular economy

Manufacturers in the PUR industry each use unique formulae to achieve specific material properties for their products.

Therefore, a number of challenges need to be solved before recycling polyurethane can become genuinely economically viable:

- Waste sorting

- Logistics

- Sorting PUR into types

Depolymerization is thus only a small part of the solution.

Not just soft foam

AU researchers have also tested the combination of acidolysis and hydrolysis on regenerated PUR foam and rigid PUR foam. And it works.

But the paths to a circular economy are even longer here.

Rigid PUR foam is primarily used as an insulation material, but the endeavour to transform it into valuable raw materials is still in its infancy.

Right now, the researchers are testing the new technology on other polyurethane materials to see how these can be recycled. They are also investigating how the dicarboxylic acid, which is part of the process, can be reused. Additionally, they will test the recycled materials to create new products to show that the technology can truly create a circular economy.


FACTS:

Polyurethane is typically created in a chemical reaction between the main components polyol and isocyanate, which are mostly derived from fossil oil. Polyurethane is a polymer (a long chain of molecules), so the chemical breakdown of polyurethane is called depolymerization.

Succinic acid is a natural acid, an antioxidant that plays an important role in the body's energy production. It is used both as a food additive and as a starting material for several types of plastics. In this case, succinic acid can break down polyurethane (PUR) without the use of other chemicals.

 

How some states help residents avoid costly debt during hard times



Study finds generous unemployment insurance benefits may be key



Ohio State University







COLUMBUS, Ohio – A new national study provides the best evidence to date that generous unemployment insurance benefits during the COVID-19 pandemic helped reduce reliance on high-cost credit.

 

Researchers found that lower-income residents of states with more generous benefits were significantly less likely than those living in less-generous states to take out new credit cards, personal finance loans and payday loans or other alternative financial service offerings.

 

The study, published recently in the journal Nature Human Behaviour, was led by Rachel Dwyer, professor of sociology at The Ohio State University, and Stephanie Moulton, professor in Ohio State’s John Glenn College of Public Affairs.

 

The findings provide evidence that programs like unemployment insurance can play a powerful role in keeping low-income Americans from falling further behind financially, said Moulton.

 

“Providing more generous unemployment benefits helps people avoid these really expensive types of debt that are costly not only for individuals, but eventually for society,” Moulton said.

 

A key strength of the study was its large sample of 2.3 million Americans, who were studied from the end of 2019 through the end of 2021.  The researchers used data from the consumer credit bureau Experian to determine if those in the study took out a credit card, personal finance loan or an alternative financial service (AFS) loan during the time of the research.

 

The researchers took advantage of the fact that some states provided more generous unemployment insurance benefits to residents than others, as well as differences in the timing of benefit expansions and contractions within states. This variation allowed them to see whether the level of benefit generosity had an impact on whether Americans avoided costly debt.
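As a purely illustrative sketch of this kind of design, and not the study's actual model, the snippet below builds a synthetic consumer-month panel (all column names, coefficients and data are invented) and fits a two-way fixed-effects regression, so that within-state changes in benefit generosity, rather than fixed differences between states, drive the estimate.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000                                     # hypothetical consumer-months
panel = pd.DataFrame({
    "state": rng.choice(["A", "B", "C", "D"], n),
    "month": rng.integers(0, 24, n),
    "income_q": rng.integers(1, 5, n),       # 1 = lowest income quartile
    "ui_generosity": rng.uniform(0.0, 1.0, n),
})
# Invented outcome: low-income consumers take out fewer high-cost (AFS)
# loans when state benefits are more generous.
p = 0.10 - 0.05 * panel["ui_generosity"] * (panel["income_q"] == 1)
panel["new_afs_loan"] = rng.binomial(1, p.clip(0.01, 0.99))

# State and month fixed effects absorb level differences across states and
# calendar time; standard errors are clustered by state.
fit = smf.ols("new_afs_loan ~ ui_generosity * C(income_q)"
              " + C(state) + C(month)", data=panel).fit(
          cov_type="cluster", cov_kwds={"groups": panel["state"]})
print(fit.params.filter(like="ui_generosity"))

In a design like this, the interaction terms are what allow the effect of generosity to differ by income group, mirroring the study's finding that the benefit mattered most for the lowest-income consumers.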

 

“Unemployment insurance is a key part of the safety net in the United States, and it affects a lot of people,” Dwyer said. “We were able to test how it affected people of various income levels during the COVID recession.”

 

Results showed that more generous unemployment insurance benefits did indeed result in less use of costly credit, mostly among those in the lowest-income households.

 

There was a 9.7% lower probability of the lowest-income consumers taking out a new credit card when unemployment insurance benefits in a state were the most generous compared to when benefits in a state were the least generous, the study found.

 

The difference was even more stark when the researchers examined AFS loans.  These are loans outside the traditional banking institutions, such as payday loans, where interest rates may be significantly higher than traditional forms of borrowing.

 

“A lot of these are online loans, and they’re quite accessible because of that, but they are not regulated in the same way as traditional financial institutions,” Dwyer said. “And they have very high interest rates.”

 

The study found that the lowest-income consumers were 24% more likely to take out AFS loans when state unemployment benefits were the least generous compared to when state unemployment benefits were the most generous.

 

When high and middle-income households lost their jobs because of the pandemic, they were able to rely on savings, or they could use credit cards, to help them get through for a few weeks or months of unemployment, Moulton said.  But the lowest-income consumers often don’t have savings and aren’t eligible for credit cards, or have already maxed out their credit limit.

 

“For low-income consumers, AFS loans may be the only place they can go as their last resort, so they turn to these very expensive ways to make ends meet,” Moulton said.

 

“We found that more generous benefits really seemed to save at least some low-income consumers from having to make that choice.”

 

In addition to their main analysis, the researchers also examined alternative measures of how consumers may have been coping with the COVID recession, such as spending on existing credit cards or applying for loans regardless of whether or not they were approved, Dwyer said.

 

“Our results were quite consistent, with lower-income consumers faring better in states when benefits were generous,” she said.

 

One question that taxpayers often ask is whether government programs like unemployment insurance provide society with a good return on investment.  These findings suggest another benefit that should be considered, Moulton said.

 

“If we are preventing some share of this really high-cost borrowing from happening, we are preventing a cost that might ultimately be borne by society,” she said.

 

“There are domino effects where consumers might end up in bankruptcy if they have to use high-interest loans to make ends meet, which isn’t just bad for their own credit, it can raise the cost of credit for others as well.”

 

Other co-authors of the study were Lawrence Berger, J. Michael Collins and Alec Rhodes from the University of Wisconsin-Madison; Meta Brown from Ohio State; Jason Houle from Dartmouth College; and Davon Norris from the University of Michigan.

 

Support for the study came from the Russell Sage Foundation, the National Institute of Child Health and Human Development (including through a seed grant from the Ohio State Institute for Population Research), the National Science Foundation, the U.S. Department of Health and Human Services, and the U.S. Social Security Administration’s Retirement and Disability Research Consortium.

 

Rural migration links to land use, climate change need more attention, scientists say



Colorado State University





Climate and other environmental changes sometimes drive people to migrate, especially if the land no longer supports a population’s way of life. In turn, mobile populations alter the environment in which they settle.  

Migration dynamics and their interactions with climate and environment in rural areas are poorly understood, according to a new perspective paper led by Colorado State University. The paper, published in Nature Sustainability, proposes that greater focus on these processes is needed to develop sustainability policies for dealing with inevitable climate and land change and related migration. 

Displacement of large numbers of people caused by disaster or conflict captures the public’s attention, and studies of migration typically focus on international or urban migration. However, most migration occurs within national borders in response to slow changes unrelated to disaster or conflict. 

Lead author Jonathan Salerno, an ecologist who studies human behavior and adaptation, said that rural-to-rural migration will become increasingly relevant in the coming years under climate and land change. People will choose smaller, less costly moves over big, expensive moves if they can help it, but shorter moves can compound local and regional environmental changes. 

"Initially, people are going to move and adapt within their current livelihoods in ways that they know how to do,” said Salerno, an associate professor in the Warner College of Natural Resources. "We need to understand these rural-to-rural migration systems if we're going to better address large-scale urban and international moves in a sustainable way." 

The authors suggest that land system science – which studies the land itself as well as how people use it – should be integrated into migration studies because they are interconnected, and understanding these relationships will lead to better policies. 

Climate change is expected to disproportionately impact low- and middle-income countries, and anticipating and managing future land use will be key to adaptation, the paper states. Policy options could include giving rural people better tools to adapt in place to climate and environmental changes, so migration becomes less necessary, or better land-use planning in receiving areas. 

"Recent estimates of international migrants globally are around 280 million, and internal migrants are perhaps two to three times that," Salerno said, adding that few rural-to-rural migration data exist because regional moves aren’t commonly tracked by government entities. “We focus on these dynamics of internal migration, paying specific attention to rural areas in low- and middle-income countries where people are particularly reliant on and impacted by changing climates and environments.” 

Factors that influence migration 

Salerno said that migration decisions are based on a combination of large-scale, structural factors – like politics, economy or environmental change – and personal resources such as social networks, money, land and livestock.  

“Everybody is a potential migrant, it’s just a matter of if and how a decision threshold is crossed,” he added, noting that his team’s research focuses on adaptive migration and not involuntary migration. 

Rural migration is frequently tied to a slow decline in land productivity. Extended drought or a decrease in soil quality can motivate people to move. Salerno and his collaborators are looking at these understudied, slow-onset changes, particularly rainfall. 

“Small changes in a marginal dryland area can make agriculture no longer tenable,” he said. "One bad rainfall year, one bad harvest or a pest outbreak could be the trigger that bumps you over the threshold.” 

Salerno’s team developed a model to simulate migration decisions based on the interplay between broad structural forces and individual agency. The agent-based Migration-Land Systems Model helps to illustrate the integration of migration and land system fields.  

Co-author and model developer Rekha Warrier, a postdoctoral fellow in the Department of Human Dimensions of Natural Resources, said that the model conveys the complexity of migration and shows how different factors can be explored systematically.  

The team conducted simulations with unique combinations of factors that influence migration, including rainfall, social conflict, land ownership, local social networks and non-local social networks. They revealed that social networks are key in migration decisions, but networks act differently when households experience acute drought versus slow rainfall decline. Households with strong local networks and less severe rainfall decline were less likely to decide to move.
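As a toy illustration of such a threshold-style decision rule (this is not the Migration-Land Systems Model itself; the variables, coefficients and threshold below are all invented), a household agent might weigh a slow structural signal such as a rainfall anomaly against its own resources, and move only once the combined pressure crosses a threshold.

from dataclasses import dataclass

@dataclass
class Household:
    local_network: float       # strength of ties at the current location
    nonlocal_network: float    # ties to potential destination areas
    land: float                # productive assets that anchor the household

def migration_pressure(rainfall_anomaly: float, hh: Household) -> float:
    """Combine a slow structural signal with household-level resources.
    All coefficients are illustrative, not estimates from the paper."""
    return (-1.5 * rainfall_anomaly        # drier years add pressure
            - 0.8 * hh.local_network       # strong local ties hold people
            + 0.6 * hh.nonlocal_network    # outside ties lower the barrier
            - 0.4 * hh.land)               # land assets anchor the household

def decides_to_move(rainfall_anomaly: float, hh: Household,
                    threshold: float = 0.5) -> bool:
    return migration_pressure(rainfall_anomaly, hh) > threshold

# Example: a dry year (negative anomaly) for a household with weak local ties
hh = Household(local_network=0.2, nonlocal_network=0.9, land=0.3)
print(decides_to_move(rainfall_anomaly=-0.8, hh=hh))

In a full agent-based model, many such households would be simulated on a shared landscape, so their moves feed back on land use in the areas that receive them, which is where the telecouplings described below come in.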

The model also identified the importance of links between areas, or telecouplings. Livelihood activities and changing rainfall patterns in one area can predict land change in the corresponding area receiving an influx of people. 

“The model allows exploration of landscape teleconnections where environmental or social events at a location cascade via migration to impact land function and ecosystem services in other distant landscapes,” Warrier said. 

Salerno has been collecting data on rural migration among agropastoralist communities in Tanzania since 2009. The team plans to apply the model to the data to analyze migration scenarios in Tanzania in response to climate change.  

Randall Boone, a research scientist with CSU’s Natural Resource Ecology Laboratory, and Atmospheric Science Assistant Professor Patrick Keys co-authored the perspective paper, along with collaborators Andrea Gaughan (University of Louisville), Forrest Stevens (University of Louisville), Lazaro Johana Mangewa (Sokoine University of Agriculture), Felister Michael Mombo (Sokoine University of Agriculture), Alex de Sherbinin (Columbia University), Joel Hartter (CU Boulder) and Lori Hunter (CU Boulder). 

 

Why isn't Colorado's snowpack ending up in the Colorado River? New research suggests the problem might be the lack of spring rainfall




University of Washington
Image: Daniel Hogan at the East River watershed, a tributary basin of the Colorado River, setting up a "snow pillow," which measures the weight of the snowpack over the winter. Data from this equipment helped the team measure how much of the snow was sublimating. (Credit: Mark Stone/University of Washington)




The Colorado River and its tributaries provide water for hydropower, irrigation and drinking water in seven U.S. states and Mexico. Much of this water comes from the snowpack that builds up over the winter and then melts each spring. Every year in early April, water managers use the snowpack to predict how much water will be available for the upcoming year.

But since 2000, these predictions have been incorrect, with the actual streamflow being consistently lower than the predicted streamflow. That’s left water managers and researchers flummoxed — where's the water going?

The problem lies with the lack of rainfall in the spring, according to new research from the University of Washington. The researchers found that recent warmer, drier springs account for almost 70% of the discrepancy. With less rain, the plants in the area rely more on the snowmelt for water, leaving less water to make its way into the nearby streams. Decreased rain also means sunny skies, which encourages plant growth and water evaporation from the soil.

The researchers published these results Aug. 16 in Geophysical Research Letters.

"The period of time when we were wondering, 'Oh no, where's our water going?' started around the same time when we saw this drop in spring precipitation — the beginning of the 'Millennium drought,' which started in 2000 and has been ongoing to the current day," said lead author Daniel Hogan, a UW doctoral student in the civil and environmental engineering department. "We wanted to focus on the cascading consequences of this. Less springtime rain means you likely have fewer clouds. And if it's going to be sunny, the plants are going to say, 'Oh, I'm so happy. The snow just melted and I have a ton of water, so I'm going to grow like gangbusters.' This research really centers the importance of studying the whole snow season, not just when the snowpack is the deepest."

Hogan and senior author Jessica Lundquist, a UW professor of civil and environmental engineering, studied this phenomenon as part of a bigger project to solve the big "whodunit" of where the water is going. At first, the researchers wondered if the snowpack was decreasing because the snow was simply turning into water vapor — a process called sublimation. But the team recently discovered that only 10% of the missing water was lost due to sublimation, meaning something else was the main culprit.

"There are only so many possible culprits, so I started to compare things that might be important," Hogan said. "And we saw that springtime changes are a lot more exaggerated than they are in other seasons. It's this really dramatic shift where you're going from feet of snowpack to wildflowers blooming over a very short amount of time, relatively speaking. And without spring rains, the plants — from wildflowers to trees — are like giant straws, all drawing on the snowpack."

The researchers looked at springtime changes in 26 headwater basins at various elevations in the Upper Colorado River Basin. To paint a picture of what was happening at each basin over time, the team used a variety of datasets, including streamflow and precipitation measurements dating back to 1964. The researchers then modeled how much water the plants at each basin would likely consume.
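As a rough illustration of the kind of attribution involved (this is not the authors' code; the data below are synthetic, and the roughly 70% figure reported above comes from the paper, not from this sketch), one can regress the April streamflow-forecast error on a spring-precipitation anomaly and ask how much of its variance the anomaly explains.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
years = np.arange(1964, 2022)                            # matching the record length
spring_precip_anom = rng.normal(0.0, 1.0, years.size)    # standardized anomaly
# Synthetic forecast error: drier springs push streamflow below the forecast.
forecast_error = 0.7 * spring_precip_anom + rng.normal(0.0, 0.5, years.size)

X = sm.add_constant(spring_precip_anom)
fit = sm.OLS(forecast_error, X).fit()
print(f"share of forecast error explained (R^2): {fit.rsquared:.2f}")

Run across many basins and elevations, a comparison of this kind is what lets researchers estimate how much of the post-2000 discrepancy spring conditions can account for.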

"We make an important assumption in the paper," Hogan said. "We assume that the plants have an unlimited amount of water even with less-than-average precipitation, because they have access to snowmelt."

All the basins the team studied showed reduced streamflow without springtime rain. But basins at lower elevations had an even more pronounced deficit in streamflow. This is because the snow at these basins is likely to melt earlier in the season, giving the plants more time to grow and consume the snowmelt, the researchers said.

Now that spring rain has been identified as the main culprit, the researchers are working to further refine their understanding of what's happening during this season. For example, one project is investigating whether residual patches of snow are acting as mini-reservoirs that can provide a constant stream of water to nearby plants.

Regardless, the longer the Millennium drought continues, the more these results will affect the water calculations that happen each April.

"April is when everybody wants to know how much water is in the snowpack each year," Lundquist said. "But the problem with doing these calculations in April is that obviously spring hasn't occurred yet. Now that we know spring rain is actually more important than rain any other times of the year, we're going to have to get better at predicting what's going to happen rainwise to make these April predictions more accurate."

This research was funded by the National Science Foundation, the Sublimation of Snow Project and the Department of Energy Environmental System Science Division (the Seasonal Cycles Unravel Mysteries of Missing Mountain Water project).

 

What does the EU's recent AI Act mean in practice?



Saarland University
Image: Holger Hermanns, Professor of Computer Science, Saarland University. (Credit: Oliver Dietze)




The European Union's law on artificial intelligence came into force on 1 August. The new AI Act essentially regulates what artificial intelligence can and cannot do in the EU. A team led by computer science professor Holger Hermanns from Saarland University and law professor Anne Lauber-Rönsberg from Dresden University of Technology has examined how the new legislation impacts the practical work of programmers. The results of their analysis will be published in the autumn.

'The AI Act shows that politicians have understood that AI can potentially pose a danger, especially when it impacts sensitive or health-related areas,' said Holger Hermanns, professor of computer science at Saarland University. But how does the AI Act affect the work of the programmers who actually create AI software? According to Hermanns, there is one question that almost all programmers are asking about the new law: 'So what do I actually need to know?'. After all, there aren't many programmers with the time or inclination to read the full 144-page regulation from start to finish.

But an answer to this frequently asked question can be found in the research paper 'AI Act for the Working Programmer', which Holger Hermanns has written in collaboration with his doctoral student Sarah Sterz, postdoctoral researcher Hanwei Zhang, professor of law at TU Dresden Anne Lauber-Rönsberg and her research assistant Philip Meinel. Sarah Sterz summarized the main conclusion of the paper as follows: 'On the whole, software developers and AI users won't really notice much of a difference. The provisions of the AI Act only really become relevant when developing high-risk AI systems.'

The European AI Act aims to protect future users of a system from the possibility that an AI could treat them in a discriminatory, harmful or unjust manner. If an AI does not intrude in sensitive areas, it is not subject to the extensive regulations that apply to high-risk systems. Holger Hermanns offered the following concrete example as an illustration of what this means in practice: 'If AI software is created with the aim of screening job applications and potentially filtering out applicants before a human HR professional is involved, then the developers of that software will be subject to the provisions of the AI Act as soon as the program is marketed or becomes operational. However, an AI that simulates the reactions of opponents in a computer game can still be developed and marketed without the app developers having to worry about the AI Act.'

But high-risk systems, which, in addition to the applicant-tracking software referred to above, also include algorithmic credit rating systems, medical software and programs that manage access to educational institutions such as universities, must conform to a strict set of rules set out in the AI Act now coming into force. 'Firstly, programmers must ensure that the training data is fit for purpose and that the AI trained from it can actually perform its task properly,' explained Holger Hermanns. For example, it is not permissible that a group of applicants is discriminated against because of representational biases in the training data. 'These systems must also keep records (logs) so that it is possible to reconstruct which events occurred at what time, similar to the black box recorders fitted in planes,' said Sarah Sterz. The AI Act also requires software providers to document how the system functions – as in a conventional user manual. The provider must also make all information available to the deployer so that the system can be properly overseen during its use in order to detect and correct errors. (Researchers have recently discussed the search for effective 'human oversight' strategies in another paper.)
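To make the record-keeping requirement tangible, here is a minimal, hypothetical sketch of what such logging might look like for the applicant-screening example above. The AI Act does not prescribe any particular format, fields or technology; everything below is illustrative.

import json
import logging
from datetime import datetime, timezone

# Append each automated decision to a log file with a timestamp, so that it
# is later possible to reconstruct which events occurred at what time.
logging.basicConfig(filename="screening_decisions.log",
                    level=logging.INFO, format="%(message)s")

def log_decision(applicant_id: str, model_version: str,
                 score: float, passed_to_human: bool) -> None:
    """Write one timestamped record per automated screening decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        "model_version": model_version,
        "score": score,
        "passed_to_human": passed_to_human,
    }
    logging.info(json.dumps(record))

# Example: the system scores one application and records the outcome.
log_decision("A-1042", "screening-model-2.3", 0.81, passed_to_human=True)

Records of this kind are what would later allow a provider or deployer to reconstruct and audit the system's behaviour, which is the purpose of the logging obligation described above.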

Holger Hermanns summarized the impact of the AI Act in the following way: 'The AI Act introduces a number of very significant constraints, but most software applications will barely be affected.' 'Things that are already illegal today, such as the use of facial recognition algorithms for interpreting emotions, will remain prohibited. Non-contentious AI systems such as those used in video games or in spam filters will be hardly impacted by the AI Act. And the high-risk systems mentioned above will only be subject to legislative regulation when they enter the market or become operational,' added Sarah Sterz. There will continue to be no restrictions on research and development, in either the public or private spheres.

'I see little risk of Europe being left behind by international developments as a result of the AI Act,' said Hermanns. In fact, Hermanns and his colleagues take an overall favourable view of the AI Act – the first piece of legislation that provides a legal framework for the use of artificial intelligence across an entire continent. 'The Act is an attempt to regulate AI in a reasonable and fair way, and we believe it has been successful.'

Original publication
Preprint; the paper will appear in AISoLA 2024 (Springer LNCS):

Hermanns, H., Lauber-Rönsberg, A., Meinel, P., Sterz, S., Zhang, H. (2024). AI Act for the Working Programmer: https://doi.org/10.48550/arXiv.2408.01449

Questions can be addressed to:

Prof. Dr. Holger Hermanns
Tel.: +49 681 302-5630
Email: hermanns@cs.uni-saarland.de
 

Sarah Sterz
Tel.: +49 681 302-5589
Email: sterz@depend.uni-saarland.de