Saturday, July 29, 2023

 

Q&A: You've heard the annoyingly catchy song—but did you know these incredible facts about baby sharks?

Credit: Shutterstock

"Baby shark doo-doo doo-doo doo-doo, baby shark doo-doo doo-doo doo-doo …" If you're the parent of a young child, you're probably painfully familiar with this infectious song, which now has more than 13 billion views on YouTube.

The Baby Shark song, released in 2016, has got hordes of us singing along, but how much do you really know about baby sharks? Do you know how a baby shark is born, or how it survives to become an apex predator?

I study coastal marine ecology. I believe baby sharks are truly fascinating, and I hope greater awareness of these creatures will help protect them in the wild.

So sink your teeth into this Q&A on the weird and wonderful world of baby sharks.

How are baby sharks conceived and born?

To the casual observer, shark courtship practices may seem barbaric. Males typically attract the attention of a female by biting her. If successful, this is generally followed by even toothier bites to hold on during copulation. Females can carry the scars of these encounters long after the mating season is over.

The act of copulation itself is comparable to that of humans. The male inserts its sexual organ, known as a "clasper", into the female and releases sperm to fertilize the eggs.

However, in extremely rare cases, sharks can reproduce asexually—in other words, embryos develop without being fertilized. This occurred at a Queensland aquarium in 2016, when a zebra shark gave birth to a litter of pups despite not having had the chance to mate in several years.

Sharks give birth in a variety of ways. Some species produce live pups, which swim away to fend for themselves as soon as they're born. Others hatch from eggs outside the mother's body. Remnants of these egg cases have been found washed up on beaches across the world.

How big is a litter of shark pups?

Litter size across sharks varies considerably. For example, the gray nurse shark starts with several embryos but only two are born. This is because the embryos actually eat each other while in utero! This leaves only one survivor in each of the mother's two uteruses.

Intrauterine cannibalism may seem disturbing but is nature's way of ensuring that the strongest pups get the best chance of survival.

In contrast, other species such as the whale shark use a completely different strategy to ensure some of their offspring survive: having hundreds of pups in a single litter.

Where do baby sharks live?

The open ocean is a dangerous place. That's why pregnant female sharks often give birth in shallow coastal waters known as "nurseries". There, newborn pups are better protected from harsh environmental conditions and roaming predators, including other sharks.

Sites for shark nurseries include river mouths, estuaries, mangrove forests and coral reef flats.

An embryonic bamboo shark in its egg. The egg cases often wash up on beaches after the baby has hatched. Credit: Ryan Kempster/UWA

For example, the white shark has established nursery grounds along the east coast of Australia, where babies may remain for several years before moving to deeper waters.

Although most types of sharks are confined to saltwater, the bull shark can live in freshwater habitats. Bull shark pups born near river mouths and estuaries often migrate upstream (sometimes vast distances inland) to escape being preyed upon.

When are baby sharks born?

Sharks, like most animals in the wild, generally give birth during periods that provide favorable conditions for their offspring.

In Australia, for example, scalloped hammerheads and bull sharks tend to breed in the wet summer months when nursery grounds are warmer and there are rich feeding opportunities.

How long do baby sharks take to grow up?

Sharks grow remarkably slowly compared to other fish and remain juveniles for a long time. Although some species mature in a few years, most take considerably longer.

Take the Greenland shark, the world's longest-lived shark. It can live to at least 250 years and, according to recent research, is thought to take more than a century to reach sexual maturity.

Baby sharks are often born in 'nurseries': shallow coastal waters where food is plentiful and ocean predators are scarcer. Credit: Shutterstock

What threats do baby sharks face?

While small, sharks must eat or be eaten—all the while enduring the elements and finding enough food to survive and grow.

Yet there is another challenge: humans. In fact, we are the greatest threat to sharks.

Shark nurseries are heavily concentrated in coastal zones and often overlap with human activities such as fishing, boating and coastal development. And because sharks grow so slowly, they are particularly vulnerable to overfishing: when populations decline, they can take a long time to bounce back.

Much more to learn

Scientists are still working to understand the lives of the 500-plus species of sharks in our oceans. Each time I hear the song Baby Shark, it reminds me there's a lot more work to do.

It's crucial to keep monitoring and studying these baby wonders of the deep, to ensure shark populations survive and we maintain the delicate balance of our underwater ecosystems.

Provided by The Conversation 

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Ocean skin helps regulate ocean carbon uptake, study finds

New research investigates how the carbon cycle functions in the upper layer of the ocean, seen here in a long-exposure photograph of the Caribbean Sea. Credit: Martin Falbisoner/Wikimedia Commons, CC BY-SA 4.0

At less than one millimeter thick, the ocean skin—the ocean's uppermost layer—plays an outsized role in marine processes, orchestrating heat and chemical exchange between the sea and sky via diffusion. The water of the skin is cooler by about 0.2–0.3 K and has higher salinity than the water at even just 2 millimeters depth.
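Because gas solubility rises as water cools, even this fraction-of-a-degree cool skin nudges how much CO2 the surface layer can hold. Below is a minimal sketch of that sensitivity using the commonly quoted Weiss (1974) solubility fit for CO2 in seawater; the 0.25 K offset is an assumption within the 0.2–0.3 K range above, and this is an illustration, not the skin scheme used in the study's model:

```python
import math

def co2_solubility(temp_k, salinity):
    """CO2 solubility K0 in mol L^-1 atm^-1, Weiss (1974) empirical fit."""
    t = temp_k / 100.0
    ln_k0 = (-58.0931 + 90.5069 / t + 22.2940 * math.log(t)
             + salinity * (0.027766 - 0.025888 * t + 0.0050578 * t * t))
    return math.exp(ln_k0)

bulk = co2_solubility(293.15, 35.0)         # water just below the skin
skin = co2_solubility(293.15 - 0.25, 35.0)  # skin assumed ~0.25 K cooler
print(f"cool-skin solubility increase: {100 * (skin / bulk - 1):.2f}%")
```

The cooler skin dissolves slightly more CO2 per unit of partial pressure, which is one reason representing it at all changes the simulated sink.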

Since it was first described in 1967, scientists have grappled with the skin's influence on air–sea gas exchange and the global ocean carbon sink. Understanding its role is critical: Between 2011 and 2020, the ocean absorbed 26% of all human-generated carbon dioxide emissions, and variables that affect ocean carbon sequestration help govern the global carbon cycle and climate.

Hugo Bellenger and colleagues toggled oceanic temperature and salinity gradients to represent the ocean skin over 15 years (2000–2014) in an Earth system model, assessing how these changes altered the amount of carbon absorbed by the ocean. The work, published in the Journal of Geophysical Research: Oceans, represents the first model-based estimate of the ocean skin's influence on ocean–atmosphere carbon dioxide exchange.

Including the representation of the skin in the Earth system model led to a 15% increase in the simulated ocean carbon sink, the researchers found—a figure consistent with past estimates. However, when they allowed the ocean skin to respond to changing ocean carbon concentrations in the model, the effect on the sink was substantially reduced. With the dynamic skin, its contribution to the simulated ocean carbon sink was closer to 5%.

The research shows the importance of including the ocean skin in future climate and carbon modeling efforts, the authors say. And it demonstrates that an interactive parameterization of the ocean skin yields a more accurate model that reduces regional errors in carbon dioxide flux.

More information: Hugo Bellenger et al, Sensitivity of the Global Ocean Carbon Sink to the Ocean Skin in a Climate Model, Journal of Geophysical Research: Oceans (2023). DOI: 10.1029/2022JC019479


Journal information: Journal of Geophysical Research


Provided by Eos

This story is republished courtesy of Eos, hosted by the American Geophysical Union. Read the original story here.

Measuring helium in distant galaxies may give physicists insight into why the universe exists

The Conversation
July 27, 2023



When theoretical physicists like myself say that we’re studying why the universe exists, we sound like philosophers. But new data collected by researchers using Japan’s Subaru telescope has revealed insights into that very question.


Japan’s Subaru telescope, located on Mauna Kea in Hawaii.
Panoramio/Wikimedia Commons, CC BY-ND

The Big Bang kick-started the universe as we know it 13.8 billion years ago. Many theories in particle physics suggest that for all the matter created at the universe's conception, an equal amount of antimatter should have been created alongside it. Antimatter, like matter, has mass and takes up space. However, antimatter particles exhibit the opposite properties of their corresponding matter particles.

When pieces of matter and antimatter collide, they annihilate each other in a powerful explosion, leaving behind only energy. The puzzling thing about theories that predict the creation of an equal balance of matter and antimatter is that if they were true, the two would have totally annihilated each other, leaving the universe empty. So there must have been more matter than antimatter at the birth of the universe, because the universe isn’t empty – it’s full of stuff that’s made of matter like galaxies, stars and planets. A little bit of antimatter exists around us, but it is very rare.
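The scale of that annihilation energy follows directly from E = mc². A back-of-the-envelope example (the one-gram figures are illustrative, not from the article):

```python
C = 299_792_458.0  # speed of light in m/s

def annihilation_energy_joules(mass_kg):
    """Energy released when a mass of matter annihilates with an equal
    mass of antimatter: all 2m of rest mass becomes energy (E = mc^2)."""
    return 2.0 * mass_kg * C**2

# One gram of matter meeting one gram of antimatter:
print(f"{annihilation_energy_joules(0.001):.2e} J")  # ~1.8e14 J
```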

As a physicist working on Subaru data, I’m interested in this so-called matter-antimatter asymmetry problem. In our recent study, my collaborators and I found that the telescope’s new measurement of the amount and type of helium in faraway galaxies may offer a solution to this long-standing mystery.
After the Big Bang

In the first milliseconds after the Big Bang, the universe was hot, dense and full of elementary particles like protons, neutrons and electrons swimming around in a plasma. Also present in this pool of particles were neutrinos, which are very tiny, weakly interacting particles, and antineutrinos, their antimatter counterparts.


The Big Bang created fundamental particles that make up other particles like protons and neutrons. Neutrinos are another type of fundamental particle. 
Alfred Pasieka/Science Photo Library via Getty Images


Physicists believe that just one second after the Big Bang, the nuclei of light elements like hydrogen and helium began to form. This process is known as Big Bang Nucleosynthesis. The nuclei formed were about 75% hydrogen nuclei and 24% helium nuclei, plus small amounts of heavier nuclei.

The physics community’s most widely accepted theory on the formation of these nuclei tells us that neutrinos and antineutrinos played a fundamental role in the creation of, in particular, helium nuclei.

Helium creation in the early universe happened in a two-step process. First, neutrons and protons converted from one to the other in a series of processes involving neutrinos and antineutrinos. As the universe cooled, these processes stopped and the ratio of protons to neutrons was set.

As theoretical physicists, we can create models to test how the ratio of protons to neutrons depends on the relative number of neutrinos and antineutrinos in the early universe. If more neutrinos were present, then our models show more protons and fewer neutrons would exist as a result.

As the universe cooled, hydrogen, helium and other elements formed from these protons and neutrons. Helium is made up of two protons and two neutrons, and hydrogen is just one proton and no neutrons. So the fewer the neutrons available in the early universe, the less helium would be produced.
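The argument above reduces to a standard back-of-the-envelope formula: if essentially every neutron ends up bound into helium-4 (two protons plus two neutrons), the helium mass fraction is Y = 2(n/p) / (1 + n/p). A sketch; the n/p ≈ 1/7 input is the textbook freeze-out value, not a number from this article:

```python
def helium_mass_fraction(n_over_p):
    """Primordial He-4 mass fraction, assuming all neutrons are captured
    into helium-4. Per n neutrons there are p protons; the helium locks
    up 2n nucleons out of the n + p total."""
    return 2.0 * n_over_p / (1.0 + n_over_p)

print(helium_mass_fraction(1 / 7))  # 0.25: the ~24-25% helium described above
# Fewer neutrons at freeze-out means less helium:
print(helium_mass_fraction(1 / 8))  # ~0.22
```

This is the relation that lets a measured helium abundance be inverted to infer the proton-to-neutron ratio, and from it the neutrino-antineutrino balance.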

Because the nuclei formed during Big Bang Nucleosynthesis can still be observed today, scientists can infer how many neutrinos and antineutrinos were present during the early universe. They do this by looking specifically at galaxies that are rich in light elements like hydrogen and helium.


In a series of high-energy particle collisions, elements like helium are formed in the early universe. Here, D stands for deuterium, an isotope of hydrogen with one proton and one neutron, and γ stands for photons, or light particles. In the series of chain reactions shown, protons and neutrons fuse to form deuterium, then these deuterium nuclei fuse to form helium nuclei. 
Anne-Katherine Burns


A clue in helium



Last year, the Subaru Collaboration – a group of Japanese scientists working on the Subaru telescope – released data on 10 galaxies far outside of our own that are almost exclusively made up of hydrogen and helium.

Using a technique that allows researchers to distinguish different elements from one another based on the wavelengths of light observed in the telescope, the Subaru scientists determined exactly how much helium exists in each of these 10 galaxies. Importantly, they found less helium than the previously accepted theory predicted.

With this new result, my collaborators and I worked backward to find the number of neutrinos and antineutrinos necessary to produce the helium abundance found in the data. Think back to your ninth grade math class when you were asked to solve for “X” in an equation. What my team did was essentially the more sophisticated version of that, where our “X” was the number of neutrinos or antineutrinos.

The previously accepted theory predicted that there should be the same number of neutrinos and antineutrinos in the early universe. However, when we tweaked this theory to give us a prediction that matched the new data set, we found that the number of neutrinos was greater than the number of antineutrinos.
What does it all mean?

This analysis of new helium-rich galaxy data has a far-reaching consequence – it can be used to explain the asymmetry between matter and antimatter. The Subaru data points us directly to a source for that imbalance: neutrinos. In this study, my collaborators and I proved that this new measurement of helium is consistent with there being more neutrinos than antineutrinos in the early universe. Through known and likely particle physics processes, the asymmetry in the neutrinos could propagate into an asymmetry in all matter.

The result of our study is a common type of result in the theoretical physics world. Basically, we discovered a viable way in which the matter-antimatter asymmetry could have been produced, but that doesn’t mean it definitely was produced in that way. The fact that the data fits with our theory is a hint that the theory we’ve proposed might be the correct one, but this fact alone doesn’t mean that it is.

So, are these tiny little neutrinos the key to answering the age old question, “Why does anything exist?” According to this new research, they just might be.

Anne-Katherine Burns, Ph.D. Candidate in Theoretical Particle Physics, University of California, Irvine


This article is republished from The Conversation under a Creative Commons license. Read the original article.


New analysis of SuperCDMS data sets tighter detection limits for dark matter

A collision of clusters of galaxies, showing separation of dark matter (shaded blue) from normal matter (shaded pink). Credit: NASA

For nearly a century, dark matter has continued to evade direct detection, pushing scientists to come up with even more creative methods of searching. Increasingly sensitive detection experiments are a major undertaking, however, which means scientists want to be sure they analyze data from these experiments in the most thorough and robust way possible.

With that in mind, the Super Cryogenic Dark Matter Search (SuperCDMS) collaboration has published a reanalysis of previously published data. Their study, published recently in Physical Review D, describes the team's search for dark matter via two processes called Bremsstrahlung radiation and the Migdal effect.

In a first-of-its-kind analysis, the team also worked with geologists to consider how the Earth's atmosphere and inner composition interact with dark matter particles to cause their energy to dissipate. The analysis represents one of the tightest limits on dark matter detection yet and sets the stage for future dark matter searches.

"As we search for dark matter, we need to extend detection sensitivities," said Noah Kurinksy, a staff scientist at SLAC and corresponding author on the study. "Having better ways to model these processes and understand these sorts of measurements is very important for the dark matter community."

Invisible scattering

In an experiment like SuperCDMS, physicists look for signs that dark matter has collided with the atomic nuclei—made of protons and neutrons—inside a material such as silicon or germanium.

Usually, the assumption is that when a dark matter particle whacks into a nucleus, the collision is elastic: Any energy the dark matter particle loses is transferred into the motion of the nucleus, so that both particles recoil. "Your typical billiard balls scattering example," Kurinsky explained.

In recent years, however, researchers have proposed that dark matter may be detected through inelastic collisions, in which the energy from the collision is transferred to something else that's possibly easier to detect, such as photons or electrons. This could lead to more sensitive detection capabilities for dark matter experiments.
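Standard two-body kinematics shows why the elastic channel struggles at low mass: the largest energy a nucleus can pick up is E_R = 2μ²v²/m_N, where μ is the dark-matter–nucleus reduced mass. The sketch below is an illustration, not a calculation from the paper; the germanium nuclear mass and the ~10⁻³c speed are typical assumed values:

```python
def max_recoil_energy_ev(m_dm_gev, m_nucleus_gev, v_over_c=1e-3):
    """Maximum elastic nuclear recoil energy, E_R = 2 * mu^2 * v^2 / m_N,
    in natural units; v_over_c ~ 1e-3 is a typical galactic speed."""
    mu = m_dm_gev * m_nucleus_gev / (m_dm_gev + m_nucleus_gev)  # reduced mass
    return 2.0 * mu**2 * v_over_c**2 / m_nucleus_gev * 1e9  # GeV -> eV

# A 0.5 GeV/c^2 dark matter particle striking a germanium nucleus (~67.7 GeV/c^2):
print(f"{max_recoil_energy_ev(0.5, 67.7):.1f} eV")  # only a few eV
```

With so little recoil energy available, an ejected Migdal electron or a bremsstrahlung photon can be the more visible signature.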

Example of an energy spectrum from the maximum likelihood fit for a Migdal signal model for a WIMP with a mass of 0.5 GeV/c² and a cross section of 3×10⁻³⁷ cm² (black dashed curve). The data (blue histogram) have been logarithmically binned and overlaid with the background models (colored solid curves). The thick black line is the sum of all the models, signal and background. Normalization of the surface background model components (TL, SG and GC) is described in Sec. 5b. The plot on the bottom shows the residual between data and the model, with the 1σ statistical uncertainty indicated by the shaded region. Credit: Physical Review D (2023). DOI: 10.1103/PhysRevD.107.112013

Considering that the SuperCDMS experiment is already one of the most sensitive dark matter detectors of its kind, "we wanted to know what the probability was that we see this particular type of signal in SuperCDMS data," said Daniel Jardin, a co-author of the new study and a postdoctoral scholar at Northwestern University who helped lead the analysis.

The team focused on two potential avenues for inelastic collisions to occur: Bremsstrahlung radiation and the Migdal effect.

Bremsstrahlung is a well-known and previously observed phenomenon caused by the deceleration of a charged particle—the word is German for "braking radiation." In a dark matter detector, this could happen when a dark matter particle collides with a nucleus, which then transfers some of its energy to a photon instead of just recoiling. If detected, that photon would suggest some mysterious, fast-moving particle—perhaps dark matter—slammed into the nucleus and sent the photon flying.

Another possible mode for inelastic collisions is through the Migdal effect. Although it has yet to be experimentally demonstrated, the idea is that when a dark matter particle strikes a nucleus, that nucleus gets knocked out of the center of its electron cloud. After some very short amount of time, the electron cloud readjusts around the nucleus, ejecting electrons that researchers could detect. In recent years, scientists have calculated what such a signal would look like should it happen within dark matter detectors.

Reanalyzing the data taking inelastic processes into account didn't reveal evidence of dark matter, Jardin said, but "each of these analyses extended the experiment's existing limits to lower masses." A previous SuperCDMS data analysis ruled out dark matter particles with masses as low as that of the proton. Taking Bremsstrahlung into account, the experiment can now rule out dark matter particle masses down to about a fifth of the proton mass—and even lower masses when the hypothetical Migdal effect is considered.

When Earth gets in the way

But the researchers didn't stop there. "We wanted to innovate beyond taking these ideas and applying them to our data," said Jardin. "So, we added other things that no one else has been doing."

Jardin and his colleagues not only extended the lowest limits of detection for dark matter interactions, but also considered the effect of the Earth itself. "Researchers in the field are now realizing that if dark matter interacts strongly enough, it could interact with the atmosphere and the Earth on its way to the detector, which is deep underground. In that interaction there's actually an upper limit where you'd be blocked by the Earth itself," Jardin said.

In particular, the more strongly dark matter interacts with other types of matter on its way to the detector, the more energy it loses. At some point, a dark matter particle could lose so much energy that by the time it reaches the detector it can no longer create a detectable signal.

To calculate the energy limit for dark matter particles reaching the SuperCDMS experiment, the researchers modeled how the densities of Earth's atmosphere and inner layers might affect a dark matter particle pummeling through our planet to the detector. The team worked with geologists who determined the exact composition of the soil and rock surrounding the detector in the Soudan Mine in Minnesota.

With this information, the team could set upper limits for dark matter interaction strength depending on where the particle would be coming from, whether that's directly above the detector or the other side of the Earth.

After analyzing the SuperCDMS data with the new models established by the Bremsstrahlung and Migdal effects and the new upper limits, the team was able to expand the range of particle masses the experiment was sensitive to but found no evidence of dark matter interactions. Nonetheless, the analysis represents one of the most sensitive searches for ultralight dark matter yet and helped researchers gain more information from existing data.

"We put a lot into this experiment, so we want to get the most out of it that we can," Jardin said. "We really don't know the mass of dark matter, and we don't know how it interacts with matter. We're just reaching out into the darkness, as best we can."

More information: M. F. Albakry et al, Search for low-mass dark matter via bremsstrahlung radiation and the Migdal effect in SuperCDMS, Physical Review D (2023). DOI: 10.1103/PhysRevD.107.112013


Journal information: Physical Review D 


Provided by SLAC National Accelerator Laboratory


 

Astronomers shed new light on formation of mysterious fast radio bursts

The Chinese Five-hundred-meter Aperture Spherical radio Telescope (FAST). Credit: Bojun Wang, Jinchen Jiang & Qisheng Cui

More than 15 years after the discovery of fast radio bursts (FRBs)—millisecond-long, deep-space cosmic explosions of electromagnetic radiation—astronomers worldwide have been combing the universe to uncover clues about how and why they form.

Nearly all FRBs identified have originated in galaxies outside our Milky Way galaxy. That is, until April 2020, when the first Galactic FRB, named FRB 20200428, was detected. This FRB was produced by a magnetar (SGR J1935+2154), a dense, city-sized neutron star with an incredibly powerful magnetic field.

This groundbreaking discovery led some to believe that FRBs identified at cosmological distances outside our galaxy may also be produced by magnetars. However, the smoking gun for such a scenario, a periodic signal due to the spin of the magnetar, has so far escaped detection. New research into SGR J1935+2154 sheds light on this curious discrepancy.

In the July 28 issue of the journal Science Advances, an international team of scientists, including UNLV astrophysicist Bing Zhang, report on continued monitoring of SGR J1935+2154 following the April 2020 FRB, and the discovery of another cosmological phenomenon known as a radio pulsar phase five months later.

Unraveling a cosmological conundrum

To aid them in their quest for answers, astronomers rely in part on powerful radio telescopes like the massive Five-hundred-meter Aperture Spherical radio Telescope (FAST) in China to track FRBs and other deep-space activity. Using FAST, astronomers observed that FRB 20200428 and the later pulsar phase originated from different regions within the scope of the magnetar, which hints towards different origins.

"FAST detected 795 pulses in 16.5 hours over 13 days from the source," said Weiwei Zhu, lead author of the paper from National Astronomical Observatory of China (NAOC). "These pulses show different observational properties from the bursts observed from the source."

This dichotomy in emission modes from the region of a magnetosphere helps astronomers understand how—and where—FRBs and related phenomena occur within our galaxy and perhaps also those at further cosmological distances.

Radio pulses are cosmic electromagnetic signals similar to FRBs, but typically about 10 orders of magnitude fainter than an FRB. Pulses are typically observed not in magnetars but in other rotating neutron stars known as pulsars. According to Zhang, a corresponding author on the paper and director of the Nevada Center for Astrophysics, most magnetars do not emit radio pulses most of the time, probably due to their extremely strong magnetic fields. But, as was the case with SGR J1935+2154, some of them become temporary radio pulsars after some bursting activities.

Another trait that distinguishes bursts from pulses is their emission "phase", i.e. the window within each rotation period during which radio emission is seen.

"Like pulses in radio pulsars, the magnetar pulses are emitted within a narrow phase window within the period," said Zhang. "This is the well-known 'lighthouse' effect, namely, the emission beam sweeps the line of sight once a period and only during a short interval in time in each period. One can then observe the pulsed  emission."

Zhang said the April 2020 FRB, and several later, less energetic bursts, were emitted at random phases not within the narrow phase window identified in the pulsar phase.

"This strongly suggests that pulses and bursts originate from different locations within the magnetar magnetosphere, suggesting possibly different emission mechanisms between pulses and bursts," he said.

Implications for cosmological FRBs

Such a detailed observation of a Galactic FRB source sheds light on the mysterious FRBs prevailing at cosmological distances.

Many sources of cosmological FRBs—those occurring outside our galaxy—have been observed to repeat. In some instances, FAST has detected thousands of repeated bursts from a few sources. Deep searches for seconds-level periodicity have been carried out using these bursts, but so far no period has been discovered.

According to Zhang, this casts doubt on the popular idea that repeating FRBs are powered by magnetars.

"Our discovery that bursts tend to be generated in random phases provides a natural interpretation to the non-detection of periodicity from repeating FRBs," he said. "For unknown reasons, bursts tend to be emitted in all directions from a magnetar, making it impossible to identify periods from FRB sources."

More information: Weiwei Zhu et al, A radio pulsar phase from SGR J1935+2154 provides clues to the magnetar FRB mechanism, Science Advances (2023). DOI: 10.1126/sciadv.adf6198


Journal information: Science Advances 


Provided by University of Nevada, Las Vegas

Sahara Desert dust found in remote European snow resorts

Dust deposition over Europe on June 2, 2021, with darker red indicating greater dust quantities. Credit: Earth System Science Data (2023). DOI: 10.5194/essd-15-3075-2023

Saharan dust has made headlines in recent years for traveling across the globe, turning our skies picturesque hues of orange while coating our cities in thin layers of wind-blown dust. This has implications for our infrastructure (for example, reducing solar energy production) and global activities (such as impacting visibility for flights), as well as human health (notably causing respiratory issues) and the natural environment (increasing cloud formation, but reducing temperatures as solar radiation is reflected back out to space).

Europe experienced such an extreme dust deposition event in February 2021. This led scientists to launch a citizen science campaign in which people in snow-covered mountain ranges took snow samples, which were analyzed for dust by Dr. Marie Dumont of the National Center for Meteorological Research, France, and colleagues.

Volunteers and scientists collected snow samples of 10 × 10 cm area through the entire dust layer in the Pyrenees (bordering France and Spain) and European Alps (specifically those spanning France and Switzerland) up to an elevation of 2,500 m above sea level. The collectors then sent the melted contents to laboratories in Toulouse and Grenoble, France, where the samples were filtered and dried to obtain the dust.

The results, published in Earth System Science Data, reveal that 152 snow samples were collected from 70 locations over four weeks. Dust mass in the samples ranged from 0.2 to 58.6 g/m², depending upon location, and the deposited mass decreased with increasing distance from the Sahara Desert, as heavier and larger particles were deposited first while smaller, lighter material was carried further by the wind.
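The loading figures are plain mass-per-area arithmetic: each 10 × 10 cm sample covers 0.01 m², so the dried dust mass divided by that area gives g/m². A sketch (the 0.586 g mass is an invented example, chosen to match the top of the reported range):

```python
def dust_loading_g_per_m2(dust_mass_g, side_cm=10.0):
    """Dust deposition per unit area from a square snow sample."""
    area_m2 = (side_cm / 100.0) ** 2  # 10 cm x 10 cm -> 0.01 m^2
    return dust_mass_g / area_m2

print(dust_loading_g_per_m2(0.586))  # 58.6 g/m^2
```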

Image used by the scientists on social media to encourage the citizen science campaign, instructing participants to sample the whole layer of orange dust-laden snow and take a picture of the location on their smartphone, including coordinates. Credit: Earth System Science Data (2023). DOI: 10.5194/essd-15-3075-2023

Dust composition also changed with distance: particles containing iron were preferentially deposited closer to the source, with particles 11% iron by mass in the Pyrenees, falling to 2% in the Swiss Alps. Most dust deposition also occurred on south-facing slopes, in line with the dominant wind direction carrying dust from Africa.

Accumulation of dust in ice- and snow-covered environments can be damaging because it darkens the "white" environment, triggering a self-reinforcing albedo feedback. The darker colors absorb incoming solar radiation and warm the surrounding environment, melting neighboring snow, which exposes more dark surface, and so the loop continues. A good analogy is wearing black clothing in summer, which keeps you warmer, compared with white clothing, which reflects heat and keeps you cooler.
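The energy budget behind the feedback is one line of arithmetic: absorbed power is (1 − albedo) × incoming solar radiation. A sketch; the albedo values and the 500 W/m² irradiance are illustrative assumptions, not figures from the study:

```python
def absorbed_w_per_m2(albedo, solar_w_per_m2=500.0):
    """Shortwave power absorbed by a surface: whatever is not reflected."""
    return (1.0 - albedo) * solar_w_per_m2

clean = absorbed_w_per_m2(0.85)  # fresh, bright snow
dusty = absorbed_w_per_m2(0.60)  # dust-darkened snow
print(f"{clean:.0f} vs {dusty:.0f} W/m^2")  # the darker surface absorbs far more
```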

A dust event in 2018 reduced the annual snow cover duration by up to 30 days. Additionally, media attention surrounding the February 2021 dust event suggested that radionuclides (unstable atoms that release radiation as they decay) from French nuclear weapon tests had been transported in the dust.

The researchers tested this claim by analyzing the samples for cesium and found an increase in this element in the Pyrenees. They also detected an increase in short-lived radionuclides of beryllium and lead, which are often associated with fallout from precipitation, and hence assumed that these had been deposited in recent snowfall events that incorporated atmospheric dust.

However, plutonium abundances were not significantly different to background levels in the Northern Hemisphere resulting from U.S. and USSR nuclear tests in the 1950s and 1960s.

Dumont and colleagues suggest that the nuclear signature of this plutonium is likely different from that of the French nuclear tests conducted in the Sahara in the 1960s, owing to the different fuels and devices used, and hence conclude that the elevated cesium and lead signatures measured in the dust are also global fallout from the U.S. and USSR tests.

They note that French nuclear tests amounted to only 0.017% of the power of the U.S. and USSR nuclear programs. The researchers therefore urge caution over media coverage linking future dust events to nuclear tests.

There is still more work to be done to ascertain how the predicted increase in the frequency of these dust events may impact water resources, snow and ice melt and runoff, avalanche hazards, and ski resort management.

This research shows that getting involved in citizen science projects, in your local community or while traveling, can make a real impact on our understanding of the planet's past, present and future.

More information: Marie Dumont et al, Spatial variability of Saharan dust deposition revealed through a citizen science campaign, Earth System Science Data (2023). DOI: 10.5194/essd-15-3075-2023

© 2023 Science X Network



Billions in conservation spending fail to improve wild fish stocks in Columbia Basin

Juvenile steelhead trout in a natural stream environment. (Photo by John McMillan). 

Four decades of conservation spending totaling more than $9 billion in inflation-adjusted tax dollars has failed to improve stocks of wild salmon and steelhead in the Columbia River Basin, according to Oregon State University research.

The study, led by William Jaeger of the OSU College of Agricultural Sciences, is based on an analysis of 50 years of data suggesting that while hatchery-reared fish numbers have increased, there is no evidence of a net increase in wild, naturally spawning salmon and steelhead.

The findings were published today in PLOS ONE.

Jaeger, a professor of applied economics, notes that steelhead and Chinook, coho and sockeye salmon numbers have been under heavy pressure in the Columbia River Basin for more than a century and a half—initially from overharvesting, then from hydropower beginning in 1938 with the opening of Bonneville Dam, the lowermost dam on the mainstem Columbia.

"Also, farming, logging, mining and irrigation caused landscape changes and habitat degradation, which compounded the problems for the fish," said Jaeger, who collaborated on the paper with Mark Scheuerell, a biologist with the U.S. Geological Survey and the University of Washington.

An estimated 16 million salmon and steelhead once returned from the Pacific to the portions of the basin above Bonneville Dam, but by the 1970s there were fewer than 1 million fish, prompting the federal government to intervene.

The Northwest Power Act of 1980 required fish and wildlife goals to be considered in addition to power generation and other objectives. The act created the Northwest Power and Conservation Council to set up fish and wildlife programs financed by Bonneville Power Administration revenues.

The cost and scale of recovery efforts grew considerably in the 1990s, Jaeger said, following the listing of 12 Columbia River runs of salmon and steelhead as threatened or endangered under the Endangered Species Act.

The public's tab for conservation spending now exceeds $9 billion in inflation-adjusted 2020 U.S. dollars, the researchers said, which does not take into account all monies that have been spent by local governments and non-governmental agencies.

"The actual impact of all of these efforts has always been poorly understood," Jaeger said. "Lots of people have long been concerned about a lack of evidence of salmon and steelhead recovery. One of the issues is that most studies evaluating restoration efforts have examined individual projects for specific species or life stages, which limits the ability to make broad inferences at the basin level."

Thus, Jaeger notes, a key question has persisted, and its answer is critical for sound policy and legal decisions: Is there any evidence of an overall boost in wild fish abundance that can be linked to the totality of the recovery efforts?

Based on a half-century of fish return data at Bonneville Dam, the single entry point to the basin above the dam, the evidence does not support a yes answer.

"We found no evidence in the data that the restoration spending is associated with a net increase in wild fish abundance," Jaeger said.

He said the Northwest Power and Conservation Council set a goal of increasing total salmon and steelhead abundance in the basin to 5 million fish by 2025, but annual adult returns at Bonneville Dam averaged less than 1.5 million in the 2010s.

And while hatchery production has helped with overall numbers of adult fish, Jaeger added, it has also adversely affected wild stocks through a range of mechanisms including genetics, disease, competition for habitat and food, and predation on wild fish by hatchery fish.

"The role of hatcheries in recovery plans is controversial for many reasons, but results do indicate that hatchery production combined with restoration spending is associated with increases in returning adult fish," Jaeger said. "However, we found that adult returns attributable to spending and hatchery releases combined do not exceed what we can attribute to hatcheries alone. We looked at ocean conditions and other environmental variables, hatchery releases, and conservation spending, and we saw no indication of a positive net effect for wild fish."

Even expenditures on "durable" habitat improvements designed to cumulatively benefit naturally spawning wild salmon and steelhead over many years did not lead to evidence of a return on these investments, he added.

More information: William K. Jaeger et al, Return(s) on investment: Restoration spending in the Columbia River Basin and increased abundance of salmon and steelhead, PLOS ONE (2023). DOI: 10.1371/journal.pone.0289246


Journal information: PLoS ONE 


Provided by Oregon State University

Colorado River Basin megadrought caused by massive 86% decline in snowpack runoff

Horseshoe Bend along the Colorado River. Credit: Sean Benesh via Unsplash

The Colorado River Basin provides freshwater to more than 40 million people within the semi-arid southwestern United States, including major cities such as Las Vegas and Los Angeles. However, between 2000 and 2021 the basin experienced a megadrought (a severe drought lasting multiple decades), which researchers have suggested likely would not have occurred if it were not for anthropogenic climate change. In particular, during 2020 and 2021, the river basin recorded the driest 20-month period since 1895 and the lowest river flow since 1906.

Dr. Benjamin Bass and colleagues at the University of California aimed to identify how precipitation and runoff within the basin have changed since the 1880s, a period over which temperatures rose by 1.5°C. Their new research, published in Water Resources Research, identified a 10.3% decrease in runoff within the basin as a direct result of anthropogenic warming and vegetation changes, meaning the water resources available to support the local population have declined by 2.1 km3.

Furthermore, the scientists found that snowpack regions were significantly impacted by aridification, with runoff there declining at twice the rate of neighboring areas. Though snowpack regions constitute only 30% of the Colorado River drainage basin, aridification has driven an 86% decrease in their runoff (a loss of 1.2 km3 of water per °C of warming).
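As a back-of-envelope check, multiplying the reported per-degree snowpack loss by the roughly 1.5°C of warming since the 1880s accounts for most of the basin-wide decline. The linear scaling is my simplifying assumption for illustration, not the authors' calculation.

```python
# Figures quoted in the article; linear scaling of loss with warming is
# an illustrative assumption, not the study's method.
loss_per_degC_km3 = 1.2     # snowpack-region water loss per °C of warming
warming_degC = 1.5          # approximate warming since the 1880s
basin_total_loss_km3 = 2.1  # basin-wide decline in available water

snowpack_loss_km3 = loss_per_degC_km3 * warming_degC
share = snowpack_loss_km3 / basin_total_loss_km3
print(round(snowpack_loss_km3, 1))  # 1.8 (km3) of the 2.1 km3 basin-wide decline
```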

This is likely to worsen due to the albedo feedback: as snow cover declines, less of the reflective "white" surface remains to reflect incoming solar radiation, exposing more dark land that absorbs heat, raising temperatures further and melting more snow, and so the feedback loop continues.

Percentage changes in Colorado River Basin runoff since the 1950s, simulated for global warming alone, the influence of CO2 alone, and the two combined, producing a 10.3% decrease in runoff by 2021. Credit: Bass et al, 2023

Using Global Climate Models and historical data, the researchers performed simulations to assess the trends in runoff under anthropogenic change, as well as scenarios in which the anthropogenic influence is removed. They found that drainage basin runoff has decreased by 1.2 km3 since 1954, but suggest that runoff would actually have increased by 0.9 km3 had global warming and elevated CO2 not occurred.

Atmospheric carbon dioxide is estimated to have been 285 ppm (parts per million) in 1880, compared to 313 ppm in 1950 and 416 ppm in 2021. It is the influence of increasing CO2 and the resulting warming that shows a clear trend of declining runoff, which has accelerated since the 1980s.

Vegetation also plays an important part in the basin's water balance by counterbalancing runoff losses. The researchers describe how elevated CO2 levels cause stomata (pores for gas exchange) on the underside of leaves to close, reducing transpiration (the release of water vapor through stomata). However, elevated temperatures increase evaporation from the leaf surface, and this outweighs the water savings from reduced transpiration, leading to a net loss of water from plants. Overall, vegetation does help offset runoff losses by 15%, though how effective this is depends on the type and coverage of vegetation in the basin.

The megadrought since 2000 has seen a 0.48°C increase in temperature and a 3.1% reduction in precipitation relative to the climate average, while its intensification from late 2021 to early 2022 coincided with temperatures elevated by 0.83°C (8.6% higher than the climate mean), plus decreases in precipitation and runoff of 23.9% and 38.6%, respectively.

This amounts to a total runoff decline of 40.1 km3 over the megadrought, and 3 km3 for the recent 2020 to 2021 event; Arizona, Nevada and Mexico were declared under a water shortage in 2021, and their freshwater allocation was reduced by 0.756 km3. By comparison, this loss is greater than the capacity of Lake Mead in Nevada, the largest reservoir in the U.S.

Given that the Colorado River Basin's mean annual runoff is 21.2 km3, the loss of ~10% of its volume over the last 140 years is significant. As CO2 levels and temperatures are predicted to rise further, the rate of water loss could increase, with consequences for the millions who rely on its waters every day.
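The ~10% figure can be cross-checked from the numbers quoted above; this is a simple arithmetic sanity check, not part of the study.

```python
# Values from the article above.
mean_annual_runoff_km3 = 21.2  # basin mean annual runoff
decline_km3 = 2.1              # decline in available water since the 1880s

fraction_lost = decline_km3 / mean_annual_runoff_km3
print(f"{fraction_lost:.1%}")  # 9.9%, consistent with the ~10% quoted
```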

More information: Benjamin Bass et al, Aridification of Colorado River Basin's Snowpack Regions Has Driven Water Losses Despite Ameliorating Effects of Vegetation, Water Resources Research (2023). DOI: 10.1029/2022WR033454