Friday, March 26, 2021

Researchers at Stanford and Carnegie Mellon reveal cost of key climate solution

STANFORD UNIVERSITY

Research News

Perhaps the best hope for slowing climate change - capturing and storing carbon dioxide emissions underground - has remained elusive due in part to uncertainty about its economic feasibility.

In an effort to provide clarity on this point, researchers at Stanford University and Carnegie Mellon University have estimated the energy demands involved with a critical stage of the process. (Watch video here: https://www.youtube.com/watch?v=-ZPIwwQs9aM)

Their findings, published April 8 in Environmental Science & Technology, suggest that managing and disposing of high salinity brines - a by-product of efficient underground carbon sequestration - will impose significant energy and emissions penalties. Their work quantifies these penalties for different management scenarios and provides a framework for making the approach more energy efficient.

"Designing massive new infrastructure systems for geological carbon storage with an appreciation for how they intersect with other engineering challenges - in this case, the difficulty of managing high salinity brines - will be critical to maximizing the carbon benefits and reducing the system costs," said study senior author Meagan Mauter, an associate professor of Civil and Environmental Engineering at Stanford University.

Getting to a clean, renewable energy future won't happen overnight. One of the bridges on that path will involve dealing with carbon dioxide emissions - the dominant greenhouse gas warming the Earth - as fossil fuel use winds down. That's where carbon sequestration comes in. While most climate scientists agree on the need for such an approach, there has been little clarity about the full lifecycle costs of carbon storage infrastructure.

Salty challenge

An important aspect of that analysis is understanding how we will manage brines, highly concentrated salt water that is extracted from underground reservoirs to increase carbon dioxide storage capacity and minimize earthquake risk. Saline reservoirs are the most likely storage places for captured carbon dioxide because they are large and ubiquitous, but the extracted brines have an average salt concentration that is nearly three times higher than seawater.

These brines will either need to be disposed of via deep well injection or desalinated for beneficial reuse. Pumping them underground - an approach that has been used for oil and gas industry wastewater - has been linked to increased earthquake frequency and has led to significant public backlash. But desalinating the brines is significantly more costly and energy intensive due, in part, to the efficiency limits of thermal desalination technologies. It's an essential, complex step with a potentially huge price tag.
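A rough way to see why high-salinity brines are so energy intensive to treat: the thermodynamic minimum work of desalination scales with the brine's osmotic pressure, which grows with salt concentration. The sketch below is ours, not the study's; it assumes an ideal, NaCl-dominated solution, with 35 g/L as typical seawater salinity and three times that for the brine, per the average concentration described above:

```python
# Rough illustration (ours, not the study's): minimum desalination work
# scales with osmotic pressure, which rises with salinity. Assumes an ideal
# (van 't Hoff) NaCl-dominated solution.

R = 8.314       # gas constant, J/(mol K)
T = 298.15      # temperature, K (~25 C)
M_NACL = 58.44  # molar mass of NaCl, g/mol

def osmotic_pressure_pa(salinity_g_per_l, vant_hoff_factor=2.0):
    """Ideal osmotic pressure pi = i * c * R * T, in pascals."""
    conc_mol_m3 = salinity_g_per_l / M_NACL * 1000.0
    return vant_hoff_factor * conc_mol_m3 * R * T

seawater = osmotic_pressure_pa(35.0)   # typical seawater
brine = osmotic_pressure_pa(105.0)     # ~3x seawater, as for extracted brines

print(f"seawater: {seawater / 1e5:.0f} bar, brine: {brine / 1e5:.0f} bar")
```

In this ideal picture the minimum separation work rises in proportion to salinity, so a brine three times saltier than seawater carries roughly triple the thermodynamic floor, before accounting for the much larger real-world inefficiencies of thermal desalination.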

The big picture

The new study is the first to comprehensively assess the energy penalties and carbon dioxide emissions involved with brine management as a function of various carbon transport, reservoir management and brine treatment scenarios in the U.S. The researchers focused on brine treatment associated with storing carbon from coal-fired power plants because these plants are the country's largest sources of carbon dioxide, are the most cost-effective targets for carbon capture, and are generally representative of where carbon dioxide point sources are located.

Perhaps unsurprisingly, the study found higher energy penalties for brine management scenarios that prioritize treatment for reuse. In fact, brine management will impose the largest post-capture and compression energy penalty on a per-tonne of carbon dioxide basis, up to an order of magnitude greater than carbon transport, according to the study.

"There is no free lunch," said study lead author Timothy Bartholomew, a former civil and environmental engineering graduate student at Carnegie Mellon University who now works for KeyLogic Systems, a contractor for the Department of Energy's National Energy Technology Laboratory. "Even engineered solutions to carbon storage will impose energy penalties and result in some carbon emissions. As a result, we need to design these systems as efficiently as possible to maximize their carbon reduction benefits."

The road forward

Solutions may be at hand.

The energy penalty of brine management can be reduced by prioritizing storage in low salinity reservoirs, minimizing the brine extraction ratio and limiting the extent of brine recovery, according to the researchers. They warn, however, that these approaches bring their own tradeoffs for transportation costs, energy penalties, reservoir storage capacity and safe rates of carbon dioxide injection into underground reservoirs. Evaluating the tradeoffs will be critical to maximizing carbon dioxide emission mitigation, minimizing financial costs and limiting environmental externalities.

"There are water-related implications for most deep decarbonization pathways," said Mauter, who is also a fellow at the Stanford Woods Institute for the Environment. "The key is understanding these constraints in sufficient detail to design around them or develop engineering solutions that mitigate their impact."

###


Funding for this research was provided by the National Science Foundation, the ARCS Foundation and the U.S. Department of Energy.

Biocrude passes the 2,000-hour catalyst stability test

Sewage and food waste biocrude conversion process reaches major milestone

DOE/PACIFIC NORTHWEST NATIONAL LABORATORY

Research News

CAPTION

Wet wastes from sewage treatment and discarded food can provide the raw materials for an innovative process called hydrothermal liquefaction, which converts and concentrates carbon-containing molecules into a liquid biocrude.

CREDIT

(Illustration by Michael Perkins | Pacific Northwest National Laboratory)

RICHLAND, WASH.--A large-scale demonstration converting biocrude to renewable diesel fuel has passed a significant test, operating for more than 2,000 hours continuously without losing effectiveness. Scientists and engineers led by the U.S. Department of Energy's Pacific Northwest National Laboratory conducted the research to show that the process is robust enough to handle many kinds of raw material without failing.

"The biocrude oil came from many different sources, including wastewater sludge from Detroit, and food waste collected from a prison and an army base," said John Holladay, a PNNL scientist and co-director of the joint Bioproducts Institute, a collaboration between PNNL and Washington State University. "The research showed that essentially any biocrude, regardless of wet-waste sources, could be used in the process and the catalyst remained robust during the entire run. While this is just a first step in demonstrating robustness, it is an important step."

The milestone was first described at a virtual conference organized by NextGenRoadFuels, a European consortium funded by the EU Framework Programme for Research and Innovation. The work addresses the need to convert biocrude, a mixture of carbon-based polymers, into biofuels. In the near term, most expect that these biofuels will be further refined and then mixed with the petroleum-based fuels used to power vehicles.

"For the industry to consider investing in biofuel, we need these kinds of demonstrations that show durability and flexibility of the process," said Michael Thorson, a PNNL engineer and project manager.


CAPTION

This reactor turns wet waste into biocrude, which in turn feeds a refining step that turns biocrude into fuels for transportation. 

CREDIT

(Andrea Starr | Pacific Northwest National Laboratory)

Biocrude to biofuel, the crucial conversion

Just as crude oil from petroleum sources must be refined to be used in vehicles, biocrude needs to be refined into biofuel. This step provides the crucial "last mile" in a multi-step process that starts with renewables such as crop residues, food residues, forestry byproducts, algae, or sewage sludge. For the most recent demonstration, the biocrude came from a variety of sources including converted food waste salvaged from Joint Base Lewis-McChord, located near Tacoma, Wash., and Coyote Ridge Corrections Center, located in Connell, Wash. The initial step in the process, called hydrothermal liquefaction, is being actively pursued in a number of demonstration projects by teams of PNNL scientists and engineers.

The "last mile" demonstration project took place at the Bioproducts, Sciences, and Engineering Laboratory on the Richland, Wash. campus of Washington State University Tri-Cities. For 83 days, reactor technician Miki Santosa and supervisor Senthil Subramaniam fed a constant flow of biocrude into carefully honed and highly controlled reactor conditions. The hydrotreating process introduces hydrogen into a catalytic process that removes sulfur and nitrogen contaminants found in biocrude, producing a combustible end-product of long-chain alkanes, the desirable fuel used in vehicle engines. Chemist Marie Swita analyzed the biofuel product to ensure it met standards that would make it vehicle-ready.

Diverting carbon to new uses

"Processing food and sewage waste streams to extract useful fuel serves several purposes," said Thorson. Food waste contains carbon. When sent to a landfill, that food waste gets broken down by bacteria that emit methane gas, a potent greenhouse gas and contributor to climate change. Diverting that carbon to another use could reduce the use of petroleum-based fuels and have the added benefit of reducing methane emissions.

The purpose of this project was to show that the commercially available catalyst could stand up to the thousands of hours of continuous processing that would be necessary to make biofuels a realistic contributor to reducing the world's carbon footprint. But Thorson pointed out that it also showed that the biofuel produced was of high quality, regardless of the source of biocrude - an important factor for the industry, which would likely be processing biocrude from a variety of regional sources.

Indeed, knowing that transporting biocrude to a treatment facility could be costly, modelers are looking at areas where rural and urban waste could be gathered from various sources in local hubs. For example, they are assessing the resources available within a 50-mile radius of Detroit, Mich. There, the sources of potential biocrude feedstock could include food waste, sewage sludge and cooking oil waste. In areas where food waste could be collected and diverted from landfills, much as recycling is currently collected, a processing plant could be up to 10 times larger than in rural areas and provide significant progress toward cost and emission-reduction targets for biofuels.

Commercial biofuels on the horizon

Milestones such as hours of continuous operation are being closely watched by investor groups in the U.S. and in Europe, which has set aggressive goals, including becoming the first climate-neutral continent by 2050 and achieving a 55% reduction in greenhouse gas emissions by 2030. "A number of demonstration projects across Europe aim to commercialize this process in the next few years," Holladay said.

The next steps for the research team include gathering more sources of biocrude from various waste streams and analyzing the biofuel output for quality. In a new collaboration, PNNL will partner with a commercial waste management company to evaluate waste from many sources. Ultimately, the project will result in a database of findings from various manures and sludges, which could help decide how facilities can scale up economically.

"Since at least three-quarters of the input and output of this process consists of water, the ultimate success of any industrial scale-up will need to include a plan for dealing with wastewater," said Thorson. This, too, is an active area of research, with many viable treatment options available at wastewater facilities in many locations.

###

DOE's Bioenergy Technologies Office has been instrumental in supporting this project, as well as the full range of technologies needed to make biofuels feasible.

Pacific Northwest National Laboratory draws on its distinguishing strengths in chemistry, Earth sciences, biology and data science to advance scientific knowledge and address challenges in sustainable energy and national security. Founded in 1965, PNNL is operated by Battelle for the U.S. Department of Energy's Office of Science, which is the single largest supporter of basic research in the physical sciences in the United States. DOE's Office of Science is working to address some of the most pressing challenges of our time. For more information, visit PNNL's News Center. Follow us on Twitter, Facebook, LinkedIn and Instagram.

Toxin in potatoes evolved from a bitter-tasting compound in tomatoes

KOBE UNIVERSITY

Research News

CAPTION

The chemical structures of SGAs found in tomatoes and potatoes.

CREDIT

Ryota Akiyama/Masaharu Mizutani

A multi-institutional collaboration has revealed that α-solanine, a toxic compound found in potato plants, evolved from the bitter-tasting α-tomatine, which is found in tomato plants. The research group included Associate Professor MIZUTANI Masaharu and Researcher AKIYAMA Ryota et al. of Kobe University's Graduate School of Agricultural Science, Assistant Professor WATANABE Bunta of Kyoto University's Institute for Chemical Research, Senior Research Scientist UMEMOTO Naoyuki of the RIKEN Center for Sustainable Resource Science, and Professor MURANAKA Toshiya of Osaka University's Graduate School of Engineering.

It is hoped that these research results can be used in potato breeding as a basis for suppressing the synthesis of poisonous compounds.

These research results were published in the international academic journal 'Nature Communications' on February 26.

Main Points

  • α-solanine is a toxic steroidal glycoalkaloid (SGA) (*1) found in potatoes.
  • α-tomatine is an astringent-tasting SGA that accumulates in unripe tomato fruits.
  • Based on their chemical structures, SGAs can be divided into two general classes: solanidanes (*2, e.g. α-solanine) and spirosolanes (*3, e.g. α-tomatine).
  • The research group revealed that the toxic α-solanine in potatoes is biosynthesized from a spirosolane.
  • They discovered that the dioxygenase DPS (*4) is the key enzyme for this catalytic conversion.
  • It was also revealed that the α-solanine biosynthesis pathway in potatoes diverged from the spirosolane biosynthesis pathway through the evolution of DPS.


CAPTION

The research group discovered that the enzyme DPS is a catalyst for this reaction (indicated by the red arrow).

CREDIT

Ryota Akiyama/Masaharu Mizutani

Research Background

α-solanine is a type of toxic steroidal glycoalkaloid (SGA) that accumulates in the green skin of potato tubers (*5) and in tuber sprouts. SGAs are found not only in potatoes but also in other plants of the Solanaceae family, including crops like tomatoes and eggplants. These substances are poisonous to many living things and serve as one of the plants' natural defenses. Low concentrations of SGA in potatoes cause a bitter taste, and larger amounts can cause food poisoning. For this reason, biosynthesis research has been conducted with the aim of controlling the accumulation of SGA in potatoes.

Based on their skeletal chemical structures, SGAs can be divided into two general classes, solanidanes and spirosolanes (Figure 1). The potato toxin α-solanine is an example of a solanidane, whereas α-tomatine, which accumulates inside unripe tomatoes, is a spirosolane. It is known that both classes of SGA are biosynthesized from cholesterol. Several genes encoding the catalytic enzymes of SGA biosynthesis have been discovered, and potato and tomato plants share these enzymes in the common pathway of SGA biosynthesis. However, the steps and enzymes at the metabolic branch point between solanidane-skeleton and spirosolane-skeleton formation remained an unsolved mystery.

This research group showed that the potato toxin α-solanine is biosynthesized from spirosolane. In a world first, they discovered that the dioxygenase DPS is the key to this conversion.


CAPTION

Metabolic reactions of spirosolane in tomatoes and potatoes.

CREDIT

Ryota Akiyama/Masaharu Mizutani

Research Findings

Potatoes contain the toxic solanidanes α-solanine and α-chaconine. The research group investigated the α-solanine biosynthesis pathway in potato plants. Using genome editing, they disrupted a biosynthetic enzyme gene in potato so that the plant was unable to produce α-solanine. Feeding α-tomatine (a spirosolane found in tomatoes) to these gene-disrupted plants resulted in a metabolic conversion to the corresponding solanidane compound. In addition, it was found that this metabolic alteration could be suppressed with a 2-oxoglutarate dependent dioxygenase (*6) inhibitor, revealing that a dioxygenase is responsible for the oxidation reaction that synthesizes solanidanes from spirosolanes.

The researchers singled out a 2-oxoglutarate dependent dioxygenase (DPS) gene that was expressed in potato during α-solanine synthesis. To investigate further, they generated modified plants in which DPS gene expression was suppressed via RNA interference (*7). Solanidane concentrations in these modified potato plants were far lower than in the unmodified group, and spirosolanes accumulated in the plants in place of solanidanes. Next, the researchers measured the enzymatic activity of DPS by expressing the recombinant protein in E. coli. The results revealed the unique catalytic role of DPS in converting spirosolanes into solanidanes (Figure 2), proving that DPS is the key enzyme responsible for this conversion.

This research revealed that the potato's ability to produce α-solanine came about due to the evolution of DPS, which is responsible for metabolically converting spirosolanes (e.g. α-tomatine) into solanidanes. It is known that tomatoes also have an enzyme for metabolizing spirosolanes. The bitter-tasting α-tomatine is found in unripe tomatoes but is metabolized into the tasteless, non-toxic esculeoside A as the fruits ripen. The catalyst for this reaction is 23DOX (*8), which is also a dioxygenase.

From the relationship between their chromosomal positions and phylogenetic analysis, it was revealed that DPS, the enzyme responsible for α-solanine biosynthesis, evolved from the same precursor gene as 23DOX, the enzyme that metabolizes α-tomatine into non-toxic esculeoside A. Thus, the evolution of dioxygenase genes that metabolize spirosolanes is believed to be one of the main drivers of the structural and functional variation in SGAs.

Further Developments

Potatoes have been termed a potentially dangerous food because large concentrations of toxic α-solanine can cause food poisoning. It is hoped that these research results can provide a basis for future potato varieties in which the biosynthesis of toxic compounds is suppressed by targeting the DPS gene.

As shown in this research, the evolutionary origins of the structural diversity of SGAs provide clues towards discovering unknown SGA synthesis enzymes involved in biological functions in various plants. Illuminating these functions could pave the way for the molecular breeding of plant varieties that are able to adapt to different stressful environments.

###

Glossary

1. Steroidal Glycoalkaloid (SGA): An SGA consists of alkaloids containing nitrogen atoms arranged in a skeletal steroid structure. It is a toxic alkaloid glycoside with an oligosaccharide attached to the hydroxyl group at position C3 of the steroid. SGAs are secondary metabolites that accumulate in plants of the Solanaceae family. The SGAs α-solanine and α-chaconine are found in potatoes and are known to cause food poisoning.

2. Solanidanes: The skeletal steroid structure of some SGA chemical compounds, such as α-solanine in potatoes.

3. Spirosolanes: The skeletal steroid structure of some SGA chemical compounds, such as α-tomatine in tomatoes.

4. DPS: DPS stands for Dioxygenase for Potato Solanidane synthesis. It is the key enzyme for the biosynthesis of α-solanine in potatoes. DPS is the catalytic enzyme for the metabolic alteration of spirosolane to solanidane by C-16 hydroxylation.

5. Tuber: An enlarged underground structure in some plant species that stores nutrients. Plants with edible tubers include potato, konnyaku and Jerusalem artichoke.

6. 2-oxoglutarate dependent dioxygenase: 2-oxoglutarate dependent dioxygenases are a superfamily of water-soluble enzymes that play various roles in many biological processes. They require 2-oxoglutarate (α-ketoglutarate) and O2 to oxidize their substrates.

7. RNA interference: RNA interference is a phenomenon in which an antisense RNA strand, complementary to a specific gene's mRNA, pairs with a sense RNA strand to form double-stranded RNA, suppressing the expression of the gene. This knowledge forms the basis of a method to control the expression of target genes in experiments.

8. 23DOX: This enzyme hydroxylates spirosolanes at C-23. It is part of the 2-oxoglutarate dependent dioxygenase superfamily of enzymes. In tomato plants, it hydroxylates the bitter-tasting SGA α-tomatine at C-23.

Acknowledgements

This research was partially funded by the following:

  • A Grant-in-Aid for JSPS Fellows for the research project entitled 'The chemical evolution of steroidal glycoalkaloids in Solanaceae' (grant number JP19J10750, recipient: Akiyama Ryota).
  • The Japanese Ministry of Agriculture, Forestry and Fisheries' 'Development of new varieties and breeding materials in crops by genome editing' program for advancing research.
  • The Cross-ministerial Strategic Innovation Promotion (SIP) Program.

Journal Information:

Title: "The biosynthetic pathway of potato solanidanes diverged from that of spirosolanes due to evolution of a dioxygenase" DOI: 10.1038/s41467-021-21546-0

Authors: Ryota Akiyama1, Bunta Watanabe2, Masaru Nakayasu1†, Hyoung Jae Lee1, Junpei Kato1, Naoyuki Umemoto3, Toshiya Muranaka4, Kazuki Saito3,5, Yukihiro Sugimoto1, Masaharu Mizutani1

1. Graduate School of Agricultural Science, Kobe University. 2. Institute for Chemical Research, Kyoto University. 3. RIKEN Center for Sustainable Resource Science. 4. Department of Biotechnology, Graduate School of Engineering, Osaka University. 5. Graduate School of Pharmaceutical Sciences, Chiba University. †Currently a graduate student at the Research Institute for Sustainable Humanosphere, Kyoto University.

Journal: Nature Communications


 

Ancient megafaunal mutualisms and extinctions as factors in plant domestication


Of the vast diversity of plants on Earth, only a few evolved to become prominent agricultural crops. Scholars now suggest they originally evolved to secure mutualistic relationships with now-extinct megafauna

MAX PLANCK INSTITUTE FOR THE SCIENCE OF HUMAN HISTORY

Research News

CAPTION

Muskox (Ovibos moschatus) - one of numerous herbivores that roam in the enclosure of the Pleistocene Park, nature reserve in northern Sakha Republic, Russia. This ongoing grazing experiment started in...

CREDIT

Frank Kienast

By clearing forests, burning grasslands, plowing fields and harvesting crops, humans apply strong selective pressures on the plants that survive on the landscapes we use. Plants that evolved traits for long-distance seed-dispersal, including rapid annual growth, a lack of toxins and large seed generations, were more likely to survive on these dynamic anthropogenic landscapes. In the current article, researchers argue that these traits may have evolved as adaptations for megafaunal mutualisms, later allowing those plants to prosper among increasingly sedentary human populations.

The new study hypothesizes that the presence of specific anthropophilic traits explains why a select few plant families came to dominate the crop and weed assemblages around the globe, such as quinoa, some grasses, and knotweeds. These traits, the authors argue, also explain why so many genera appear to have been domesticated repeatedly in different parts of the world at different times. The 'weediness' and adaptability of those plants were the result of exaptation, a change in the function of an evolutionary trait. In this way, rather than through an active and engaged human process, certain plants gradually increased in prominence around villages, in cultivated fields, or on grazing land.

Grasses and field crops weren't the only plants to use prior adaptations to prosper in human landscapes; a select handful of trees also had advantageous traits, such as large fleshy fruits, resulting from past relationships with large browsers. The rapid extinction of megafauna at the end of the Pleistocene left many of these large-fruiting tree species with small, isolated populations, setting the stage for more dramatic changes during later hybridization. When humans began moving these trees, they were likely to hybridize with distant relatives, resulting, in some cases, in larger fruits and more robust plants. In this way, the domestication process for many long-generation perennials appears to have been more rapid and tied to population changes caused by megafaunal extinctions.

"The key to better understanding plant domestication may lay further in the past than archaeologists have previously thought; we need to think about the domestication process as another step in the evolution of life on Earth, as opposed to an isolated phenomenon," states Dr. Robert Spengler. He is the director of the archaeobotanical laboratories at the Max Planck Institute for the Science of Human History in Jena, Germany, and the lead investigator on this paper.

This publication is a result of archaeologists, geneticists, botanists, and paleontologists contributing insights from their unique disciplines to reframe the way scholars think about domestication. The goal of the collaboration is to get researchers to consider the deeper ecological legacies of the plants and the pre-cultivation adaptations that they study.

Prof. Nicole Boivin, director of the Department of Archaeology at the Max Planck Institute in Jena, studies the ecological impacts of humans deep in the past. "When we think about the ecology of the origins of agriculture, we need to recognize the dramatic changes in plant and animal dynamics that have unfolded across the Holocene, especially those directly resulting from human action," she adds.


CAPTION

The wood bison (Bison bison spp. athabascae) of the Ust'-Buotama Bison Park, central Sakha Republic, Russia. In 2006, 35 wood bison were brought from Canada. These megaherbivores adapted to the local environment and increased their population. The 100-ha enclosure where the animals live serves as a study site for ecologists and zoologists and provides an opportunity to trace changes in vegetation associated with herbivore pressure.

CREDIT

Frank Kienast

Ultimately, the scholars suggest that, rather than in archaeological excavations, laboratories, or modern agricultural fields, the next big discoveries in plant domestication research may come from restored megafaunal landscapes. Ongoing research by Dr. Natalie Mueller, one of the authors, on restored North American prairies is investigating potential links between bison and the North American Lost Crops. Similar studies could be conducted on restored megafaunal landscapes in Europe, such as Białowieski National Park in Poland, or the Ust'-Buotama Bison Park and Pleistocene Park in Sakha Republic, Russia.

Dr. Ashastina, another author on the paper and a paleontologist studying Pleistocene vegetation communities in North Asia, states, "These restored nature preserves provide a novel glimpse deep into the nature of plant and animal interactions and allow ecologists not only to directly trace vegetation changes occurring under herbivore pressure in various ecosystems, but to disentangle the deeper legacies of these mutualisms."

CAPTION

While it may just look like a regular field, the plants growing on either side of this bison trail are primarily little barley, one of the North American Lost Crops. This photo was taken during fieldwork by Natalie Mueller of Washington University in St. Louis in an attempt to determine the role that bison may have played in shaping the ecology of the progenitors of certain ancient crops.

CREDIT

Natalie Mueller


 

New documentation: Old-growth forest carbon sinks overestimated

UNIVERSITY OF COPENHAGEN - FACULTY OF SCIENCE

Research News

The claim that old-growth forests play a significant role in climate mitigation, based upon the argument that even the oldest forests keep sucking CO2 out of the atmosphere, is being refuted by researchers at the University of Copenhagen. The researchers document that this argument is based upon incorrectly analysed data and that the climate mitigation effect of old and unmanaged forests has been greatly overestimated. Nevertheless, they reassert the importance of old-growth forest for biodiversity.

Old and unmanaged forest has become the subject of much debate in recent years, both in Denmark and internationally. In Denmark, setting aside forests as unmanaged has often been argued to play a significant role for climate mitigation. The argument doesn't stand up according to researchers at the University of Copenhagen, whose documentation has just been published as a commentary in Nature.

The entire climate mitigation argument is based upon a widely cited 2008 research article which reports that old-growth forests continue to suck up and sequester large amounts of CO2 from the atmosphere, regardless of whether their trees are 200 years old or older. UCPH researchers scrutinised the article by reanalysing the data upon which it was based. They conclude that the article arrives at a highly overestimated climate effect for which the authors' data presents no evidence.

"The climate mitigation effect of unmanaged forests with trees more than 200 years old is estimated to be at least one-third too high - and is based solely upon their own data, which, incidentally, is subject to great uncertainty. Thus, the basis for the article's conclusions is very problematic," explains Professor Per Gundersen, of the University of Copenhagen's Department of Geosciences and Natural Resource Management.

An unlikely amount of nitrogen

The original research article concluded that old-growth forests more than 200 years old bind an average of 2.4 tonnes of carbon per hectare, per year, and that 1.3 tonnes of this amount is bound in forest soil. According to the UCPH researchers, this claim is particularly unrealistic. Carbon storage in soil requires the addition of a certain amount of externally sourced nitrogen.

"The large amounts of nitrogen needed for their numbers to stand up don't exist in the areas of forest which they studied. The rate is equivalent to the soil's carbon content doubling in 100 years, which is also unlikely, as it has taken 10,000 years to build up the soil's current carbon content. It simply isn't possible to bind such large quantities of carbon in soil," says Gundersen.
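The scale mismatch Gundersen describes can be checked with back-of-envelope arithmetic. In the sketch below, the 1.3 tonnes C/ha/yr soil uptake and the 100-year doubling come from the text; the C:N ratio of soil organic matter (~15) and the nitrogen deposition rate (~8 kg N/ha/yr) are our illustrative assumptions, not values from either paper:

```python
# Back-of-envelope check (ours, not from the commentary). Soil uptake and the
# 100-year doubling are from the articles discussed; the C:N ratio and
# nitrogen deposition rate are assumed, illustrative values.

soil_uptake_t_c = 1.3    # t C/ha/yr bound in soil, per the 2008 article
soil_cn_ratio = 15.0     # assumed C:N of stabilised soil organic matter
n_deposition_kg = 8.0    # assumed atmospheric N input, kg N/ha/yr

# Nitrogen needed each year to lock that much carbon into soil organic matter
n_required_kg = soil_uptake_t_c * 1000.0 / soil_cn_ratio
print(f"N required: {n_required_kg:.0f} kg/ha/yr vs ~{n_deposition_kg:.0f} supplied")

# A rate that doubles the soil carbon stock in 100 years implies a current
# stock of ~130 t C/ha - implausibly fast doubling, given that the present
# stock took roughly 10,000 years to accumulate.
implied_stock_t_c = soil_uptake_t_c * 100.0
print(f"implied current soil C stock: {implied_stock_t_c:.0f} t C/ha")
```

At the assumed C:N ratio, sequestering 1.3 tonnes of carbon per hectare per year in soil would demand roughly ten times more nitrogen than typical atmospheric deposition supplies, which is the mismatch Gundersen points to.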

Trees don't grow into the sky

Unlike the authors of the 2008 article, and in line with the classical view in this area, the UCPH researchers believe that old, unmanaged forests reach a saturation point after a number of years, at which CO2 uptake ceases. After a long period (50-100 years in Denmark) of high CO2 sequestration, uptake decreases and eventually comes to a stop. This happens when a forest reaches an equilibrium, whereby, through the respiration of trees and the degradation of organic matter in the soil, it emits as much CO2 into the atmosphere as it absorbs through photosynthesis.
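The saturation argument can be captured in a minimal carbon-balance model: uptake by photosynthesis is treated as roughly fixed, while release through respiration and decay grows with the standing stock, so the stock levels off where the two balance. This is our illustrative sketch, with arbitrary parameter values chosen only to show the shape of the curve:

```python
# Minimal saturation model (illustrative only; parameters are arbitrary).
# Uptake is constant; release is a fixed fraction of the carbon stock, so
# the stock approaches the equilibrium where release equals uptake.

uptake = 5.0   # t C/ha/yr absorbed through photosynthesis (assumed)
k = 0.02       # fraction of the stock released per year (assumed)

stock = 0.0
for year in range(500):
    stock += uptake - k * stock   # annual balance: absorption minus release

equilibrium = uptake / k          # stock at which emission equals absorption
print(f"stock after 500 years: {stock:.1f} t C/ha (equilibrium: {equilibrium:.1f})")
```

Nothing in this toy model lets the stock grow without bound: once the forest's emissions match its uptake, net CO2 sequestration stops, which is the equilibrium the UCPH researchers describe.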

"As we know, trees don't just grow into the sky. Trees age. And at some point, they die. When that happens, decay begins, sending carbon back into the atmosphere as CO2. Other, smaller trees then take over, leaving a fairly stable CO2 stock in the forest. As trees age, the risk of a forest being impacted by storms, fire, drought, disease and other causes of death increases. This releases a significant portion of the stored carbon for a period of time, until newer trees replace the old ones," explains Gundersen.

He adds that the 2008 article does not document any mechanism which allows the forest to keep sequestering CO2.

The UCPH researchers' view is supported by observations from Suserup Forest, near Sorø, Denmark, a forest that has remained largely untouched for the past century. The oldest trees within it are 300 years old. Inventories taken in 1992, 2002 and 2012 all demonstrated that there was no significant CO2 uptake by the forest.

Old-growth forest remains vital for biodiversity

"We feel a bit like the child in the Emperor's New Clothes, because what we say is based on classic scientific knowledge, thermodynamics and common sense. Nevertheless, many have embraced an alternative view--and brought the debate to a dead end. I hope that our contribution provides an exit," says Per Gundersen.

He would like to make it clear that this should in no way be perceived as a position against protection of old-growth forest or setting aside unmanaged forest areas.

"Old-growth forest plays a key role in biodiversity. However, from a long-term climate mitigation perspective, it isn't an effective tool. Grasping this nuance is important so that the debate can be based upon scientifically substantiated assertions, and so that policy is not made on an incorrect basis," concludes Gundersen.

###

ABOUT THE STUDY:

* The analysis was conducted by Per Gundersen, Emil E. Thybring, Thomas Nord-Larsen, Lars Vesterdal, Knute J. Nadelhoffer and Vivian K. Johannsen of the Department of Geosciences and Natural Resource Management at the University of Copenhagen.

* The commentary has been published in the prestigious journal Nature and can be accessed here: https://www.science.ku.dk/english/press/news/2021/new-documentation-old-growth-forest-carbon-sinks-overestimated/

 

Urban 'escalator' means disadvantaged rural students miss out on top universities

Bright but disadvantaged students from urban areas are more likely to enter elite UK universities than similar peers from rural communities due to an urban 'escalator effect', according to a new study.

Researchers from the University of Bath analysed data from 800,000 English students commencing university in the years 2008, 2010, 2012, 2014 and 2016.

They found that while rural areas in general had higher overall progression to university than city centres and their surrounding areas, once factors including socio-economic status, age, ethnicity and sex were controlled for, disadvantaged pupils from rural areas were less likely to progress to one of 27 'top' UK universities.
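The pattern the researchers describe, higher overall rural progression yet lower progression among disadvantaged rural pupils, can be illustrated with a toy aggregation example. The numbers below are invented for illustration and are not from the HESA data.

```python
# Synthetic illustration of how overall rates and within-group rates can
# point in opposite directions once composition differs between areas.
# Each tuple: (students, entrants to a 'top' university).
data = {
    "rural": {"advantaged": (900, 180), "disadvantaged": (100, 5)},
    "urban": {"advantaged": (500, 90),  "disadvantaged": (500, 40)},
}

def rate(students, entrants):
    """Progression rate as a fraction of students."""
    return entrants / students

for area, groups in data.items():
    total_s = sum(s for s, _ in groups.values())
    total_e = sum(e for _, e in groups.values())
    print(f"{area}: overall {rate(total_s, total_e):.1%}, "
          f"disadvantaged {rate(*groups['disadvantaged']):.1%}")
```

In this synthetic example the rural area has the higher overall rate (because it contains more advantaged pupils), while its disadvantaged pupils progress at a lower rate than their urban peers, which is why controlling for background can reverse the headline comparison.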

The authors suggest the difference is due to a 'vortex of influences' including 'social mix effects' in more diverse urban settings, successive urban-centred policy interventions and the targeting of university and third-sector outreach activities to urban areas.

Although the results reaffirmed that social class remains the biggest predictor of progression to a top university, the researchers say they also highlight drawbacks of the existing geographic measures used to identify disadvantage. Because these measures do not account for the diverse nature of deprived areas, universities risk missing disadvantaged students. More sophisticated measures could help universities target under-represented and disadvantaged students more effectively, and the authors call for a co-ordinated strategic approach to ensure that no areas are missed by universities' widening participation programmes.

The paper is published in the British Educational Research Journal.

Jo Davies, who led the research as part of her PhD studies in the Department of Education, said: "There has been a lot of interest and concern about geographic inequalities in education. Our paper shows that whilst social background is still the most important predictor for progressing to an elite university, there may also be further geographic factors compounding access. We believe that the use of Geographic Information System (GIS) mapping methods, as used within our own research, could enable elite universities to target under-represented students more effectively, especially disadvantaged students living in rural areas with otherwise good progression rates."

The research team, from the Department of Education, used data from the Higher Education Statistics Agency (HESA) of 800,000 English students beginning university in the academic years 2008/09, 2010/11, 2012/13, 2014/15 and 2016/17.

They were interested in progression to 27 'top' UK universities, comprising the Russell Group plus the Universities of St Andrews, Bath and Strathclyde, comparing rates from each Middle Layer Super Output Area (MSOA) in England. Each MSOA, of which there are 6,791 across England, has a population of between 5,000 and 15,000 and between 2,000 and 6,000 households.

By analysing progression to these elite institutions after controlling for factors including education (state/private school education, tariff point score, number of facilitating subjects studied), socio-economic status, age, ethnicity, sex, distance travelled and academic year, the urban escalator effect emerged.

The research was funded by a University of Bath Research Studentship Award. The University currently funds seven PhD students as part of its programme of research aiming to uncover ways in which participation in higher education can be widened and to ensure that no student who has the ability and desire to go on to higher education is prevented from doing so because of their background.

Dr Matt Dickson, who leads the overall programme for the University's Institute for Policy Research, said: "This research is a great example of the importance of analysis that goes beyond a descriptive picture to understand the key factors that perpetuate inequalities in higher education access. Rather than a simple rural-urban divide, the reality is much more complex and this has important implications for higher education policy."

These lessons are already being implemented at the University of Bath. For example, alongside its existing programme of Widening Participation initiatives the University recently entered into a partnership with Villiers Park Educational Trust to support students from neglected rural and coastal communities to access top universities, such as Bath, through activities including coaching and mentoring for students.

###

For more information or to arrange an interview with one of the authors contact Chris Melvin in the University of Bath Press Office on 01225 383941 or cmm64@bath.ac.uk

The paper Geographies of elite higher education participation: An urban 'escalator' effect is published in the British Educational Research Journal and is available online.

UNIVERSITY OF BATH

The University of Bath is one of the UK's leading universities both in terms of research and our reputation for excellence in teaching, learning and graduate prospects.

The University is rated Gold in the Teaching Excellence Framework (TEF), the Government's assessment of teaching quality in universities, meaning its teaching is of the highest quality in the UK.

In the Research Excellence Framework (REF) 2014 research assessment 87 per cent of our research was defined as 'world-leading' or 'internationally excellent'. From developing fuel efficient cars of the future, to identifying infectious diseases more quickly, or working to improve the lives of female farmers in West Africa, research from Bath is making a difference around the world. Find out more: http://www.bath.ac.uk/research/

Well established as a nurturing environment for enterprising minds, Bath is ranked highly in all national league tables. We are ranked 6th in the UK by The Guardian University Guide 2021, and 9th in both The Times & Sunday Times Good University Guide 2021 and the Complete University Guide 2021. Our sports offering was rated as being in the world's top 10 in the QS World University Rankings by Subject in 2021.

Arctic sponge survival in the extreme deep-sea

ROYAL NETHERLANDS INSTITUTE FOR SEA RESEARCH

Research News

IMAGE: Still from the footage of the deep-sea sponge ground that was collected over a year.

CREDIT: NIOZ

For the first time, researchers from the SponGES project collected year-round video footage and hydrodynamic data from the mysterious world of a deep-sea sponge ground in the Arctic. Deep-sea sponge grounds are often compared to the rich ecosystems of coral reefs and form true oases. In a world where all light has disappeared and without obvious food sources, they provide a habitat for other invertebrates and a refuge for fish in the otherwise barren landscape. It is still puzzling how these biodiversity hotspots survive in this extreme environment as deep as 1500 metres below the water surface. With over 700 hours of footage and data on food supply, temperature, oxygen concentration, and currents, NIOZ scientists Ulrike Hanz and Furu Mienis found clues that could help find some answers.

Colourful, thriving communities

'The deep sea, in most places, is barren and flat', says marine geologist Furu Mienis. 'And then, suddenly, we have these sponge grounds that form colourful, thriving communities. It is intriguing how this system sustains itself in such a place.' To better understand this unexpected success, the research team deployed a bottom lander equipped with sensors and an underwater camera specially designed for the extreme environment by NIOZ engineers and technicians. The location: an enormous seamount in the Norwegian Sea that is part of the Mid-Atlantic Ridge, known as the Schulz Bank. A year later they picked it up. What they saw and measured was a world where sponges survived in temperatures below zero degrees Celsius and withstood potential food deprivation, high current speeds and 200-metre-high underwater waves.

Mienis: 'We still don't get why they grow where they grow, but we are off to a good start towards a better understanding. Apparently, this seamount and the hydrodynamic conditions create a beneficial system for the sponges.' A major finding was that the sponge ground sits at the interface between two water masses, where strong internal tidal waves can spread widely and interact with the bottom landscape. The data from the sensors showed that the water flow at the summit interacts with the seamount itself, producing turbulent conditions with temporarily high current speeds of up to 0.7 metres per second, which can be considered 'stormy' conditions in the deep sea.

At the same time, water movements around the seamount supply the sponge ground with food and nutrients from water layers above and below. The team measured the amount of food sinking from the surface to the sponge grounds and found that in this vertical direction, fresh food was delivered only once, during a major event in the summer when the phytoplankton bloomed. Hanz: 'This isn't enough to sustain the sponge grounds, which is why we expect that additionally, bacteria and dissolved matter keep the sponges and associated fauna from starving.'



CAPTION

Deploying the NIOZ designed and engineered bottom lander.

CREDIT

Photo: NIOZ

Extreme environment

The long-term record shows that the sponges on Schulz Bank thrive at temperatures around zero degrees Celsius. This is at least 4 °C colder than the waters where stony cold-water corals, also found in the deep sea, occur. Hanz: 'It is striking that they are alive at temperatures of zero degrees or even less. This is quite extreme, even for the deep sea.' In this food-deprived environment, the cold might actually play a role in the sponges' survival by lowering their metabolism. And the cold isn't the only extremity they face. Video recordings of the highest current speed events in winter show that these 'storms' push the sponges further towards their limit. Hanz: 'The speeds that we witnessed might be close to the maximum that they can endure. At the highest speeds, we saw some sponges as well as anemones being ripped from the seafloor.'
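The link between cold water and a lowered metabolism can be sketched with the standard Q10 temperature coefficient, a generic physiological rule of thumb. The Q10 value of 2 used below is an assumption for illustration, not a quantity measured in this study.

```python
# Q10 rule of thumb: metabolic rate changes by a factor of Q10 for every
# 10 degree C change in temperature: R2 = R1 * Q10 ** ((T2 - T1) / 10).
def relative_rate(t_cold, t_warm, q10=2.0):
    """Metabolic rate at t_cold relative to the rate at t_warm."""
    return q10 ** ((t_cold - t_warm) / 10)

# Sponges at ~0 degrees C compared with cold-water corals at ~4 degrees C:
factor = relative_rate(0.0, 4.0)
print(f"Rate at 0 C is roughly {factor:.0%} of the rate at 4 C (assuming Q10 = 2)")
```

Under this assumption, a 4 °C colder habitat cuts metabolic demand by roughly a quarter, which hints at how a lowered metabolism could help the sponges weather long stretches of food scarcity.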

And the most remarkable thing after watching hundreds of hours of footage? Hanz: 'The image that I started with was almost the same as the image at the end. One year had gone by and everything looked almost the same. It's just so cold out there that no crazy things are going on. The reef is still very pristine.' However, Hanz and Mienis stress that it is a very vulnerable ecosystem. Hanz: 'Sponges can be up to two hundred years old; once damaged, it takes ages for them to recover.' And there are potential threats. Hanz: 'Fishery seems to be the biggest one, but there is also the future possibility of deep-sea mining and changes in temperature and turbulence caused by climate change.' Mienis: 'It is a fragile equilibrium that consists of many tiny components. Take one of those away and the whole system can collapse.' Their research contributes to a first baseline that might become essential for future protection. Mienis: 'We have now identified the first natural ranges, and gathered some of the information on how these rich sponge grounds can thrive.'


CAPTION

3D image of the Schulz Bank, with the bottom at 1500 metres below the water surface and the summit at 600 metres.

CREDIT

Copyright: SponGES project