Wednesday, February 15, 2023

Rats trade initial rewards for long-term learning opportunities

When deciding on responses to a new stimulus, slower initial response times can maximize long-term reward through learning

Peer-Reviewed Publication

ELIFE

Scientists have provided evidence for the cognitive control of learning in rats, showing they can estimate the long-term value of learning and adapt their decision-making strategy to take advantage of learning opportunities.

The findings suggest that, by taking longer over a decision, rats may sacrifice immediate rewards to increase their learning outcomes and achieve greater rewards over the entire course of a task. The results are published today in eLife.

An established principle of behavioural neuroscience is the speed-accuracy trade-off, which is seen across many species, from rodents to primates. The principle describes the trade-off individuals face between responding slowly to make fewer errors and responding quickly at the risk of making more errors.

“Many studies in this area have focused on the speed-accuracy trade-off, without taking learning outcomes into account,” says lead author Javier Masís, who at the time was a PhD student at the Department of Molecular and Cellular Biology, and the Center for Brain Science, Harvard University, US, and is now a Presidential Postdoctoral Research Fellow at the Princeton Neuroscience Institute at Princeton University, US. “We aimed to investigate the difficult intertemporal choice problem that exists when you have the possibility to improve your behaviour through learning.”

For their study, Masís and colleagues sought to first establish whether rats were able to solve the speed-accuracy trade-off. The team set up an experiment in which rats, upon seeing one of two visual objects that could vary in size and rotation, decided whether the object corresponded to a left or a right response, and licked the corresponding touch-sensitive port once they had decided. If the rats licked the correct port, they were rewarded with water; if they licked the wrong port, they were given a timeout.

The team investigated the relationship between error rate (ER) and reaction time (RT) during these trials using the Drift-Diffusion Model (DDM) – a standard decision-making model in psychology and neuroscience in which the decision maker accumulates evidence through time until the level of evidence for one alternative reaches a threshold. The subject’s threshold level controls the speed-accuracy trade-off: a low threshold yields fast but error-prone responses, whereas a high threshold yields slow but accurate responses. For every difficulty level, however, there is a threshold that optimally balances speed and accuracy, allowing the decision maker to maximise their instantaneous reward rate (iRR). Across difficulties, this behaviour can be summarised through a relationship between ER and RT called the optimal performance curve (OPC). After learning the task fully, over half of the trained rats reached the OPC, demonstrating that well-trained rats solve the speed-accuracy trade-off.
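To make the trade-off concrete, here is a minimal Python simulation of a two-boundary drift-diffusion process (all parameter values are illustrative assumptions, not the paper's fits): lowering the threshold shortens reaction times but raises the error rate, and the iRR peaks at an intermediate setting.

import numpy as np

rng = np.random.default_rng(0)

def ddm_trial(drift, threshold, dt=0.001, noise=1.0):
    # Accumulate noisy evidence until it hits +threshold (correct) or -threshold (error).
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x > 0, t

def instantaneous_reward_rate(drift, threshold, n=500, timeout=2.0):
    # Rewards earned per unit time; errors incur a timeout, as in the rats' task.
    trials = [ddm_trial(drift, threshold) for _ in range(n)]
    correct = np.array([c for c, _ in trials])
    rt = np.array([t for _, t in trials])
    return correct.sum() / (rt.sum() + timeout * (~correct).sum())

# Sweeping the threshold traces out the speed-accuracy trade-off:
# too low is fast but error-prone, too high is accurate but slow.
for a in (0.25, 0.5, 1.0, 2.0):
    print(f"threshold={a:.2f}  iRR={instantaneous_reward_rate(1.0, a):.3f}")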

At the start of training, though, all rats gave up over 20% of their iRR, whereas towards the end, most rats near optimally maximised iRR. This prompted the question: if rats maximise instantaneous rewards by the end of learning, what governs their strategy at the beginning of learning?

To answer this, the team adapted the DDM as a recurrent neural network (RNN) that could learn over time, developing the Learning Drift-Diffusion Model (LDDM) and enabling them to investigate how long-term perceptual learning across many trials is influenced by the choice of decision time in individual trials. The model was designed with simplicity in mind, to highlight key qualitative trade-offs between learning speed and decision strategy. Analyses from this model suggested that rats adopt a ‘non-greedy’ strategy, trading initial rewards to prioritise learning and thereby maximise total reward over the course of the task. The team also demonstrated, in both experimental and simulated settings, that longer initial reaction times lead to faster learning and higher reward.
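The LDDM's actual equations are given in the paper; purely as a toy of our own construction, the qualitative claim can be illustrated by letting perceptual sensitivity (the drift rate) improve in proportion to the evidence integrated on each trial, so that a patient, non-greedy threshold learns faster:

import numpy as np

def toy_learning_curve(threshold, n_trials=500, eta=0.02):
    # Toy stand-in for the LDDM (our illustration, not the authors' model):
    # mean decision time scales roughly like threshold/drift, and each trial's
    # perceptual update grows with the time spent integrating evidence.
    drift, curve = 0.2, []
    for _ in range(n_trials):
        mean_rt = threshold / max(drift, 1e-6)
        drift += eta * mean_rt * drift * (1.0 - drift)  # saturating improvement
        curve.append(drift)
    return np.array(curve)

slow = toy_learning_curve(threshold=1.5)  # long initial RTs, non-greedy
fast = toy_learning_curve(threshold=0.3)  # short RTs, greedy for instant reward
print("trials to reach 90% of max sensitivity:",
      int(np.argmax(slow > 0.9)), "(patient) vs", int(np.argmax(fast > 0.9)), "(greedy)")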

The authors call for further studies to consolidate these findings. The current study is limited by its use of the DDM to estimate improved learning. The DDM, and therefore the LDDM, is a simple model: a powerful theoretical tool for understanding the types of simple choice behaviour that can be studied in the lab, but not one capable of quantitatively describing more naturalistic decision-making behaviour. Furthermore, the study focuses on a single visual perceptual task; the authors therefore encourage further work with other learnable tasks across difficulties, sensory modalities and organisms.

“Our results provide a new view of the speed-accuracy trade-off by showing that perceptual decision-making behaviour is strongly shaped by the stringent requirement to learn quickly,” says senior author Andrew Saxe, previously a postdoctoral research associate at the Department of Experimental Psychology, University of Oxford, UK, and now Sir Henry Dale Fellow and Associate Professor at the Gatsby Computational Neuroscience Unit and Sainsbury Wellcome Centre, University College London, UK.

“A key principle that our study propounds”, explains Javier Masís, “is that natural agents take into account the fact that they can improve through learning, and that they can and do shape the rate of that improvement through their choices. Not only is the world we live in non-stationary; we are also non-stationary, and we take that into account as we move around the world making choices.”  “You don’t learn the piano by futzing around the keys occasionally,” adds Saxe. “You decide to practise, and you practise at the expense of other more immediately rewarding activities because you know you’ll improve and it’ll probably be worth it in the end.”

###

Media contacts

Emily Packer, Media Relations Manager

eLife

e.packer@elifesciences.org

+44 (0)1223 855373

George Litchfield, Marketing and PR Assistant

eLife

g.litchfield@elifesciences.org

About eLife

eLife transforms research communication to create a future where a diverse, global community of scientists and researchers produces open and trusted results for the benefit of all. Independent, not-for-profit and supported by funders, we improve the way science is practised and shared. From the research we publish, to the tools we build, to the people we work with, we’ve earned a reputation for quality, integrity and the flexibility to bring about real change. eLife receives financial support and strategic guidance from the Howard Hughes Medical Institute, the Knut and Alice Wallenberg Foundation, the Max Planck Society and Wellcome. Learn more at https://elifesciences.org/about.

To read the latest Neuroscience research published in eLife, visit https://elifesciences.org/subjects/neuroscience.

Upsurge in rocket launches could impact the ozone layer

University of Canterbury (UC) researchers have summarised the threats that future rocket launches would pose to Earth’s protective ozone layer, in a new review article published in the Journal of the Royal Society of New Zealand.

Peer-Reviewed Publication

TAYLOR & FRANCIS GROUP

The ozone layer, which protects life on Earth from harmful ultraviolet (UV) rays from the sun, was severely damaged in the 1980s and 1990s due to chlorofluorocarbons (CFCs) — chemicals used in aerosols and refrigeration. Thanks to coordinated global action and legislation, the ozone layer is now on track to heal this century.

Rocket launches emit both gases and particulates that damage the ozone layer. Reactive chlorine, black carbon and nitrogen oxides, among other species, are all emitted by contemporary rockets, while emissions from newer fuels such as methane are yet to be measured.

“The current impact of rocket launches on the ozone layer is estimated to be small but has the potential to grow as companies and nations scale up their space programmes,” Associate Professor in Environmental Physics Dr Laura Revell says.

“Ozone recovery has been a global success story. We want to ensure that future rocket launches continue that sustainable recovery.”

Global annual launches grew from 90 to 190 in the past 5 years, largely in the Northern Hemisphere. The space industry is projected to grow more rapidly: financial estimates indicate the global space industry could grow to US$3.7 trillion by 2040.

“Rockets are a perfect example of a ‘charismatic technology’ – where the promise of what the technology can enable drives deep emotional investment – extending far beyond what the technology itself affects,” Rutherford Discovery Fellow and planetary scientist, UC senior lecturer Dr Michele Bannister says.

Rocket fuel emissions are currently unregulated, both in Aotearoa New Zealand and internationally.

UC Master’s student Tyler Brown, who was involved in the research, says Aotearoa New Zealand is uniquely positioned to both lead and participate in this field. “New Zealand’s role as a major player in the global launch industry means we can help steer the conversation. We stand to benefit enormously from additional growth in our domestic space industry, and with that comes the opportunity to ensure that global activities are sustainable for the planet as a whole.”

The review lays out detailed plans of action for companies and for the ozone research community, with a call for coordinated global action to protect the upper atmosphere environment. Actions that companies can take include measuring the emissions of launch vehicles on the test stand and in situ during flight, making those data available to researchers, and incorporating ozone effects into industry best-practice rocket design and development.

“The international ozone research community has a strong history of measuring atmospheric ozone and developing models to understand how human activities could impact this critical layer of our atmosphere. By working with launch providers, we are well-placed to figure out what impacts we might see”, says Dr Revell.

“Rockets have exciting potential to enable industrial-level access to near-Earth space, and exploration throughout the Solar System. Creating sustainable global rocket launches is going to take coordination across aerospace companies, scientists, and governments: it is achievable, but we need to start now,” says Dr Bannister. “This is our chance to get ahead of the game.”

 

Mysterious brain activity in mice watching a movie could help tackle Alzheimer's, improve AI

Tracking memory-making neurons’ activity in mice as they watch a classic film reveals novel ways to diagnose learning and memory disorders and improve AI

Peer-Reviewed Publication

UNIVERSITY OF CALIFORNIA - LOS ANGELES HEALTH SCIENCES

LOS ANGELES – Even the legendary filmmaker Orson Welles couldn’t have imagined such a plot twist.  

By showing Welles’s movie “Touch of Evil” to mice, Chinmay Purandare, PhD, and Prof. Mayank Mehta of UCLA have uncovered surprising and important new insights about how neurons form memories. The discovery points to new ways to diagnose Alzheimer’s and other learning and memory impairments, while also improving artificial intelligence. 

Mice were shown a short clip from the 1958 film noir classic “Touch of Evil” while scientists monitored their brain activity. The clip was rather nondescript: black-and-white, silent, showing humans walking about. Textbook knowledge and conventional wisdom say that mice should show no interest in such a movie, and neither should neurons in a part of their brain called the hippocampus, which is known to be crucial for learning and memory. Whenever scientists had looked inside this part of the mouse brain before, it appeared to act only as “the GPS system of the brain” (as recognized by the 2014 Nobel Prize in Physiology or Medicine), with no role in general, non-spatial learning such as remembering a conversation. This had been a major obstacle for research on the diagnosis of memory and on mechanisms of abstraction, including in AI.

However, the researchers made a blockbuster finding: There were surprising, but highly systematic bursts of activity in the hippocampus in response to this movie. Scientists could even reconstruct specific movie segments using these mysterious bursts from only a fraction of hippocampal neurons.  
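The release does not detail the decoding method; as a minimal sketch of the general idea (template matching on simulated population activity, with all numbers invented for illustration), segment identity can be read out by correlating a single trial's population response with per-segment average responses:

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: average firing rates of 50 neurons for each of 20 movie
# segments ("templates"), plus one noisy single-presentation response.
n_neurons, n_segments = 50, 20
templates = rng.gamma(2.0, 2.0, size=(n_segments, n_neurons))
observed = templates[7] + rng.normal(0.0, 1.0, n_neurons)  # true segment: 7

# Decode the segment whose template best correlates with the observed
# population vector - a simple stand-in for the study's reconstruction.
scores = [np.corrcoef(observed, t)[0, 1] for t in templates]
print("decoded segment:", int(np.argmax(scores)))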

The plot thickens even more. Even neurons in other parts of the brain (the primary visual cortex or the thalamus) that are commonly thought to encode simple features like vertical or horizontal lines responded far more robustly to specific scenes of the movie than to the textbook stimuli. In fact, every part of the brain the researchers investigated, from the simple visual circuits to the GPS circuits, lit up robustly in response to specific movie scenes.

Mehta said the findings represent a “major paradigm shift” in how scientists can study mice’s ability to recall a specific experience or event – or what’s known as episodic memory. Mehta said this could help scientists address a missing component in research for memory diseases like Alzheimer’s.

“Although dozens of drugs have cured Alzheimer’s in mice, none have worked in humans,” Mehta said. “One reason is that the standard test of episodic learning and memory is spatial navigation in mice. However, Alzheimer’s patients have profound deficits in non-spatial memory too – e.g., a conversation or an event they witnessed, which is unrelated to GPS navigation.”

The authors say that testing Alzheimer’s and other memory drugs in mice using only a spatial memory task doesn’t address whether the treatments improve the mice’s ability to remember the events and experiences that make up episodic memory.

“It is a major challenge to create such events for mice that would closely mimic events familiar to humans. Hence, we turned to movies,” said Dr. Purandare, the study’s lead author. “By all textbook accounts, human movies should not generate any interpretable pattern in the mouse hippocampus.”

However, in studies published in Nature in 2021 and 2022, these UCLA researchers found that neurons in the mouse hippocampus responded to simple visual stimuli when mice explored virtual reality, and that this induced robust neuroplasticity. They therefore theorized it was possible to test episodic memory in mice by showing them a movie and monitoring activity in their hippocampus.

In this new study, nearly half of neurons in the rodent hippocampus encoded specific, small segments of the movie, signifying a remarkable response to the events on screen. The mundanity of the silent, black-and-white clip made the findings even more compelling, Mehta said. In fact, mice were also free to ignore the movie if they wanted to.

“If the hippocampus lights up with this mundane movie clip, without any memory demand, then we can safely conclude that it is not due to other things like expectation of reward or excitement,” Mehta said. “We were blown away by the massive responses despite the lack of these emotional components.”

Mehta said preliminary data indicated that making the scene richer by adding interesting elements for mice, like images of other animals, sounds, etc. could produce a stronger hippocampal response, creating an emotional response and vibrant episodic memories.

“Another major surprise: the visual areas did not care if the movie was played in sequence or in a scrambled order. But the hippocampal neurons did something very different – they did not respond at all to the scrambled movie,” Mehta said. “This shows that the hippocampal neurons are extracting episodic information from incoming visual information that is itself agnostic to the episode.”

Mehta said the findings are also crucial for improving AI. “The hippocampus is at the apex of a deep neural network, with the eyes at the front end, followed by the thalamus, primary visual cortices and ending up in the hippocampus.” But, given the prevailing belief that the mouse hippocampus is “the GPS system,” experiments could study either the visual cortex or the hippocampus, but not both at the same time.

“Our findings open up the possibility to study all these brain areas simultaneously and determine how the brain creates an episode from a series of images falling on the retina,” Dr. Purandare said. “Selective and episodic activation of the mouse hippocampus using a human movie opens up the possibility of directly testing human episodic memory disorders and therapies using mouse neurons, a major step forward.”

The study appears in the journal eLife. It was supported by funding from the National Institute of Mental Health and research at the Allen Institute. The authors declared no competing interest.   

Dr. Purandare is now a postdoctoral scholar at UCSF. Mehta is a professor in the departments of Physics, Neurology, and Electrical and Computer Engineering, and director of the Center for Physics of Life at UCLA.

For more information about their research using immersive virtual reality for mice, visit https://www.physics.ucla.edu/~mayank/  

  

New technique maps large-scale impacts of fire-induced permafrost thaw in Alaska

Researchers combine active airborne lidar sensors, passive spaceborne optical sensors and machine learning

Peer-Reviewed Publication

FLORIDA ATLANTIC UNIVERSITY

Image: Recovery of vegetation after the 2001 and 2010 fires over the Tanana Flats lowland permafrost in interior Alaska, where the study was conducted. Credit: Florida Atlantic University

About 40 percent of interior Alaska is underlain by ice-rich permafrost – permanently frozen ground made up of soil, gravel and sand bound together by ice. Certain conditions, such as climate warming, have intensified tundra wildfires, which have profound implications for permafrost thaw.

Surface vegetation plays a dominant role in protecting permafrost from summer warmth, so any alteration of vegetation structure, particularly following severe wildfires, can cause dramatic top–down thaw.

Severe wildfires remove the vegetation and surface soil organic matter, and the loss of this insulation increases the ground heat flux and promotes permafrost thaw. This thaw triggers ground sinking and thermokarst (ground-surface collapse from permafrost thaw) development and leads to surface water inundation, vegetation shifts, changes in soil carbon balance and carbon emissions, all impacting climate warming.

The permafrost–fire–climate system has been a research hotspot for decades, yet the large-scale effects of these wildfires on land cover change, post-fire resilience and subsequent thaw settlement remain unknown. Thaw settlement is difficult to measure because there is often no absolute reference frame against which to compare the subtle but widespread topographic changes in permafrost landscapes.

Researchers from Florida Atlantic University, in collaboration with the United States Army Corps of Engineers Cold Regions Research & Engineering Laboratory, and Alaska Ecoscience, systematically analyzed the effects of six large fires that have occurred since 2000 on the Tanana Flats lowland in interior Alaska on land cover change, vegetation dynamics, and terrain subsidence, or sinking.

For the first time, they have developed a machine learning-based ensemble approach to quantify fire-induced thaw settlement across the entire Tanana Flats, which encompasses more than 3 million acres (approximately 12,100 km2). Researchers linked airborne repeat lidar data to time-series Landsat products (satellite images) to delineate thaw settlement patterns across the six fire scars. This novel approach helped to explain about 65 percent of the variance in lidar-detected elevation change.

Study findings, published in Environmental Research Letters, showed that in total, the six fires resulted in a loss of nearly 99,000 acres (approximately 400 km2) of evergreen forest from 2000 to 2014 among nearly 155,000 acres (approximately 630 km2) of fire-influenced forests with varying degrees of burn severity. The fires provided favorable conditions for shrub-fen (low-growing shrubs) development, resulting in comparable post-fire coverage of shrubland and evergreen forest and increasing encroachment of shrubland into areas with sparse vegetation.

Importantly, based on Landsat observations, the researchers did not observe forest regrowth even 13 years after the oldest fire, which occurred in 2001.

“Our study has shown that linking airborne repeat lidar with Landsat products is an encouraging tool for large-scale quantification of fire-induced thaw settlement,” said Caiyun Zhang, Ph.D., senior author and a professor in the Department of Geosciences within FAU’s Charles E. Schmidt College of Science. “Because airborne lidar measurements are increasingly being made across northern permafrost regions, our method is a valuable means of projecting elevation change across entire fire scars within uniform permafrost-affected landscapes by using data-driven machine learning techniques.”

The Tanana Flats, which comprises more than 6 million acres (approximately 24,300 km2), is representative of the lowland landscape south of Fairbanks in interior Alaska. It consists of a complex mosaic of ice-rich permafrost and permafrost-free ecosystems and is a hotbed for thermokarst. Much of the land is part of a military training area managed by the U.S. Department of Defense.

For the study, researchers assessed three commonly used machine learning algorithms – artificial neural networks, support vector machines and random forests – for fire-induced thaw settlement modeling.

“Machine learning has been extensively applied for modeling in geosciences,” said Zhang. “The idea is that each algorithm has its pros and cons, and an ensemble analysis of comparative models can produce a more robust estimation than the application of a single model.”
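As a sketch of what such an ensemble could look like in practice (generic scikit-learn regressors on stand-in data; the study's actual predictors, training data and model settings are not reproduced here), the three models' predictions of lidar-detected elevation change are simply combined, with their spread serving as one simple per-pixel uncertainty estimate:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(3)

# Stand-in data: rows are pixels, columns are Landsat-derived predictors
# (e.g. spectral indices over time); target is lidar-detected elevation change (m).
X = rng.normal(size=(500, 6))
y = X @ rng.normal(size=6) * 0.05 + rng.normal(0, 0.02, 500)

models = [
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0),
    SVR(kernel="rbf", C=1.0),
    RandomForestRegressor(n_estimators=200, random_state=0),
]
for m in models:
    m.fit(X, y)

# Ensemble estimate: average the three predictions; the disagreement between
# models is one way to express the uncertainty mapped in the figure below.
preds = np.column_stack([m.predict(X) for m in models])
ensemble_mean, ensemble_spread = preds.mean(axis=1), preds.std(axis=1)
print(ensemble_mean[:3].round(3), ensemble_spread[:3].round(3))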

Current and future projected increases in mean annual air temperature, the length of the summer growing season, and the severity and extent of wildfire are expected to lead to an increasingly dominant role of wildfire in permafrost ecosystems.

“Mapping thaw settlement as a result of wildfires is critical since it is associated with subsequent thermokarst development, snow accumulation, hydrology, vegetation shifts, and commensurate changes in the land-atmospheric exchange of water, energy, and greenhouse gases,” said Zhang. “The combination of active airborne lidar sensors with passive spaceborne optical sensors will enable scientists to measure widespread and large areas affected by wildfires in cold regions, especially with climate warming and increased fire events.”

This research was funded by the U.S. Army Corps of Engineers, Engineer Research and Development Center Applied Research Program Office for Installations and Operational Environment and Basic Research Program (PE 0601102/AB2), the U.S. Department of Defense’s Strategic Environmental Research and Development Program (projects RC2110 and RC18-1170), and the U.S. Department of Energy, Office of Science, Environmental System Science program (0000260300). 

Study co-authors are Thomas A. Douglas, Ph.D., a research chemist and senior scientist, U.S. Army Cold Regions Research & Engineering Laboratory; David Brodylo, a Ph.D. student in FAU’s Department of Geosciences; and M. Torre Jorgenson, Alaska Ecoscience.

- FAU -

About Florida Atlantic University:
Florida Atlantic University, established in 1961, officially opened its doors in 1964 as the fifth public university in Florida. Today, the University serves more than 30,000 undergraduate and graduate students across six campuses located along the southeast Florida coast. In recent years, the University has doubled its research expenditures and outpaced its peers in student achievement rates. Through the coexistence of access and excellence, FAU embodies an innovative model where traditional achievement gaps vanish. FAU is designated a Hispanic-serving institution, ranked as a top public university by U.S. News & World Report and a High Research Activity institution by the Carnegie Foundation for the Advancement of Teaching. For more information, visit www.fau.edu.

  

Image: Mapped thaw settlement caused by the fires (a) and estimated uncertainty across the different models (b). Credit: Florida Atlantic University

Image: Burn severity across the six fire scars (a), and vegetation change before (c) and after (b) the fires. Credit: Florida Atlantic University

Number of fires in Brazilian Amazon in August-September 2022 was highest since 2010

A Brazilian study identified more than 74,000 active fires in the period. Half of the Amazon is typically vulnerable to fire in the period analyzed. Human action was the main cause of the recent destruction.

Peer-Reviewed Publication

FUNDAÇÃO DE AMPARO À PESQUISA DO ESTADO DE SÃO PAULO

Image: Half of the Amazon is typically vulnerable to fire in the two-month period analyzed. Human action was the main cause of the recent destruction. Credit: Gabriel de Oliveira/University of South Alabama

The number of active fires recorded in the Brazilian Amazon in August-September 2022 was the highest since 2010, according to an article published in the journal Nature Ecology & Evolution. Besides the record number of fires (74,398), the researchers found they were due not to extreme drought, as in 2010, but to recent deforestation by humans.

“The idea of publishing our findings came up when we analyzed data provided free of charge by the Queimadas program,” said Guilherme Mataveli, first author of the article. Queimadas in Portuguese means burnings, and he was referring to the forest fire monitoring service run by the National Space Research Institute (INPE). Mataveli is currently a postdoctoral researcher in INPE’s Earth Observation and Geoinformatics Division and has a scholarship from FAPESP.

The number of fires typically rises every year in August and September, when the weather favors fire in about half of the Amazon. “But the surge in the number of fires in 2010 was due to an extreme drought event that occurred in a large part of the region, whereas nothing similar occurred in 2022, so other factors must have been to blame,” Mataveli said.

Mataveli’s main research interest is the influence of land use and land cover on emissions of fine particulate matter from fire in the Amazon and Cerrado biomes, which he studies using modeling and remote sensing.

The researchers also analyzed the locations of the fires detected using data provided free of charge by another INPE platform, TerraBrasilis. Their analysis showed that 62% occurred in recently deforested areas; that the number of fires in recently deforested areas in August-September 2022 rose 71% compared with the same period of 2021; and that the total deforested area increased by 64% according to the deforestation alerts issued by INPE’s real-time detection system (DETER).

“Alarming results also emerged from our analysis of the type of land on which these fires occurred, classified into public land, smallholdings, and medium to large private properties,” Mataveli said. More than a third (35%) of the fires detected in August-September 2022 occurred in public areas such as conservation units and Indigenous reservations, and the number of fires in these areas rose 69% year over year. 

“The Amazon has become more vulnerable to grilagem [land grabbing via falsification of title deeds] in recent years, and this sharp increase is one of the results of this process,” he said.

A key source of information for this land use and land cover analysis was TerraBrasilis, much of whose data comes from the Rural Environmental Register (CAR), designed to ensure compliance with the Forest Code. All landowners and land users are required to register with the CAR. The process is essentially self-declaratory, with the landowner entering environmental information about the property. Registration with the CAR is not required for conservation units, Indigenous reservations and other types of public land.

Climate goals

The advance of fire, deforestation, degradation, illegal mining and land grabbing in the Amazon runs counter to the goals established internationally by Brazil as part of its commitment to combat global warming, such as stopping all illegal deforestation by 2028 and achieving a 50% reduction in greenhouse gas emissions by 2030 compared to 2005 levels.

Besides the negative effects on biodiversity and on the ecosystem services essential to human life, such as climate regulation, uncontrolled deforestation and associated activities endanger the Brazilian economy. Markets for its commodity exports, such as the European Union, are in the process of approving new regulatory standards that will prevent the purchase of goods produced in deforested or degraded areas.

“The article highlights a systemic problem that must be seriously addressed by society. A reversal of this trend requires punishment of those who break the law, implementation of efficient public policies, communication with society, and a search for alternative solutions based on cutting-edge science and capable of promoting sustainable development in the region. Identifying and prosecuting the people who are illicitly destroying the world’s largest tropical forest is one of the challenging tasks on the environmental agenda to be faced by the incoming federal government,” said Luiz Aragão, last author of the article. 

The other authors are Luciana Vanni Gatti and Nathália Carvalho, both researchers at INPE; Liana Oighenstein Anderson at the National Centre for Monitoring and Early Warnings of Natural Disasters (CEMADEN); Gabriel de Oliveira at the University of South Alabama (USA); Celso H. L. Silva-Junior at the University of California Los Angeles (UCLA) and California Institute of Technology (Caltech) in the United States, and the Federal University of Maranhão in Brazil; and Scott C. Stark at Michigan State University (MSU).

Aragão, Anderson and Gatti are also supported in their research by funding from FAPESP.

About São Paulo Research Foundation (FAPESP)

The São Paulo Research Foundation (FAPESP) is a public institution with the mission of supporting scientific research in all fields of knowledge by awarding scholarships, fellowships and grants to investigators linked with higher education and research institutions in the State of São Paulo, Brazil. FAPESP is aware that the very best research can only be done by working with the best researchers internationally. Therefore, it has established partnerships with funding agencies, higher education, private companies, and research organizations in other countries known for the quality of their research and has been encouraging scientists funded by its grants to further develop their international collaboration. You can learn more about FAPESP at www.fapesp.br/en and visit FAPESP news agency at www.agencia.fapesp.br/en to keep updated with the latest scientific breakthroughs FAPESP helps achieve through its many programs, awards and research centers. You may also subscribe to FAPESP news agency at http://agencia.fapesp.br/subscribe.

Pungent ginger compound puts immune cells on heightened alert

Pungent compound stimulates white blood cells

Peer-Reviewed Publication

LEIBNIZ-INSTITUT FÜR LEBENSMITTEL-SYSTEMBIOLOGIE AN DER TU MÜNCHEN

Image: Ginger tea and ginger. Whether as a medicinal plant or foodstuff, ginger is becoming increasingly popular in Germany. Credit: Gisela Olias/Gaby Andersen/Leibniz-LSB@TUM

Ginger has a reputation for stimulating the immune system. New results from the Leibniz Institute for Food Systems Biology at the Technical University of Munich (Leibniz-LSB@TUM) now support this thesis. In laboratory tests, small amounts of a pungent ginger constituent put white blood cells on heightened alert. The study also shows that this process involves a type of receptor that plays a role in the perception of painful heat stimuli and the sensation of spiciness in food.

Whether as a medicinal plant or foodstuff, ginger is becoming increasingly popular in Germany. According to the German Federal Statistical Office, the annual import volume of the fruity-hot root has almost quadrupled over the last ten years to around 31,600 tons. However, even though ginger consumption has increased, the question arises as to whether normal consumption levels are sufficient to achieve health effects – and if so, which compounds and molecular mechanisms play a role.

Ginger compound enters the blood

To help clarify these questions, a team led by Veronika Somoza, director of the Leibniz Institute in Freising, Germany, conducted extensive research. The starting point was a result of an earlier pilot study, in which first author Gaby Andersen from the Leibniz-LSB@TUM also played a key role. As the study shows, significant amounts of pungent ginger compounds enter the blood about 30 to 60 minutes after consuming one liter of ginger tea. By far the highest levels were achieved by [6]-gingerol, with plasma concentrations of approximately 7 to 17 micrograms per liter. 

The pungent compound is known to exert its "taste" effect via the so-called TRPV1 receptor, an ion channel located on the surface of nerve cells that responds to painful heat stimuli as well as to pungent compounds from chili and ginger. Since some studies suggest that white blood cells also possess this receptor, the research team tested whether [6]-gingerol influences the activity of these immune cells.

Pungent compound stimulates white blood cells

In a first step, the team succeeded in detecting the receptor on neutrophil granulocytes. These cells make up about two-thirds of white blood cells and serve to combat invading bacteria. Further laboratory experiments by the research group also showed that even a very low concentration of almost 15 micrograms of [6]-gingerol per liter is sufficient to put the cells on heightened alert. Thus, compared to control cells, the stimulated cells reacted about 30 percent more strongly to a peptide that simulates a bacterial infection. Addition of a TRPV1 receptor-specific inhibitor reversed the effect induced by [6]-gingerol.

"Thus, at least in experiments, very low [6]-gingerol concentrations are sufficient to affect the activity of immune cells via the TRPV1 receptor. In blood, these concentrations could theoretically be achieved by consuming about one liter of ginger tea," says Gaby Andersen. "So, our results support the assumption that the intake of common amounts of ginger may be sufficient to modulate cellular responses of the immune system. Nevertheless, there are still many unanswered questions at the molecular, epidemiological and medical levels that need to be addressed with the help of modern food and health research," concludes Veronika Somoza.

Publication: Andersen, G., Kahlenberg, K., Krautwurst, D., and Somoza, V. (2022). [6]-Gingerol Facilitates CXCL8 Secretion and ROS Production in Primary Human Neutrophils by Targeting the TRPV1 Channel. Mol Nutr Food Res, e2200434. 10.1002/mnfr.202200434.
https://onlinelibrary.wiley.com/doi/epdf/10.1002/mnfr.202200434

More information:

Pilot study: Schoenknecht, C., Andersen, G., Schmidts, I., and Schieberle, P. (2016). Quantitation of Gingerols in Human Plasma by Newly Developed Stable Isotope Dilution Assays and Assessment of Their Immunomodulatory Potential. J Agric Food Chem 64, 2269-2279. 10.1021/acs.jafc.6b00030. pubs.acs.org/doi/10.1021/acs.jafc.6b00030

In the pilot study, subjects drank one liter of ginger tea within 20 minutes on an empty stomach. The tea had been prepared as follows: 100 g of a fresh Chinese ginger was peeled and crushed, brewed with one liter of boiling water, and allowed to steep for 15 minutes. The infusion was then filtered to remove insoluble components. The research group determined the highest average plasma concentrations for the pungent compounds [6]-, [8]-, and [10]-gingerol (42.0, 5.3, and 4.8 nmol per liter, respectively) approximately 30 to 60 minutes after the subjects drank the ginger tea.
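The nanomolar values here are consistent with the microgram-per-liter figures quoted earlier; a quick conversion using molar masses computed from the gingerols' molecular formulas ([6]-gingerol C17H26O4, [8]-gingerol C19H30O4, [10]-gingerol C21H34O4) shows this:

# Convert the pilot study's peak plasma concentrations from nmol/L to µg/L.
# Molar masses (g/mol) from the molecular formulas given in the lead-in.
molar_mass = {"[6]-gingerol": 294.4, "[8]-gingerol": 322.4, "[10]-gingerol": 350.5}
nmol_per_l = {"[6]-gingerol": 42.0, "[8]-gingerol": 5.3, "[10]-gingerol": 4.8}

for name, c in nmol_per_l.items():
    ug_per_l = c * molar_mass[name] / 1000  # nmol/L x g/mol = ng/L; /1000 -> µg/L
    print(f"{name}: {ug_per_l:.1f} µg/L")
# [6]-gingerol comes out near 12 µg/L, within the 7-17 µg/L range quoted above.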

TRPV1 stands for: Transient receptor potential cation channel subfamily V (for vanilloid), subtype 1.

Link to data from the German Federal Statistical Office (Destatis):  www.destatis.de/DE/Presse/Pressemitteilungen/Zahl-der-Woche/2023/PD23_02_p002.html

Contacts:

Scientific contact:

Prof. Dr. Veronika Somoza
Director of the Leibniz Institute for Food Systems Biology
at the Technical University of Munich (Leibniz-LSB@TUM)
Lise-Meitner-Str. 34
85354 Freising
E-mail: v.somoza.leibniz-lsb(at)tum.de

Dr. Gaby Andersen
Research Group Metabolic Function & Biosignals at the Leibniz-LSB@TUM
Phone: +49 8161 71-2930
E-mail: g.andersen.leibniz-lsb(at)tum.de

Press contact at the Leibniz-LSB@TUM:

Dr. Gisela Olias
Knowledge Transfer, Press and Public Relations
Phone: +49 8161 71-2980
E-mail: g.olias.leibniz-lsb(at)tum.de

www.leibniz-lsb.de

Information about the Institute:

The Leibniz Institute for Food Systems Biology at the Technical University of Munich (Leibniz-LSB@TUM) comprises a new, unique research profile at the interface of Food Chemistry & Biology, Chemosensors & Technology, and Bioinformatics & Machine Learning. As this profile has grown far beyond the previous core discipline of classical food chemistry, the institute spearheads the development of a food systems biology. Its aim is to develop new approaches for the sustainable production of sufficient quantities of food whose biologically active effector molecule profiles are geared to health and nutritional needs, but also to the sensory preferences of consumers. To do so, the institute explores the complex networks of sensorically relevant effector molecules along the entire food production chain with a focus on making their effects systemically understandable and predictable in the long term.

The Leibniz-LSB@TUM is a member of the Leibniz Association, which connects 97 independent research institutions. Their orientation ranges from the natural sciences, engineering and environmental sciences through economics, spatial and social sciences to the humanities. Leibniz Institutes devote themselves to social, economic and ecological issues. They conduct knowledge-oriented and application-oriented research, also in the overlapping Leibniz research networks, maintain scientific infrastructures and offer research-based services. The Leibniz Association focuses on knowledge transfer, especially with the Leibniz Research Museums. It advises and informs politics, science, business and the public. Leibniz institutions maintain close cooperation with universities – among others in the form of the Leibniz Science Campuses – as well as with industry and other partners in Germany and abroad. They are subject to a transparent and independent review process. Due to their national significance, the federal government and the federal states jointly fund the institutes of the Leibniz Association. The Leibniz Institutes employ around 21,000 people, including almost 12,000 scientists. The total budget of all the institutes is more than two billion euros.

+++ Stay up to date via our Twitter channel twitter.com/LeibnizLSB +++

Study finds only about half of AI-generated ads labeled as such

Consumer decisions being driven unknowingly; bots not required to follow same regulations as humans

Peer-Reviewed Publication

UNIVERSITY OF KANSAS

LAWRENCE — While you've been online today, chances are you’ve seen an AI-generated ad, likely without knowing it. A University of Kansas study has analyzed more than 1,000 AI-generated ads from across the web and found that they are only labeled as ads about half the time — and that they intentionally appeal to consumers in positive ways to influence them.

The technology has the potential to influence consumer behavior and decisions without viewers understanding whether the content was an advertisement or if it was developed by humans or bots. The prevalence of AI in programmatic advertising shows how frequently the technology is used and that it can skirt guidelines that human-developed ads have to follow, according to researchers.

“AI is not just a passive technology anymore. It’s actively being engaged in what we think — and in a way, how we make our decisions,” said Vaibhav Diwanji, assistant professor of journalism & mass communications. “The process has become more automated and is taking over the role of creative content online.”

Diwanji was the lead author of a study that analyzed 1,375 AI-generated programmatic ads found on social media, news sites, search engines and video platforms. The study, written with Jaejin Lee and Juliann Cortese of Florida State University, was published in the Journal of Strategic Marketing.

AI-generated ads are those created by algorithms to develop content that is contextualized and personalized for an individual based on their internet usage and demographics. The research team analyzed the ads to better understand if they are labeled as ads, what sort of appeals they made to consumers and how they used sentiment. Only about half of the ads were clearly labeled as such, meaning people frequently see content that they might believe is organic, such as a post by a friend on social media or a news item.

The primary problem with that lack of transparency is that humans must follow guidelines set forth by agencies such as the FCC and FTC when creating advertising content. AI is not bound by such restrictions so far, Diwanji said.

“Higher levels of nondisclosure in the AI-enabled ad content, similar to native advertising, would be likely to cause consumer deception, tricking them into false beliefs, confusion or dissatisfaction. At its core, AI-enabled advertising should be a fine balance between providing consumers with clear source disclosure and offering content that meshes with and provides value similar to the context in which it is placed,” the researchers wrote.

In terms of approach, the ads tended to be positive in their appeals, containing messages that were neither negative nor neutral in the way they touted the good or service represented. They also tended to focus on the consumer and the benefit the individual could experience from what was being sold. Analysis showed that ads found on social media platforms revealed sponsorship most frequently, while news and publishing sites labeled them least frequently.

“You leave your footprint wherever you go online, and this is one more way for advertisers to try to persuade you in purchasing decisions,” Diwanji said. “It’s interesting how AI has evolved from a tool people could use to something unprompted. Only about half of the ads we saw revealed their brand sponsorship. From an ethical standpoint, you’re showing us sponsored content, but not telling us. That can create a conflict.”

AI-generated programmatic ads can also be developed much faster than human-generated ads. And with creative optimization, they could be far more effective in their appeals than traditional ads. While that may be good for businesses’ bottom lines, it could be deceptive and could threaten jobs in creative industries, including advertising. And when ads are not clearly labeled, AI can place them higher in search engine results, leading people to click without realizing the link leads to sponsored content. For those reasons, the authors argue that FTC guidelines and federal policy should be updated to require more transparency of AI-generated advertising.

“It’s not wrong to use AI. It’s just important that you disclose that in an ad or marketing appeal,” Diwanji said. “When humans create content, they are bound by guidelines of the FCC, FTC and others. If you’re not told it’s AI-sponsored content, it could influence your decisions outside of those restrictions.”