Saturday, December 24, 2022

Can the AI driving ChatGPT help to detect early signs of Alzheimer's disease?

Drexel researchers use GPT-3 to spot Alzheimer's indicators in spontaneous speech

Peer-Reviewed Publication

DREXEL UNIVERSITY

The artificial intelligence algorithms behind the chatbot program ChatGPT — which has drawn attention for its ability to generate humanlike written responses to some of the most creative queries — might one day be able to help doctors detect Alzheimer’s Disease in its early stages. Research from Drexel University’s School of Biomedical Engineering, Science and Health Systems recently demonstrated that OpenAI’s GPT-3 program can identify clues in spontaneous speech that predict the early stages of dementia with 80% accuracy.

Reported in the journal PLOS Digital Health, the Drexel study is the latest in a series of efforts to show the effectiveness of natural language processing programs for early prediction of Alzheimer’s – leveraging current research suggesting that language impairment can be an early indicator of neurodegenerative disorders.

Finding an Early Sign

The current practice for diagnosing Alzheimer’s Disease typically involves a medical history review and lengthy set of physical and neurological evaluations and tests. While there is still no cure for the disease, spotting it early can give patients more options for therapeutics and support. Because language impairment is a symptom in 60-80% of dementia patients, researchers have been focusing on programs that can pick up on subtle clues — such as hesitation, making grammar and pronunciation mistakes and forgetting the meaning of words — as a quick test that could indicate whether or not a patient should undergo a full examination.

“We know from ongoing research that the cognitive effects of Alzheimer’s Disease can manifest themselves in language production,” said Hualou Liang, PhD, a professor in Drexel’s School of Biomedical Engineering, Science and Health Systems and a coauthor of the research. “The most commonly used tests for early detection of Alzheimer’s look at acoustic features, such as pausing, articulation and vocal quality, in addition to tests of cognition. But we believe improvements in natural language processing programs provide another path to support early identification of Alzheimer’s.”

A Program that Listens and Learns

GPT-3, the third generation of OpenAI’s Generative Pre-trained Transformer (GPT), uses a deep learning algorithm trained by processing vast swaths of information from the internet, with a particular focus on how words are used and how language is constructed. This training allows it to produce humanlike responses to any task that involves language, from answering simple questions to writing poems or essays.

GPT-3 is particularly good at “zero-shot learning” – meaning it can respond to prompts that would normally require external knowledge that has not been provided. For example, asking the program to write “Cliff’s Notes” of a text would normally require an explanation that this means a summary. But GPT-3 has gone through enough training to understand the reference and adapt itself to produce the expected response.

“GPT-3’s systemic approach to language analysis and production makes it a promising candidate for identifying the subtle speech characteristics that may predict the onset of dementia,” said Felix Agbavor, a doctoral researcher in the School and the lead author of the paper. “Training GPT-3 with a massive dataset of interviews – some of which are with Alzheimer’s patients – would provide it with the information it needs to extract speech patterns that could then be applied to identify markers in future patients.”

Seeking Speech Signals

The researchers tested their theory by training the program with a set of transcripts from a portion of a dataset of speech recordings compiled with the support of the National Institutes of Health specifically for the purpose of testing natural language processing programs’ ability to predict dementia. The program captured meaningful characteristics of the word-use, sentence structure and meaning from the text to produce what researchers call an “embedding” – a characteristic profile of Alzheimer’s speech.

They then used the embedding to re-train the program — turning it into an Alzheimer’s screening machine. To test it they asked the program to review dozens of transcripts from the dataset and decide whether or not each one was produced by someone who was developing Alzheimer’s.
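In outline, this screening step resembles the following sketch, which assumes transcript embeddings have already been obtained from a GPT-3 embedding endpoint (random placeholder vectors stand in for them here); the study's actual models, data and evaluation protocol differ.

```python
# Minimal sketch: classify transcript embeddings as Alzheimer's vs. control.
# `embeddings` is a placeholder for vectors returned by a GPT-3 embedding
# endpoint; `labels` is a placeholder for the diagnostic labels (1 = AD).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(120, 1536))   # placeholder GPT-3 embeddings
labels = rng.integers(0, 2, size=120)       # placeholder diagnoses

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, embeddings, labels, cv=5, scoring="accuracy")
print(f"Cross-validated screening accuracy: {scores.mean():.2f}")
```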

Running two of the top natural language processing programs through the same paces, the group found that GPT-3 outperformed both: it more accurately identified Alzheimer’s examples, more accurately identified non-Alzheimer’s examples and missed fewer cases.

A second test used GPT-3’s textual analysis to predict patients’ scores on the Mini-Mental State Exam (MMSE), a common test for gauging the severity of dementia.

The team then compared GPT-3’s prediction accuracy to that of an analysis using only the acoustic features of the recordings, such as pauses, voice strength and slurring, to predict the MMSE score. GPT-3 proved to be almost 20% more accurate in predicting patients’ MMSE scores.
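Under the same placeholder assumptions as the sketch above, the MMSE test is a regression rather than a classification:

```python
# Sketch of the second test: regress MMSE scores (0-30) on the same
# transcript embeddings; all values below are placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
embeddings = rng.normal(size=(120, 1536))   # placeholder GPT-3 embeddings
mmse = rng.uniform(5, 30, size=120)         # placeholder clinician MMSE scores

predicted = cross_val_predict(Ridge(alpha=1.0), embeddings, mmse, cv=5)
rmse = np.sqrt(np.mean((predicted - mmse) ** 2))
print(f"Cross-validated error: {rmse:.1f} MMSE points")
```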

“Our results demonstrate that the text embedding, generated by GPT-3, can be reliably used to not only detect individuals with Alzheimer’s Disease from healthy controls, but also infer the subject’s cognitive testing score, both solely based on speech data,” they wrote. “We further show that text embedding outperforms the conventional acoustic feature-based approach and even performs competitively with fine-tuned models. These results, all together, suggest that GPT-3 based text embedding is a promising approach for AD assessment and has the potential to improve early diagnosis of dementia.”

Continuing the Search

To build on these promising results, the researchers are planning to develop a web application that could be used at home or in a doctor’s office as a pre-screening tool.

“Our proof-of-concept shows that this could be a simple, accessible and adequately sensitive tool for community-based testing,” Liang said. “This could be very useful for early screening and risk assessment before a clinical diagnosis.”

Hawai‘i earthquake swarm caused by magma moving through ‘sills’

Peer-Reviewed Publication

CALIFORNIA INSTITUTE OF TECHNOLOGY

IMAGE: More than 192,000 small seismic events, each represented here as a single black dot, reveal in 3D the shape and location of the sills beneath Hawai‘i.

CREDIT: Caltech/Wilding et al.

Magma pumping through a massive complex of flat, interconnected chambers deep beneath volcanoes in Hawai‘i appears to be responsible for an unexplained swarm of tiny earthquakes felt on the Big Island over the past seven years, in particular since the 2018 eruption and summit collapse of Kīlauea.

The pancake-like chambers, called “sills,” channel magma laterally and upward to recharge the magma chambers of at least two of the island’s active volcanoes: Mauna Loa and Kīlauea. Using a machine-learning algorithm, geoscientists at Caltech were able to use data gathered from seismic stations on the island to chart out the structure of the sills, mapping them with never-before-seen precision and demonstrating that they link the volcanoes. 

Further, the researchers were able to monitor the progress of the magma as it pushed upward through the sills, and to link that to Kīlauea’s activity. They analyzed a period that ended in May 2022, so it is not yet possible to say whether they can spot the magma flow that led to the November 27 eruption of Mauna Loa, but the team intends to look at that next. 

“Before this study, we knew very little about how magma is stored and transported deep beneath Hawai‘i. Now, we have a high-definition map of an important part of the plumbing system,” says John D. Wilding (MS ’22), Caltech graduate student and co-lead author of a paper describing the research that was published in the journal Science on December 22. The study represents the first time scientists have been able to directly observe a magma structure located this deep underground. “We know pretty well what the magma is doing in the shallow part of the system above 15-kilometer depth, but until now, everything below that has just been the subject of speculation,” Wilding says.

With data on more than 192,000 small temblors (less than magnitude 3.0) that occurred over the 3.5-year period from 2018 to mid-2022, the team was able to map out more than a dozen sills stacked on top of one another. The largest is about 6 kilometers by 7 kilometers. The sills tend to be around 300 meters thick, and are separated by a distance of about 500 meters.

“Volcanic earthquakes are typically characterized by their small magnitude and frequent occurrence during magmatic unrest,” says Weiqiang Zhu, postdoctoral scholar research associate in geophysics and co-lead author of the Science paper. “We are excited about recent advances in machine learning, particularly deep learning, which are helping to accurately detect and locate these small seismic signals recorded by dense seismic networks. Machine learning can be an effective tool for seismologists to analyze large archived datasets, identify patterns in small earthquakes, and gain insights into underlying structures and physical mechanisms.”

Wilding and Zhu worked with Jennifer Jackson, the William E. Leonhard Professor of Mineral Physics, and Zachary Ross, assistant professor of geophysics and William H. Hurt Scholar, who are both senior authors on the paper. In October, Ross was named one of the 2022 Packard Fellows for Science and Engineering, which will provide funding to support this research moving forward.

The team did not have to place a single piece of hardware to do the study; rather, they relied on data gathered by United States Geological Survey seismometers on the island. However, the machine-learning algorithm developed in Ross’s lab gave them an unprecedented ability to separate signal from noise—that is, to clearly identify earthquakes and their locations, which create a sort of 3-D “point cloud” that illustrates the sills. 

"It’s analogous to taking a CT [computerized tomography] scan, the way a doctor can visualize the inside of a patient’s body," Ross says. “But instead of using controlled sources with X-rays, we use passive sources, which are earthquakes.”

The team was able to catalog about 10 times as many earthquakes as was previously possible, and they were able to pinpoint their locations with a margin of error of less than a kilometer; previous locations were determined with error margins of a few kilometers. The work was accomplished using a deep-learning algorithm that had been taught to spot earthquake signals using a training dataset of millions of previously identified earthquakes. Even with small earthquakes, which might not stand out to the human eye on a seismogram, the algorithm finds patterns that distinguish quakes from background noise. Ross previously used the technique to reveal how a naturally occurring injection of underground fluids drove a four-year-long earthquake swarm near Cahuilla, California.
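As a schematic of how such detectors work (a generic sketch, not the Ross lab's actual network or training data), a small 1-D convolutional model can label fixed-length, three-component waveform windows as earthquake or noise:

```python
# Generic deep-learning quake detector: a 1-D CNN over three-component
# seismogram windows. Architecture and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class QuakeDetector(nn.Module):
    def __init__(self, n_samples=400):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Flatten(),
            nn.Linear(32 * (n_samples // 16), 2),  # logits: quake vs. noise
        )

    def forward(self, x):  # x: (batch, 3 components, n_samples)
        return self.net(x)

model = QuakeDetector()
windows = torch.randn(8, 3, 400)  # placeholder waveform windows
print(model(windows).shape)       # torch.Size([8, 2])
```

In practice such a model is trained on large labeled catalogs (here, millions of previously identified earthquakes) before being run over continuous station data.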

The sills appear to be at depths ranging from around 36–43 kilometers. (For reference, the deepest humans have ever drilled into the Earth is a little over 12 kilometers.) Scientists have long known that a phase boundary is present at a depth of around 35 kilometers beneath Hawai‘i; at such a phase boundary, rock of the same chemical composition transitions from one group of minerals above to a different group below. Studying the new data, Jackson recognized that the transitions occurring in this rock coupled to magma injections could host chemical reactions and processes that stress or weaken the rock, possibly explaining the existence of the sills—and by extension, the active seismicity.

“The transition of spinel to plagioclase within the lherzolite rock may be occurring through diffuse migration, entrapment, and crystallization of magma melts within the shallow lithospheric mantle underneath Hawai‘i,” Jackson says. “Such assemblages can exhibit transient weakening arising from coupled deformation and chemical reactions, which could facilitate crack growth or fault activation. Recurrent magma injections would continuously modulate grain sizes in the sill complex, prolonging conditions for seismic deformation in the rock. This process could exploit lateral variations in strength to produce and sustain the laterally compact seismogenic features that we observe.”

It is unclear whether the sills beneath the Big Island are unique to Hawai‘i or whether this sort of subvolcanic structure is common, the researchers say. “Hawai‘i is the best-monitored island in the world, with dozens of seismic stations giving us a window into what’s going on beneath the surface. We have to wonder, at how many other locations is this happening?” Wilding says.

Also unclear is exactly how the magma’s movement triggers the tiny quakes. The earthquakes map out the structures, but the actual mechanism of earthquakes is not well understood. It could be that the injection of a lot of magma into a space creates a lot of stress, the researchers say.

The paper is titled “The magmatic web beneath Hawai‘i.” This research was funded by the National Science Foundation and JPL, which Caltech manages for NASA. Computations for this research were conducted in the Resnick High Performance Computing Center, a facility supported by the Resnick Sustainability Institute at Caltech.

Michigan State University researchers uncover potential climate change-nutrition connection in plant metabolism

Peer-Reviewed Publication

MICHIGAN STATE UNIVERSITY

IMAGE: Michigan State University Assistant Professor Berkley Walker

CREDIT: Joerg Mueller


Highlights:

  • In a new paper published in the journal Nature Plants, MSU researchers show that increasing atmospheric carbon could affect how plants produce proteins and other nutrients.
     
  • Elevated carbon dioxide levels, which help drive global warming, were found to affect how plants metabolize carbon, including a process known to make amino acids, which are the building blocks of proteins.
     
  • Although more work is required to understand the ultimate impact on a plant’s protein content, the researchers have introduced a new way to probe plant metabolism in a changing environment.

EAST LANSING, Mich. – A new study from researchers at Michigan State University underscores that we still have much to learn regarding how plants will function — and how nutritious they will be — as more carbon enters our atmosphere.

That same influx of carbon is helping drive climate change, meaning this new work, published in the journal Nature Plants, may be revealing an unexpected way this global phenomenon is reshaping nature and our lives.

“What we’re seeing is that there’s a link between climate change and nutrition,” said Berkley Walker, an assistant professor in the Department of Plant Biology whose research team authored the new report. “This is something we didn’t know we’d be looking into when we started.”

Although elevated levels of carbon dioxide can be good for photosynthesis, Walker and his lab also showed that increasing CO2 levels can tinker with other metabolic processes in plants. And these lesser-known processes could have implications for other functions like protein production.

“Plants like CO2. If you give them more of it, they’ll make more food and they’ll grow bigger,” said Walker, who works in the College of Natural Science and the MSU-Department of Energy Plant Research Laboratory. “But what if you get a bigger plant that has a lower protein content? It’ll actually be less nutritious.”

It’s too early to say for certain whether plants face a low-protein future, Walker said. But the new research brings up surprising questions about how plants will make and metabolize amino acids — which are protein building blocks — with more carbon dioxide around.

And the harder we work to address those questions now, the better prepared we will be to confront the future, said the report’s first author and postdoctoral scholar, Xinyu Fu.

“The more we know about how plants use different metabolic pathways under fluctuating environments, the better we can find ways to manipulate the metabolic flow and ultimately engineer plants to be more efficient and nutritious,” Fu said.

IMAGE: Michigan State University researchers may have found a link between climate change and plant nutrition.

CREDIT: Hermann Schachner via Wikimedia Commons (plant cells)/Mike Erskine via Unsplash (arid land)

If at first plants don’t succeed, there’s photorespiration

The basics of photosynthesis are famously straightforward: Plants take water and carbon dioxide from their surroundings and, with power from the sun’s light, turn those ingredients into sugar and oxygen.

But sometimes this process starts off on the wrong foot. The enzyme responsible for collecting carbon dioxide can instead grab onto oxygen molecules.

This produces a byproduct that, left unchecked, would essentially choke out the plant, Walker said. Thankfully, however, plants have evolved a process called photorespiration that clears out the harmful byproduct and lets the enzyme take another swing at photosynthesis.
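In simplified chemical terms, the two competing reactions of that enzyme (rubisco) can be written as follows; water and proton bookkeeping are omitted:

```latex
% Carboxylation feeds photosynthesis; oxygenation creates the
% 2-phosphoglycolate byproduct that photorespiration must recycle.
\begin{align*}
\text{RuBP} + \mathrm{CO_2} &\longrightarrow 2\,(\text{3-phosphoglycerate})\\
\text{RuBP} + \mathrm{O_2}  &\longrightarrow \text{3-phosphoglycerate} + \text{2-phosphoglycolate}
\end{align*}
```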

Photorespiration is not nearly as famous as photosynthesis, and it sometimes gets a bad rap because it takes up carbon and energy that could be used for making food. Inefficient though it may be, photorespiration is better than the alternative.

“It’s kind of like recycling,” Walker said. “It’d be great if we didn’t need it, but as long as we’re generating waste, we might as well use it.”

To do its job, photorespiration incorporates carbon into other molecules or metabolites, some of which are amino acids, the precursors to proteins.

“So photorespiration isn’t just recycling, it might be upcycling,” Walker said.

There’s a reason Walker used “might be” instead of “is” in his statement. Photorespiration still holds some mysteries, and the fate of its metabolites is one of those.

Metabolic sleuthing

When it comes to where amino acids produced by photorespiration end up, one established theory was that they remained in a closed loop. That means that metabolites made in the process are constrained to a select group of organelles and biochemical processes.

Now, the MSU researchers have shown that’s not always the case. In particular, they’ve shown that the amino acids glycine and serine are able to escape the confines of that closed loop.

What ultimately becomes of the compounds is a lingering question and one that could become increasingly important as carbon dioxide levels rise. 

Plants photorespire less when more carbon dioxide is available, so scientists will need to probe deeper into how plants produce and use these amino acids overall, Walker said.

For the time being, though, he and his team are excited they’ve reached this finding, which was no trivial feat. It involved feeding the plants a special type of carbon dioxide in which the carbon atoms had one more neutron than the carbon typically found in the atmosphere. 

A neutron is a subatomic particle and, as such, it has a very small mass. If you took a paper clip, cut it into a trillion pieces and then cut one of those pieces into a trillion more, the smallest pieces would have roughly the same mass as a neutron.
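The analogy holds up numerically if a paper clip is taken to weigh about a gram:

```python
# Quick check of the paper-clip analogy: one gram divided by a trillion,
# then by a trillion again, lands at the neutron's order of magnitude.
paper_clip_g = 1.0                      # assumed paper-clip mass, grams
piece_g = paper_clip_g / 1e12 / 1e12    # 1e-24 g
neutron_g = 1.67492749804e-24           # CODATA neutron mass, grams
print(piece_g, neutron_g)               # same order of magnitude
```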

But the MSU collaboration had the tools and expertise needed to measure that subtle difference in mass. Those measurements, coupled with computational modeling, enabled the researchers to follow that slightly beefy carbon and see how plants integrate it at different metabolic stages when conditions favor photorespiration.

“This new technique enabled a better and more quantitative understanding of important metabolic pathways in plants,” Fu said. “With the new flux approach, we have begun to reveal the dynamic state of metabolic pathways and understand metabolism as a whole system.”

“I said that my lab could do this on my job application, but I wasn’t totally sure it would work,” said Walker, who joined MSU in 2018. The fact that it did work is a credit to the team on the paper, which also includes graduate student Luke Gregory and research assistant professor Sean Weise.

But other colleagues at MSU also helped, including University Distinguished Professor Thomas Sharkey, Professor Yair Shachar-Hill and the team at the Mass Spectrometry and Metabolomics Core.

“Coming to MSU uniquely enabled this to happen,” Walker said.

This work was supported by the U.S. Department of Energy Office of Science, with contributions from the MSU Institute for Cyber-Enabled Research, the Great Lakes Bioenergy Research Center and the National Science Foundation.

By Matt Davenport 

Michigan State University has been advancing the common good with uncommon will for more than 165 years. One of the world's leading research universities, MSU pushes the boundaries of discovery to make a better, safer, healthier world for all while providing life-changing opportunities to a diverse and inclusive academic community through more than 200 programs of study in 17 degree-granting colleges.

For MSU news on the Web, go to MSUToday. Follow MSU News on Twitter at twitter.com/MSUnews.

HAARP to bounce signal off asteroid in NASA experiment

Business Announcement

UNIVERSITY OF ALASKA FAIRBANKS

An experiment to bounce a radio signal off an asteroid on Dec. 27 will serve as a test for probing a larger asteroid that in 2029 will pass closer to Earth than the many geostationary satellites that orbit our planet.

The High-frequency Active Auroral Research Program research site in Gakona will transmit radio signals to asteroid 2010 XC15, which could be about 500 feet across. The University of New Mexico Long Wavelength Array near Socorro, New Mexico, and the Owens Valley Radio Observatory Long Wavelength Array near Bishop, California, will receive the signal.

This will be the first use of HAARP to probe an asteroid.

“What’s new and what we are trying to do is probe asteroid interiors with long wavelength radars and radio telescopes from the ground,” said Mark Haynes, lead investigator on the project and a radar systems engineer at NASA’s Jet Propulsion Laboratory in Southern California. “Longer wavelengths can penetrate the interior of an object much better than the radio wavelengths used for communication.”

Knowing more about an asteroid’s interior, especially of an asteroid large enough to cause major damage on Earth, is important for determining how to defend against it.

“If you know the distribution of mass, you can make an impactor more effective, because you’ll know where to hit the asteroid a little better,” Haynes said.

Many programs exist to quickly detect asteroids, determine their orbit and shape and image their surface, either with optical telescopes or the planetary radar of the Deep Space Network, NASA’s network of large and highly sensitive radio antennas in California, Spain and Australia.

Those radar-imaging programs use signals of short wavelengths, which bounce off the surface and provide high-quality external images but don’t penetrate an object.

HAARP will transmit a continually chirping signal to asteroid 2010 XC15 at slightly above and below 9.6 megahertz (9.6 million cycles per second). The chirp will repeat at two-second intervals. Distance will be a challenge, Haynes said, because the asteroid will be twice as far from Earth as the moon is.
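For a back-of-envelope feel for the geometry and the waveform, the sketch below computes the round-trip signal delay at twice the lunar distance and synthesizes a short slice of a linear chirp around 9.6 MHz. The sample rate and sweep bandwidth are assumptions; HAARP's actual waveform parameters are not given in full here.

```python
# Round-trip delay and an illustrative chirp near 9.6 MHz.
import numpy as np
from scipy.signal import chirp

c = 299_792_458.0              # speed of light, m/s
distance_m = 2 * 384_400e3     # ~2x the mean Earth-moon distance (per article)
print(f"Round-trip delay: {2 * distance_m / c:.1f} s")  # about 5.1 s

fs = 40e6                              # sample rate, Hz (assumption)
t = np.arange(0, 2e-3, 1 / fs)         # first 2 ms of the 2-second sweep
signal = chirp(t, f0=9.58e6, f1=9.62e6, t1=2.0, method="linear")
```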

The University of Alaska Fairbanks operates HAARP under an agreement with the Air Force, which developed and owned HAARP but transferred the research instruments to UAF in August 2015. 

The test on 2010 XC15 is yet another step toward the globally anticipated 2029 encounter with asteroid Apophis. It follows tests in January and October in which the moon was the target of a HAARP signal bounce.

Apophis was discovered in 2004 and will make its closest approach to Earth on April 13, 2029, when it comes within 20,000 miles. Geostationary satellites orbit Earth at about 23,000 miles. The asteroid, which NASA estimated to be about 1,100 feet across, was initially thought to pose a risk to Earth in 2068, but its orbit has since been better projected by researchers.

The test on 2010 XC15 and the 2029 Apophis encounter are of general interest to scientists who study near-Earth objects. But planetary defense is also a key research driver.

“The more time there is before a potential impact, the more options there are to try to deflect it,” Haynes said.

NASA says an automobile-sized asteroid hits Earth’s atmosphere about once a year, creating a fireball and burning up before reaching the surface.

About every 2,000 years a meteoroid the size of a football field hits Earth. Those can cause a lot of damage. And as for wiping out civilization, NASA says an object large enough to do that strikes the planet once every few million years.

NASA first successfully redirected an asteroid on Sept. 26, when its Double Asteroid Redirection Test mission, or DART, collided with Dimorphos. That asteroid is an orbiting moonlet of the larger Didymos asteroid.

The DART collision altered the moonlet’s orbit time by 32 minutes.

The Dec. 27 test could reveal great potential for the use of asteroid sensing by long wavelength radio signals. Approximately 80 known near-Earth asteroids passed between the moon and Earth in 2019, most of them small and discovered near closest approach.

“If we can get the ground-based systems up and running, then that will give us a lot of chances to try to do interior sensing of these objects,” Haynes said.

The National Science Foundation is funding the work through its award to the Geophysical Institute for establishing the Subauroral Geophysical Observatory for Space Physics and Radio Science in Gakona.

“HAARP is excited to partner with NASA and JPL to advance our knowledge of near-Earth objects,” said Jessica Matthews, HAARP’s program manager.

New analysis maps out impacts of marine chokepoint closures

Peer-Reviewed Publication

DUKE UNIVERSITY

IMAGE: Shipping vessels on the Suez Canal

CREDIT: Fabio via Flickr

DURHAM, N.C.— When the mega container ship Ever Given ran aground and blocked the Suez Canal for six days in 2021, it caused disruptions in international trade for weeks and in global supply chains for months afterward.

The far-reaching impacts drove home how important the Suez Canal and other marine chokepoints, or shipping straits, are to global economic security and how blindsided and ill-prepared for a closure businesses and governments can be.

A new analysis by a Duke University researcher should help.

“This study provides a forward look at the impacts that could result if any of the 11 busiest marine chokepoints was shut down because of geopolitical instability, piracy, marine accidents or other causes,” said Lincoln F. Pratson, Gendell Family Professor of Energy and Environment at Duke’s Nicholas School of the Environment.

Pratson used GIS data of global marine shipping lanes along with international trade data from 2019 to simulate closure scenarios in the Panama Canal, the Strait of Gibraltar, the English Channel, the Danish Straits, the Bosporus Strait, the Suez Canal, the Bab el Mandeb Strait between Yemen and East Africa, the Strait of Hormuz, the Malacca Strait between Malaysia and Indonesia, the South China Sea, and the East China Sea.

The simulations allowed him to estimate not only the types and amounts of trade that would be disrupted by each chokepoint’s closure, but also how the closure could lead to the redirection of trade flows globally.
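The rerouting logic can be pictured with a toy network: remove the chokepoint and recompute the shortest path. The ports, links and distances below are invented for illustration and are not from the study's GIS data.

```python
# Toy closure scenario: compare route lengths before and after removing
# a chokepoint node. Distances in nautical miles are rough placeholders.
import networkx as nx

g = nx.Graph()
g.add_edge("Rotterdam", "Suez Canal", nm=3_300)
g.add_edge("Suez Canal", "Singapore", nm=5_000)
g.add_edge("Rotterdam", "Cape of Good Hope", nm=6_800)
g.add_edge("Cape of Good Hope", "Singapore", nm=5_600)

before = nx.shortest_path_length(g, "Rotterdam", "Singapore", weight="nm")
g.remove_node("Suez Canal")  # simulate the closure
after = nx.shortest_path_length(g, "Rotterdam", "Singapore", weight="nm")
print(before, after, after - before)  # extra distance forced by the closure
```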

“When international trade through one of these chokepoints is impeded, transiting ship traffic gets backed up and subsequent shipping often gets redirected along longer routes to avoid the blockage. This leads to increased shipping times and costs,” Pratson said.

“Ports that initially experienced lulls in cargo-handling because of the closure become backlogged as the delayed cargoes arrive along with ensuing on-time shipments.”

The economic impacts of these changes can reverberate for months as international supply chains for everything from electronics, vehicles, and pharmaceuticals to crude oil and food are disrupted, and just-in-time delivery of critical goods is no longer a bankable bet, he said.

“Having a pretty good idea of what to expect if there’s a prolonged closure of shipping in one of these 11 chokepoints would help governments, businesses and seaport managers develop strategies for reducing potential shipping and port delays or losses,” Pratson said.

Cargo ships move about 80% of all international trade by volume and about 70% of it by value. Much of this trade passes through one or more marine chokepoints en route to its destination. Pratson estimates that the value of trade through a number of these chokepoints, such as the Malacca Strait and South China Sea, rivals the GDPs of the world’s largest economies.

Furthermore, some chokepoints, such as the Danish Straits, Bosporus Strait, Strait of Hormuz, East China Sea, and South China Sea, provide the only access to maritime trade for a large number of countries.

If these chokepoints are shut down or if traffic through them is reduced for a prolonged period, as has happened in the Bosporus Strait during the war in Ukraine, the disruption can block or severely limit affected countries’ ability to import or export goods their economies and world markets alike depend on. This can cause volatility in global supplies and prices and spur a scramble by nations and businesses on both sides of the chokepoint to secure alternative sources or markets for critical goods.

Other chokepoints, such as the Suez and Panama canals, provide shortcuts between ocean basins that significantly reduce shipping costs and times.

“The closure scenarios suggest that if chokepoints used to move trade between the Atlantic and Indian oceans are closed for an extended period of time, more shipping may be redirected towards the Panama Canal than it can handle, which would end up causing a second source of shipping congestion, cargo delays and further rerouting,” Pratson said.

“Given that the operation costs of container ships have been running around $2 million a day, having an idea of how closure of a chokepoint could affect maritime trade globally should be useful information to have in advance,” he said.

Pratson published his peer-reviewed study Dec. 21 in the journal Communications in Transportation Research.

Funding for his study came from the U.S. Department of Defense Minerva Program.

CITATION: “Assessing Impacts to Maritime Shipping from Marine Chokepoint Closures,” Lincoln F. Pratson; Communications in Transportation Research, Dec. 21, 2022. DOI: https://doi.org/10.1016/j.commtr.2022.100083


What drives the decline in East Asian dust activity over the past two decades?

Peer-Reviewed Publication

INSTITUTE OF ATMOSPHERIC PHYSICS, CHINESE ACADEMY OF SCIENCES

IMAGE: Suspended dust particles made the Earth appear like Mars. Photo taken March 15, 2021, in Beijing.

CREDIT: Zhi Zheng

Dust storms can be miles long and thousands of feet high. They can cause a range of environmental consequences, including severe air pollution, land degradation, and damage to crops and livestock.

Dust entrained into the atmosphere serves as a major aerosol type, exerting effects on the weather and climate system via aerosol-radiation-cloud interactions and delivering nutrients from one continent to other continents and the oceans.

Recently, a research team led by Dr. Chenglai Wu from the Institute of Atmospheric Physics of the Chinese Academy of Sciences investigated the factors driving the recent decline of East Asian dust activity.

The study was published in Nature Communications.

"It is important to understand the evolution of dust activity in the past and project its changes in the future," said Wu. "In fact, because of its natural origin, dust can serve as a mirror to understand the evolution of Earth System."

In recent decades, East Asian dust activity has decreased greatly. In particular, there is a strong decline of dust activity after a dusty period of 2000-2002. What are the reasons for the recent decline of dust activity in East Asia? How will dust activity change in the future?

"Dust emission depends on various factors including surface winds, soil conditions, vegetation cover, and human disturbances," said Prof. Zhaohui Lin, co-corresponding author of the study.

The essential part of the study was to use a physically based dust emission model (DuEM) to reconstruct the dust emission flux from 2001 to 2017. The model reproduced the interannual variations and decreasing trend of dust activity in East Asia over that period well.

Furthermore, the researchers conducted several sensitivity experiments in which one individual factor varied during 2001-2017 while the other factors were held fixed at their levels in the beginning year (i.e., 2001). By comparing these experiments, they isolated the impacts of surface wind, soil moisture, and vegetation cover on the temporal variations of dust emission during 2001-2017.
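The factor-isolation logic reads roughly like the sketch below, in which a toy function stands in for DuEM and the driver time series are invented placeholders with the qualitative trends the study describes.

```python
# One-factor-at-a-time attribution: vary a single driver over 2001-2017
# while holding the others at their 2001 values. Purely illustrative.
import numpy as np

def dust_emission(wind, soil_moisture, vegetation):
    # Toy stand-in for DuEM: emission rises steeply with wind and falls
    # with wetter soil and denser vegetation (form is an assumption).
    return np.maximum(wind**3 * (1 - soil_moisture) * (1 - vegetation), 0)

years = np.arange(2001, 2018)
wind = np.linspace(7.0, 6.4, years.size)    # weakening surface wind
soil = np.linspace(0.20, 0.26, years.size)  # wetting soils
veg = np.linspace(0.30, 0.38, years.size)   # greening

runs = {
    "wind only": dust_emission(wind, soil[0], veg[0]),
    "soil only": dust_emission(wind[0], soil, veg[0]),
    "vegetation only": dust_emission(wind[0], soil[0], veg),
}
for name, run in runs.items():
    print(name, f"2001-2017 change: {run[-1] - run[0]:+.1f}")
```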

"We find that the weakening of surface wind and the increase of vegetation cover and soil moisture all contribute to such dust emission reduction. The relative contributions of these three factors are 46%, 30%, and 24%, respectively," said Wu. 

"Overall, changes in meteorological factors are the main drivers for recent decline in dust activity over East Asia," said Wu.

In fact, the weakening of surface wind can be ascribed to reduced meridional temperature gradients due to the polar amplification of global warming. The weakening can also be partly explained by internal variability in the climate system, such as the Pacific Decadal Oscillation (PDO). The increase of vegetation cover is closely related to the increase of CO2 concentrations and temperature, as well as ecological restoration in China. Soil wetting may be mainly ascribed to the increase of precipitation in the source regions.

"We have got to take into account the evolution of all these relevant factors for projecting future dust change in this region," suggested Wu.

This study provides a reliable basis for future prediction of dust storm activity. The simulated dust emission data are available in the Science Data Bank repository.

A reactive strip is developed to detect and quantify allergens in foods quickly and easily

A team from the UPV, UV, and the CIBER on Rare Diseases has prototyped the test. "We want anyone to be able to analyze a food just before eating it," say the researchers


Peer-Reviewed Publication

UNIVERSITAT POLITÈCNICA DE VALÈNCIA

IMAGE: A reactive strip developed to detect and quantify allergens in foods quickly and easily

CREDIT: UPV

"Food allergy or hypersensitivity is estimated to affect about 520 million people worldwide. These reactions occur mainly through the consumption of foods containing trace allergens. Therefore, identifying and quantifying them before the food is consumed is essential, and this is what the test we have developed allows," says Sergi Morais, professor in the Department of Chemistry at the Universitat Politècnica de València and researcher at the Inter-University Institute of Molecular Recognition and Technological Development (IDM).

The prototype has been developed as a proof of concept for simultaneously detecting almond and peanut allergens and has been validated with everyday commercial foods such as biscuits and energy bars.

Among its advantages, the researchers highlight the reliability of the test, which contains multiple internal controls and calibrators integrated into a miniaturised 36-point array. "With microarray technology, we perform 36 assays in a single step. The derived information allows us to identify whether the result is a true positive or negative. In addition, with the internal calibrators and the smartphone, we can quantify with high precision traces of allergen in the food," says Ángel Maquieira, full professor in the Department of Chemistry at the Universitat Politècnica de València.
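The quantification step described here, reading a sample spot against internal calibrators, is commonly done in immunoassays by fitting a four-parameter logistic curve and inverting it. The sketch below illustrates that generic approach with invented concentrations and intensities; it is not the team's actual calibration routine.

```python
# Generic immunoassay calibration: fit a 4-parameter logistic to the
# calibrator spots, then invert it to quantify a sample spot.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(c, bottom, top, ec50, slope):
    # Signal increases with concentration in this parameterization
    return bottom + (top - bottom) / (1 + (ec50 / c) ** slope)

cal_conc = np.array([0.1, 0.5, 2.0, 10.0, 50.0])      # ppm (invented)
cal_signal = np.array([5.0, 18.0, 52.0, 88.0, 98.0])  # intensities (invented)
params, _ = curve_fit(logistic4, cal_conc, cal_signal, p0=[0, 100, 2, 1])

def quantify(signal, bottom, top, ec50, slope):
    # Invert the fitted curve: intensity -> concentration
    return ec50 * ((top - bottom) / (signal - bottom) - 1) ** (-1 / slope)

print(f"Spot at intensity 60 -> {quantify(60.0, *params):.2f} ppm allergen")
```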

Regarding the extraction method, the UPV, UV, and CIBERER team stresses its simplicity, which means anyone can carry it out at any time.

"Current extraction methods consist of multiple steps and require sophisticated equipment for grinding, degreasing, extraction, and purification of allergens. Therefore, the analysis is carried out in qualified laboratories. The aim is to decentralise the analysis, as has been done with the COVID-19 test. We want anyone to be able to analyse a food just before consuming it," adds Sergi Morais.

The extraction method developed is based on the use of a portable grinder, which grinds and filters the sample in a single step; 5 mL of a solution is then added to extract the allergen, and, once the sample is prepared, the test strip is immersed in the solution. In just 5 minutes, the result is obtained and can be read with a mobile phone.

"At an estimated cost of €1 per strip, the developed test has great commercial potential, for example, in the food sector for rapid identification of allergens in situ and in the pharmaceutical sector to quantify the potency of allergenic extracts used in allergy testing," says Amadeo Sena, a postdoctoral researcher at the Inter-University Institute for Molecular Recognition and Technological Development (IDM).

Future development

Looking to the future, the UPV, UV, and CIBERER team points out that, given the characteristics of the test strip, it could easily be adapted for other allergens, as the group has specific antibodies for a wide range of allergens and biomarkers. “Our challenge is to develop a test for the simultaneous quantification of the 14 allergens that must be declared according to Royal Decree 126/2015,” concludes Patricia Casino, a researcher at the Instituto Universitario de Biotecnología i Biomedicina (BIOTECMED) of the Universitat de València and at the CIBERER.