Thursday, May 06, 2021

Forest fires drive expansion of savannas in the heart of the Amazon

Researchers analyzed the effects of wildfires on plant cover and soil quality over the last 40 years. The findings show that the forest is highly vulnerable even in well-conserved areas far from the 'deforestation arc'.

FUNDAÇÃO DE AMPARO À PESQUISA DO ESTADO DE SÃO PAULO

Research News

IMAGE

IMAGE: FOREST DESTROYED BY FIRE ON THE MIDDLE NEGRO RIVER, SHOWING THE EFFECTS OF WILDFIRES ON PLANT COVER AND SOIL QUALITY.

CREDIT: BERNARDO MONTEIRO FLORES

Agência FAPESP – White-sand savannas are expanding in the heart of the Amazon as a result of recurring forest fires, according to a study published in the journal Ecosystems.

The study was supported by FAPESP, and conducted by Bernardo Monteiro Flores, currently a postdoctoral fellow in ecology at the Federal University of Santa Catarina (UFSC) in Brazil, and Milena Holmgren, a professor in the Department of Environmental Sciences at Wageningen University in the Netherlands.

“The edges of the Amazon Rainforest have long been considered the most vulnerable parts owing to expansion of the agricultural frontier. This degradation of the forest along the so-called ‘deforestation arc’ [a curve that hugs the southeastern edge of the forest] continues to occur and is extremely troubling. However, our study detected the appearance of savannas in the heart of the Amazon a long way away from the agricultural frontier,” Flores told Agência FAPESP.

The authors studied an area of floodplains on the middle Negro River near Barcelos, a town about 400 km upstream of Manaus, the capital of Amazonas state, where areas of white-sand savanna are expanding, although forest ecosystems still predominate. They blame the increasing frequency and severity of wildfires in the wider context of global climate change.

“We mapped 40 years of forest fires using satellite images, and collected detailed information in the field to see whether the burned forest areas were changing,” Flores said. “When we analyzed tree species richness and soil properties at different times in the past, we found that forest fires had killed practically all trees so that the clayey topsoil could be eroded by annual flooding and become increasingly sandy.”

They also found that as burnt floodplain forest naturally recovers, there is a major shift in the type of vegetation, with native herbaceous cover expanding, forest tree species disappearing, and white-sand savanna tree species becoming dominant.

Less resilient

Where do the savanna tree species come from? According to Flores, white-sand savannas are part of the Amazon ecosystem, covering about 11% of the biome. They are ancient savannas and very different from the Cerrado with its outstanding biodiversity, yet even so they are home to many endemic plant species. They are called campinas by the local population. Seen from above, the Amazon is an ocean of forest punctuated by small islands of savanna. The seeds of savanna plants are distributed by water, fish and birds, and are more likely than forest species to germinate when they reach a burnt area with degraded soil, repopulating the area concerned.

“Our research shows native savanna cover is expanding and may continue expanding in the Amazon. Not along the ‘deforestation arc’, where exotic grasses are spreading, but in remote areas throughout the basin where white-sand savannas already exist,” Flores said.

It is important to stress that in the Amazon, floodplain forest is far less resilient than upland terra firme forest. It burns more easily, after which its topsoil is washed away and degrades much more rapidly. “Floodplain forest is the ‘Achilles heel’ of the Amazon,” Holmgren said. “We have field evidence that if the climate becomes drier in the Amazon and wildfires become more severe and frequent, floodplain forest will be the first to collapse.”

These two factors – a drier climate, and more severe and frequent fires – are already in play as part of the ongoing climate change crisis. The study shows that wildfires in the middle Negro area during the severe 2015-16 El Niño burned down an area seven times larger than the total area destroyed by fire in the preceding 40 years.

“The additional loss of floodplain forest could result in huge emissions of carbon stored in trees, soil and peatlands, as well as reducing supplies of resources used by local people, such as fish and forest products. The new discoveries reinforce the urgency of defending remote forest areas. For example, a fire management program should be implemented to reduce the spread of wildfires during the dry season,” Flores said.

The article “White-sand savannas expand at the core of the Amazon after forest wildfires” is at: link.springer.com/article/10.1007%2Fs10021-021-00607-x.

 

Johns Hopkins scientists model Saturn's interior

Researchers simulate conditions necessary for planet's unique magnetic field

JOHNS HOPKINS UNIVERSITY

Research News

IMAGE

IMAGE: THE MAGNETIC FIELD OF SATURN SEEN AT THE SURFACE.

CREDIT: ANKIT BARIK/JOHNS HOPKINS UNIVERSITY

New Johns Hopkins University simulations offer an intriguing look into Saturn's interior, suggesting that a thick layer of helium rain influences the planet's magnetic field.

The models, published this week in AGU Advances, also indicate that Saturn's interior may feature higher temperatures at the equatorial region, with lower temperatures at the high latitudes at the top of the helium rain layer.

It is notoriously difficult to study the interior structures of large gaseous planets, and the findings advance the effort to map Saturn's hidden regions.

"By studying how Saturn formed and how it evolved over time, we can learn a lot about the formation of other planets similar to Saturn within our own solar system, as well as beyond it," said co-author Sabine Stanley, a Johns Hopkins planetary physicist.

Saturn stands out among the planets in our solar system because its magnetic field appears to be almost perfectly symmetrical around the rotation axis. Detailed measurements of the magnetic field gleaned from the last orbits of NASA's Cassini mission provide an opportunity to better understand the planet's deep interior, where the magnetic field is generated, said lead author Chi Yan, a Johns Hopkins PhD candidate.

By feeding data gathered by the Cassini mission into powerful computer simulations similar to those used to study weather and climate, Yan and Stanley explored what ingredients are necessary to produce the dynamo--the electromagnetic conversion mechanism--that could account for Saturn's magnetic field.

"One thing we discovered was how sensitive the model was to very specific things like temperature," said Stanley, who is also a Bloomberg Distinguished Professor at Johns Hopkins in the Department of Earth & Planetary Sciences and the Space Exploration Sector of the Applied Physics Lab. "And that means we have a really interesting probe of Saturn's deep interior as far as 20,000 kilometers down. It's a kind of X-ray vision."

Strikingly, Yan and Stanley's simulations suggest that a slight degree of non-axisymmetry could actually exist near Saturn's north and south poles.

"Even though the observations we have from Saturn look perfectly symmetrical, in our computer simulations we can fully interrogate the field," said Stanley.

Direct observation at the poles would be necessary to confirm it, but the finding could have implications for understanding another problem that has vexed scientists for decades: how to measure the rate at which Saturn rotates, or, in other words, the length of a day on the planet.

This project was conducted using computational resources at the Maryland Advanced Research Computing Center (MARCC).


CAPTION

Saturn's interior with stably stratified Helium Insoluble Layer.

CREDIT

Yi Zheng (HEMI/MICA Extreme Arts Program)


 

New application of AI just removed one of the biggest roadblocks in astrophysics

Using neural networks, Flatiron Institute research fellow Yin Li and his colleagues simulated vast, complex universes in a fraction of the time it takes with conventional methods

SIMONS FOUNDATION

Research News

IMAGE

IMAGE: SIMULATIONS OF A REGION OF SPACE 100 MILLION LIGHT-YEARS SQUARE. THE LEFTMOST SIMULATION RAN AT LOW RESOLUTION. USING MACHINE LEARNING, RESEARCHERS UPSCALED THE LOW-RES MODEL TO CREATE A HIGH-RESOLUTION SIMULATION...

CREDIT: Y. LI ET AL./PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES 2021

Using a bit of machine learning magic, astrophysicists can now simulate vast, complex universes in a thousandth of the time it takes with conventional methods. The new approach will help usher in a new era in high-resolution cosmological simulations, its creators report in a study published online May 4 in Proceedings of the National Academy of Sciences.

"At the moment, constraints on computation time usually mean we cannot simulate the universe at both high resolution and large volume," says study lead author Yin Li, an astrophysicist at the Flatiron Institute in New York City. "With our new technique, it's possible to have both efficiently. In the future, these AI-based methods will become the norm for certain applications."

The new method developed by Li and his colleagues feeds a machine learning algorithm with models of a small region of space at both low and high resolutions. The algorithm learns how to upscale the low-res models to match the detail found in the high-res versions. Once trained, the code can take full-scale low-res models and generate 'super-resolution' simulations containing up to 512 times as many particles.

The process is akin to taking a blurry photograph and adding the missing details back in, making it sharp and clear.

This upscaling brings significant time savings. For a region in the universe roughly 500 million light-years across containing 134 million particles, existing methods would require 560 hours to churn out a high-res simulation using a single processing core. With the new approach, the researchers need only 36 minutes.
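The quoted timings imply the speedup directly, and the 512-fold particle increase corresponds to an 8-fold refinement per spatial dimension. A quick check of the arithmetic, using only the figures given in the text:

```python
# Figures quoted above: 560 hours on a single core with existing methods,
# versus 36 minutes with the new super-resolution approach.
conventional_minutes = 560 * 60
upscaled_minutes = 36
speedup = conventional_minutes / upscaled_minutes
print(round(speedup))  # 933 -- roughly "a thousandth of the time"

# 512 times as many particles means 8x finer sampling along each of 3 dimensions.
print(8 ** 3)  # 512
```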

The results were even more dramatic when more particles were added to the simulation. For a universe 1,000 times as large with 134 billion particles, the researchers' new method took 16 hours on a single graphics processing unit. Existing methods would take so long that they wouldn't even be worth running without dedicated supercomputing resources, Li says.

Li is a joint research fellow at the Flatiron Institute's Center for Computational Astrophysics and the Center for Computational Mathematics. He co-authored the study with Yueying Ni, Rupert Croft and Tiziana Di Matteo of Carnegie Mellon University; Simeon Bird of the University of California, Riverside; and Yu Feng of the University of California, Berkeley.

Cosmological simulations are indispensable for astrophysics. Scientists use the simulations to predict how the universe would look in various scenarios, such as if the dark energy pulling the universe apart varied over time. Telescope observations may then confirm whether the simulations' predictions match reality. Creating testable predictions requires running simulations thousands of times, so faster modeling would be a big boon for the field.

Reducing the time it takes to run cosmological simulations "holds the potential of providing major advances in numerical cosmology and astrophysics," says Di Matteo. "Cosmological simulations follow the history and fate of the universe, all the way to the formation of all galaxies and their black holes."

So far, the new simulations only consider dark matter and the force of gravity. While this may seem like an oversimplification, gravity is by far the universe's dominant force at large scales, and dark matter makes up 85 percent of all the 'stuff' in the cosmos. The particles in the simulation aren't literal dark matter particles but are instead used as trackers to show how bits of dark matter move through the universe.

The team's code used neural networks to predict how gravity would move dark matter around over time. Such networks ingest training data and run calculations using the information. The results are then compared to the expected outcome. With further training, the networks adapt and become more accurate.

The specific approach used by the researchers, called a generative adversarial network, pits two neural networks against each other. One network takes low-resolution simulations of the universe and uses them to generate high-resolution models. The other network tries to tell those simulations apart from ones made by conventional methods. Over time, both neural networks get better and better until, ultimately, the simulation generator wins out and creates fast simulations that look just like the slow conventional ones.
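The adversarial setup described above can be sketched in a few lines of NumPy. The field sizes, single-layer networks, and losses below are illustrative assumptions for exposition, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

LO, HI = 8, 64  # illustrative sizes: an 8-value low-res field upscaled to 64 values

# Generator: one linear layer mapping (low-res field + noise) to a high-res field.
G = rng.normal(0.0, 0.1, size=(2 * LO, HI))
# Discriminator: one linear layer scoring a high-res field as real (1) or generated (0).
D = rng.normal(0.0, 0.1, size=(HI,))

def generate(lo_field):
    z = rng.normal(size=LO)  # noise supplies varied small-scale detail
    return np.tanh(np.concatenate([lo_field, z]) @ G)

def discriminate(hi_field):
    return 1.0 / (1.0 + np.exp(-hi_field @ D))  # sigmoid "probability real"

# One evaluation of the adversarial objective: the discriminator wants real fields
# scored near 1 and generated fields near 0; the generator wants its output scored
# near 1. Training alternates gradient steps on these two losses until the
# discriminator can no longer tell the fields apart.
real = rng.normal(size=HI)
fake = generate(rng.normal(size=LO))
d_loss = -np.log(discriminate(real)) - np.log(1.0 - discriminate(fake))
g_loss = -np.log(discriminate(fake))
```

The "blind test" the researchers describe is exactly the discriminator's job turned over to humans: if neither the network nor the scientists can distinguish generated from conventional simulations, the generator has converged.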

"We couldn't get it to work for two years," Li says, "and suddenly it started working. We got beautiful results that matched what we expected. We even did some blind tests ourselves, and most of us couldn't tell which one was 'real' and which one was 'fake.'"

Despite only being trained using small areas of space, the neural networks accurately replicated the large-scale structures that only appear in enormous simulations.

The simulations don't capture everything, though. Because they focus only on dark matter and gravity, smaller-scale phenomena -- such as star formation, supernovae and the effects of black holes -- are left out. The researchers plan to extend their methods to include the forces responsible for such phenomena, and to run their neural networks 'on the fly' alongside conventional simulations to improve accuracy. "We don't know exactly how to do that yet, but we're making progress," Li says.

###

Why robots need reflexes - interview

Robots could safeguard people from pain

TECHNICAL UNIVERSITY OF MUNICH (TUM)

Research News

Reflexes protect our bodies - for example when we pull our hand back from a hot stove. These protective mechanisms could also be useful for robots. In this interview, Prof. Sami Haddadin and Johannes Kühn of the Munich School of Robotics and Machine Intelligence (MSRM) of the Technical University of Munich (TUM) explain why giving test subjects a "slap on the hand" could lay the foundations for the robots of the future.

In your paper, published in Scientific Reports, you describe an experimental setup where people were actually slapped on the hand - to study their reflexes....

Kühn: Yes, you can put it that way. For our study, in cooperation with Imperial College London, the test subjects needed their reflexes to protect them against two different pain sources: first, a slap on the hand. And, while pulling their hand and arm out of harm's way, they also had to avoid an elbow obstacle. We studied the hand retraction and discovered that it is a highly coordinated motion.

We also observed that the pain anticipated by a person shapes the reflex: If I know that the object behind me will cause similar pain to the slap on my fingers, I will withdraw my hand differently than when I know that the object will cause no pain.

How can such a seemingly simple experiment contribute to the development of intelligent high-tech machines like robots?

Haddadin: Humans have fascinating abilities. One could speak of a built-in intelligence in the human body that is indispensable for survival. The protective reflex is a central part of this. Imagine the classical "hand on the hot stove" situation. Without thinking, we pull back our hand as soon as the skin senses heat. So far, robots do not have reflexes of this kind. Their reactions to impending collisions tend to be rather mindless: They just stop and don't move until a person takes action.

In some situations this might make sense. But if a robot simply stopped moving when touching a hot stove, this would obviously have fatal consequences. At the MSRM we are therefore interested in developing autonomous and intelligent reflex mechanisms as part of a central nervous system for robots, so to speak. Humans are serving as our role models. How do their reflexes work and what can we learn from them for the development of intelligent robots?

What conclusions can you draw from your experiment for the development of robots?

Kühn: We gained an insight into how the reflex motion works in detail: The way humans coordinate the reflex can be seen as throwing the shoulder forward, in a sense, in order to accelerate the withdrawal of the hand. This principle could be applied in the development of reflex motions in humanoid robots, with a signal sent to one part of a robot in order to influence the motion of another one.

This knowledge will also be helpful in the design of robot-enabled prosthetics that are expected to perform in "human-like" ways.

You mentioned that "anticipated pain" played a role in your experiment. Should robots be able to anticipate pain, too?

Kühn: That would be a big advantage. It could help to classify potential collisions based on danger levels, and to plan evasive actions if appropriate. This would not only ensure the safety of the robot: if the robot were capable of anticipating human pain, it could also intervene in a dangerous situation to save a person from experiencing that pain.

Would robots then need to learn how to feel pain in the same way as humans?

Haddadin: No. Our pain perception is highly complex and linked to emotions. So we can't compare this to a human's "pain sensation". Robots are tools and not living creatures. Artificial pain is nothing more than a technical signal based on data from various sensors. At the MSRM we have already developed an initial reflex mechanism for robots based on "artificial pain". When touching hot or sharp objects, our robot withdrew its arm in a reflexive movement.

What are your next steps on the way to a robot with a fully developed protective reflex?

Haddadin: The big challenge in our research field between humans and machines is that we still have only a rudimentary understanding of our role model: the human reflex system and the sensorimotor learning mechanisms of a complex, neuromechanical motion apparatus. And that is where the exciting scientific challenge lies: with all of the unknowns, to continually improve the human-inspired abilities of our intelligent machines, while using what we learn to arrive at a better understanding of how humans function. Basically, we can say that this work has continued since the days of Leonardo da Vinci and will carry on for many years to come.

###

An uncrackable combination of invisible ink and artificial intelligence

AMERICAN CHEMICAL SOCIETY

Research News

IMAGE

IMAGE: WITH REGULAR INK, A COMPUTER TRAINED WITH THE CODEBOOK DECODES "STOP" (TOP); WHEN A UV LIGHT IS SHONE ON THE PAPER, THE INVISIBLE INK IS EXPOSED, AND THE REAL MESSAGE...

CREDIT: ADAPTED FROM ACS APPLIED MATERIALS & INTERFACES 2021, DOI: 10.1021/ACSAMI.1C01179

Coded messages in invisible ink sound like something only found in espionage books, but in real life, they can have important security purposes. Yet, they can be cracked if their encryption is predictable. Now, researchers reporting in ACS Applied Materials & Interfaces have printed complexly encoded data with normal ink and a carbon nanoparticle-based invisible ink, requiring both UV light and a computer that has been taught the code to reveal the correct messages.

Even as electronic records advance, paper is still a common way to preserve data. Invisible ink can hide classified economic, commercial or military information from prying eyes, but many popular inks contain toxic compounds or can be seen with predictable methods, such as light, heat or chemicals. Carbon nanoparticles, which have low toxicity, can be essentially invisible under ambient lighting but can create vibrant images when exposed to ultraviolet (UV) light - a modern take on invisible ink. In addition, advances in artificial intelligence (AI) models -- made by networks of processing algorithms that learn how to handle complex information -- can ensure that messages are only decipherable on properly trained computers. So, Weiwei Zhao, Kang Li, Jie Xu and colleagues wanted to train an AI model to identify and decrypt symbols printed in a fluorescent carbon nanoparticle ink, revealing hidden messages when exposed to UV light.

The researchers made carbon nanoparticles from citric acid and cysteine, which they diluted with water to create an invisible ink that appeared blue when exposed to UV light. The team loaded the solution into an ink cartridge and printed a series of simple symbols onto paper with an inkjet printer. Then, they taught an AI model, composed of multiple algorithms, to recognize symbols illuminated by UV light and decode them using a special codebook. Finally, they tested the AI model's ability to decode messages printed using a combination of both regular red ink and the UV fluorescent ink. With 100% accuracy, the AI model read the regular ink symbols as "STOP", but when a UV light was shone on the writing, the invisible ink revealed the hidden message "BEGIN". Because these algorithms can notice minute modifications in symbols, this approach has the potential to encrypt messages securely using hundreds of different unpredictable symbols, the researchers say.
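The two-channel scheme can be illustrated with a toy codebook. The symbol-to-character mapping below is entirely hypothetical, and in the actual paper an AI model performs the symbol-recognition step that this sketch replaces with ready-made symbol IDs:

```python
# Hypothetical codebook mapping recognized symbol IDs to characters.
CODEBOOK = {0: "S", 1: "T", 2: "O", 3: "P", 4: "B", 5: "E", 6: "G", 7: "I", 8: "N"}

def decode(symbol_ids):
    """Translate a stream of recognized symbols into text via the codebook."""
    return "".join(CODEBOOK[s] for s in symbol_ids)

ambient_scan = [0, 1, 2, 3]   # symbols printed in regular red ink, visible in normal light
uv_scan = [4, 5, 6, 7, 8]     # symbols that fluoresce only under UV light

print(decode(ambient_scan))  # STOP  (the decoy message)
print(decode(uv_scan))       # BEGIN (the real message)
```

The security comes from the pairing: without both the UV illumination and the trained model holding the codebook, an interceptor sees only the decoy.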

###

The authors acknowledge funding from the Shenzhen Peacock Team Plan and the Bureau of Industry and Information Technology of Shenzhen through the Graphene Manufacturing Innovation Center (201901161514).


The American Chemical Society (ACS) is a nonprofit organization chartered by the U.S. Congress. ACS' mission is to advance the broader chemistry enterprise and its practitioners for the benefit of Earth and all its people. The Society is a global leader in promoting excellence in science education and providing access to chemistry-related information and research through its multiple research solutions, peer-reviewed journals, scientific conferences, eBooks and weekly news periodical Chemical & Engineering News. ACS journals are among the most cited, most trusted and most read within the scientific literature; however, ACS itself does not conduct chemical research. As a leader in scientific information solutions, its CAS division partners with global innovators to accelerate breakthroughs by curating, connecting and analyzing the world's scientific knowledge. ACS' main offices are in Washington, D.C., and Columbus, Ohio.

HEPA filter effectively reduces airborne respiratory particles generated during vigorous exercise

MAYO CLINIC

Research News

ROCHESTER, Minn. -- A pair of Mayo Clinic studies shed light on something that is typically difficult to see with the eye: respiratory aerosols. Such aerosol particles of varying sizes are a common component of breath, and they are a typical route by which respiratory viruses such as SARS-CoV-2, the virus that causes COVID-19, spread to other people and surfaces.

Researchers who conduct exercise stress tests for heart patients at Mayo Clinic found that exercising at increasing levels of exertion increased the aerosol concentration in the surrounding room. They also found that a high-efficiency particulate air (HEPA) device effectively filtered out the aerosols and decreased the time needed to clear the air between patients.

"Our work was conducted with the support of Mayo Cardiovascular Medicine leadership who recognized right at the start of the pandemic that special measures would be required to protect patients and staff from COVID-19 while continuing to provide quality cardiovascular care to all who needed it," says Thomas Allison, Ph.D., director of Cardiopulmonary Exercise Testing at Mayo Clinic in Rochester. "Since there was no reliable guidance on how to do this, we put a research team together to find answers through scientific testing and data. We are happy to now share our findings with everyone around the world." Dr. Allison is senior author of both studies.

To characterize the aerosols generated during various intensities of exercise in the first study, Dr. Allison's team set up a special aerosol laboratory in a plastic tent with controlled airflow. Two types of laser beam particle counters were used to measure aerosol concentration at the front, back and sides of a person riding an exercise bike. Eight exercise volunteers wore equipment to measure their oxygen consumption, ventilation and heart rate.

During testing, a volunteer first had five minutes of resting breathing, followed by four bouts of three-minute exercise staged, with monitoring and coaching, to work at 25%, 50%, 75% and 100% of their age-predicted heart rate. This effort was followed by three minutes of cooldown. The findings are published online in CHEST.

The aerosol concentrations increased exponentially throughout the test. Specifically, exercise at or above 50% of age-predicted heart rate showed significant increases in aerosol concentration.

"In a real sense, I think we have proven dramatically what many suspected ? that is why gyms were shut down and most exercise testing laboratories closed their practices. Exercise testing was not listed as an aerosol-generating procedure prior to our studies because no one had specifically studied it before. Exercise generates millions of respiratory aerosols during a test, many of a size reported to have virus-carrying potential. The higher the exercise intensity, the more aerosols are produced," says Dr. Allison.

The follow-up study led by Dr. Allison focused on how to mitigate the aerosols generated during exercise testing by filtering them out of the air immediately after they came out of the subject's mouth. Researchers used a similar setup with the controlled airflow exercise tent, particle counter and stationary bike, but added a portable HEPA filter with a flume hood.

Six healthy volunteers completed the same 20-minute exercise test as the previous study, first without the mitigation and then with the portable HEPA filter running.

Also, a separate experiment tested aerosol clearance time in the clinical exercise testing laboratories by using artificially generated aerosols to test how long it took for 99.9% of aerosols to be removed. Researchers performed the test first with only existing heating, ventilation and air conditioning, and then with the addition of the portable HEPA filter running.

"Studying clearance time informed us of how soon we could safely bring a new patient into the laboratory after finishing the test on the previous patient. HEPA filters cut this time by 50%, allowing the higher volume of testing necessary to meet the clinical demands of our Cardiovascular Medicine practice," says Dr. Allison.

"We translated CDC (Centers for Disease Control and Prevention) guidelines for aerosol mitigation with enhanced airflow through HEPA filters and showed that it worked amazingly well for exercise testing. We found that 96% plus or minus 2% of aerosols of all sizes generated during heavy exercise were removed from the air by the HEPA filter. As a result, we have been able to return to our practice of performing up to 100 stress tests per day without any recorded transmission of COVID in our exercise testing laboratories," says Dr. Allison.

###

About Mayo Clinic

Mayo Clinic is a nonprofit organization committed to innovation in clinical practice, education and research, and providing compassion, expertise and answers to everyone who needs healing. Visit the Mayo Clinic News Network for additional Mayo Clinic news. For information on COVID-19, including Mayo Clinic's Coronavirus Map tracking tool, which has 14-day forecasting on COVID-19 trends, visit the Mayo Clinic COVID-19 Resource Center.

UNC Charlotte researchers analyzed the host origins of SARS-CoV-2 and other coronaviruses

UNIVERSITY OF NORTH CAROLINA AT CHARLOTTE

Research News

IMAGE

IMAGE: THIS TREE IS A SUMMARY OF THE SELECTED HOST TRANSFORMATIONS IN THE CLADE OF BETACORONAVIRUS ASSOCIATED WITH SARS-COV, MERS-COV, AND SARS-COV-2. BATS HAVE BEEN FUNDAMENTAL HOSTS OF THESE HUMAN CORONAVIRUSES...

CREDIT: DENIS JACOB MACHADO

Coronavirus (CoV) infections in animals and humans are not new. The earliest papers on coronavirus infection in the scientific literature date to 1966. However, prior to SARS-CoV, MERS-CoV and SARS-CoV-2, very little attention had been paid to coronaviruses.

Suddenly, coronaviruses changed everything we know about personal and public health and about societal and economic well-being. That change led to rushed analyses of the origins of coronaviruses in humans, which in turn produced a thus far fruitless search for intermediate hosts (e.g., civets for SARS-CoV and pangolins for SARS-CoV-2) rather than a focus on the important work, which has always been surveillance of SARS-like viruses in bats.

To clarify the origins of coronavirus' infections in humans, researchers from the Bioinformatics Research Center (BRC) at the University of North Carolina at Charlotte (UNC Charlotte) performed the largest and most comprehensive evolutionary analyses to date. The UNC Charlotte team analyzed over 2,000 genomes of diverse coronaviruses that infect humans or other animals.

"We wanted to conduct evolutionary analyses based on the most rigorous standards of the field," said Denis Jacob Machado, the first author of the paper. "We've seen rushed analyses that had different problems. For example, many analyses had poor sampling of viral diversity or placed excessive emphasis on overall similarity rather than on the characteristics shared due to common evolutionary history. It was very important to us to avoid those mistakes to produce a sound evolutionary hypothesis that could offer reliable information for future research."

The study's major conclusions are:

    1) Bats have been ancestral hosts of human coronaviruses in the case of SARS-CoV and SARS-CoV-2. Bats also were the ancestral hosts of MERS-CoV infections in dromedary camels that spread rapidly to humans.

    2) Transmission of MERS-CoV among camels and their herders evolved after the transmission from bats to these hosts. Similarly, SARS-CoV spread among market vendors and their civets after the original bat-to-human transmission, just as SARS-CoV-2 has been transmitted by fur farmers to their minks. The evolutionary analysis in this study helps elucidate that these events occurred after the original human infection from lineages of coronaviruses hosted in bats. Therefore, these secondary transmissions to civets or minks did not play a role in the fundamental emergence of human coronaviruses.

    3) The study corroborates the animal host origins of other human coronaviruses, such as HCoV-NL63 (from bat hosts), HCoV-229E (from camel hosts), HCoV-HKU1 (from rodent hosts) and HCoV-OC43 and HECV-4408 (from cow hosts).

    4) Transmission of coronaviruses from animals to humans occurs episodically. From 1966 to 2020, the scientific community described eight human-hosted lineages of coronaviruses. Although it is difficult to predict when a new human-hosted coronavirus could emerge, the data indicate that we should prepare for that possibility.

"As coronavirus transmission from animal to human host occurs episodically at unpredictable intervals, it is not wise to attempt to time when we will experience the next human coronavirus," noted professor Daniel A. Janies, Carol Grotnes Belk Distinguished Professor of Bioinformatics and Genomics and team leader for the study. "We must conduct research on viruses that can be transferred from animals to humans on a continuous rather than reactionary basis."

###

"Fundamental evolution of all Orthocoronavirinae including three deadly lineages descendent from Chiroptera-hosted coronaviruses: SARS-CoV, MERS-CoV, and SARS-CoV-2" was published online in the journal Cladistics on April 26, 2021. The authors are Denis Jacob Machado, Rachel Scott, Sayal Guirales, and Daniel A. Janies. The article's digital object number is 10.1111/cla.12454.

Article's URL: http://doi.org/10.1111/cla.12454.


Personalized sweat sensor reliably monitors blood glucose without finger pricks

AMERICAN CHEMICAL SOCIETY

Research News

IMAGE

IMAGE: A HAND-HELD DEVICE COMBINED WITH A TOUCH SWEAT SENSOR (STRIP AT RIGHT) MEASURES GLUCOSE IN SWEAT, WHILE A PERSONALIZED ALGORITHM CONVERTS THAT DATA INTO A BLOOD GLUCOSE LEVEL.

CREDIT: ADAPTED FROM ACS SENSORS 2021, DOI: 10.1021/ACSSENSORS.1C00139

Many people with diabetes endure multiple, painful finger pricks each day to measure their blood glucose. Now, researchers reporting in ACS Sensors have developed a device that can measure glucose in sweat with the touch of a fingertip, and then a personalized algorithm provides an accurate estimate of blood glucose levels.

According to the American Diabetes Association, more than 34 million children and adults in the U.S. have diabetes. Although self-monitoring of blood glucose is a critical part of diabetes management, the pain and inconvenience of finger-stick blood sampling can keep people from testing as often as they should. Scientists have developed ways to measure glucose in sweat, but because sweat glucose levels are much lower than blood levels and vary with a person's sweat rate and skin properties, they usually don't accurately reflect the value in blood. To obtain a more reliable estimate of blood sugar from sweat, Joseph Wang and colleagues wanted to devise a system that could collect sweat from a fingertip, measure glucose and then correct for individual variability.

The researchers made a touch-based sweat glucose sensor with a polyvinyl alcohol hydrogel on top of an electrochemical sensor, which was screen-printed onto a flexible plastic strip. When a volunteer placed their fingertip on the sensor surface for 1 minute, the hydrogel absorbed tiny amounts of sweat. Inside the sensor, glucose in the sweat underwent an enzymatic reaction that resulted in a small electrical current that was detected by a hand-held device. The researchers also measured the volunteers' blood sugar with a standard finger-prick test, and they developed a personalized algorithm that could translate each person's sweat glucose to their blood glucose levels. In tests, the algorithm was more than 95% accurate in predicting blood glucose levels before and after meals. To calibrate the device, a person with diabetes would need a finger prick only once or twice per month. But before the sweat diagnostic can be used to manage diabetes, a large-scale study must be conducted, the researchers say.
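The release does not publish the personalized algorithm itself, but the workflow it describes — pair a few finger-prick blood readings with simultaneous sweat readings, fit a per-person conversion, then use only the sweat sensor — can be sketched with a simple linear calibration standing in for the real model. All function names and readings below are illustrative assumptions, not the paper's method or data:

```python
def fit_personal_calibration(sweat, blood):
    """Fit blood ≈ slope * sweat + intercept for one person
    by ordinary least squares over paired calibration readings."""
    n = len(sweat)
    mean_s = sum(sweat) / n
    mean_b = sum(blood) / n
    sxy = sum((s - mean_s) * (b - mean_b) for s, b in zip(sweat, blood))
    sxx = sum((s - mean_s) ** 2 for s in sweat)
    slope = sxy / sxx
    return slope, mean_b - slope * mean_s

def estimate_blood_glucose(sweat_reading, slope, intercept):
    """Convert a new sweat-glucose reading into a blood-glucose estimate."""
    return slope * sweat_reading + intercept

# Hypothetical calibration pairs from one user's occasional finger pricks:
sweat_readings = [0.05, 0.08, 0.12, 0.15]        # sweat glucose (arbitrary units)
blood_readings = [90.0, 120.0, 160.0, 190.0]     # matching blood glucose (mg/dL)

slope, intercept = fit_personal_calibration(sweat_readings, blood_readings)
print(estimate_blood_glucose(0.10, slope, intercept))
```

Once the slope and intercept are stored, the device only needs a fresh finger prick when the calibration drifts — consistent with the once-or-twice-a-month recalibration the researchers describe.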

###

The authors acknowledge funding from the University of California San Diego Center for Wearable Sensors and the National Research Foundation of Korea.

The abstract that accompanies this paper is available here.

The American Chemical Society (ACS) is a nonprofit organization chartered by the U.S. Congress. ACS' mission is to advance the broader chemistry enterprise and its practitioners for the benefit of Earth and all its people. The Society is a global leader in promoting excellence in science education and providing access to chemistry-related information and research through its multiple research solutions, peer-reviewed journals, scientific conferences, eBooks and weekly news periodical Chemical & Engineering News. ACS journals are among the most cited, most trusted and most read within the scientific literature; however, ACS itself does not conduct chemical research. As a leader in scientific information solutions, its CAS division partners with global innovators to accelerate breakthroughs by curating, connecting and analyzing the world's scientific knowledge. ACS' main offices are in Washington, D.C., and Columbus, Ohio.

To automatically receive news releases from the American Chemical Society, contact newsroom@acs.org.  


New app makes Bitcoin more secure

More than 90% of users don't know if their mobile wallet is potentially compromised; now, there's an app for that

MICHIGAN STATE UNIVERSITY

Research News

A computer science engineer at Michigan State University has a word of advice for the millions of bitcoin owners who use smartphone apps to manage their cryptocurrency: don't. Or at least, be careful. Researchers from MSU are developing a mobile app to act as a safeguard for popular but vulnerable "wallet" applications used to manage cryptocurrency.

"More and more people are using bitcoin wallet apps on their smartphones," said Guan-Hua Tu, an assistant professor in MSU's College of Engineering who works in the Department of Computer Science and Engineering. "But these applications have vulnerabilities."

Smartphone wallet apps make it easy to buy and trade cryptocurrency, a relatively new digital currency that can be challenging to understand in just about every way except one: It's very clearly valuable. Bitcoin was the most valuable cryptocurrency at the time of writing, with one bitcoin being worth more than $55,000.

But Tu and his team are uncovering vulnerabilities that can put a user's money and personal information at risk. The good news is that the team is also helping users better protect themselves by raising awareness about these security issues and developing an app that addresses those vulnerabilities.

The researchers showcased that app -- the Bitcoin Security Rectifier -- in a paper published for the Association for Computing Machinery's Conference on Data and Application Security and Privacy. In terms of raising awareness, Tu wants to help wallet users understand that these apps can leave them vulnerable by violating one of Bitcoin's central principles, something called decentralization.

Bitcoin is a currency that's not tied to any central bank or government. There's also no central computer server that stores all the information about bitcoin accounts, such as who owns how much.

"There are some apps that violate this decentralized principle," Tu said. "The apps are developed by third parties. And, they can let their wallet app connect with their proprietary server that then connects to Bitcoin."

In essence, such wallet apps introduce a middleman that Bitcoin omits by design. Users often don't know this, and app developers aren't necessarily forthcoming with the information.

"More than 90% of users are unaware of whether their wallet is violating this decentralized design principle based on the results of a user study," Tu said. And if an app violates this principle, it can be a huge security risk for the user. For example, it can open the door for an unscrupulous app developer to simply take a user's bitcoin.

Tu said that the best way users can safeguard themselves is to not use a smartphone wallet app developed by untrusted developers. He instead encourages users to manage their bitcoin using a computer -- not a smartphone -- and resources found on Bitcoin's official website, bitcoin.org. For example, the site can help users make informed decisions about wallet apps.

But even wallets developed by reputable sources may not be completely safe, which is where the new app comes in.

Most smartphone programs are written in a programming language called Java. Bitcoin wallet apps make use of a Java code library known as bitcoinj, pronounced "bitcoin jay." The library itself has vulnerabilities that cybercriminals could exploit, as the team demonstrated in its recent paper.

These attacks can have a variety of consequences, including compromising a user's personal information. For example, they can help an attacker deduce all the Bitcoin addresses that wallet users have used to send or receive bitcoin. Attacks can also send loads of unwanted data to a user, draining batteries and potentially resulting in hefty phone bills.
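The address-deduction risk described above is consistent with a well-documented weakness of SPV wallets built on bitcoinj: they request relevant transactions from peers using BIP37 Bloom filters, whose false-positive rate is supposed to hide which addresses the wallet actually owns. Whether this is the exact vector in the MSU paper is not stated in this release, but the arithmetic of the leak can be sketched (a minimal illustration; the function and the parameter values are assumptions for the example):

```python
import math

def bloom_false_positive_rate(m_bits, k_hashes, n_items):
    """Standard Bloom-filter false-positive approximation:
    p ≈ (1 - e^(-k*n/m))^k for a filter of m bits, k hash
    functions, and n inserted items."""
    return (1.0 - math.exp(-k_hashes * n_items / m_bits)) ** k_hashes

# A filter tuned for low bandwidth has a tiny false-positive rate and
# therefore offers almost no cover for the wallet's 50 real addresses:
p = bloom_false_positive_rate(m_bits=4096, k_hashes=5, n_items=50)

# Against, say, 1,000,000 addresses seen on the network, an eavesdropper
# expects only p * 1,000,000 spurious matches mixed in with the real ones;
# intersecting the match sets of two independently seeded filters from the
# same wallet shrinks the spurious matches by roughly another factor of p.
expected_false_matches = p * 1_000_000
print(f"fp rate ≈ {p:.2e}, expected false matches ≈ {expected_false_matches:.2f}")
```

The tension is that a higher false-positive rate buys privacy at the cost of bandwidth, which connects to the paper's other finding that attacks can flood a phone with unwanted data.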

Tu's app is designed to run at the same time on the same phone as a wallet, where it monitors for signs of such intrusions. The app alerts users when an attack is happening and provides remedies based on the type of attack, Tu said. For example, the app can add "noise" to outgoing Bitcoin messages to prevent a thief from getting accurate information.

"The goal is that you'll be able to download our tool and be free from these attacks," Tu said.

The team is currently developing the app for Android phones and plans to have it available for download in the Google Play app store in the coming months. There's currently no timetable for an iPhone app because of the additional challenges and restrictions posed by iOS, Tu said.

In the meantime, though, Tu emphasized that the best way users can protect themselves from the insecurities of a smartphone bitcoin wallet is simply by not using one, unless the developer is trusted.

"The main thing that I want to share is that if you do not know your smartphone wallet applications well, it is better not to use them since any developer -- malicious or benign -- can upload their wallet apps to Google Play or Apple App Store," he said.

###

Also collaborating on this project were MSU's Professor Li Xiao as well as Ph.D. students Yiwen Hu and Sihan Wang, all from the Department of Computer Science and Engineering. This work was funded in part by the National Science Foundation.

A calculator that predicts risk of lung cancer underperforms in diverse populations

Research finds that a commonly used risk-prediction model for lung cancer does not accurately identify high-risk Black patients who could benefit from early screening

THOMAS JEFFERSON UNIVERSITY

Research News

PHILADELPHIA - Lung cancer is the third most common cancer in the U.S. and the leading cause of cancer death, with about 80% of the 154,000 deaths recorded each year attributable to cigarette smoking. Black men are more likely to develop and die from lung cancer than persons of any other racial or ethnic group, pointing to severe racial disparities. For example, research has shown that Black patients are less likely to receive early diagnosis and life-saving treatments like surgery. Now researchers at Jefferson have found that a commonly used risk-prediction model does not accurately identify high-risk Black patients who could gain life-saving benefit from early screening, a finding that paves the way for improving screenings and guidelines. The research was published in JAMA Network Open on April 6.

"Black individuals develop lung cancer at younger ages and with less intense smoking histories compared to white individuals," explains Julie Barta, MD, Assistant Professor of Medicine in the Division of Pulmonary and Critical Care Medicine at Thomas Jefferson University, and researcher at the Jane and Leonard Korman Respiratory Institute. "Updated guidelines now recommend screening eligible patients beginning at age 50, but could still potentially exclude higher-risk Black patients. We are interested in finding methods that could help identify at-risk patients who are under-screened."

Screening for lung cancer consists of an annual CT scan to detect the disease in otherwise healthy people at high risk. Current guidelines do not require a risk score for screening eligibility, but some researchers think risk models could improve care. Risk-prediction models are mathematical equations that take into account risk factors such as smoking history and age to produce a risk score, which indicates the risk of developing lung cancer. Existing risk-prediction models are derived from screening data that include 5% or fewer African American individuals.

"What makes our study unique is that our screening cohort included more than 40% Black individuals," says senior author Dr. Barta, a member of Sidney Kimmel Cancer Center - Jefferson Health. "To our knowledge, our study is the first to examine lung cancer risk in a diverse screening program and aims to strengthen the argument for more inclusive guidelines for screening eligibility."

The most well-validated model used in screening research is the Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial modified logistic regression model (PLCOm2012). "It uses 10-12 risk factors that include age, race, smoking history, as well as some socioeconomic factors like education to calculate a risk score," says Christine Shusted, MPH, first author of the study and research data analyst for Jefferson's Lung Cancer Screening Program through the Korman Respiratory Institute at Thomas Jefferson University. "The higher the score, the higher the risk of developing lung cancer. We wanted to see how well this model identifies patients with the highest risk of lung cancer in this diverse patient population."
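A model of this kind is a logistic regression: each risk factor is multiplied by a fitted coefficient, the products are summed with an intercept, and the logistic function turns that sum into a probability. A schematic sketch of the mechanics — the factors and coefficients below are illustrative placeholders, not the published PLCOm2012 values:

```python
import math

def logistic_risk(factors, coefficients, intercept):
    """risk = 1 / (1 + exp(-(intercept + sum(coef_i * factor_i))))"""
    z = intercept + sum(c * f for c, f in zip(coefficients, factors))
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative factors: [age - 62, smoking duration (years), pack-years,
# years since quitting]; these coefficients are made up for the example.
coefs = [0.08, 0.03, 0.01, -0.03]
intercept = -6.0

heavy_current_smoker = logistic_risk([8, 40, 50, 0], coefs, intercept)
light_former_smoker = logistic_risk([-5, 20, 15, 10], coefs, intercept)

print(f"heavy current smoker: {heavy_current_smoker:.3f}")
print(f"light former smoker:  {light_former_smoker:.3f}")
```

The study's concern fits this structure directly: if the coefficients were fitted on a cohort with few Black participants, the weighted sum can systematically understate risk for Black patients even when the individual factor values are recorded correctly.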

The researchers conducted a cross-sectional, retrospective study in 1,276 Black and white patients (mean age, 64.25 years; 42.7% Black; 59.3% women) who enrolled in the Jefferson Lung Cancer Screening Program between January 2018 and September 2020. From this screening cohort, lung cancer was detected in 32 patients, 44% of whom were Black - these patients formed the cancer cohort. The researchers then calculated risk scores using the PLCOm2012 model. In the screening cohort, more Black patients than white patients were in high-risk groups, indicating that Black patients in this cohort had a higher risk of developing lung cancer.

As anticipated, white patients with screen-detected lung cancer generally had high lung cancer risk scores. "Among Black patients, we would have expected to see a similar trend," explains Dr. Barta. "However, we saw that despite having a lung cancer diagnosis through screening, Black patients were actually defined as lower risk. This indicates that the model is not accurately predicting risk of lung cancer in Black patients."

"These findings allowed us to identify weaknesses in this model for risk calculation for lung cancer," explains Shusted. "It indicates that we need to not only expand criteria for lung cancer screening so that more diverse populations are included, but that these prediction models need to include factors, like environmental contributors, access to health care, and other social determinants of health."

The researchers hope to continue building on these findings, with the ultimate goal of defining comprehensive risk factors and improving lung cancer screening uptake and adherence especially among vulnerable populations.

"This work is an important step to reducing disparities in the screening and early detection of lung cancer, and making sure we can trust our models to predict those individuals at the highest risk," says Dr. Barta.

###

This work was supported in part by the Bristol Myers-Squibb Foundation's Specialty Care for Vulnerable Populations initiative. Dr. Barta reported receiving grants from the Genentech Health Equity Innovations Fund and the Prevent Cancer Foundation outside the submitted work. The authors report no other conflicts of interest.

Article Reference: Christine Shusted, Nathaniel Evans, Hee-Soon Juon, Gregory Kane, Julie Barta, "Association of Race With Lung Cancer Risk Among Adults Undergoing Lung Cancer Screening," JAMA Network Open, DOI: 10.1001/jamanetworkopen.2021.4509, 2021