Monday, October 24, 2022

Cat got your tongue: Cats distinguish between speech directed at them and humans



Peer-Reviewed Publication

SPRINGER

A small study has found that cats may change their behaviour when they hear their owner’s voice speaking in a tone directed at them, but not when hearing the voice of a stranger, or their owner’s voice directed at another person. The study of 16 cats is published in the journal Animal Cognition and adds to evidence that cats may form strong bonds with their owners.

Human tone is known to vary depending on who the speech is directed to, such as when talking to infants and dogs. The tone of human speech has been shown in previous studies to change when directed at cats, but less is known about how cats react to this.

Charlotte de Mouzon and colleagues from Université Paris Nanterre (Nanterre, France) investigated how 16 cats reacted to pre-recorded voices of their owner and of a stranger saying phrases in cat-directed and adult-directed tones.

The authors investigated three conditions. The first changed the voice of the speaker from a stranger’s to the cat’s owner’s. The second and third changed the tone used (cat-directed or adult-directed) for the owner’s and a stranger’s voice, respectively. The authors recorded and rated the intensity of the cats’ behaviour in reaction to the audio, checking for behaviours such as resting, ear movement, pupil dilation, and tail movement, amongst others.

In the first condition, 10 out of the 16 cats showed a decrease in behaviour intensity across three audio clips of a stranger’s voice calling them by name. However, when they then heard their owner’s voice, their behaviour intensity significantly increased again. The cats displayed behaviours such as turning their ears to the speakers, increased movement around the room, and pupil dilation when hearing their owner’s voice. The authors suggest that this sudden rebound in behaviour indicates that the cats could discriminate their owner’s voice from that of a stranger.

In the second condition, 10 cats (eight of which had also responded in the first condition) decreased their behaviour intensity as they heard audio of their owner speaking in an adult-directed tone, but significantly increased it on hearing their owner’s cat-directed tone. This change in behaviour intensity was not found in the third condition, in which a stranger spoke in adult-directed and cat-directed tones.

The authors observed that the cats could distinguish when their owner was talking in a cat-directed tone compared to an adult-directed tone, but did not react any differently when a stranger changed tone.
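The first condition follows a habituation-dishabituation design: behaviour intensity should fall across the repeated stranger clips and then rebound on the owner’s voice. A minimal sketch of that scoring logic, using entirely hypothetical intensity ratings and a made-up threshold (the study’s actual coding scheme is not detailed in this summary):

```python
def rebound(intensities, threshold=0.2):
    """Return True if behaviour intensity declines across the
    habituation clips and then jumps back up on the final test clip."""
    *habituation, test = intensities
    habituated = all(b <= a for a, b in zip(habituation, habituation[1:]))
    rebounded = test - habituation[-1] > threshold
    return habituated and rebounded

# Condition 1: three stranger clips, then the owner's voice (made-up scores).
print(rebound([0.9, 0.6, 0.4, 0.8]))  # True: intensity rebounds for the owner
```

The same check, applied with the owner’s adult-directed clips as the habituation phase and a cat-directed clip as the test, mirrors the second condition.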

The small sample size used in this study may not represent all cat behaviour, but the authors propose that future research could investigate whether their findings can be replicated in more socialised cats that are used to interacting with strangers.

The authors suggest that their findings bring a new dimension to cat-human relationships, with cat communication potentially relying on experience of the speaker’s voice. They conclude that one-to-one relationships are important for cats and humans to form strong bonds.

###

Media Contact:

Tara Eadie
Press Officer
Springer Nature
T: +44 20 3426 3329 
E: tara.eadie@springernature.com

Notes to editor:

Discrimination of cat-directed speech from human-directed speech in a population of indoor companion cats (Felis catus)

Animal Cognition 2022

DOI: 10.1007/s10071-022-01674-w

For an embargoed copy of the research article, related images and video please contact Tara Eadie at Springer Nature.

1. After the embargo ends, the full paper will be available at: https://link.springer.com/article/10.1007/s10071-022-01674-w

2. Please name the journal in any story you write. If you are writing for the web, please link to the article.

Animal Cognition is an interdisciplinary journal publishing current research from various backgrounds and disciplines (ethology, behavioral ecology, animal behaviour and learning, cognitive sciences, comparative psychology and evolutionary psychology) on all aspects of animal (and human) cognition in an evolutionary framework.

Machine learning enables an 'almost perfect' diagnosis of an elusive global killer


Sepsis, the overreaction of the immune system in response to an infection, causes an estimated 20% of deaths globally and as many as 20 to 50% of U.S. hospital deaths each year. Despite its prevalence and severity, however, the condition is difficult to diagnose and treat effectively.

The disease can cause decreased blood flow to vital organs, inflammation throughout the body, and abnormal blood clotting. Therefore, if it isn't recognized and treated quickly, it can lead to shock, organ failure, and death. But it can be difficult to identify which pathogen is causing sepsis, or whether an infection is in the bloodstream or elsewhere in the body. And in many patients with symptoms that resemble sepsis, it can be challenging to determine whether they truly have an infection at all.

Now, researchers at the Chan Zuckerberg Biohub (CZ Biohub), the Chan Zuckerberg Initiative (CZI), and UC San Francisco (UCSF) have developed a new diagnostic method that applies machine learning to advanced genomics data from both microbe and host to identify and predict sepsis cases. As reported on October 20, 2022 in Nature Microbiology, the approach is surprisingly accurate, and has the potential to far exceed current diagnostic capabilities.

"Sepsis is one of the top 10 public health issues facing humanity," said senior author Chaz Langelier, M.D., Ph.D., an associate professor of medicine in UCSF's Division of Infectious Diseases and a CZ Biohub Investigator. "One of the key challenges with sepsis is diagnosis. Existing diagnostics are not able to capture the dual-sided nature of the disease—the infection itself and the host's immune response to the infection."

Current sepsis diagnostics focus on detecting bacteria by growing them in culture, a process that is "essential for appropriate antibiotic therapy, which is critical for sepsis survival," according to the researchers behind the new method. But culturing these pathogens is time-consuming and doesn't always correctly identify the bacterium that is causing the infection. Similarly for viruses, PCR tests can detect that viruses are infecting a patient but don't always identify the particular virus that's causing sepsis.

"This results in clinicians being unable to identify the cause of sepsis in an estimated 30 to 50% of cases," Langelier said. "This also leads to a mismatch in terms of the antibiotic treatment and the pathogen causing the problem."

In the absence of a definitive diagnosis, doctors often prescribe a cocktail of antibiotics in an effort to stop the infection, but the overuse of antibiotics has led to increased antibiotic resistance worldwide. "As physicians, we never want to miss a case of infection," said Carolyn Calfee, M.D., M.A.S., a professor of medicine and anesthesia at UCSF and co-senior author of the new study. "But if we had a test that could help us accurately determine who doesn't have an infection, then that could help us limit antibiotic use in those cases, which would be really good for all of us."

Eliminating ambiguity

The researchers analyzed whole blood and plasma from more than 350 critically ill patients who had been admitted to UCSF Medical Center or the Zuckerberg San Francisco General Hospital between 2010 and 2018.

But rather than relying on cultures to identify pathogens in these samples, a team led by CZ Biohub scientists Norma Neff, Ph.D., and Angela Pisco, Ph.D., instead used metagenomic next-generation sequencing (mNGS). This method identifies all the nucleic acids or genetic data present in a sample, then compares those data to reference genomes to identify the microbial organisms present. This technique allows scientists to identify genetic material from entirely different kingdoms of organisms—whether bacteria, viruses, or fungi—that are present in the same sample.

However, detecting and identifying the presence of a pathogen alone isn't enough for accurate sepsis diagnosis, so the Biohub researchers also performed transcriptional profiling—which quantifies gene expression—to capture the patient's response to infection.

Next they applied machine learning to the mNGS and transcriptional data to distinguish between sepsis and other critical illnesses and thus confirm the diagnosis. Katrina Kalantar, Ph.D., a lead computational biologist at CZI and co-first author of the study, created an integrated host-microbe model trained on data from patients in whom either sepsis or non-infectious systemic inflammatory illnesses had been established, which enabled sepsis diagnosis with very high accuracy.

"We developed the model by looking at a bunch of metagenomics data alongside results from traditional clinical tests," Kalantar explained. To start, the researchers identified changes in gene expression between patients with confirmed sepsis and those with non-infectious systemic inflammatory conditions that appear clinically similar, then used machine learning to identify the genes that could best distinguish the two groups.

The researchers found that when traditional bacterial culture identified a sepsis-causing pathogen, there was usually an overabundance of genetic material from that pathogen in the corresponding plasma sample analyzed by mNGS. With that in mind, Kalantar programmed the model to identify organisms present in disproportionately high abundance compared to other microbes in the sample, and to then compare those to a reference index of well-known sepsis-causing microbes.

"In addition to that, we also noted any viruses that were detected, even if they were at lower levels, because those really shouldn't be there," Kalantar explained. "With this relatively straightforward set of rules, we were able to do pretty well."
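The "relatively straightforward set of rules" described above can be sketched roughly as follows. Everything here is illustrative: the organism names, the reference sets, and the dominance threshold are assumptions made up for the example, not the published model.

```python
# Hypothetical reference sets standing in for the study's index of
# well-known sepsis-causing microbes.
KNOWN_SEPSIS_BACTERIA = {"Escherichia coli", "Klebsiella pneumoniae"}
KNOWN_VIRUSES = {"Influenza A", "HSV-1"}

def call_pathogens(read_counts, dominance=5.0):
    """read_counts: mapping of organism -> mNGS read count in plasma.

    Flags known bacteria whose abundance is disproportionately high
    relative to the other microbes in the sample, and known viruses
    at any level (since those "really shouldn't be there").
    """
    calls = []
    for organism, reads in read_counts.items():
        others = [r for o, r in read_counts.items() if o != organism]
        background = max(sum(others) / len(others), 1) if others else 1
        if organism in KNOWN_VIRUSES and reads > 0:
            calls.append(organism)  # any viral signal is suspicious
        elif organism in KNOWN_SEPSIS_BACTERIA and reads / background >= dominance:
            calls.append(organism)  # disproportionately abundant bacterium
    return calls

sample = {"Escherichia coli": 900, "Cutibacterium acnes": 40,
          "Staphylococcus epidermidis": 60, "HSV-1": 3}
print(call_pathogens(sample))  # ['Escherichia coli', 'HSV-1']
```

In the published work this rule-based microbial call is combined with the host gene-expression classifier; the sketch covers only the microbe side.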

'Almost perfect' performance

The researchers found that the mNGS method and their corresponding model worked better than expected: they were able to identify 99% of confirmed bacterial sepsis cases and 92% of confirmed viral sepsis cases, and to predict sepsis in 74% of clinically suspected cases that hadn't been definitively diagnosed.

"We were expecting good performance, or even great performance, but this was almost perfect," said Lucile Neyton, Ph.D., a postdoctoral researcher in the Calfee lab and co-first author of the study. "By using this approach, we get a pretty good idea of what is causing the disease, and we know with relatively high confidence if a patient has sepsis or not."

The team was also excited to discover that they could use this combined host-response and microbe detection method to diagnose sepsis using plasma samples, which are routinely collected from most patients as part of standard clinical care. "The fact that you can actually identify sepsis patients from this widely available, easy-to-collect sample type has big implications in terms of practical utility," Langelier said.

The idea for the work stemmed from previous research by Langelier, Kalantar, Calfee, UCSF researcher and CZ Biohub President Joe DeRisi, Ph.D., and their colleagues, in which they used mNGS to effectively diagnose lower respiratory tract infections in critically ill patients. Because the method worked so well, "we wanted to see if the same type of approach could work in the context of sepsis," said Kalantar.

Broader implications

The team hopes to build upon this successful approach by developing a model that can also predict antibiotic resistance from pathogens detected with this method. "We've had some success doing that for respiratory infections, but no one has come up with a good approach for sepsis," Langelier said.

Furthermore, the researchers hope to eventually be able to predict outcomes of patients with sepsis, "such as mortality or length of stay in the hospital, which would provide key information that would allow clinicians to better care for their patients and match resources to the patients who need them the most," Langelier said.

"There's a lot of potential for novel sequencing approaches such as this to help us more precisely identify the causes of a patient's critical illness," added Calfee. "If we can do that, it's the first step towards precision medicine and understanding what's going on at an individual patient level."

More information: Katrina L. Kalantar et al, Integrated host-microbe plasma metagenomics for sepsis diagnosis in a prospective cohort of critically ill adults, Nature Microbiology (2022). DOI: 10.1038/s41564-022-01237-2
Journal information: Nature Microbiology 
Provided by Chan Zuckerberg Biohub

Underground microbes may have swarmed ancient Mars

This image captured by the United Arab Emirates' "Amal" ("Hope") probe shows the planet Mars on Feb. 10, 2021. Ancient Mars may have had an environment capable of harboring an underground world teeming with microscopic organisms. That's according to French scientists who published their findings Monday, Oct. 10, 2022. Credit: Mohammed bin Rashid Space Center/UAE Space Agency, via AP, File

Ancient Mars may have had an environment capable of harboring an underground world teeming with microscopic organisms, French scientists reported Monday.

But if they existed, these simple life forms would have altered the atmosphere so profoundly that they triggered a Martian Ice Age and snuffed themselves out, the researchers concluded.

The findings provide a bleak view of the ways of the cosmos. Life—even simple life like microbes—"might actually commonly cause its own demise," said the study's lead author, Boris Sauterey, now a post-doctoral researcher at Sorbonne University.

The results "are a bit gloomy, but I think they are also very stimulating," he said in an email. "They challenge us to rethink the way a biosphere and its planet interact."

In a study in the journal Nature Astronomy, Sauterey and his team said they used climate and terrain models to evaluate the habitability of the Martian crust some 4 billion years ago when the red planet was thought to be flush with water and much more hospitable than today.

They surmised that hydrogen-gobbling, methane-producing microbes might have flourished just beneath the surface back then, protected by several inches (a few tens of centimeters) of dirt, more than enough to shield them against harsh incoming radiation. Anywhere free of ice on Mars could have been swarming with these organisms, according to Sauterey, just as on early Earth.

Early Mars' presumably moist, warm climate, however, would have been jeopardized by so much hydrogen being sucked out of the thin, carbon dioxide-rich atmosphere, Sauterey said. As temperatures plunged by nearly 400 degrees Fahrenheit (200 degrees Celsius), any organisms at or near the surface likely would have burrowed deeper in an attempt to survive.

This image made available by NASA shows the Jezero Crater area on the planet Mars, captured by the Mars Reconnaissance Orbiter. Ancient Mars may have had an environment capable of harboring an underground world teeming with microscopic organisms. That's according to French scientists who published their findings Monday, Oct. 10, 2022. Credit: NASA/JPL-Caltech/USGS via AP, File

By contrast, microbes on Earth may have helped maintain temperate conditions, given the nitrogen-dominated atmosphere, the researchers said.

The SETI Institute's Kaveh Pahlevan said future models of Mars' climate need to consider the French research.

Pahlevan led a separate recent study suggesting Mars was born wet with warm oceans lasting millions of years. The atmosphere would have been dense and mostly hydrogen back then, serving as a heat-trapping greenhouse gas that eventually was transported to higher altitudes and lost to space, his team concluded.

The French study investigated the climate effects of possible microbes when Mars' atmosphere was dominated by carbon dioxide and so is not applicable to the earlier times, Pahlevan said.

"What their study makes clear, however, is that if (this) life were present on Mars" during this earlier period, "they would have had a major influence on the prevailing climate," he added in an email.

The best places to look for traces of this past life? The French researchers suggest the unexplored Hellas Planitia, or plain, and Jezero Crater on the northwestern edge of Isidis Planitia, where NASA's Perseverance rover currently is collecting rocks for return to Earth in a decade.

Next on Sauterey's to-do list: looking into the possibility that microbial life could still exist deep within Mars.

"Could Mars still be inhabited today by micro-organisms descending from this primitive biosphere?" he said. "If so, where?"

More information: Boris Sauterey, Early Mars habitability and global cooling by H2-based methanogens, Nature Astronomy (2022). DOI: 10.1038/s41550-022-01786-w
Journal information: Nature Astronomy 
© 2022 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.


Research finds unprecedented levels of insects damaging plants

Lauren Azevedo-Schmidt searches for fossilized plants in Wyoming’s Hanna Basin in a deposit that is about 60 million years old. She and other researchers compared fossil leaves with modern samples and found higher rates of insect damage today. Credit: Lauren Azevedo-Schmidt

Insects today are causing unprecedented levels of damage to plants, even as insect numbers decline, according to new research led by University of Wyoming scientists.

The first-of-its-kind study compares insect herbivore damage of modern-era plants with that of fossilized leaves from as far back as the Late Cretaceous period, nearly 67 million years ago. The findings appear in the journal Proceedings of the National Academy of Sciences.

"Our work bridges the gap between those who use fossils to study plant-insect interactions over deep time and those who study such interactions in a modern context with fresh leaf material," says the lead researcher, UW Ph.D. graduate Lauren Azevedo-Schmidt, now a postdoctoral research associate at the University of Maine. "The difference in insect damage between the modern era and the fossilized record is striking."

Azevedo-Schmidt conducted the research along with UW Department of Botany and Department of Geology and Geophysics Professor Ellen Currano, and Assistant Professor Emily Meineke of the University of California-Davis.

The study examined fossilized leaves with insect feeding damage from the Late Cretaceous through the Pleistocene era, a little over 2 million years ago, and compared them with leaves collected by Azevedo-Schmidt from three modern forests. The detailed research looked at different types of damage caused by insects, finding marked increases in all types of recent damage compared to the fossil record.

This fossil leaf from Wyoming’s Hanna Basin, about 54 million years old, shows damage by insects. Credit: Lauren Azevedo-Schmidt

"Our results demonstrate that plants in the modern era are experiencing unprecedented levels of insect damage, despite widespread insect declines," wrote the scientists, who suggest that the disparity can be explained by human influence.

More research is necessary to determine the precise causes of increased insect damage to plants, but the scientists say a warming climate, urbanization and the introduction of invasive species likely have had a major impact.

"We hypothesize that humans have influenced (insect) damage frequencies and diversities within modern forests, with the most human impact occurring after the Industrial Revolution," the researchers wrote. "Consistent with this hypothesis, specimens collected in the early 2000s were 23 percent more likely to have insect damage than specimens collected in the early 1900s, a pattern that has been linked to climate warming."
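The 23 percent figure above is a relative comparison of damage frequencies between two collections. A toy version of that kind of comparison, with counts invented purely for illustration (not the study's data):

```python
# Compare the fraction of leaves showing insect damage in two collections.
def damage_frequency(damaged, total):
    return damaged / total

fossil = damage_frequency(380, 1000)   # e.g. 38% of fossil leaves damaged
modern = damage_frequency(560, 1000)   # e.g. 56% of modern leaves damaged
print(f"relative increase: {modern / fossil - 1:.0%}")
```

A real analysis would also need confidence intervals and controls for preservation bias between fossil and modern material; this only shows the arithmetic behind a "more likely to have damage" claim.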

But climate change doesn't fully explain the increase in insect damage, they say.

"This research suggests that the strength of human influence on plant-insect interactions is not controlled by climate change alone, but rather, the way in which humans interact with the terrestrial landscape," the researchers concluded.


More information: Lauren Azevedo-Schmidt et al, Insect herbivory within modern forests is greater than fossil localities, Proceedings of the National Academy of Sciences (2022). DOI: 10.1073/pnas.2202852119
Provided by University of Wyoming 

DNA found in sediment reveals that ancient artificial islands may have been high-status homes

Crannog in Lough na Cranagh, near Fair Head, Northern Ireland. Credit: Antony Brown

Researchers have recovered DNA from the sediments surrounding ancient artificial islands, known as crannogs, in Britain and Ireland. These results, along with environmental and biochemical data in these sediments, show the structures were once used by elites.

Crannogs were built and occupied from the Neolithic, 4000–2200 BC, through to the 16th century AD. Hundreds have been found, mostly in Scotland, Ireland, and Northern Ireland.

However, the aquatic location of crannogs makes them difficult for archaeologists to excavate. So Professor Antony Brown, from UiT Arctic University of Norway, and an interdisciplinary team from across the UK set out to take samples from the surrounding lake sediments instead.

Previous research had recovered environmental remains, like pollen, from sediment cores obtained near crannogs. Material gets washed off the island and deposited layer by layer in the lake, so cores of these sediments reveal crannog conditions over time.

Now, Professor Brown and the team have also been able to recover DNA from these cores, known as "sedaDNA." This was combined with analysis of other material in the sediment to build an integrated picture of the crannogs. Their work is published in the journal Antiquity.

For this research, the team analyzed sediment cores from around crannogs in White Loch of Myrton, Scotland, and Lough Yoan, Ireland. Radiocarbon dating of the cores revealed the crannog in White Loch was built in the Iron Age, around 400 BC, whilst Lough Yoan's dates to the medieval period. They found both were abandoned and re-occupied multiple times.

Replica crannog in Waterford, Ireland. Credit: Antony Brown

The integrated analysis showed that the crannogs were important places, likely high-status homes, stocked with resources. The team found pollen from bracken, used for bedding, as well as signs that crafting took place on the islands.

Additionally, sedaDNA from cows, sheep, and goats indicated that animals were kept on the crannogs. At one of the Lough Yoan crannogs, the remains suggest that animals were butchered on-site in high quantities, likely for feasts. This supports previous research suggesting some crannogs were compounds used by the elite, perhaps for ceremonial occasions.

The team also found that supplying the crannogs with these resources came at a cost to the local area. Changes in pollen and plant sedaDNA indicated crannog construction was associated with deforestation, likely to provide building materials and clear farmland. Meanwhile, waste from the inhabitants polluted the lakes, leading to lasting impacts on lake ecosystems as early as the Iron Age.

As such, this research sheds new light on life on crannogs and also shows how this integrated method of studying sediment cores provides insight into otherwise hard-to-study waterside sites. This approach, incorporating sedaDNA, provides researchers with a powerful new tool.

The team are already expanding to study other crannogs. These unique structures are under threat from erosion and other hazards.

More information: Antony G. Brown et al, New integrated molecular approaches for investigating lake settlements in north-western Europe, Antiquity (2022). DOI: 10.15184/aqy.2022.70
Journal information: Antiquity 
Provided by Antiquity


Nanomaterial from the Middle Ages

PXCT 3D images of the 35-year old Zwischgold sample showing the addition of (a) Au, (b) Ag (with transparent voids), (c) “silver corrosion products”, and (d) other segments. (e) Stack plot of the depth profile of the single-layered section of the sample, aligned to the main layer of the Au segment. Credit: Nanoscale (2022). DOI: 10.1039/D2NR03367D

To gild sculptures in the late Middle Ages, artists often applied ultra-thin gold foil supported by a silver base layer. For the first time, scientists at the Paul Scherrer Institute PSI have managed to produce nanoscale 3D images of this material, known as Zwischgold. The pictures show this was a highly sophisticated medieval production technique and demonstrate why restoring such precious gilded artifacts is so difficult.

The samples examined at the Swiss Light Source SLS using one of the most advanced microscopy methods were unusual even for the highly experienced PSI team: minute samples of materials taken from an altar and wooden statues originating from the fifteenth century. The altar is thought to have been made around 1420 in Southern Germany and stood for a long time in a mountain chapel on Alp Leiggern in the Swiss canton of Valais.

Today it is on display at the Swiss National Museum (Landesmuseum Zürich). In the middle you can see Mary cradling Baby Jesus. The material sample was taken from a fold in the Virgin Mary's robe. The tiny samples from the other two medieval structures were supplied by Basel Historical Museum.

The material was used to gild the sacred figures. It is not actually gold leaf, but a special double-sided foil of gold and silver, where the gold can be ultra-thin because it is supported by the silver base. This material, known as Zwischgold (part-gold), was significantly cheaper than using pure gold leaf.

"Although Zwischgold was frequently used in the Middle Ages, very little was known about this material up to now," says PSI physicist Benjamin Watts: "So we wanted to investigate the samples using 3D technology which can visualize extremely fine details."

Although other microscopy techniques had been used previously to examine Zwischgold, they only provided a 2D cross-section through the material. In other words, it was only possible to view the surface of the cut segment, rather than looking inside the material. The scientists were also worried that cutting through it may have changed the structure of the sample.

The advanced microscopy imaging method used today, ptychographic tomography, provides a 3D image of Zwischgold's exact composition for the first time.

X-rays generate a diffraction pattern

The PSI scientists conducted their research using X-rays produced by the Swiss Light Source SLS. These produce tomographs displaying details in the nanoscale range—millionths of a millimeter, in other words.

"Ptychography is a fairly sophisticated method, as there is no objective lens that forms an image directly on the detector," Watts explains. Ptychography actually produces a diffraction pattern of the illuminated area, in other words an image with points of differing intensity.

By manipulating the sample in a precisely defined manner, it is possible to generate hundreds of overlapping diffraction patterns. "We can then combine these diffraction patterns like a sort of giant Sudoku puzzle and work out what the original image looked like," says the physicist. A set of ptychographic images taken from different directions can be combined to create a 3D tomogram.

The advantage of this method is its extremely high resolution. "We knew the thickness of the Zwischgold sample taken from Mary was of the order of hundreds of nanometers," Watts explains. "So we had to be able to reveal even tinier details."

The scientists achieved this using ptychographic tomography, as they report in their latest article in the journal Nanoscale. "The 3D images clearly show how thinly and evenly the gold layer is over the silver base layer," says Qing Wu, lead author of the publication.

The art historian and conservation scientist completed her Ph.D. at the University of Zurich, in collaboration with PSI and the Swiss National Museum. "Many people had assumed that technology in the Middle Ages was not particularly advanced," Wu comments. "On the contrary: this was not the Dark Ages, but a period when metallurgy and gilding techniques were incredibly well developed."

Secret recipe revealed

Unfortunately there are no records of how Zwischgold was produced at the time. "We reckon the artisans kept their recipe secret," says Wu. Based on nanoscale images and documents from later epochs, however, the art historian now knows the method used in the 15th century: first the gold and the silver were hammered separately to produce thin foils, whereby the gold film had to be much thinner than the silver.

Then the two metal foils were worked on together. "This required special beating tools and pouches with various inserts made of different materials into which the foils were inserted," Wu explains. This was a fairly complicated procedure that required highly skilled specialists.

"Our investigations of Zwischgold samples showed the average thickness of the gold layer to be around 30 nanometers, while gold leaf produced in the same period and region was approximately 140 nanometers thick," Wu explains. "This method saved on gold, which was much more expensive." At the same time, there was also a very strict hierarchy of materials: gold leaf was used to make the halo of one figure, for example, while Zwischgold was used for the robe.
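The quoted thicknesses imply roughly how much gold the technique saved. A back-of-the-envelope check, ignoring the silver backing and treating the figures as per-unit-area averages:

```python
# ~30 nm Zwischgold gold layer versus ~140 nm gold leaf (figures from the text).
zwischgold_nm = 30
gold_leaf_nm = 140
savings = 1 - zwischgold_nm / gold_leaf_nm
print(f"gold saved per unit area: {savings:.0%}")  # about 79%
```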

Because this material has less of a sheen, the artists often used it to color the hair or beards of their statues. "It is incredible how someone with only hand tools was able to craft such nanoscale material," Watts says. Medieval artisans also benefited from a unique property of gold and silver crystals when pressed together: their morphology is preserved across the entire metal film. "A lucky coincidence of nature that ensures this technique works," says the physicist.

Golden surface turns black

The 3D images do bring to light one drawback of using Zwischgold, however: the silver can push through the gold layer and cover it. The silver moves surprisingly quickly—even at room temperature. Within days, a thin silver coating covers the gold completely. At the surface the silver comes into contact with water and sulfur in the air, and corrodes.

"This makes the gold surface of the Zwischgold turn black over time," Watts explains. "The only thing you can do about this is to seal the surface with a varnish so the sulfur does not attack the silver and form silver sulfide." The artisans using Zwischgold were aware of this problem from the start. They used resin, glue or other organic substances as a varnish. "But over hundreds of years this protective layer has decomposed, allowing corrosion to continue," Wu explains.

The corrosion also encourages more and more silver to migrate to the surface, creating a gap below the Zwischgold. "We were surprised how clearly this gap under the metal layer could be seen," says Watts. Especially in the sample taken from Mary's robe, the Zwischgold had clearly come away from the base layer.

"This gap can cause mechanical instability, and we expect that in some cases it is only the protective coating over the Zwischgold that is holding the metal foil in place," Wu warns. This is a massive problem for the restoration of historical artifacts, as the silver sulfide has become embedded in the varnish layer or even further down.

"If we remove the unsightly products of corrosion, the varnish layer will also fall away and we will lose everything," says Wu. She hopes it will be possible in future to develop a special material that can be used to fill the gap and keep the Zwischgold attached. "Using ptychographic tomography, we could check how well such a consolidation material would perform its task," says the art historian.Maintaining the structure of gold and silver in alloys

More information: Qing Wu et al, A modern look at a medieval bilayer metal leaf: nanotomography of Zwischgold, Nanoscale (2022). DOI: 10.1039/D2NR03367D
Journal information: Nanoscale 
Provided by Paul Scherrer Institute 

 

Scientists hit their creative peak early in their careers, study finds


A new study provides the best evidence to date that scientists overall are most innovative and creative early in their careers.

Findings showed that, on one important measure, the impact of biomedical scientists' published work drops by between one-half to two-thirds over the course of their careers.

"That's a huge decline in impact," said Bruce Weinberg, co-author of the study and professor of economics at The Ohio State University.

"We found that as they get older, the work of biomedical scientists was just not as innovative and impactful."

But the reasons behind this trend of declining innovativeness make the findings more nuanced and show why it is still important to support scientists later in their careers, Weinberg said.

The study was published online Oct. 7, 2022 in the Journal of Human Resources.

Researchers have been studying the relationship between age or experience and innovativeness for nearly 150 years, but no consensus has emerged. Findings, in fact, have been "all over the map," Weinberg said.

"For a topic that so many people with so many approaches have studied for so long, it is pretty remarkable that we still don't have a conclusive answer."

One advantage of this study is that the authors had a huge dataset to work with—5.6 million biomedical science articles published over a 30-year period, from 1980 to 2009, and compiled by MEDLINE. These data include detailed information on the authors.

This new study measured the innovativeness of the articles by biomedical scientists using a standard method—the number of times other scientists mention (or "cite") a study in their own work. The more times a study is cited, the more important it is thought to be.

With detailed information on the authors of each paper, the researchers in this study were able to compare how often scientists' work was cited early in their careers compared to later in their careers.
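The comparison the researchers describe can be illustrated with a toy sketch: group each scientist's papers by career stage and compare average citation counts. All names and numbers below are hypothetical, not the study's data.

```python
# Toy sketch (illustrative only, not the authors' code): compare average
# citations of papers published early vs. late in each author's career.
from statistics import mean

# (author, years since first publication, citation count) per paper
papers = [
    ("a", 2, 40), ("a", 25, 15),
    ("b", 1, 60), ("b", 22, 20),
    ("c", 3, 10),   # a less-cited author who later leaves the field
]

early = [cites for _, yrs, cites in papers if yrs <= 10]
late = [cites for _, yrs, cites in papers if yrs > 10]

print(mean(early), mean(late))  # late-career papers draw fewer citations
```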

As they analyzed the data, Weinberg and his colleagues made a discovery that was key to understanding how innovation changes over a career.

They found that scientists who were the least innovative early in their careers tended to drop out of the field and quit publishing new research. It was the most productive, most important young scholars who continued to produce research 20 or 30 years later.

"Early in their careers, scientists show a wide range of innovativeness. But over time, we see selective attrition of the people who are less innovative," Weinberg said.

"So when you look at all biomedical scientists as a group, it doesn't look like innovation is declining over time. But the fact that the least innovative researchers are dropping out when they are relatively young disguises the fact that, for any one person, innovativeness tends to decline over their career."

Results showed that for the average researcher, a scientific article they published late in their career was cited one-half to two-thirds less often than an article published early in their careers.

But it wasn't just citation counts that suggest researchers were less innovative later in their career.

"We constructed additional metrics that captured the breadth of an article's impact based on the range of fields that cite it, whether the article is employing the best and latest ideas, citing the best and latest research, and whether the article is drawing from multiple disciplines," said Huifeng Yu, a co-author, who worked on the study as a Ph.D. student at the University at Albany, SUNY.

"These other metrics also lead to the same conclusion about declining innovativeness."

The findings showing selective attrition among less-innovative scientists can help explain why previous studies have had such conflicting results, Weinberg said.

Studies using Nobel Laureates and other eminent researchers, for whom attrition is relatively small, tend to find earlier peak ages for innovation. In contrast, studies using broader cross-sections of scientists don't normally find an early peak in creativity, because they don't account for the attrition.

Weinberg noted that attrition in the field may not relate only to innovativeness. Scientists who are women or from underrepresented minorities may not have had the opportunities they needed to succeed, although this study can't quantify that effect.

"Those scientists who succeeded probably did so through a combination of talent, luck, personal background and prior training," he said.

The findings suggest that organizations that fund scientists have to maintain a delicate balance between supporting youth and experience.

"Young scientists tend to be at their peak of creativity, but there is also a big mix with some being much more innovative than others. You may not be supporting the very best researchers," said Gerald Marschke, a co-author of the study and associate professor of economics at the University at Albany,

"With older, more experienced scientists, you are getting the ones who have stood the test of time, but who on average are not at their best anymore."


More information: Huifeng Yu et al, Publish or Perish: Selective Attrition as a Unifying Explanation for Patterns in Innovation over the Career, Journal of Human Resources (2022). DOI: 10.3368/jhr.59.2.1219-10630R1
Journal information: Journal of Human Resources

A new approach, not currently described by the Clean Air Act, could eliminate air pollution disparities


While air quality has improved dramatically over the past 50 years thanks in part to the Clean Air Act, people of color at every income level in the United States are still exposed to higher-than-average levels of air pollution.

A team led by researchers at the University of Washington wanted to know if the Clean Air Act is capable of reducing these disparities or if a new approach would be needed. The team compared two approaches that mirror main aspects of the Clean Air Act and a third approach that is not commonly used to see if it would be better at addressing disparities across the contiguous U.S. The researchers used national emissions data to model each strategy: targeting specific emissions sources across the U.S.; requiring regions to adhere to specific concentration standards; or reducing emissions in specific communities.

While the first two approaches—based on the Clean Air Act—didn't get rid of disparities, the community-specific approach eliminated exposure disparities and reduced pollution exposure overall.

The team published these findings Oct. 24 in the Proceedings of the National Academy of Sciences.

"In earlier research, we wanted to know which  were responsible for these disparities, but we found that nearly all sources lead to unequal exposures. So we thought, what's it going to take? Here, we tried three approaches to see which would be the best for addressing these disparities," said senior author Julian Marshall, a UW professor of civil and environmental engineering. "The two approaches that mirror aspects of the Clean Air Act were pretty weak at addressing disparities. The third approach, targeting emissions in specific locations, is not commonly done, but is something overburdened communities have been asking for for years."

Fine particulate matter pollution, or PM2.5, is less than 2.5 micrometers in diameter—about 3% of the diameter of a human hair. PM2.5 comes from vehicle exhaust; fertilizer and other agricultural emissions; electricity generation from fossil fuels; forest fires; and burning of fuels such as wood, oil, diesel, gasoline and coal. These tiny particles can lead to heart attacks, strokes and other diseases, and are estimated to be responsible for about 90,000 deaths each year in the U.S.

The researchers tested the three potential strategies using a tool called InMAP, which Marshall and other co-authors developed. InMAP models the chemistry and physics of PM2.5, including how it is formed in the atmosphere, how it dissipates and how wind patterns move it from one location to another. The team modeled these approaches with national emissions data from 2014 because it was the most recent data set available at the time of this study.

The researchers looked at how efficiently and effectively each approach reduced average exposure for all people and how well it eliminated the disparities for people of color.

While the emission source and concentration standards approaches were successful in reducing overall exposure across the country, these methods failed to address pollution disparities.

"Our optimization models what happens if we maximize the reductions in disparities. If an approach cannot address disparities even when optimized to do so, then any real-world implementation of the approach will also not address disparities," said lead author Yuzhou Wang, a doctoral student in civil and environmental engineering. "But we saw that even with less than 1% of emission reductions targeting specific locations, the pollution  that have persisted for decades were reduced to zero."

Implementing this location-specific approach would require additional work to identify which locations would be the best to target and working with the communities there to identify how to reduce emissions, the team said.

"Current regulations have improved average air pollution levels, but they have not addressed structural inequalities and often have ignored the voices and lived experiences of people in overburdened communities, including their requests to focus greater attention on sources impacting their communities," Marshall said. "These findings reflect historical experiences. Because of redlining and other racist urban planning from many decades ago, many pollution sources are more likely to be located in Black and brown communities. If we wish to address current inequalities, we need an approach that reflects and acknowledges this historical context."

Additional co-authors are Joshua Apte and Cesunica Ivey, both at the University of California, Berkeley; Jason Hill at the University of Minnesota; Regan Patterson at the University of California, Los Angeles; Allen Robinson at Carnegie Mellon University; and Christopher Tessum at the University of Illinois Urbana-Champaign.

More information: Yuzhou Wang et al, Location-specific strategies for eliminating US national racial-ethnic PM2.5 exposure inequality, Proceedings of the National Academy of Sciences (2022). DOI: 10.1073/pnas.2205548119