Thursday, June 19, 2025

 

Before dispersing out of Africa, humans learned to thrive in diverse habitats





Max Planck Institute of Geoanthropology
image: Humans learned to thrive in a variety of African environments before their successful expansion into Eurasia roughly 50,000 years ago.

Credit: Ondrej Pelanek and Martin Pelanek





Today, all non-Africans are known to descend from a small group of people who ventured into Eurasia around 50,000 years ago. However, fossil evidence shows that there were numerous failed dispersals before this time, which left no detectable traces in living people.

In a paper published in Nature this week, new evidence explains for the first time why those earlier migrations did not succeed. A consortium of scientists led by Prof. Eleanor Scerri of the Max Planck Institute of Geoanthropology in Germany and Prof. Andrea Manica of the University of Cambridge has found that before expanding into Eurasia 50,000 years ago, humans began to exploit different habitat types in Africa in ways not seen before.

“We assembled a dataset of archaeological sites and environmental information covering the last 120 thousand years in Africa. We used methods developed in ecology to understand changes in human environmental niches, the habitats humans can use and thrive in, during this time,” says Dr Emily Hallett of Loyola University Chicago, co-lead author of the study.

“Our results showed that the human niche began to expand significantly from 70 thousand years ago, and that this expansion was driven by humans increasing their use of diverse habitat types, from forests to arid deserts,” adds Dr Michela Leonardi of London’s Natural History Museum, the study’s other lead author.

“This is a key result,” explains Professor Manica. “Previous dispersals seem to have happened during particularly favourable windows of increased rainfall in the Saharo-Arabian desert belt, thus creating ‘green corridors’ for people to move into Eurasia. However, around 70,000-50,000 years ago, the easiest route out of Africa would have been more challenging than during previous periods, and yet this expansion was sizeable and ultimately successful.”

Many explanations for the uniquely successful dispersal out of Africa have been proposed, from technological innovations to immunities granted by admixture with Eurasian hominins. However, no new technological innovations are apparent at that time, and previous admixture events do not appear to have saved older human dispersals out of Africa.

Here the researchers show that humans greatly increased the breadth of habitats they were able to exploit within Africa before the expansion out of the continent. This increase in the human niche may have been a result of a positive feedback of greater contact and cultural exchange, allowing larger ranges and the breakdown of geographic barriers.

“Unlike previous humans dispersing out of Africa, those human groups moving into Eurasia after ~60-50 thousand years ago were equipped with a distinctive ecological flexibility as a result of coping with climatically challenging habitats,” says Prof. Scerri, “This likely provided a key mechanism for the adaptive success of our species beyond their African homeland.”

The research was supported by funding from the Max Planck Society, European Research Council and Leverhulme Trust.



 

Sculpting the surface of the water

Researchers at the University of Liège are revolutionising the handling of liquids and floating objects thanks to capillary action.

Peer-Reviewed Publication

University of Liège

image: Artistic topography

Credit: Université de Liège / M. Delens

Physicists at the University of Liège have succeeded in sculpting the surface of water by exploiting surface tension. Using 3D printing of closely spaced spines, they have combined menisci to create programmed liquid reliefs, capable of guiding particles under the action of gravity alone. This is a promising advance for microscopic transport and sorting, as well as marine pollution control.

Have you ever tried tilting a liquid in a glass? It's completely impossible. If you tilt the glass, the surface of the liquid will automatically return to the horizontal, except for a small, barely visible curvature that forms near the edge of the glass. This curvature is called a meniscus, and it is due to capillarity, a force acting on the millimetre scale that results from the surface tension of the liquid. What would happen if we could create lots of little menisci over a large surface? What if these small reliefs could add up to form slopes, valleys, or even entire liquid landscapes? This is exactly what scientists from the GRASP laboratory at the University of Liège, in collaboration with Brown University (USA), have succeeded in doing.
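The millimetre scale mentioned above is set by the capillary length, which balances surface tension against gravity and controls how far a meniscus extends from a wall or spike. A quick back-of-the-envelope check for water (the numerical values below are standard textbook figures, not from the study itself):

```python
import math

# Capillary length l_c = sqrt(gamma / (rho * g)) sets the scale
# over which a meniscus decays away from a wall or spike.
gamma = 0.072   # surface tension of water near room temperature, N/m
rho = 1000.0    # density of water, kg/m^3
g = 9.81        # gravitational acceleration, m/s^2

l_c = math.sqrt(gamma / (rho * g))
print(f"capillary length of water: {l_c * 1000:.2f} mm")  # ~2.7 mm
```

This is why the spines must be printed within a couple of millimetres of each other for their menisci to overlap.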

Drawing on its experience with liquids, and more specifically with liquid interfaces, and with access to cutting-edge 3D printing equipment, the GRASP team set about printing several 'models', several playgrounds, to validate their theory: 3D-printing conical spines close enough together to deform the surface of water on a large scale. "Each spike creates a meniscus around itself," explains physicist Megan Delens. "Following this logic, if we align them well and they are close enough together, we should see a sort of giant meniscus appear, resulting from the superposition and addition of each individual meniscus." The team found that by modifying each spine individually, the surface of the liquid no longer remains flat but forms a kind of 'programmed' liquid landscape. 'Programmed' because, by adjusting the height of the spines or the distance between them, the researchers were able to design liquid interfaces that follow all sorts of topographies: inclined planes, hemispheres, and much more complex shapes. They have even succeeded in recreating the Atomium in Brussels in liquid relief!
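The superposition idea in the quote can be sketched numerically. In the linearized theory, the deformation around a single thin spike decays over the capillary length; the sketch below uses a crude exponential decay as a stand-in for the exact axisymmetric (Bessel-function) solution, purely to illustrate how closely spaced spikes merge into one larger relief. All amplitudes and spacings here are illustrative assumptions, not the team's actual model:

```python
import math

L_C = 2.7e-3  # capillary length of water, m (decay scale of each meniscus)

def single_meniscus(r, amplitude=1e-4):
    """Stand-in profile: meniscus height decays over the capillary length.
    (The exact solution involves a modified Bessel function K0; an
    exponential keeps this illustration dependency-free.)"""
    return amplitude * math.exp(-abs(r) / L_C)

def surface_height(x, spikes):
    """Linear superposition: the total relief is the sum of the
    individual menisci created by each spike."""
    return sum(single_meniscus(x - xs) for xs in spikes)

# Two spikes 1 mm apart vs. a lone spike: halfway between the pair,
# the two menisci add, roughly doubling the surface deformation.
pair = [0.0, 1e-3]
mid = 0.5e-3
print(surface_height(mid, pair), single_meniscus(mid))
```

Varying the per-spike amplitude (set by spine height) and the spacing in such a superposition is the "programming" step: each choice reshapes the summed relief into a different liquid topography.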

A motorway for bubbles and microparticles

But that's not all. "This method also offers a new way of moving and sorting floating objects such as marbles, droplets or plastic particles," explains Professor Nicolas Vandewalle, physicist and director of the lab. "When the liquid surface slopes, the lighter objects rise thanks to buoyancy (Archimedes' principle), and the denser ones sink under their own weight, as if they were sliding down a hill of water." This completely passive approach could be used in micromanipulation, particle sorting, or cleaning liquid surfaces, for example to capture microplastics or oil droplets on the surface of water.

Future research could look at more advanced ways of making the small tips move, for example by using materials that react to magnetic fields or that can change shape. "The idea would be to control the shape of the liquid surface in real time. These advances would make this method even more useful for developing innovative new technologies in microfluidics," concludes Megan Delens.

 

Prescribing fewer antibiotics might not be enough to combat threat of 'superbugs,' says new research




University of Bath
image: Dr Nicola Ceolotto, one of the co-authors of the paper.

Credit: University of Bath





Antimicrobial resistance is still spreading in the environment despite a reduction in the amounts of antibiotic drugs prescribed, according to a new study led by the University of Bath. Researchers warn that multiple approaches will be required to tackle the increasing threat of antimicrobial resistance to public health.

Antimicrobial resistance (AMR) happens when bacteria evolve over time and no longer respond to treatment with antibiotics. The World Health Organization has highlighted it as one of the world’s biggest killers, associated with over five million deaths per year.

AMR can develop through several routes: the over-use or misuse of antibiotics to treat or prevent bacterial infections; the use of antibiotics in farm animals to improve meat production; and the direct acquisition of resistance by bacteria swapping genes with resistant microbes in the environment.

Researchers from the University’s Department of Chemistry, Centre of Excellence in Water-Based Early-Warning Systems for Health Protection (CWBE) and Institute of Sustainability and Climate Change worked with Wessex Water to track the use of antibiotics and the presence of genes linked to AMR in the environment by analysing wastewater.

They took samples from four wastewater treatment plants in southwest England over two years during the COVID-19 pandemic and compared them with previous data collected before 2019.

They matched these data with the number of antibiotic prescriptions over the same period. They found that despite a seasonal drop in the amount of antibiotics prescribed in 2017–19, and lower amounts of antibiotic drugs identified in wastewater, there was no corresponding drop in the levels of AMR genes in the environment.

In 2020, a significant reduction in both antibiotics and AMR genes was observed during lockdowns, as COVID pandemic social distancing measures reduced the spread of resistant bacteria. After lockdowns, when social interactions increased, antibiotic prescriptions and their presence in wastewater rose again, as did AMR genes, indicating increased pathogen spread by infected individuals.

The study is published in the Journal of Global Antimicrobial Resistance.

Professor Barbara Kasprzyk-Hordern, Director of CWBE, said: “The spread of antimicrobial resistance is a huge threat to all our lives – we rely on antibiotics for treating common infections and to safely carry out surgical procedures.

“The main focus globally on combatting AMR has been to reduce the amount of antibiotics used, but our research findings show that this alone might not be enough to tackle the problem.

“Once resistance genes are out there in the environment, they can be transferred between bacteria, making more and more of them resistant to treatment with antibiotics.

“This is really worrying because we had previously assumed that less usage would result in less AMR, but our results show the problem is more complex than that.”

The researchers suggest that governments and policymakers must take a ‘One Health’ approach to tackling AMR – not just looking at how antibiotics are used in human health, but also how they are used in animals and the effects of antibiotics on the wider environment.

The researchers will tackle this and other urgent public health issues while working together with partners across academia, government organisations and industry, in the Centre of Excellence in Water-Based Early-Warning Systems for Health Protection that launched in April 2025.

They are establishing the first living-lab facility that will enable longitudinal studies spanning from early warning of pathogen exposure through to chemical exposure and associated health outcomes.

Dr Like Xu, first author of the study, said: “Antimicrobial resistance is a growing concern, as antibiotics and antibiotic-resistant genes persist in the environment, leading to serious and widespread issues.

“Our work shows that wastewater-based epidemiology is an innovative and cost-effective monitoring tool that can be used to understand antibiotics usage and how antibiotic-resistant genes spread.

“Through wastewater analysis, this approach helps identify new resistance patterns, understand their transmission and establish baselines at community level.

“This evidence can support decision makers in developing coordinated interventions and assessing their effectiveness in near-real time.”

More information on wastewater-based epidemiology: Tracking the health of the nation through wastewater.


Researchers analysed wastewater for antibiotic resistance genes


 

From single cells to complex creatures: New study points to origins of animal multicellularity




Researchers at UChicago analyze genetic data and protein sequences to find key innovations that allowed modern, multicellular animals to emerge




University of Chicago




Animals, from worms and sponges to jellyfish and whales, contain anywhere from a few thousand to tens of trillions of nearly genetically identical cells. Depending on the organism, these cells arrange themselves into a variety of tissues and organs, such as a gut, muscles, and sensory systems. While not all animals have each of these tissues, they do all have one tissue, the germline, that produces sperm or eggs to propagate the species.

Scientists don’t completely understand how this kind of multicellularity evolved in animals. Cell-cell adhesion, or the ability for individual cells to stick to each other, certainly plays a role, but scientists already know that the proteins that serve these functions evolved in single-celled organisms, well before animal life emerged.

Now, research from the University of Chicago provides a new view into key innovations that allowed modern, multicellular animals to emerge. By analyzing the proteins predicted from the genomes of many animals (and close relatives to the animal kingdom), researchers found that animals evolved a more sophisticated mechanism for cell division that also contributes to developing multicellular tissues and the germline.

“This work strongly suggests that one of the early steps in the evolution of animals was the formation of the germline through the ability of cells to stay connected by incomplete cytokinesis,” said Michael Glotzer, PhD, Professor of Molecular Genetics and Cell Biology at UChicago and author of the new study. “The evolution of these three proteins allowed both multicellularity and the ability to form a germline: two of the key features of animals.”

Positioning the dividing line

Cytokinesis, the final step of cell division, is the process by which one cell physically separates into two distinct daughter cells. Many of the proteins involved in cytokinesis are ancient, present long before the first Metazoa arose about 800 million years ago.

Glotzer has been studying animal cell division for several decades, focusing on how cells determine where to divide. In animal cells, a structure called the mitotic spindle segregates the chromosomes before the cells divide; it also dictates the position where cell division occurs. Glotzer and his team homed in on a set of three proteins—Kif23, Cyk4, and Ect2—that bind to each other and the spindle, and which are directly involved in establishing the division plane. Close relatives of these proteins had only been found previously in animals.

Two of these proteins, Kif23 and Cyk4, form a stable protein complex called centralspindlin that Glotzer and his colleagues discovered more than 20 years ago. Not only does centralspindlin contribute to division plane positioning, but it also generates a bridge between the two incipient daughter cells.

The cells that make up non-germline tissues and organs are called somatic cells, which are not passed on to the next generation. Germline cells are special because they can become any cell type. During the development of sperm and eggs, these cells also recombine the chromosomes they inherited from their parents, generating genetic diversity. While centralspindlin-dependent bridges are generally severed in somatic cells, the germlines of most animals have cells that remain connected by stable bridges.

Tracking down the proteins

Given the recent explosion in genome sequence data now available for a wide range of animals, Glotzer first wanted to determine if the two proteins that make up the centralspindlin complex, as well as Ect2, the regulatory protein that binds to it, were present and well conserved in all animals. During his analysis for this study, which was published in Current Biology, he found that all branches of animals have all three of these proteins.

Studies of these proteins in species commonly used in the lab have identified conserved sequence motifs linked to their known functions. Using Google DeepMind’s AlphaFold AI platform (developed by UChicago alum and recent Nobel laureate John Jumper), he was able to predict the interactions among these different proteins and found that every interaction is likely conserved across all animals. This suggests that these proteins were all in place at the beginning of the animal kingdom more than 800 million years ago and have not undergone any dramatic changes since.

Next, Glotzer wondered whether any related proteins could be found in single-celled organisms. He identified somewhat related proteins in choanoflagellates, the group of single-celled creatures most closely related to animals. AlphaFold predicted that some of them can form a complex somewhat like centralspindlin. Though related, these complexes are clearly distinct from centralspindlin, and they lack the sequences that allow Ect2 to bind to the structure. Remarkably, some choanoflagellate species that have this complex can also form colonies via incomplete cytokinesis.

“Pre-metazoan cells have mechanisms of dividing and separating, probably with some themes and variations. Then this protein complex allowed cells to stop at the stage just before separation,” Glotzer said. “Maybe multicellular life evolved because of a genetic change that prevented cells from fully separating.”

“A mutation that disrupted the assembly of centralspindlin is what allowed my colleagues and me to find these proteins in the first place, more than 25 years ago,” he continued. “And it appears that the evolution of this exact same region contributed to the evolution of animal life on the planet, which is mind blowing.”

The study, “A key role for centralspindlin and Ect2 in the development of multicellularity and the emergence of Metazoa,” was supported by the National Institutes of Health.

 

Doctors need better guidance on AI



To avoid burnout and medical mistakes, health care organizations should train physicians in AI-assisted decision-making




University of Texas at Austin





Artificial intelligence is everywhere — whether you know it or not. In many fields, AI is being touted as a way to help workers at all levels accomplish tasks, from routine to complicated. Not even physicians are immune.

But AI puts doctors in a bind, says Shefali Patil, associate professor of management at Texas McCombs, in a recent article. Health care organizations are increasingly pushing physicians to rely on assistive AI to minimize medical errors, but give them little direct support in how to use it.

The result, Patil says, is that physicians risk burnout, as society decides whom to hold accountable when AI is involved in medical decisions. Paradoxically, they also face greater chances of making medical mistakes. This interview has been edited for length and clarity.

Your article discusses the phenomenon of superhumanization. Unlike the rest of us, doctors are thought to have extraordinary mental, physical, and moral capacities, and they may be held to unrealistic standards. What pressures does this place on medical professionals?

AI is generally meant to aid and enhance clinical decisions. When an adverse patient outcome arises, who gets the blame? It’s up to the physician now to decide whether to take the machine’s recommendation and to anticipate what will happen if there’s an adverse patient result.

There are two possible types of errors: false positives and false negatives. With a false positive, the illness looks serious and the doctor carries out treatments that turn out to be unnecessary. With a false negative, the patient is seriously ill and the doctor doesn’t catch it.

The doctor has to figure out how to use AI software systems but has no control over the systems that the hospital buys. It all has to do with liability. There are no tight regulations around AI.

AI diagnoses, which are supposed to make doctors’ lives easier and reduce medical errors, are potentially having an opposite effect. Why?

The promise for AI is to alleviate some of the decision-making pressures on physicians. The promise is to make their jobs easier and lead to less burnout.

But these come with liability issues. AI vendors do not reveal the way the algorithms actually work. There’s limited transparency on how the algorithms are making a decision, so it’s difficult to calibrate when to use AI and when not to.

If you don’t use it, and there’s a mistake, you’ll be asked why you did not take the AI recommendation. Or, if AI makes a mistake, you’re held responsible, because it’s not a human being. That’s the tension.

What risks does this situation pose to patient care?

People want a physician who’s competent and decisive without feeling a sense of analysis paralysis because of information overload. Decision-making uncertainty and anxiety cause physicians to second-guess themselves. That leads to poor decision-making and, subsequently, poor patient care.

You predict that medical liability will depend on who people believe is at fault for a mistake. How could that expectation increase the risk of doctor burnout and mistakes?

Decision-making research suggests that people who suffer from performance anxiety and constantly second-guess themselves are not thinking logically through decisions. They’re questioning their own judgments.

That’s a very strong, accepted finding in the field of organizational behavior. It’s not specific to doctors, but we’re extrapolating to them.

What strategies can health care organizations use to alleviate those pressures and support physicians in using AI?

One of the big things that needs to be implemented with medical education is simulation training. It can be done as part of continuing education. It’s going to be very significant, because this is the future of medicine. There’s no turning back.

Learning how these systems actually work and understanding how they update and make a recommendation based on medical literature and past case outcomes is important in effective decision-making.

What do you mean when you write about a “regulatory gap”?

We mean that legal regulations always lag behind technological advances. You’re never going to be able to get fair and effective regulations that meet everybody’s interests. The liability risk always happens. The perception of blame is always after the fact. That’s why we’re trying to say the onus should be on administrators to help physicians deal with this issue.

Can you offer some practical advice for doctors, suggesting some do’s and don’ts for using AI assistance?

Right now, there is very little assistance from hospital administrators in teaching physicians how to calibrate the use of AI. More needs to be done.

Administrators need to implement more practical support that heavily relies on feedback from clinicians. At the moment, administrators don’t get that feedback. Performance outcomes, such as what was useful and what was not, need to be tracked.

The article, “Calibrating AI Reliance – A Physician’s Superhuman Dilemma,” co-authored with Christopher Myers of Johns Hopkins University and Yemeng Lu-Myers of Johns Hopkins Medicine, is published in JAMA Health Forum.