
Wednesday, November 12, 2025

The Super Predator: How Humans Became The Animal Kingdom’s Most Feared Hunters – OpEd

November 12, 2025
By John Divinagracia

Humanity’s evolution into a super predator has reshaped ecosystems and instilled a primal fear in much of the animal kingdom.

Many researchers consider hunting critical to human evolution, arguing that several of the characteristics that distinguish humans from our closest living relatives, the apes, including our large brain size, may have partly resulted from our adaptation to hunting.

Over time, however, the need to hunt for survival has been replaced by greed, leading to the exploitation of natural resources, which is destroying the environment and causing the extinction of thousands of species.

Wildlife populations declined by an average of 60 percent between 1970 and 2014, according to the Living Planet 2018 report by the World Wildlife Fund. Referring to the report, the Guardian stated that “the vast and growing consumption of food and resources by the global population is destroying the web of life, billions of years in the making, upon which human society ultimately depends for clean air, water and everything else.”
Hunting for Survival

The San People, also known as the Bushmen of the Kalahari Desert in Africa, have for generations employed persistence or endurance hunting to chase down prey such as the kudu. Groups of three or more men find a herd and scatter it, targeting the weakest, slowest, or heaviest animal in the herd. During the hunt, one of the Bushmen serves as the main runner, who finishes the last legs of the hunt by tracking and finally killing the prey. Although the Bushmen do employ more familiar tactics like ambushing, shooting poisoned darts, and throwing spears, persistence hunting has been a standby of the San People in an environment that favors human endurance and stamina.

Over the years, however, persistence hunting has become a topic of debate. In a 2007 article published in the Journal of Human Evolution, Henry T. Bunn, a paleoanthropologist from the University of Wisconsin-Madison, and Travis Rayne Pickering, a professor of anthropology at UW-Madison, questioned the assumption that endurance running was “regularly employed” during hunting and scavenging. They argued that humans would have relied more on their brains than their legs to hunt.

Bunn and Pickering studied a pile of bones found in the Olduvai Gorge in Tanzania, dating back to 1.8 million to 2 million years ago—which were unearthed by paleontologist Mary Leakey—and discovered that “most of the animals in the collection were either young adults or adults in their prime… To Bunn and Pickering, that suggested the animals hadn’t been chased down. And because there were butchering marks on the bones with the best meat, it was also safe to assume that humans hadn’t scavenged animal carcasses after being killed by other predators… Instead, Bunn believes ancient human hunters relied more on smarts than on persistence to capture their prey,” according to a 2019 article in Undark magazine.

On the other side of the debate are researchers such as Eugène Morin, an evolutionary anthropologist at Trent University in Canada, and Bruce Winterhalder of the Department of Anthropology and the Graduate Group in Ecology at the University of California, Davis. In a 2024 article published in the journal Nature Human Behaviour, they reported scouring ethnographic records and identifying almost 400 cases of long-distance running used for hunting around the world. Their research on energy expenditure shows that “running can be more efficient than walking for pursuing prey,” stated a Smithsonian magazine article.

Many other scholars have written about the “locomotor endurance” that humans possess compared to other animals, as well as the anatomical advantages of long legs, Achilles tendons, arched feet, and large, stress-bearing joints in our legs, which collectively contribute to our ability to run long distances.

Whether humans originally evolved as persistence hunters is a matter of debate. What is undeniable is that humans are among the most deadly predators on Earth. “From agricultural feed to medicine to the pet trade, modern society exploits wild animals in a way that surpasses even the most voracious, unfussy wild predator,” said a Smithsonian magazine article.
Ways to Kill

A 2023 study, “Humanity’s Diverse Predatory Niche and Its Ecological Consequences,” published in Communications Biology, describes human predation as a commercial enterprise rather than a necessity:

We consider predation by humans broadly—and from the perspective of effects on prey populations—as any use that removes individuals from wild populations, lethally or otherwise… [ranging] from removal of live individuals for the pet trade, to harvesting by societies that rely heavily on hunting and fishing, to globalized, commercial fishing and trade of vertebrates, and interactions among these activities.

When a shark, a tiger, a boa constrictor, or even the rusty-spotted cat of South Asia kills another animal, its primary aim is survival. As carnivores, these animals must eat meat, and therefore, they must kill. But humans go beyond the necessary. In our years of remodeling landscapes and industrializing the wilderness, we have pushed animals toward extinction. Our activities have had their greatest impact on the ocean, where 43 percent of Earth’s marine species are exploited. While these species are killed for several purposes, 72 percent of marine and freshwater fish species are used for food. Taxonomically, birds were the most predated group, with 46 percent mainly being used as pets or for other “recreational pursuits.”

Meanwhile, “in the terrestrial realm, use as pets is almost twice as common (74 percent) as food use (39 percent),” according to the 2023 study. Sport hunting and other forms of activities (i.e., for trophies) accounted for 8 percent of the use of exploited terrestrial species.

Due to humanity’s alarming exploitation of 14,663 species, we are driving 39 percent of these species toward extinction. “We exploit around a third of all wild animals for food, medicines, or to keep as pets… That makes us hundreds of times more dangerous than natural predators such as the great white shark,” according to a 2023 BBC article, referring to an analysis by scientists. The article further stated that we were entering the Anthropocene, “the period during which human activity has been the dominant influence on climate and the environment.”

Today, human-induced climate crises and environmental damage are the forces that pulverize bones to dust. An analysis of peer-reviewed literature published between 2012 and 2020 revealed 99 percent consensus that human activity shapes climate change—a collective power that no other species on Earth has ever had. Not even the dinosaurs would have been able to out-roar the din of factories and exhaust pipes.
Fearful Symmetry

Human beings have been termed “super predators,” surpassing other famous predators in the number of prey they kill. This has led to animals fearing us. “Consistent with humanity’s unique lethality, a growing number of playback experiments have demonstrated that fear of humans far exceeds that of the non-human apex predator in the system. In Africa, 95 percent of carnivore and ungulate species (e.g., giraffes, leopards, hyenas, zebras, rhinos, and elephants) in the Greater Kruger National Park ran more or faster on hearing humans compared with hearing lions,” according to a study in Proceedings of the Royal Society B: Biological Sciences.

An atavistic fear also afflicts animals when they see a human. We might be slower and weaker creatures compared to bears and lions, but to these animals, we look monstrous. “There is a threat level that comes from being bipedal,” said John Hawks, a paleoanthropologist at the University of Wisconsin-Madison, to Live Science. So when, say, a kudu in the Kalahari Desert spots one of the San People or Bushmen jogging or walking toward it, the kudu bolts.

This ecology of fear has shifted the predator-prey paradigms, as ungulates such as white-tailed deer and moose will “shield” their offspring by giving birth close to houses and villages, utilizing the local predators’ fear of humans to create a safe environment for their offspring to grow up.

Research on animals fearing humans shows that “human impacts on animal behavior are even more wide-reaching than we thought. Perhaps the key point is that we need to identify the most disturbance-sensitive species and engineer protections for them that allow freedom from this pervasive fear,” said wildlife researcher Hugh Webster, as quoted by Phoebe Weston, a biodiversity writer for the Guardian.

In the animal kingdom, humans now occupy a unique role, surpassing all other predators in lethality and reshaping the natural order through unchecked consumption and industrial-scale exploitation. Our presence instills a pervasive fear across ecosystems, altering animal behavior and disrupting millennia-old predator-prey dynamics. Yet this super predator status comes with unprecedented responsibility. Unlike other apex predators, humans possess the awareness, technology, and moral capacity to recognize the consequences of our actions and to mitigate the harm we inflict. The question before us is whether we will continue to exploit the web of life for short-term gain or harness our intelligence and ingenuity to protect it—ensuring that future generations inherit a planet where humans are not feared as destroyers, but remembered as stewards of the living world.


Author Bio: John Divinagracia is a writer and novelist. He is the author of It’s Always Snowing in Iberia (2021) and was a fellow at the 19th Ateneo National Writers Workshop in 2022. He is a writer at WorldAtlas and a contributing editor and author at the Observatory. He holds a cum laude degree in creative writing from Ateneo de Manila University in the Philippines.


Credit Line: This article was produced by Earth | Food | Life, a project of the Independent Media Institute.



Saturday, October 04, 2025

 

The ‘big bad wolf’ fears the human ‘super predator’ – for good reason




University of Western Ontario
Image: Western University professor Liana Zanette sets up an automated camera-speaker system. Credit: Michael Clinchy





Fear of the fabled ‘big bad wolf’ has dominated the public perception of wolves for millennia and strongly influences current debates concerning human-wildlife conflict. Humans both fear wolves and, perhaps more importantly, are concerned about wolves losing their fear of humans – because if they fear us, they avoid us and that offers protection.

A new Western University study shows that even where laws are in place to protect them, wolves remain fully fearful of the human ‘super predator.’

These findings by Western biology professor Liana Zanette – in collaboration with one of Europe’s leading wolf experts, Dries Kuijper from the Polish Academy of Sciences, and others – were published today in Current Biology.

Zanette and her colleagues conducted an unprecedented experiment across a vast 1,100 sq. km area in north-central Poland, demonstrating that wolves fully retain their fear of humans, even where laws exist to protect them. To conduct their experiment, the team deployed hidden, automated camera-speaker systems at the intersection of paths in the Tuchola Forest that, when triggered by an animal passing within a short distance (10 metres), filmed the response of the animal to hearing either humans speaking calmly in Polish, dogs barking or non-threatening controls (bird calls).

Wolves were more than twice as likely to run, and twice as fast to abandon the site, after hearing humans compared to control sounds (birds). The same was true of wolves’ prey (deer and wild boar).

By demonstrating experimentally that wolves fear humans, the study verifies that fear of humans, who are predominantly active in the daytime, forces wolves to restrict their activities to the night. Wolves were 4.9 times more nocturnal (active at night) than humans. In fact, wolves are not just nocturnal where Zanette and her team did their study, but everywhere humans are present, as shown in a recent continent-wide survey. This new experiment establishes that the reason is that wolves everywhere are fearful of humans.
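To make the two headline measures concrete, here is a minimal sketch in Python of how flight responses and a nocturnality ratio might be tallied from playback-triggered camera records; the record fields, night-time window, and example data are hypothetical and not taken from the study.

```python
from dataclasses import dataclass

# Hypothetical playback-trigger records; the real study logged video responses
# to human speech, dog barks, and bird-call controls at hidden camera-speaker units.
@dataclass
class Trigger:
    species: str       # e.g. "wolf", "deer", "boar"
    treatment: str     # "human", "dog", or "control"
    fled: bool         # did the animal run after the playback?
    hour: int          # hour of day (0-23) when the unit was triggered

records = [
    Trigger("wolf", "human", True, 23),
    Trigger("wolf", "control", False, 2),
    Trigger("wolf", "human", True, 1),
    Trigger("wolf", "control", True, 22),
    # ... many more records in a real dataset
]

def flight_probability(recs, species, treatment):
    """Share of triggers of a given treatment after which the animal fled."""
    hits = [r for r in recs if r.species == species and r.treatment == treatment]
    return sum(r.fled for r in hits) / len(hits) if hits else float("nan")

def nocturnal_fraction(recs, species):
    """Fraction of a species' detections falling in a crude 20:00-04:59 night window."""
    night = set(range(20, 24)) | set(range(0, 5))
    hits = [r for r in recs if r.species == species]
    return sum(r.hour in night for r in hits) / len(hits) if hits else float("nan")

p_human = flight_probability(records, "wolf", "human")
p_control = flight_probability(records, "wolf", "control")
print(f"wolves fled {p_human / p_control:.1f}x more often to humans than to controls")
# The 4.9x nocturnality figure would divide the wolves' night-time fraction by the humans':
# ratio = nocturnal_fraction(records, "wolf") / nocturnal_fraction(records, "human")
```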

“Wolves are not exceptional in fearing humans – and they have good reason to fear us,” said Zanette, a renowned wildlife ecologist. “Global surveys show humans kill prey at much higher rates than other predators and kill large carnivores like wolves at on average nine times the rate they die naturally, making humans a ‘super predator.’”

Consistent with humanity’s unique lethality, growing experimental evidence from every inhabited continent demonstrates that wildlife worldwide, including other large carnivores like leopards, hyenas and cougars, fear the human ‘super predator’ above all else.

Legally protected but still fearful 

“Legal protection does not change wolves’ fear of humans because legal protection does not mean not killing wolves, it means not exterminating them. This is an important distinction,” said Zanette.

Humans remain very much a ‘super predator’ of wolves even where wolves are strictly protected, such as in the European Union, where humans legally and illegally kill wolves at seven times the rate they die naturally. France, for example, allows up to 20 per cent of the wolf population to be legally killed every year. Human killing of wolves in North America is comparable.

“At these rates, any truly fearless wolf that did not avoid humans would very soon be a dead wolf,” said Zanette.

Legal protection leading to fearless wolves – not scientifically supported

Wolves are now reoccupying areas in Europe and North America where they had been exterminated, leading to increased human-wolf encounters. This increase in encounters has been attributed to legal protection allowing the emergence of fearless wolves, but these new experimental results demonstrate this assumption is not scientifically supported.

“For wolves – like all creatures great and small – fear is primarily about food, specifically, how to avoid becoming food while trying to find food. Focusing on this fundamental risk-reward trade-off is critical,” said Zanette. “The certainty that wolves fear humans means we need to re-focus attention on what counterbalances this fear, rather than whether wolves are fearless.”

Humans are both uniquely lethal and unique in being normally surrounded by super-abundant, super high-quality food. Results of the study strongly indicate any apparently fearless wolf is actually a fearful wolf risking proximity to humans to get a bite of our ‘superfoods.’

The real problem, said Zanette, is how to keep the wolf from our human food.  

“The critical significance of our study lies in re-focusing the discourse on human-wolf conflict toward public education on food storage, garbage removal and livestock protection – reducing wolf access to human foodstuffs,” said Zanette. “What our study establishes is that there is no alternate problem to contend with. There is no ‘big bad wolf’ unafraid of the human ‘super predator.’”

Image: Wolf in Poland's Tuchola Forest. Credit: Dries Kuijper


Thursday, September 25, 2025

Rice anthropologist among first to use AI to uncover new clues that early humans were prey, not predators

Were early humans hunters — or hunted?




Rice University

Image: Fossil evidence showing leopard bite marks embedded in a hominin skull. Credit: Manuel Domínguez-Rodrigo


Were early humans hunters — or hunted?

For decades, researchers believed that Homo habilis — the earliest known species in our genus — marked the moment humans rose from prey to predators. They were thought to be the first stone tool users and among the earliest meat eaters and hunters based on evidence from early archaeological sites.

But fossils of another early human species — African Homo erectus — show they lived alongside H. habilis about 2 million years ago. That raised a new mystery: Which of these two species was actually making tools and eating the meat of hunted animals? Most anthropologists long suspected H. habilis was responsible, which would have placed them in a dominant predatory role. 

New findings from a team led by Rice University anthropologist Manuel Domínguez-Rodrigo, working through a partnership between Rice and the Archaeological and Paleontological Museum of Madrid via the Institute of Evolution in Africa (IDEA), which he co-directs with Enrique Baquedano, challenge that view, revealing that these early humans were still preyed upon by carnivores, likely leopards. The work is published in the Annals of the New York Academy of Sciences.

“We discovered that these very early humans were eaten by other carnivores instead of mastering the landscape at that time,” Domínguez-Rodrigo said.

The breakthrough was made possible by applying artificial intelligence (AI) to fossil analysis, giving researchers insights they could not have reached with traditional methods alone. Domínguez-Rodrigo is among the first anthropologists to use AI for taxon-specific analysis of bone surface damage — training computer vision models to recognize the microscopic tooth mark patterns left by different predators.

“Human experts have been good at finding modifications on prehistoric bones,” he said. “But there were too many carnivores at that time. AI has opened new doors of understanding.”

His team trained deep learning models to distinguish bone damage left by leopards, lions, hyenas, crocodiles and wolves. When the models analyzed marks on H. habilis fossils from Olduvai Gorge in Tanzania, they consistently identified leopard bite marks with high confidence.
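The taxon-specific analysis is, at its core, image classification of bone-surface marks. As a rough illustration only (not the authors' actual pipeline), a transfer-learning classifier might be set up along these lines in Python; the folder layout, class labels, and hyperparameters are assumptions made for the sketch.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical folder layout: marks/train/<carnivore>/<image>.png, one folder
# per taxon (e.g. leopard, lion, hyena, crocodile, wolf); not the real dataset.
tfm = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),   # micrographs are often grayscale
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("marks/train", transform=tfm)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=16, shuffle=True)

# Fine-tune a standard backbone to recognise tooth-mark patterns per taxon.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                 # small number of epochs for the sketch
    for images, labels in train_dl:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()

# At inference, softmax scores give a per-taxon confidence for each mark, which is
# how statements like "leopard bite marks, with high confidence" could be framed.
```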

“AI is a game changer,” Domínguez-Rodrigo said. “It’s pushing methods that have been stable for 40 years beyond what we imagined. For the first time, we can pinpoint not just that these humans were eaten but by whom.”

The finding challenges a long-standing idea about when and what type of humans began to dominate their environment, showing that even as their brains were beginning to grow, they were still vulnerable.

“The beginning of the human brain doesn’t mean we mastered everything immediately,” Domínguez-Rodrigo said. “This is a more complex story. These early humans, these Homo habilis, were not the ones responsible for that transformation.”

He said it’s a reminder that human evolution wasn’t a single leap from prey to predator but a long, gradual climb and that H. habilis may not have been the turning point researchers once believed.

Domínguez-Rodrigo added that the methods developed for this study could unlock discoveries across anthropology, allowing researchers to analyze other early human fossils in new ways. The work is part of a growing collaboration between Rice and IDEA, where his team is based.

“This is a pioneer center in the use of artificial intelligence to the past,” he said. “It’s one of the first places using AI for paleontological and anthropological research.” 

Domínguez-Rodrigo said he hopes this discovery is just the beginning. By applying AI to other fossils, he believes researchers can map when humans truly rose from prey to predator and uncover new chapters in our evolutionary story that have long been hidden.

“It’s extremely stimulating to be the first one to see something for the first time. When you uncover sites that have been hidden from the human eye for more than 2 million years, you’re contributing to how we reconstruct who we are. It’s a privilege and very encouraging.”

The study was co-authored by Marina Vegara Riquelme and Enrique Baquedano, and was supported by the Spanish Ministry of Science and Innovation, the Spanish Ministry of Universities and the Spanish Ministry of Culture.

AI system learns from many types of scientific information and runs experiments to discover new materials



The new “CRESt” platform could help find solutions to real-world energy problems that have plagued the materials science and engineering community for decades.




Massachusetts Institute of Technology





Machine-learning models can speed up the discovery of new materials by making predictions and suggesting experiments. But most models today only consider a few specific types of data or variables. Compare that with human scientists, who work in a collaborative environment and consider experimental results, the broader scientific literature, imaging and structural analysis, personal experience or intuition, and input from colleagues and peer reviewers.

Now, MIT researchers have developed a method for optimizing materials recipes and planning experiments that incorporates information from diverse sources like insights from the literature, chemical compositions, microstructural images, and more. The approach is part of a new platform, named Copilot for Real-world Experimental Scientists (CRESt), that also uses robotic equipment for high-throughput materials testing, the results of which are fed back into large multimodal models to further optimize materials recipes.

Human researchers can converse with the system in natural language, with no coding required, and the system makes its own observations and hypotheses along the way. Cameras and visual language models also allow the system to monitor experiments, detect issues, and suggest corrections.

“In the field of AI for science, the key is designing new experiments,” says Ju Li, School of Engineering Carl Richard Soderberg Professor of Power Engineering. “We use multimodal feedback — for example information from previous literature on how palladium behaved in fuel cells at this temperature, and human feedback — to complement experimental data and design new experiments. We also use robots to synthesize and characterize the material’s structure and to test performance.”

The system is described in a paper published in Nature. The researchers used CRESt to explore more than 900 chemistries and conduct 3,500 electrochemical tests, leading to the discovery of a catalyst material that delivered record power density in a fuel cell that runs on formate salt to produce electricity.

Joining Li on the paper as first authors are PhD student Zhen Zhang, Zhichu Ren PhD ’24, PhD student Chia-Wei Hsu, and postdoc Weibin Chen. Their coauthors are MIT Assistant Professor Iwnetim Abate; Associate Professor Pulkit Agrawal; JR East Professor of Engineering Yang Shao-Horn; MIT.nano researcher Aubrey Penn; Zhang-Wei Hong PhD ’25, Hongbin Xu PhD ’25; Daniel Zheng PhD ’25; MIT graduate students Shuhan Miao and Hugh Smith; MIT postdocs Yimeng Huang, Weiyin Chen, Yungsheng Tian, Yifan Gao, and Yaoshen Niu; former MIT postdoc Sipei Li; and collaborators including Chi-Feng Lee, Yu-Cheng Shao, Hsiao-Tsu Wang, and Ying-Rui Lu.

A smarter system

Materials science experiments can be time-consuming and expensive. They require researchers to carefully design workflows, make new material, and run a series of tests and analyses to understand what happened. Those results are then used to decide how to improve the material.

To improve the process, some researchers have turned to a machine-learning strategy known as active learning to make efficient use of previous experimental data points and explore or exploit those data. When paired with a statistical technique known as Bayesian optimization (BO), active learning has helped researchers identify new materials for things like batteries and advanced semiconductors.

“Bayesian optimization is like Netflix recommending the next movie to watch based on your viewing history, except instead it recommends the next experiment to do,” Li explains. “But basic Bayesian optimization is too simplistic. It uses a boxed-in design space, so if I say I’m going to use platinum, palladium, and iron, it only changes the ratio of those elements in this small space. But real materials have a lot more dependencies, and BO often gets lost.”
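To make the Bayesian optimization loop Li describes concrete, here is a minimal sketch using a Gaussian-process surrogate and an expected-improvement rule over a toy three-element composition space; the element choices and the stand-in objective function are invented for illustration and are not part of CRESt.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def random_compositions(n):
    """Toy 'boxed-in' design space: fractions of three elements summing to 1."""
    return rng.dirichlet(np.ones(3), size=n)      # e.g. (Pt, Pd, Fe) fractions

def run_experiment(x):
    """Stand-in for a real measurement (e.g. catalyst power density)."""
    return -np.sum((x - np.array([0.2, 0.5, 0.3])) ** 2) + 0.01 * rng.normal()

# Seed with a few measured recipes.
X = random_compositions(5)
y = np.array([run_experiment(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for step in range(20):
    gp.fit(X, y)
    cand = random_compositions(500)               # candidate recipes to score
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.max()
    # Expected improvement: how much a candidate is expected to beat the best so far.
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, run_experiment(x_next))

print("best recipe found:", X[np.argmax(y)], "score:", y.max())
```

As the quote notes, this vanilla loop only reshuffles ratios inside a fixed box, which is exactly the limitation CRESt's knowledge embeddings are meant to overcome.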

Most active learning approaches also rely on single data streams that don’t capture everything that goes on in an experiment. To equip computational systems with more human-like knowledge, while still taking advantage of the speed and control of automated systems, Li and his collaborators built CRESt. 

CRESt’s robotic equipment includes a liquid-handling robot, a carbothermal shock system to rapidly synthesize materials, an automated electrochemical workstation for testing, characterization equipment including automated electron microscopy and optical microscopy, and auxiliary devices such as pumps and gas valves, which can also be remotely controlled.  Many processing parameters can also be tuned.

With the user interface, researchers can chat with CRESt and tell it to use active learning to find promising materials recipes for different projects. CRESt can incorporate up to 20 precursor molecules and substrates into its recipes. To guide material designs, CRESt’s models search through scientific papers for descriptions of elements or precursor molecules that might be useful. When human researchers tell CRESt to pursue new recipes, it kicks off a robotic symphony of sample preparation, characterization, and testing. The researcher can also ask CRESt to perform image analysis from scanning electron microscopy imaging, X-ray diffraction, and other sources.

Information from those processes is used to train the active learning models, which use both literature knowledge and current experimental results to suggest further experiments and accelerate materials discovery.

“For each recipe we use previous literature text or databases, and it creates these huge representations of every recipe based on the previous knowledge base before even doing the experiment,” says Li. “We perform principal component analysis in this knowledge embedding space to get a reduced search space that captures most of the performance variability. Then we use Bayesian optimization in this reduced space to design the new experiment. After the new experiment, we feed newly acquired multimodal experimental data and human feedback into a large language model to augment the knowledgebase and redefine the reduced search space, which gives us a big boost in active learning efficiency.”
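A bare-bones sketch of the embedding-then-reduce step described in the quote might look like the following; the embedding vectors here are random placeholders standing in for language-model representations of each recipe, and the real CRESt pipeline is considerably richer.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder: one literature/database-derived embedding vector per candidate recipe.
# In practice these would come from a language model run over text describing each
# precursor and composition; here they are random stand-ins.
n_recipes, embed_dim = 200, 768
recipe_embeddings = np.random.default_rng(1).normal(size=(n_recipes, embed_dim))

# Reduce the knowledge-embedding space to a handful of components that capture
# most of the variability, as described in the quote.
pca = PCA(n_components=10)
reduced = pca.fit_transform(recipe_embeddings)
print("explained variance captured:", pca.explained_variance_ratio_.sum())

# Bayesian optimization (as in the earlier sketch) would then propose points in this
# 10-dimensional reduced space rather than in the raw recipe space, and newly acquired
# experimental data plus human feedback would update the knowledge base before the
# space is re-reduced for the next round.
```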

Materials science experiments can also face reproducibility challenges. To address the problem, CRESt monitors its experiments with cameras, looking for potential problems and suggesting solutions via text and voice to human researchers.

The researchers used CRESt to develop an electrode material for an advanced type of high-density fuel cell known as a direct formate fuel cell. After exploring more than 900 chemistries over three months, CRESt discovered a catalyst material made from eight elements that achieved a 9.3-fold improvement in power density per dollar over pure palladium, an expensive precious metal. In further tests, CRESt’s material delivered a record power density in a working direct formate fuel cell even though the cell contained just one-fourth of the precious metals of previous devices.

The results show the potential for CRESt to find solutions to real-world energy problems that have plagued the materials science and engineering community for decades.

“A significant challenge for fuel-cell catalysts is the use of precious metal,” says Zhang. “For fuel cells, researchers have used various precious metals like palladium and platinum. We used a multielement catalyst that also incorporates many other cheap elements to create the optimal coordination environment for catalytic activity and resistance to poisoning species such as carbon monoxide and adsorbed hydrogen atoms. People have been searching for low-cost options for many years. This system greatly accelerated our search for these catalysts.”

A helpful assistant

Early on, poor reproducibility emerged as a major problem that limited the researchers’ ability to perform their new active learning technique on experimental datasets. Material properties can be influenced by the way the precursors are mixed and processed, and any number of problems can subtly alter experimental conditions, requiring careful inspection to correct.

To partially automate the process, the researchers coupled computer vision and vision language models with domain knowledge from the scientific literature, which allowed the system to hypothesize sources of irreproducibility and propose solutions. For example, the models can notice when there’s a millimeter-sized deviation in a sample’s shape or when a pipette moves something out of place. The researchers incorporated some of the model’s suggestions, leading to improved consistency, suggesting the models already make good experimental assistants.

The researchers noted that humans still performed most of the debugging in their experiments.

“CRESt is an assistant, not a replacement, for human researchers,” Li says. “Human researchers are still indispensable. In fact, we use natural language so the system can explain what it is doing and present observations and hypotheses. But this is a step toward more flexible, self-driving labs.”

###

Written by Zach Winn, MIT News

Researchers from Perugia University describe how AI can unveil the secrets of eruptions



KeAi Communications Co., Ltd.

Image: Graphical abstract. Credit: Mónica Ágreda-López, Maurizio Petrelli





Volcanoes are among the most powerful natural hazards on Earth, yet predicting their behavior remains one of the biggest scientific challenges. A new article published in Artificial Intelligence in Geosciences explores how machine learning (ML) can accelerate discoveries in volcano science, while also warning of potential pitfalls if ML is used without critical reflection.

The study, conducted by a duo of researchers from the University of Perugia, involved the analysis of current and emerging applications of artificial intelligence in volcanology.

“While ML tools can process massive amounts of seismic, geochemical, and satellite data far faster than traditional methods, opening up opportunities for earlier hazard detection and improved risk communication, it is not a silver bullet,” says corresponding author Maurizio Petrelli. “We need to be aware of what models really learn and why transparency, reproducibility, and interpretability matter when decisions affect public safety in hazard assessment and crisis management.”

“AI can help us see volcanic systems in new ways, but it must be used responsibly,” says co-author Mónica Ágreda-López. “Our goal is not only to show both the opportunities and the risks but also to promote the understanding behind these tools, so that volcano science can benefit from machine learning without losing rigor and transparency.”

The authors call for careful epistemological evaluation, asking not just what AI can do, but how its methods align with scientific reasoning and the needs of society. The duo also stressed that building trust between AI developers, geoscientists, and at-risk communities is key to harnessing these technologies responsibly.

“Interdisciplinary collaborations and open data practices are essential steps to ensure AI contributes to safer, more resilient societies living with volcano hazards. We also need to consider ethics and evolving policies across the EU, China, and the US,” adds Ágreda-López.

###

Contact the author: Maurizio Petrelli, University of Perugia, Italy, maurizio.petrelli@unipg.it


 

FHBDSR-Net: AI tool enables accurate measurement of diseased spikelet rate of wheat Fusarium Head Blight from phone images, aiding smart phenotyping




Beijing Zhongke Journal Publishing Co. Ltd.

Image: FHBDSR-Net: Automated measurement of diseased spikelet rate of Fusarium Head Blight on wheat spikes. Credit: Beijing Zhongke Journal Publishing Co. Ltd.






This study is led by Professor Weizhen Liu (School of Computer Science and Artificial Intelligence, Wuhan University of Technology, Wuhan, China). The authors proposed a deep learning algorithm named FHBDSR-Net, which can automatically measure the diseased spikelet rate (DSR) trait from wheat spike images with complex backgrounds captured by mobile phones, providing an efficient and accurate phenotypic measurement tool for wheat Fusarium Head Blight (FHB) resistance breeding.

 

The FHBDSR-Net model integrates three innovations. The Multi-scale Feature Enhancement (MFE) module suppresses complex background interference by dynamically fusing lesion texture, morphological features, and lesion-awn contrast features. The Inner-Efficient CIoU (Inner-EfficiCIoU) loss function improves the localization accuracy of dense small targets. The Scale-Aware Attention (SAA) module strengthens the encoding of multi-scale pathological features and their spatial distribution through dilated convolution and self-attention mechanisms.
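The published module designs are not reproduced here, but as a generic sketch of the pattern the SAA description names, parallel dilated convolutions for multi-scale context followed by self-attention over spatial positions, a PyTorch block might look like this; channel counts, dilation rates, and head counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ScaleAwareAttentionSketch(nn.Module):
    """Generic stand-in for a scale-aware attention block: parallel dilated
    convolutions gather multi-scale context, then self-attention mixes spatial
    positions. Layer sizes are illustrative, not taken from the paper."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.dilated = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 4)                      # three receptive-field scales
        ])
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (B, C, H, W)
        multi = torch.cat([conv(x) for conv in self.dilated], dim=1)
        fused = self.fuse(multi)                            # (B, C, H, W)
        b, c, h, w = fused.shape
        tokens = fused.flatten(2).transpose(1, 2)           # (B, H*W, C)
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)               # residual + norm
        return tokens.transpose(1, 2).reshape(b, c, h, w)

# Quick shape check on a dummy feature map:
feats = torch.randn(2, 64, 32, 32)
print(ScaleAwareAttentionSketch(64)(feats).shape)   # torch.Size([2, 64, 32, 32])
```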

 

Experimental results show that FHBDSR-Net achieves 93.8% average precision (AP) in diseased spikelet detection, with the average Pearson's correlation coefficient between its DSR measurements and manual observations exceeding 0.901. The model shows strong generalization ability and robustness, detecting DSR accurately across wheat varieties, growth stages, and infection degrees. FHBDSR-Net is also lightweight, with only 7.2M parameters, making it suitable for deployment on resource-constrained mobile devices. It supports accurate acquisition of the DSR trait in greenhouse and field scenarios, providing efficient and reliable technical support for wheat FHB resistance breeding and field disease monitoring, and pushing plant phenotyping analysis toward field portability and intelligence.
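Because DSR is simply a ratio of counts per spike, the reported agreement with manual scoring can be illustrated with a short sketch; the detection counts below are made up and only show how DSR and Pearson's correlation would be computed.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-spike counts from the model and from a human rater.
# Each row: (diseased spikelets detected, total spikelets detected).
model_counts  = np.array([(4, 16), (9, 18), (2, 15), (12, 20), (6, 17)])
manual_counts = np.array([(5, 16), (9, 19), (2, 15), (11, 20), (6, 18)])

def dsr(counts):
    """Diseased spikelet rate = diseased spikelets / total spikelets, per spike."""
    diseased, total = counts[:, 0], counts[:, 1]
    return diseased / total

model_dsr = dsr(model_counts)
manual_dsr = dsr(manual_counts)

r, p = pearsonr(model_dsr, manual_dsr)
print(f"per-spike DSR (model): {np.round(model_dsr, 3)}")
print(f"Pearson's r vs. manual scoring: {r:.3f} (p = {p:.3g})")
```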

See the article:

FHBDSR-Net: Automated measurement of diseased spikelet rate of Fusarium Head Blight on wheat spikes

https://link.springer.com/article/10.1007/s42994-025-00245-0

Robot or human? It depends on the situation, a large study shows



Although many of us are still skeptical of chatbots, algorithms, and robots, these artificial agents actually handle customers well – sometimes even better than humans do.



Aalborg University






When we shop online, a chatbot answers our questions. A virtual assistant helps us track a package. And an AI system guides us through a return of our goods.

We have become accustomed to technology being a customer service representative. But is it actually important to us as customers whether a human or a machine is helping us?

A new international meta-analysis shows that artificial agents are in many cases more positively received than you might think. They are not necessarily perceived as better than flesh-and-blood employees – but the difference between them is often smaller than we might expect.

The study was conducted by four researchers, including Professor Holger Roschk from Aalborg University Business School. Along with Katja Gelbrich, Sandra Miederer and Alina Kerath from Catholic University Eichstätt-Ingolstadt, he analyzed 327 experimental studies with almost 282,000 participants and published the results in the prestigious Journal of Marketing.
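For readers unfamiliar with what pooling 327 experimental studies involves computationally, a minimal random-effects meta-analysis sketch (DerSimonian-Laird) is shown below; the effect sizes and variances are invented, and the authors' actual models in the Journal of Marketing paper are more elaborate.

```python
import numpy as np

# Invented per-study effect sizes (e.g. standardized difference between responses
# to artificial agents and to human employees) and their sampling variances.
effects   = np.array([0.10, -0.05, 0.22, 0.03, -0.12, 0.08])
variances = np.array([0.02,  0.05, 0.03, 0.01,  0.04, 0.02])

# DerSimonian-Laird estimate of between-study variance (tau^2).
w_fixed = 1.0 / variances
mean_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
Q = np.sum(w_fixed * (effects - mean_fixed) ** 2)
C = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - (len(effects) - 1)) / C)

# Random-effects pooled estimate: a value near zero corresponds to the finding
# that differences between agents and humans are often small.
w_random = 1.0 / (variances + tau2)
pooled = np.sum(w_random * effects) / np.sum(w_random)
se = np.sqrt(1.0 / np.sum(w_random))
print(f"pooled effect = {pooled:.3f} ± {1.96 * se:.3f} (95% CI)")
```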

 


 

"At a time when AI is being integrated everywhere – from banking to healthcare – it is crucial to know when and how we as humans accept machines. It is often postulated that customers prefer to talk to a human, and many are skeptical of machines. But when we look at customers' actual behaviour – whether they follow advice, buy something or return – the differences are often small," says Holger Roschk.

Context determines effect

The human element has often been emphasized in research discussing whether an artificial agent is good or bad. But according to Holger Roschk, this framing lacks nuance: different agents are good at different things, and the human element is not always crucial – it depends on the task and the situation.

For example, chatbots perform particularly well in situations that customers may perceive as embarrassing, such as buying health-related or intimate products. In these transactions, many people prefer a discreet digital contact to a human.

"We may have overestimated the need for artificial agents to be human-like. This is not always necessary – in fact, it can be an advantage that they appear as distinct machines," says Holger Roschk.

He adds that algorithms work well in situations where, for example, you need to calculate the shortest route or estimate waiting time. They also perform well for recommendations, such as finding the right clothing size in a web shop.

Robots that have a physical presence perform best in tasks where motor skills and practical help are needed – for example, room service in hotels or tasks in a warehouse.

"We also see that artificial agents, contrary to what you might expect, have certain advantages in situations where a negative response must be given. This may be the case when an algorithm rejects a loan application," says Holger Roschk, adding that this is probably because the machine's "insensitivity" can have a disarming effect.

Technology with clear limits

Holger Roschk emphasizes that artificial agents are not a substitute for humans. Technology has clear limits. In situations where empathy, spontaneity and situational awareness are particularly necessary, it is crucial to have people in the shop and behind the screen.

"We recommend that companies focus on using artificial agents in situations where they can relieve employees of physically or mentally demanding tasks. It's not about replacing people – it's about using technology where it makes sense," says Holger Roschk.