Friday, May 16, 2025

 

Robotic hand moves objects with human-like grasps



A robotic hand developed at EPFL can pick up 24 different objects with human-like movements that emerge spontaneously, thanks to compliant materials and structures rather than programming.



Ecole Polytechnique Fédérale de Lausanne

Image: The ADAPT robotic hand (Adaptive Dexterous Anthropomorphic Programmable sTiffness). Credit: CREATE Lab EPFL







When you reach out your hand to grasp an object like a bottle, you generally don’t need to know the bottle’s exact position in space to pick it up successfully. But as EPFL researcher Kai Junge explains, if you want to make a robot that can pick up a bottle, you must know everything about the surrounding environment very precisely.

“As humans, we don’t really need too much external information to grasp an object, and we believe that’s because of the compliant – or soft – interactions that happen at the interface between an object and a human hand,” says Junge, a PhD student in the School of Engineering’s Computational Robot Design & Fabrication (CREATE) Lab, led by Josie Hughes. “This compliance is what we are interested in exploring for robots.”

In robotics, compliant materials are those that deform, bend, and squish. In the case of the CREATE Lab’s robotic ADAPT hand (Adaptive Dexterous Anthropomorphic Programmable sTiffness), the compliant materials are relatively simple: strips of silicone wrapped around a mechanical wrist and fingers, plus spring-loaded joints, combined with a bendable robotic arm. But this strategically distributed compliance is what allows the device to pick up a wide variety of objects using “self-organized” grasps that emerge automatically, rather than being programmed. 

In a series of experiments, the ADAPT hand, which can be controlled remotely, was able to pick up 24 objects with a success rate of 93%, using self-organized grasps that mimicked a natural human grasp with a direct similarity of 68%. The research has been published in Nature Communications Engineering.

‘Bottom-up’ robotic intelligence

While a traditional robotic hand would need a motor to actuate each joint, the ADAPT hand has only 12 motors, housed in the wrist, for its 20 joints. The rest of the mechanical control comes from springs, which can be made stiffer or looser to tune the hand’s compliance, and from the silicone ‘skin’, which can also be added or removed.
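The release gives enough detail for a toy picture of that tunable compliance, though the actual ADAPT mechanics are more involved. In a simple series-elastic sketch (the function and numbers below are illustrative, not from the study), the motor commands a target angle, a spring links motor to joint, and contact torque passively deflects the joint:

# Toy series-elastic joint, for illustration only: theta_cmd is the
# motor-side target angle (rad), tau_contact the torque from touching an
# object (N·m), and k the spring stiffness (N·m/rad) that can be tuned.
def joint_angle(theta_cmd, tau_contact, k):
    # Softer spring (smaller k) -> larger passive deflection -> the finger
    # wraps around the object instead of fighting it.
    return theta_cmd - tau_contact / k

# The same command deflects far more on a compliant joint:
print(joint_angle(1.0, tau_contact=0.5, k=10.0))  # stiff joint:     0.95 rad
print(joint_angle(1.0, tau_contact=0.5, k=1.0))   # compliant joint: 0.50 rad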

As for software, the ADAPT hand is programmed to move through just four general waypoints, or positions, to lift an object. Any further adaptations required to complete the task occur without additional programming or feedback; in robotics, this is called ‘open loop’ control. For example, when the team programmed the robot to use a certain motion, it was able to adapt its grasp pose to objects ranging from a single bolt to a banana. The researchers analyzed this extreme robustness – a product of the robot’s spatially distributed compliance – across more than 300 grasps, comparing them against a rigid version of the hand.
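The control code itself is not in the release; a minimal sketch of what four-waypoint open-loop control could look like, with a hypothetical hand driver and placeholder poses, is:

# Open-loop grasp sketch. The hand object, its move_to() method, and the
# waypoint values are hypothetical stand-ins, not the ADAPT interface.
PRE_GRASP = [0.0, 0.0, 0.0]   # placeholder joint-space poses
APPROACH  = [0.2, 0.1, 0.0]
CLOSE     = [0.8, 0.7, 0.6]
LIFT      = [0.8, 0.7, 0.9]

def open_loop_grasp(hand):
    # No sensing between steps: the same sequence is replayed for every
    # object, and the hand's mechanical compliance absorbs the difference
    # between, say, a bolt and a banana.
    for waypoint in (PRE_GRASP, APPROACH, CLOSE, LIFT):
        hand.move_to(waypoint, duration=1.0)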

“Developing robots that can perform interactions or tasks that humans do automatically is a lot harder than most people expect,” Junge says. “That’s why we are interested in exploiting this distributed mechanical intelligence of different body parts like skin, muscles, and joints, as opposed to the top-down intelligence of the brain.”

Balancing compliance and control

Junge emphasizes that the goal of the ADAPT study was not necessarily to create a robotic hand that can grasp like a human, but to show for the first time how much a robot can achieve through compliance alone.

Now that this has been demonstrated systematically, the EPFL team is building on the potential of compliance by re-integrating elements of closed-loop control into the ADAPT hand, including sensory feedback – via the addition of pressure sensors to the silicone skin – and artificial intelligence. This synergistic approach could lead to robots that combine compliance’s robustness to uncertainty, and the precision of closed-loop control.

“A better understanding of the advantages of compliant robots could greatly improve the integration of robotic systems into highly unpredictable environments, or into environments designed for humans,” Junge summarizes.

Video: ADAPT robotic hand grasping a banana. Credit: CREATE Lab EPFL

Tech meets tornado recovery


Researchers have developed a new AI model to speed up tornado damage assessments and recovery.



Texas A&M University

Image: Texas A&M researchers have developed a new method that uses artificial intelligence and restoration modeling to assess damage and estimate recovery times following a tornado. Credit: Texas A&M University




It started as a low, haunting roar building in the distance. It grew into a deafening thunder that drowned out all else. The sky turned an unnatural shade of green, then black. The wind lashed at trees and buildings with brutal force. Sirens wailed. Windows and buildings exploded.

In spring 2011, Joplin, Missouri, was devastated by an EF5 tornado with estimated winds exceeding 200 mph. The storm caused 161 fatalities, injured over 1,000 people, and damaged and destroyed around 8,000 homes and businesses. The tornado carved a mile-wide path through the densely populated south-central area of the city, leaving behind miles of splintered rubble and causing over $2 billion in damage.

The powerful winds of tornadoes often surpass the design limits of most residential and commercial buildings. Traditional methods of assessing damage after a disaster can take weeks or even months, delaying emergency response, insurance claims and long-term rebuilding efforts.

New research from Texas A&M University might change that. Led by Dr. Maria Koliou, associate professor and Zachry Career Development Professor II in the Zachry Department of Civil and Environmental Engineering at Texas A&M, researchers have developed a new method that combines remote sensing, deep learning and restoration models to speed up building damage assessments and predict recovery times after a tornado. Once post-event images are available, the model can produce damage assessments and recovery forecasts in less than an hour.

The researchers published their model in Sustainable Cities and Society.

“Manual field inspections are labor-intensive and time-consuming, often delaying critical response efforts,” said Abdullah Braik, coauthor and a civil engineering doctoral student at Texas A&M. “Our method uses high-resolution remote sensing imagery and deep learning algorithms to generate damage assessments within hours, immediately providing first responders and policymakers with actionable intelligence.”

The model does more than assess damage — it also helps predict repair costs and estimate recovery times. Researchers can assess these timelines and costs in different situations by combining deep learning technology, a type of artificial intelligence, with advanced recovery models.

“We aim to provide decision-makers with near-instantaneous damage assessment and probabilistic recovery forecasts, ensuring that resources are allocated efficiently and equitably, particularly for the most vulnerable communities,” Braik said. “This enables proactive decision-making in the aftermath of a disaster.”

How It Works

Researchers combined three tools to create the model: remote sensing, deep learning and restoration modeling. 

Remote sensing uses high-resolution satellite or aerial images from sources such as NOAA to show the extent of damage across large areas. 

“These images are crucial because they offer a macro-scale view of the affected area, allowing for rapid, large-scale damage detection,” Braik said. 

Deep learning automatically analyzes these images to identify the severity of the damage accurately. The AI is trained before disasters by analyzing thousands of images of past events, learning to recognize visible signs of damage such as collapsed roofs, missing walls and scattered debris. The model then classifies each building into categories such as no damage, moderate damage, major damage, or destroyed.
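The release does not name the network architecture; as a rough sketch of this step (the model choice, checkpoint file, and patch format below are assumptions, not the Texas A&M implementation), a per-building classifier might look like:

import torch
import torchvision

# Four damage categories, as described above.
DAMAGE_CLASSES = ["no damage", "moderate damage", "major damage", "destroyed"]

# Hypothetical pre-trained classifier; the team's actual architecture
# and weights are not given in the release.
model = torchvision.models.resnet18(num_classes=len(DAMAGE_CLASSES))
model.load_state_dict(torch.load("tornado_damage.pt"))
model.eval()

def classify_building(patch):
    # patch: a 3x224x224 tensor cropped from post-event aerial imagery
    # around one building footprint.
    with torch.no_grad():
        logits = model(patch.unsqueeze(0))
    return DAMAGE_CLASSES[logits.argmax(dim=1).item()]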

Restoration modeling uses past recovery data, building and infrastructure details and community factors — like income levels or access to resources — to estimate how long it might take for homes and neighborhoods to recover under different funding or policy conditions.

When these three tools are combined, the model can quickly assess the damage and predict short- and long-term recovery timelines for communities affected by disasters.
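Composing the pieces, and again using invented interfaces (the restoration model’s API here is purely illustrative, building on classify_building from the sketch above):

def assess_and_forecast(buildings, restoration_model):
    # For each building: deep learning gives a damage state from imagery,
    # then the restoration model maps that state plus community factors
    # to an expected recovery time under a chosen funding scenario.
    results = []
    for b in buildings:
        damage = classify_building(b.patch)
        weeks = restoration_model.expected_recovery(
            damage_state=damage,
            median_income=b.tract_income,      # community factor
            funding_scenario="baseline",       # policy condition
        )
        results.append((b.id, damage, weeks))
    return results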

“Ultimately, this research bridges the gap between rapid disaster assessment and strategic long-term recovery planning, offering a risk-informed yet practical framework for enhancing post-tornado resilience,” Braik said. 

Testing The Model

Koliou and Braik tested their model with data from the 2011 Joplin tornado because of the storm’s massive size and intensity and the availability of high-quality post-disaster information. The tornado destroyed thousands of buildings, creating a diverse dataset that allowed the model to be trained and tested across various levels of structural damage. Detailed ground-level damage assessments provided a reliable benchmark to check how accurately the model could classify the severity of the damage.

“One of the most interesting findings was that, in addition to detecting damage with high accuracy, we could also estimate the tornado’s track,” Braik said. “By analyzing the damage data, we could reconstruct the tornado’s path, which closely matched the historical records, offering valuable information about the event itself.”

Future Directions

Researchers are working on using this model for other types of disasters, such as hurricanes and earthquakes, as long as satellites can detect damage patterns.

“The key to the model’s generalizability lies in training it to use past images from specific hazards, allowing it to learn the unique damage patterns associated with each event,” Braik said. “We have already tested the model on hurricane data, and the results have shown promising potential for adapting to other hazards.”

The research team believes their model could be critical in future disaster response, helping communities recover faster and more efficiently. The team wants to extend the model beyond damage assessment to include real-time tracking of recovery progress over time.

“This will allow for more dynamic and informed decision-making as communities rebuild,” he said. “We aim to create a reliable tool that enhances disaster management efficiency and supports quicker recovery efforts.”

The technology has the potential to transform how emergency officials, insurers and policymakers respond in the crucial hours and days after a storm by delivering near-instant assessments and recovery projections.

Funding for this research was provided by the National Science Foundation.

By Alyson Chapman, Texas A&M University College of Engineering

 

Developers, educators view AI harms differently, research finds






Cornell University




ITHACA, N.Y. -- Teachers are increasingly using educational tools that leverage large language models (LLMs) like ChatGPT for lesson planning, personalized tutoring and more in K-12 classrooms around the world.

Cornell researchers have found the developers of such tools and the educators who use them have different ideas about the potential harms they may cause, a finding that researchers say underscores the need for educators to be more involved in the tools’ development.

“Education technology should center educators, and doing that requires researchers, education technology providers, school leaders and policymakers to come together and take action to mitigate potential harms from the use of LLMs in education,” said Emma Harvey, a doctoral student in the field of information science and lead author of “‘Don’t Forget the Teachers’: Towards an Educator-Centered Understanding of Harms from Large Language Models in Education.” The paper was presented April 28 at the Association for Computing Machinery’s Conference on Human Factors in Computing Systems (CHI) in Yokohama, Japan. It received a Best Paper Award. Her coauthors are Allison Koenecke, assistant professor of information science, and Rene Kizilcec, associate professor of information science, both at the Cornell Ann S. Bowers College of Computing and Information Science.

“These harms are not necessarily just the typical ones we hear of about LLMs, like bias or hallucinations,” Harvey said. “It’s this broader set of sociotechnical harms.”

Harvey and her collaborators interviewed six administrators and developers from education technology (or “edtech”) companies and nearly two dozen educators who are navigating the increasing use of artificial intelligence-powered LLMs in schools.

The researchers found that developers from these companies tend to focus much of their time and energy on solving technical challenges, like preventing the kind of hallucinations, privacy violations or toxic content that LLMs sometimes produce.

Meanwhile, educators were more concerned with the broader impacts of using the tools: inhibiting the development of students’ critical thinking skills, hampering students’ social development, increasing educator workload, and exacerbating systemic inequality, since disadvantaged school districts may be less able to purchase licenses for these tools or may have to shift funding away from other resources to do so. Educators were less concerned about the technical issues, saying they knew how to work around them.

“I’ve noticed that as students become more tech aware, they also tend to lose that critical thinking skill. Because they can just ask for answers,” one educator said.

“It’s hard to feel like it’s equitable, or it’s going to be used for public good if it’s only available if your district can pony up for it,” said another.

A good step toward improving these education technologies is correcting the misalignment between what developers and educators see as potentially harmful, Harvey said.

The researchers outlined four recommendations to facilitate the design and development of educator-centered edtech:

  • Companies should design tools to give educators even more agency to question and correct what LLMs produce;
  • Regulators – whether in government or nonprofit agencies – should develop centralized, clear and independent reviews of LLM-based educational technologies;
  • Researchers and developers of education technologies should explore ways to make these tools more customizable for the educators who use them; and
  • Educator input should be prioritized when school district leaders are considering adopting such tools. Additionally, educators should not be penalized if they choose not to use their schools’ LLM-based tools.

“Edtech providers are spending a lot of time on reducing the chance of LLM hallucinations,” Harvey said. “Our findings suggest they could also design tools so that educators can intervene when hallucinations happen to correct students’ misconceptions through their teaching practices. This can free up time to focus on mitigating other types of harm.”

The research team hopes their findings will foster more dialogue between builders of edtech systems and the teachers who use them, Koenecke said.

“The potential harms of LLMs extend far past the technical concerns commonly measured by machine-learning researchers,” she said. “We need to be prepared to study the higher-stakes, difficult-to-measure social and societal harms arising from LLM use in the classroom.”

This research was supported by the Schmidt Futures Foundation and the National Science Foundation.

-30-


Groups of AI agents spontaneously form their own social norms without human help, suggests study


First-of-its-kind study suggests that groups of artificial intelligence language models can self-organise into societies, and are prone to tipping points in social convention, much like human societies.



City St George’s, University of London





A new study suggests that populations of artificial intelligence (AI) agents, similar to ChatGPT, can spontaneously develop shared social conventions through interaction alone.

The research from City St George’s, University of London and the IT University of Copenhagen suggests that when these large language model (LLM) artificial intelligence (AI) agents communicate in groups, they do not just follow scripts or repeat patterns, but self-organise, reaching consensus on linguistic norms much like human communities. The study has been published today in the journal, Science Advances.

LLMs are powerful deep learning models that can understand and generate human language, with the most famous application to date being ChatGPT.

“Most research so far has treated LLMs in isolation,” said lead author Ariel Flint Ashery, a doctoral researcher at City St George’s, “but real-world AI systems will increasingly involve many interacting agents. We wanted to know: can these models coordinate their behaviour by forming conventions, the building blocks of a society? The answer is yes, and what they do together can’t be reduced to what they do alone.”

In the study, the researchers adapted a classic framework for studying social conventions in humans, based on the “naming game” model of convention formation. 

In their experiments, groups of LLM agents ranged in size from 24 to 200 individuals. In each experiment, two LLM agents were randomly paired and asked to select a ‘name’ (e.g., a letter of the alphabet or a random string of characters) from a shared pool of options. If both agents selected the same name, they earned a reward; if not, they received a penalty and were shown each other’s choices.

Agents only had access to a limited memory of their own recent interactions—not of the full population—and were not told they were part of a group. Over many such interactions, a shared naming convention could spontaneously emerge across the population, without any central coordination or predefined solution, mimicking the bottom-up way norms form in human cultures.
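The setup is concrete enough to sketch a classical, non-LLM version of the simulation. The pool contents, memory length, and pick rule below are assumptions for illustration; the paper’s agents are LLMs prompted with their own interaction histories rather than hand-coded rules:

import random

POOL = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # shared name pool
MEMORY = 5                                  # recent interactions remembered

def pick(history):
    # Reuse the most recently successful name; otherwise copy the name
    # last seen from a partner; otherwise explore the pool at random.
    for own, partner, success in reversed(history):
        if success:
            return own
    return history[-1][1] if history else random.choice(POOL)

def simulate(n_agents=24, rounds=20000):
    memories = [[] for _ in range(n_agents)]
    for _ in range(rounds):
        i, j = random.sample(range(n_agents), 2)   # random pairing
        a, b = pick(memories[i]), pick(memories[j])
        success = (a == b)                         # reward on agreement
        memories[i] = (memories[i] + [(a, b, success)])[-MEMORY:]
        memories[j] = (memories[j] + [(b, a, success)])[-MEMORY:]
    return memories

Seeding a handful of ‘committed’ agents whose pick() always returns the same name is the natural way to reproduce the tipping-point experiment described below.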

Even more strikingly, the team observed collective biases that couldn’t be traced back to individual agents. 

“Bias doesn’t always come from within,” explained Andrea Baronchelli, Professor of Complexity Science at City St George’s and senior author of the study, “we were surprised to see that it can emerge between agents—just from their interactions. This is a blind spot in most current AI safety work, which focuses on single models.”

In a final experiment, the study illustrated how these emergent norms can be fragile: small, committed groups of AI agents can tip the entire group toward a new naming convention, echoing well-known tipping point effects – or ‘critical mass’ dynamics – in human societies.

The study’s results were also robust across four different LLMs: Llama-2-70b-Chat, Llama-3-70B-Instruct, Llama-3.1-70B-Instruct, and Claude-3.5-Sonnet.

As LLMs begin to populate online environments – from social media to autonomous vehicles – the researchers envision their work as a stepping stone for further exploring how human and AI reasoning converge and diverge, with the goal of helping to combat some of the most pressing ethical dangers posed by LLMs propagating biases fed into them by society, which may harm marginalised groups.

Professor Baronchelli added: “This study opens a new horizon for AI safety research. It shows the depth of the implications of this new species of agents that have begun to interact with us—and will co-shape our future. Understanding how they operate is key to leading our coexistence with AI, rather than being subject to it. We are entering a world where AI does not just talk—it negotiates, aligns, and sometimes disagrees over shared behaviours, just like us.”

The peer-reviewed study, “Emergent Social Conventions and Collective Bias in LLM Populations,” is published in the journal, Science Advances.

ENDS

Notes to editors

For media enquiries, including requests for a copy of the research paper shared in confidence ahead of the embargo lifting, please contact the corresponding author, Professor Andrea Baronchelli (a.baronchelli.work@gmail.com or andrea.baronchelli.1@citystgeorges.ac.uk), cc’ing press officer Dr Shamim Quadir (shamim.quadir@citystgeorges.ac.uk).

Link to research paper once embargo lifts

https://doi.org/10.1126/sciadv.adu9368


Media Contact

For media enquiries, contact Dr Shamim Quadir, Senior Communications Officer, School of Science & Technology, City St George’s, University of London. Tel: +44 (0) 207 040 8782; email: shamim.quadir@citystgeorges.ac.uk.

Expert Contact

Contact the corresponding author, Andrea Baronchelli, Professor of Complexity Science, Department of Mathematics, School of Science & Technology, City St George’s, University of London. Tel: +44 (0) 207 040 8124; email: andrea.baronchelli.1@citystgeorges.ac.uk or a.baronchelli.work@gmail.com.

About the academics

Professor Andrea Baronchelli is a world-renowned expert on social conventions, a field he has been researching for two decades. His pioneering work includes the now-standard naming game framework, as well as groundbreaking lab experiments showing how humans spontaneously create conventions without central authority, and how those conventions can be overturned by small committed groups.

About City St George’s, University of London 

City St George’s, University of London is the University of business, practice and the professions. 

City St George’s attracts around 27,000 students from more than 170 countries. 

Our academic range is broadly-based with world-leading strengths in business; law; health and medical sciences; mathematics; computer science; engineering; social sciences including international politics, economics and sociology; and the arts including journalism, dance and music. 

In August 2024, City, University of London merged with St George’s, University of London creating a powerful multi-faculty institution. The combined university is now one of the largest suppliers of the health workforce in the capital, as well as one of the largest higher education destinations for London students.  

City St George’s campuses are spread across London in Clerkenwell, Moorgate and Tooting, where we share a clinical environment with a major London teaching hospital. 

Our students are at the heart of everything that we do, and we are committed to supporting them to go out and get good jobs. 

Our research is impactful, engaged and at the frontier of practice. In the last REF (2021), 86 per cent of City research was rated ‘world-leading’ (4*, 40 per cent) or ‘internationally excellent’ (3*, 46 per cent), and 100 per cent of St George’s impact case studies were judged ‘world-leading’ or ‘internationally excellent’. As City St George’s we will seize the opportunity to carry out interdisciplinary research which will have positive impact on the world around us.

Over 175,000 former students in over 170 countries are members of the City St George’s Alumni Network. 

City St George’s is led by Professor Sir Anthony Finkelstein. 

 

New study shows AI can predict child malnutrition, support prevention efforts



AI-driven tool developed for Kenya offers governments and decision-makers critical lead time to save lives by forecasting malnutrition up to six months in advance with up to 89% accuracy



University of Southern California




A multidisciplinary team of researchers from the USC School of Advanced Computing and the Keck School of Medicine, working alongside experts from the Microsoft AI for Good Lab, Amref Health Africa, and Kenya’s Ministry of Health, has developed an artificial intelligence (AI) model that can predict acute child malnutrition in Kenya up to six months in advance.

The tool offers governments and humanitarian organizations critical lead time to deliver life-saving food, health care, and supplies to at-risk areas. The machine learning model outperforms traditional approaches by integrating clinical data from more than 17,000 Kenyan health facilities with satellite data on crop health and productivity.

It achieves 89% accuracy when forecasting one month out, and maintains 86% accuracy over six months — a significant improvement over simpler baseline models that rely only on recent historical child malnutrition prevalence trends.

In contrast to existing models, the new tool is especially effective at forecasting malnutrition in regions where prevalence fluctuates and surges are difficult to anticipate.

“This model is a game-changer,” said Bistra Dilkina, associate professor of computer science and co-director of the USC Center for Artificial Intelligence in Society. “By using data-driven AI models, you can capture more complex relationships between multiple variables that work together to help us predict malnutrition prevalence more accurately.”

The findings are detailed in a PLOS One study published May 14, 2025, titled “Forecasting acute childhood malnutrition in Kenya using machine learning and diverse sets of indicators.”

The study was co-authored by Girmaw Abebe Tadesse (Microsoft AI for Good Lab), Laura Ferguson (USC Institute on Inequalities in Global Health), Caleb Robinson, Rahul Dodhia, Juan M. Lavista Ferres (Microsoft AI for Good Lab), Shiphrah Kuria, Herbert Wanyonyi, Samuel Mburu (Amref Health Africa), Samuel Murage (Kenyan Ministry of Health), and Bistra Dilkina (USC Center for AI in Society).

Girmaw Abebe Tadesse, principal scientist and manager at the Microsoft AI for Good Lab in Nairobi, Kenya, said he believes the predictive AI tool will make a difference.

“This project is important, as malnutrition poses a significant challenge to children in Africa, a continent facing major food insecurity exacerbated by climate change,” he said.

A public health emergency

In Kenya, 5% of children under the age of five — an estimated 350,000 individuals — suffer from acute malnutrition, a condition that weakens the immune system and dramatically increases the risk of death from common illnesses like diarrhea and malaria. In some regions, the rate climbs as high as 25%. Globally, undernutrition is linked to nearly half of all deaths in children under five.

“Malnutrition is a public health emergency in Kenya,” said Laura Ferguson, director of research at USC’s Institute on Inequalities in Global Health and associate professor of population and public health sciences at the Keck School of Medicine of USC. “Children are sick unnecessarily. Children are dying unnecessarily.”

Current forecasting efforts in Kenya are largely based on expert judgment and historical knowledge — methods that struggle to anticipate new hotspots or rapid shifts.

Instead, the team’s model uses Kenya’s routine health data, collected through the District Health Information System 2 (DHIS2), alongside satellite-derived indicators like crop health and productivity to identify emerging risk areas with far greater precision.
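The release does not specify the estimator; a hedged sketch of the forecasting step, with invented feature names and a gradient-boosted model standing in for whatever the team actually used, might be:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def make_features(df):
    # df: one row per (county, month), combining DHIS2 clinical counts
    # with a satellite-derived crop-health index. Column names invented.
    return np.column_stack([
        df["malnutrition_rate_lag1"],   # recent prevalence trend
        df["malnutrition_rate_lag3"],
        df["ndvi_mean"],                # crop health / productivity proxy
        df["clinic_visits"],            # routine health-system signal
    ])

model = GradientBoostingRegressor()
# Train on history, predict prevalence six months ahead:
# model.fit(make_features(train), train["malnutrition_rate_lead6"])
# forecast = model.predict(make_features(latest))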

“The best way to predict the future is to create it using available data for better planning and prepositioning in developing countries,” said Murage S.M. Kiongo, Program Officer for Monitoring and Evaluation, Division of Nutrition and Dietetics, Ministry of Health, Kenya. “Trends tell us a story. Multifaceted data sources, coupled with machine learning, offer an opportunity to improve programming on nutrition and health issues.”

The researchers have developed a prototype dashboard that visualizes regional malnutrition risk, enabling quicker, better-targeted responses to child malnutrition risks. Ferguson and Dilkina are now working with the Kenyan Ministry of Health and Amref Health Africa to integrate the model and dashboard into government systems and decision making, with the goal of creating a sustainable and regularly updated public resource.

“Most global health problems cannot be solved within the health field alone, and this is one of them,” Ferguson said. “So, we absolutely need public health experts. We need medical officials. We need nonprofits. We need engineers. If you take out any single partner, it just doesn’t work and won’t have the impact that we hope for.”

More than 125 countries currently use DHIS2, including about 80 low- and middle-income countries. That means this AI-driven framework — which relies only on existing health and satellite data — could be adapted to fight malnutrition in other countries across the globe.

“If we can do this for Kenya, we can do it for other countries,” Dilkina said. “The sky’s the limit when there is a genuine commitment to work in partnerships.”