Swarm intelligence directs longhorn crazy ants to clear the road ahead for sisters carrying bulky food
Scientists show how collective form of understanding emerges from simple actions of unintelligent worker ants
Image: Examples of experimental set-up, with close-ups of collective transport of prey and of obstacle-clearing behavior. Credit: E Fonio, D Mersch, O Feinerman
Among the tens of thousands of ant species, incredible ‘intelligent’ behaviors like crop culture, animal husbandry, surgery, ‘piracy’, social distancing, and complex architecture have evolved. Yet at first sight, the brain of an ant seems hardly capable of such feats: it is about the size of a poppy seed, with only 250,000 to one million neurons, compared to 86 billion for humans. Now, researchers from Israel and Switzerland have shown how ‘swarm intelligence’ resembling advance planning can nevertheless emerge from the concerted operation of many of these tiny brains. The results are published in Frontiers in Behavioral Neuroscience.
“Here we show for the first time that workers of the longhorn crazy ant can clear obstacles from a path before they become a problem – anticipating where a large food item will need to go and preparing the way in advance. This is the first documented case of ants showing such forward-looking behavior during cooperative transport,” said Dr Ehud Fonio, a research fellow at the Weizmann Institute of Science in Israel, and the corresponding author of the study.
‘I can see all obstacles in my way’
The researchers were inspired when they made a fascinating chance observation in nature: individual crazy ant workers used their mandibles to pick up and carry away tiny gravel pebbles near groups of workers cooperating to transport large insect prey.
“When we first saw ants clearing small obstacles ahead of the moving load we were in awe. It appeared as if these tiny creatures understand the difficulties that lie ahead and try to help their friends in advance,” said Dr Ofer Feinerman, a professor at the Weizmann Institute, and the study’s final author.
Fonio et al. designed a suite of 83 experiments to study this obstacle-clearing behavior in a single crazy ant ‘supercolony’ on the Weizmann Institute’s campus. For pebbles, they used plastic beads with a diameter of 1.5 millimeters (about half the ants’ body length) to block the ants’ route. For prey, they used pellets of cat food, of which the ants are fond.
Triggered into clearing mode by pheromones
Like many ant species, crazy ants are known to alert their sisters to the presence of large food items by laying odor trails: running erratically (hence their ‘crazy’ name), they touch the ground with the tip of their abdomen every 0.2 seconds to deposit a tiny droplet of a pheromone. This pheromone swiftly attracts other workers to the food. But here, the scientists found this pheromone to play a key role in clearing behavior as well.
Their observations showed that workers were most prone to clear beads lying approximately 40mm from the food in the direction of the nest. They moved these beads up to 50mm before dropping them, away from the route leading back to the nest. The record holder cleared 64 beads in succession.
Such clearing behavior always occurred when the pellet was whole, but rarely when it was divided into crumbs. This distinction seemed adaptive, as the observations showed that crumbs were always carried home by single workers, who would simply walk around any beads in their path. Intact pellets, however, always prompted ‘cooperative’ transport by multiple workers, who typically remained stalled by a grid of beads until these were cleared.
That the beads were a real hindrance was also clear from the time that cooperative transport took to pass through a 5cm by 7cm tunnel: this was 18 times longer when the passage was filled with beads than when it was free of obstacles.
Further observations also revealed that workers didn’t need to be in contact with the food to start clearing behavior: they were prompted to do so by pheromones deposited by foragers. A single mark that happened to be near a bead was sufficient to put a worker in ‘clearing mode’, after which they would actively look for more beads to clear.
‘Awe-inspiring’
“Taken together, these results imply that our initial impression was wrong: in reality, individual workers don’t understand the situation at all. This intelligent behavior happens at the level of the colony, not the individual. Each ant follows simple cues – like fresh scent marks left by others – without needing to understand the bigger picture, yet together they create a smart, goal-directed outcome,” concluded Dr Danielle Mersch, formerly a postdoctoral researcher at the same institute.
“We find this to be even more awe-inspiring than our initial guess,” said Feinerman.
“Humans think ahead by imagining future events in their minds; ants don’t do that. But by interacting through chemical signals and shared actions, ant colonies can behave in surprisingly smart ways – achieving tasks that look planned, even though no single ant is doing the planning. These ants thus provide us an analogy to brains, where from the activity of the relatively simple computational units, namely neurons, some high cognition capabilities miraculously emerge.”
Video: Worker of longhorn crazy ant clearing a bead during collective transport. Credit: Alessandro Crespi
Journal
Frontiers in Behavioral Neuroscience
Method of Research
Experimental study
Subject of Research
Animals
Article Title
Ants engaged in cooperative food transport show anticipatory and nest-oriented clearing of the obstacles surrounding the food: goal-directed behavior emerging from collective cognition
Article Publication Date
13-Jun-2025
AI-enabled control system helps autonomous drones stay on target in uncertain environments
The system automatically learns to adapt to unknown disturbances such as gusting winds
Cambridge, MA – An autonomous drone carrying water to help extinguish a wildfire in the Sierra Nevada might encounter swirling Santa Ana winds that threaten to push it off course. Rapidly adapting to these unknown disturbances inflight presents an enormous challenge for the drone’s flight control system.
To help such a drone stay on target, MIT researchers developed a new, machine learning-based adaptive control algorithm that could minimize its deviation from its intended trajectory in the face of unpredictable forces like gusty winds.
Unlike standard approaches, the new technique does not require the person programming the autonomous drone to know anything in advance about the structure of these uncertain disturbances. Instead, the control system’s artificial intelligence model learns all it needs to know from a small amount of observational data collected from 15 minutes of flight time.
Importantly, the technique automatically determines which optimization algorithm it should use to adapt to the disturbances, which improves tracking performance: it chooses the algorithm that best suits the geometry of the specific disturbances the drone is facing.
The researchers train their control system to do both things simultaneously using a technique called meta-learning, which teaches the system how to adapt to different types of disturbances.
Taken together, these ingredients enable their adaptive control system to achieve 50 percent less trajectory tracking error than baseline methods in simulations, and to perform better with new wind speeds it didn’t see during training.
In the future, this adaptive control system could help autonomous drones more efficiently deliver heavy parcels despite strong winds or monitor fire-prone areas of a national park.
“The concurrent learning of these components is what gives our method its strength. By leveraging meta-learning, our controller can automatically make choices that will be best for quick adaptation,” says Navid Azizan, who is the Esther and Harold E. Edgerton Assistant Professor in the MIT Department of Mechanical Engineering and the Institute for Data, Systems, and Society (IDSS), a principal investigator of the Laboratory for Information and Decision Systems (LIDS), and the senior author of a paper on this control system.
Azizan is joined on the paper by lead author Sunbochen Tang, a graduate student in the Department of Aeronautics and Astronautics, and Haoyuan Sun, a graduate student in the Department of Electrical Engineering and Computer Science. The research was recently presented at the Learning for Dynamics and Control Conference.
Finding the right algorithm
Typically, a control system incorporates a function that models the drone and its environment, and includes some existing information on the structure of potential disturbances. But in a real world filled with uncertain conditions, it is often impossible to hand-design this structure in advance.
Many control systems use an adaptation method based on a popular optimization algorithm, known as gradient descent, to estimate the unknown parts of the problem and determine how to keep the drone as close as possible to its target trajectory during flight. However, gradient descent is only one member of a larger family of algorithms known as mirror descent.
“Mirror descent is a general family of algorithms, and for any given problem, one of these algorithms can be more suitable than others. The name of the game is how to choose the particular algorithm that is right for your problem. In our method, we automate this choice,” Azizan says.
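The idea can be seen in a minimal toy sketch (our own illustration, not the paper's implementation): the familiar gradient-descent step corresponds to mirror descent with a squared-Euclidean mirror map, while swapping in a negative-entropy map yields the exponentiated-gradient update, which is naturally suited to variables constrained to a probability simplex. The cost vector and step size below are arbitrary assumptions chosen for the demo.

```python
import numpy as np

def gd_step(x, grad, lr=0.1):
    # Gradient descent: mirror descent with the squared-Euclidean mirror map.
    return x - lr * grad

def eg_step(x, grad, lr=0.1):
    # Exponentiated gradient: mirror descent with the negative-entropy
    # mirror map; iterates stay on the probability simplex automatically.
    y = x * np.exp(-lr * grad)
    return y / y.sum()

# Minimize the linear cost f(x) = <c, x> over the simplex.
c = np.array([0.9, 0.2, 0.5])
x = np.ones(3) / 3
for _ in range(200):
    x = eg_step(x, c)

# Mass concentrates on the lowest-cost coordinate.
print(np.argmax(x))  # prints 1
```

Both rules descend the same cost; the difference is the geometry each one assumes, which is exactly the choice the researchers' method automates.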
In their control system, the researchers replaced the function that contains some structure of potential disturbances with a neural network model that learns to approximate them from data. In this way, they don’t need to specify the structure of the wind disturbances the drone could encounter in advance.
Their method also uses an algorithm to automatically select the right mirror-descent function while learning the neural network model from data, rather than assuming a user has the ideal function picked out already. The researchers give this algorithm a range of functions to pick from, and it finds the one that best fits the problem at hand.
“Choosing a good distance-generating function to construct the right mirror-descent adaptation matters a lot in getting the right algorithm to reduce the tracking error,” Tang adds.
Learning to adapt
While the wind speeds the drone may encounter could change every time it takes flight, the controller’s neural network and mirror function should stay the same so they don’t need to be recomputed each time.
To make their controller more flexible, the researchers use meta-learning, teaching it to adapt by showing it a range of wind speed families during training.
“Our method can cope with different objectives because, using meta-learning, we can learn a shared representation through different scenarios efficiently from data,” Tang explains.
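A rough intuition for such a shared representation, as a toy NumPy sketch of our own (not the authors' model): if every wind scenario can be expressed as a different combination of the same features, then adapting to a new scenario reduces to fitting a small coefficient vector rather than retraining the whole model. The features and scenario data below are made-up assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(t):
    # Shared representation across all scenarios (assumed given in this toy;
    # in meta-learning it would itself be learned from data).
    return np.stack([np.sin(t), np.cos(t), t], axis=1)

t = np.linspace(0, 5, 100)
Phi = features(t)  # (100, 3) feature matrix

# Each "wind scenario" combines the shared features differently, plus noise.
true_w = rng.normal(size=(4, 3))
scenarios = [Phi @ w + 0.01 * rng.normal(size=100) for w in true_w]

# Fast per-scenario adaptation: fit only a 3-coefficient head by least
# squares, reusing the shared features instead of retraining everything.
for d in scenarios:
    w_hat, *_ = np.linalg.lstsq(Phi, d, rcond=None)
```

The point of the sketch is the split: a representation shared across scenarios, and a tiny per-scenario fit that can be done quickly online.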
In the end, the user feeds the control system a target trajectory, and it continuously recalculates, in real time, how the drone should produce thrust to stay as close as possible to that trajectory while accommodating the uncertain disturbances it encounters.
In both simulations and real-world experiments, the researchers showed that their method led to significantly less trajectory tracking error than baseline approaches with every wind speed they tested.
“Even if the wind disturbances are much stronger than we had seen during training, our technique shows that it can still handle them successfully,” Azizan adds.
In addition, the margin by which their method outperformed the baselines grew as the wind speeds intensified, showing that it can adapt to challenging environments.
The team is now performing hardware experiments to test their control system on real drones with varying wind conditions and other disturbances.
They also want to extend their method so it can handle disturbances from multiple sources at once. For instance, changing wind speeds could cause the weight of a parcel the drone is carrying to shift in flight, especially when the drone is carrying sloshing payloads.
They also want to explore continual learning, so the drone could adapt to new disturbances without needing to be retrained on all the data it has seen so far.
###
This research was supported, in part, by MathWorks, the MIT-IBM Watson AI Lab, the MIT-Amazon Science Hub, and the MIT-Google Program for Computing Innovation.