Tuesday, December 16, 2025

Penn and UMich create world’s smallest programmable, autonomous robots



Smaller than a grain of salt, the light-powered bots can think, sense and act on their own, opening up new possibilities in manufacturing and medicine




University of Pennsylvania School of Engineering and Applied Science

Image: A microrobot, fully integrated with sensors and a computer, small enough to balance on the ridge of a fingerprint. Credit: Marc Miskin, University of Pennsylvania




Researchers at the University of Pennsylvania and University of Michigan have created the world’s smallest fully programmable, autonomous robots: microscopic swimming machines that can independently sense and respond to their surroundings, operate for months and cost just a penny each.

Barely visible to the naked eye, each robot measures about 200 by 300 by 50 micrometers, smaller than a grain of salt. Operating at the scale of many biological microorganisms, the robots could advance medicine by monitoring the health of individual cells and manufacturing by helping construct microscale devices.

Powered by light, the robots carry microscopic computers and can be programmed to move in complex patterns, sense local temperatures and adjust their paths accordingly. 

Described in Science Robotics and Proceedings of the National Academy of Sciences (PNAS), the robots operate without tethers, magnetic fields or joystick-like control from the outside, making them the first truly autonomous, programmable robots at this scale.

“We’ve made autonomous robots 10,000 times smaller,” says Marc Miskin, Assistant Professor in Electrical and Systems Engineering at Penn Engineering and the papers’ senior author. “That opens up an entirely new scale for programmable robots.”

Breaking the Sub-Millimeter Barrier

For decades, electronics have gotten smaller and smaller, but robots have struggled to keep pace. “Building robots that operate independently at sizes below one millimeter is incredibly difficult,” says Miskin. “The field has essentially been stuck on this problem for 40 years.” 

The forces that dominate the human world, like gravity and inertia, depend on volume. Shrink down to the size of a cell, however, and forces tied to surface area, like viscous drag, take over. “If you’re small enough, pushing on water is like pushing through tar,” says Miskin.
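To put rough numbers on that intuition, a quick Reynolds number estimate (the ratio of inertial to viscous forces) shows why viscosity dominates at this scale. The robot’s size and speed are taken from elsewhere in this article; the water properties are standard textbook values, and the calculation is an illustrative sketch rather than a figure from the papers.

```python
# Rough Reynolds number estimate for a ~300-micrometer robot swimming in water
# at about one body length per second. The values are illustrative assumptions,
# not measurements reported in the papers.
rho = 1000.0   # density of water, kg/m^3
mu = 1.0e-3    # dynamic viscosity of water, Pa*s
L = 300e-6     # characteristic length, m (the robot's longest dimension)
v = 300e-6     # swimming speed, m/s (about one body length per second)

reynolds = rho * v * L / mu
print(f"Reynolds number ~ {reynolds:.2f}")  # ~0.09, deep in the viscous regime
```

A Reynolds number far below one means inertia barely matters: the instant the robot stops driving the surrounding fluid, it stops moving.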

In other words, at the microscale, strategies that move larger robots, like limbs, rarely succeed. “Very tiny legs and arms are easy to break,” says Miskin. “They’re also very hard to build.” 

So the team had to design an entirely new propulsion system, one that worked with — rather than against — the unique physics of locomotion in the microscopic realm. 

Making the Robots Swim

Large aquatic creatures, like fish, move by pushing the water behind them. Thanks to Newton’s Third Law, the water exerts an equal and opposite force on the fish, propelling it forward.

The new robots, by contrast, don’t flex their bodies at all. Rather, they generate an electrical field that nudges ions in the surrounding solution. Those ions, in turn, push on nearby water molecules, animating the water around the robot’s body. “It’s as if the robot is in a moving river,” says Miskin, “but the robot is also causing the river to move.” 

The robots can adjust the electrical field that causes the effect, allowing them to move in complex patterns and even travel in coordinated groups, much like a school of fish, at speeds of up to one body length per second. 

And because the electrodes that generate the field have no moving parts, the robots are extremely durable. “You can repeatedly transfer these robots from one sample to another using a micropipette without damaging them,” says Miskin. Charged by the glow of an LED, the robots can keep swimming for months on end. 

Giving the Robots Brains

To be truly autonomous, a robot needs a computer to make decisions, electronics to sense its surroundings and control its propulsion, and tiny solar panels to power everything. All of that must fit on a chip a fraction of a millimeter in size. This is where David Blaauw’s team at the University of Michigan came in.

Blaauw’s lab holds the record for the world’s smallest computer. When Miskin and Blaauw first met at a presentation hosted by the Defense Advanced Research Projects Agency (DARPA) five years ago, the pair immediately realized that their technologies were a perfect match. “We saw that Penn Engineering’s propulsion system and our tiny electronic computers were just made for each other,” says Blaauw. Still, it took five years of hard work on both sides to deliver their first working robot. 

“The key challenge for the electronics,” says Blaauw, “is that the solar panels are tiny and produce only 75 nanowatts of power. That is over 100,000 times less power than what a smart watch consumes.” To run the robot’s computer on so little power, the Michigan team developed special circuits that operate at extremely low voltages and cut the computer’s power consumption by a factor of more than 1,000.
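A back-of-the-envelope check makes that gap concrete. The 75-nanowatt figure comes from the article; the smartwatch draw below is an assumed round number, since the article gives only the ratio.

```python
# Back-of-the-envelope power comparison. The 75 nW figure is from the article;
# the smartwatch power draw is an assumed round number for illustration.
robot_power_w = 75e-9        # 75 nanowatts from the robot's solar panels
smartwatch_power_w = 10e-3   # assume roughly 10 milliwatts for a smartwatch

ratio = smartwatch_power_w / robot_power_w
print(f"A smartwatch draws ~{ratio:,.0f} times more power")  # on the order of 100,000x
```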

Still, the solar panels occupy most of the space on the robot, so the second challenge was to fit the processor and the memory that stores the program into the little space that remained. “We had to totally rethink the computer program instructions,” says Blaauw, “condensing what conventionally would require many instructions for propulsion control into a single, special instruction to shrink the program’s length to fit in the robot’s tiny memory space.”
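As a loose illustration of that trade-off, compare a propulsion routine written with generic instructions against one packed into a single dedicated instruction. The mnemonics, encodings and electrode names below are invented for this sketch; the robots’ actual instruction set is not described in the article.

```python
# Hypothetical illustration (not the Michigan team's actual instruction set) of
# why a single combined "propel" instruction saves program memory.

# Generic version: one instruction per step of setting up a propulsion pattern.
generic_program = [
    ("LOAD",  "R0", 0b1010),            # load an electrode phase pattern
    ("STORE", "ELECTRODE_A", "R0"),
    ("LOAD",  "R0", 0b0101),
    ("STORE", "ELECTRODE_B", "R0"),
    ("WAIT",  10),                      # hold the pattern for 10 ticks
]

# Condensed version: one special instruction encodes the pattern and duration.
condensed_program = [
    ("PROPEL", 0b1010_0101, 10),        # electrode pattern plus duration in one word
]

print(len(generic_program), "instructions vs", len(condensed_program))
```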

Robots that Sense, Remember and React

What these innovations made possible is the first sub-millimeter robot that can actually think. To the researchers’ knowledge, no one has previously put a true computer — processor, memory and sensors — into a robot this small. That breakthrough makes these devices the first microscopic robots that can sense and act for themselves. 

The robots have electronic sensors that can detect the temperature to within a third of a degree Celsius. This lets robots move towards areas of increasing temperature, or report the temperature — a proxy for cellular activity — allowing them to monitor the health of individual cells.
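A minimal sketch of that gradient-following behavior could look like the loop below. The sensor and motion functions are hypothetical placeholders rather than the robots’ real firmware interface, and the simple rule of keeping a heading while the water gets warmer is just one way such behavior might be written.

```python
import random

# Sketch of temperature-gradient following ("move toward warmer areas"), as
# described in the article. read_temperature() and swim() are placeholders,
# not the robots' actual firmware API.

def read_temperature():
    # stand-in for the on-board sensor, accurate to about a third of a degree C
    return 25.0 + random.uniform(-0.3, 0.3)

def swim(heading_degrees):
    pass  # stand-in for setting the electrode pattern that drives this heading

def follow_gradient(steps=100):
    heading = 0.0
    last_temp = read_temperature()
    for _ in range(steps):
        swim(heading)
        temp = read_temperature()
        if temp < last_temp:                    # getting colder: try a new direction
            heading = random.uniform(0.0, 360.0)
        last_temp = temp                        # otherwise keep heading toward warmth
    return heading

follow_gradient(steps=10)
```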

“To report out their temperature measurements, we designed a special computer instruction that encodes a value, such as the measured temperature, in the wiggles of a little dance the robot performs,” says Blaauw. “We then look at this dance through a microscope with a camera and decode from the wiggles what the robots are saying to us. It’s very similar to how honey bees communicate with each other.”
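One way to picture such an encoding, purely as a hedged sketch: quantize the reading into a fixed number of levels and emit one wiggle per bit, which a camera can then count and decode. The bit width and temperature range below are assumptions chosen to roughly match the sensor’s third-of-a-degree resolution; the robots’ actual dance encoding may differ.

```python
# Hypothetical wiggle encoding of a temperature reading. Six bits over a
# 20-40 degree C range gives roughly 0.3 degree resolution, in line with the
# sensor accuracy quoted in the article; the real encoding may differ.

def encode_dance(temp_c, lo=20.0, hi=40.0, bits=6):
    level = round((temp_c - lo) / (hi - lo) * (2**bits - 1))
    return [(level >> i) & 1 for i in reversed(range(bits))]   # 1 = right wiggle, 0 = left

def decode_dance(wiggles, lo=20.0, hi=40.0):
    level = 0
    for bit in wiggles:
        level = (level << 1) | bit
    return lo + level * (hi - lo) / (2**len(wiggles) - 1)

wiggles = encode_dance(31.7)
print(wiggles, "->", round(decode_dance(wiggles), 1))   # [1, 0, 0, 1, 0, 1] -> 31.7
```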

The robots are programmed by pulses of light that also power them. Each robot has a unique address that allows the researchers to load different programs on each robot. “This opens up a host of possibilities,” adds Blaauw, “with each robot potentially performing a different role in a larger, joint task.”  
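In software terms, that addressing scheme resembles filtering a broadcast message by a per-device address, sketched below. The address width, packet format and function names are hypothetical; the article says only that light pulses both power and program the robots and that each robot has a unique address.

```python
# Sketch of address-filtered programming over a shared light channel.
# Addresses, packet layout and function names are hypothetical.

ROBOT_ADDRESS = 0x2A          # this robot's unique address

def load_program(program_words):
    print(f"storing {len(program_words)} words")   # placeholder for writing to memory

def on_light_packet(target_address, program_words):
    # Every robot in the dish sees the same light pulses; only the addressed
    # robot stores the program, so each robot can be assigned a different role.
    if target_address == ROBOT_ADDRESS:
        load_program(program_words)

on_light_packet(0x2A, [0b1010_0101, 0b0001_1110])   # this robot loads the program
on_light_packet(0x17, [0b1111_0000])                # ignored: addressed to another robot
```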

Only the Beginning

Future versions of the robots could store more complex programs, move faster, integrate new sensors or operate in more challenging environments. In essence, the current design is a general platform: its propulsion system works seamlessly with electronics, its circuits can be fabricated cheaply at scale and its design allows for adding new capabilities. 

“This is really just the first chapter,” says Miskin. “We’ve shown that you can put a brain, a sensor and a motor into something almost too small to see, and have it survive and work for months. Once you have that foundation, you can layer on all kinds of intelligence and functionality. It opens the door to a whole new future for robotics at the microscale.”

These studies were conducted at the University of Pennsylvania (Penn) School of Engineering and Applied Science, the Penn School of Arts & Sciences, and the University of Michigan’s Department of Electrical Engineering and Computer Science, and were supported by the National Science Foundation (NSF 2221576), the University of Pennsylvania Office of the President, the Air Force Office of Scientific Research (AFOSR FA9550-21-1-0313), the Army Research Office (ARO YIP W911NF-17-S-0002), the Packard Foundation, the Sloan Foundation, the NSF National Nanotechnology Coordinated Infrastructure Program (NNCI-2025608), which supports the Singh Center for Nanotechnology, and Fujitsu Semiconductors.

Additional co-authors include Maya M. Lassiter, Kyle Skelil, Lucas C. Hanson, Scott Shrager, William H. Reinhardt, Tarunyaa Sivakumar and Mark Yim of the University of Pennsylvania, and Dennis Sylvester, Li Xu, and Jungho Lee of the University of Michigan. 

Eyes for an agricultural robot: AI system identifies weeds in apple orchards



Novel machine vision technology to guide a robotic precision herbicide sprayer intended to address labor shortages and prevent waste, pollution and excess chemical residues on fruit




Penn State

Image: Different weed species that the researchers trained the artificial intelligence (AI) machine vision model to recognize. The model is intended to guide an automated robotic precision herbicide spraying unit under development in the Department of Agricultural and Biological Engineering to control weeds in apple orchards. Credit: Penn State



UNIVERSITY PARK, Pa. — Weed control is essential in apple orchards because weeds compete with trees for nutrients, water and sunlight, which can reduce fruit yields. However, physically removing weeds is not only labor-intensive, it can also damage soil structure and tree roots, and using chemical sprays to kill weeds can lead to other problems, such as pollution, herbicide resistance and excess chemical residues on apples.

Another option, called precision weed management — detecting and measuring weeds with high accuracy, then applying small amounts of herbicide to control them efficiently — can help farmers avoid wasting chemicals or causing injury to crops or the environment, according to a team of researchers at Penn State. To help growers achieve such precise management, the researchers are developing an automated, robotic weed-management system.

The researchers reported on an early step in that process in the December issue of Computers and Electronics in Agriculture: an AI machine vision model they developed that can accurately find, outline, interpret and estimate the density of weeds in apple orchards. The system, intended to guide the eventual robotic precision sprayer, uses a machine vision innovation that allows a side-view camera to detect and identify weeds for treatment — even weeds that are partially obscured.

“In complex environments like apple orchards, it is difficult to develop weed-detection mechanisms because the tree canopy and low branches block the view from above, precluding traditional top-view camera systems, like drones, because they can’t clearly see the weeds on the ground,” said team leader and study senior author Long He, associate professor of agricultural and biological engineering. His research group in the College of Agricultural Sciences has been studying and developing robotic precision agricultural systems over the last decade. “A side-view camera can help, but weeds might be partially visible or hidden behind untargeted objects or tree trunks. This causes problems such as misidentifying weeds or losing track of a weed in real time.”

To overcome those challenges, study first author Lawrence Arthur, a doctoral candidate in the Department of Agricultural and Biological Engineering, led the team in improving a commercially available deep-learning model for the machine vision system. The model was already capable of fast object detection and segmentation, meaning it could find a weed and outline its exact shape, pixel by pixel.

To make it better, the researchers added a module that helps the model “pay attention” to the most important image features while suppressing irrelevant information in the scene. This innovation improved accuracy when parts of weeds are hidden or hard to distinguish, He explained. The researchers also integrated a tracking algorithm with a filtering mechanism for more effective weed tracking. The algorithm preserves each weed’s identity across video frames and prevents counting the same weed multiple times, He said, allowing the system to keep tracking weeds that briefly disappear behind apple trees or even other weeds.
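The study’s exact architecture isn’t detailed here, but one common form of such an attention module is a channel-attention (“squeeze-and-excitation” style) block that reweights feature channels so the network emphasizes informative features and suppresses irrelevant ones. The PyTorch sketch below shows that generic pattern, not the specific module used in the Penn State model.

```python
import torch
import torch.nn as nn

# Generic channel-attention block: learn a weight per feature channel from the
# global content of the image, then scale each channel by that weight. This is
# a common pattern for "pay attention to important features", not the exact
# module from the study.

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global context per channel
        self.fc = nn.Sequential(                     # excitation: per-channel weights in [0, 1]
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                           # reweight the feature map

# Example: reweight a feature map from a detection/segmentation backbone.
features = torch.randn(1, 256, 40, 40)
attended = ChannelAttention(256)(features)
print(attended.shape)   # torch.Size([1, 256, 40, 40])
```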

The study data was collected at Penn State’s Fruit Research and Extension Center in Biglerville and nearby apple orchards. Weed species included dandelion, common sow thistle, horseweed and Carolina horsenettle. The researchers took high-resolution photos of these weeds to form the dataset for training and testing the AI model.

“The model achieved high accuracy and recall for weed detection, making it suitable for automated weed management in orchards,” He pointed out. It achieved 84.9% average precision for segmentation and 83.6% average precision for localization, numbers that indicate strong accuracy in finding weeds and outlining them.

In tracking images across frames, the model scored 82% in multiple object tracking accuracy — meaning it achieved high accuracy in tracking multiple weeds; 78% in multiple object tracking precision — meaning it achieved good precision in estimating weed positions; and 88% in identification score — showing a strong ability to correctly maintain weed identities across video frames. Finally, the model recorded only six identity switches in the study, He noted, meaning it rarely confuses one weed for another as it tracks them.
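For reference, multiple object tracking accuracy is conventionally defined over all video frames by the CLEAR MOT formula, which the 82% figure presumably follows:

```latex
\mathrm{MOTA} = 1 - \frac{\sum_t \left(\mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDSW}_t\right)}{\sum_t \mathrm{GT}_t}
```

Here FN_t counts missed weeds, FP_t false detections, IDSW_t identity switches and GT_t ground-truth weeds in frame t, so fewer misses, false alarms and identity switches push the score toward 1.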

The research is a big step toward automated, precise weed control in agriculture, according to He.

“By combining better detection and stronger tracking with added density estimation, the model we developed provides more accurate, consistent weed detection, even in difficult orchard conditions,” he said. “By providing actionable data for site-specific weed management, this approach will improve herbicide efficiency and reduce waste.”

Contributing to the research were Caio Brunharo, assistant professor of weed science; Paul Heinemann, professor of agricultural and biological engineering; Magni Hussain, assistant research professor of electronics, instrumentation and control systems; and Sadjad Mahnan, graduate assistant in the Department of Agricultural and Biological Engineering.

This research was partially supported by the U.S. Department of Agriculture’s National Institute of Food and Agriculture, Pennsylvania Department of Agriculture, and the State Horticultural Association of Pennsylvania. 

At Penn State, researchers are solving real problems that impact the health, safety and quality of life of people across the commonwealth, the nation and around the world.    

For decades, federal support for research has fueled innovation that makes our country safer, our industries more competitive and our economy stronger. Recent federal funding cuts threaten this progress.    

Learn more about the implications of federal funding cuts to our future at Research or Regress.

 

Frontiers in Science Deep Dive series: How breaking the ‘memory wall’ using brain-inspired algorithms could help overcome AI energy costs



A complimentary virtual symposium from Frontiers




Frontiers





AI hardware needs to become more brain-like to meet the growing energy demands of real-world applications, according to researchers from Purdue University and Georgia Institute of Technology.   

In their new Frontiers in Science lead article, Prof Kaushik Roy and Prof Arijit Raychowdhury present a novel approach to AI hardware design, integrating neuromorphic processing capabilities and compute-in-memory (CIM) techniques to overcome the limitations of modern computing hardware. The article outlines a comprehensive roadmap for future AI-hardware research, emphasizing hardware–algorithm co-design to accelerate innovation across sectors such as healthcare, transportation, and robotics.

Join the authors at our Frontiers in Science Deep Dive webinar on 12 February 2026, 16:00–17:30 CET, as they discuss emerging strategies that could reduce data center energy use and enable real-time intelligence in compact, power-constrained systems. Potential applications include on-device medical diagnostics, autonomous vehicles, and drones that navigate safely. 

Breaking the memory wall: next-generation artificial intelligence hardware | 12 February 2026 | Register 

Frontiers in Science Deep Dive sessions bring researchers, policy experts, and innovators together from around the world to discuss a specific area of transformational science published in Frontiers' flagship, multidisciplinary journal, Frontiers in Science, and explore next steps for the field.