Light-powered evaporator robot
A sunlight-driven floating device combines photocatalysis, evaporation, and autonomous navigation for sustainable water purification
Chinese Society for Optical Engineering
image:
Schematic mechanism of the light‑propelled photocatalytic evaporator
Credit: Yong‑Lai Zhang (https://link.springer.com/article/10.1186/s43074-025-00169-4)
A research team from Jilin University has developed a solar-powered floating robot that purifies water while autonomously navigating across its surface, offering a new strategy for smart, self-powered water treatment in complex environments. The findings are reported in PhotoniX in a study titled "Light-propelled photocatalytic evaporator for robotic solar-driven water purification."
The system, designed as a lightweight, porous foam structure, integrates three critical functions: photocatalytic degradation, solar steam generation, and self-propulsion under light. It is composed of a hybrid material that combines reduced graphene oxide, Ti₃C₂Tₓ, and in situ grown TiO₂ nanoparticles, enabling the structure to respond efficiently to sunlight across a broad spectrum.
Unlike conventional solar-driven water purification devices, which are typically static and location-bound, this light-powered robot can move across the water surface by harnessing the Marangoni effect. When light is unevenly applied to one side of the device, it creates a surface tension gradient that propels the robot forward. This motion is entirely light-controlled—no batteries, wires, or motors are needed. By steering the light, researchers can direct the robot along programmable paths, enabling it to navigate obstacles or locate specific contaminated regions.
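As a back-of-the-envelope illustration (not drawn from the paper), the thrust available from a light-induced surface tension gradient scales with the surface tension difference across the floater and the length of its wetted edge. The sketch below uses purely hypothetical numbers to show how such an estimate is assembled.

```python
# Order-of-magnitude estimate of Marangoni propulsion.
# All values are hypothetical assumptions, not figures from the paper.

DELTA_GAMMA_N_M = 1e-3     # assumed surface tension difference across the floater (N/m)
EDGE_LENGTH_M = 0.02       # assumed wetted edge length of the floater (m)
DRAG_COEFF_N_S_M = 2e-4    # assumed linear viscous drag coefficient (N*s/m)

# Driving force: surface tension difference acting along the wetted edge.
thrust_n = DELTA_GAMMA_N_M * EDGE_LENGTH_M

# Steady speed where thrust balances linear drag: F = c * v  =>  v = F / c
steady_speed_m_s = thrust_n / DRAG_COEFF_N_S_M

print(f"Estimated thrust: {thrust_n:.1e} N")
print(f"Estimated steady speed: {steady_speed_m_s * 100:.0f} cm/s")
```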
Simultaneously, the device purifies water through two mechanisms. The TiO₂ nanoparticles catalyze the breakdown of organic pollutants, while the foam structure efficiently converts sunlight into heat to drive evaporation. The dual-action approach enhances water purification capacity and energy utilization, all within a single, self-contained platform.
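For context on the evaporation side, a simple energy balance bounds how fast water can be driven off: evaporation rate is roughly photothermal efficiency times solar irradiance divided by the latent heat of vaporization. The efficiency and irradiance used below are generic assumptions, not values reported in the study.

```python
# Energy-balance bound on interfacial solar evaporation.
# Efficiency and irradiance are generic assumptions, not the study's reported values.

SOLAR_FLUX_W_M2 = 1000.0     # "one sun" standard irradiance (W/m^2)
EFFICIENCY = 0.8             # assumed photothermal conversion efficiency
LATENT_HEAT_J_KG = 2.26e6    # latent heat of vaporization of water (J/kg)

# Evaporated mass flux in kg/(m^2*s), then converted to the conventional kg/(m^2*h)
evap_rate_kg_m2_s = EFFICIENCY * SOLAR_FLUX_W_M2 / LATENT_HEAT_J_KG
evap_rate_kg_m2_h = evap_rate_kg_m2_s * 3600.0

print(f"Estimated evaporation rate: {evap_rate_kg_m2_h:.2f} kg/(m^2*h)")
```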
This study demonstrates the potential of integrating photonic materials, solar energy harvesting, and intelligent motion control into a compact, autonomous system. According to the authors, the work represents a breakthrough in combining light-driven propulsion with multifunctional purification, one that could prove especially valuable in remote or resource-limited areas where access to clean water and external power is restricted.
As water scarcity becomes an urgent global issue, the development of intelligent, self-powered systems like this robotic evaporator provides an innovative direction toward sustainable water treatment solutions.
Journal
PhotoniX
Method of Research
News article
Subject of Research
Not applicable
Article Title
Light-propelled photocatalytic evaporator for robotic solar-driven water purification
Empowering robots with human-like perception to navigate unwieldy terrain
A new Duke-developed AI system fuses vision, vibrations, touch and its own body states to help robots understand and move through difficult in-the-wild environments
Duke University
video:
WildFusion uses a combination of sight, touch, sound and balance to help four-legged robots better navigate difficult terrain like dense forests.
Credit: Boyuan Chen, Duke University
Our senses provide a remarkable wealth of information that allows our brains to navigate the world around us. Touch, smell, hearing, and a strong sense of balance are crucial to making it through environments that seem easy to us, such as a relaxing hike on a weekend morning.
An innate understanding of the canopy overhead helps us figure out where the path leads. The sharp snap of branches or the soft cushion of moss informs us about the stability of our footing. The thunder of a tree falling or branches dancing in strong winds lets us know of potential dangers nearby.
Robots, in contrast, have long relied solely on visual information such as cameras or lidar to move through the world. Outside of Hollywood, multisensory navigation has long remained challenging for machines. The forest, with its beautiful chaos of dense undergrowth, fallen logs and ever-changing terrain, is a maze of uncertainty for traditional robots.
Now, researchers from Duke University have developed a novel framework named WildFusion that fuses vision, vibration and touch to enable robots to “sense” complex outdoor environments much like humans do. The work was recently accepted to the IEEE International Conference on Robotics and Automation (ICRA 2025), which will be held May 19-23, 2025, in Atlanta, Georgia.
“WildFusion opens a new chapter in robotic navigation and 3D mapping,” said Boyuan Chen, the Dickinson Family Assistant Professor of Mechanical Engineering and Materials Science, Electrical and Computer Engineering, and Computer Science at Duke University. “It helps robots to operate more confidently in unstructured, unpredictable environments like forests, disaster zones and off-road terrain.”
"Typical robots rely heavily on vision or LiDAR alone, which often falter without clear paths or predictable landmarks," added Yanbaihui Liu, the lead student author and a second-year Ph.D. student in Chen’s lab. “Even advanced 3D mapping methods struggle to reconstruct a continuous map when sensor data is sparse, noisy or incomplete, which is a frequent problem in unstructured outdoor environments. That’s exactly the challenge WildFusion was designed to solve.”
WildFusion, built on a quadruped robot, integrates multiple sensing modalities, including an RGB camera, LiDAR, inertial sensors, and, notably, contact microphones and tactile sensors. As in traditional approaches, the camera and the LiDAR capture the environment’s geometry, color, distance and other visual details. What makes WildFusion special is its use of acoustic vibrations and touch.
As the robot walks, contact microphones record the unique vibrations generated by each step, capturing subtle differences, such as the crunch of dry leaves versus the soft squish of mud. Meanwhile, the tactile sensors measure how much force is applied to each foot, helping the robot sense stability or slipperiness in real time. These added senses are also complemented by the inertial sensor that collects acceleration data to assess how much the robot is wobbling, pitching or rolling as it traverses uneven ground.
Each type of sensory data is then processed through specialized encoders and fused into a single, rich representation. At the heart of WildFusion is a deep learning model based on the idea of implicit neural representations. Unlike traditional methods that treat the environment as a collection of discrete points, this approach models complex surfaces and features continuously, allowing the robot to make smarter, more intuitive decisions about where to step, even when its vision is blocked or ambiguous.
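The article itself contains no code; the sketch below is a minimal, hypothetical PyTorch illustration of the general idea described above: encode each modality, fuse the encodings, and query an implicit network at continuous 3D coordinates. The module names, feature sizes, and outputs are assumptions for illustration and do not reproduce the authors' actual WildFusion architecture.

```python
# Minimal sketch of multimodal fusion with an implicit neural representation
# (hypothetical architecture and dimensions; NOT the authors' WildFusion code).
import torch
import torch.nn as nn

class ImplicitTraversabilityField(nn.Module):
    def __init__(self, vision_dim=256, audio_dim=64, tactile_dim=32, hidden=256):
        super().__init__()
        fused_dim = vision_dim + audio_dim + tactile_dim
        # Fuse per-modality encodings (camera/LiDAR, contact-microphone, tactile)
        self.fuse = nn.Sequential(nn.Linear(fused_dim, hidden), nn.ReLU())
        # Implicit decoder: (3D query point, fused context) -> occupancy + traversability
        self.decoder = nn.Sequential(
            nn.Linear(3 + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # [occupancy logit, traversability logit]
        )

    def forward(self, xyz, vision_feat, audio_feat, tactile_feat):
        # xyz: (N, 3) continuous query coordinates; features: (N, dim) per-point context
        context = self.fuse(torch.cat([vision_feat, audio_feat, tactile_feat], dim=-1))
        return self.decoder(torch.cat([xyz, context], dim=-1))

# Query the field at arbitrary 3D points, even where sensor coverage is sparse.
model = ImplicitTraversabilityField()
pts = torch.rand(8, 3)
out = model(pts, torch.rand(8, 256), torch.rand(8, 64), torch.rand(8, 32))
occupancy_logit, traversability_logit = out[:, 0], out[:, 1]
```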
“Think of it like solving a puzzle where some pieces are missing, yet you're able to intuitively imagine the complete picture,” explained Chen. “WildFusion’s multimodal approach lets the robot ‘fill in the blanks’ when sensor data is sparse or noisy, much like what humans do.”
WildFusion was tested at the Eno River State Park in North Carolina near Duke’s campus, successfully helping a robot navigate dense forests, grasslands and gravel paths. “Watching the robot confidently navigate terrain was incredibly rewarding,” Liu shared. “These real-world tests proved WildFusion’s remarkable ability to accurately predict traversability, significantly improving the robot’s decision-making on safe paths through challenging terrain.”
Looking ahead, the team plans to expand the system by incorporating additional sensors, such as thermal or humidity detectors, to further enhance a robot’s ability to understand and adapt to complex environments. With its flexible modular design, WildFusion provides vast potential applications beyond forest trails, including disaster response across unpredictable terrains, inspection of remote infrastructure and autonomous exploration.
“One of the key challenges for robotics today is developing systems that not only perform well in the lab but that reliably function in real-world settings,” said Chen. “That means robots that can adapt, make decisions and keep moving even when the world gets messy.”
This research was supported by DARPA (HR00112490419, HR00112490372) and the Army Research Laboratory (W911NF2320182, W911NF2220113).
“WildFusion: Multimodal Implicit 3D Reconstructions in the Wild.” Yanbaihui Liu and Boyuan Chen. IEEE International Conference on Robotics and Automation (ICRA 2025).
Project Website: http://generalroboticslab.com/WildFusion
General Robotics Lab Website: http://generalroboticslab.com
Picking Through Grass [VIDEO]
WildFusion helps robots identify safe paths through challenging terrain, such as tall foliage that might otherwise look unnavigable.
Credit: Boyuan Chen, Duke University
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
WildFusion: Multimodal Implicit 3D Reconstructions in the Wild
Article Publication Date
19-May-2025