Thursday, May 15, 2025

 

Eldercare robot helps people sit and stand, and catches them if they fall



The new design could assist the elderly as they age in place at home.



Massachusetts Institute of Technology

Image: Six of multiple possible assistance scenarios with a prototype of a new robot being developed at MIT. Top row: getting into/out of a bathtub, bending down to reach objects, and catching a fall. Bottom row: powered sit-to-stand transition from a toilet, lifting a person from the floor, and walking assistance. (Credit: Courtesy of Roberto Bolli and Harry Asada)




The United States population is older than it has ever been. Today, the country’s median age is 38.9, which is nearly a decade older than it was in 1980. And the number of adults older than 65 is expected to balloon from 58 million to 82 million by 2050. The challenge of caring for the elderly, amid shortages in care workers, rising health care costs, and evolving family structures, is an increasingly urgent societal issue. 

To help address the eldercare challenge, a team of MIT engineers is looking to robotics. They have built and tested the Elderly Bodily Assistance Robot, or E-BAR, a mobile robot designed to physically support the elderly and prevent them from falling as they move around their homes. 

E-BAR acts as a set of robotic handlebars that follows a person from behind. A user can walk independently or lean on the robot’s arms for support. The robot can support the person’s full weight, lifting them from sitting to standing and vice versa along a natural trajectory. And the robot’s arms can catch them by rapidly inflating side airbags if they begin to fall.

With their design, the researchers hope to prevent falls, which today are the leading cause of injury in adults who are 65 and older.  

“Many older adults underestimate the risk of falling and refuse to use physical aids, which are cumbersome, while others overestimate the risk and may not exercise, leading to declining mobility,” says Harry Asada, the Ford Professor of Engineering at MIT. “Our design concept is to provide older adults who have balance impairment with robotic handlebars for stabilizing their body. The handlebars go anywhere and provide support anytime, whenever they need it.”

In its current version, the robot is operated via remote control. In future iterations, the team plans to automate much of the bot’s functionality, enabling it to autonomously follow and physically assist a user. The researchers are also working on streamlining the device to make it slimmer and more maneuverable in small spaces.

“I think eldercare is the next great challenge,” says E-BAR designer Roberto Bolli, a graduate student in the MIT Department of Mechanical Engineering. “All the demographic trends point to a shortage of caregivers, a surplus of elderly persons, and a strong desire for elderly persons to age in place. We see it as an unexplored frontier in America, but also an intrinsically interesting challenge for robotics.”

Bolli and Asada will present a paper detailing the design of E-BAR at the IEEE Conference on Robotics and Automation (ICRA) later this month. 

Home support

Asada’s group at MIT develops a variety of technologies and robotic aids to assist the elderly. In recent years, others have developed fall-prediction algorithms and designed robots and automated devices, including robotic walkers, wearable self-inflating airbags, and robotic frames that secure a person with a harness and move with them as they walk.

In designing E-BAR, Asada and Bolli aimed for a robot that essentially does three tasks: providing physical support, preventing falls, and safely and unobtrusively moving with a person. What’s more, they looked to do away with any harness, to give a user more independence and mobility. 

“Elderly people overwhelmingly do not like to wear harnesses or assistive devices,” Bolli says. “The idea behind the E-BAR structure is, it provides body weight support, active assistance with gait, and fall catching while also being completely unobstructed in the front. You can just get out anytime.” 

The team looked to design a robot specifically for aging in place at home or helping in care facilities. Based on their interviews with older adults and their caregivers, they came up with several design requirements, including that the robot must fit through home doors, allow the user to take a full stride, and support their full weight to help with balance, posture, and transitions from sitting to standing.

The robot consists of a heavy, 220-pound base whose dimensions and structure were optimized to support the weight of an average human without tipping or slipping. Underneath the base is a set of omnidirectional wheels that allows the robot to move in any direction without pivoting, if needed. (Imagine a car’s wheels shifting to slide into a space between two other cars, without parallel parking.) 
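The release does not say which wheel design E-BAR uses; mecanum wheels are one common way to achieve this kind of holonomic motion. As a minimal sketch under that assumption, with illustrative dimensions rather than E-BAR's actual geometry, the standard inverse kinematics map a desired body velocity to four wheel speeds:

```python
import numpy as np

def mecanum_wheel_speeds(vx, vy, wz, r=0.1, half_length=0.3, half_width=0.25):
    """Wheel angular velocities (rad/s) for a 4-wheel mecanum base.

    vx, vy: desired body-frame linear velocity (m/s); wz: yaw rate (rad/s).
    r, half_length, half_width are illustrative, not E-BAR's dimensions.
    """
    k = half_length + half_width
    return np.array([
        vx - vy - k * wz,  # front-left
        vx + vy + k * wz,  # front-right
        vx + vy - k * wz,  # rear-left
        vx - vy + k * wz,  # rear-right
    ]) / r

# Slide sideways into a tight spot: pure lateral motion, zero yaw, no pivoting.
print(mecanum_wheel_speeds(vx=0.0, vy=0.2, wz=0.0))
```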

Extending out from the robot’s base is an articulated body made from 18 interconnected bars, or linkages, that can reconfigure like a foldable crane to lift a person from a sitting to standing position, and vice versa. Two arms with handlebars stretch out from the robot in a U-shape, which a person can stand between and lean against if they need additional support. Finally, each arm of the robot is embedded with airbags made from a soft yet grippable material that can inflate instantly to catch a person if they fall, without causing bruising on impact. The researchers believe that E-BAR is the first robot able to catch a falling person without wearable devices or use of a harness.
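The release does not describe the lift trajectory mathematically; a common way to approximate a natural, human-like sit-to-stand motion is a minimum-jerk profile between the start and end poses. A sketch under that assumption, with hypothetical support heights:

```python
import numpy as np

def minimum_jerk(p0, pf, duration, t):
    """Minimum-jerk position at time t for a move from p0 to pf.

    A classic smooth-motion profile (zero velocity and acceleration at both
    ends), often used to approximate natural human movement.
    """
    tau = np.clip(t / duration, 0.0, 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return p0 + (pf - p0) * s

# Hypothetical sit-to-stand: raise the support from 0.45 m to 0.95 m over 4 s.
for t in np.linspace(0.0, 4.0, 5):
    print(f"t={t:.1f} s  height={minimum_jerk(0.45, 0.95, 4.0, t):.3f} m")
```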

They tested the robot in the lab with an older adult who volunteered to use the robot in various household scenarios. The team found that E-BAR could actively support the person as they bent down to pick something up from the ground and stretched up to reach an object off a shelf — tasks that can be challenging to do while maintaining balance. The robot also was able to lift the person up and over the lip of a tub, simulating the task of getting out of a bathtub. 

Bolli envisions that a design like E-BAR would be ideal for use in the home by elderly people who still have a moderate degree of muscle strength but require assistive devices for activities of daily living.

“Seeing the technology used in real-life scenarios is really exciting,” says Bolli.

In their current paper, the researchers did not incorporate any fall-prediction capabilities in E-BAR’s airbag system. But another project in Asada’s lab, led by graduate student Emily Kamienski, has focused on developing algorithms with machine learning to control a new robot in response to the user’s real-time fall risk level.

Alongside E-BAR, Asada sees different technologies in his lab as providing different levels of assistance for people at certain phases of life or mobility.

“Eldercare conditions can change every few weeks or months,” Asada says. “We’d like to provide continuous and seamless support as a person’s disability or mobility changes with age.” 

This work was supported, in part, by the National Robotics Initiative and the National Science Foundation.

 

SwRI demonstrates SWORD™ robotics programming software at Automate 2025



SWORD integrates CAD with open-source ROS tools to streamline automation



Business Announcement

Southwest Research Institute

Image: SWORD users can use CAD to create a 3D virtual environment to match a robotics hardware setup. This 3D model simulates how a robot interacts with a metal jigsaw puzzle piece. (Credit: Southwest Research Institute)





SAN ANTONIO — May 13, 2025 — Southwest Research Institute (SwRI) is simplifying robotics programming with software that models, plans and executes automation in a user-friendly environment. The SwRI Workbench for Offline Robotics Development (SWORD™) accelerates robotics development by reducing the manual coding required for complex applications.

SWORD users work in a computer-aided design (CAD) 3D environment, leveraging novel robotic modeling to configure systems and plan motion on robotic arms, tools and work cells. SWORD connects 3D visualizations to integrated Robot Operating System (ROS) software modules, rapidly converting digital simulations into executable processes and free-space robotic motions deployed on physical hardware.

“We put a lot of thought into creating a tool that simulates robotic movement and then converts it into a set of commands that run on hardware,” said Michael Ripperger, who is leading SWORD development. “It can be used by robotics experts, and its intuitive design is particularly useful for manufacturing engineers who don’t have a coding background.”

SwRI will demonstrate SWORD during the Automate show May 12-15 in Detroit. Attendees can visit Booth No. 5607 to see a live demonstration of a robot arm programmed in SWORD, with process paths for a part shaped like a jigsaw puzzle piece set up in the CAD interface.

“The demo shows how SWORD can unlock complex applications that would otherwise be cost prohibitive and time consuming with manual coding,” Ripperger said.

The traditional ROS-Industrial workflow requires developers to be deeply familiar with programming languages and software code libraries. Even experienced developers within the ROS-I ecosystem and beyond may spend weeks on the initial setup of a ROS application.
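For a sense of what that manual setup looks like, the snippet below is a generic MoveIt/ROS 1 example, not SWORD code, and it assumes an already-configured robot with a planning group named 'manipulator':

```python
import sys
import rospy
import moveit_commander

# Boilerplate a developer writes by hand before any motion planning happens.
moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("manual_motion_plan")
group = moveit_commander.MoveGroupCommander("manipulator")  # assumed group name

# Plan a small Cartesian offset from the current pose and execute it.
target = group.get_current_pose().pose
target.position.z += 0.10  # raise the tool by 10 cm
group.set_pose_target(target)
group.go(wait=True)
group.stop()
group.clear_pose_targets()
```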

SwRI manages the ROS-Industrial Americas Consortium, supports ROS-I software repositories, and runs training and developer events. SwRI developed SWORD so manufacturing engineers with CAD knowledge can leverage complex capabilities within the ROS codebase.

SWORD is a plugin to the FreeCAD application with a graphical toolkit that builds motion planning environments and collision geometries and tests advanced robotic motion-planning applications.

“A major goal in developing SWORD is to adapt ROS for manufacturing and industrial audiences in a way that is more approachable in a familiar environment,” said Matt Robinson, an SwRI program manager who oversees the ROS-Industrial Americas Consortium.


Image: SWORD users can create a path for robotic movement. The green lines depict the planned path, showing where a robotic arm will move an end-effector around a metal jigsaw puzzle piece. (Credit: Southwest Research Institute)

Key SWORD features include:

  • Environmental Modeling: Create or import a CAD model of your robot, including fixtures and end-of-arm-tooling. Users can evaluate and calculate joint configurations by manipulating and controlling robot models using joint sliders and simulating tool movement with an intuitive dragger.
  • Robot Manipulation and Planning: Generate motion plans using commercial path planners, creating custom pipelines for application-specific behavior while predicting and avoiding collisions.
  • Custom Planning Pipeline: Define robot motion using either coordinate-based or joint waypoints, specifying different movement segment types and motion groups while inserting supplementary commands (see the joint-waypoint sketch after this list).
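As an illustration of the joint-waypoint style of motion definition, a segment of the kind such a pipeline might emit can be expressed with standard ROS trajectory messages; the joint names, positions, and timing below are hypothetical:

```python
import rospy
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

# A three-waypoint joint-space segment for a six-axis arm (values hypothetical).
traj = JointTrajectory()
traj.joint_names = ["joint_1", "joint_2", "joint_3", "joint_4", "joint_5", "joint_6"]

waypoints = [
    [0.0, -1.2, 1.0, 0.0, 0.5, 0.0],  # approach
    [0.3, -1.0, 1.1, 0.0, 0.4, 0.3],  # over the part
    [0.3, -0.8, 1.3, 0.0, 0.2, 0.3],  # process start
]
for i, positions in enumerate(waypoints):
    pt = JointTrajectoryPoint()
    pt.positions = positions
    pt.time_from_start = rospy.Duration(2.0 * (i + 1))  # 2 s per segment
    traj.points.append(pt)
```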


To inquire about a trial license, visit https://sword.swri.org or listen to Robinson and Ripperger discuss SWORD on the Technology Today Podcast.

 

Smarter skies: A new AI model turns street cameras into rainfall sensors




Chinese Society for Environmental Sciences
Image: Hybrid deep learning framework for video-based rainfall estimation. A novel framework integrates urban surveillance video data with a two-stage AI pipeline: an enhanced random forest classifier detects rain streaks and selects key image regions, while a hybrid deep neural network combining depthwise separable convolution and gated recurrent units accurately predicts rainfall intensity in real time. (Credit: Environmental Science and Ecotechnology)




Rainfall estimation is vital for flood forecasting, urban planning, and climate modeling. However, current measurement systems face limitations in coverage, cost, and accuracy—especially in complex urban environments. A new study introduces an innovative deep-learning-based framework that uses common surveillance cameras to estimate rainfall in real time. By combining computer vision and a hybrid AI model, researchers achieved high predictive accuracy across various environmental conditions and lighting scenarios. The approach, validated using real-world data from two Chinese cities, outperformed traditional methods while maintaining low computational costs. This work demonstrates a practical and scalable solution for improving hydrological monitoring and mitigating flood risks in cities worldwide.

Conventional methods for measuring rainfall—such as rain gauges, radar, and satellite imaging—often lack the resolution and responsiveness needed for dynamic urban conditions. These systems are typically expensive, limited in spatial granularity, and prone to errors under high-intensity rainfall. Moreover, the global decline in ground-based monitoring stations has further exacerbated data scarcity. Researchers have explored alternative techniques, including audio sensing and cellular networks, but many require complex calibration or infrastructure. Surveillance cameras, already widespread in urban settings, offer untapped potential for fine-scale rainfall detection. However, challenges like low video resolution, background noise, and changing light conditions have hindered broader adoption. Due to these challenges, a deeper investigation into camera-based rainfall estimation is urgently needed.

A research team from Tianjin University has developed an AI-powered method that turns everyday surveillance cameras into rainfall sensors. Their findings (DOI: 10.1016/j.ese.2025.100562) were published in April 2025 in Environmental Science and Ecotechnology. The study presents a hybrid framework combining image-quality analysis, enhanced random forest classifiers, and a deep learning regression model using depthwise separable convolution and gated recurrent units. Tested in the cities of Tianjin and Fuzhou, this novel system achieved superior accuracy and robustness in predicting rainfall, even during night-time or under poor visibility conditions.

The proposed system operates through two key modules: a feature extraction module (FeM) and a rainfall estimation module (RiM). The FeM analyzes video frames using a novel image quality signature (IQS) method that extracts brightness, contrast, and texture features to detect rain streaks, even from noisy or low-light footage. It then uses an enhanced random forest classifier (eRFC) to classify video frames and apply optimal filters, accurately isolating rain features while discarding irrelevant visual information. The RiM employs a hybrid deep learning model combining depthwise separable convolution (DSC) and gated recurrent units (GRU), enabling it to capture both spatial and temporal patterns in rain events. This architecture proved highly effective in estimating rainfall intensity (RI) at minute-level intervals. The model was trained on over 60 hours of video data and validated against rain gauge measurements, achieving an R² value of up to 0.95 and a Kling–Gupta efficiency (KGE) of 0.97. Importantly, the system demonstrated robustness across varying conditions, including daytime and nighttime, and across multiple surveillance cameras. This adaptability marks a significant advancement in cost-effective, scalable rainfall monitoring technologies.
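The paper's exact architecture and hyperparameters are not reproduced in this release; the Keras sketch below only illustrates the general DSC-plus-GRU idea, with made-up input shapes and layer sizes:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical input: 16-frame clips of 64x64 grayscale crops chosen by the FeM.
model = models.Sequential([
    layers.Input(shape=(16, 64, 64, 1)),
    # Per-frame spatial features via depthwise separable convolution (DSC).
    layers.TimeDistributed(layers.SeparableConv2D(32, 3, padding="same", activation="relu")),
    layers.TimeDistributed(layers.MaxPooling2D()),
    layers.TimeDistributed(layers.SeparableConv2D(64, 3, padding="same", activation="relu")),
    layers.TimeDistributed(layers.GlobalAveragePooling2D()),
    # Temporal structure of the rain event across frames via a GRU.
    layers.GRU(64),
    # Rainfall intensity (e.g., mm/h); ReLU keeps the estimate non-negative.
    layers.Dense(1, activation="relu"),
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```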

"Our system leverages widely available surveillance infrastructure and advanced AI to fill gaps left by traditional rainfall monitoring techniques," said Dr. Mingna Wang, senior author of the study. "What's most exciting is that we can now provide highly accurate, real-time rainfall estimates using existing urban technology, even under challenging conditions like night-time or high-density rainfall. This opens the door to smarter flood management systems and more resilient cities in the face of climate change."

This research offers a scalable and low-cost solution for urban rainfall monitoring, particularly valuable for cities facing infrastructure and budget constraints. By repurposing existing surveillance camera networks, municipalities can implement real-time rainfall monitoring systems without significant additional investment. The model's ability to function across diverse lighting and environmental conditions makes it ideal for deployment in complex urban settings. Moreover, the framework can enhance predictive flood modeling, support emergency response strategies, and inform infrastructure planning. Future improvements, such as integrating additional data sources or optimizing performance during high-intensity rainfall, could further elevate its utility in climate adaptation and smart city initiatives.

###

References

DOI

10.1016/j.ese.2025.100562

Original Source URL

https://doi.org/10.1016/j.ese.2025.100562

Funding information

This work was supported by the National Key R&D Plan of China (Grant No. 2021YFC3001400).

About Environmental Science and Ecotechnology

Environmental Science and Ecotechnology (ISSN 2666-4984) is an international, peer-reviewed, and open-access journal published by Elsevier. The journal publishes significant views and research across the full spectrum of ecology and environmental sciences, such as climate change, sustainability, biodiversity conservation, environment & health, green catalysis/processing for pollution control, and AI-driven environmental engineering. The latest impact factor of ESE is 14, according to the Journal Citation Report™ 2024.

 

Expert view: AI meets the conditions for having free will – we need to give it a moral compass



AI is advancing at such speed that speculative moral questions, once the province of science fiction, are suddenly real and pressing, says Finnish philosopher and psychology researcher Frank Martela



Peer-Reviewed Publication

Aalto University

Image: Associate Professor Frank Martela from Aalto University. (Credit: Nita Vera / Aalto University)




Martela’s latest study finds that generative AI meets all three of the philosophical conditions of free will: the ability to have goal-directed agency, to make genuine choices, and to have control over its actions. It will be published in the journal AI and Ethics on Tuesday.

Drawing on the concept of functional free will as explained in the theories of philosophers Daniel Dennett and Christian List, the study examined two generative AI agents powered by large language models (LLMs): the Voyager agent in Minecraft and fictional ‘Spitenik’ killer drones with the cognitive function of today's unmanned aerial vehicles. ‘Both seem to meet all three conditions of free will — for the latest generation of AI agents we need to assume they have free will if we want to understand how they work and be able to predict their behaviour,’ says Martela. He adds that these case studies are broadly applicable to currently available generative agents using LLMs. 

This development brings us to a critical point in human history, as we give AI more power and freedom, potentially in life or death situations. Whether it is a self-help bot, a self-driving car or a killer drone — moral responsibility may move from the AI developer to the AI agent itself. 

‘We are entering new territory. The possession of free will is one of the key conditions for moral responsibility. While it is not a sufficient condition, it is one step closer to AI having moral responsibility for its actions,’ he adds. It follows that issues around how we ‘parent’ our AI technology have become both real and pressing.

‘AI has no moral compass unless it is programmed to have one. But the more freedom you give AI, the more you need to give it a moral compass from the start. Only then will it be able to make the right choices,’ Martela says.

The recent withdrawal of the latest ChatGPT update due to potentially harmful sycophantic tendencies is a red flag that deeper ethical questions must be addressed. We have moved beyond teaching the simplistic morality of a child. 

‘AI is getting closer and closer to being an adult — and it increasingly has to make decisions in the complex moral problems of the adult world. By instructing AI to behave in a certain way, developers are also passing on their own moral convictions to the AI. We need to ensure that the people developing AI have enough knowledge about moral philosophy to be able to teach them to make the right choices in difficult situations,’ says Martela.

Frank Martela is a philosopher and psychology researcher specializing in human psychology, well-being, and meaning in life. An assistant professor at Aalto University, Finland, he has become a thought leader in explaining to international media why Finland tops the happiness rankings. His latest book, Stop Chasing Happiness – a pessimist’s guide to a good life (Atlantic Books, 2025), was released earlier this year.


AI integration in process manufacturing: Progress, challenges, and future outlook



 News Release 

Higher Education Press

Image: The concept of hybrid AI in PM, highlighting the connections between the various components of hybrid AI tools for PSE (left), the four selected PM topics (middle), and examples of problems (right), where * indicates “control and monitoring”. (Credit: Vipul Mann et al.)




A recent perspective article published in Engineering delves into the application of artificial intelligence (AI) in process manufacturing (PM), exploring how AI can be integrated with process systems engineering (PSE) methods and tools to address various challenges in the field.

PM, a crucial activity in chemical, biochemical, and related engineering, involves converting raw materials into products. However, it faces numerous complex problems in areas such as continuous and batch operations, quality control, and safety hazards. AI, with its ability to provide innovative solutions, has gained significant attention. The paper focuses on the concept of hybrid AI, which combines machine learning (ML) methods with first-principles-based methods of symbolic AI, to create more powerful tools for PSE.
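As an illustrative instance of that hybrid pattern (not taken from the paper), one common recipe keeps a first-principles model as the backbone and lets a machine-learning model fit only the residual the physics cannot explain:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def arrhenius_rate(T, A=1e4, Ea=30e3, R=8.314):
    """First-principles part: Arrhenius reaction-rate law (illustrative constants)."""
    return A * np.exp(-Ea / (R * T))

# Synthetic "plant" data: true rate = physics + an unmodeled disturbance + noise.
T = rng.uniform(300.0, 400.0, size=(500, 1))
measured = arrhenius_rate(T[:, 0]) + 0.05 * np.sin(T[:, 0] / 10.0) + rng.normal(0, 0.01, 500)

# ML part: a random forest learns only the residual the physics model misses.
residual_model = RandomForestRegressor(n_estimators=100, random_state=0)
residual_model.fit(T, measured - arrhenius_rate(T[:, 0]))

def hybrid_predict(T_new):
    """Hybrid prediction: first-principles baseline plus learned correction."""
    return arrhenius_rate(T_new[:, 0]) + residual_model.predict(T_new)

print(hybrid_predict(np.array([[350.0]])))
```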

The authors first define four key topics within PM: chemical product design, process synthesis and design, process control and monitoring, and process safety and hazards. They then review the current state of AI applications in these areas. In chemical product design, AI is used in computer-aided molecular or mixture design, with advancements in molecular structure representation and property prediction. For process synthesis and design, hybrid AI approaches are being developed to find optimal processing routes and designs, considering sustainability and other criteria. In process control and monitoring, techniques like neural network modeling and reinforcement learning (RL) are being employed, although challenges such as system safety and stability remain. Regarding process safety and hazards, AI can help in reducing the time and effort of process hazards analysis and identifying potential risks.

Looking ahead, the paper outlines several challenges and opportunities. For chemical product design, better utilization of chemical libraries, more efficient computational algorithms, and improved handling of complexity with hybrid AI are needed. In process synthesis and design, a unified database of process flowsheets, integrating sustainability into flowsheet development, and enhancing the integration of optimization-based methods with hybrid AI are crucial. For process control and monitoring, adapting to changing operational conditions, handling limited feedback signals, incorporating diverse measurement signals, and implementing AI-augmented control algorithms are key areas of focus. In process safety and hazards, creating a database of dangerous chemicals, developing better language models, and integrating hazardous and safety issues more effectively are essential.

While AI has shown promise in PM, there is still much work to be done. Developing AI-augmented PSE tools that can efficiently transfer data to model-based process simulation and optimization techniques is necessary for failure-free decision-making in PM. This research provides valuable insights for engineers and researchers working in the field, guiding future efforts to leverage AI for more sustainable and efficient process manufacturing.

The paper, “A Perspective on Artificial Intelligence for Process Manufacturing,” was authored by Vipul Mann, Jingyi Lu, Venkat Venkatasubramanian, and Rafiqul Gani. Full text of the open access paper: https://doi.org/10.1016/j.eng.2025.01.014. For more information about Engineering, visit the website at https://www.sciencedirect.com/journal/engineering.