Robot performs 1st realistic surgery without human help
System trained on videos of surgeries performs like an expert surgeon
video:
An explanation of the gallbladder procedure performed by the Surgical Robot Transformer-Hierarchy.
Credit: Johns Hopkins University
A robot trained on videos of surgeries performed a lengthy phase of a gallbladder removal without human help. The robot operated for the first time on a lifelike patient, and during the operation, responded to and learned from voice commands from the team—like a novice surgeon working with a mentor.
The robot performed unflappably across trials, with the expertise of a skilled human surgeon, even during unexpected scenarios typical of real-life medical emergencies.
The federally funded work, led by Johns Hopkins University researchers, is a transformative advance in surgical robotics: a robot that can perform with both mechanical precision and human-like adaptability and understanding.
“This advancement moves us from robots that can execute specific surgical tasks to robots that truly understand surgical procedures,” said medical roboticist Axel Krieger. “This is a critical distinction that brings us significantly closer to clinically viable autonomous surgical systems that can work in the messy, unpredictable reality of actual patient care.”
The findings are published today in Science Robotics.
In 2022, Krieger’s Smart Tissue Autonomous Robot, STAR, performed the first autonomous robotic surgery on a live animal – a laparoscopic surgery on a pig. But that robot required specially marked tissue, operated in a highly controlled environment, and followed a rigid, predetermined surgical plan. Krieger said it was like teaching a robot to drive along a carefully mapped route.
But his new system, he says, “is like teaching a robot to navigate any road, in any condition, responding intelligently to whatever it encounters.”
Surgical Robot Transformer-Hierarchy, or SRT-H, truly performs surgery, adapting to individual anatomical features in real time, making decisions on the fly, and self-correcting when things don't go as expected.
Built with the same machine learning architecture that powers ChatGPT, SRT-H is also interactive, able to respond to spoken commands (“grab the gallbladder head”) and corrections (“move the left arm a bit to the left”). The robot learns from this feedback.
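Based only on the description above, the control flow might look something like the following minimal Python sketch. All names here are hypothetical illustrations, not the authors' actual code: a high-level policy chooses the next language instruction for the current surgical step (and lets a spoken correction override it), while a low-level visuomotor policy turns the current instruction plus the camera image into an arm motion command.

from dataclasses import dataclass
from typing import Iterable, List, Optional

@dataclass
class Observation:
    image: bytes                      # current endoscope frame
    correction: Optional[str] = None  # e.g. "move the left arm a bit to the left"

class HighLevelPolicy:
    """Planner: picks the next language instruction for the current step."""
    def __init__(self, task_steps: List[str]):
        self.task_steps = task_steps
        self.step = 0

    def next_instruction(self, obs: Observation) -> str:
        # A spoken correction from the surgeon takes priority over the plan.
        return obs.correction or self.task_steps[self.step]

    def advance(self) -> None:
        self.step = min(self.step + 1, len(self.task_steps) - 1)

class LowLevelPolicy:
    """Visuomotor policy: maps (image, instruction) to an arm motion command."""
    def act(self, obs: Observation, instruction: str) -> dict:
        # A trained model would run here; this placeholder returns a fixed command.
        return {"arm": "left", "delta_xyz": (0.0, 0.0, -0.001), "gripper": "close"}

def control_loop(high: HighLevelPolicy, low: LowLevelPolicy,
                 stream: Iterable[Observation]):
    for obs in stream:
        instruction = high.next_instruction(obs)  # e.g. "clip the cystic duct"
        yield instruction, low.act(obs, instruction)

In the actual system both levels are transformer networks trained on captioned surgery videos; the sketch only shows how language sits between the two levels of the hierarchy.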
“This work represents a major leap from prior efforts because it tackles some of the fundamental barriers to deploying autonomous surgical robots in the real world,” said lead author Ji Woong "Brian" Kim, a former postdoctoral researcher at Johns Hopkins who’s now with Stanford University. “Our work shows that AI models can be made reliable enough for surgical autonomy—something that once felt far-off but is now demonstrably viable.”
Last year Krieger’s team used the system to train a robot to perform three foundational surgical tasks: manipulating a needle, lifting body tissue, and suturing. Those tasks took just a few seconds each.
The gallbladder removal procedure is much more complex, a minutes-long string of 17 tasks. The robot had to identify certain ducts and arteries and grab them precisely, strategically place clips, and sever parts with scissors.
SRT-H learned how to do the gallbladder work by watching videos of Johns Hopkins surgeons doing it on pig cadavers. The team reinforced the visual training with captions describing the tasks. After watching the videos, the robot performed the surgery with 100% accuracy.
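The training signal implied here is behavior cloning conditioned on language: each video frame is paired with a caption naming the task and with the expert's recorded motion, and the policy learns to reproduce that motion. Below is a hedged PyTorch sketch of that idea, with made-up dimensions and placeholder data rather than the authors' actual model.

import torch
import torch.nn as nn

class LanguageConditionedPolicy(nn.Module):
    """Maps (image embedding, caption embedding) to a robot action."""
    def __init__(self, img_dim: int = 512, txt_dim: int = 256, act_dim: int = 7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + txt_dim, 512),
            nn.ReLU(),
            nn.Linear(512, act_dim),   # e.g. 6-DoF pose delta + gripper state
        )

    def forward(self, img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([img_feat, txt_feat], dim=-1))

policy = LanguageConditionedPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

# Stand-in batch: in practice these would be embeddings of video frames and
# of captions like "clip the cystic duct", plus the surgeon's recorded action.
img_feat = torch.randn(32, 512)
txt_feat = torch.randn(32, 256)
expert_action = torch.randn(32, 7)

for _ in range(100):                  # behavior cloning: imitate the expert
    pred = policy(img_feat, txt_feat)
    loss = nn.functional.mse_loss(pred, expert_action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()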
Although the robot took longer to perform the work than a human surgeon, the results were comparable to those of an expert surgeon.
“Just as surgical residents often master different parts of an operation at different rates, this work illustrates the promise of developing autonomous robotic systems in a similarly modular and progressive manner,” said Johns Hopkins surgeon Jeff Jopling, a co-author.
The robot performed flawlessly across varied anatomical conditions and during unexpected detours—such as when the researchers changed its starting position, and when they added blood-like dyes that changed the appearance of the gallbladder and surrounding tissues.
“To me it really shows that it’s possible to perform complex surgical procedures autonomously,” Krieger said. “This is a proof of concept that it’s possible, and that this imitation learning framework can automate such a complex procedure with such a high degree of robustness.”
Next the team would like to train and test the system on more types of surgeries and expand its capabilities to perform a complete autonomous surgery.
Authors include Johns Hopkins PhD student Juo-Tung Chen; Johns Hopkins visiting graduate student Pascal Hansen; Stanford University PhD student Lucy X. Shi; Johns Hopkins undergraduate Antony Goldenberg; Johns Hopkins PhD student Samuel Schmidgall; former Johns Hopkins postdoctoral fellow Paul Maria Scheikl; Johns Hopkins research engineer Anton Deguet; surgical fellow Brandon M. White; Stanford University assistant professor Chelsea Finn; and De Ru Tsai and Richard Cha of Optosurgical.
Robot performs gallbladder procedure: Example 1 [VIDEO]
Robot performs gallbladder procedure: Example 2 [VIDEO]
The Surgical Robot Transformer-Hierarchy performing a gallbladder surgery
Credit
Juo-Tung Chen/Johns Hopkins University
The robot used with the Surgical Robot Transformer-Hierarchy to perform gallbladder surgery.
Credit
XinHao Chen/Johns Hopkins University
Journal
Science Robotics
Article Title
SRT-H: A Hierarchical Framework for Autonomous Surgery via Language-Conditioned Imitation Learning
Article Publication Date
9-Jul-2025
The robot will see you now
Humanoid robots could be a game changer in the operating room
image:
Surgie, a humanoid medical robot, is about to give an ultrasound to a patient.
Credit: David Baillot/University of California San Diego
As waiting rooms fill up, doctors grow increasingly burnt out, and surgeries take longer to schedule and are cancelled more often, humanoid surgical robots offer a solution. That’s the argument UC San Diego robotics expert Michael Yip makes in a perspective piece out July 9 in Science Robotics.
Why? Today’s surgical robots are costly pieces of equipment designed for specialized tasks, and they can be operated only by highly trained physicians. This model doesn’t scale. Despite dramatic improvements in artificial intelligence and autonomy for industrial and humanoid robots over the past year, those gains haven’t translated to surgical robots. Collecting the data needed to train a truly capable surgical AI would be too labor-intensive and cost-prohibitive on existing platforms and with current practitioners, and building datasets from medical procedures also raises privacy issues.
But what if all the training data used by industrial humanoid robots could be made useful for training robots to perform medical procedures? That would be a game changer, writes Yip, a professor in the UC San Diego Department of Electrical and Computer Engineering. The simplest way is to give our surgical robots arms and multi-fingered hands, following the trend seen in industrial robots. Not only would this produce a new class of robots in the operating room; it would also allow those robots to take full advantage of the AI foundation models that have been rapidly teaching industrial robots new skills and enabling them to assist in a wide variety of tasks.
For example, a humanoid robot could hold an ultrasound probe or an endoscopic camera for the surgeon during a procedure, or serve as a scrub nurse, handing off instruments while maintaining a sterile field. These kinds of assistance are critical, and today they are provided by other surgeons or nurses, which takes those people away from other patients and can be physically draining. Robots that take over such low-risk, common, time-consuming, and physically demanding tasks would be hugely beneficial. But no one can justify purchasing a special-purpose robot for every new task; here too, a general-purpose humanoid form factor makes the most sense in the long run, writes Yip.
Ultimately, as humanoid robots in industry build stronger AI foundation models and become more skilled at many tasks, so too will humanoid robots in the operating room. Though not anytime soon, the promise is that one day these robots will be a critical technology for addressing the skilled health care labor shortages facing patients, doctors, and nurses around the world, Yip writes.
Yip lays out his argument in more detail in an upcoming paper, currently in preprint: “Humanoids in Hospitals: A Technical Study of Humanoid Surrogates for Dexterous Medical Interventions.”
The robot will see you now: Foundation models are the path forward for autonomous robotic surgery
Michael Yip, Department of Electrical and Computer Engineering, University of California San Diego
Journal
Science Robotics
Method of Research
Survey
Subject of Research
Not applicable
Article Title
The Robot Will See You Now: Foundation Models are the Path Forward for Autonomous Robotic Surgery
Article Publication Date
9-Jul-2025
A new eel-inspired swimming robot made of soft electroactive polymer can minimize harm to wildlife and fragile structures while exploring underwater environments
image:
An underwater robot developed by NTNU. The study first establishes a cylindrical dynamic model of dielectric elastomers (DEs) to analyze how they deform under electric fields and radial forces. Taking the eel as its biological template, the team then designed tubular actuators and tubular segment drive modules that are easier to manufacture. These modules are connected in series, and a head, a tail, and a tail fin are attached, yielding an eel-like robot based on dielectric elastomers.
Credit: Chenghong Zhang, 2025, PLOS One, CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
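For context, dielectric elastomer actuators are usually analyzed through the Maxwell-stress effective pressure: a voltage across the soft polymer film squeezes it in thickness and expands it in area. A minimal sketch of that textbook relation (the paper's cylindrical dynamic model builds on this but is more detailed):

p = \varepsilon_0 \varepsilon_r E^2 = \varepsilon_0 \varepsilon_r \left(\frac{V}{t}\right)^2

where \varepsilon_0 is the vacuum permittivity, \varepsilon_r the elastomer's relative permittivity, V the applied voltage, and t the film thickness. Driving the series-connected segments out of phase is what would produce an eel-like undulation along the body.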
Article URL: http://plos.io/4enskqV
Article title: An eel-like robot based on a dielectric elastomer
Author countries: China
Funding: This work was supported by the Joint Open Fund of Guizhou Provincial Department of Education (Grant no. [2022] 439) and the academic new seedling cultivation and free exploration and innovation of Guizhou Provincial Science and Technology Department (Grant no. [2023] 11). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Journal
PLOS One
Article Title
An eel-like robot based on a dielectric elastomer
Article Publication Date
9-Jul-2025
Eel imitation robot.
Credit
Chenghong Zhang, 2025, PLOS One, CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)