Saturday, September 09, 2023

ROBOTICS

Autonomous robot for subsea oil and gas pipeline inspection being developed at University of Houston


Technology makes process safer, more cost-effective

Grant and Award Announcement

UNIVERSITY OF HOUSTON

IMAGE: SmartTouch technology rendering

CREDIT: University of Houston




With an increasing number of severe accidents in the global oil and gas industry caused by damaged pipelines, University of Houston researchers are developing an autonomous robot to identify potential pipeline leaks and structural failures during subsea inspections. The transformative technology will make the inspection process far safer and more cost-effective, while also protecting subsea environments from disaster.

Thousands of oil spills occur in U.S. waters each year for a variety of reasons. While most are small, spilled crude oil can still cause damage to sensitive areas such as beaches, mangroves and wetlands. When larger spills happen, pipelines are often the culprit. From 1964 through 2015, a total of 514 offshore pipeline–related oil spills were recorded, 20 of which incurred spill volumes of more than 1,000 barrels, according to the Bureau of Ocean Energy Management.

The timely inspection of subsea infrastructure, especially pipelines and offshore wells, is the key to preventing such disasters. However, current inspection techniques often require a well-trained human diver and substantial time and money. The challenges are exacerbated if the inspection target is deep underwater.

The SmartTouch technology now in development at UH consists of remotely operated vehicles (ROVs) equipped with multiple stress wave-based smart touch sensors, video cameras and scanning sonars that can swim along a subsea pipeline to inspect flange bolts. Failures of bolted connections have driven up the rate of pipeline accidents that result in leakage, according to the Bureau of Safety and Environmental Enforcement (BSEE).

The BSEE is funding the project with a $960,493 grant to UH researchers Zheng Chen, Bill D. Cook Assistant Professor of Mechanical Engineering, and Gangbing Song, John and Rebecca Moores Professor of Mechanical Engineering, who are working in collaboration with Oceaneering International and Chevron.

“By automating the inspection process with this state-of-the-art robotic technology, we can dramatically reduce the cost and risk of these important subsea inspections, which will lead to safer operations of offshore oil and gas pipelines as less intervention from human divers will be needed,” said Chen, noting that a prototype of the ROV has been tested in his lab and in Galveston Bay. The experiments demonstrated the feasibility of the proposed approach for inspecting the looseness of subsea bolted connections. Preliminary studies were funded by UH’s Subsea Systems Institute.

Oil and gas pipelines fail for a variety of reasons, including equipment malfunctions, corrosion, weather and other natural causes, and vessel-related accidents, which account for most large leaks. Toxic and corrosive fluids leaked from a damaged pipe can lead to devastating environmental pollution.

“Corrosion is responsible for most small leaks, but the impacts can still be devastating to the environment. Therefore, our technology will be highly accurate in monitoring corrosion and will also help mitigate the chances of pipeline failure from other factors,” said co-principal investigator Gangbing Song, who has conducted significant research in piezoelectric-based structural health monitoring. His prior research spans numerous damage detection applications, such as crack detection, hydration monitoring and the identification of debonding and other structural anomalies.
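The release does not spell out the sensing algorithm, but piezoelectric active sensing of bolt looseness is commonly framed as an energy-attenuation measurement: one transducer launches a stress wave across the joint, a second receives it, and the received energy drops as the connection loosens. The Python sketch below is a toy illustration of that idea only; the tone-burst parameters, the attenuation factor and the looseness_index helper are assumptions made for illustration, not the SmartTouch implementation.

```python
import numpy as np

def received_energy(signal: np.ndarray) -> float:
    """Energy of the stress wave picked up by the receiving transducer."""
    return float(np.sum(signal ** 2))

def looseness_index(received: np.ndarray, baseline: np.ndarray) -> float:
    """Relative drop in transmitted energy versus a fully tightened baseline.

    0.0 -> joint behaves like the tightened baseline;
    values near 1.0 -> progressively looser connection.
    """
    return max(0.0, 1.0 - received_energy(received) / received_energy(baseline))

# Toy demonstration with synthetic signals: a loose joint transmits the
# excitation less efficiently, so the received amplitude shrinks.
# The 50 kHz tone burst and 0.40 attenuation factor are invented values.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1e-3, 2000)             # 1 ms record
excitation = np.sin(2 * np.pi * 50e3 * t)    # 50 kHz tone burst (illustrative)
baseline = excitation + 0.01 * rng.standard_normal(t.size)
loosened = 0.40 * excitation + 0.01 * rng.standard_normal(t.size)

print(f"looseness index: {looseness_index(loosened, baseline):.2f}")
# Prints roughly 0.84: the amplitude ratio squared (0.40**2 = 0.16)
# leaves about 84% of the baseline energy unaccounted for.
```

In practice, a measured baseline from a fully tightened joint would be stored per bolt, and an index above a calibrated threshold would flag that flange for closer inspection.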

The UH researchers are collaborating with Oceaneering International, an industry leader in ROV development, non-destructive testing and inspections, engineering and project management, and surveying and mapping services. Additionally, Chevron, a major oil and gas operator, will evaluate the technology for future commercialization.

The SmartTouch sensing solution will open the doors for inspection of other kinds of subsea structures, according to the researchers, by forming a design template for future robotic technologies.

“Ultimately, the project will push the boundaries of what can be accomplished by integrating robotics and structural health monitoring technologies. With proper implementation, the rate of subsea pipeline failure and related accidents will decrease, and subsea operations will be free to expand at a faster rate than before,” added Chen.

‘Brainless’ robot can navigate complex obstacles

Peer-Reviewed Publication

NORTH CAROLINA STATE UNIVERSITY

IMAGE: Asymmetrical soft robots

CREDIT: Jie Yin, NC State University




Researchers who created a soft robot that could navigate simple mazes without human or computer direction have now built on that work, creating a “brainless” soft robot that can navigate more complex and dynamic environments.

“In our earlier work, we demonstrated that our soft robot was able to twist and turn its way through a very simple obstacle course,” says Jie Yin, co-corresponding author of a paper on the work and an associate professor of mechanical and aerospace engineering at North Carolina State University. “However, it was unable to turn unless it encountered an obstacle. In practical terms this meant that the robot could sometimes get stuck, bouncing back and forth between parallel obstacles.

“We’ve developed a new soft robot that is capable of turning on its own, allowing it to make its way through twisty mazes, even negotiating its way around moving obstacles. And it’s all done using physical intelligence, rather than being guided by a computer.”

Physical intelligence refers to dynamic objects – like soft robots – whose behavior is governed by their structural design and the materials they are made of, rather than being directed by a computer or human intervention.

As with the earlier version, the new soft robots are made of ribbon-like liquid crystal elastomers. When the robots are placed on a surface that is at least 55 degrees Celsius (131 degrees Fahrenheit), which is hotter than the ambient air, the portion of the ribbon touching the surface contracts, while the portion of the ribbon exposed to the air does not. This induces a rolling motion; the warmer the surface, the faster the robot rolls.

However, while the previous version of the soft robot had a symmetrical design, the new robot has two distinct halves. One half of the robot is shaped like a twisted ribbon that extends in a straight line, while the other half is shaped like a more tightly twisted ribbon that also twists around itself like a spiral staircase.

This asymmetrical design means that one end of the robot exerts more force on the ground than the other end. Think of a plastic cup that has a mouth wider than its base. If you roll it across the table, it doesn’t roll in a straight line – it makes an arc as it travels across the table. That’s due to its asymmetrical shape.

“The concept behind our new robot is fairly simple: because of its asymmetrical design, it turns without having to come into contact with an object,” says Yao Zhao, first author of the paper and a postdoctoral researcher at NC State. “So, while it still changes directions when it does come into contact with an object – allowing it to navigate mazes – it cannot get stuck between parallel objects. Instead, its ability to move in arcs allows it to essentially wiggle its way free.”
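The release describes the turning mechanism only qualitatively, but the arc it produces can be pictured with the standard differential-drive analogy: when a body's two ends advance at different effective speeds, the mismatch sets a constant turning rate and the body follows a circle. The Python sketch below is a toy kinematic model under that assumption; the geometry and speed values are invented for illustration and are not measurements from the paper.

```python
import numpy as np

# Toy kinematics for the arcing motion described above, borrowing the
# standard differential-drive model: if the robot's two ends advance at
# different effective speeds (because one end presses on the surface
# harder than the other), the speed mismatch sets a constant turning
# rate and the body traces an arc. All values below are assumptions.

L = 0.05         # distance between the two ribbon ends, m (assumed)
v_tight = 0.020  # effective speed of the tightly twisted end, m/s (assumed)
v_loose = 0.030  # effective speed of the loosely twisted end, m/s (assumed)

v = (v_tight + v_loose) / 2.0    # forward speed of the body's midpoint
omega = (v_loose - v_tight) / L  # turning rate from the speed mismatch

x, y, heading = 0.0, 0.0, 0.0
dt = 0.05
for _ in range(400):  # integrate 20 s of rolling
    x += v * np.cos(heading) * dt
    y += v * np.sin(heading) * dt
    heading += omega * dt

print(f"turn radius ~ {v / omega:.3f} m")                     # 0.125 m
print(f"heading change ~ {np.degrees(heading):.0f} degrees")  # ~229 degrees
print(f"final position ({x:.3f}, {y:.3f}) m")
```

A symmetric body corresponds to omega = 0 in this picture, which is exactly why the earlier design could only travel straight until it bumped into something.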

The researchers demonstrated the ability of the asymmetrical soft robot design to navigate more complex mazes – including mazes with moving walls – and fit through spaces narrower than its body size. The researchers tested the new robot design on both a metal surface and in sand. A video of the asymmetrical robot in action can be found at https://youtu.be/aYpSwuij2DI?si=tNEtvt60_uKkdEsw.

“This work is another step forward in helping us develop innovative approaches to soft robot design – particularly for applications where soft robots would be able to harvest heat energy from their environment,” Yin says.

The paper, “Physically Intelligent Autonomous Soft Robotic Maze Escaper,” was published Sept. 8 in the journal Science Advances. Hao Su, an associate professor of mechanical and aerospace engineering at NC State, is co-corresponding author. Additional co-authors include Yaoye Hong, a recent Ph.D. graduate of NC State; Yanbin Li, a postdoctoral researcher at NC State; and Fangjie Qi and Haitao Qing, both Ph.D. students at NC State.

The work was done with support from the National Science Foundation under grants 2005374, 2126072, 1944655 and 2026622.


Team’s new AI technology gives robot recognition skills a big lift


UT Dallas researchers demonstrate new technique to train robots to recognize objects

Reports and Proceedings

UNIVERSITY OF TEXAS AT DALLAS

IMAGE: From left: computer science doctoral students Sai Haneesh Allu and Ninad Khargonkar with Dr. Yu Xiang, assistant professor of computer science, shown with Ramp, a robot they are training to recognize and manipulate common objects.

CREDIT: University of Texas at Dallas





A robot moves a toy package of butter around a table in the Intelligent Robotics and Vision Lab at The University of Texas at Dallas. With every push, the robot is learning to recognize the object through a new system developed by a team of UT Dallas computer scientists.

The new system allows the robot to push objects multiple times, collecting a sequence of images that the system uses to segment all of the objects in the scene until the robot recognizes them. Previous approaches have relied on a single push or grasp by the robot to “learn” an object.

The team presented its research paper at the Robotics: Science and Systems conference July 10-14 in Daegu, South Korea. Papers for the conference are selected for their novelty, technical quality, significance, potential impact and clarity.

The day when robots can cook dinner, clear the kitchen table and empty the dishwasher is still a long way off. But the research group has made a significant advance with its robotic system that uses artificial intelligence to help robots better identify and remember objects, said Dr. Yu Xiang, senior author of the paper.

“If you ask a robot to pick up the mug or bring you a bottle of water, the robot needs to recognize those objects,” said Xiang, assistant professor of computer science in the Erik Jonsson School of Engineering and Computer Science.

The UTD researchers’ technology is designed to help robots detect a wide variety of objects found in environments such as homes and to generalize, or identify, similar versions of common items such as water bottles that come in varied brands, shapes or sizes.

Inside Xiang’s lab is a storage bin full of toy packages of common foods, such as spaghetti, ketchup and carrots, which are used to train the lab robot, named Ramp. Ramp is a Fetch Robotics mobile manipulator robot that stands about 4 feet tall on a round mobile platform. Ramp has a long mechanical arm with seven joints. At the end is a square “hand” with two fingers to grasp objects.

Xiang said robots learn to recognize items much as children learn to interact with toys.

“After pushing the object, the robot learns to recognize it,” Xiang said. “With that data, we train the AI model so the next time the robot sees the object, it does not need to push it again. By the second time it sees the object, it will just pick it up.”

What is new about the researchers’ method is that the robot pushes each item 15 to 20 times, whereas previous interactive perception methods use only a single push. Xiang said multiple pushes enable the robot to take more photos with its RGB-D camera, which includes a depth sensor, to learn about each item in more detail. This reduces the potential for mistakes.
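The workflow described above amounts to a collect-then-train loop, sketched below in Python. Every class and function here (Robot, Camera, segment_frame, train_recognizer) is a hypothetical placeholder that keeps the sketch self-contained; none of it is the UT Dallas team's actual API.

```python
# A minimal sketch of the multi-push loop described above.
# All classes and functions are hypothetical stand-ins for illustration.

NUM_PUSHES = 15  # the release says each item is pushed 15 to 20 times

class Robot:
    def push_object(self, target):
        pass  # nudge the object into a new pose

class Camera:
    def capture_rgbd(self):
        return "rgb", "depth"  # placeholder RGB-D frame

def segment_frame(rgb, depth):
    return "mask"  # placeholder per-frame segmentation

def train_recognizer(frames, masks):
    return "model"  # placeholder model fit on the accumulated views

def collect_and_learn(robot, camera, target):
    """Push, image and segment an object repeatedly, then train on the views."""
    frames = []
    for _ in range(NUM_PUSHES):
        robot.push_object(target)             # move the object to a new pose
        frames.append(camera.capture_rgbd())  # image the scene after each push

    # Segmenting the same object across many poses reduces the mistakes
    # a single-push method can make.
    masks = [segment_frame(rgb, depth) for rgb, depth in frames]

    # Train on the accumulated views so that, next time, the robot can
    # recognize and pick up the object without pushing it again.
    return train_recognizer(frames, masks)

model = collect_and_learn(Robot(), Camera(), target="toy butter package")
```

The payoff Xiang describes corresponds to the trained model replacing the pushing loop on every subsequent encounter with the object.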

The task of recognizing, differentiating and remembering objects, called segmentation, is one of the primary functions needed for robots to complete tasks.

“To the best of our knowledge, this is the first system that leverages long-term robot interaction for object segmentation,” Xiang said.

Ninad Khargonkar, a computer science doctoral student, said working on the project has helped him improve the algorithm that helps the robot make decisions.

“It’s one thing to develop an algorithm and test it on an abstract data set; it’s another thing to test it out on real-world tasks,” Khargonkar said. “Seeing that real-world performance — that was a key learning experience.”

The next step for the researchers is to improve other functions, including planning and control, which could enable tasks such as sorting recycled materials.

Other UTD authors of the paper included computer science graduate student Yangxiao Lu; computer science seniors Zesheng Xu and Charles Averill; Kamalesh Palanisamy MS’23; Dr. Yunhui Guo, assistant professor of computer science; and Dr. Nicholas Ruozzi, associate professor of computer science. Dr. Kaiyu Hang from Rice University also participated.

The research was supported in part by the Defense Advanced Research Projects Agency as part of its Perceptually-enabled Task Guidance program, which develops AI technologies to help users perform complex physical tasks by providing task guidance with augmented reality to expand their skill sets and reduce errors.
