Drexel researchers propose AI-guided system for robotic inspection of buildings, roads and bridges
Our built environment is aging and failing faster than we can maintain it. Recent building collapses and structural failures of roads and bridges are indicators of a problem that’s likely to get worse, according to experts, because it’s just not possible to inspect every crack, creak and crumble to parse dangerous signs of failure from normal wear and tear. In hopes of playing catch-up, researchers in Drexel University’s College of Engineering are trying to give robotic assistants the tools to help inspectors with the job.
Augmenting visual inspection technologies, which in recent years have offered partial solutions for speeding damage assessment, with a new machine-learning approach, the researchers have created a system that they believe could enable autonomous robots to efficiently identify and inspect problem areas. Reported in the journal Automation in Construction, their multi-scale system combines computer vision with a deep-learning algorithm to pinpoint areas of cracking before directing a series of laser scans of those regions to create a “digital twin” computer model that can be used to assess and monitor the damage.
The system represents a strategy that would significantly reduce the overall inspection workload and enable the focused consideration and care needed to prevent structural failures.
“Cracks can be regarded as a patient’s medical symptoms that should be screened in the early stages,” the authors, Arvin Ebrahimkhanlou, PhD, an assistant professor, and Ali Ghadimzadeh Alamdari, a research assistant, both in Drexel’s College of Engineering, wrote. “Consequently, early and accurate detection and measurement of cracks are essential for timely diagnosis, maintenance, and repair efforts, preventing further deterioration and mitigating potential hazards.”
But right now, they note, so many of the nation’s buildings, bridges, tunnels and dams are among the walking wounded that the first priority should be setting up a triage system. Before the Bipartisan Infrastructure Law, the American Society of Civil Engineers estimated a backlog of $786 billion in repairs to roads and bridges. Adding to the challenge is a growing shortage of skilled infrastructure workers — including inspectors and those who would repair aging structures.
“Civil infrastructures include large-scale structures and bridges, but their defects are often small in scale,” Ebrahimkhanlou said. “We believe taking a multi-scale robotic approach will enable efficient pre-screening of problem areas via computer vision and precise robotic scanning of defects using nondestructive, laser-based scans.”
Instead of a physical measurement interpreted subjectively by human eyes, the system feeds a high-resolution stereo-depth camera view of the structure into a deep-learning program called a convolutional neural network. These programs, already used for facial recognition, drug development and deepfake detection, are gaining attention for their ability to spot the finest patterns and discrepancies in massive volumes of data.
Training the algorithms on datasets of concrete structure images turns them into crack-spotters.
“The neural network has been trained on a dataset of sample cracks, and it can identify crack-like patterns in the images that the robotic system collects from the surface of a concrete structure. We call regions containing such patterns ‘regions of interest,’” said Ebrahimkhanlou, who leads research on robotic and artificial-intelligence-based assessment of infrastructure, mechanical and aerospace structures in Drexel’s Department of Civil, Architectural, and Environmental Engineering.
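For readers curious what that pre-screening step might look like in code, the following is a minimal sketch of a convolutional patch classifier of the kind described, written in PyTorch. The architecture, patch size and decision threshold are illustrative assumptions, not the network from the Drexel paper.

```python
# A minimal, untrained sketch of a convolutional crack classifier; the
# architecture and 64x64 patch size are illustrative assumptions, not the
# network described in the paper, which is trained on labeled images of
# concrete surfaces.
import torch
import torch.nn as nn

class CrackPatchClassifier(nn.Module):
    """Classifies a small image patch as crack / no-crack."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, 2)  # crack vs. background

    def forward(self, x):                        # x: (N, 3, 64, 64) patches
        return self.head(self.features(x).flatten(1))

# Sliding 64x64 patches over a surface image yields a coarse map of
# "regions of interest" to hand off to the fine laser scan.
model = CrackPatchClassifier()
patches = torch.rand(8, 3, 64, 64)               # stand-in for camera patches
scores = model(patches).softmax(dim=1)[:, 1]     # probability of "crack"
rois = (scores > 0.5).nonzero(as_tuple=True)[0]  # indices of flagged patches
print(rois)
```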
Once a “region of interest,” the cracked or damaged area, is identified, the program directs a robotic arm to scan over it with a laser line scanner, creating a three-dimensional image of the damaged area. At the same time, a LiDAR (Light Detection and Ranging) camera scans the structure surrounding the crack. Stitching the two scans together creates a digital model of the area that captures the width and dimensions of the crack and allows inspectors to track changes between inspections.
“Tracking crack growth is one of the advantages of producing a digital twin model,” Alamdari said. “In addition, it allows bridge owners to have a better understanding of the condition of their bridge, and plan maintenance and repair.”
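As a rough illustration of the stitching step described above, the sketch below aligns a synthetic fine scan to a synthetic surrounding scan with ICP registration using the open-source Open3D library; the random point clouds and alignment parameters are stand-ins, not the processing pipeline from the paper.

```python
# A minimal sketch of merging a fine laser-line scan into a coarser LiDAR
# scan via ICP registration with Open3D; the random point clouds stand in
# for real scanner output, and the distance threshold is a placeholder.
import numpy as np
import open3d as o3d

def to_cloud(points):
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    return pcd

# Stand-ins for the fine crack scan and the surrounding LiDAR scan.
fine = to_cloud(np.random.rand(500, 3) * 0.05)   # ~5 cm crack patch
coarse = to_cloud(np.random.rand(5000, 3))       # ~1 m of surrounding surface

# Align the fine scan to the coarse scan; ICP refines an initial guess,
# here the identity, assuming both scans share the robot's frame.
result = o3d.pipelines.registration.registration_icp(
    fine, coarse, max_correspondence_distance=0.02,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

fine.transform(result.transformation)
digital_twin = coarse + fine  # merged model of the crack and its surroundings
print(digital_twin)
```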
The team tested the system in the lab on a concrete slab with a variety of cracks and deterioration. In a test of its ability to detect and measure small cracks, the system was sensitive enough to pinpoint and accurately size up the smallest of fissures — less than a hundredth of a millimeter wide — outperforming top-of-the-line cameras, scanners and fiber optic sensors by a respectable margin.
While human inspectors would still make the final call on when and how to repair the damage, the robotic assistants could greatly reduce their workload, according to the researchers. In addition, an automated inspection process would reduce the oversights and subjective judgment errors that can happen when overworked human inspectors take the first look.
“This approach significantly reduces unnecessary data collection from areas that are in good structural condition while still providing comprehensive and reliable data necessary for condition assessment,” they wrote.
The researchers envision incorporating the multi-scale monitoring system as part of a larger autonomous monitoring framework including drones and other autonomous vehicles — like the one proposed by the Federal Highway Administration’s Nondestructive Evaluation Laboratory, which would use an array of tools and sensing technologies to autonomously monitor and repair infrastructure.
“Moving forward, we aim to integrate this work with an unmanned ground vehicle, enhancing the system's ability to autonomously detect, analyze, and monitor cracks,” Alamdari said. “The goal is to create a more comprehensive, intelligent and efficient system for maintaining structural integrity across various types of infrastructure. Additionally, real-world testing and collaboration with industry and regulatory bodies will be critical for practical application and continuous improvement of the technology.”
Robotic concrete crack scanning [VIDEO]
JOURNAL
Automation in Construction
METHOD OF RESEARCH
Experimental study
ARTICLE TITLE
A multi-scale robotic approach for precise crack measurement in concrete structures
Bringing together real-world sensors and VR to improve building maintenance
A new system that brings together real-world sensing and virtual reality would make it easier for building maintenance personnel to identify and fix issues in commercial buildings that are in operation. The system was developed by computer scientists at the University of California San Diego and Carnegie Mellon University.
The system, dubbed BRICK, consists of a handheld device equipped with a suite of sensors that monitor temperature, CO2 and airflow. It also includes a virtual reality environment that has access to the sensor data and metadata for a specific building and connects to the building’s electronic control system.
When an issue is reported in a specific location, a building manager can go on-site with the device and quickly scan the space with the LiDAR tool on their smartphone, creating a virtual reality version of the space. The scanning can also occur ahead of time. Once they open this mixed reality recreation of the space on a smartphone or laptop, building managers can see the locations of sensors, along with the data gathered from the handheld device, overlaid onto that mixed reality environment.
The goal is to allow building managers to quickly identify issues by inspecting hardware and gathering and logging relevant data.
“Modern buildings are complex arrangements of multiple systems from climate control, lighting and security to occupant management. BRICK enables their efficient operation, much like a modern computer system,” said Rajesh K. Gupta, one of the paper’s senior authors, director of the UC San Diego Halicioglu Data Science Institute and a professor in the UC San Diego Department of Computer Science and Engineering.
Currently, when building managers receive reports of a problem, they first have to consult the building management database for that specific location. But the database doesn’t tell them exactly where the sensors and hardware are located in that space. So managers have to go to the location, gather more data with cumbersome sensors, then compare that data against the information in the building management system and try to deduce what the issue is. It’s also difficult to log the data gathered at various spatial locations in a precise way.
By contrast, with BRICK, the building manager can go directly to the location equipped with a handheld device and a laptop or smartphone. On site, they immediately have access to all the building management system data, the locations of the sensors and the data from the handheld device, all overlaid in one mixed reality environment. Using this system, operators can also detect faults in the building equipment, from stuck air-control valves to poorly operating air-handling systems.
In the future, researchers hope to find CO2, temperature and airflow sensors that can directly connect to a smartphone, to enable occupants to take part in managing local environments as well as to simplify building operations.
A team at Carnegie Mellon built the handheld device. Xiaohan Fu, a computer science Ph.D. student in Gupta’s research group, built the backend and VR components, which extend the team’s earlier work on the BRICK metadata schema, now adopted by many commercial vendors.
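To give a flavor of what the Brick metadata schema enables, the sketch below builds a toy Brick metadata graph and queries it for a room’s sensors using the open-source brickschema package; the building, room and sensor names are hypothetical, not taken from the researchers’ deployment.

```python
# A minimal, illustrative sketch of the kind of query the Brick metadata
# schema enables, using the open-source `brickschema` package; the example
# building namespace and point names below are hypothetical.
import brickschema
from rdflib import Namespace, RDF

BRICK = Namespace("https://brickschema.org/schema/Brick#")
BLDG = Namespace("urn:example-building#")  # hypothetical building model

g = brickschema.Graph()
g.bind("brick", BRICK)
g.bind("bldg", BLDG)

# Describe two points in room 101. In practice these triples would come
# from the building's Brick model file, not be hand-written like this.
g.add((BLDG.temp_101, RDF.type, BRICK.Temperature_Sensor))
g.add((BLDG.temp_101, BRICK.isPointOf, BLDG.room_101))
g.add((BLDG.co2_101, RDF.type, BRICK.CO2_Level_Sensor))
g.add((BLDG.co2_101, BRICK.isPointOf, BLDG.room_101))

# Ask which sensors a technician should look for in room 101.
rows = g.query("""
    SELECT ?sensor ?type WHERE {
        ?sensor brick:isPointOf bldg:room_101 .
        ?sensor a ?type .
    }""")
for sensor, sensor_type in rows:
    print(sensor, sensor_type)
```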
Ensuring that the location used in the VR environment was accurate was a major challenge. GPS is only accurate to a radius of about a meter, and in this case the system needs to be accurate to within a few inches. The researchers’ solution was to post a few AprilTags, which are similar to QR codes, in every room; the handheld device’s camera reads them and recalibrates the system to the correct location.
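As a rough sketch of that recalibration step, the example below detects an AprilTag and estimates its pose using the open-source pupil-apriltags detector with OpenCV; the camera intrinsics, tag size and image file are placeholder assumptions, not values from the BRICK system.

```python
# A minimal sketch of reading an AprilTag to re-anchor the device's pose,
# using the open-source `pupil-apriltags` detector and OpenCV. The camera
# intrinsics, tag size and file name below are placeholder values.
import cv2
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")  # the standard AprilTag family

CAMERA_PARAMS = (600.0, 600.0, 320.0, 240.0)  # placeholder (fx, fy, cx, cy)
TAG_SIZE_M = 0.10                             # placeholder tag side length

frame = cv2.imread("room_view.jpg")  # placeholder frame from the device camera
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

detections = detector.detect(
    gray, estimate_tag_pose=True,
    camera_params=CAMERA_PARAMS, tag_size=TAG_SIZE_M)

for tag in detections:
    # pose_R / pose_t give the tag's pose in the camera frame; combined with
    # the tag's known position in the room, this re-anchors the device far
    # more precisely than GPS can.
    print(f"tag {tag.tag_id}: t = {tag.pose_t.ravel()}")
```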
“It’s an intricate system,” Fu said. “The mixed reality itself is not easy to build. From a software standpoint, connecting the building management system, where hardware, sensors and actuators are controlled, was a complex task that requires safety and security guarantees in a commercial environment. Our system architecture enables us to do it in an interactive and programmable way.”
The team presented their work at the BuildSys 2023 conference on Nov. 15 and 16 in Istanbul, Turkey.
The work was sponsored by the CONIX Research Center, one of the six centers in JUMP, a Semiconductor Research Corporation program sponsored by DARPA.
A handheld device equipped with a suite of sensors to monitor temperature, CO2 and airflow.
CREDIT
Carnegie Mellon University
Debugging buildings with mixed reality
Xiaohan Fu, Rajesh K. Gupta, University of California San Diego
John Pedraza, Anthony Rowe, Carnegie Mellon University
BRICK Consortium
METHOD OF RESEARCH
Experimental study
SUBJECT OF RESEARCH
Not applicable
ARTICLE TITLE
Debugging buildings with mixed reality