Tuesday, December 13, 2022

Bolstering the safety of self-driving cars with a deep learning-based object detection system

An Internet-of-Things-enabled, real-time object detection system developed by researchers could make autonomous vehicles more reliable and safer

Peer-Reviewed Publication

INCHEON NATIONAL UNIVERSITY

Image of a Google self-driving car.

Image: Technological advances such as deep learning, neural networks, and Internet-of-Things technology are continuously improving autonomous vehicles. In this study, researchers from Korea, the UK, and Canada propose a novel 3D object detection system for these vehicles to realize a safer and more reliable driving experience.

Credit: Scott Schrantz from Flickr (https://www.flickr.com/photos/scottschrantz/6125665813/)

Self-driving cars, or autonomous vehicles, have long been earmarked as the next-generation mode of transport. To enable the autonomous navigation of such vehicles in different environments, many technologies relating to signal processing, image processing, artificial intelligence, deep learning, edge computing, and the Internet of Things (IoT) need to be implemented.

One of the largest concerns around the popularization of autonomous vehicles is that of safety and reliability. To ensure a safe driving experience for the user, an autonomous vehicle must accurately, effectively, and efficiently monitor and distinguish its surroundings as well as potential threats to passenger safety.

To this end, autonomous vehicles employ high-tech sensors, such as Light Detection and Ranging (LiDAR), radar, and RGB cameras, which produce large amounts of data as RGB images and 3D measurement points, known as a “point cloud.” Quickly and accurately processing and interpreting this collected information is critical for identifying pedestrians and other vehicles. This can be achieved by integrating advanced computing methods and IoT into these vehicles, enabling fast, on-site data processing and more efficient navigation of various environments and obstacles.
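To make the “point cloud” idea concrete, here is a minimal sketch: each LiDAR return is a 3D measurement point (x, y, z), and cropping the cloud to a region of interest around the vehicle is a common preprocessing step before detection. All coordinates, the distance threshold, and the function name are illustrative assumptions, not the authors' code.

```python
# Each LiDAR return is one 3D point; a frame's point cloud is a list of them.
# Values below are made up for illustration.
points = [
    (2.0, 0.5, 0.1),      # point just ahead of the vehicle
    (45.0, -3.0, 0.8),    # point at mid range
    (120.0, 10.0, 2.5),   # point far beyond a typical detection range
]

def crop_to_range(cloud, max_dist=100.0):
    """Keep points within max_dist metres of the sensor (Euclidean distance)."""
    return [(x, y, z) for (x, y, z) in cloud
            if (x * x + y * y + z * z) ** 0.5 <= max_dist]

cropped = crop_to_range(points)
print(len(cropped))  # → 2 (the far point is dropped)
```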

In a recent study published in IEEE Transactions on Intelligent Transportation Systems on 17 October 2022, a group of international researchers, led by Professor Gwanggil Jeon from Incheon National University, Korea, developed a smart, IoT-enabled, end-to-end system for real-time 3D object detection based on deep learning and specialized for autonomous driving situations.

“For autonomous vehicles, environment perception is critical to answer a core question: ‘What is around me?’ It is essential that an autonomous vehicle can effectively and accurately understand its surrounding conditions and environments in order to perform a responsive action,” explains Prof. Jeon. “We devised a detection model based on YOLOv3, a well-known identification algorithm. The model was first used for 2D object detection and then modified for 3D objects,” he elaborates.

The team fed the collected RGB images and point cloud data as input to YOLOv3, which, in turn, output classification labels and bounding boxes with confidence scores. They then tested its performance with the Lyft dataset. The early results revealed that YOLOv3 achieved an extremely high accuracy of detection (>96%) for both 2D and 3D objects, outperforming other state-of-the-art detection models.
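The pipeline described above — sensor data in, labelled bounding boxes with confidence scores out — can be sketched in a few lines. This is a hypothetical illustration of the data flow, not the authors' implementation: the `Detection` type, the box layout, and the confidence threshold are all assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    """One object detected in a frame (hypothetical output shape)."""
    label: str                  # e.g. "car", "pedestrian"
    box_3d: Tuple[float, ...]   # (x, y, z, width, length, height, yaw)
    confidence: float           # model confidence score in [0, 1]

def filter_detections(detections: List[Detection],
                      threshold: float = 0.5) -> List[Detection]:
    """Keep only detections the model is sufficiently confident about."""
    return [d for d in detections if d.confidence >= threshold]

# Stand-in for the raw output of a YOLOv3-style model on one frame.
raw = [
    Detection("car", (10.0, 2.0, 0.5, 1.8, 4.5, 1.6, 0.0), 0.97),
    Detection("pedestrian", (5.0, -1.0, 0.9, 0.6, 0.6, 1.7, 0.0), 0.91),
    Detection("cyclist", (20.0, 4.0, 0.8, 0.7, 1.8, 1.6, 0.2), 0.32),  # low score
]

kept = filter_detections(raw)
print([d.label for d in kept])  # → ['car', 'pedestrian']
```

Confidence-score filtering like this is the standard final stage of YOLO-family detectors; the real system would of course compute the scores from RGB and point-cloud input rather than hard-code them.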

The method can be applied to autonomous vehicles, autonomous parking, autonomous delivery, and future autonomous robots, as well as in applications where object and obstacle detection, tracking, and visual localization are required. “At present, autonomous driving is being performed through LiDAR-based image processing, but it is predicted that a general camera will replace the role of LiDAR in the future. As such, the technology used in autonomous vehicles is changing every moment, and we are at the forefront,” highlights Prof. Jeon. “Based on the development of element technologies, autonomous vehicles with improved safety should be available in the next 5-10 years,” he concludes optimistically.

 

***

Reference

DOI: https://doi.org/10.1109/TITS.2022.3210490

Authors: Imran Ahmed1, Gwanggil Jeon2,*, and Abdellah Chehri3

Affiliations:

1School of Computing and Information Sciences, Anglia Ruskin University

2Department of Embedded Systems Engineering, Incheon National University

3Department of Mathematics and Computer Science, Royal Military College of Canada

 

About Incheon National University

Incheon National University (INU) is a comprehensive, student-focused university. It was founded in 1979 and given university status in 1988. One of the largest universities in South Korea, it houses nearly 14,000 students and 500 faculty members. In 2010, INU merged with Incheon City College to expand capacity and open more curricula. With its commitment to academic excellence and an unrelenting devotion to innovative research, INU offers its students real-world internship experiences. INU not only focuses on studying and learning but also strives to provide a supportive environment for students to follow their passion, grow, and, as their slogan says, be INspired.

Website: http://www.inu.ac.kr/mbshome/mbs/inuengl/index.html

 

About the author

Gwanggil Jeon received a Ph.D. degree from Hanyang University, Korea in 2008, following which he went on to become a postdoctoral researcher at the University of Ottawa, Canada, and an Assistant Professor at Niigata University, Japan, thereafter. He has served as a visiting or adjunct professor at École Normale Supérieure Paris-Saclay in France and Università degli Studi di Milano Statale in Italy. He is currently a Full Professor at Incheon National University in Korea. Additionally, Dr. Jeon is an IEEE Senior Member and has received numerous awards, including the IEEE Chester Sall Award in 2007, the ETRI Journal Paper Award in 2008, and the Industry-Academic Merit Award by the Ministry of SMEs and Startups of Korea in 2020.
