Depth Information Fusion Using Radar-LiDAR-Camera Experimental Setup for ADAS Applications

Date

2024

Publisher

Institute of Electrical and Electronics Engineers Inc.

Abstract

Improved scene perception makes safe driving of autonomous vehicles (AVs) feasible. The most common automotive sensors for AV perception tasks (detection, classification, and tracking) are Light Detection and Ranging (LiDAR), radar, and camera sensors, of which LiDAR and radar are the most reliable for determining range. In this research, we use camera-based object recognition as the reference for fusing the LiDAR and radar point cloud data. To minimize unintended effects from sensor orientation and sampling time, all three sensors are installed, calibrated, and time-aligned for this experiment. Object detection is then performed on the camera data using a MobileNet-based deep neural network (DNN), and the radar and LiDAR point clouds are projected into the two-dimensional bounding boxes obtained from object recognition. The range information from the radar and LiDAR is then retrieved and combined using a weighted average fusion algorithm. The experiment runs on the ROS platform using an AWR1642 radar sensor and an Intel RealSense LiDAR Camera L515. Camera-based object detection combined with radar-LiDAR range fusion is a promising algorithm for the Advanced Driver Assistance System (ADAS) emergency brake assistant (EBA) function. © 2024 IEEE.
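The abstract describes the fusion step only at a high level. The short Python sketch below illustrates one plausible reading, assuming inverse-variance weights for the two range estimates and a simple bounding-box mask for the projected point clouds; the function names (fuse_range, ranges_in_bbox), the variance constants, and the toy data are illustrative assumptions, not the authors' implementation.

import numpy as np

# Hypothetical per-sensor range variances (m^2); the paper does not publish
# its weighting constants, so these values are illustrative only.
RADAR_VAR = 0.10   # assumed radar range noise variance
LIDAR_VAR = 0.02   # assumed LiDAR range noise variance

def fuse_range(radar_range_m, lidar_range_m,
               radar_var=RADAR_VAR, lidar_var=LIDAR_VAR):
    """Weighted average of two range estimates with inverse-variance weights:
    w_i = (1/var_i) / sum_j (1/var_j), fused = sum_i w_i * r_i."""
    w_radar = 1.0 / radar_var
    w_lidar = 1.0 / lidar_var
    return (w_radar * radar_range_m + w_lidar * lidar_range_m) / (w_radar + w_lidar)

def ranges_in_bbox(points_xyz, pixels_uv, bbox):
    """Keep ranges of 3-D points whose image-plane projections (u, v)
    fall inside a 2-D detection box (u_min, v_min, u_max, v_max)."""
    u_min, v_min, u_max, v_max = bbox
    inside = ((pixels_uv[:, 0] >= u_min) & (pixels_uv[:, 0] <= u_max) &
              (pixels_uv[:, 1] >= v_min) & (pixels_uv[:, 1] <= v_max))
    return np.linalg.norm(points_xyz[inside], axis=1)

if __name__ == "__main__":
    # Toy LiDAR points already projected to pixel coordinates (u, v).
    pts = np.array([[12.0, 0.5, 0.1], [12.2, -0.3, 0.0], [30.0, 5.0, 1.0]])
    uv = np.array([[320, 240], [330, 245], [600, 100]])
    lidar_r = float(np.median(ranges_in_bbox(pts, uv, bbox=(300, 220, 360, 260))))
    radar_r = 12.4  # e.g. median AWR1642 range inside the same box (m)
    print(f"fused range: {fuse_range(radar_r, lidar_r):.2f} m")

Inverse-variance weighting is a natural choice for a weighted average fusion, since it minimizes the variance of the fused estimate when the sensor errors are independent; whether the authors use fixed or adaptive weights is not stated in the abstract.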

Keywords

ADAS, emergency brake assistant, MobileNet object recognition, radar-lidar fusion, weighted average fusion

Citation

Proceedings of the 2024 13th IEEE International Conference on Communication Systems and Network Technologies (CSNT 2024), 2024, pp. 59-65.
