BEV Detection and Localisation using Semantic Segmentation in Autonomous Car Driving Systems
Date
2021
Journal Title
Journal ISSN
Volume Title
Publisher
Institute of Electrical and Electronics Engineers Inc.
Abstract
In autonomous vehicles, the perception system plays an important role in environment modeling and object detection in 3D space. Existing perception systems use various sensors to localize and track surrounding obstacles, but they have some limitations. Most existing end-to-end autonomous systems are computationally heavy, as they are built on multiple deep networks trained to detect and localize objects, and thus require custom high-end computing devices. To address this issue, we propose and experiment with different semantic segmentation-based models for Bird's Eye View (BEV) detection and localization of surrounding objects, such as vehicles and pedestrians, from LiDAR (light detection and ranging) point clouds. Voxelisation techniques are used to transform 3D LiDAR point clouds into 2D RGB images. The semantic segmentation models are trained from the ground up on the Lyft Level 5 dataset. In experimental evaluation, the proposed approach achieved a mean average precision score of 0.044 for U-Net, 0.041 for SegNet and 0.033 for FCN, while being significantly less compute-intensive than state-of-the-art approaches. © 2021 IEEE.
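The abstract's core preprocessing step, voxelising a 3D LiDAR point cloud into a 2D BEV RGB image, can be sketched as below. This is a minimal illustration, not the paper's actual pipeline: the crop ranges, the 0.25 m grid resolution, and the channel encoding (height, mean intensity, point density) are common BEV conventions assumed here, not values taken from the paper.

```python
import numpy as np

def pointcloud_to_bev(points, x_range=(-40.0, 40.0), y_range=(-40.0, 40.0),
                      z_range=(-2.0, 3.0), res=0.25):
    """Rasterise LiDAR points (N, 4: x, y, z, intensity in [0, 1]) into a
    BEV RGB image. Channel encoding (an illustrative choice, not the paper's):
    R = max height per cell, G = mean intensity, B = log point density."""
    w = int((x_range[1] - x_range[0]) / res)
    h = int((y_range[1] - y_range[0]) / res)

    # Keep only points inside the crop volume.
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]) &
         (points[:, 2] >= z_range[0]) & (points[:, 2] < z_range[1]))
    pts = points[m]

    # Map metric x/y coordinates to integer pixel indices.
    xi = ((pts[:, 0] - x_range[0]) / res).astype(int)
    yi = ((pts[:, 1] - y_range[0]) / res).astype(int)

    height = np.zeros((h, w))
    inten = np.zeros((h, w))
    count = np.zeros((h, w))

    # Per-cell max height above z_range[0], summed intensity, point count.
    np.maximum.at(height, (yi, xi), pts[:, 2] - z_range[0])
    np.add.at(inten, (yi, xi), pts[:, 3])
    np.add.at(count, (yi, xi), 1)

    img = np.zeros((h, w, 3), dtype=np.uint8)
    occ = count > 0
    img[..., 0] = (height / (z_range[1] - z_range[0]) * 255).astype(np.uint8)
    img[..., 1][occ] = np.clip(inten[occ] / count[occ] * 255, 0, 255).astype(np.uint8)
    img[..., 2] = np.clip(np.log1p(count) / np.log(64) * 255, 0, 255).astype(np.uint8)
    return img
```

The resulting image can be fed to any 2D segmentation network (U-Net, SegNet, FCN), which is what makes this family of approaches cheaper than full 3D detection networks: the expensive 3D reasoning is reduced to a fixed-cost rasterisation.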
Keywords
Autonomous Driving Systems, Deep Neural Networks, Object Localization, Semantic Segmentation
Citation
Proceedings of CONECCT 2021: 7th IEEE International Conference on Electronics, Computing and Communication Technologies, 2021.
