Faculty Publications

Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736

Publications by NITK Faculty


Search Results

Now showing 1 - 2 of 2
  • Item
    FPGA Accelerated Track to Track Association and Fusion for ADAS Distributed Sensors
    (Institute of Electrical and Electronics Engineers Inc., 2023) Gopala Swamy, B.; Reddy, G.H.; Srihari, P.; Shripathi Acharya, U.; Pardhasaradhi, B.
    The integration and amalgamation of sensor data in the automotive domain play a pivotal role in informing real-time decision-making for advanced driver assistance and safety (ADAS) systems. In a distributed architecture, the track-to-track association (T2TA) modules are responsible for associating the correct track pairs, after which the fusion modules fuse the information. The T2TA and fusion modules operate within the CPU framework, often leading to elevated latency across the system. This paper introduces digital signal processing (DSP) architectures for the T2TA and fusion modules, designed to meet stringent constraints in terms of both area and latency. These modules encompass critical operations such as matrix inversion, vector-to-matrix multiplications, and matrix-to-matrix multiplications. The challenge of vector-to-matrix multiplications is effectively addressed through the constant-coefficient multiplication technique. Additionally, matrix-to-matrix multiplication is performed by employing a vector-to-vector multiplication architecture with Block RAMs (BRAMs). Furthermore, matrix inversion is realized through the LU decomposition method. Moreover, this paper presents an innovative approach to expedite the T2TA and fusion modules by harnessing a folded DSP architecture within a system-on-chip (SoC) framework. The results of simulations substantiate that the proposed architectures exhibit a remarkable suitability for applications necessitating low area, low power consumption, and high throughput. © 2023 IEEE. (An illustrative sketch of LU-based matrix inversion appears after this listing.)
  • Item
    Depth Information Fusion Using Radar-LiDAR-Camera Experimental Setup for ADAS Applications
    (Institute of Electrical and Electronics Engineers Inc., 2024) Dayananda, N.B.; Srivastava, N.; Achala, G.; Nandagiri, A.; Srihari, P.; Pardhasaradhi, B.; Cenkeramaddi, L.R.
    Improved scene perception makes the safe driving of automotive vehicles (AVs) feasible. The most common automotive sensors for AV perception, covering detection, classification, and tracking, are the Light Detection and Ranging (LiDAR), radar, and camera sensors. The most reliable sensors for determining range are LiDAR and radar. In this research, we use camera-based object recognition as the reference for fusing the LiDAR and radar point cloud data. To minimize any unintended effects from sensor orientation and sampling time, all three sensors are installed, calibrated, and time-aligned for this experiment. The camera data is then subjected to object detection using a MobileNet-based deep neural network (DNN). The radar and LiDAR point cloud data are projected onto the two-dimensional bounding boxes (width, length, and height) obtained from object recognition. Following that, the range information from the radar and LiDAR is retrieved and combined using a weighted-average fusion algorithm. The experiment is run on the ROS platform, using the AWR1642 radar sensor and the RealSense LiDAR Camera L515 sensor. Object detection from the camera combined with fusion of the radar and LiDAR ranges is a candidate algorithm for the Advanced Driver Assistance System (ADAS) emergency brake assist (EBA) function. © 2024 IEEE. (An illustrative sketch of the weighted-average range fusion appears after this listing.)
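
The first abstract names LU decomposition as the matrix-inversion method inside the T2TA and fusion modules. What follows is a minimal software sketch of that technique, not the paper's folded DSP/FPGA architecture: a pure-Python Doolittle LU factorization without pivoting (assuming the well-conditioned, covariance-style matrices such modules invert), followed by column-wise forward and back substitution to form the inverse. The function names and test matrix are illustrative.

# Illustrative sketch, not the paper's RTL: matrix inversion via LU
# decomposition. Doolittle LU without pivoting, which assumes the input
# (e.g. a well-conditioned track covariance) never yields a zero pivot.
import numpy as np

def lu_decompose(a):
    """Doolittle LU: returns (L, U) with unit-diagonal L such that A = L @ U."""
    n = a.shape[0]
    L = np.eye(n)
    U = a.astype(float)  # astype copies, so the input is left untouched
    for k in range(n):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]   # zero column k below the pivot
    return L, U

def lu_invert(a):
    """Invert A one column at a time: solve L y = e_j, then U x = y."""
    n = a.shape[0]
    L, U = lu_decompose(a)
    inv = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        y = np.zeros(n)
        for i in range(n):                    # forward substitution (L is unit-diagonal)
            y[i] = e[i] - L[i, :i] @ y[:i]
        x = np.zeros(n)
        for i in reversed(range(n)):          # back substitution
            x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
        inv[:, j] = x
    return inv

# Usage: invert a small symmetric positive-definite matrix, as would arise
# from a track covariance, and check the result against numpy.
A = np.array([[4.0, 1.0, 0.5], [1.0, 3.0, 0.2], [0.5, 0.2, 2.0]])
assert np.allclose(lu_invert(A), np.linalg.inv(A))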
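
The second abstract combines radar and LiDAR range readings with a weighted-average fusion algorithm but does not state the weights. The sketch below assumes inverse-variance weighting, a common choice when per-sensor noise statistics are known; the variance values and function name are hypothetical, not measured from the AWR1642 or the L515.

# Illustrative sketch of weighted-average range fusion. The default
# variances below are assumed placeholders, not the paper's values.

def fuse_ranges(radar_range_m, lidar_range_m, radar_var=0.04, lidar_var=0.01):
    """Fuse two range estimates (metres) by inverse-variance weighting."""
    w_radar = 1.0 / radar_var
    w_lidar = 1.0 / lidar_var
    fused = (w_radar * radar_range_m + w_lidar * lidar_range_m) / (w_radar + w_lidar)
    fused_var = 1.0 / (w_radar + w_lidar)  # fused estimate is less noisy than either input
    return fused, fused_var

# Usage: radar reports 12.3 m and LiDAR 12.1 m for the same camera-detected
# object; the fused range lands nearer the lower-variance LiDAR reading.
fused, var = fuse_ranges(12.3, 12.1)
print(f"fused range = {fused:.2f} m, variance = {var:.4f}")  # 12.14 m, 0.0080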