Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
5 results
Search Results
Item Automated Parking System in Smart Campus Using Computer Vision Technique (Institute of Electrical and Electronics Engineers Inc., 2019) Banerjee, S.; Ashwin, T.S.; Guddeti, R.M.R. In today's world, we need to maintain the safety and security of the people around us, which requires a well-connected surveillance system that keeps up-to-date information about various locations according to our needs. Real-time object detection is important for many applications such as traffic monitoring, classroom monitoring, security and rescue, and parking systems. Over the past decade, Convolutional Neural Networks have evolved into powerful models for recognizing images and videos, and they have become the most widely used approach in computer vision for object detection and localization problems. In this work, we propose a deep convolutional network architecture with a modified Single-Shot MultiBox Detector (SSD) approach to automate the parking system in a smart campus. Further, we created our own dataset to train and test the proposed computer vision technique. The experimental results demonstrated an accuracy of 71.2% on the created dataset. © 2019 IEEE.

Item Optimized Object Detection Technique in Video Surveillance System Using Depth Images (Springer, 2020) Shahzad Alam, M.; Ashwin, T.S.; Guddeti, R.M.R. In real-time surveillance and intrusion detection, it is difficult to rely only on RGB videos, as object detection accuracy is low under low-light conditions, and if the surveillance area is completely dark, objects will not be detected at all. Hence, in this paper, we propose a method that increases the accuracy of object detection even in low-light conditions. This paper also shows how light intensity affects the probability of object detection in RGB, depth, and infrared images. The depth information is obtained from a Kinect sensor, and the YOLO architecture is used to detect objects in real time.
We evaluated the proposed method on a real-time surveillance system, obtaining very promising results when applying it to depth images taken in low-light conditions. Further, real-time object detection leaves no time for image preprocessing, so we investigated how depth images can improve object detection accuracy without any preprocessing. Experimental results demonstrated that depth images (96%) outperform RGB images (48%) and infrared images (54%) in extreme low-light conditions. © 2020, Springer Nature Singapore Pte Ltd.

Item UAV based cost-effective real-time abnormal event detection using edge computing (Springer, 2019) Shahzad Alam, M.S.; Natesha, B.V.; Ashwin, T.S.; Guddeti, R.M.R. Recent advancements in computer vision have led to real-time surveillance systems that ensure the safety and security of people in public places. An aerial surveillance system is advantageous in this scenario: a platform such as an Unmanned Aerial Vehicle (UAV) is very reliable and can be considered a cost-effective option for the task. To make the system fully autonomous, we require real-time abnormal event detection, but this is computationally complex and time-consuming given the heavy load on the UAV, which affords only limited processing and payload capacity. In this paper, we propose a cost-effective approach for aerial surveillance in which we move the large computation tasks to the cloud while keeping limited computation on board the UAV using an edge computing technique. Further, our proposed system maintains minimum communication between the UAV and the cloud, which not only reduces network traffic but also reduces the end-to-end delay.
The proposed method is based on the state-of-the-art YOLO (You Only Look Once) technique for real-time object detection, deployed on an edge computing device using the Intel Movidius Neural Compute Stick VPU (Vision Processing Unit), while abnormal event detection using a motion influence map is applied on the cloud. Experimental results demonstrate that the proposed system reduces the end-to-end delay. Further, Tiny YOLO processes frames per second (fps) six times faster than other state-of-the-art methods. © 2019, Springer Science+Business Media, LLC, part of Springer Nature.

Item Multimodal behavior analysis in computer-enabled laboratories using nonverbal cues (Springer Science and Business Media Deutschland GmbH, 2020) Banerjee, S.; Ashwin, T.S.; Guddeti, R.M.R. In the modern era, there is a growing need for surveillance to ensure the safety and security of people. Real-time object detection is crucial for many applications such as traffic monitoring, security, search and rescue, vehicle counting, and classroom monitoring. Computer-enabled laboratories in a smart campus are generally equipped with video surveillance cameras, yet the existing literature shows that this surveillance data is seldom used for unobtrusive behavioral analysis. Although there are several works on recognizing students' and teachers' behavior from devices such as the Kinect and handheld cameras, no existing work extracts video surveillance data and predicts the behavioral patterns of both the students and the teachers in real time. Hence, in this study, we unobtrusively analyze the students' and teachers' behavioral patterns inside a teaching laboratory (considered as an indoor scenario of a smart campus).
Here, we propose a deep convolutional network architecture to classify and recognize objects in the indoor scenario, i.e., the teaching laboratory environment of the smart campus, using a modified Single-Shot MultiBox Detector approach. We used six different class labels to predict the behavioral patterns of both the students and the teachers, and created our own dataset with these six class labels for training the deep learning architecture. The performance evaluation demonstrates that the proposed method performs better, with an accuracy of 0.765 for classification and localization. © 2020, Springer-Verlag London Ltd., part of Springer Nature.

Item A framework for low cost, ubiquitous and interactive smart refrigerator (Springer, 2024) Mundody, S.; Guddeti, R.M.R. Internet of Things (IoT) and Artificial Intelligence (AI)-enabled technologies are essential in developing innovative environments and intelligent applications. IoT- and AI-enabled appliances are entering our kitchens, adding comfort and usability, but these appliances are not economical and are beyond the reach of a commoner with a moderate income. An intelligent fridge is one such appliance. This paper proposes a design for developing a cost-effective, ubiquitous, and intelligent refrigerator. Unlike existing approaches, the proposed method identifies and predicts fridge items based on Night Vision images and provides minimal natural-language interaction with the fridge. The proposed design aims to convert any standard refrigerator into a more intelligent counterpart with minimal hardware and software requirements, allowing users to view fridge contents on the go using a mobile application and to interact with it using natural language. The transfer learning technique enables us to use a YOLOv5n model for object detection.
As there are no publicly available Night Vision image datasets of fridge items, we created a custom dataset of Night Vision images to train and validate the object recognition model. Our object detection model achieved a mAP of 97.1%, compared to the YOLOv3-tiny and YOLOv4-tiny models, whose mAPs are 94.8% and 96.3%, respectively. The overall cost of the refrigerator after deployment of the module is less than $300, making it an affordable option. The proposed framework meets most of the requirements of a low-cost, ubiquitous, and interactive smart refrigerator. © 2023, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
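The SSD and YOLO detectors used throughout these works are evaluated by matching predicted boxes to ground-truth boxes via intersection-over-union (IoU). A minimal sketch of that computation (the box coordinates are hypothetical examples, not data from any of the papers):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # identical boxes -> 1.0
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # half overlap -> ~0.333
```

A detection typically counts as correct when its IoU with a ground-truth box exceeds a fixed threshold (0.5 is a common choice).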
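The UAV surveillance item reduces network traffic by keeping detection on the edge device and contacting the cloud only when necessary. A toy illustration of that filtering idea (the per-frame counts and threshold are hypothetical, not the paper's actual protocol):

```python
def frames_to_upload(detections_per_frame, min_objects=1):
    """Return indices of frames the edge device would forward to the cloud.

    detections_per_frame: per-frame object counts from the on-board detector.
    Only frames with at least `min_objects` detections are forwarded, so
    empty frames never leave the UAV.
    """
    return [i for i, n in enumerate(detections_per_frame) if n >= min_objects]

counts = [0, 0, 2, 0, 1, 0, 0, 3]  # hypothetical on-board detector output
uploaded = frames_to_upload(counts)
print(uploaded)                                            # [2, 4, 7]
print(f"traffic reduced to {len(uploaded)}/{len(counts)} frames")
```

Forwarding only flagged frames is one simple way an edge/cloud split can cut both network traffic and end-to-end delay, as the abstract describes.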
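The mAP figures reported for the smart-refrigerator models are the mean over classes of per-class average precision (AP). A simplified sketch of accumulating AP over a confidence-ranked detection list (it assumes every ground-truth object appears among the detections, which real mAP tooling does not):

```python
def average_precision(ranked_hits):
    """AP for one class.

    ranked_hits: detections sorted by descending confidence, True for a
    correct match, False for a false positive. Simplifying assumption:
    every ground-truth object is matched by some detection in the list.
    """
    total_positives = sum(ranked_hits)
    if total_positives == 0:
        return 0.0
    ap, tp = 0.0, 0
    for k, hit in enumerate(ranked_hits, start=1):
        if hit:
            tp += 1
            ap += tp / k  # precision at each recall point
    return ap / total_positives

print(average_precision([True, True, False, True]))  # (1/1 + 2/2 + 3/4) / 3
```

mAP is then the mean of this quantity across all object classes, which is how a single figure like 97.1% summarizes a multi-class detector.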
