Faculty Publications

Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736

Publications by NITK Faculty

Search Results

Now showing 1 - 5 of 5
  • Item
    Automated Parking System in Smart Campus Using Computer Vision Technique
    (Institute of Electrical and Electronics Engineers Inc., 2019) Banerjee, S.; Ashwin, T.S.; Guddeti, R.M.R.
    In today's world, we need to maintain the safety and security of the people around us, so we need a well-connected surveillance system that keeps active information about various locations according to our needs. Real-time object detection is important for many applications such as traffic monitoring, classroom monitoring, security and rescue, and parking systems. Over the past decade, Convolutional Neural Networks have evolved into powerful models for recognizing images and videos and have become the most widely used approach in computer vision for object detection and localization. In this work, we propose a deep convolutional network architecture to automate the parking system in a smart campus using a modified Single-Shot MultiBox Detector (SSD) approach. Further, we created our own dataset to train and test the proposed computer vision technique. The experimental results demonstrated an accuracy of 71.2% on the created dataset. © 2019 IEEE.
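A parking system built on a car detector needs one more step the abstract implies but does not detail: mapping detected car boxes onto known slot positions. The sketch below is a hypothetical illustration of that mapping, not the paper's code; the SSD detector is stubbed out, and the slot layout and overlap threshold are assumptions.

```python
# Hypothetical sketch: deciding slot occupancy from detector output.
# The paper trains a modified SSD to detect cars; here the detector is
# stubbed out, and the slot geometry and threshold are illustrative.

def overlap_area(a, b):
    """Intersection area of two boxes given as (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def slot_occupancy(slots, detections, min_fraction=0.5):
    """Mark a slot occupied if a detected car covers enough of it."""
    status = {}
    for name, slot in slots.items():
        slot_area = (slot[2] - slot[0]) * (slot[3] - slot[1])
        covered = max((overlap_area(slot, d) for d in detections), default=0)
        status[name] = covered >= min_fraction * slot_area
    return status

# Two slots; the (stubbed) detector reports one car box over slot A.
slots = {"A": (0, 0, 100, 200), "B": (110, 0, 210, 200)}
cars = [(10, 20, 95, 190)]
print(slot_occupancy(slots, cars))  # {'A': True, 'B': False}
```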
  • Item
    Optimized Object Detection Technique in Video Surveillance System Using Depth Images
    (Springer, 2020) Shahzad Alam, M.; Ashwin, T.S.; Guddeti, R.M.R.
    In real-time surveillance and intrusion detection, it is difficult to rely only on RGB videos, as object detection accuracy is low under low-light conditions, and if the surveillance area is completely dark, objects will not be detected at all. Hence, in this paper, we propose a method that increases the accuracy of object detection even in low-light conditions. The paper also shows how light intensity affects the probability of object detection in RGB, depth, and infrared images. The depth information is obtained from a Kinect sensor, and the YOLO architecture is used to detect objects in real time. We evaluated the proposed method using a real-time surveillance system, which gave very promising results when applied to depth images taken in low-light conditions. Further, in real-time object detection, we cannot afford to apply image preprocessing before detection, so we investigated how depth images can improve detection accuracy without any preprocessing. Experimental results demonstrated that depth images (96%) outperform RGB images (48%) and infrared images (54%) in extremely low-light conditions. © 2020, Springer Nature Singapore Pte Ltd.
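The intuition behind the depth-image result is that depth values come from active sensing and do not depend on scene illumination, so even a trivial nearer-than-background test still isolates an object after RGB intensities collapse to near-zero. The toy sketch below illustrates only that intuition; the 3×5 "frame", background distance, and thresholds are assumptions, and the paper itself runs YOLO on Kinect depth images rather than a threshold test.

```python
# Hypothetical sketch: depth-based foreground detection is independent of
# lighting. Depth values are in millimetres from the sensor; a pixel
# clearly nearer than the empty-scene background is flagged as foreground.

def foreground_mask(depth, background_mm, margin_mm=200):
    """Flag pixels clearly nearer than the empty-scene background depth."""
    return [[d < background_mm - margin_mm for d in row] for row in depth]

def has_object(mask, min_pixels=3):
    """Report an object when enough foreground pixels are present."""
    return sum(v for row in mask for v in row) >= min_pixels

# Background wall at ~3000 mm; a blob (e.g., an intruder) at ~1500 mm.
depth_frame = [
    [3000, 3000, 1500, 1500, 3000],
    [3000, 1500, 1500, 1500, 3000],
    [3000, 3000, 1500, 3000, 3000],
]
mask = foreground_mask(depth_frame, background_mm=3000)
print(has_object(mask))  # True: the blob is found regardless of lighting
```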
  • Item
    UAV based cost-effective real-time abnormal event detection using edge computing
    (Springer, 2019) Shahzad Alam, M.S.; Natesha, B.V.; Ashwin, T.S.; Guddeti, R.M.R.
    Recent advancements in computer vision have led to the development of real-time surveillance systems that ensure the safety and security of people in public places. An aerial surveillance system is advantageous in this scenario: a platform such as an Unmanned Aerial Vehicle (UAV) is very reliable and can be considered a cost-effective option for the task. To make the system fully autonomous, we require real-time abnormal event detection, but this is computationally complex and time-consuming because the UAV affords only limited processing and payload capacity. In this paper, we propose a cost-effective approach for aerial surveillance in which we move the heavy computation tasks to the cloud while keeping limited on-board computation on the UAV using an edge computing technique. Further, our proposed system maintains minimal communication between the UAV and the cloud, which not only reduces the network traffic but also reduces the end-to-end delay. The proposed method is based on the state-of-the-art YOLO (You Only Look Once) technique for real-time object detection, deployed on an edge computing device using the Intel Movidius Neural Compute Stick VPU (Vision Processing Unit), and abnormal event detection using a motion influence map is applied on the cloud. Experimental results demonstrate that the proposed system reduces the end-to-end delay; further, Tiny YOLO processes frames per second (fps) six times faster than other state-of-the-art methods. © 2019, Springer Science+Business Media, LLC, part of Springer Nature.
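The edge/cloud split described above can be sketched as a filter: a lightweight detector runs on the UAV's edge device, and only frames that actually contain objects of interest are forwarded to the cloud for the heavier abnormal-event analysis, which is what keeps the uplink traffic and end-to-end delay low. The stub detector and the frame records below are placeholder assumptions; the paper itself runs Tiny YOLO on a Movidius VPU.

```python
# Hypothetical sketch of the edge/cloud split: detect on the edge device,
# forward only non-empty frames to the cloud. edge_detect() stands in for
# Tiny YOLO on the UAV; the dict-based "frames" are illustrative.

def edge_detect(frame):
    """Stand-in for the on-board Tiny YOLO detector."""
    return frame.get("objects", [])

def edge_filter(frames):
    """Forward only frames with detections, cutting UAV-to-cloud traffic."""
    uplink = []
    for frame in frames:
        detections = edge_detect(frame)
        if detections:  # empty frames never leave the UAV
            uplink.append({"id": frame["id"], "objects": detections})
    return uplink

frames = [
    {"id": 0, "objects": []},
    {"id": 1, "objects": ["person"]},
    {"id": 2, "objects": []},
    {"id": 3, "objects": ["person", "car"]},
]
sent = edge_filter(frames)
print(len(sent), "of", len(frames), "frames sent")  # 2 of 4 frames sent
```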
  • Item
    Multimodal behavior analysis in computer-enabled laboratories using nonverbal cues
    (Springer Science and Business Media Deutschland GmbH, 2020) Banerjee, S.; Ashwin, T.S.; Guddeti, R.M.R.
    In the modern era, there is a growing need for surveillance to ensure the safety and security of people. Real-time object detection is crucial for many applications such as traffic monitoring, security, search and rescue, vehicle counting, and classroom monitoring. Computer-enabled laboratories in a smart campus are generally equipped with video surveillance cameras, but the existing literature shows that video surveillance data obtained from a smart campus is seldom used for unobtrusive behavioral analysis. Although there are several works on students' and teachers' behavior recognition using devices such as Kinect and handheld cameras, no existing work extracts video surveillance data and predicts the behavioral patterns of both students and teachers in real time. Hence, in this study, we unobtrusively analyze students' and teachers' behavioral patterns inside a teaching laboratory (considered an indoor scenario of a smart campus). We propose a deep convolutional network architecture to classify and recognize objects in this indoor scenario, i.e., the teaching laboratory environment of the smart campus, using a modified Single-Shot MultiBox Detector approach. We used six class labels for predicting the behavioral patterns of both students and teachers, and created our own dataset with these six class labels for training the deep learning architecture. The performance evaluation demonstrates that the proposed method performs well, with an accuracy of 0.765 for classification and localization. © 2020, Springer-Verlag London Ltd., part of Springer Nature.
  • Item
    Surveillance video analysis for student action recognition and localization inside computer laboratories of a smart campus
    (Springer, 2021) Rashmi, M.; Ashwin, T.S.; Guddeti, G.R.M.
    In the era of smart campus, unobtrusive methods for students’ monitoring is a challenging task. The monitoring system must have the ability to recognize and detect the actions performed by the students. Recently many deep neural network based approaches have been proposed to automate Human Action Recognition (HAR) in different domains, but these are not explored in learning environments. HAR can be used in classrooms, laboratories, and libraries to make the teaching-learning process more effective. To make the learning process more effective in computer laboratories, in this study, we proposed a system for recognition and localization of student actions from still images extracted from (Closed Circuit Television) CCTV videos. The proposed method uses (You Only Look Once) YOLOv3, state-of-the-art real-time object detection technology, for localization, recognition of students’ actions. Further, the image template matching method is used to decrease the number of image frames and thus processing the video quickly. As actions performed by the humans are domain specific and since no standard dataset is available for students’ action recognition in smart computer laboratories, thus we created the STUDENT ACTION dataset using the image frames obtained from the CCTV cameras placed in the computer laboratory of a university campus. The proposed method recognizes various actions performed by students in different locations within an image frame. It shows excellent performance in identifying the actions with more samples compared to actions with fewer samples. © 2020, Springer Science+Business Media, LLC, part of Springer Nature.