Conference Papers
Permanent URI for this collection: https://idr.nitk.ac.in/handle/123456789/28506
Item Kinect Based Suspicious Posture Recognition for Real-Time Home Security Applications (Institute of Electrical and Electronics Engineers Inc., 2018) Vikram, M.; Anantharaman, A.; Suhas, B.S.; Ashwin, T.S.; Guddeti, R.M.R.
Suspicious posture recognition is a paramount task with numerous applications in everyday life. We explore one such application, real-time home security, using the Microsoft Kinect depth camera. We propose a novel method in which the remote device itself detects the suspicious activity. The remote device alerts the server in case of a suspicious activity, and the server immediately alerts the home owners. We show that our method works in real time, is robust to changing lighting conditions, and performs all computations on the remote device itself, which makes it suitable for real-time home security. © 2018 IEEE.

Item GA-PSO: Service Allocation in Fog Computing Environment Using Hybrid Bio-Inspired Algorithm (Institute of Electrical and Electronics Engineers Inc., 2019) Yadav, V.; Natesha, B.V.; Guddeti, R.M.R.
Internet of Things (IoT) applications require an efficient platform for processing big data. Different computing paradigms, such as Cloud, Edge, and Fog, are used for this purpose. The main challenge in the fog computing environment is to minimize both the energy consumption and the makespan of services. The allocation of services to a set of virtual machines (VMs) is the deciding factor for energy consumption and latency in fog servers; hence, service allocation in the fog environment is an NP-hard problem. In this work, we developed a hybrid algorithm combining the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) to solve this NP-hard problem. The proposed GA-PSO optimally allocates services while minimizing both the total makespan and the energy consumption of IoT applications in the fog computing environment.
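The abstract does not include the allocation code, so the following is only a rough sketch of one common GA-PSO hybridization for service-to-VM allocation: a GA phase (selection, crossover, mutation) alternated with a simplified PSO phase in which the continuous velocity update is replaced by a stochastic move toward the global best, a standard discretization. All numbers, and the makespan-only cost, are invented for illustration; the paper additionally optimizes energy consumption.

```python
import random

random.seed(0)

N_SERVICES, N_VMS = 12, 4
# Hypothetical per-service processing times (the paper's workloads are not given).
TIMES = [random.uniform(1.0, 5.0) for _ in range(N_SERVICES)]

def makespan(alloc):
    """Cost of an allocation (service -> VM index): the maximum total VM load."""
    load = [0.0] * N_VMS
    for svc, vm in enumerate(alloc):
        load[vm] += TIMES[svc]
    return max(load)

def ga_step(population):
    """GA phase: keep the better half, refill via crossover and mutation."""
    population.sort(key=makespan)
    survivors = population[: len(population) // 2]
    children = []
    while len(survivors) + len(children) < len(population):
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, N_SERVICES)        # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:                    # mutation: reassign one service
            child[random.randrange(N_SERVICES)] = random.randrange(N_VMS)
        children.append(child)
    return survivors + children

def pso_step(population, best):
    """PSO phase: stochastically pull each allocation toward the global best,
    keeping the move only if it does not worsen the cost."""
    out = []
    for p in population:
        q = [bi if random.random() < 0.5 else pi for pi, bi in zip(p, best)]
        out.append(min(p, q, key=makespan))
    return out

pop = [[random.randrange(N_VMS) for _ in range(N_SERVICES)] for _ in range(20)]
start = makespan(min(pop, key=makespan))
for _ in range(50):
    pop = ga_step(pop)
    pop = pso_step(pop, min(pop, key=makespan))

best = min(pop, key=makespan)
print(round(makespan(best), 2))
```

Because survivors are retained in the GA phase and the PSO phase only accepts non-worsening moves, the best makespan found never increases across iterations.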
We implemented the proposed GA-PSO using a customized C simulator, and the results demonstrate that the proposed GA-PSO outperforms both the GA and PSO techniques when applied individually. © 2019 IEEE.

Item Smart Cane for Assisting Visually Impaired People (Institute of Electrical and Electronics Engineers Inc., 2019) Nandini, A.V.; Dwivedi, A.; Kumar, N.A.; Ashwin, T.S.; Vishnuvardhan, V.; Guddeti, R.M.R.
Blindness prevents a person from navigating independently outside well-known environments and affects their ability to perform several jobs, duties, and activities. Visually impaired people depend on external assistance, provided by humans, guide dogs, or special electronic devices, for better decision making. This motivated us to create a prototype called the 'Smart cane for assisting visually impaired people' to overcome the problems they face in daily life. Our device is a low-cost, lightweight system that processes sensor signals and alerts the visually impaired user to any obstacle, pothole, or water puddle through different beeping patterns. It also senses the light intensity of the environment and illuminates an LED accordingly. This is accomplished by incorporating two ultrasonic sensors, a moisture sensor, and an LDR sensor along with an Arduino Nano microcontroller, placed at specific positions on the cane for efficient guidance. Moreover, a GSM module is added to the system so that the visually impaired person can send a message to an emergency contact number in case of distress. The developed model showed 89 percent accuracy, and 80 percent of the users were satisfied with the developed prototype. © 2019 IEEE.

Item Automated Parking System in Smart Campus Using Computer Vision Technique (Institute of Electrical and Electronics Engineers Inc., 2019) Banerjee, S.; Ashwin, T.S.; Guddeti, R.M.R.
In today's world, we need to maintain the safety and security of the people around us.
Hence, we need a well-connected surveillance system that keeps active information about various locations according to our needs. Real-time object detection is important for many applications, such as traffic monitoring, classroom monitoring, security and rescue, and parking systems. Over the past decade, Convolutional Neural Networks (CNNs) have evolved into powerful models for recognizing images and videos and have become the most widely used approach for object detection and localization problems in computer vision. In this work, we propose a deep convolutional network architecture that automates the parking system in a smart campus using a modified Single-Shot Multibox Detector (SSD) approach. Further, we created our own dataset to train and test the proposed computer vision technique. The experimental results demonstrated an accuracy of 71.2% on the created dataset. © 2019 IEEE.

Item Optimized Object Detection Technique in Video Surveillance System Using Depth Images (Springer, 2020) Shahzad Alam, M.; Ashwin, T.S.; Guddeti, R.M.R.
In real-time surveillance and intrusion detection, it is difficult to rely only on RGB videos: the accuracy of object detection is low in low-light conditions, and if the surveillance area is completely dark, the object is not detected at all. Hence, in this paper, we propose a method that increases the accuracy of object detection even in low-light conditions. This paper also shows how light intensity affects the probability of object detection in RGB, depth, and infrared images. The depth information is obtained from a Kinect sensor, and the YOLO architecture is used to detect objects in real time. We evaluated the proposed method in a real-time surveillance system, and it gave very promising results when applied to depth images taken in low-light conditions.
Further, in real-time object detection there is little scope for applying image preprocessing before detection, so we investigated depth images, with which the accuracy of object detection can be improved without any image preprocessing. Experimental results demonstrated that depth images (96%) outperform RGB images (48%) and infrared images (54%) in extremely low-light conditions. © 2020, Springer Nature Singapore Pte Ltd.

Item Skeleton based Human Action Recognition for Smart City Application using Deep Learning (Institute of Electrical and Electronics Engineers Inc., 2020) Rashmi, M.; Guddeti, R.M.R.
Human Action Recognition (HAR) nowadays plays a vital role in several applications such as surveillance systems, gaming, and robotics. Interpreting the actions performed by a person in a video is one of the essential tasks of intelligent surveillance systems in the smart city, smart building, etc. Human actions can be recognized using modalities such as depth, skeleton, or combinations of these. In this paper, we propose a human action recognition system based on the 3D skeleton model. Since the roles of different joints vary while performing an action, the proposed work uses the most informative inter-joint distances and angles in the skeleton model as the feature set. Further, we propose a deep learning framework for human action recognition based on these features. We performed experiments on MSRAction3D, a publicly available dataset for 3D HAR, and the results demonstrated that the proposed framework obtained accuracies of 95.83%, 92.9%, and 98.63% on the three subsets AS1, AS2, and AS3, respectively, using the protocols of [19].
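The abstract names inter-joint distances and angles as the feature set but does not give formulas; the following minimal sketch shows how such features can be computed from 3D joint positions. The joint names and coordinates are invented for illustration, and the paper's actual joint selection is not reproduced here.

```python
import math

def dist(a, b):
    """Euclidean distance between two 3D joints."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def angle(a, b, c):
    """Angle at joint b (radians) formed by the segments b->a and b->c."""
    ab = [ai - bi for ai, bi in zip(a, b)]
    cb = [ci - bi for ci, bi in zip(c, b)]
    dot = sum(x * y for x, y in zip(ab, cb))
    na = math.sqrt(sum(x * x for x in ab))
    nc = math.sqrt(sum(x * x for x in cb))
    # Clamp to [-1, 1] to guard against floating-point drift.
    return math.acos(max(-1.0, min(1.0, dot / (na * nc))))

# Toy frame: three joints of one arm (coordinates are illustrative).
shoulder, elbow, wrist = (0.0, 1.4, 0.0), (0.3, 1.1, 0.0), (0.6, 1.4, 0.0)

features = [
    dist(shoulder, wrist),          # an informative inter-joint distance
    angle(shoulder, elbow, wrist),  # elbow flexion angle
]
print(features)
```

A per-frame feature vector like this, concatenated over the frames of a clip, is one plausible input to the deep learning framework the abstract describes.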
© 2020 IEEE.

Item Skeleton-Based Human Action Recognition Using Motion and Orientation of Joints (Springer Science and Business Media Deutschland GmbH, 2022) Ghosh, S.K.; Rashmi, M.; Mohan, B.R.; Guddeti, R.M.R.
Perceiving human actions accurately from a video is one of the most challenging tasks demanded by many real-time applications in smart environments. Recently, several approaches have been proposed for representing human actions and recognizing them in videos using different data modalities; for images in particular, deep learning-based approaches have demonstrated their classification efficiency. Here, we propose an effective framework for representing actions based on features obtained from the 3D skeleton data of humans performing actions. The proposed work utilizes the motion, pose orientation, and transition orientation of skeleton joints for action representation. In addition, we introduce a lightweight convolutional neural network model that learns features from these action representations in order to recognize the different actions. We evaluated the proposed system on two publicly available datasets using a cross-subject evaluation protocol, and the results showed better performance compared to the existing methods. © 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

Item Fake News Detection in Hindi Using Embedding Techniques (Institute of Electrical and Electronics Engineers Inc., 2022) Shailendra, P.; Rashmi, M.; Ramu, S.; Guddeti, R.M.R.
The number of internet users has increased rapidly in recent years, especially in India, and nearly everything now operates online. Sharing information has become simple and easy thanks to the internet and social media, and almost everyone now shares news in their community without even considering the source of the information. As a result, false, misleading, or fabricated data is widely disseminated.
Detecting fake news is a challenging task because it is presented in a form that looks like authentic information, and the problem becomes even more challenging for local languages. This paper discusses several deep learning models that utilize LSTM, BiLSTM, CNN+LSTM, and CNN+BiLSTM architectures. On the Hindi hostility detection dataset, these models use Word2Vec, IndicNLP fastText, and Facebook's fastText embeddings for fake news detection. The proposed CNN+BiLSTM model with Facebook's fastText embedding achieved an F1-score of 75%, outperforming the baseline model. Additionally, the BiLSTM model using Facebook's fastText outperforms the CNN+BiLSTM model using Facebook's fastText in terms of F1-score. © 2022 IEEE.

Item Multi-stream Multi-attention Deep Neural Network for Context-Aware Human Action Recognition (Institute of Electrical and Electronics Engineers Inc., 2022) Rashmi, M.; Guddeti, R.M.R.
Technological innovations in deep learning have enabled reasonably close solutions to a wide variety of computer vision tasks such as object detection, face recognition, and many more. On the other hand, Human Action Recognition (HAR) is still far from human-level ability due to several challenges, such as the diversity in how actions are performed. Because data are available in multiple modalities, HAR using video data recorded by RGB-D cameras is frequently used in current research. This paper proposes an approach for recognizing human actions using depth and skeleton data captured with the Kinect depth sensor. Attention modules have been introduced in recent years to help focus on the most important features in computer vision tasks, and this paper proposes a multi-stream deep learning model with multiple attention blocks for HAR. First, the action data of the depth and skeletal modalities are represented using two distinct action descriptors, each of which generates an image from the action data gathered over numerous frames. The proposed deep learning model is trained using these descriptors.
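The abstract says each descriptor "generates an image from the action data gathered over numerous frames" without specifying how. One common construction, sketched here under that assumption, stacks the per-frame joint coordinates as rows and rescales the values to the 0-255 range of a grayscale image; the clip below is a made-up toy example, not data from the paper.

```python
def skeleton_to_image(frames):
    """frames: list of frames, each a flat list of joint coordinates.
    Returns a 2D list of 0-255 ints (rows = frames, cols = coordinates),
    i.e. an image-like action descriptor."""
    flat = [v for frame in frames for v in frame]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0          # avoid division by zero for a static clip
    return [[round(255 * (v - lo) / span) for v in frame] for frame in frames]

# Toy clip: 4 frames of 6 coordinates each (2 joints x 3 axes, invented values).
clip = [
    [0.0, 1.0, 0.2, 0.5, 1.2, 0.2],
    [0.1, 1.0, 0.2, 0.6, 1.1, 0.2],
    [0.2, 0.9, 0.2, 0.7, 1.0, 0.2],
    [0.3, 0.9, 0.2, 0.8, 0.9, 0.2],
]
img = skeleton_to_image(clip)
print(len(img), len(img[0]))   # a 4 x 6 pseudo-image
```

A descriptor image of this kind is what a CNN stream, such as those in the model above, can then be trained on.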
Additionally, we propose a set of score fusion techniques for accurate HAR using all the features and the trained CNN + LSTM streams. The proposed method is evaluated on two benchmark datasets using the well-known cross-subject evaluation protocol and achieved 89.83% and 90.7% accuracy on the MSRAction3D and UTD-MHAD datasets, respectively. The experimental results establish the validity and effectiveness of the proposed model. © 2022 IEEE.

Item Vision-based Hand Gesture Interface for Real-time Computer Operation Control (Institute of Electrical and Electronics Engineers Inc., 2022) Praneeth, G.; Recharla, R.; Prakash, A.S.; Rashmi, M.; Guddeti, R.M.R.
Humans typically perform simple actions with hand gestures. If a computer can interpret these gestures, human-computer interaction can be enhanced. This paper proposes a hand gesture-based interface for controlling computer operations using deep learning and a custom dataset. © 2022 IEEE.
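The multi-stream paper above mentions score fusion over the trained streams without detailing the techniques. A minimal sketch of one standard option, weighted-average late fusion of per-stream class scores, is shown below; the stream names, scores, and equal weights are invented for illustration and are not the paper's reported configuration.

```python
def fuse_scores(stream_scores, weights=None):
    """Weighted-average late fusion of per-stream class scores.
    stream_scores: list of score vectors (one per stream, equal length)."""
    n = len(stream_scores)
    weights = weights or [1.0 / n] * n   # default: equal weights
    fused = [0.0] * len(stream_scores[0])
    for w, scores in zip(weights, stream_scores):
        for i, s in enumerate(scores):
            fused[i] += w * s
    return fused

# Toy softmax outputs from two streams over three action classes (invented).
depth_stream    = [0.2, 0.7, 0.1]
skeleton_stream = [0.1, 0.6, 0.3]

fused = fuse_scores([depth_stream, skeleton_stream])
predicted = fused.index(max(fused))
print(fused, predicted)
```

With equal weights both streams agree here, so class 1 wins; unequal weights would let a more reliable stream dominate the final decision.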
