Conference Papers
Permanent URI for this collection: https://idr.nitk.ac.in/handle/123456789/28506
4 results
Item: Skeleton based Human Action Recognition for Smart City Application using Deep Learning (Institute of Electrical and Electronics Engineers Inc., 2020) Rashmi, M.; Guddeti, R.M.R.
Human Action Recognition (HAR) plays a vital role in several applications, such as surveillance systems, gaming, and robotics. Interpreting the actions performed by a person in a video is one of the essential tasks of intelligent surveillance systems in smart cities, smart buildings, etc. Human actions can be recognized using models such as depth, skeleton, or combinations of these. In this paper, we propose a human action recognition system based on the 3D skeleton model. Since the roles of different joints vary while performing an action, the proposed work uses the most informative distances and angles between joints in the skeleton model as a feature set. Further, we propose a deep learning framework for human action recognition based on these features. We performed experiments on MSRAction3D, a publicly available dataset for 3D HAR, and the results demonstrate that the proposed framework obtained accuracies of 95.83%, 92.9%, and 98.63% on the three dataset subsets AS1, AS2, and AS3, respectively, using the protocols of [19]. © 2020 IEEE.

Item: Skeleton-Based Human Action Recognition Using Motion and Orientation of Joints (Springer Science and Business Media Deutschland GmbH, 2022) Ghosh, S.K.; Rashmi, M.; Mohan, B.R.; Guddeti, R.M.R.
Perceiving human actions accurately from a video is one of the most challenging tasks demanded by many real-time applications in smart environments. Recently, several approaches have been proposed for representing and recognizing human actions in videos using different data modalities. Especially in the case of images, deep learning-based approaches have demonstrated their classification efficiency.
Here, we propose an effective framework for representing actions based on features obtained from 3D skeleton data of humans performing actions. We utilize the motion, pose orientation, and transition orientation of skeleton joints for action representation. In addition, we introduce a lightweight convolutional neural network model that learns features from these action representations to recognize different actions. We evaluated the proposed system on two publicly available datasets using a cross-subject evaluation protocol, and the results show better performance compared with existing methods. © 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

Item: Vision-based Hand Gesture Interface for Real-time Computer Operation Control (Institute of Electrical and Electronics Engineers Inc., 2022) Praneeth, G.; Recharla, R.; Prakash, A.S.; Rashmi, M.; Guddeti, R.M.R.
Humans typically perform simple actions with hand gestures. If a computer can interpret these gestures, human-computer interaction can be enhanced. This paper proposes a hand gesture-based interface for controlling computer operations using deep learning and a custom dataset. © 2022 IEEE.

Item: Fall Detection and Elderly Monitoring System Using the CNN (Springer Science and Business Media Deutschland GmbH, 2023) Reddy Anakala, V.M.; Rashmi, M.; Natesha, B.V.; Reddy Guddeti, R.M.
Fall detection has become a critical concern in the medical and healthcare fields due to the growing elderly population. Research on fall and movement detection using wearable devices has made strides. Accurately recognizing fall behavior in surveillance video and providing early feedback can significantly reduce fall-related injuries and deaths among elderly people. However, fall events are highly dynamic, which impairs categorization accuracy.
The current study sought to construct a deep learning-based fall detection architecture to predict falls and Activities of Daily Living (ADLs). This paper proposes an efficient method for representing extracted features as RGB images, together with a CNN model that learns the features needed for accurate fall detection. Additionally, the proposed CNN model is used to detect and locate the target in video using threshold-based categorization. The proposed CNN model was evaluated on the SisFall dataset and was found capable of detecting falls prior to impact with a sensitivity of 100%, a specificity of 96.48%, and a response time of 223 ms. The experimental findings attained an overall accuracy of 97.43%. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
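The skeleton-based entries above use inter-joint distances and angles as action features. A minimal sketch of such feature extraction is shown below; the joint names, helper functions, and the toy coordinates are illustrative assumptions, not the papers' actual code or joint selection.

```python
import numpy as np

def joint_distance(a, b):
    """Euclidean distance between two 3D joint positions."""
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def joint_angle(a, b, c):
    """Angle (radians) at joint b formed by the segments b->a and b->c."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Toy skeleton frame: shoulder, elbow, wrist positions (x, y, z)
shoulder, elbow, wrist = (0.0, 1.5, 0.0), (0.3, 1.2, 0.0), (0.6, 1.2, 0.0)
features = [
    joint_distance(shoulder, wrist),      # reach extent
    joint_angle(shoulder, elbow, wrist),  # elbow flexion angle
]
```

Per-frame feature vectors like this would then be stacked over time and fed to the deep learning classifier.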

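The fall-detection entry encodes extracted features as RGB images before CNN classification. A minimal sketch of one plausible encoding follows; the windowing, normalization, and channel layout are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def features_to_rgb(window: np.ndarray) -> np.ndarray:
    """Scale a (time x feature) window to [0, 255] and tile it into 3 channels."""
    lo, hi = window.min(), window.max()
    scaled = (window - lo) / (hi - lo + 1e-8) * 255.0
    img = scaled.astype(np.uint8)
    return np.stack([img, img, img], axis=-1)  # shape: (time, feature, 3)

# Example: a 64-sample window of 9 sensor-derived feature channels
rgb = features_to_rgb(np.random.default_rng(0).normal(size=(64, 9)))
```

The resulting pseudo-RGB image can be passed to a standard image-classification CNN without architectural changes.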