Skeleton-Based Human Action Recognition Using Motion and Orientation of Joints

dc.contributor.author: Ghosh, S.K.
dc.contributor.author: Rashmi, M.
dc.contributor.author: Mohan, B.R.
dc.contributor.author: Guddeti, R.M.R.
dc.date.accessioned: 2026-02-06T06:35:36Z
dc.date.issued: 2022
dc.description.abstract: Perceiving human actions accurately from video is one of the most challenging tasks demanded by many real-time applications in smart environments. Recently, several approaches have been proposed for representing human actions and recognizing them from videos using different data modalities. In the case of images in particular, deep learning-based approaches have demonstrated their classification efficiency. Here, we propose an effective framework for representing actions based on features obtained from the 3D skeleton data of humans performing actions. The proposed work utilizes the motion, pose orientation, and transition orientation of skeleton joints for action representation. In addition, we introduce a lightweight convolutional neural network model that learns features from these action representations in order to recognize the different actions. We evaluated the proposed system on two publicly available datasets using a cross-subject evaluation protocol, and the results showed better performance compared to existing methods. © 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
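The abstract above mentions motion and orientation features derived from 3D skeleton joints. As a hedged illustration only (not the authors' published implementation, whose exact feature definitions are not given here), a minimal sketch of two such feature types, per-joint frame-to-frame motion and bone orientation vectors, could look like the following, assuming a skeleton sequence stored as a `(frames, joints, 3)` NumPy array and a hypothetical bone list of joint-index pairs:

```python
import numpy as np

def joint_motion(skeleton):
    """Frame-to-frame displacement of each joint.

    skeleton: (T, J, 3) array of T frames, J joints, xyz coordinates.
    Returns a (T-1, J, 3) array of per-joint motion vectors.
    """
    return np.diff(skeleton, axis=0)

def bone_orientation(skeleton, bones):
    """Unit direction vectors of bones in every frame.

    bones: list of (parent, child) joint-index pairs (an assumption here;
    the real skeleton topology depends on the capture device).
    Returns a (T, len(bones), 3) array of unit vectors.
    """
    vecs = np.stack([skeleton[:, b] - skeleton[:, a] for a, b in bones], axis=1)
    norms = np.linalg.norm(vecs, axis=-1, keepdims=True)
    return vecs / np.clip(norms, 1e-8, None)  # avoid division by zero

# Usage sketch with a random 10-frame, 5-joint sequence:
seq = np.random.rand(10, 5, 3)
motion = joint_motion(seq)                    # shape (9, 5, 3)
orient = bone_orientation(seq, [(0, 1), (1, 2)])  # shape (10, 2, 3)
```

In a pipeline like the one the abstract describes, arrays of this kind would typically be stacked into an image-like representation and fed to a lightweight CNN classifier.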
dc.identifier.citation: Lecture Notes in Electrical Engineering, 2022, Vol. 858, p. 75-86
dc.identifier.issn: 1876-1100
dc.identifier.uri: https://doi.org/10.1007/978-981-19-0840-8_6
dc.identifier.uri: https://idr.nitk.ac.in/handle/123456789/29947
dc.publisher: Springer Science and Business Media Deutschland GmbH
dc.subject: Convolutional neural networks (CNNs)
dc.subject: Cross-subject protocol
dc.subject: Deep learning
dc.subject: Human action recognition (HAR)
dc.subject: Motion and orientation of joints (MOJ)
dc.title: Skeleton-Based Human Action Recognition Using Motion and Orientation of Joints
</dc.title>