Title: Skeleton-Based Human Action Recognition Using Motion and Orientation of Joints
Authors: Ghosh, S.K.; Rashmi, M.; Mohan, B.R.; Guddeti, R.M.R.
Published: Lecture Notes in Electrical Engineering, 2022, Vol. 858, pp. 75-86
ISSN: 1876-1100
Date issued: 2022
Date recorded: 2026-02-06
DOI: https://doi.org/10.1007/978-981-19-0840-8_6
URI: https://idr.nitk.ac.in/handle/123456789/29947

Abstract: Perceiving human actions accurately from video is one of the most challenging tasks demanded by many real-time applications in smart environments. Recently, several approaches have been proposed for representing human actions and recognizing them from videos using different data modalities. Especially in the case of images, deep learning-based approaches have demonstrated their classification efficiency. Here, we propose an effective framework for representing actions based on features obtained from the 3D skeleton data of humans performing actions. In the proposed work, we utilize the motion, pose orientation, and transition orientation of skeleton joints for action representation. In addition, we introduce a lightweight convolutional neural network model that learns features from the action representations in order to recognize the different actions. We evaluated the proposed system on two publicly available datasets using a cross-subject evaluation protocol, and the results showed better performance than existing methods.

Copyright: © 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

Keywords: Convolutional neural networks (CNNs); Cross-subject protocol; Deep learning; Human action recognition (HAR); Motion and orientation of joints (MOJ)
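The abstract mentions motion and pose-orientation features computed from 3D skeleton joints. A minimal sketch of what such features could look like is given below; this is an illustration only, not the authors' implementation, and the array shapes and the `bone_pairs` skeleton topology are assumptions for the example.

```python
import numpy as np

def motion_features(skeleton):
    """Per-joint motion vectors between consecutive frames.

    skeleton: array of shape (T, J, 3) -- T frames, J joints, (x, y, z).
    Returns an array of shape (T - 1, J, 3).
    """
    # Frame-to-frame displacement of every joint.
    return np.diff(skeleton, axis=0)

def pose_orientation(skeleton, bone_pairs):
    """Unit direction vectors of bones (joint pairs) in each frame.

    bone_pairs: list of (parent, child) joint indices -- a hypothetical
    topology chosen for illustration.
    Returns an array of shape (T, len(bone_pairs), 3).
    """
    vecs = np.stack(
        [skeleton[:, c] - skeleton[:, p] for p, c in bone_pairs], axis=1
    )
    norms = np.linalg.norm(vecs, axis=-1, keepdims=True)
    # Normalize so each bone is represented by its direction only.
    return vecs / np.clip(norms, 1e-8, None)

# Toy sequence: 4 frames, 3 joints, 3D coordinates.
seq = np.arange(4 * 3 * 3, dtype=float).reshape(4, 3, 3)
m = motion_features(seq)                      # shape (3, 3, 3)
o = pose_orientation(seq, [(0, 1), (1, 2)])   # shape (4, 2, 3)
```

In the paper these cues feed a lightweight CNN classifier; here only the feature extraction step is sketched.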