Faculty Publications

Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736

Publications by NITK Faculty

Search Results

Now showing 1 - 4 of 4
  • Item
    Multimodal group activity state detection for classroom response system using convolutional neural networks
(Springer Verlag, 2019) Sebastian, A.G.; Singh, S.; Manikanta, P.B.T.; Ashwin, T.S.; Guddeti, R.M.R.
Human–Computer Interaction is a crucial and emerging field in computer science, because computers are replacing humans in many service-providing roles and must therefore interact with people much as people interact with one another. When humans talk to each other, they gain feedback from how the other person responds non-verbally. Since computers now interact with humans, they need to detect these facial cues and adjust their services according to this feedback. Our proposed method builds a Multimodal Group Activity State Detection for Classroom Response System that recognizes the learning behavior of a classroom in order to provide effective feedback and inputs to the teacher. The key challenge addressed here is to detect and analyze as many students as possible, for an unbiased evaluation of the students' mood, and to classify them into three defined activity states: active, passive, and inactive. © Springer Nature Singapore Pte Ltd. 2019
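The abstract above describes classifying each detected student into one of three activity states and then judging the mood of the whole class. A minimal sketch of that aggregation step, assuming per-student state labels come from the CNN and using a simple majority vote (the function name and voting rule are illustrative assumptions, not the paper's exact method):

```python
from collections import Counter

# The three activity states defined in the paper.
STATES = ("active", "passive", "inactive")

def classroom_state(per_student_states):
    """Return the most common activity state among detected students.

    Ties are broken by the order in STATES (active > passive > inactive);
    this tie-break is an assumption for the sketch.
    """
    if not per_student_states:
        raise ValueError("no students detected")
    counts = Counter(per_student_states)
    return max(STATES, key=lambda s: counts.get(s, 0))

print(classroom_state(["active", "passive", "active", "inactive"]))  # active
```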
  • Item
    Human Activity Recognition for Online Examination Environment Using CNN
    (Springer Science and Business Media Deutschland GmbH, 2023) Ramu, S.; Guddeti, R.M.R.; Mohan, B.R.
Human Activity Recognition (HAR) is an intelligent system that recognizes activities based on a sequence of observations of human behavior. Human activity recognition is essential in human-to-human interactions for identifying interesting patterns. Extracting such patterns is difficult, since the behavior encodes information about a person’s identity, personality, and state of mind. Many studies have been conducted on recognizing human behavior using machine learning techniques. However, HAR in an online examination environment has not yet been explored. As a result, the primary focus of this work is the recognition of human activity in the context of an online examination. This work aims to classify normal and abnormal behavior during an online examination employing the Convolutional Neural Network (CNN) technique. In this work, we considered two-, three-, and four-layer CNN architectures and fine-tuned their hyper-parameters to obtain better results. The three-layer CNN architecture outperformed the other CNN architectures in terms of accuracy. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
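The comparison described above, training CNNs of different depths and keeping the most accurate one, amounts to a small model-selection step. A sketch of that step, where the validation accuracies are made-up placeholder numbers, not results from the paper:

```python
def best_depth(results):
    """Pick the CNN depth (number of conv layers) with the highest accuracy.

    `results` maps depth -> validation accuracy; the winner would then be
    retrained or deployed.
    """
    return max(results, key=results.get)

# Hypothetical accuracies for the two-, three-, and four-layer variants.
val_acc = {2: 0.88, 3: 0.93, 4: 0.91}
print(best_depth(val_acc))  # 3
```

In practice each entry of `val_acc` would come from training one architecture and evaluating it on a held-out validation split.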
  • Item
    Automatic detection of students’ affective states in classroom environment using hybrid convolutional neural networks
    (Springer, 2020) Ashwin, A.; Guddeti, R.M.R.
Predicting the students’ emotional and behavioral engagement using computer vision techniques is a challenging task. Though there are several state-of-the-art techniques for analyzing a student’s affective states in an e-learning environment (a single person’s engagement detection in a single image frame), very few works are available for analyzing the students’ affective states in a classroom environment (multiple people in a single image frame). Hence, in this paper, we propose a novel hybrid convolutional neural network (CNN) architecture for analyzing the students’ affective states in a classroom environment. This proposed architecture consists of two models: the first model (CNN-1) is designed to analyze the affective states of a single student in a single image frame, and the second model (CNN-2) handles multiple students in a single image frame. Thus, our proposed hybrid architecture predicts the overall affective state of the entire class. The proposed architecture uses the students’ facial expressions, hand gestures and body postures for analyzing their affective states. Further, due to the unavailability of standard datasets for students’ affective state analysis, we created, annotated and tested on our own dataset of over 8000 single-face image frames and 12000 multi-face image frames with three different affective states, namely engaged, boredom, and neutral. The experimental results demonstrate an accuracy of 86% and 70% for posed and spontaneous affective states of classroom data, respectively. © 2019, Springer Science+Business Media, LLC, part of Springer Nature.
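The hybrid architecture above must turn many per-student predictions into one class-level affective state. A minimal sketch of one plausible fusion rule, assuming the per-face model emits a probability vector over the three states and averaging those vectors (the averaging rule is an illustrative assumption, not the paper's exact hybrid mechanism):

```python
# The three affective states annotated in the paper's dataset.
STATES = ("engaged", "boredom", "neutral")

def overall_affective_state(per_face_probs):
    """Average per-face probability vectors and return the dominant state.

    per_face_probs: list of length-3 tuples, one per detected face,
    each summing to ~1.0.
    """
    n = len(per_face_probs)
    if n == 0:
        raise ValueError("no faces detected")
    mean = [sum(p[i] for p in per_face_probs) / n for i in range(len(STATES))]
    return STATES[mean.index(max(mean))]

faces = [(0.7, 0.2, 0.1), (0.5, 0.3, 0.2), (0.2, 0.6, 0.2)]
print(overall_affective_state(faces))  # engaged
```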
  • Item
    Deep learning-based multi-view 3D-human action recognition using skeleton and depth data
    (Springer, 2023) Ghosh, S.K.; Rashmi, M.; Mohan, B.R.; Guddeti, R.M.R.
Human Action Recognition (HAR) is a fundamental challenge that smart surveillance systems must overcome. With the rising affordability of capturing human actions with more advanced depth cameras, HAR has garnered increased interest over the years; however, the majority of these efforts have focused on single-view HAR. Recognizing human actions from arbitrary viewpoints is more challenging, as the same action is observed differently from different angles. This paper proposes a multi-stream Convolutional Neural Network (CNN) model for multi-view HAR using depth and skeleton data. We also propose a novel and efficient depth descriptor, Edge Detected-Motion History Image (ED-MHI), based on Canny Edge Detection and Motion History Image. In addition, the proposed skeleton descriptor, Motion and Orientation of Joints (MOJ), represents the action by using joint motion and orientation. Experimental results on two human-action datasets, NUCLA Multiview Action3D and NTU RGB-D, using a cross-subject evaluation protocol demonstrate that the proposed system outperforms state-of-the-art works, with 93.87% and 85.61% accuracy, respectively. © 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
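The ED-MHI descriptor above builds on the standard Motion History Image, where each pixel is set to a maximum timestamp value when motion occurs and otherwise decays over time. A sketch of that MHI update in NumPy, assuming the per-frame motion masks would come from Canny edge detection on frame differences; the toy masks and the history length `tau` below are illustrative assumptions:

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=10):
    """One MHI step: set moving pixels to tau, decay others by 1 (floor 0)."""
    decayed = np.maximum(mhi - 1, 0)
    return np.where(motion_mask, tau, decayed)

mhi = np.zeros((3, 3))
mask1 = np.array([[1, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=bool)
mask2 = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=bool)
mhi = update_mhi(mhi, mask1)  # pixel (0,0) set to 10
mhi = update_mhi(mhi, mask2)  # (0,0) decays to 9, (0,1) set to 10
```

Recent motion thus appears bright and older motion progressively darker; in ED-MHI the motion masks would be derived from detected edges rather than raw frame differences.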