Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
Search Results
6 results
Item: Students’ affective content analysis in smart classroom environment using deep learning techniques (Springer New York LLC, 2019) Gupta, S.K.; Ashwin, T.S.; Guddeti, R.M.R.
In the era of the smart classroom environment, students’ affective content analysis plays a vital role, as it helps to foster the affective states that are beneficial to learning. Several techniques aim to improve the learning rate using students’ affective content analysis in the classroom. In this paper, a novel max-margin face detection based method for students’ affective content analysis using their facial expressions is proposed. The affective content analysis covers four different student moods, namely: High Positive Affect, Low Positive Affect, High Negative Affect, and Low Negative Affect. Engagement scores are calculated based on the four moods predicted by the proposed method. Further, classroom engagement analysis is performed by considering the entire classroom as one group and computing the corresponding group engagement score. Expert feedback and the analyzed affect content videos are given as feedback to the faculty member to improve the teaching strategy and thereby improve the students’ learning rate. The proposed smart classroom system was tested with more than 100 students of four different Information Technology courses and the corresponding faculty members at National Institute of Technology Karnataka Surathkal, Mangalore, India. The experimental results demonstrate training and testing accuracies of 90.67% and 87.65%, respectively, for mood classification. Furthermore, an analysis was performed on the incidence, distribution, and temporal dynamics of students’ affective states, and promising results were obtained.
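The group engagement analysis described in this abstract can be sketched as a mean over per-student mood weights. The mood-to-weight mapping below is a hypothetical illustration, not the paper's published formula.

```python
# Hypothetical sketch of a group engagement score: each student's
# detected mood maps to an engagement weight, and the group score is
# the mean over all detected students. The weights are illustrative
# assumptions, not values from the paper.

MOOD_WEIGHTS = {
    "High Positive Affect": 1.0,
    "Low Positive Affect": 0.7,
    "Low Negative Affect": 0.4,
    "High Negative Affect": 0.1,
}

def group_engagement_score(student_moods):
    """Average engagement weight over all detected students."""
    if not student_moods:
        raise ValueError("no students detected")
    return sum(MOOD_WEIGHTS[m] for m in student_moods) / len(student_moods)

classroom = ["High Positive Affect", "Low Positive Affect", "High Negative Affect"]
print(round(group_engagement_score(classroom), 2))  # -> 0.6
```

Treating the whole classroom as one group, as the abstract does, then reduces to averaging over every face detected in the frame.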
© 2019, Springer Science+Business Media, LLC, part of Springer Nature.

Item: Affective database for e-learning and classroom environments using Indian students’ faces, hand gestures and body postures (Elsevier B.V., 2020) Ashwin, T.S.; Guddeti, R.M.R.
Automatic recognition of students’ affective states is a challenging task. These affective states are recognized using facial expressions, hand gestures, and body postures. An intelligent tutoring system and a smart classroom environment can be made more personalized using students’ affective state analysis, which is performed using machine or deep learning techniques. Effective recognition of affective states depends mainly on the quality of the database used, yet very few standard databases exist for students’ affective state recognition and analysis that work for both e-learning and classroom environments. In this paper, we propose a new affective database for both e-learning and classroom environments using students’ facial expressions, hand gestures, and body postures. The database consists of both posed (acted) and spontaneous (natural) expressions, with single and multiple persons in a single image frame, and comprises more than 4000 manually annotated image frames with object localization. The classification was done manually using a gold-standard study for both Ekman’s basic emotions and learning-centered emotions, including neutral. The annotators reliably agree when discriminating among the recognized affective states, with Cohen’s κ = 0.48. The created database is robust as it covers various image variants such as occlusion, background clutter, pose, illumination, cultural & regional background, intra-class variations, cropped images, multiple viewpoints, and deformations. Further, we analyzed the classification accuracy of our database using a few state-of-the-art machine and deep learning techniques.
Experimental results demonstrate that the convolutional neural network based architecture achieved accuracies of 83% and 76% for detection and classification, respectively.
© 2020 Elsevier B.V.

Item: Impact of inquiry interventions on students in e-learning and classroom environments using affective computing framework (Springer Science and Business Media B.V., 2020) Ashwin, T.S.; Guddeti, R.M.R.
Effective teaching strategies improve the students’ learning rate within academic learning time. Inquiry-based instruction is one of the effective teaching strategies used in classrooms, but such strategies are not adopted in other learning environments such as intelligent tutoring systems, including auto tutors. In this paper, we propose an automatic inquiry-based instruction teaching strategy, i.e., inquiry intervention using students’ affective states. The proposed model contains two modules: the first is a framework for unobtrusively predicting multi-modal students’ affective states (teacher-centric attentive and in-attentive states) using facial expressions, hand gestures, and body postures; the second is an automated inquiry-based instruction teaching strategy that compares learning outcomes with and without inquiry intervention using affective state transitions for both an individual student and a group of students. The proposed system is tested in four different learning environments, namely: e-learning, flipped classroom, classroom, and webinar. Unobtrusive recognition of students’ affective states is performed using deep learning architectures. After student-independent tenfold cross-validation, we obtained a students’ affective state classification accuracy of 77% and an object localization accuracy of 81% using students’ faces, hand gestures, and body postures.
The overall experimental results demonstrate a positive correlation of r = 0.74 between students’ affective states and their performance. The proposed inquiry intervention improved the students’ performance, with decreases of 65%, 43%, 43%, and 53% in overall in-attentive affective state instances using the inquiry interventions in the e-learning, flipped classroom, classroom, and webinar environments, respectively.
© 2020, Springer Nature B.V.

Item: Multimodal behavior analysis in computer-enabled laboratories using nonverbal cues (Springer Science and Business Media Deutschland GmbH, 2020) Banerjee, S.; Ashwin, T.S.; Guddeti, R.M.R.
In the modern era, there is a growing need for surveillance to ensure the safety and security of people. Real-time object detection is crucial for many applications such as traffic monitoring, security, search and rescue, vehicle counting, and classroom monitoring. Computer-enabled laboratories in a smart campus are generally equipped with video surveillance cameras. However, the existing literature shows that video surveillance data obtained from a smart campus is seldom used for unobtrusive behavioral analysis. Although there are several works on students’ and teachers’ behavior recognition using devices such as Kinect and handheld cameras, no existing work extracts video surveillance data and predicts the behavioral patterns of both students and teachers in real time. Hence, in this study, we unobtrusively analyze students’ and teachers’ behavioral patterns inside a teaching laboratory, considered an indoor scenario of a smart campus. We propose a deep convolutional network architecture to classify and recognize objects in this indoor scenario, i.e., the teaching laboratory environment of the smart campus, using a modified Single-Shot MultiBox Detector approach.
We used six different class labels for predicting the behavioral patterns of both the students and the teachers, and created our own dataset with these six class labels for training the deep learning architecture. The performance evaluation demonstrates that the proposed method performs well, with an accuracy of 0.765 for classification and localization.
© 2020, Springer-Verlag London Ltd., part of Springer Nature.

Item: An Optimized Question Classification Framework Using Dual-Channel Capsule Generative Adversarial Network and Atomic Orbital Search Algorithm (Institute of Electrical and Electronics Engineers Inc., 2023) Revanesh, M.; Rudra, B.; Guddeti, R.M.R.
Advances in education have emphasized the need to evaluate the quality of examination questions and the cognitive levels of students. Many educational institutions now evaluate students’ cognitive levels of subject-related learning using Bloom’s taxonomy. Therefore, in this paper, a novel optimized examination question classification framework, referred to as QC-DcCapsGAN-AOSA, is proposed by combining the Dual-channel Capsule Generative Adversarial Network (DcCapsGAN) with the Atomic Orbital Search Algorithm (AOSA). The framework preprocesses a real-time online dataset of university examination questions, identifies the key features from the raw data using Term Frequency–Inverse Document Frequency (TF-IDF), and finally classifies the examination questions. The Atomic Orbital Search Algorithm is used to fine-tune the parameter weights of the DcCapsGAN, which are then used to categorize questions into the Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation levels. Experimental results demonstrate the superiority of the proposed method (QC-DcCapsGAN-AOSA) over state-of-the-art methods such as QC-LSTM-CNN and QC-BiGRU-CNN, with accuracy improvements of 23.65% and 29.04%, respectively.
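The TF-IDF feature-extraction step mentioned in this abstract can be sketched with the standard definition (term frequency scaled by inverse document frequency). The sample exam questions below are illustrative assumptions, not the paper's dataset.

```python
# Minimal TF-IDF sketch of the feature-extraction step described above.
# tf = raw count / document length; idf = log(N / number of documents
# containing the term). The sample questions are illustrative only.
import math
from collections import Counter

def tfidf(docs):
    """Return one {term: weight} dict per document."""
    tokenized = [doc.lower().split() for doc in docs]
    n_docs = len(tokenized)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in tokenized for term in set(doc))
    vectors = []
    for doc in tokenized:
        counts = Counter(doc)
        vectors.append({
            term: (count / len(doc)) * math.log(n_docs / df[term])
            for term, count in counts.items()
        })
    return vectors

questions = [
    "define the term operating system",
    "compare paging and segmentation",
    "define deadlock and compare prevention strategies",
]
vecs = tfidf(questions)
# "define" occurs in 2 of 3 questions: tf = 1/5, idf = log(3/2).
print(round(vecs[0]["define"], 4))  # -> 0.0811
```

A classifier such as the DcCapsGAN named in the abstract would then consume these per-question weight vectors as input features.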
© 2013 IEEE.

Item: Federated learning approach for human activity recognition in online examination environment (Springer, 2025) Ramu, S.; Guddeti, R.M.R.; Mohan, B.R.
In recent years, online exams have become a key method for assessing students’ knowledge and skills. However, with the rise of e-learning, conducting these exams has introduced new challenges, especially the increasing tendency of students to cheat during online assessments. To address this, student activities during online exams are monitored through Human Activity Recognition (HAR) to detect behaviors that indicate cheating. HAR is a system capable of recognizing various human activities from observational data. This study focuses on detecting student behavior during online exams, categorizing normal activities as non-cheating and abnormal activities as potential cheating or malpractice. For this purpose, a federated learning architecture was utilized to process online exam data, in which we implemented federated models, namely Federated-ResNet50, Federated-DenseNet121, Federated-VGG16, and Federated-CNN, to classify student activities. An OEP dataset comprising various activities, such as using mobile devices, copying from notes, abnormal head gaze, and normal behavior, is used in this work. Model performance for classification was evaluated using the accuracy, precision, recall, and F1-score metrics, and the results were compared across the federated models. Among these, Federated-ResNet50 performed the best, achieving an accuracy of 91.28%. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.
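The federated setup used in the last item can be sketched as FedAvg-style aggregation: each client trains locally on its own exam recordings, shares only model weights, and the server averages them weighted by client dataset size. The toy 1-D weight lists below stand in for real network parameters; they are illustrative assumptions, not the paper's architecture.

```python
# Hypothetical FedAvg-style aggregation sketch: the server combines
# locally trained client weights into a global model, weighting each
# client by the number of samples it trained on. Raw exam video never
# leaves the client; only these weights are shared.

def federated_average(client_weights, client_sizes):
    """Weighted average of client model weights (FedAvg aggregation)."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Three clients with locally trained toy weights and dataset sizes.
weights = [[0.25, 0.5], [0.5, 1.0], [0.75, 0.5]]
sizes = [100, 200, 100]
print(federated_average(weights, sizes))  # -> [0.5, 0.75]
```

In the setting the abstract describes, each backbone (e.g., ResNet50) would be trained locally per client and only its parameter tensors aggregated this way each round.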
