Faculty Publications

Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736

Publications by NITK Faculty

Search Results

Now showing 1 - 7 of 7
  • Item
    Unobtrusive Behavioral Analysis of Students in Classroom Environment Using Non-Verbal Cues
    (Institute of Electrical and Electronics Engineers Inc., 2019) Ashwin, T.S.; Guddeti, G.R.
    Pervasive intelligent learning environments can be made more personalized by adapting teaching strategies to the students' emotional and behavioral engagement. Students' engagement analysis helps to foster those emotions and behavioral patterns that are beneficial to learning, thus improving the effectiveness of the teaching-learning process. Unobtrusive student engagement analysis is performed using the students' non-verbal cues, such as facial expressions, hand gestures, and body postures. Though several techniques exist for classifying the engagement of a single student present in a single image frame, there are limited works on students' engagement analysis in a classroom environment. In this paper, we propose a convolutional neural network architecture for unobtrusive students' engagement analysis using non-verbal cues. The proposed architecture is trained and tested on in-the-wild faces, hand gestures, and body postures of more than 350 students in a classroom environment, with each test image containing multiple students in a single frame. The data annotation is performed using the gold-standard study, and the annotators reliably agree with Cohen's κ = 0.43. We obtained 71% accuracy for students' engagement level classification. Further, a pre-test/post-test analysis was performed, and a positive correlation was observed between the students' engagement and their test performance. © 2013 IEEE.
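The inter-annotator agreement reported above is Cohen's κ, which corrects raw agreement for agreement expected by chance. A minimal sketch of how the statistic is computed; the engagement labels below are hypothetical, not from the paper's dataset:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators' label sequences of equal length."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each annotator's marginal label frequencies.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical engagement labels for six frames (0 = not engaged, 1 = engaged).
a = [1, 1, 0, 1, 0, 0]
b = [1, 0, 0, 1, 0, 1]
print(round(cohens_kappa(a, b), 2))  # → 0.33
```

A κ of 0.43, as reported in the abstract, falls in the range conventionally described as moderate agreement.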
  • Item
    Affective database for e-learning and classroom environments using Indian students’ faces, hand gestures and body postures
    (Elsevier B.V., 2020) Ashwin, T.S.; Guddeti, R.M.R.
    Automatic recognition of the students’ affective states is a challenging task. These affective states are recognized using their facial expressions, hand gestures, and body postures. An intelligent tutoring system and a smart classroom environment can be made more personalized using students’ affective state analysis, which is performed using machine or deep learning techniques. Effective recognition of affective states depends mainly on the quality of the database used. However, there exist very few standard databases for students’ affective state recognition and analysis that work for both e-learning and classroom environments. In this paper, we propose a new affective database for both the e-learning and classroom environments using the students’ facial expressions, hand gestures, and body postures. The database consists of both posed (acted) and spontaneous (natural) expressions, with single and multiple persons in a single image frame, and more than 4000 manually annotated image frames with object localization. The classification was done manually using the gold-standard study for both Ekman's basic emotions and learning-centered emotions, including neutral. The annotators reliably agree when discriminating among the recognized affective states, with Cohen's κ = 0.48. The created database is more robust as it considers various image variants such as occlusion, background clutter, pose, illumination, cultural & regional background, intra-class variations, cropped images, multi-point views, and deformations. Further, we analyzed the classification accuracy of our database using a few state-of-the-art machine and deep learning techniques. Experimental results demonstrate that the convolutional neural network based architecture achieved accuracies of 83% and 76% for detection and classification, respectively. © 2020 Elsevier B.V.
  • Item
    Impact of inquiry interventions on students in e-learning and classroom environments using affective computing framework
    (Springer Science and Business Media B.V., 2020) Ashwin, T.S.; Guddeti, R.M.R.
    Effective teaching strategies improve the students’ learning rate within academic learning time. Inquiry-based instruction is one of the effective teaching strategies used in classrooms, but it has not been adopted in other learning environments such as intelligent tutoring systems, including auto-tutors. In this paper, we propose an automatic inquiry-based instruction teaching strategy, i.e., inquiry intervention using students’ affective states. The proposed model contains two modules: the first consists of the proposed framework for unobtrusively predicting multi-modal students’ affective states (teacher-centric attentive and inattentive states) using facial expressions, hand gestures, and body postures. The second consists of the proposed automated inquiry-based instruction teaching strategy, which compares learning outcomes with and without inquiry intervention using affective state transitions for both an individual student and a group of students. The proposed system is tested on four different learning environments, namely e-learning, flipped classroom, classroom, and webinar environments. Unobtrusive recognition of students’ affective states is performed using deep learning architectures. After student-independent tenfold cross-validation, we obtained an affective state classification accuracy of 77% and an object localization accuracy of 81% using students’ faces, hand gestures, and body postures. The overall experimental results demonstrate a positive correlation (r = 0.74) between students’ affective states and their performance. The proposed inquiry intervention improved the students’ performance, with decreases of 65%, 43%, 43%, and 53% in overall inattentive affective state instances in the e-learning, flipped classroom, classroom, and webinar environments, respectively. © 2020, Springer Nature B.V.
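The r = 0.74 reported above is a Pearson correlation coefficient between affective states and performance. A minimal sketch of how that coefficient is computed; the per-student attentiveness fractions and test scores below are hypothetical, not the paper's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: per-student fraction of attentive instances vs. test score.
attentive = [0.55, 0.70, 0.40, 0.90, 0.65]
scores = [62, 75, 50, 88, 70]
print(pearson_r(attentive, scores))
```

Values near +1 indicate that students with more attentive instances also tend to score higher, which is the direction of the relationship the abstract reports.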
  • Item
    Surveillance video analysis for student action recognition and localization inside computer laboratories of a smart campus
    (Springer, 2021) Rashmi, M.; Ashwin, T.S.; Guddeti, G.R.M.
    In the era of the smart campus, unobtrusive monitoring of students is a challenging task. The monitoring system must be able to recognize and detect the actions performed by the students. Recently, many deep neural network based approaches have been proposed to automate Human Action Recognition (HAR) in different domains, but these have not been explored in learning environments. HAR can be used in classrooms, laboratories, and libraries to make the teaching-learning process more effective. To make the learning process in computer laboratories more effective, in this study we propose a system for the recognition and localization of student actions in still images extracted from Closed-Circuit Television (CCTV) videos. The proposed method uses You Only Look Once (YOLOv3), a state-of-the-art real-time object detection technology, for the localization and recognition of students’ actions. Further, an image template matching method is used to decrease the number of image frames and thus process the video more quickly. Since actions performed by humans are domain-specific and no standard dataset is available for students’ action recognition in smart computer laboratories, we created the STUDENT ACTION dataset using image frames obtained from the CCTV cameras placed in the computer laboratory of a university campus. The proposed method recognizes various actions performed by students at different locations within an image frame. It shows excellent performance in identifying actions with more samples compared to actions with fewer samples. © 2020, Springer Science+Business Media, LLC, part of Springer Nature.
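The frame-reduction idea above — matching each frame against the last one processed and skipping near-duplicates so the detector runs on fewer images — can be sketched as follows. This is a simplified stand-in using mean absolute pixel difference rather than the paper's exact template-matching procedure; the threshold and toy frames are hypothetical:

```python
import numpy as np

def frame_changed(prev, curr, threshold=0.05):
    """True when the mean absolute pixel difference between two grayscale
    frames (values in [0, 1]) exceeds `threshold` -- a simple stand-in for
    template matching against the last kept frame."""
    return float(np.mean(np.abs(prev - curr))) > threshold

def select_keyframes(frames, threshold=0.05):
    """Keep the first frame plus every frame that differs enough from the
    last kept frame; only the kept frames would go to the action detector."""
    kept = [frames[0]]
    for frame in frames[1:]:
        if frame_changed(kept[-1], frame, threshold):
            kept.append(frame)
    return kept

# Hypothetical toy sequence: three identical frames, then a changed one.
rng = np.random.default_rng(0)
base = rng.random((4, 4))
frames = [base, base.copy(), base.copy(), np.clip(base + 0.3, 0, 1)]
print(len(select_keyframes(frames)))  # → 2: the first frame and the changed one
```

Dropping redundant CCTV frames this way trades a small risk of missing brief actions for a large reduction in detector invocations.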
  • Item
    A novel receptive field-regularized V-net and nodule classification network for lung nodule detection
    (John Wiley and Sons Inc, 2022) Dodia, S.; Annappa, B.; Mahesh, M.
    Recent advancements in deep learning have achieved great success in building reliable computer-aided diagnosis (CAD) systems. In this work, a novel deep-learning architecture, named receptive field regularized V-Net (RFR V-Net), is proposed for detecting lung cancer nodules with reduced false positives (FPs). The method applies receptive field regularization to the convolution layers of the encoder block and the deconvolution layers of the decoder block in the V-Net model. Further, nodule classification is performed using a new combination of SqueezeNet and ResNet, named the nodule classification network (NCNet). Post-processing image enhancement is performed on the 2D slices by increasing image intensity through added pseudo-color or fluorescence contrast. The proposed RFR V-Net achieved a Dice similarity coefficient of 95.01% and an intersection over union of 0.83. The proposed NCNet achieved a sensitivity of 98.38% and 2.3 FPs/scan for 3D representations, a considerable improvement over existing CAD systems. © 2021 Wiley Periodicals LLC.
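The Dice similarity coefficient and intersection over union reported above are the standard overlap metrics for segmentation masks. A minimal sketch of both, evaluated on hypothetical toy masks rather than the paper's data:

```python
import numpy as np

def dice_and_iou(pred, truth):
    """Dice similarity coefficient and intersection-over-union for two
    binary masks of the same shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    iou = inter / union
    return dice, iou

# Hypothetical 1-D masks standing in for flattened nodule segmentations.
pred = np.array([1, 1, 1, 0, 0])
truth = np.array([0, 1, 1, 1, 0])
dice, iou = dice_and_iou(pred, truth)
print(round(dice, 2), round(iou, 2))  # → 0.67 0.5
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), so Dice is always the higher of the pair for imperfect overlap, consistent with the 95.01% vs. 0.83 pairing in the abstract.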
  • Item
    Novel edge detection method for nuclei segmentation of liver cancer histopathology images
    (Springer Science and Business Media Deutschland GmbH, 2023) Roy, S.; Das, D.; Lal, S.; Kini, J.
    In automatic cancer detection, nuclei segmentation is an essential step that makes the classification task simpler and computationally more efficient. However, automatic nuclei detection is fraught with the problems of inter-class variability in nuclei sizes and shapes. In this research article, a novel unsupervised edge detection technique is proposed for segmenting the nuclei regions in liver cancer Hematoxylin and Eosin (H&E) stained histopathology images. This edge detection technique incorporates the notion of computing the local standard deviation instead of computing gradients. Since the local standard deviation is correlated with the edge information of an image, the method can extract nuclei edges efficiently, even at multiple scales. The edge-detected image is further converted into a binary image by employing Otsu's thresholding operation (IEEE Trans Syst Man Cybern 9(1):62–66, 1979). Subsequently, an adaptive morphological filter is employed to refine the final segmented image. The proposed nuclei segmentation method is also tested on a well-recognized multi-organ dataset to check its effectiveness over a wide variety of data. The visual results on both datasets indicate that the proposed segmentation method overcomes the limitations of existing unsupervised methods; moreover, its performance is comparable with that of recent deep neural models such as DIST and HoverNet. Furthermore, three quality metrics are computed to measure the performance of several nuclei segmentation methods quantitatively. The mean values of the quality metrics reveal that the proposed segmentation method indeed outperforms the other existing nuclei segmentation methods. © 2021, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
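The pipeline above — a local-standard-deviation edge map followed by Otsu binarization — can be sketched with plain NumPy. This is an illustrative reimplementation of the general idea, not the paper's exact method; the window size, histogram bin count, and toy image are assumptions:

```python
import numpy as np

def local_std_edges(img, k=3):
    """Edge map via local standard deviation in a k x k window: high local
    std marks intensity transitions without computing gradients.
    A dense double loop for clarity, not speed."""
    h, w = img.shape
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].std()
    return out

def otsu_threshold(values, bins=64):
    """Otsu's threshold: pick the histogram split maximizing between-class
    variance w0 * w1 * (m0 - m1)^2."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for t in range(1, bins):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:t] * centers[:t]).sum() / w0
        m1 = (p[t:] * centers[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[t]
    return best_t

# Toy image: dark left half, bright right half -> edges along the boundary.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges_map = local_std_edges(img)
binary = edges_map > otsu_threshold(edges_map.ravel())
```

On this toy image, `binary` is True exactly in the two columns straddling the intensity boundary, where the 3 x 3 window mixes dark and bright pixels.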
  • Item
    KAC SegNet: A Novel Kernel-Based Active Contour Method for Lung Nodule Segmentation and Classification Using Dense AlexNet Framework
    (World Scientific, 2024) Dodia, S.; Annappa, B.; Mahesh, P.A.
    Lung cancer is known to be one of the leading causes of death worldwide. There is a chance of increasing patients' survival rate if it is detected at an early stage. Computed Tomography (CT) scans are prominently used to detect and classify lung cancer nodules/tumors in the thoracic region. There is a need to develop an efficient and reliable computer-aided diagnosis model to detect lung cancer nodules accurately from CT scans. This work proposes a novel kernel-based active contour (KAC) SegNet deep learning model to perform lung cancer nodule detection from CT scans. The active contour uses a snake method to detect the internal and external boundaries of the curves, which is used to extract the Region Of Interest (ROI) from the CT scan. From the extracted ROI, the nodules are further classified into benign and malignant using a Dense AlexNet deep learning model. The key contributions of this work are the fusion of an edge detection method with a deep learning segmentation method, which provides enhanced lung nodule segmentation performance, and an ensemble of state-of-the-art deep learning classifiers, which combines the advantages of both DenseNet and AlexNet to learn more discriminative information from the detected lung nodules. The experimental outcome shows that the proposed segmentation approach achieves a Dice Score Coefficient of 97.8% and an Intersection-over-Union of 92.96%. The classification performance resulted in an accuracy of 95.65%, with False Positive Rate and False Negative Rate values of 0.0572 and 0.0289, respectively. The proposed model is robust compared to the existing state-of-the-art methods. © 2024 World Scientific Publishing Company.
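The False Positive Rate and False Negative Rate quoted above follow from the four confusion-matrix counts of a binary classifier. A minimal sketch; the benign/malignant counts below are hypothetical, chosen only to yield rates in the same ballpark as the abstract's:

```python
def fpr_fnr(tp, fp, tn, fn):
    """False Positive Rate = FP / (FP + TN) (benign nodules flagged malignant);
    False Negative Rate = FN / (FN + TP) (malignant nodules missed)."""
    return fp / (fp + tn), fn / (fn + tp)

# Hypothetical confusion counts for a benign/malignant nodule classifier.
fpr, fnr = fpr_fnr(tp=95, fp=6, tn=94, fn=5)
print(round(fpr, 3), round(fnr, 3))  # → 0.06 0.05
```

In a diagnostic setting the two rates trade off differently: a false negative (missed malignancy) is usually costlier than a false positive, which is why the abstract reports both rather than accuracy alone.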