Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
Search Results (5 items)
Item: A novel sentiment analysis of social networks using supervised learning (Springer-Verlag Wien, 2014) Anjaria, M.; Guddeti, R.M.R.
Online microblog-based social networks are used for expressing public opinion through short messages. Among popular microblogs, Twitter has attracted the attention of researchers in areas such as predicting consumer brand perception, democratic electoral events, movie box-office performance, the popularity of celebrities, the stock market, etc. Sentiment analysis over a Twitter-based social network offers a fast and efficient way of monitoring public sentiment. This paper studies the sentiment prediction task over Twitter using machine-learning techniques, taking into account Twitter-specific social network structure such as retweets. We also concentrate on finding both direct and extended terms related to an event and thereby understanding its effect. We employed supervised machine-learning techniques such as support vector machines (SVM), Naive Bayes, maximum entropy and artificial neural networks to classify Twitter data using unigram, bigram and unigram + bigram (hybrid) feature extraction models for the case studies of the 2012 US Presidential Elections and the 2013 Karnataka State Assembly Elections (India). Further, we combined the results of sentiment analysis with an influence factor generated from the retweet count to improve the prediction accuracy of the task. Experimental results demonstrate that SVM outperforms all other classifiers, with a maximum accuracy of 88% in predicting the outcome of the US Elections 2012 and 68% for the Indian State Assembly Elections 2013. © 2014, Springer-Verlag Wien.

Item: Affective database for e-learning and classroom environments using Indian students' faces, hand gestures and body postures (Elsevier B.V., 2020) Ashwin, T.S.; Guddeti, R.M.R.
Automatic recognition of students' affective states is a challenging task. These affective states are recognized from facial expressions, hand gestures, and body postures. An intelligent tutoring system or smart classroom environment can be made more personalized through students' affective state analysis, performed using machine or deep learning techniques. Effective recognition of affective states depends mainly on the quality of the database used, but very few standard databases exist for students' affective state recognition and analysis that work for both e-learning and classroom environments. In this paper, we propose a new affective database for both the e-learning and classroom environments using students' facial expressions, hand gestures, and body postures. The database consists of both posed (acted) and spontaneous (natural) expressions, with single and multiple persons per image frame, and more than 4000 manually annotated image frames with object localization. The classification was done manually using a gold-standard study for both Ekman's basic emotions and learning-centered emotions, including neutral. The annotators reliably agree when discriminating among the recognized affective states, with Cohen's κ = 0.48. The created database is more robust as it considers various image variants such as occlusion, background clutter, pose, illumination, cultural and regional background, intra-class variations, cropped images, multiple viewpoints, and deformations. Further, we analyzed the classification accuracy on our database using a few state-of-the-art machine and deep learning techniques. Experimental results demonstrate that the convolutional neural network based architecture achieved accuracies of 83% and 76% for detection and classification, respectively. © 2020 Elsevier B.V.

Item: Impact of inquiry interventions on students in e-learning and classroom environments using affective computing framework (Springer Science and Business Media B.V., 2020) Ashwin, T.S.; Guddeti, R.M.R.
Effective teaching strategies improve students' learning rate within academic learning time. Inquiry-based instruction is one of the effective teaching strategies used in classrooms, but such strategies have not been adopted in other learning environments such as intelligent tutoring systems, including auto-tutors. In this paper, we propose an automatic inquiry-based instruction teaching strategy, i.e., inquiry intervention using students' affective states. The proposed model contains two modules: the first is a framework for unobtrusive, multi-modal prediction of students' affective states (teacher-centric attentive and inattentive states) from facial expressions, hand gestures and body postures; the second is an automated inquiry-based instruction teaching strategy that compares learning outcomes with and without inquiry intervention using affective state transitions, for both individual students and groups. The proposed system is tested on four different learning environments: e-learning, flipped classroom, classroom and webinar. Unobtrusive recognition of students' affective states is performed using deep learning architectures. After student-independent tenfold cross-validation, we obtained an affective state classification accuracy of 77% and an object localization accuracy of 81% using students' faces, hand gestures and body postures. The overall experimental results demonstrate a positive correlation (r = 0.74) between students' affective states and their performance. The proposed inquiry intervention improved students' performance, with decreases of 65%, 43%, 43%, and 53% in overall inattentive affective state instances in the e-learning, flipped classroom, classroom and webinar environments, respectively. © 2020, Springer Nature B.V.

Item: Exploiting skeleton-based gait events with attention-guided residual deep learning model for human identification (Springer, 2023) Rashmi, M.; Guddeti, R.M.R.
Human identification using unobtrusive visual features is a daunting task in smart environments. Gait is an adequate biometric feature when the camera cannot correctly capture the human face due to environmental factors. In recent years, gait-based human identification using skeleton data has been intensively studied using a variety of feature extractors and increasingly sophisticated deep learning models. Although skeleton data is susceptible to changes in covariate variables, resulting in noisy data, most existing algorithms employ a single feature extraction technique for all frames to generate frame-level feature maps. This results in degraded performance and redundant features, necessitating increased computing power. This paper proposes a robust feature extractor that extracts a quantitative summary of gait event-specific information, thereby reducing the total number of features throughout the gait cycle. In addition, a novel attention-guided LSTM-based deep learning model with residual connections is proposed to learn the extracted features for gait recognition. The proposed approach outperforms state-of-the-art works on five publicly available datasets across various benchmark evaluation protocols and metrics. Further, the CMC test revealed that the proposed model obtained accuracy higher than 97% at lower ranks on these datasets. © 2023, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.

Item: A framework for low cost, ubiquitous and interactive smart refrigerator (Springer, 2024) Mundody, S.; Guddeti, R.M.R.
Internet of Things (IoT) and Artificial Intelligence (AI)-enabled technologies are essential in developing innovative environments and intelligent applications. IoT- and AI-enabled appliances are entering our kitchens, adding comfort and usability. However, these appliances are not economical and are beyond the reach of consumers with moderate incomes. The intelligent fridge is one such appliance. This paper proposes a design for a cost-effective, ubiquitous, and intelligent refrigerator. Unlike existing approaches, the proposed method identifies and predicts fridge items from night-vision images and provides minimal natural language interaction with the fridge. The design aims to convert any standard refrigerator into a more intelligent counterpart with minimal hardware and software requirements, allowing users to view fridge contents on the go through a mobile application and interact with them using natural language. Transfer learning enables us to use a YOLOv5n model for object detection. As no night-vision image datasets of fridge items are publicly available, we created a custom dataset of night-vision images to train and validate the object recognition model. Our object detection model achieved a mAP of 97.1%, compared to the YOLOv3-tiny and YOLOv4-tiny models, whose mAP values are 94.8% and 96.3%, respectively. The overall cost after deployment of the module is less than $300, making the refrigerator an affordable option. The proposed framework meets most of the requirements of a low-cost, ubiquitous, interactive smart refrigerator. © 2023, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
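The unigram + bigram (hybrid) feature model named in the sentiment-analysis item can be illustrated with a minimal sketch. This is not the authors' implementation: the whitespace tokenizer and raw term counts are assumptions standing in for whatever preprocessing the study actually used.

```python
from collections import Counter

def hybrid_features(text):
    """Extract unigram + bigram (hybrid) term counts from a short message.

    Minimal sketch: lowercase whitespace tokenization, raw counts.
    """
    tokens = text.lower().split()
    # Bigrams are adjacent token pairs joined with a space.
    bigrams = [" ".join(pair) for pair in zip(tokens, tokens[1:])]
    return Counter(tokens + bigrams)

features = hybrid_features("the election results surprised the analysts")
```

The resulting counts would feed a classifier such as SVM or Naive Bayes after vectorization against a shared vocabulary.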
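The inter-annotator agreement reported in the affective-database item (Cohen's κ = 0.48) measures agreement between two annotators beyond chance. A minimal two-annotator sketch, with hypothetical labels, could be computed as:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each annotator's label marginals.
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Values of κ around 0.4-0.6 are conventionally read as moderate agreement, consistent with the reported 0.48.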
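The CMC (Cumulative Match Characteristic) test cited in the gait-recognition item scores identification by whether the true identity appears among the top-k ranked gallery candidates. A minimal sketch with hypothetical ranked candidate lists:

```python
def cmc_rank_k(ranked_galleries, true_ids, k):
    """Rank-k identification rate: fraction of probes whose true
    identity appears among the top-k gallery candidates."""
    hits = sum(true_id in ranked[:k]
               for ranked, true_id in zip(ranked_galleries, true_ids))
    return hits / len(true_ids)

# Hypothetical data: gallery identities sorted by model score per probe.
ranked = [["s3", "s1", "s2"], ["s1", "s2", "s3"], ["s2", "s3", "s1"]]
truth = ["s1", "s1", "s1"]
```

Plotting the rank-k rate for k = 1, 2, 3, ... gives the CMC curve; "higher than 97% at lower ranks" means the curve is already above 0.97 at small k.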
