Browsing by Author "Rashmi, M."
Now showing 1 - 15 of 15
Item A Novel Fake Job Posting Detection: An Empirical Study and Performance Evaluation Using ML and Ensemble Techniques (Springer Science and Business Media Deutschland GmbH, 2023) Srikanth, C.; Rashmi, M.; Ramu, S.; Guddeti, R.M.R.
Recently, almost everything can be accomplished online, including education, shopping, and banking. This technological advancement makes it easy for fraudsters to scam people online and acquire easy money. Numerous cyber crimes exist worldwide, including identity theft and fake job postings. Nowadays, many companies post job openings online, making recruitment simple. Consequently, fraudsters also post job openings online to obtain money and personal information from job seekers. In the proposed work, we aim to decrease the frequency of such scams by using ensemble techniques such as AdaBoost, Gradient Boosting, Stacking classifier, XGBoost, Bagging, and Random Forest to distinguish fake job postings from genuine ones. This paper proposes various featurization techniques such as Response coding with Laplace smoothing, average Word2vec, and term frequency-inverse document frequency (TF-IDF) weighted Word2vec. We compared the performance of ensemble techniques with machine learning (ML) algorithms on the publicly available EMSCAD dataset using accuracy and F1-score. The Bagging classifier outperformed all other models with an accuracy of 98.85% and an F1-score of 0.88 on the imbalanced dataset. On the balanced dataset, XGBoost achieved 97.89% accuracy and a 0.98 F1-score. The experimental results show that the combination of ensemble techniques with Laplace-smoothed Response coding and BoW featurization is superior to most state-of-the-art works on fake job posting detection.
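The TF-IDF weighted Word2vec featurization mentioned in this abstract can be sketched as follows. This is a minimal, illustrative version: the toy corpus and the 3-dimensional word vectors are invented for the example (real Word2vec embeddings have 100+ dimensions, and the paper works on the EMSCAD dataset).

```python
import math

# Toy corpus of tokenized job postings (illustrative; not from the paper).
docs = [
    ["earn", "money", "fast", "from", "home"],
    ["software", "engineer", "position", "java"],
    ["earn", "quick", "money", "no", "experience"],
]

# Hypothetical 3-dimensional word vectors standing in for trained Word2vec.
w2v = {w: [len(w) / 10.0, w.count("e") / 5.0, 0.5] for d in docs for w in d}

def idf(word, corpus):
    """Inverse document frequency with the usual log form."""
    df = sum(1 for d in corpus if word in d)
    return math.log(len(corpus) / df)

def tfidf_weighted_w2v(doc, corpus, vectors):
    """Average a document's word vectors, weighting each word by its TF-IDF."""
    dim = len(next(iter(vectors.values())))
    acc, total_weight = [0.0] * dim, 0.0
    for w in doc:
        tf = doc.count(w) / len(doc)
        weight = tf * idf(w, corpus)
        for i in range(dim):
            acc[i] += weight * vectors[w][i]
        total_weight += weight
    return [x / total_weight for x in acc] if total_weight else acc

vec = tfidf_weighted_w2v(docs[0], docs, w2v)
print(vec)  # one fixed-size feature vector per posting
```

Each posting is reduced to a single dense vector, which is what the downstream ensemble classifiers consume.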
© 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

Item Algorithmic aspects of k-part degree restricted domination in graphs (World Scientific, 2020) Kamath, S.S.; Senthil Thilak, A.; Rashmi, M.
The concept of a network is predominantly used in several applications of computer communication networks. It is also a fact that a dominating set acts as a virtual backbone in a communication network. These networks are vulnerable to breakdown due to various causes, including traffic congestion. In such an environment, it is necessary to regulate the traffic so that these vulnerabilities can be reasonably controlled. Motivated by this, k-part degree restricted domination is defined as follows. For a positive integer k, a dominating set D of a graph G is said to be a k-part degree restricted dominating set (k-DRD set) if for every u ∈ D there exists a set C_u ⊆ N(u) ∩ (V − D) such that |C_u| ≤ ⌈d(u)/k⌉ and ⋃_{u ∈ D} C_u = V − D. The minimum cardinality of a k-DRD set of a graph G is called the k-part degree restricted domination number of G and is denoted by γ_k^d(G). In this paper, we present a polynomial-time reduction that proves the NP-completeness of the k-part degree restricted domination problem for bipartite graphs, chordal graphs, undirected path graphs, chordal bipartite graphs, circle graphs, planar graphs, and split graphs. We propose a polynomial-time algorithm to compute a minimum k-DRD set of a tree and a minimal k-DRD set of a graph. © 2020 World Scientific Publishing Co. Pte Ltd. All rights reserved.

Item Deep learning-based multi-view 3D-human action recognition using skeleton and depth data (Springer, 2023) Ghosh, S.K.; Rashmi, M.; Mohan, B.R.; Guddeti, R.M.R.
Human Action Recognition (HAR) is a fundamental challenge that smart surveillance systems must overcome.
With the rising affordability of depth cameras capable of capturing human actions, HAR has garnered increased interest over the years; however, the majority of these efforts have focused on single-view HAR. Recognizing human actions from arbitrary viewpoints is more challenging, as the same action is observed differently from different angles. This paper proposes a multi-stream Convolutional Neural Network (CNN) model for multi-view HAR using depth and skeleton data. We also propose a novel and efficient depth descriptor, Edge Detected-Motion History Image (ED-MHI), based on Canny edge detection and Motion History Images. In addition, the proposed skeleton descriptor, Motion and Orientation of Joints (MOJ), represents each action using joint motion and orientation. Experimental results on two human action datasets, NUCLA Multiview Action3D and NTU RGB-D, using a cross-subject evaluation protocol demonstrate that the proposed system outperforms state-of-the-art works, with 93.87% and 85.61% accuracy, respectively. © 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.

Item Exploiting skeleton-based gait events with attention-guided residual deep learning model for human identification (Springer, 2023) Rashmi, M.; Guddeti, R.M.R.
Human identification using unobtrusive visual features is a daunting task in smart environments. Gait is an adequate biometric feature when the camera cannot correctly capture the human face due to environmental factors. In recent years, gait-based human identification using skeleton data has been intensively studied using a variety of feature extractors and increasingly sophisticated deep learning models. Although skeleton data is susceptible to changes in covariate variables, resulting in noisy data, most existing algorithms employ a single feature extraction technique for all frames to generate frame-level feature maps.
This results in degraded performance and additional features, necessitating increased computing power. This paper proposes a robust feature extractor that extracts a quantitative summary of gait event-specific information, thereby reducing the total number of features throughout the gait cycle. In addition, a novel attention-guided LSTM-based deep learning model with residual connections is proposed to learn the extracted features for gait recognition. The proposed approach outperforms state-of-the-art works on five publicly available datasets across various benchmark evaluation protocols and metrics. Further, the CMC test revealed that the proposed model obtained accuracy above 97% at lower-level ranks on these datasets. © 2023, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.

Item Fake News Detection in Hindi Using Embedding Techniques (Institute of Electrical and Electronics Engineers Inc., 2022) Shailendra, P.; Rashmi, M.; Ramu, S.; Guddeti, R.M.R.
Internet users have been increasing rapidly in recent years, especially in India, and nearly everything now operates online. Sharing information has also become simple and easy due to the internet and social media. Almost everyone now shares news in the community without even considering the source of the information. As a result, there is the issue of disseminating false, misleading, or fabricated information. Detecting fake news is a challenging task because it is presented in a form that looks like authentic information. This problem becomes even more challenging for local languages. This paper discusses several deep learning models that utilize LSTM, BiLSTM, CNN+LSTM, and CNN+BiLSTM. On the Hostility detection dataset in Hindi, these models use Word2Vec, IndicNLP fastText, and Facebook's fastText embeddings for fake news detection.
The proposed CNN+BiLSTM model with Facebook's fastText embedding achieved an F1-score of 75%, outperforming the baseline model. Additionally, the BiLSTM using Facebook's fastText outperforms the CNN+BiLSTM using Facebook's fastText on the F1-score. © 2022 IEEE.

Item Fall Detection and Elderly Monitoring System Using the CNN (Springer Science and Business Media Deutschland GmbH, 2023) Reddy Anakala, V.M.; Rashmi, M.; Natesha, B.V.; Reddy Guddeti, R.M.
Fall detection has become a critical concern in the medical and healthcare fields due to the growing elderly population. Research on fall and movement detection using wearable devices has made strides. Accurately recognizing fall behavior in surveillance video and providing early feedback can significantly reduce fall-related injuries and deaths among elderly people. However, a fall event is highly dynamic, impairing categorization accuracy. The current study constructs a deep learning-based fall detection architecture to predict falls and Activities of Daily Living (ADLs). This paper proposes an efficient method for representing extracted features as RGB images and a CNN model for learning the features needed for accurate fall detection. Additionally, the proposed CNN model is used to detect and locate the target in video using threshold-based categorization. The proposed CNN model was evaluated on the SisFall dataset and was found to be capable of detecting falls prior to impact with a sensitivity of 100%, a specificity of 96.48%, and a response time of 223 ms. The experimental findings attained an overall accuracy of 97.43%.
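One way to read the "extracted features as RGB images" idea above is to map a sensor window to a three-channel image, one channel per axis. The sketch below assumes a tri-axial accelerometer window; the window size, normalization, and channel layout are illustrative choices, not the paper's exact encoding.

```python
# Minimal sketch: turn a window of tri-axial accelerometer samples into an
# RGB-like image that a CNN can consume. Illustrative encoding only; the
# paper's exact feature-to-image mapping may differ.

def axis_to_channel(samples, width):
    """Min-max normalize one axis to 0..255 and tile it into `width` rows."""
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1.0
    row = [int(255 * (s - lo) / span) for s in samples]
    return [row[:] for _ in range(width)]  # repeat the row -> 2D channel

def window_to_rgb(xs, ys, zs, width=8):
    """Stack the three normalized axes as R, G, B: shape (width, len(xs), 3)."""
    r, g, b = (axis_to_channel(a, width) for a in (xs, ys, zs))
    return [
        [[r[i][j], g[i][j], b[i][j]] for j in range(len(xs))]
        for i in range(width)
    ]

# A fake 16-sample window (a free fall would show a sharp dip in all axes).
xs = [1.0, 1.0, 0.9, 0.2, 0.1, 0.0, 0.0, 0.1, 1.8, 1.5, 1.1, 1.0, 1.0, 1.0, 1.0, 1.0]
img = window_to_rgb(xs, xs, xs)
print(len(img), len(img[0]), len(img[0][0]))  # 8 16 3
```

The resulting fixed-size image lets a standard image CNN learn fall signatures without hand-tuned temporal features.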
© 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

Item Human action recognition using multi-stream attention-based deep networks with heterogeneous data from overlapping sub-actions (Springer Science and Business Media Deutschland GmbH, 2024) Rashmi, M.; Guddeti, R.M.R.
Vision-based Human Action Recognition is difficult owing to variations in the same action performed by different people, temporal variations in actions, and differences in viewing angles. Researchers have recently adopted multi-modal visual data fusion strategies to address the limitations of single-modality methodologies. Many researchers strive to produce more discriminative features because the success of most existing techniques relies on feature representation in the data modality under consideration. A human action consists of several sub-actions whose durations vary between individuals. This paper proposes a multifarious learning framework employing action data in depth and skeleton formats. Firstly, a novel action representation named Multiple Sub-action Enhanced Depth Motion Map (MS-EDMM), integrating depth features from overlapping sub-actions, is proposed. Secondly, an efficient method is introduced for extracting spatio-temporal features from skeleton data. This is achieved by dividing the skeleton sequence into sub-actions and summarizing skeleton joint information for five distinct human body regions. Next, a multi-stream deep learning model with attention-guided CNN and residual LSTM is proposed for classification, followed by several score fusion operations to reap the benefits of streams trained with multiple data types. The proposed method outperformed an existing method that utilized skeleton and depth data by 1.62%, achieving an accuracy of 89.76% on the single-view UTD-MHAD dataset.
Furthermore, the proposed method demonstrated encouraging performance on the multi-view NTU RGB+D dataset, with an accuracy of 89.75% in cross-view and 83.8% in cross-subject evaluations. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2024.

Item Human identification system using 3D skeleton-based gait features and LSTM model (Academic Press Inc., 2022) Rashmi, M.; Guddeti, R.M.R.
Vision-based gait has emerged as the preferred biometric in smart surveillance systems due to its unobtrusive nature. Recent advancements in low-cost depth sensors have resulted in numerous 3D skeleton-based gait analysis techniques. For spatial-temporal analysis, existing state-of-the-art algorithms use frame-level information as the timestamp. This paper proposes gait event-level spatial-temporal features and an LSTM-based deep learning model that treats each gait event as a timestamp to identify individuals from walking patterns observed in single- and multi-view scenarios. On four publicly available datasets, the proposed system is superior to state-of-the-art approaches across a variety of conventional benchmark protocols. The proposed system achieved a recognition rate greater than 99% at low-level ranks during the CMC test, making it suitable for practical applications. A statistical study of the gait event-level features demonstrated the retrieved features' discriminative capacity in classification. Additionally, an ANOVA test performed on the findings from K folds demonstrated the proposed system's significance in human identification. © 2021 Elsevier Inc.

Item Interactive System for Toddlers Using Doodle Recognition (Springer Science and Business Media Deutschland GmbH, 2024) Gagandeep, K.N.; Belagali, A.R.; Rashmi, M.; Guddeti, R.M.R.
Typing with a keyboard or using a mouse is hard for small children. In this paper, we propose an interactive system to improve the learning ability of toddlers.
The proposed doodle recognition system provides an attractive and efficient way for toddlers to interact with computer systems by following Human-Computer Interaction guidelines and using deep learning. The most common practice that toddlers develop is scribbling random images, so we use this skill to provide a gateway for toddlers to interact with, and learn from, computers through our proposed simple interface. When the toddler (user) starts to scribble or draw something on the screen, whiteboard, or paper, the application goes into input mode; as soon as the drawing stops, the image on the screen or whiteboard is processed by the trained CNN model, and the action is carried out based on the output of the model. © Springer Nature Switzerland AG 2024.

Item Molecular-InChI: Automated Recognition of Optical Chemical Structure (Institute of Electrical and Electronics Engineers Inc., 2022) Kumar, N.; Rashmi, M.; Ramu, S.; Reddy Guddeti, R.M.
With the advent of a new era dominated by digital media and publications in recent years, the importance of striking a balance between traditional and new modes of operation has become increasingly apparent. It has been standard practice in the field of chemistry for decades to express chemical compounds in their structural form, referred to as the skeletal formula. In this research, we interpret these old chemical structure images, extracted from older literature, to transform the pictures back into the underlying chemical structure labeled as InChI text, using a large set of synthetic image data produced by Bristol-Myers Squibb. In this paper, we propose improved synthetic data and an encoder-decoder-based deep learning model to automatically translate these molecular images into their underlying InChI representation.
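The decoder in an encoder-decoder model like the one described above emits the InChI string token by token, so the target side must be turned into integer sequences with start/end markers. The character-level scheme below is an illustrative choice, not the paper's tokenizer.

```python
# Sketch of target-side preparation for an encoder-decoder OCSR model:
# encode InChI strings as integer token sequences with <sos>/<eos> markers.
# Character-level tokenization is an illustrative choice, not the paper's.

SPECIALS = ["<pad>", "<sos>", "<eos>"]

def build_vocab(strings):
    """Vocabulary = special tokens + every character seen in the targets."""
    chars = sorted({c for s in strings for c in s})
    itos = SPECIALS + chars
    return {c: i for i, c in enumerate(itos)}, itos

def encode(s, stoi):
    return [stoi["<sos>"]] + [stoi[c] for c in s] + [stoi["<eos>"]]

def decode(ids, itos):
    return "".join(itos[i] for i in ids if itos[i] not in SPECIALS)

# Real InChI strings for methane and water, used as toy training targets.
samples = ["InChI=1S/CH4/h1H4", "InChI=1S/H2O/h1H2"]
stoi, itos = build_vocab(samples)
ids = encode(samples[0], stoi)
assert decode(ids, itos) == samples[0]  # round-trips exactly
```

A trained decoder would produce such id sequences from the image encoder's features, and `decode` maps them back to InChI text.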
© 2022 IEEE.

Item Multi-stream Multi-attention Deep Neural Network for Context-Aware Human Action Recognition (Institute of Electrical and Electronics Engineers Inc., 2022) Rashmi, M.; Guddeti, R.M.R.
Technological innovations in deep learning models have enabled reasonably close solutions to a wide variety of computer vision tasks, such as object detection, face recognition, and many more. On the other hand, Human Action Recognition (HAR) is still far from human-level ability due to several challenges, such as diversity in performing actions. Due to data availability in multiple modalities, HAR using video data recorded by RGB-D cameras is frequently used in current research. This paper proposes an approach for recognizing human actions using depth and skeleton data captured with the Kinect depth sensor. Attention modules have been introduced in recent years to assist in focusing on the most important features in computer vision tasks. This paper proposes a multi-stream deep learning model with multiple attention blocks for HAR. First, the action data of the depth and skeletal modalities are represented using two distinct action descriptors, each generating an image from the action data gathered across numerous frames. The proposed deep learning model is trained using these descriptors. Additionally, we propose a set of score fusion techniques for accurate HAR using all the features and trained CNN + LSTM streams. The proposed method is evaluated on two benchmark datasets using the well-known cross-subject evaluation protocol. The proposed technique achieved 89.83% and 90.7% accuracy on the MSRAction3D and UTD-MHAD datasets, respectively. The experimental results establish the validity and effectiveness of the proposed model.
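The late score fusion across streams mentioned in several of these abstracts can be sketched as below. Average, max, and weighted-sum rules are common choices in multi-stream HAR; the papers' exact fusion operations are not reproduced here, and the example scores are invented.

```python
# Minimal sketch of late score fusion: each stream (e.g. a depth CNN and a
# skeleton LSTM) outputs per-class scores, and the fused scores pick the
# predicted action. Fusion rules shown are common choices, not necessarily
# the exact set used in the paper.

def fuse_avg(score_lists):
    return [sum(col) / len(col) for col in zip(*score_lists)]

def fuse_max(score_lists):
    return [max(col) for col in zip(*score_lists)]

def fuse_weighted(score_lists, weights):
    return [sum(w * s for w, s in zip(weights, col)) for col in zip(*score_lists)]

def predict(scores):
    """Index of the highest-scoring class."""
    return max(range(len(scores)), key=scores.__getitem__)

# Two streams scoring 3 hypothetical action classes.
depth_scores = [0.2, 0.5, 0.3]
skeleton_scores = [0.1, 0.3, 0.6]

print(predict(fuse_avg([depth_scores, skeleton_scores])))  # class 2
print(predict(fuse_max([depth_scores, skeleton_scores])))  # class 2
print(predict(fuse_weighted([depth_scores, skeleton_scores], [0.7, 0.3])))  # class 1
```

Note how the weighted rule flips the decision toward the depth stream's favorite class, which is why fusion weights are usually tuned on validation data.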
© 2022 IEEE.

Item Skeleton based Human Action Recognition for Smart City Application using Deep Learning (Institute of Electrical and Electronics Engineers Inc., 2020) Rashmi, M.; Guddeti, R.M.R.
Human Action Recognition (HAR) plays a vital role in several applications, such as surveillance systems, gaming, and robotics. Interpreting the actions performed by a person in a video is one of the essential tasks of intelligent surveillance systems in smart cities, smart buildings, etc. Human actions can be recognized using models such as depth, skeleton, or combinations of these. In this paper, we propose a human action recognition system based on the 3D skeleton model. Since the roles of different joints vary while performing an action, the proposed work uses the most informative distances and angles between joints in the skeleton model as the feature set. Further, we propose a deep learning framework for human action recognition based on these features. We performed experiments on MSRAction3D, a publicly available dataset for 3D HAR, and the results demonstrated that the proposed framework obtained accuracies of 95.83%, 92.9%, and 98.63% on the three subsets AS1, AS2, and AS3, respectively, using the protocols of [19]. © 2020 IEEE.

Item Skeleton-Based Human Action Recognition Using Motion and Orientation of Joints (Springer Science and Business Media Deutschland GmbH, 2022) Ghosh, S.K.; Rashmi, M.; Mohan, B.R.; Guddeti, R.M.R.
Perceiving human actions accurately from a video is one of the most challenging tasks demanded by many real-time applications in smart environments. Recently, several approaches have been proposed for representing human actions and recognizing them from videos using different data modalities. Especially in the case of images, deep learning-based approaches have demonstrated their classification efficiency.
Here, we propose an effective framework for representing actions based on features obtained from 3D skeleton data of humans performing actions. We utilize the motion, pose orientation, and transition orientation of skeleton joints for action representation in the proposed work. In addition, we introduce a lightweight convolutional neural network model for learning features from the action representations in order to recognize the different actions. We evaluated the proposed system on two publicly available datasets using a cross-subject evaluation protocol, and the results showed better performance compared to existing methods. © 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

Item Surveillance video analysis for student action recognition and localization inside computer laboratories of a smart campus (Springer, 2021) Rashmi, M.; Ashwin, T.S.; Guddeti, G.R.M.
In the era of the smart campus, unobtrusive monitoring of students is a challenging task. The monitoring system must have the ability to recognize and detect the actions performed by students. Recently, many deep neural network-based approaches have been proposed to automate Human Action Recognition (HAR) in different domains, but these have not been explored in learning environments. HAR can be used in classrooms, laboratories, and libraries to make the teaching-learning process more effective. To make the learning process more effective in computer laboratories, in this study we propose a system for recognition and localization of student actions from still images extracted from Closed-Circuit Television (CCTV) videos. The proposed method uses YOLOv3 (You Only Look Once), a state-of-the-art real-time object detection technology, for localization and recognition of students' actions. Further, an image template matching method is used to decrease the number of image frames, thus speeding up video processing.
Since actions performed by humans are domain-specific and no standard dataset is available for students' action recognition in smart computer laboratories, we created the STUDENT ACTION dataset using image frames obtained from the CCTV cameras placed in a computer laboratory of a university campus. The proposed method recognizes various actions performed by students at different locations within an image frame. It shows excellent performance in identifying actions with more samples compared to actions with fewer samples. © 2020, Springer Science+Business Media, LLC, part of Springer Nature.

Item Vision-based Hand Gesture Interface for Real-time Computer Operation Control (Institute of Electrical and Electronics Engineers Inc., 2022) Praneeth, G.; Recharla, R.; Prakash, A.S.; Rashmi, M.; Guddeti, R.M.R.
Humans typically perform simple actions with hand gestures. If a computer can interpret gestures, human-computer interaction can be enhanced. This paper proposes a hand gesture-based interface for controlling computer operations using deep learning and a custom dataset. © 2022 IEEE.
