Browsing by Author "Mohan, B.R."
Now showing 1 - 20 of 89
Item: A Comparative Study of Deep Learning Models for Word-Sense Disambiguation (Springer Science and Business Media Deutschland GmbH, 2022)
Jadiya, A.; Dondemadahalli Manjunath, T.; Mohan, B.R.
Word-sense disambiguation (WSD) has been a persistent issue since its introduction to the natural language processing (NLP) community. It has a wide range of applications in areas such as information retrieval (IR), sentiment analysis, knowledge graph construction, machine translation, lexicography, text mining, and information extraction. Since various deep learning models are deployed for the task of disambiguating word sense, their performance with different word embeddings needs to be analysed. In this paper, several deep learning models (CNN, LSTM, bidirectional LSTM, and CNN + LSTM) are compared with trainable as well as pretrained GloVe embeddings, using common preprocessing methods. A temporal convolutional network (TCN) model is also evaluated and compared with the aforementioned models. This paper shows that using GloVe embeddings may not result in better accuracy for word-sense disambiguation, i.e., trainable embeddings perform better. It also includes a framework for evaluating deep learning models for WSD and an analysis of embedding usage for the same. © 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

Item: A Synergetic Approach to Ethereum Option Valuation Using XGBoost and Soft Reordering 1D Convolutional Neural Networks (Springer, 2025)
Sapna, S.; Mohan, B.R.
In the ever-evolving realm of cryptocurrencies, Ethereum has emerged as a prominent player, captivating investors and enthusiasts alike. Within the diverse financial landscape of cryptocurrencies, options stand out as a versatile tool, offering flexibility and hedging opportunities.
This paper introduces a cutting-edge approach to pricing Ethereum options, harnessing the power of XGBoost and the capabilities of Convolutional Neural Networks (CNNs). The research proposes a novel method that utilizes XGBoost for implied volatility estimation by integrating historical volatility and generalized auto-regressive conditional heteroscedasticity (GARCH) model-predicted volatility. Subsequently, a soft reordering 1-dimensional CNN (1D-CNN) model is employed to enhance the pricing accuracy of Ethereum options. The soft reordering mechanism dynamically rearranges the initial tabular dataset, optimizing it for enhanced learning within the CNN framework. The outcome indicates the ability of the proposed model to estimate implied volatility and price options with remarkable accuracy, outperforming traditional option pricing models and data-driven models documented in the literature. The proposed model also exhibits the lowest pricing error across all maturities and various moneyness criteria, with the exception of long-term put and deep out-of-the-money (DOTM) options. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.

Item: A TFD Approach to Stock Price Prediction (Springer, 2020)
Chanduka, B.; Bhat, S.S.; Rajput, N.; Mohan, B.R.
Accurate stock price predictions can help investors make correct decisions about the selling or purchase of stocks. With improvements in data analysis and deep learning algorithms, a variety of approaches have been tried for predicting stock prices. In this paper, we deal with the prediction of stock prices for automobile companies using a novel TFD (Time Series, Financial Ratios, and Deep Learning) approach. We then study the results over multiple activation functions for multiple companies and reinforce the viability of the proposed algorithm.
© 2020, Springer Nature Singapore Pte Ltd.

Item: Analysis of free physical memory in server virtualized system (2015)
Mohan, B.R.; Ram Mohana Reddy, Guddeti
Performance degradation is part of any long-running software system. It is caused by memory leakage, unreleased file descriptors, round-off errors, and disk and memory fragmentation. Memory leakage has been found to be the primary cause of software performance degradation. To predict software performance degradation, analysis of resource usage is essential. Here, the free physical memory of a server virtualised system is analysed using time series analysis. © 2015 IEEE.

Item: Bayesian Belief Network Analysis for SPAD System in Railways (Institute of Electrical and Electronics Engineers Inc., 2024)
Das, M.; Mohan, B.R.; Reddy G, R.M.; Chinmaya, C.; Umesh; Reddy G, V.M.; Vismay, P.
Even with a very strong network of signaling and warning systems in the country, there have been many instances of trains crossing a red signal due to various factors, even in the modern day. These occurrences, known as Signal Passed at Danger (SPAD) events, could potentially result in severe consequences such as train derailments, train collisions, infrastructure collisions, and other dangerous events. Traditionally, these events have been analyzed using the Fault Tree Analysis (FTA) approach. However, as the system grows more complex, the FTA also becomes more complex, making it difficult to maintain simplicity and ease of analysis. This opens the gateway to exploring other methods to model and assess such SPAD incidents and similar safety-critical systems in railways. A Bayesian belief network (BBN) is considered a better model to represent this situation when it comes to handling complexity. This paper focuses on the implementation and advantages of the BBN model over FTA, considering the SPAD system as a case study.
Both the FTA and BBN methods are then compared with respect to modeling and analysis aspects. © 2024 IEEE.

Item: Bio-Inspired Hyperparameter Tuning of Federated Learning for Student Activity Recognition in Online Exam Environment (Multidisciplinary Digital Publishing Institute (MDPI), 2024)
Ramu, R.; Prasad, N.; Guddeti, R.M.R.; Mohan, B.R.
Nowadays, online examination (exam, in short) platforms are becoming more popular, demanding strong security measures for digital learning environments. This includes addressing key challenges such as head pose detection and estimation, which are integral to applications like automatic face recognition, advanced surveillance systems, intuitive human–computer interfaces, and driving safety measures. The proposed work aims to enhance the security and reliability of online exam platforms by accurately classifying students' attentiveness based on distinct head poses, leveraging federated learning and deep learning models. In this work, five head poses are considered: front face, down face, right face, up face, and left face. A federated learning (FL) framework with a pre-trained deep learning model (ResNet50) on the local client device is used to classify students' activity (behavior) in an online exam environment. However, identifying the best hyperparameters for the local client ResNet50 model is challenging. Hence, this study proposes two hybrid bio-inspired optimization methods, namely Particle Swarm Optimization with Genetic Algorithm (PSOGA) and Particle Swarm Optimization with Elitist Genetic Algorithm (PSOEGA), to fine-tune the hyperparameters of the ResNet50 model.
The bio-inspired optimization methods employed in the ResNet50 model train and classify the students' behavior in an online exam environment. The FL framework trains the client model locally and sends the updated weights to the server model. The proposed hybrid bio-inspired algorithms outperform GA and PSO when used independently. The proposed PSOGA not only outperforms the proposed PSOEGA but also outperforms the benchmark algorithms considered for performance evaluation, achieving an accuracy of 95.97%. © 2024 by the authors.

Item: Channel Pruning of Transfer Learning Models Using Novel Techniques (Institute of Electrical and Electronics Engineers Inc., 2024)
Pragnesh, P.; Mohan, B.R.
This research paper delves into the challenges associated with deep learning models, focusing on transfer learning. Despite the effectiveness of widely used models such as VGGNet, ResNet, and GoogLeNet, their deployment on resource-constrained devices is impeded by high memory bandwidth and computational costs; to overcome these limitations, the study proposes pruning as a viable solution. Numerous parameters, particularly in fully connected layers, contribute minimally to computational cost, so the focus is on pruning convolution layers. The research explores and evaluates three pruning methods: the Max3 Saliency pruning method, the K-Means clustering algorithm, and the Singular Value Decomposition (SVD) approach. The Max3 Saliency method introduces a slight variation by using the three maximum values of the kernel, instead of all nine, to compute the saliency score. This method is the most effective, substantially reducing parameters and Floating Point Operations (FLOPs) for both the VGG16 and ResNet56 models. Notably, VGG16 demonstrates a remarkable 46.19% reduction in parameters and a 61.91% reduction in FLOPs. Using the Max3 Saliency pruning method, ResNet56 shows a 35.15% reduction in parameters and FLOPs.
The K-Means pruning algorithm is also successful, resulting in a 40.00% reduction in parameters for VGG16 and a 49.20% reduction in FLOPs. In the case of ResNet56, the K-Means algorithm achieves a 31.01% reduction in both parameters and FLOPs. While the Singular Value Decomposition (SVD) approach provides a new set of values for the condensed channels, its overall pruning ratio is smaller than those of the Max3 Saliency and K-Means methods: the SVD pruning method achieves a 20.07% parameter reduction and a 24.64% FLOPs reduction for VGG16, along with a 16.94% reduction in both FLOPs and parameters for ResNet56. Compared with the state-of-the-art methods, the Max3 Saliency and K-Means pruning methods perform better on FLOPs reduction metrics. © 2024 The Authors.

Item: Comparative Analysis Of JavaScript And WebAssembly In The Browser Environment (Institute of Electrical and Electronics Engineers Inc., 2022)
Tushar; Mohan, B.R.
As the World Wide Web evolves, larger and higher-performance applications are being run entirely in the browser. Web applications have their own advantages, such as being more accessible and platform independent. JavaScript was historically the only programming language supported in web browsers, but as a dynamically typed, interpreted language it is quite limited for high-performance applications. So, as high-performance applications started to come to the web, there has always been a need for another language that could run in the browser environment while also taking advantage of system resources. WebAssembly was one such effort by the vendors of different browsers coming together. WebAssembly is claimed to be a portable, size- and time-efficient binary format that can be compiled to run in web browsers at near-native speed.
This paper attempts to verify that claim by running various experiments in both WebAssembly and JavaScript, measuring the resources used and the time taken by those programs to execute, and then performing a comparative analysis between the two. © 2022 IEEE.

Item: Comparative Analysis of Root Finding Algorithms for Implied Volatility Estimation of Ethereum Options (Springer, 2024)
Sapna, S.; Mohan, B.R.
In this paper, a comparative analysis of traditional and hybrid root finding algorithms is performed for estimating the implied volatility of Ethereum options using the Black–Scholes model. Results indicate the efficiency of the Newton–Raphson method in terms of algorithmic convergence as well as computational time. Since the Newton–Raphson method may not always converge, the best approximation technique is chosen from the convergent bracketed methods. The hybrid Bisection–Regula Falsi method serves as the best choice for root estimation among the bracketed methods under consideration. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023.

Item: Comparative analysis of Software Reliability using Grey Wolf Optimisation and Machine Learning (Institute of Electrical and Electronics Engineers Inc., 2024)
Kelkar, S.; Vishvasrao, S.P.; Agarwal, A.; Rajput, C.; Mohan, B.R.; Das, M.
Software reliability is a crucial aspect of software quality. In this paper, we explore the application of Grey Wolf Optimization (GWO) for feature selection and classification on various software datasets, such as KC1, JM1, and PC5. We compare the performance of machine learning models (Random Forest, Decision Tree, Support Vector Machine, XGBoost, and Neural Networks) with and without GWO-based feature selection. Our results demonstrate the effectiveness of GWO in enhancing the accuracy of software reliability analysis.
© 2024 IEEE.

Item: Comparative Performance Evaluation of Web-Based Book Recommender Systems (Institute of Electrical and Electronics Engineers Inc., 2022)
Bhat, S.S.; Pranav, P.; Shashank, K.V.; Raghunandan, A.; Mohan, B.R.
In today's world, recommendation algorithms are popularly utilised for personalization. To improve their business, e-commerce behemoths rely heavily on their recommendation algorithms; consequently, the quality of suggestions can have a big impact on how much money they make, and effective evaluation of recommender systems is critical. Traditional evaluation measures are limited to error-based and accuracy-based metrics and do not account for characteristics such as novelty, informedness, and markedness. This research study aims to compare the effectiveness of two web-based book recommendation systems using measures like diversity, informedness, and markedness, which are less well known but equally essential. © 2022 IEEE.

Item: Comparative Study of Pruning Techniques in Recurrent Neural Networks (Springer Science and Business Media Deutschland GmbH, 2023)
Choudhury, S.; Rout, A.K.; Pragnesh, T.; Mohan, B.R.
In recent years, there has been drastic development in the field of neural networks. They have evolved from simple feed-forward neural networks to more complex architectures such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs). CNNs are used for tasks such as image recognition where sequence is not essential, while RNNs are useful when order is important, such as in machine translation. By increasing the number of layers in the network, we can improve the performance of the neural network (Alford et al. in Pruned and structurally sparse neural networks, 2018 [1]). However, this also increases the complexity of the network, and training requires more power and time. By introducing sparsity into the architecture of the neural network, we can tackle this problem.
Pruning is one of the processes through which a neural network can be made sparse (Zhu and Gupta in To prune, or not to prune: exploring the efficacy of pruning for model compression, 2017 [2]). Sparse RNNs can be easily deployed on mobile devices and resource-constrained servers (Wen et al. in Learning intrinsic sparse structures within long short-term memory, 2017 [3]). We investigate the following methods to induce sparsity in RNNs: RNN pruning and automated gradual pruning. We also investigate how these pruning techniques impact the model's performance and provide a detailed comparison between the two. We further experiment with pruning input-to-hidden and hidden-to-hidden weights. Based on the results of the pruning experiments, we conclude that it is possible to reduce the complexity of RNNs by more than 80%. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

Item: Comparing Different Sequences of Pruning Algorithms for Hybrid Pruning (Institute of Electrical and Electronics Engineers Inc., 2023)
Pragnesh, T.; Mohan, B.R.
Most developers face two significant issues while designing the architecture of a neural network. First, the available dataset for many real-life problems is relatively small, leading to overfitting. Second, when a dataset is large enough, the computational cost of training the model on it is enormous. Thus most developers use transfer learning with a standard model like VGGNet, ResNet, or GoogLeNet. These standard models are memory- and computationally expensive during inference, making them infeasible to deploy on resource-constrained devices. The recent research trend is to compress the standard model used for transfer learning to reduce memory and computing costs. In a CNN, approximately 10% of the parameters are present in the convolution layers yet contribute about 90% of the computational cost, while 90% of the parameters are present in the dense layers yet contribute only about 10%.
This paper focuses on structured pruning of parameters in the convolution layers to reduce computational cost. Here we explore and compare the following pruning techniques: (1) channel pruning with a quantitative score, (2) kernel pruning with a quantitative score, (3) channel pruning with a similarity score, and (4) kernel pruning with a similarity score. Finally, we try several combinations of the aforementioned pruning techniques to form a hybrid pruning. © 2023 IEEE.

Item: Compression of Convolution Neural Network Using Structured Pruning (Institute of Electrical and Electronics Engineers Inc., 2022)
Pragnesh, T.; Mohan, B.R.
Deep Neural Networks (DNNs) are currently solving many real-life problems with excellent accuracy. However, designing a compact neural network and training it from scratch faces two challenges. First, as the datasets in many problems are relatively small, the model starts to overfit and has low validation accuracy. Second, training from scratch requires substantial computational resources. So many developers use transfer learning, starting from a standard model such as VGGNet with pre-trained weights, where the pre-trained model was trained on a similar problem of high complexity. For example, for the image classification problem, one can use VGG16, ResNet, AlexNet, or GoogLeNet, pre-trained on the ImageNet dataset with millions of images across 1000 classes. Such pre-trained models are enormous, and their computational cost during inference is huge, making them unusable in many real-life situations where the model must be deployed on resource-constrained devices. Thus, much work is going on to compress standard pre-trained models to achieve the required accuracy at minimum computational cost. There are two types of pruning techniques. (i) Unstructured pruning: parameter-based pruning that prunes individual parameters.
(ii) Structured pruning: here, we prune a set of parameters that perform specific operations, such as activation neurons and convolution operations. This paper focuses on structured pruning as it directly results in compression and faster execution. There are two strategies for structured pruning: (i) the saliency-based approach, where we compute the impact of parameters on the output and remove the parameters with minimum value; and (ii) the similarity-based approach, where we find redundant features and remove one of them such that pruning makes a minimum change in the output. In this paper, we combine both approaches: in the initial iterations we perform pruning based on saliency, and in later iterations we perform pruning based on the similarity-aware approach. We observed that this combined approach leads to better pruning results. © 2022 IEEE.

Item: Deep Learning Framework Based on Audio–Visual Features for Video Summarization (Springer Science and Business Media Deutschland GmbH, 2022)
Rhevanth, M.; Ahmed, R.; Shah, V.; Mohan, B.R.
Techniques for video summarization (VS) have garnered immense interest, leading to numerous applications in different computer vision domains, such as video extraction, image captioning, indexing, and browsing. Conventional VS studies often pursue the success of VS algorithms by adding high-quality features and clusters to pick representative visual elements. Many existing VS mechanisms only take into consideration the visual aspect of the video input, thereby ignoring the influence of audio features on the generated summary. To cope with such issues, we propose an efficient video summarization technique that processes both visual and audio content while extracting key frames from the raw video input. The structural similarity index is used to check similarity between frames, while mel-frequency cepstral coefficients (MFCC) help extract features from the corresponding audio signals.
By combining these two features, the redundant frames of the video are removed. The resultant key frames are refined using a deep convolutional neural network (CNN) model to retrieve a list of candidate key frames, which finally constitute the summarization of the data. The proposed system is evaluated on video datasets from YouTube that contain events, which helps in better understanding of the video summary. Experimental observations indicate that the inclusion of audio features and an efficient refinement technique, followed by an optimization function, provides better summary results compared to standard VS techniques. © 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

Item: Deep learning-based multi-view 3D-human action recognition using skeleton and depth data (Springer, 2023)
Ghosh, S.K.; Rashmi, M.; Mohan, B.R.; Guddeti, R.M.R.
Human Action Recognition (HAR) is a fundamental challenge that smart surveillance systems must overcome. With the rising affordability of capturing human actions with more advanced depth cameras, HAR has garnered increased interest over the years; however, the majority of these efforts have been on single-view HAR. Recognizing human actions from arbitrary viewpoints is more challenging, as the same action is observed differently from different angles. This paper proposes a multi-stream Convolutional Neural Network (CNN) model for multi-view HAR using depth and skeleton data. We also propose a novel and efficient depth descriptor, Edge Detected-Motion History Image (ED-MHI), based on Canny edge detection and motion history images. In addition, the proposed skeleton descriptor, Motion and Orientation of Joints (MOJ), represents the appropriate action using joint motion and orientation.
Experimental results on two human action datasets, NUCLA Multiview Action3D and NTU RGB-D, using a cross-subject evaluation protocol demonstrate that the proposed system exhibits superior performance compared to state-of-the-art works, with 93.87% and 85.61% accuracy, respectively. © 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.

Item: Dynamics of Nonlinear Causality: Exploring the Influence of Positive and Negative Financial News on the Indian Equity Market (Institute of Electrical and Electronics Engineers Inc., 2023)
Varghese, R.R.; Mohan, B.R.
Recent attention has focused on the interplay between news media and stock prices, prompted by extensive exploration of stock market dynamics. This study examines the existence of non-linear causal links between positive and negative financial news and stock market valuations. Employing sentiment analysis, the FinBERT model evaluates news content, while the transfer entropy method assesses the impact of both positive and negative financial news. Investigating the causal relationships between fluctuations in positive and negative news and stock price performance across diverse companies through transfer entropy analysis, our findings confirm an evident disparity in the influence of positive and negative news on daily stock prices. These effects are quantified using a sliding window approach. Furthermore, our evaluations indicate that negative financial news exerts a more significant influence on stock prices than positive financial news. These outcomes bolster the concept of an asymmetric effect, wherein negative sentiment wields a more substantial influence than its positive counterpart. © 2023 IEEE.

Item: The effect of software aging on power usage (2015)
Mohan, B.R.; Ram Mohana Reddy, Guddeti
This paper tries to establish a relation between power usage and software aging.
Software aging is the performance degradation of long-running software due to shrinking free physical memory, increasing swap read and write rates, and increasing CPU utilization. Experimental results demonstrate that CPU utilization increases over a period of time while the workload remains constant. Linear regression analysis is used to establish this trend. © 2015 IEEE.

Item: Enhancing Deep Compression of CNNs: A Novel Regularization Loss and the Impact of Distance Metrics (Institute of Electrical and Electronics Engineers Inc., 2024)
Pragnesh, P.; Mohan, B.R.
Transfer learning models tackle two critical problems in deep learning. First, for small datasets, they reduce the problem of overfitting. Second, for large datasets, they reduce the computational cost, as fewer iterations are required to train the model. Standard transfer learning models such as VGGNet, ResNet, and GoogLeNet require significant memory and computational power, limiting their use on devices with limited resources. This research paper contributes to overcoming this problem by compressing the transfer learning model using channel pruning. In current times, computational cost is more significant than memory cost, and the convolution layers, despite having fewer parameters, contribute more to computational cost; thus, we focus on pruning the convolution layers. Total loss is a combination of prediction loss and regularization loss, where regularization loss is the sum of the magnitudes of parameter values. The training process aims to reduce total loss, and to do so, the regularization loss also needs to be reduced. Therefore, training not only minimizes prediction error but also manages the magnitude of the model's weights.
Important weights are maintained at higher values to keep the prediction loss low, while unimportant weight values can be reduced to decrease regularization loss. Thus, regularization adjusts the magnitudes of parameters at varying rates, depending on their importance. Quantitative pruning methods select parameters based on their magnitude, which improves the effectiveness of the pruning process. Standard L1 and L2 regularization focus on individual parameters, aiding unstructured pruning; however, group regularization is required for structured pruning. To address this, we introduce a novel group regularization loss designed specifically for structured channel pruning. This new regularization loss optimizes the pruning process by focusing on entire groups of parameters belonging to a channel rather than on individual ones, ensuring that structured pruning is more efficient and targeted. Custom Standard Deviation (CSD) is calculated by summing the absolute differences between each parameter value and the mean value. To evaluate the parameters of a given channel, both the L1 norm and the CSD are computed, and the novel regularization loss for a channel in the convolutional layer is defined as the ratio of the L1 norm to the CSD (L1Norm/CSD). This approach groups the regularization loss for all parameters within a channel, making the pruning process more structured and efficient. The custom regularization loss further improves pruning efficiency, enabling a 46.14% reduction in parameters and a 61.91% decrease in FLOPs. This paper also employs the K-Means algorithm for similarity-based pruning and evaluates three distance metrics: Manhattan, Euclidean, and Cosine. Results indicate that pruning by the K-Means algorithm using Manhattan distance leads to a 35.15% reduction in parameters and a 49.11% decrease in FLOPs, outperforming the Euclidean and Cosine distances with the same algorithm.
© 2013 IEEE.

Item: Estimation of Implied Volatility for Ethereum Options Using Numerical Approximation Methods (Springer Science and Business Media Deutschland GmbH, 2023)
Sapna, S.; Mohan, B.R.
This study demonstrates the use of numerical approximation techniques, namely the Newton-Raphson, Bisection, Brent, and Secant methods, to estimate the market implied volatility of short-dated Ethereum options with 21-day maturity, obtained from the Deribit Crypto Options and Futures Exchange. The techniques are compared based on their convergence and execution time. It is found that the Newton-Raphson method converges faster and performs the computation in the least time compared with the other methods under consideration. The study further focuses on determining the implied volatility structure of short-maturity Ethereum options. The results show that the implied volatility assumes a deep smile far from the day of expiry, and as the expiry date approaches, the volatility smile broadens. To the best of our knowledge, this is the first work to use approximation techniques to estimate the implied volatility of Ethereum options. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
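The implied-volatility items above numerically invert the Black-Scholes formula, with the Newton-Raphson method reported as the fastest-converging choice. As a rough sketch of that general procedure (not the papers' actual code; the spot, strike, rate, and maturity values below are purely illustrative), a Newton-Raphson iteration on the Black-Scholes call price might look like:

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def bs_vega(S, K, T, r, sigma):
    """Sensitivity of the call price to volatility (the Newton step's derivative)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return S * math.sqrt(T) * math.exp(-0.5 * d1**2) / math.sqrt(2.0 * math.pi)

def implied_vol_newton(price, S, K, T, r, sigma0=0.5, tol=1e-8, max_iter=100):
    """Newton-Raphson iteration for the volatility that reproduces the market price."""
    sigma = sigma0
    for _ in range(max_iter):
        diff = bs_call(S, K, T, r, sigma) - price
        if abs(diff) < tol:
            return sigma
        vega = bs_vega(S, K, T, r, sigma)
        if vega < 1e-12:  # vega too small: the Newton step becomes unreliable
            break
        sigma -= diff / vega
    return sigma
```

As the root-finding item notes, Newton-Raphson may fail to converge (for instance when vega is tiny deep in or out of the money), which is why a convergent bracketed method is kept as a fallback; a round-trip check (price an option at a known volatility, then recover it) is a simple way to sanity-test the iteration.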

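Several of the pruning items above rank convolution channels by a magnitude-based saliency score and remove the lowest-ranked ones. As a plain illustration of that general idea (an L1-norm variant, not the Max3, similarity-based, or regularization-based methods the papers actually propose), channel scoring and pruning might be sketched as:

```python
import numpy as np

def l1_channel_scores(conv_weights):
    """conv_weights: array of shape (out_channels, in_channels, kH, kW).
    Returns one L1-norm saliency score per output channel."""
    return np.abs(conv_weights).reshape(conv_weights.shape[0], -1).sum(axis=1)

def prune_channels(conv_weights, keep_ratio=0.5):
    """Keep the output channels with the highest L1 scores; drop the rest."""
    scores = l1_channel_scores(conv_weights)
    n_keep = max(1, int(round(keep_ratio * len(scores))))
    # indices of surviving channels, restored to their original order
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])
    return conv_weights[keep], keep
```

In a full pipeline the surviving indices would also be used to slice the next layer's input channels, followed by fine-tuning to recover accuracy; this sketch only shows the scoring-and-selection step common to the saliency-based approaches described above.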