Conference Papers
Permanent URI for this collection: https://idr.nitk.ac.in/handle/123456789/28506
4 results
Search Results
Item: Stock price movements classification using machine and deep learning techniques - the case study of Indian stock market (Springer Verlag, 2019) Naik, N.; Mohan, B.R.

Forecasting stock price movements is an important topic for traders and stock analysts. Timely prediction of stock yields can secure greater profits and returns. Predicting stock price movement on a daily basis is a difficult task because of the frequent ups and downs of the financial market, so a more powerful predictive model is needed. Most existing work is based on machine learning techniques and considers very few technical indicators. In this paper, we extract 33 technical indicators from daily stock prices (open, high, low, and close). The paper addresses two problems: first, technical-indicator feature selection, identifying the relevant indicators using the Boruta feature selection technique; and second, building an accurate prediction model for stock price movements. To predict stock price movements, we propose machine learning techniques and a deep learning based model. The deep learning model performs better than the machine learning techniques, and the experimental results show a significant improvement of 5% to 6% in classification accuracy. Stocks from the National Stock Exchange of India (NSE) are used in the experiments. © Springer Nature Switzerland AG 2019.

Item: A TFD Approach to Stock Price Prediction (Springer, 2020) Chanduka, B.; Bhat, S.S.; Rajput, N.; Mohan, B.R.

Accurate stock price predictions can help investors make correct decisions about the selling or purchase of stocks. With improvements in data analysis and deep learning algorithms, a variety of approaches have been tried for predicting stock prices.
In this paper, we deal with the prediction of stock prices for automobile companies using a novel TFD (Time Series, Financial Ratios, and Deep Learning) approach. We then study the results over multiple activation functions for multiple companies and reinforce the viability of the proposed algorithm. © 2020, Springer Nature Singapore Pte Ltd.

Item: Skeleton-Based Human Action Recognition Using Motion and Orientation of Joints (Springer Science and Business Media Deutschland GmbH, 2022) Ghosh, S.K.; Rashmi, M.; Mohan, B.R.; Guddeti, R.M.R.

Perceiving human actions accurately from a video is one of the most challenging tasks demanded by many real-time applications in smart environments. Recently, several approaches have been proposed for representing human actions and recognizing them from videos using different data modalities. Especially in the case of images, deep learning-based approaches have demonstrated their classification efficiency. Here, we propose an effective framework for representing actions based on features obtained from 3D skeleton data of humans performing actions. We utilize the motion, pose orientation, and transition orientation of skeleton joints for action representation in the proposed work. In addition, we introduce a lightweight convolutional neural network model for learning features from these action representations in order to recognize the different actions. We evaluated the proposed system on two publicly available datasets using a cross-subject evaluation protocol, and the results showed better performance compared to the existing methods. © 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

Item: Comparative Study of Pruning Techniques in Recurrent Neural Networks (Springer Science and Business Media Deutschland GmbH, 2023) Choudhury, S.; Rout, A.K.; Pragnesh, T.; Mohan, B.R.

In recent years, there has been rapid development in the field of neural networks.
They have evolved from simple feed-forward neural networks to more complex architectures such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs). CNNs are used for tasks such as image recognition, where sequence is not essential, while RNNs are useful when order is important, such as in machine translation. By increasing the number of layers in a network, we can improve its performance (Alford et al., Pruned and structurally sparse neural networks, 2018 [1]). However, this also increases the complexity of the network, and training requires more power and time. We can tackle this problem by introducing sparsity into the architecture of the neural network. Pruning is one process through which a neural network can be made sparse (Zhu and Gupta, To prune, or not to prune: exploring the efficacy of pruning for model compression, 2017 [2]). Sparse RNNs can be easily implemented on mobile devices and resource-constrained servers (Wen et al., Learning intrinsic sparse structures within long short-term memory, 2017 [3]). We investigate the following methods to induce sparsity in RNNs: RNN pruning and automated gradual pruning. We also investigate how these pruning techniques impact the model's performance and provide a detailed comparison of the two. We further experiment with pruning input-to-hidden and hidden-to-hidden weights. Based on the results of the pruning experiments, we conclude that it is possible to reduce the complexity of RNNs by more than 80%. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
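As an aside on the Boruta feature selection technique mentioned in the first item above: the core idea is to compare each real feature's random-forest importance against shuffled "shadow" copies of the features, keeping only features that consistently beat the best shadow. The sketch below is illustrative only, not the authors' implementation; the function name, round count, and majority-vote threshold are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def boruta_style_select(X, y, n_rounds=10, seed=0):
    """Boruta-style relevance check (sketch): a feature is kept if its
    random-forest importance beats the best shuffled 'shadow' feature
    in a majority of rounds."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    hits = np.zeros(n_features, dtype=int)
    for _ in range(n_rounds):
        # Shadow features: each column independently shuffled across rows,
        # destroying any relation to y while keeping marginal distributions.
        shadows = rng.permuted(X, axis=0)
        rf = RandomForestClassifier(n_estimators=100, random_state=0)
        rf.fit(np.hstack([X, shadows]), y)
        imp = rf.feature_importances_
        real, shadow = imp[:n_features], imp[n_features:]
        hits += real > shadow.max()
    return hits > n_rounds // 2
```

The full Boruta algorithm additionally uses a statistical test over the hit counts and iteratively removes rejected features; this sketch keeps only the shadow-comparison idea.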
