Conference Papers
Permanent URI for this collection: https://idr.nitk.ac.in/handle/123456789/28506
Search Results
4 results
Item Human Activity Recognition in Smart Home using Deep Learning Techniques (Institute of Electrical and Electronics Engineers Inc., 2021) Kolkar, R.; Geetha, V.
Human Activity Recognition (HAR) research, which aims to understand human activities and anticipate intentions, is developing rapidly in tandem with the widespread availability of sensors. Applications such as elderly care and health-monitoring systems in smart homes rely on smartphones and wearable devices. This paper proposes an effective HAR framework that uses deep learning methods, namely Convolutional Neural Networks (CNN), variants of Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) networks, to recognize activities from smartphone sensor data. The hybrid CNN-LSTM model eliminates handcrafted feature engineering by learning spatial and temporal features directly from the data. Experiments are carried out on the UCI-HAR and WISDM datasets, and comparative results are reported: accuracies of 96.83% and 98.00% on UCI-HAR and WISDM, respectively. © 2021 IEEE.

Item IoT-based Human Activity Recognition Models based on CNN, LSTM and GRU (Institute of Electrical and Electronics Engineers Inc., 2022) Kolkar, R.; Singh Tomar, R.P.; Vasantha, G.
Smartphones' ability to generate data with their built-in sensors has made them widely used for Human Activity Recognition. This work highlights the importance of HAR systems capable of sensing human activities, such as the inertial motion of the human body. Sensors worn on a body part track whole-body motion, and real-time signal processing of the wearable-sensor signals detects body movements. The work aims to open up promising health applications using IoT. Recognising human activities poses many challenges, including achieving sufficient accuracy. This work analyses HAR using CNN, LSTM, and GRU deep learning models to improve recognition accuracy on the UCI-HAR and WISDM datasets. The comparative analysis shows promising results for human activity recognition. © 2022 IEEE.

Item Comparative Study of Pruning Techniques in Recurrent Neural Networks (Springer Science and Business Media Deutschland GmbH, 2023) Choudhury, S.; Rout, A.K.; Pragnesh, T.; Mohan, B.R.
In recent years, there has been rapid development in the field of neural networks. They have evolved from simple feed-forward networks to more complex architectures such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs). CNNs are used for tasks such as image recognition where sequence order is not essential, while RNNs are useful when order matters, such as in machine translation. Increasing the number of layers can improve a network's performance (Alford et al. in Pruned and structurally sparse neural networks, 2018 [1]), but it also increases the network's complexity and the power and time required for training. Introducing sparsity into the network architecture tackles this problem. Pruning is one process through which a neural network can be made sparse (Zhu and Gupta in To prune, or not to prune: exploring the efficacy of pruning for model compression, 2017 [2]). Sparse RNNs can be deployed easily on mobile devices and resource-constrained servers (Wen et al. in Learning intrinsic sparse structures within long short-term memory, 2017 [3]). We investigate two methods for inducing sparsity in RNNs, RNN pruning and automated gradual pruning, examine how each technique impacts the model's performance, and provide a detailed comparison between the two. We also experiment with pruning input-to-hidden and hidden-to-hidden weights. Based on the results of the pruning experiments, we conclude that the complexity of RNNs can be reduced by more than 80%. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

Item Optimizing Performance of OpenMP Parallel Applications through Variable Classification (Institute of Electrical and Electronics Engineers Inc., 2024) Kumar, S.; Talib, M.
OpenMP provides a versatile framework for parallel computing, allowing developers to efficiently transform sequential programs into parallel applications for shared-memory architectures. A central challenge in this transformation is accurately identifying appropriate parallel constructs and clauses, which is critical for maximizing performance and ensuring the correctness of the resulting parallel code. A particularly intricate part of this process is classifying variables according to their data-sharing semantics: firstprivate, private, lastprivate, shared, and reduction. Manual classification is labor-intensive and increasingly error-prone as a program's scale and complexity grow. Although various tools assist with variable classification, they often rely on extensive data-dependence analyses and rigid classification schemes, limiting their effectiveness on large-scale programs with complex scoping requirements. This paper presents a novel, cost-effective approach to automate and improve the accuracy of variable classification for OpenMP parallelization. By reducing manual effort and improving the precision of parallel-construct insertion, the approach aims to significantly optimize the performance of parallel applications, thereby advancing the utility and accessibility of OpenMP for a wide range of computational tasks. © 2024 IEEE.
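The HAR frameworks in the first two items feed fixed-length windows of raw smartphone sensor signals to CNN, LSTM, and GRU models. As an illustration only, the segmentation step can be sketched in plain Python; the 128-sample window with 50% overlap below mirrors the common UCI-HAR convention and is an assumption, not necessarily the authors' exact preprocessing:

```python
def sliding_windows(signal, width=128, step=64):
    """Split a 1-D sensor trace into fixed-length, overlapping windows.

    width=128 and step=64 (50% overlap) follow the common UCI-HAR
    convention; they are illustrative, not the papers' exact settings.
    """
    return [signal[i:i + width]
            for i in range(0, len(signal) - width + 1, step)]

# A synthetic accelerometer trace of 512 samples.
trace = list(range(512))
windows = sliding_windows(trace)
print(len(windows))    # 7 overlapping windows
print(windows[1][0])   # 64: each window starts one step later
```

Each window would then be passed to the network as one training example, with the convolutional layers extracting spatial features within a window and the recurrent layers modeling the temporal order across samples.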
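The pruning study in the third item makes RNN weight matrices sparse by zeroing low-magnitude weights. A minimal sketch of one-shot magnitude pruning in plain Python (the paper compares this family of techniques with automated gradual pruning; the 80% sparsity level below echoes the reported complexity reduction and is purely illustrative):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    One-shot magnitude pruning: rank weights by |w| and zero the
    bottom `sparsity` fraction. Gradual pruning applies the same
    idea repeatedly under a rising sparsity schedule.
    """
    n_prune = int(len(weights) * sparsity)
    # Indices of the n_prune smallest-magnitude weights.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    dropped = set(order[:n_prune])
    return [0.0 if i in dropped else w for i, w in enumerate(weights)]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02, 0.3, -0.1, 0.6, 0.08]
pruned = magnitude_prune(w, 0.8)
print(pruned.count(0.0))   # 8 of 10 weights removed
```

In an RNN this would be applied separately to the input-to-hidden and hidden-to-hidden matrices, which is exactly the axis along which the paper's experiments vary.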
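The fourth item automates the data-sharing classification of variables for OpenMP loops. The paper's actual method is not reproduced here; as a toy illustration of the classification idea only, a heuristic can map a variable's access pattern inside a loop to one of the clauses the abstract lists (the boolean encoding and the rules below are hypothetical):

```python
def classify_variable(read_before_write, written_in_loop,
                      read_after_loop, accumulated):
    """Toy heuristic mapping an access pattern to an OpenMP clause.

    The flags and rules are hypothetical illustrations of
    data-sharing semantics, not the paper's algorithm.
    """
    if accumulated:                  # e.g. s += a[i] across iterations
        return "reduction"
    if not written_in_loop:          # only read: safe to share
        return "shared"
    if read_before_write:            # needs its initial value per thread
        return "firstprivate"
    if read_after_loop:              # last iteration's value escapes
        return "lastprivate"
    return "private"                 # per-iteration scratch variable

print(classify_variable(False, True, False, True))    # reduction
print(classify_variable(False, False, False, False))  # shared
```

A real classifier must additionally handle aliasing, array sections, and cross-iteration dependences, which is where the data-dependence analyses criticized in the abstract become expensive.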
