Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
13 results
Search Results
Item Soil Type Identification via Deep Learning and Machine Learning Methods (Springer Science and Business Media Deutschland GmbH, 2024) Jalapur, S.; Patil, N.
Soil type identification is a crucial concern in numerous countries: to ensure optimal crop yield, farmers need to accurately identify the soil type suitable for specific crops, which plays a significant role in meeting the heightened global food demand. The objective of this survey paper is to present a thorough and up-to-date overview of prevailing methodologies in soil identification, focusing primarily on image analysis, machine learning, and deep learning techniques. The paper begins by highlighting the significance of soil identification and the limitations inherent in traditional methods. It then delves into the fundamental principles of image processing, deep learning, and spectroscopy, explaining how these techniques can be applied to soil identification. The survey presents an in-depth analysis of various image processing techniques employed for soil identification, including image segmentation, feature extraction, and classification algorithms. Furthermore, it discusses the application of deep learning models for soil classification based on image data. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.

Item Utilizing Deep Learning Methods for Cancer Detection through Analysis of MicroRNA Expression Profiles (Institute of Electrical and Electronics Engineers Inc., 2024) Kantamneni, S.; Hegde, P.; Patil, N.
Integration of cutting-edge computational methods and genomic data analysis has become crucial in the quest for early cancer diagnosis and enhanced diagnostic accuracy. The genomic sequences of microRNAs (miRNAs), which are important cancer biomarkers, provide key information for this. In this study, we propose a novel deep learning-based framework for cancer detection, with a focus on feedforward neural networks (FNNs) and a hybrid DNN model with an accuracy of over 90.7%.
Our method aims to identify detailed genomic patterns and features that improve the sensitivity and specificity of cancer detection by painstakingly curating and preprocessing large miRNA datasets gathered from various patient cohorts. This research sets the stage for further exploration of deep learning methodologies within the context of miRNA-based cancer detection, promising advancements in personalized diagnosis and prognosis. By utilizing these datasets, we demonstrate the effectiveness of our model and its clinical potential, achieving an accuracy of 90.7% for our hybrid Feedforward and Dense Neural Network model as compared with current state-of-the-art machine learning models. This research promises revolutionary advances in customized oncology, providing a route towards improved diagnostic accuracy and early intervention. It also shows that miRNA expression values are not sequential in nature, and it lays the groundwork for the development of deep learning in miRNA-based cancer detection. © 2024 IEEE.

Item Osteosarcoma Bone Cancer Detection (Springer Science and Business Media Deutschland GmbH, 2025) Payani, C.A.; Gupta, C.; Vamsidhar, K.; Bhat, P.; Patil, N.
Osteosarcoma is a type of bone cancer commonly found in the elongated bones of the upper and lower limbs. The precise cause is unknown, but experts believe it is linked to changes in the DNA of bone cells, resulting in the growth of abnormal and harmful bone tissue. If caught early, osteosarcoma is treatable, with about 75% of patients cured when the cancer has not spread to other body parts.
Analyzing biopsy samples can be time-consuming, but advanced supervised deep learning methods can help speed up the process and enhance the efficiency of the diagnosis. Previous studies have already evaluated the performance of deep learning models such as VGG16, VGG19, DenseNet201, and ResNet101, among which ResNet101 performed best with 90.36% accuracy. When it comes to capturing complex image features, however, these earlier models lag behind. We propose EfficientNetV2, Xception, and InceptionV3 models, among which Xception outperformed the other models with 94.5% accuracy on the image dataset. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.

Item A Deep Learning Framework for Plant Disease Detection (Springer Science and Business Media Deutschland GmbH, 2025) Munda, K.K.; Patil, N.
As a major source of nutritious food, the agriculture industry supports economies and feeds people. Yet food production is severely hampered by plant diseases. Major crops such as wheat (21.5%), rice (30.0%), maize (22.6%), potatoes (17.2%), and soybeans (21.4%) suffer significant annual output declines due to numerous diseases, according to recent studies. Since deep learning technologies have been developed, image categorization accuracy has increased dramatically. Using CNN and vision transformer models, we examine the Plant Village dataset in this study, which consists of 54,305 sample images illustrating various plant diseases across 38 classes. Focusing on potato leaves, with a total of 2151 samples, we evaluate the model's performance against other models in terms of training and testing accuracy and obtain impressive results. The models' respective training accuracies are 97.27% for the CNN and 94.7% for the ViT model, while their validation accuracies are 100% and 94.27%. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
2025.

Item Automated Colorization of Grayscale Images Using Superpixels and K-Means Clustering (Springer Science and Business Media Deutschland GmbH, 2025) Kulkarni, B.C.; Teja, B.; Hegde, A.R.; Bhat, P.; Patil, N.
The process of transforming grayscale photos into aesthetically pleasing color images is called colorization. Its primary objective is to convince the audience of the realism of the outcome. Natural scenery makes up the majority of the grayscale photographs that require colorization. A broad range of colorization techniques has been created over the past 20 years, varying from algorithmically simple procedures that demand time and effort due to inevitable human participation to more complex and more automated methods. The field of automatic conversion mixes deep learning, machine learning, and art. Most earlier deep learning works use every pixel value to train their models, which is computationally expensive. We present a methodology for colorizing grayscale images using a convolutional neural network (CNN); our method uses a combination of superpixel segmentation and K-Means clustering to significantly reduce the number of pixel values. The process begins with the conversion of grayscale images to superpixels, which are perceptually uniform regions that aid efficient colorization. Subsequently, K-Means clustering is applied within each superpixel to identify dominant color clusters, followed by quantization of the color information to simplify its representation. The prepared input, comprising grayscale images and quantized color information, is then fed into a CNN for colorization, leveraging spatial coherence and semantic context to predict plausible colors for grayscale pixels. The proposed methodology is evaluated on a diverse set of grayscale images, demonstrating its effectiveness in producing vibrant and visually appealing colorized outputs.
Through experiments and analysis, we showcase the potential applications and benefits of the proposed approach in historical photograph restoration, movie colorization, and other domains requiring accurate and efficient grayscale image colorization. We use SSIM and PSNR as our evaluation metrics: SSIM is calculated from the similarity of the luminance and brightness values of the reference and obtained RGB images, and PSNR is calculated from the Mean Squared Error (MSE) relative to the peak signal values within the images. Our methodology's SSIM and PSNR for the considered flower class are 81.5 and 25.6, respectively. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.

Item Image Analysis of Nuclei Histopathology Using Deep Learning: A Review of Segmentation, Detection, and Classification (Springer, 2023) Kadaskar, M.; Patil, N.
Deep learning has recently advanced in its applicability to computer vision challenges and has become the most widely used technique in histopathology image analysis. Nuclei instance segmentation, detection, and classification are such tasks. Reliable analysis of these image slides is critical in cancer identification, treatment, and care, and researchers have recently taken an interest in this issue. This study reviews the categorization and investigation of strategies utilized in recent works to improve the effectiveness of automated nuclei segmentation, detection, and classification in histopathology images. It critically examines state-of-the-art deep learning techniques, analyzes the trends, identifies the challenges, and highlights future directions for research. The taxonomy includes deep learning techniques, enhancement, and optimization methods. The survey findings will help to overcome the challenges of nuclei segmentation, detection, and classification while improving the performance of models and thus aid future research plans.
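The colorization entry above evaluates outputs with SSIM and PSNR, with PSNR derived from the mean squared error against the reference image. As a rough illustration of how these two metrics are computed, here is a minimal NumPy sketch (a global, single-window SSIM with the standard constants; not the authors' evaluation code, and the image arrays are hypothetical):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio, in dB, from the mean squared error."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(ref, test, peak=255.0):
    """Global (single-window) SSIM over a grayscale image pair."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2  # stabilizing constants
    mx, my = x.mean(), y.mean()                       # luminance terms
    vx, vy = x.var(), y.var()                         # contrast terms
    cov = ((x - mx) * (y - my)).mean()                # structure term
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Hypothetical 8-bit images: a reference and a lightly degraded copy.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.normal(0, 5, size=img.shape), 0, 255)
```

An identical pair scores SSIM 1 and infinite PSNR; production pipelines usually compute SSIM over sliding local windows rather than one global window as sketched here.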
© 2023, The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd.

Item An enhanced protein secondary structure prediction using deep learning framework on hybrid profile based features (Elsevier Ltd, 2020) Kumar, P.; Bankapur, S.; Patil, N.
Accurate protein secondary structure prediction (PSSP) is essential to identify structural classes, protein folds, and the tertiary structure. For identifying the secondary structure, experimental methods exhibit higher precision, with the trade-off of high cost and time. In this study, we propose an effective prediction model consisting of hybrid 42-dimensional features with a combination of a convolutional neural network (CNN) and a bidirectional recurrent neural network (BRNN). The proposed model is assessed on four benchmark datasets, namely CB6133, CB513, CASP10, and CASP11, using Q3, Q8, and segment overlap (Sov) metrics. The proposed model reported Q3 accuracies of 85.4%, 85.4%, 83.7%, and 81.5%, and Q8 accuracies of 75.8%, 73.5%, 72.2%, and 70% on the CB6133, CB513, CASP10, and CASP11 datasets, respectively. The results of the proposed model improve on the popular existing models by a minimum of 2.5% in Q3 and 2.1% in Q8 accuracy on the CB513 dataset. Further, the quality of the Q3 results is validated by structural class prediction and compared with PSI-PRED; the experiment showed that the quality of the Q3 results of the proposed model is higher than that of PSI-PRED. © 2019 Elsevier B.V.

Item Cardamom Plant Disease Detection Approach Using EfficientNetV2 (Institute of Electrical and Electronics Engineers Inc., 2022) Sunil, C.K.; Jaidhar, C.D.; Patil, N.
Cardamom is the queen of spices. It is indigenously grown in the evergreen forests of Karnataka, Kerala, Tamil Nadu, and the northeastern states of India, and India is the third largest producer of cardamom. Plant diseases have a catastrophic influence on food production safety; they reduce the quality and quantity of agricultural products.
Plant diseases may cause significant losses or, in dreadful cases, no harvest at all. Various diseases and pests affect the growth of cardamom plants at different stages and reduce crop yields. This study concentrated on two diseases of cardamom plants, Colletotrichum Blight and Phyllosticta Leaf Spot, and three diseases of grape, Black Rot, ESCA, and Isariopsis Leaf Spot. Various methods have been proposed for plant disease detection, and deep learning has become the preferred method because of its spectacular accomplishments. In this study, U2-Net was used to remove the unwanted background of an input image by selecting multiscale features. This work proposes a cardamom plant disease detection approach using the EfficientNetV2 model. A comprehensive set of experiments was carried out to ascertain the performance of the proposed approach and compare it with other models such as EfficientNet and a Convolutional Neural Network (CNN). The experimental results showed that the proposed approach achieved a detection accuracy of 98.26%. © 2013 IEEE.

Item Utilizing Deep Learning Models and Transfer Learning for COVID-19 Detection from X-Ray Images (Springer, 2023) Agrawal, S.; Venkatesh, V.; Nara, M.; Patil, N.
COVID-19 has been a global pandemic. Flattening the curve requires intensive testing, and the world has faced a shortage of testing equipment and of medical personnel with expertise. There is a need to automate and aid the detection process. Several diagnostic tools are currently used for COVID-19, including X-rays and CT scans. This study focuses on detecting COVID-19 from X-rays. We pursue two types of problems: binary classification (COVID-19 and No COVID-19) and multi-class classification (COVID-19, No COVID-19, and Pneumonia). We examine and evaluate several classic models, namely VGG19, ResNet50, MobileNetV2, InceptionV3, Xception, and DenseNet121, as well as specialized models such as DarkCOVIDNet and COVID-Net, and show that ResNet50 models perform best.
We also propose a simple modification to the ResNet50 model, which gives a binary classification accuracy of 99.20% and a multi-class classification accuracy of 86.13%, cementing ResNet50's abilities for COVID-19 detection and its ability to differentiate pneumonia from COVID-19. The proposed model's predictions were interpreted via LIME, which provides contours, and Grad-CAM, which provides heat-maps over the classifier's area(s) of interest, i.e., the COVID-19-concentrated regions in the lungs; we find that LIME explains the results better. These explanations support our model's ability to generalize. The proposed model is intended to be deployed for free use. © 2023, The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd.

Item Tomato plant disease classification using Multilevel Feature Fusion with adaptive channel spatial and pixel attention mechanism (Elsevier Ltd, 2023) Sunil, C.K.; Jaidhar, C.D.; Patil, N.
Agriculture's productivity has decreased in the last decade due to climate change and inappropriate usage of water, fertilizer, and pesticides, which stimulate plant diseases. Plant pathogens are the prime threat to agriculture; diseases impair the development of plants and affect the quality and yield of the crop. To enhance crop yield and quality, early detection of pathogens and application of the proper treatments are essential. Deep learning approaches produce promising results for classifying input images, and the results vary for many reasons, such as data imbalance and fewer or identical features among the classes of the dataset. In this work, tomato plant disease classification using a Multilevel Feature Fusion Network (MFFN) is proposed. It employs ResNet50, the MFFN, and an Adaptive Attention Mechanism, which combines channel, spatial, and pixel attention, to classify tomato plant leaf images.
The proposed deep learning-based approach is trained and tested on a tomato plant leaves dataset and achieved 99.88% training accuracy, 99.88% validation accuracy, and 99.83% external testing accuracy, outperforming the existing approaches on the tomato plant dataset. Further, this work proposes a pesticide prescription module that provides pesticide information based on the type of leaf disease. © 2023 Elsevier Ltd
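The tomato entry above combines channel, spatial, and pixel attention in its adaptive attention mechanism. To illustrate the channel-attention idea alone, here is a minimal squeeze-and-excitation-style sketch in NumPy with random weights; the layer sizes and reduction ratio are illustrative assumptions, not the authors' MFFN implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention.

    feat: feature map of shape (C, H, W); w1: (C, C//r); w2: (C//r, C).
    Global-average-pools each channel into a descriptor, passes it through
    a small bottleneck MLP, and rescales channels by the sigmoid gates.
    """
    squeeze = feat.mean(axis=(1, 2))                     # (C,) channel descriptor
    excite = sigmoid(np.maximum(squeeze @ w1, 0) @ w2)   # (C,) gates in (0, 1)
    return feat * excite[:, None, None]                  # reweighted feature map

# Hypothetical sizes: 16 channels, an 8x8 spatial map, reduction ratio r = 4.
rng = np.random.default_rng(0)
feat = rng.normal(size=(16, 8, 8))
w1 = rng.normal(size=(16, 4))
w2 = rng.normal(size=(4, 16))
out = channel_attention(feat, w1, w2)
```

Because each gate lies in (0, 1), the module can only attenuate channels relative to the input; spatial and pixel attention apply the same gating idea per location and per pixel rather than per channel.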
