Conference Papers
Permanent URI for this collection: https://idr.nitk.ac.in/handle/123456789/28506

Search Results (5 results)
Item: Soil Type Identification via Deep Learning and Machine Learning Methods (Springer Science and Business Media Deutschland GmbH, 2024). Jalapur, S.; Patil, N.

Soil type identification is a crucial concern in numerous countries: to ensure optimal crop yield, farmers need to accurately identify the soil type suited to specific crops, which plays a significant role in meeting the heightened global food demand. The objective of this survey paper is to present a thorough and up-to-date overview of prevailing methodologies in soil identification, focusing primarily on image analysis, machine learning, and deep learning techniques. The paper begins by highlighting the significance of soil identification and the limitations inherent in traditional methods. It then delves into the fundamental principles of image processing, deep learning, and spectroscopy, explaining how these techniques can be applied to soil identification. The survey presents an in-depth analysis of various image processing techniques employed for soil identification, including image segmentation, feature extraction, and classification algorithms. Furthermore, it discusses the application of deep learning models for soil classification based on image data. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.

Item: Utilizing Deep Learning Methods for Cancer Detection through Analysis of MicroRNA Expression Profiles (Institute of Electrical and Electronics Engineers Inc., 2024). Kantamneni, S.; Hegde, P.; Patil, N.

The integration of cutting-edge computational methods and genomic data analysis has become crucial in the quest for early cancer diagnosis and enhanced diagnostic accuracy. The genomic sequences of microRNAs (miRNAs), which are important cancer biomarkers, provide valuable information for this. In this study, we propose a novel deep learning-based framework for cancer detection with a focus on feedforward neural networks (FNNs) and a hybrid dense neural network (DNN) model with an accuracy of over 90.7%.
Our method aims to identify detailed genomic patterns and features that improve the sensitivity and specificity of cancer detection by carefully curating and preprocessing large miRNA datasets gathered from various patient cohorts. By utilizing these datasets, we demonstrate the effectiveness of our model and its clinical potential, achieving an accuracy of 90.7% for our hybrid Feedforward and Dense Neural Network model compared with current state-of-the-art machine learning models. This research promises revolutionary advances in personalized oncology, providing a route towards improved diagnostic accuracy and early intervention. It also shows that miRNA expression values are not sequential in nature, and it lays the groundwork for further exploration of deep learning methodologies in miRNA-based cancer detection. © 2024 IEEE.

Item: Osteosarcoma Bone Cancer Detection (Springer Science and Business Media Deutschland GmbH, 2025). Payani, C.A.; Gupta, C.; Vamsidhar, K.; Bhat, P.; Patil, N.

Osteosarcoma is a type of bone cancer most commonly found in the long bones of the upper and lower limbs. The precise cause is unknown, but experts believe it is linked to changes in the DNA of bone cells, resulting in the growth of abnormal and harmful bone tissue. If caught early, osteosarcoma is treatable, with about 75% of patients cured when the cancer has not spread to other body parts.
Analyzing biopsy samples can be time-consuming, but advanced computer programs, known as supervised deep learning methods, can help speed up the process and enhance the efficiency of diagnosis. Previous studies have evaluated the performance of deep learning models such as VGG16, VGG19, DenseNet201, and ResNet101, among which ResNet101 performed best with 90.36% accuracy. When it comes to capturing complex image features, however, these earlier models fall short. We propose EfficientNetV2, Xception, and InceptionV3 models, among which Xception outperformed the others with 94.5% accuracy on the image dataset. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.

Item: A Deep Learning Framework for Plant Disease Detection (Springer Science and Business Media Deutschland GmbH, 2025). Munda, K.K.; Patil, N.

As a major source of nutritious food, the agriculture industry supports economies and feeds people. Yet food production is severely hampered by plant diseases. According to recent studies, major crops such as wheat (21.5%), rice (30.0%), maize (22.6%), potatoes (17.2%), and soybeans (21.4%) suffer significant annual output declines due to numerous diseases. Since the advent of deep learning technologies, image classification accuracy has increased dramatically. In this study, we examine the Plant Village dataset, which consists of 54,305 sample images illustrating various plant disease species in 38 classes, using CNN and vision transformer models. Focusing on potato leaves, with a total of 2151 samples, we evaluate the models' performance against other models in terms of training and testing accuracy and obtain impressive results. The training accuracy is 97.27% for the CNN and 94.7% for the ViT model, while their validation accuracy is 100% and 94.27%, respectively. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
2025.

Item: Automated Colorization of Grayscale Images Using Superpixels and K-Means Clustering (Springer Science and Business Media Deutschland GmbH, 2025). Kulkarni, B.C.; Teja, B.; Hegde, A.R.; Bhat, P.; Patil, N.

The process of transforming grayscale photos into aesthetically pleasing color images is called colorization. The primary objective of colorization is to convince the audience of the realism of the outcome. Natural scenery makes up the majority of the grayscale photographs that require colorization. A broad range of colorization techniques has been created over the past 20 years; these vary from algorithmically simple procedures that demand time and effort due to unavoidable human participation to more sophisticated, largely automated approaches. The complex field of automatic conversion mixes deep learning, machine learning, and art. Most earlier works that use deep learning train their models on every pixel value, which is computationally expensive. We present a methodology for colorizing grayscale images using a convolutional neural network (CNN); our method uses a combination of superpixel segmentation and K-Means clustering to significantly reduce the number of pixel values. The process begins with the conversion of grayscale images to superpixels, perceptually uniform regions that aid efficient colorization. Subsequently, K-Means clustering is applied within each superpixel to identify dominant color clusters, followed by quantization of the color information to simplify its representation. The prepared input, comprising grayscale images and quantized color information, is then fed into a CNN for colorization, leveraging spatial coherence and semantic context to predict plausible colors for grayscale pixels. The proposed methodology is evaluated on a diverse set of grayscale images, demonstrating its effectiveness in producing vibrant and visually appealing colorized outputs.
Through experiments and analysis, we showcase the potential applications and benefits of the proposed approach in historical photograph restoration, movie colorization, and other domains requiring accurate and efficient grayscale image colorization. We use SSIM and PSNR as our evaluation metrics. SSIM is calculated from the similarity of luminance, contrast, and structure between the reference RGB images and those obtained from the grayscale images, and PSNR is calculated from the Mean Squared Error (MSE) relative to the peak signal value of the images. Our methodology's SSIM and PSNR for the considered flower class are 81.5 and 25.6, respectively. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
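As a point of reference for the PSNR metric used in the abstract above, the following is a minimal sketch (not the authors' code) of how PSNR is conventionally computed from the MSE between two 8-bit images; here the images are assumed to be flattened into plain Python lists of pixel values, and the function name is illustrative.

```python
import math

def psnr(original, reconstructed, max_value=255.0):
    """Peak signal-to-noise ratio in decibels; higher means the
    reconstructed image is closer to the original."""
    # Mean squared error over all corresponding pixel values.
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10((max_value ** 2) / mse)
```

For example, two images differing by 5 at every pixel have MSE = 25, giving a PSNR of about 34 dB; values in the mid-20s, like the 25.6 reported above, correspond to a larger but still moderate reconstruction error.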
