Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
7 results
Search Results
Item DETECTION OF BUILDING INFRASTRUCTURE CHANGES FROM BI-TEMPORAL REMOTE SENSING IMAGES (Institute of Electrical and Electronics Engineers Inc., 2024) Sravya, N.; Kevala, V.D.; Akshaya, P.; Basavaraju, K.S.; Lal, S.; Gupta, D.

Change detection (CD) from satellite images is crucial for Earth observation, especially for monitoring urban growth patterns. Recent research has largely focused on Deep Learning (DL) techniques, particularly variants of Convolutional Neural Network (CNN) architectures. While DL methods have shown promise, many models fail to preserve the shape of changed areas and to predict their edges correctly. This paper introduces a CNN-based Building Infrastructure Change Detection Network (BICDNet) for predicting changes from bi-temporal remote sensing images. The model leverages a modified Fully Convolutional Siamese-Difference Network to extract detailed features from the input images, including a Multi-Feature Extraction (MFE) block designed to capture features from changed areas of various sizes. To further refine these feature pairs, a modified Atrous Spatial Pyramid Pooling (MASPP) module is integrated, which effectively captures contextual information at multiple scales. A comparative study shows that the proposed BICDNet outperforms existing CD models.

© 2024 IEEE.

Item LiverNet: efficient and robust deep learning model for automatic diagnosis of sub-types of liver hepatocellular carcinoma cancer from H&E stained liver histopathology images (Springer Science and Business Media Deutschland GmbH, 2021) Aatresh, A.A.; Alabhya, K.; Lal, S.; Kini, J.; Saxena, P.P.

Purpose: Liver cancer is one of the most common cancers in Asia, with a high mortality rate. A common method for liver cancer diagnosis is the manual examination of histopathology images.
Due to its laborious nature, we focus on alternate deep learning methods for automatic diagnosis, which provide significant advantages over manual methods. In this paper, we propose a novel deep learning framework for multi-class cancer classification of liver hepatocellular carcinoma (HCC) tumor histopathology images that improves on other competitive methods in both inference speed and classification quality.

Method: The BreastNet architecture proposed by Togacar et al. shows great promise in using convolutional block attention modules (CBAM) for effective cancer classification in H&E stained breast histopathology images. As part of our experiments with this framework, we have studied the addition of atrous spatial pyramid pooling (ASPP) blocks to effectively capture multi-scale features in H&E stained liver histopathology data. We classify liver histopathology data into four classes: non-cancerous, low sub-type liver HCC tumor, medium sub-type liver HCC tumor, and high sub-type liver HCC tumor. To demonstrate the robustness and efficacy of our models, we report results on two liver histopathology datasets: a novel KMC dataset and the TCGA dataset.

Results: Our proposed architecture outperforms state-of-the-art architectures for multi-class cancer classification of HCC histopathology images, both in classification quality and in computational efficiency, on the novel KMC liver dataset and the publicly available TCGA-LIHC dataset. We considered precision, recall, F1-score, intersection over union (IoU), accuracy, number of parameters, and FLOPs as comparison metrics. Our experiments show improved classification performance along with added efficiency: LiverNet outperforms all other frameworks in every metric under comparison, with an approximate improvement of 2% in accuracy and F1-score on the KMC and TCGA-LIHC datasets.
Conclusion: To the best of our knowledge, our work is among the first to provide concrete proof and demonstrate results for a successful deep learning architecture for multi-class HCC histopathology image classification across the various sub-types of liver HCC tumor. Our method achieves a high accuracy of 90.93% on the proposed KMC liver dataset while requiring only 0.5739 million parameters and 1.1934 million floating point operations.

© 2021, CARS.

Item Efficient deep learning architecture with dimension-wise pyramid pooling for nuclei segmentation of histopathology images (Elsevier Ltd, 2021) Aatresh, A.A.; Yatgiri, R.P.; Chanchal, A.K.; Kumar, A.; Ravi, A.; Das, D.; Raghavendra, B.S.; Lal, S.; Kini, J.

Image segmentation remains one of the most vital tasks in computer vision, and more so in medical image processing. Segmentation quality is often the main metric considered, with memory and computational efficiency overlooked, limiting the practical use of power-hungry models. In this paper, we propose a novel framework (Kidney-SegNet) that combines the effectiveness of an attention-based encoder-decoder architecture and atrous spatial pyramid pooling with highly efficient dimension-wise convolutions. The segmentation results of the proposed Kidney-SegNet architecture outperform existing state-of-the-art deep learning methods on two publicly available H&E stained histopathology image datasets: a kidney dataset and the TNBC breast dataset. Further, our simulation experiments reveal that the computational complexity and memory requirements of the proposed architecture are very efficient compared to existing state-of-the-art deep learning methods for nuclei segmentation of H&E stained histopathology images. The source code of our implementation will be available at https://github.com/Aaatresh/Kidney-SegNet.
© 2021 Elsevier Ltd

Item High-resolution deep transferred ASPPU-Net for nuclei segmentation of histopathology images (Springer Science and Business Media Deutschland GmbH, 2021) Chanchal, A.K.; Lal, S.; Kini, J.

Purpose: The increasing incidence of cancer worldwide has become a major public health issue. Manual histopathological analysis is a common diagnostic method for cancer detection, but due to the complex structure and wide variability in the texture of histopathology images, diagnosing these images manually is challenging for pathologists. Automatic segmentation of histopathology images for cancer diagnosis has been an active field of exploration in recent years. The purpose of the proposed method is the segmentation and analysis of histopathology images for diagnosis using an efficient deep learning algorithm.

Method: To improve segmentation performance, we propose a deep learning framework that consists of a high-resolution encoder path, an atrous spatial pyramid pooling bottleneck module, and a powerful decoder. Compared to benchmark segmentation models with deep and thin paths, our network is wide and deep, effectively leveraging the strength of residual learning as well as the encoder-decoder architecture.

Results: We performed careful experimentation and analysis on three publicly available datasets: the kidney dataset, the Triple Negative Breast Cancer (TNBC) dataset, and the MoNuSeg histopathology image dataset. We used the two most preferred performance metrics, F1 score and aggregated Jaccard index (AJI), to evaluate the proposed model. The measured (F1, AJI) scores are (0.9684, 0.9394), (0.8419, 0.7282), and (0.8344, 0.7169) on the kidney, TNBC, and MoNuSeg datasets, respectively.
Conclusion:

Item ProsGradNet: An effective and structured CNN approach for prostate cancer grading from histopathology images (Elsevier Ltd, 2025) Prabhu, A.; Sravya, N.; Lal, S.; Kini, J.

Prostate cancer (PCa) is one of the most prevalent and potentially fatal malignancies affecting men globally. The incidence of prostate cancer is expected to double by 2040, posing significant health challenges. This anticipated increase underscores the urgent need for early and precise diagnosis to facilitate effective treatment and management. Histopathological analysis using the Gleason grading system plays a pivotal role in clinical decision making by classifying cancer subtypes based on their cellular characteristics. This paper proposes a novel deep CNN model named Prostate Grading Network (ProsGradNet) for the automatic grading of PCa from histopathological images. Central to the approach is the novel Context Guided Shared Channel Residual (CGSCR) block, which introduces structured methods for channel splitting and clustering by varying group sizes. By grouping channels into 2, 4, and 8, it prioritizes deeper-layer features, enhancing local semantic content and abstract feature representation. This methodological advancement significantly boosts classification accuracy, achieving an impressive 92.88% on the Prostate Gleason dataset and outperforming other CNN models. To demonstrate the generalizability of ProsGradNet across datasets, experiments are also performed on the Kasturba Medical College (KMC) Kidney dataset. The results further confirm the superiority of the proposed ProsGradNet model, with a classification accuracy of 92.68% on the KMC Kidney dataset. This demonstrates the model's potential to be applied effectively across various histopathological datasets, making it a valuable tool in the fight against cancer.
© 2025 Elsevier Ltd

Item Development and evaluation of deep neural networks for the classification of subtypes of renal cell carcinoma from kidney histopathology images (Nature Research, 2025) Chanchal, A.K.; Lal, S.; Suresh, S.

Kidney cancer is a leading cause of cancer-related mortality, with renal cell carcinoma (RCC) being the most prevalent form, accounting for 80–85% of all renal tumors. Traditional diagnosis of kidney cancer requires manual examination and analysis of histopathology images, which is time-consuming, error-prone, and dependent on the pathologist's expertise. Recently, deep learning algorithms have gained significant attention in histopathology image analysis. In this study, we developed an efficient and robust deep learning architecture called RenalNet for the classification of subtypes of RCC from kidney histopathology images. RenalNet is designed to capture cross-channel and inter-spatial features at three different scales simultaneously and combine them. Cross-channel features refer to the relationships and dependencies between different data channels, while inter-spatial features refer to patterns within small spatial regions. The architecture contains a CNN module called Multiple Channel Residual Transformation (MCRT), which focuses on the most relevant morphological features of RCC by fusing information from multiple paths. Further, to improve the network's representation power, a CNN module called Group Convolutional Deep Localization (GCDL) has been introduced, which effectively integrates three different feature descriptors. As part of this study, we also introduce a novel benchmark dataset for the classification of subtypes of RCC from kidney histopathology images. We obtained digital hematoxylin and eosin (H&E) stained WSIs from The Cancer Genome Atlas (TCGA) and extracted regions of interest (ROIs) under the supervision of experienced pathologists, resulting in the creation of patches.
To demonstrate that the proposed model generalizes and is independent of the dataset, it has been evaluated on three well-known datasets. Compared to the best-performing state-of-the-art model, RenalNet achieves accuracies of 91.67%, 97.14%, and 97.24% on the three datasets. Additionally, the proposed method significantly reduces the number of parameters and FLOPs, demonstrating computational efficiency with 2.71× FLOPs and 0.2131× parameters.

© The Author(s) 2025.

Item Multi head attention based deep learning framework for waxberry fruit object segmentation from high resolution remote sensing images (Nature Research, 2025) Vaghela, R.; Sravya, N.; Lal, S.; Sarda, J.; Thakkar, A.; Patil, S.

In some Asian countries, waxberries are a specialty fruit whose harvest demands substantial labour each season. To ease this burden, automated fruit-picking equipment has seen extensive development over the past decade. However, accurately segmenting waxberries in orchards remains challenging due to complex environments with overlapping fruits, foliage occlusions, and variable lighting conditions. Most existing segmentation methods are optimized for controlled environments with steady lighting and unobstructed views of the fruit, which limits their effectiveness in real-world scenarios. This paper introduces a fully convolutional neural network, the Multi-Attention Waxberry Network (MAWNet), which effectively addresses challenges such as occlusions, overlapping fruits, and variable lighting conditions. MAWNet is a UNet-based architecture consisting of an enhanced residual block, a transformer block, an Atrous Spatial Pyramid Pooling (ASPP) block, and the newly introduced Multiple Dilation Convolutional (MDC) block. The experimental results validate that the proposed MAWNet model surpasses several state-of-the-art (SOTA) architectures, achieving a remarkable accuracy of 99.63%, an Intersection over Union (IoU) of 96.77%, and a Dice coefficient of 98.34%.
© The Author(s) 2025.
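Several of the items above (LiverNet, Kidney-SegNet, ASPPU-Net, MAWNet) rely on atrous spatial pyramid pooling: parallel convolution branches whose kernels are dilated at different rates, so one block sees context at several scales for the same parameter cost. As an illustration of the general idea only — a toy 1-D pure-Python sketch, not any of these papers' implementations, with `dilated_conv1d` and `aspp_1d` as hypothetical names — the mechanism can be shown like this:

```python
def dilated_conv1d(signal, kernel, rate):
    """Valid 1-D convolution with kernel taps spaced `rate` samples apart."""
    span = (len(kernel) - 1) * rate  # distance covered by the dilated kernel
    return [
        sum(kernel[k] * signal[i + k * rate] for k in range(len(kernel)))
        for i in range(len(signal) - span)
    ]

def aspp_1d(signal, kernel, rates=(1, 2, 4)):
    """ASPP-style block: one branch per dilation rate, all outputs kept."""
    return [dilated_conv1d(signal, kernel, r) for r in rates]

x = list(range(1, 13))             # toy "feature row"
branches = aspp_1d(x, [1, 0, -1])  # same 3-tap difference kernel per branch
# The rate-1 branch compares samples 2 apart; the rate-4 branch compares
# samples 8 apart with the same kernel: identical cost, much wider context.
```

Concatenating such branches (as the MASPP and MDC blocks above do in 2-D with learned kernels) is what lets these networks capture changed regions or nuclei of widely varying sizes.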

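The first item (BICDNet) builds on a Fully Convolutional Siamese-Difference design: the same encoder, with shared weights, processes both temporal images, and the difference of the two feature maps highlights what changed. As a hypothetical illustration of that general idea only — the "encoder" here is a toy scalar multiplication, not the paper's network, and `change_map` and its `threshold` are invented for the sketch — the weight-sharing and differencing look like this:

```python
def encode(image, weight):
    """Toy stand-in for a shared CNN encoder: one multiplicative weight."""
    return [[weight * px for px in row] for row in image]

def change_map(img_t1, img_t2, weight=0.5, threshold=0.75):
    """Binary change mask from the absolute difference of shared features."""
    f1 = encode(img_t1, weight)  # time-1 features
    f2 = encode(img_t2, weight)  # time-2 features, SAME weights (Siamese)
    return [
        [1 if abs(a - b) > threshold else 0 for a, b in zip(r1, r2)]
        for r1, r2 in zip(f1, f2)
    ]

before = [[0, 0], [0, 0]]
after_ = [[0, 9], [0, 0]]        # a structure appeared at position (0, 1)
assert change_map(before, after_) == [[0, 1], [0, 0]]
```

Because both images pass through identical weights, unchanged regions produce near-identical features and cancel in the difference; the real model then refines this difference map with the MFE and MASPP blocks described in the abstract.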