Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
Search Results
5 results
Item
LiverNet: efficient and robust deep learning model for automatic diagnosis of sub-types of liver hepatocellular carcinoma cancer from H&E stained liver histopathology images (Springer Science and Business Media Deutschland GmbH, 2021) Aatresh, A.A.; Alabhya, K.; Lal, S.; Kini, J.; Saxena, P.P.
Purpose: Liver cancer is one of the most common cancers in Asia, with a high mortality rate. A common method for liver cancer diagnosis is the manual examination of histopathology images. Because this examination is laborious, we focus on deep learning methods for automatic diagnosis, which offer significant advantages over manual methods. In this paper, we propose a novel deep learning framework for multi-class cancer classification of liver hepatocellular carcinoma (HCC) tumor histopathology images that improves both inference speed and classification quality over other competitive methods. Method: The BreastNet architecture proposed by Toğaçar et al. shows great promise in using convolutional block attention modules (CBAM) for effective cancer classification of H&E stained breast histopathology images. As part of our experiments with this framework, we studied the addition of atrous spatial pyramid pooling (ASPP) blocks to effectively capture multi-scale features in H&E stained liver histopathology data. We classify liver histopathology data into four classes: non-cancerous, low sub-type liver HCC tumor, medium sub-type liver HCC tumor, and high sub-type liver HCC tumor. To demonstrate the robustness and efficacy of our models, we report results on two liver histopathology datasets: a novel KMC dataset and the TCGA dataset.
Results: Our proposed architecture outperforms state-of-the-art architectures for multi-class cancer classification of HCC histopathology images, both in classification quality and in computational efficiency, on the novel proposed KMC liver dataset and the publicly available TCGA-LIHC dataset. We considered precision, recall, F1-score, intersection over union (IoU), accuracy, number of parameters, and FLOPs as comparison metrics. Our experiments showed improved classification performance along with added efficiency: LiverNet outperformed all other frameworks on every metric under comparison, with an approximate improvement of 2% in accuracy and F1-score on the KMC and TCGA-LIHC datasets. Conclusion: To the best of our knowledge, our work is among the first to demonstrate a successful deep learning architecture for multi-class HCC histopathology image classification across the sub-types of liver HCC tumor. Our method achieves a high accuracy of 90.93% on the proposed KMC liver dataset while requiring only 0.5739 million parameters and 1.1934 million floating point operations (FLOPs). © 2021, CARS.

Item
Efficient deep learning architecture with dimension-wise pyramid pooling for nuclei segmentation of histopathology images (Elsevier Ltd, 2021) Aatresh, A.A.; Yatgiri, R.P.; Chanchal, A.K.; Kumar, A.; Ravi, A.; Das, D.; Raghavendra, B.S.; Lal, S.; Kini, J.
Image segmentation remains one of the most vital tasks in computer vision, and more so in medical image processing. Segmentation quality is often the main metric considered, while memory and computational efficiency are overlooked, limiting the practical use of power-hungry models.
In this paper, we propose a novel framework (Kidney-SegNet) that combines an attention-based encoder–decoder architecture and atrous spatial pyramid pooling with highly efficient dimension-wise convolutions. The proposed Kidney-SegNet architecture outperforms existing state-of-the-art deep learning methods in evaluations on two publicly available H&E stained histopathology image datasets, kidney and TNBC breast. Further, our simulation experiments reveal that the computational complexity and memory requirements of the proposed architecture are much lower than those of existing state-of-the-art deep learning methods for nuclei segmentation of H&E stained histopathology images. The source code of our implementation will be available at https://github.com/Aaatresh/Kidney-SegNet. © 2021 Elsevier Ltd

Item
A novel dataset and efficient deep learning framework for automated grading of renal cell carcinoma from kidney histopathology images (Nature Research, 2023) Chanchal, A.K.; Lal, S.; Kumar, R.; Kwak, J.T.; Kini, J.
Kidney cancer cases worldwide are expected to increase persistently, which motivates updating the traditional diagnosis system to meet future challenges. Renal cell carcinoma (RCC) is the most common kidney cancer and is responsible for 80–85% of all renal tumors. This study proposes a robust, computationally efficient, and fully automated Renal Cell Carcinoma Grading Network (RCCGNet) for kidney histopathology images. The proposed RCCGNet contains a shared channel residual (SCR) block, which allows the network to learn feature maps associated with different versions of the input through two parallel paths. The SCR block shares information between two different layers and operates on the shared data separately, so that the paths provide beneficial supplements to each other.
As part of this study, we also introduce a new dataset for the grading of RCC with five different grades. We obtained 722 Hematoxylin & Eosin (H&E) stained slides of different patients, with associated grades, from the Department of Pathology, Kasturba Medical College (KMC), Mangalore, India. We performed comparative experiments that include deep learning models trained from scratch as well as transfer learning using pre-trained ImageNet weights. To show that the proposed model generalizes and is independent of the dataset, we also experimented on one additional well-established dataset, the BreakHis dataset, for eight-class classification. The experimental results show that the proposed RCCGNet is superior to the eight most recent classification methods on both the proposed dataset and the BreakHis dataset in terms of prediction accuracy and computational complexity. © 2023, The Author(s).

Item
Development and evaluation of deep neural networks for the classification of subtypes of renal cell carcinoma from kidney histopathology images (Nature Research, 2025) Chanchal, A.K.; Lal, S.; Suresh, S.
Kidney cancer is a leading cause of cancer-related mortality, with renal cell carcinoma (RCC) being the most prevalent form, accounting for 80–85% of all renal tumors. Traditional diagnosis of kidney cancer requires manual examination and analysis of histopathology images, which is time-consuming, error-prone, and dependent on the pathologist's expertise. Recently, deep learning algorithms have gained significant attention in histopathology image analysis. In this study, we developed an efficient and robust deep learning architecture called RenalNet for the classification of subtypes of RCC from kidney histopathology images. RenalNet is designed to capture cross-channel and inter-spatial features at three different scales simultaneously and combine them.
Cross-channel features refer to the relationships and dependencies between different data channels, while inter-spatial features refer to patterns within small spatial regions. The architecture contains a CNN module called multiple channel residual transformation (MCRT), which focuses on the most relevant morphological features of RCC by fusing information from multiple paths. Further, to improve the network's representation power, a CNN module called Group Convolutional Deep Localization (GCDL) has been introduced, which effectively integrates three different feature descriptors. As part of this study, we also introduce a novel benchmark dataset for the classification of subtypes of RCC from kidney histopathology images. We obtained digital hematoxylin and eosin (H&E) stained WSIs from The Cancer Genome Atlas (TCGA) and extracted regions of interest (ROIs) under the supervision of experienced pathologists, from which patches were created. To demonstrate that the proposed model generalizes and is independent of the dataset, it was evaluated on three well-known datasets. Compared to the best-performing state-of-the-art model, RenalNet achieves accuracies of 91.67%, 97.14%, and 97.24% on the three datasets. Additionally, the proposed method significantly reduces the number of parameters and FLOPs, demonstrating computational efficiency with 2.71× fewer FLOPs and 0.2131× the parameters. © The Author(s) 2025.

Item
Multi head attention based deep learning framework for waxberry fruit object segmentation from high resolution remote sensing images (Nature Research, 2025) Vaghela, R.; Sravya, N.; Lal, S.; Sarda, J.; Thakkar, A.; Patil, S.
In some Asian countries, waxberries are a specialty fruit whose harvest demands substantial labour each season. To ease this burden, automated fruit-picking equipment has seen extensive development over the past decade.
However, accurately segmenting waxberries in orchards remains challenging due to complex environments with overlapping fruits, foliage occlusions, and variable lighting conditions. Most existing segmentation methods are optimized for controlled environments with steady lighting and unobstructed views of the fruit, which limits their effectiveness in real-world scenarios. This paper introduces a fully convolutional neural network, the Multi-Attention Waxberry Network (MAWNet), which effectively addresses challenges such as occlusions, overlapping fruits, and variable lighting conditions. MAWNet is a UNet-based architecture consisting of an enhanced residual block, a transformer block, an Atrous Spatial Pyramid Pooling (ASPP) block, and a newly introduced Multiple Dilation Convolutional (MDC) block. The experimental results validate that the proposed MAWNet model surpasses several state-of-the-art (SOTA) architectures, achieving a remarkable accuracy of 99.63%, an Intersection over Union (IoU) of 96.77%, and a Dice coefficient of 98.34%. © The Author(s) 2025.
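Several of the architectures listed above (LiverNet, Kidney-SegNet, MAWNet) rely on atrous spatial pyramid pooling (ASPP) to capture multi-scale context: the same filter is applied in parallel at several dilation ("atrous") rates, and the responses are combined. The following is a minimal single-channel NumPy sketch of that idea only, not the authors' published code; the function names and the choice of rates are illustrative assumptions.

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """Single-channel 2D cross-correlation (CNN-style, no kernel flip)
    with dilation `rate`, zero-padded so output size matches the input."""
    kh, kw = kernel.shape
    # Effective receptive field of the dilated kernel.
    eh, ew = (kh - 1) * rate + 1, (kw - 1) * rate + 1
    ph, pw = eh // 2, ew // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros(x.shape, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            # Sample input pixels `rate` apart, one per kernel tap.
            patch = xp[i:i + eh:rate, j:j + ew:rate]
            out[i, j] = np.sum(patch * kernel)
    return out

def aspp(x, kernels, rates=(1, 2, 4)):
    """ASPP-style block: parallel dilated convolutions over the same
    input at several rates, stacked channel-wise."""
    branches = [dilated_conv2d(x, k, r) for k, r in zip(kernels, rates)]
    return np.stack(branches, axis=-1)  # shape (H, W, len(rates))
```

Each branch sees the same number of weights but a wider context as the rate grows, which is how these networks capture multi-scale features without extra parameters; a real implementation would use multi-channel convolutions (e.g. a deep learning framework's `Conv2d` with a `dilation` argument) followed by a 1×1 fusion convolution.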
