Browsing by Author "Kini, J."

Now showing 1 - 14 of 14
  • A novel dataset and efficient deep learning framework for automated grading of renal cell carcinoma from kidney histopathology images
    (Nature Research, 2023) Chanchal, A.K.; Lal, S.; Kumar, R.; Kwak, J.T.; Kini, J.
    Kidney cancer cases worldwide are expected to increase persistently, motivating modification of the traditional diagnosis system to meet future challenges. Renal Cell Carcinoma (RCC) is the most common kidney cancer and accounts for 80–85% of all renal tumors. This study proposes a robust, computationally efficient, and fully automated Renal Cell Carcinoma Grading Network (RCCGNet) for kidney histopathology images. The proposed RCCGNet contains a shared channel residual (SCR) block, which allows the network to learn feature maps associated with different versions of the input through two parallel paths. The SCR block shares information between two different layers and operates on the shared data separately, so that the two paths provide beneficial supplements to each other. As part of this study, we also introduce a new dataset for the grading of RCC with five different grades. We obtained 722 Hematoxylin & Eosin (H&E) stained slides of different patients, with associated grades, from the Department of Pathology, Kasturba Medical College (KMC), Mangalore, India. We performed comparative experiments that include deep learning models trained from scratch as well as transfer learning using ImageNet pre-trained weights. To show that the proposed model generalizes beyond this dataset, we also experimented with the well-established BreakHis dataset for eight-class classification. The experimental results show that RCCGNet is superior to the eight most recent classification methods on both the proposed dataset and the BreakHis dataset in terms of prediction accuracy and computational complexity. © 2023, The Author(s).
  • A novel deep classifier framework for automated molecular subtyping of breast carcinoma using immunohistochemistry image analysis
    (Elsevier Ltd, 2022) Mathew, T.; Niyas, S.; Johnpaul, C.I.; Kini, J.; Rajan, J.
    Breast carcinoma has various subtypes based on the genetic factors involved in the pathogenesis of the malignancy. Identifying the exact subtype and providing targeted treatment can improve survival chances. Molecular subtyping through immunohistochemistry analysis is a pathology procedure for determining the subtype of breast cancer. The existing manual procedure is tedious and involves assessing the status of four vital molecular biomarkers present in the tumor tissues. In this paper, a deep learning-based framework for automated molecular subtyping of breast cancer is proposed. Digital slide images of the four biomarkers are processed separately by the proposed framework. In the preprocessing stage, non-informative background regions are separated from the images. Patches extracted from the foreground regions are classified into target classes using convolutional neural network models trained for this purpose. Classification results are post-processed to predict the status of all four biomarkers. The predictions for the individual biomarkers are finally consolidated as per clinical guidelines to determine the subtype of the cancer. The proposed system is evaluated on individual biomarker status prediction and on patient-level subtype classification. For patient-level evaluation of the biomarkers ER, PR, Ki-67, and HER2, the proposed method gives F1 scores of 1.00, 1.00, 0.90, and 0.94, respectively, whereas for molecular subtyping an F1 score of 0.89 is obtained. In both aspects, the proposed framework gives significant results that show the effectiveness of our approach. © 2022 Elsevier Ltd
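The final consolidation step described above can be sketched in code. The paper does not spell out its exact clinical rule, so this follows the widely used St. Gallen-style surrogate definitions; the function name, thresholds, and subtype labels here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical consolidation of the four IHC biomarker statuses into a
# surrogate molecular subtype (St. Gallen-style rules, assumed for
# illustration; not taken from the paper).

def molecular_subtype(er: bool, pr: bool, her2: bool, ki67_high: bool) -> str:
    """Consolidate ER/PR/HER2/Ki-67 statuses into one subtype label."""
    if er or pr:                              # hormone-receptor positive
        if her2:
            return "Luminal B (HER2+)"
        return "Luminal B (HER2-)" if ki67_high else "Luminal A"
    if her2:
        return "HER2-enriched"
    return "Triple-negative"

print(molecular_subtype(True, True, False, False))  # Luminal A
```

A patient negative for all four markers would map to "Triple-negative" under these rules.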
  • A robust method for nuclei segmentation of HE stained histopathology images
    (Institute of Electrical and Electronics Engineers Inc., 2020) Lal, S.; Desouza, R.; Maneesh, M.; Kanfade, A.; Kumar, A.; Perayil, G.; Alabhya, K.; Chanchal, A.K.; Kini, J.
    Segmentation of histopathology images is an initial and vital step in image understanding. To increase throughput while maintaining high accuracy, an automatic image segmentation method is required. Here, a robust method for segmentation of cell nuclei in Hematoxylin and Eosin (H&E) stained histopathology images is proposed. The proposed pipeline consists of an initial pre-processing step comprising adaptive colour de-convolution and a succession of morphological operations, followed by multilevel thresholding and post-processing steps. Minimum region size is the only parameter required by this method, and it is set according to the resolution of the histopathology image. The proposed nuclei segmentation method does not require any assumptions or prior information about cell morphology. Hence, it applies to the analysis of a wide range of tissues such as liver, kidney, breast, gastric mucosa, and bone marrow, including H&E stained liver histopathology images from the hospital. Results show that the proposed nuclei segmentation performs better both quantitatively and qualitatively on two datasets. © 2020 IEEE.
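The thresholding step in the pipeline above can be illustrated with the single-threshold Otsu criterion, which the multilevel thresholding used in the paper generalizes. This is a minimal sketch on a flat pixel list, not the paper's implementation; the sample data is made up.

```python
# Minimal Otsu-style thresholding sketch: pick the 8-bit threshold that
# maximizes the between-class variance of background vs. foreground pixels.

def otsu_threshold(pixels):
    """Return the threshold in [1, 255] maximizing between-class variance."""
    n = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        bg = [p for p in pixels if p < t]
        fg = [p for p in pixels if p >= t]
        if not bg or not fg:
            continue
        w_b, w_f = len(bg) / n, len(fg) / n
        mu_b, mu_f = sum(bg) / len(bg), sum(fg) / len(fg)
        var_between = w_b * w_f * (mu_b - mu_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A clearly bimodal sample separates at a threshold between the two modes.
sample = [10] * 50 + [200] * 50
threshold = otsu_threshold(sample)
```

Multilevel thresholding extends this by searching for several thresholds jointly instead of one.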
  • An Efficient Parallel Branch Network for Multi-Class Classification of Prostate Cancer From Histopathological Images
    (John Wiley and Sons Inc, 2025) Srivastava, V.; Prabhu, A.; Sravya, S.; Vibha Damodara, K.; Lal, S.; Kini, J.
    Prostate cancer is one of the most prevalent forms of cancer, posing a significant health concern for men. Accurate detection and classification of prostate cancer are crucial for effective diagnosis and treatment planning. Histopathological images play a pivotal role in identifying prostate cancer by enabling pathologists to identify cellular abnormalities and tumor characteristics. With the rapid advancements in deep learning, Convolutional Neural Networks (CNNs) have emerged as a powerful tool for tackling complex computer vision tasks, including object detection, classification, and segmentation. This paper proposes a Parallel Branch Network (PBN), a CNN architecture specifically designed for the automatic classification of prostate cancer into its subtypes from histopathological images. The paper introduces a novel Efficient Residual (ER) block that enhances feature representation using residual learning and multi-scale feature extraction. By utilizing multiple branches with different filter reduction ratios and dense attention mechanisms, the block captures diverse features while preserving essential information. The proposed PBN model achieved a classification accuracy of 93.16% on the Prostate Gleason dataset, outperforming all other comparison models. © 2025 Wiley Periodicals LLC.
  • Deep learning-based automated mitosis detection in histopathology images for breast cancer grading
    (John Wiley and Sons Inc, 2022) Mathew, T.; Ajith, B.; Kini, J.; Rajan, J.
    Cancer grade is an indicator of the aggressiveness of cancer. It is used for prognosis and treatment decisions. Conventionally, cancer grading is performed manually by experienced pathologists via microscopic examination of pathology slides. Among the three factors involved in breast cancer grading (mitosis count, nuclear atypia, and tubule formation), mitotic cell counting is the most challenging task for pathologists. It is possible to automate this task by applying computational algorithms to pathology slide images. Lack of sufficiently large datasets and class imbalance between mitotic and non-mitotic cells in slide images are the two major challenges in developing effective deep learning-based methods for mitosis detection. In this paper, we propose a new approach and a method based on it to address these challenges. The high training data requirement of the advanced deep neural network is met by combining two datasets from different sources after a color-normalization process. Class imbalance is addressed by the augmentation of the mitotic samples in a context-preserving manner. Finally, a customized convolutional neural network classifier is used to classify the candidate cells into the target classes. We have used the publicly available datasets MITOS-ATYPIA and MITOS for the experiments. Our method outperforms most of the recent methods that are based on independent datasets and at the same time offers adaptability to the combination of datasets from different sources. © 2022 Wiley Periodicals LLC.
  • Deep structured residual encoder-decoder network with a novel loss function for nuclei segmentation of kidney and breast histopathology images
    (Springer, 2022) Chanchal, A.K.; Lal, S.; Kini, J.
    To improve the process of diagnosis and treatment of cancer, automatic segmentation of haematoxylin and eosin (H&E) stained cell nuclei from histopathology images is the first step in digital pathology. The proposed deep structured residual encoder-decoder network (DSREDN) focuses on two aspects: first, it effectively utilizes residual connections throughout the network and provides a wide and deep encoder-decoder path, which captures relevant context and more localized features. Second, the problem of vanished boundaries of detected nuclei is addressed by proposing an efficient loss function that better trains the model and reduces false predictions, which are especially undesirable in healthcare applications. The proposed architecture was evaluated on three publicly available H&E stained histopathological datasets: (I) Kidney (RCC), (II) Triple Negative Breast Cancer (TNBC), and (III) MoNuSeg-2018. We considered F1-score, Aggregated Jaccard Index (AJI), the total number of parameters, and FLOPs (floating point operations), which are the performance metrics most commonly used to compare nuclei segmentation methods. The evaluated scores indicate that the proposed architecture achieves a considerable margin over five state-of-the-art deep learning models on the three histopathology datasets. Visual segmentation results show that the proposed DSREDN model segments nuclear regions more accurately than the state-of-the-art methods. © 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
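The abstract does not state the exact form of the proposed loss, so the sketch below shows only the soft Dice loss commonly used as a base term in nuclei segmentation; any boundary-aware weighting the authors add would sit on top of a term like this. The lists stand in for flattened probability and label maps.

```python
# Soft Dice loss sketch (assumed base term, not the paper's exact loss):
# 1 - Dice overlap between predicted probabilities and binary labels.

def soft_dice_loss(pred, target, eps=1e-6):
    """pred: probabilities in [0, 1]; target: binary ground-truth mask."""
    intersection = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)

perfect = soft_dice_loss([1, 0, 1], [1, 0, 1])   # ~0: full overlap
disjoint = soft_dice_loss([1, 0], [0, 1])        # ~1: no overlap
```

Overlap-based losses like this penalize missed thin structures (such as faint nuclei boundaries) more directly than plain pixel-wise cross-entropy.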
  • Efficient and robust deep learning architecture for segmentation of kidney and breast histopathology images
    (Elsevier Ltd, 2021) Chanchal, A.K.; Kumar, A.; Lal, S.; Kini, J.
    Image segmentation is consistently an important task in computer vision and the analysis of medical images. The purpose of our proposed method is the analysis and diagnosis of histopathology images using efficient algorithms that separate hematoxylin and eosin-stained nuclei. In this paper, we propose a deep learning model that automatically segments the complex nuclei present in histology images by implementing an effective encoder–decoder architecture with a separable convolution pyramid pooling network (SCPP-Net). The SCPP unit focuses on two aspects: first, it increases the receptive field by varying four different dilation rates while keeping the kernel size fixed, and second, it reduces the number of trainable parameters by using depth-wise separable convolution. Our deep learning model was evaluated on three publicly available histopathology image datasets. The proposed SCPP-Net provides better segmentation results than other existing deep learning models, as evaluated in terms of F1-score and aggregated Jaccard index. © 2021 Elsevier Ltd
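The parameter savings of depth-wise separable convolution mentioned above follow from the standard weight-count formulas: a k×k convolution needs k·k·c_in·c_out weights, while a separable one needs k·k·c_in (depth-wise) plus c_in·c_out (point-wise). The layer sizes below are illustrative, not taken from the paper.

```python
# Weight counts for a standard vs. depth-wise separable convolution layer
# (bias terms omitted; channel sizes are illustrative examples).

def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    return k * k * c_in + c_in * c_out  # depth-wise + point-wise

std = standard_conv_params(3, 64, 128)    # 73,728 weights
sep = separable_conv_params(3, 64, 128)   # 8,768 weights
print(f"reduction factor: {std / sep:.1f}x")  # 8.4x
```

The reduction grows with kernel size and channel count, which is why separable convolutions pay off most in wide layers.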
  • Efficient deep learning architecture with dimension-wise pyramid pooling for nuclei segmentation of histopathology images
    (Elsevier Ltd, 2021) Aatresh, A.A.; Yatgiri, R.P.; Chanchal, A.K.; Kumar, A.; Ravi, A.; Das, D.; Raghavendra, B.S.; Lal, S.; Kini, J.
    Image segmentation remains one of the most vital tasks in computer vision, and more so in medical image processing. Segmentation quality is the metric most often considered, with memory and computational efficiency overlooked, which limits the practical use of power-hungry models. In this paper, we propose a novel framework (Kidney-SegNet) that combines the effectiveness of an attention-based encoder-decoder architecture and atrous spatial pyramid pooling with highly efficient dimension-wise convolutions. The segmentation results of the proposed Kidney-SegNet architecture outperform existing state-of-the-art deep learning methods, as shown by evaluation on two publicly available kidney and TNBC breast H&E stained histopathology image datasets. Further, our simulation experiments reveal that the computational complexity and memory requirements of the proposed architecture are very efficient compared to existing state-of-the-art deep learning methods for nuclei segmentation of H&E stained histopathology images. The source code of our implementation will be available at https://github.com/Aaatresh/Kidney-SegNet. © 2021 Elsevier Ltd
  • Evolution of LiverNet 2.x: Architectures for automated liver cancer grade classification from H&E stained liver histopathological images
    (Springer, 2024) Chanchal, A.K.; Lal, S.; Barnwal, D.; Sinha, P.; Arvavasu, S.; Kini, J.
    Recently, the automation of disease identification has become quite popular in the field of medical diagnosis. The rise of Convolutional Neural Networks (CNNs) for training on and generalizing medical image data has proven quite efficient in detecting and identifying the types and sub-types of various diseases. Since the classification of large datasets of Hematoxylin & Eosin (H&E) stained histopathology images by experts can be expensive and time-consuming, automated processes using deep learning have been encouraged for the past decade. This paper introduces the LiverNet 2.x models by modifying the previously proposed LiverNet architecture. The proposed models use two different improvements of the Atrous Spatial Pyramid Pooling (ASPP) block to extract the clinically defined features of hepatocellular carcinoma (HCC) from liver histopathology images. LiverNet 2.0 uses a modified form of the ASPP block known as DenseASPP, in which all the atrous convolution outputs are densely connected, whereas LiverNet 2.1 uses fewer concatenations while maintaining a large receptive field by stacking the dilated convolutional blocks in a tree-like fashion. This paper also discusses the trade-off between LiverNet 2.0 and LiverNet 2.1 in terms of accuracy and computational complexity. All comparison models and the proposed models are trained and tested on patches of two different histopathological datasets. The experimental results show that the proposed models perform better than the reference models. For the KMC Liver dataset, LiverNet 2.0 and LiverNet 2.1 achieved accuracies of 97.50% and 97.14%, respectively; on the TCGA Liver dataset, accuracies of 94.37% and 97.14% were achieved. © 2023, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
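The receptive-field argument behind stacking dilated convolutions can be made concrete with the standard formula for sequentially stacked k×k convolutions with dilation rates d_i: the receptive field is 1 + Σ(k−1)·d_i. The dilation rates below are illustrative examples, not the rates used in LiverNet 2.x.

```python
# Receptive field of sequentially stacked dilated convolutions
# (standard formula; example dilation rates are assumptions).

def stacked_receptive_field(kernel, dilations):
    """1 + sum((k - 1) * d) over the stacked layers."""
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf

# Four stacked 3x3 convolutions with doubling dilation rates cover a
# 31-pixel span, versus 9 pixels for four undilated ones.
print(stacked_receptive_field(3, [1, 2, 4, 8]))  # 31
print(stacked_receptive_field(3, [1, 1, 1, 1]))  # 9
```

This is why dilated stacks can keep a large receptive field without the extra feature concatenations that DenseASPP-style blocks require.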
  • FPGA implementation of deep learning architecture for kidney cancer detection from histopathological images
    (Springer, 2024) Lal, S.; Chanchal, A.K.; Kini, J.; Upadhyay, G.K.
    Kidney cancer is among the most common types of cancer, and designing an automated system to accurately classify cancer grade from histopathological kidney cancer images is of paramount importance for a better prognosis of the disease. The application of deep learning neural networks (DLNNs) to histopathological image classification is thriving, and implementation of these networks on edge devices has correspondingly been gaining ground due to their high computational power and low-latency requirements. This paper designs an automated system that classifies histopathological kidney cancer images. For experimentation, we collected kidney histopathological images of non-cancerous and cancerous tissue, with their respective grades of Renal Cell Carcinoma (RCC), from Kasturba Medical College (KMC), Mangalore, Karnataka, India. We implemented and analyzed the performance of deep learning architectures on a Field Programmable Gate Array (FPGA) board. Results show that the Inception-V3 network provides better accuracy for kidney cancer detection than other deep learning models on kidney histopathological images. Further, the DenseNet-169 network provides better accuracy for kidney cancer grading than other existing deep learning architectures on the FPGA board. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023.
  • High-resolution deep transferred ASPPU-Net for nuclei segmentation of histopathology images
    (Springer Science and Business Media Deutschland GmbH, 2021) Chanchal, A.K.; Lal, S.; Kini, J.
    Purpose: The increasing incidence of cancer worldwide has become a major public health issue. Manual histopathological analysis is a common diagnostic method for cancer detection. Due to the complex structure and wide variability in the texture of histopathology images, diagnosing these images manually has been challenging for pathologists. Automatic segmentation of histopathology images for cancer diagnosis is an active field of exploration. The purpose of the proposed method is segmentation and analysis of histopathology images for diagnosis using an efficient deep learning algorithm. Method: To improve segmentation performance, we propose a deep learning framework that consists of a high-resolution encoder path, an atrous spatial pyramid pooling bottleneck module, and a powerful decoder. Compared to benchmark segmentation models, which have a deep and thin path, our network is wide and deep and effectively leverages the strengths of residual learning as well as encoder–decoder architecture. Results: We performed careful experimentation and analysis on three publicly available datasets, namely the kidney dataset, the Triple Negative Breast Cancer (TNBC) dataset, and the MoNuSeg histopathology image dataset. We used two of the most widely preferred performance metrics, the F1 score and the aggregated Jaccard index (AJI), to evaluate the proposed model. The measured (F1, AJI) scores are (0.9684, 0.9394), (0.8419, 0.7282), and (0.8344, 0.7169) on the kidney, TNBC, and MoNuSeg datasets, respectively.
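Of the two metrics reported above, the F1 score is simple to compute from raw detection counts (AJI additionally requires object-level matching between predicted and ground-truth nuclei, which is omitted here). The counts below are made-up illustrative values, not results from the paper.

```python
# F1 score from true positives, false positives, and false negatives
# (illustrative counts; AJI computation is not shown).

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(80, 20, 20), 4))  # 0.8
```

Because it is the harmonic mean, F1 is dragged down by whichever of precision or recall is worse, which suits imbalanced detection tasks like nuclei segmentation.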
  • LiverNet: efficient and robust deep learning model for automatic diagnosis of sub-types of liver hepatocellular carcinoma cancer from H&E stained liver histopathology images
    (Springer Science and Business Media Deutschland GmbH, 2021) Aatresh, A.A.; Alabhya, K.; Lal, S.; Kini, J.; Saxena, P.P.
    Purpose: Liver cancer is one of the most common types of cancers in Asia with a high mortality rate. A common method for liver cancer diagnosis is the manual examination of histopathology images. Due to its laborious nature, we focus on alternate deep learning methods for automatic diagnosis, providing significant advantages over manual methods. In this paper, we propose a novel deep learning framework to perform multi-class cancer classification of liver hepatocellular carcinoma (HCC) tumor histopathology images which shows improvements in inference speed and classification quality over other competitive methods. Method: The BreastNet architecture proposed by Togacar et al. shows great promise in using convolutional block attention modules (CBAM) for effective cancer classification in H&E stained breast histopathology images. As part of our experiments with this framework, we have studied the addition of atrous spatial pyramid pooling (ASPP) blocks to effectively capture multi-scale features in H&E stained liver histopathology data. We classify liver histopathology data into four classes, namely the non-cancerous class, low sub-type liver HCC tumor, medium sub-type liver HCC tumor, and high sub-type liver HCC tumor. To prove the robustness and efficacy of our models, we have shown results for two liver histopathology datasets—a novel KMC dataset and the TCGA dataset. Results: Our proposed architecture outperforms state-of-the-art architectures for multi-class cancer classification of HCC histopathology images, not just in terms of quality of classification, but also in computational efficiency on the novel proposed KMC liver data and the publicly available TCGA-LIHC dataset. We have considered precision, recall, F1-score, intersection over union (IoU), accuracy, number of parameters, and FLOPs as metrics for comparison. The results of our meticulous experiments have shown improved classification performance along with added efficiency. 
LiverNet has been observed to outperform all other frameworks in all metrics under comparison, with an approximate improvement of 2% in accuracy and F1-score on the KMC and TCGA-LIHC datasets. Conclusion: To the best of our knowledge, our work is among the first to provide concrete proof and demonstrate results for a successful deep learning architecture that handles multi-class HCC histopathology image classification among the various sub-types of liver HCC tumors. Our method shows a high accuracy of 90.93% on the proposed KMC liver dataset while requiring only 0.5739 million parameters and 1.1934 million floating point operations per second. © 2021, CARS.
  • Novel edge detection method for nuclei segmentation of liver cancer histopathology images
    (Springer Science and Business Media Deutschland GmbH, 2023) Roy, S.; Das, D.; Lal, S.; Kini, J.
    In automatic cancer detection, nuclei segmentation is an essential step that makes the classification task simpler and computationally more efficient. However, automatic nuclei detection is fraught with problems of inter-class variability in nuclei size and shape. In this research article, a novel unsupervised edge detection technique is proposed for segmenting the nuclei regions in liver cancer Hematoxylin and Eosin (H&E) stained histopathology images. In this technique, the notion of computing the local standard deviation is incorporated instead of computing gradients. Since the local standard deviation is correlated with the edge information of an image, this method can extract nuclei edges efficiently, even at multiple scales. The edge-detected image is further converted into a binary image by employing Otsu's thresholding operation (IEEE Trans Syst Man Cybern 9(1):62–66, 1979). Subsequently, an adaptive morphological filter is employed to refine the final segmented image. The proposed nuclei segmentation method is also tested on a well-recognized multi-organ dataset to check its effectiveness over a wide variety of data. The visual results on both datasets indicate that the proposed segmentation method overcomes the limitations of existing unsupervised methods; moreover, its performance is comparable with that of recent deep neural models such as DIST and HoverNet. Furthermore, three quality metrics are computed to measure the performance of several nuclei segmentation methods quantitatively. The mean values of the quality metrics reveal that the proposed segmentation method indeed outperforms the other existing nuclei segmentation methods. © 2021, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
  • ProsGradNet: An effective and structured CNN approach for prostate cancer grading from histopathology images
    (Elsevier Ltd, 2025) Prabhu, A.; Sravya, N.; Lal, S.; Kini, J.
    Prostate cancer (PCa) is one of the most prevalent and potentially fatal malignancies affecting men globally. The incidence of prostate cancer is expected to double by 2040, posing significant health challenges. This anticipated increase underscores the urgent need for early and precise diagnosis to facilitate effective treatment and management. Histopathological analysis using the Gleason grading system plays a pivotal role in clinical decision making by classifying cancer subtypes based on their cellular characteristics. This paper proposes a novel deep CNN model named Prostate Grading Network (ProsGradNet) for the automatic grading of PCa from histopathological images. Central to the approach is the novel Context Guided Shared Channel Residual (CGSCR) block, which introduces structured methods for channel splitting and clustering by varying group sizes. By grouping channels into 2, 4, and 8 groups, it prioritizes deeper-layer features, enhancing local semantic content and abstract feature representation. This methodological advancement significantly boosts classification accuracy, achieving an impressive 92.88% on the Prostate Gleason dataset and outperforming other CNN models. To demonstrate the generalizability of ProsGradNet across datasets, experiments are also performed on the Kasturba Medical College (KMC) Kidney dataset. The results further confirm the superiority of the proposed model, with a classification accuracy of 92.68% on the KMC Kidney dataset. This demonstrates the model's potential to be applied effectively across various histopathological datasets, making it a valuable tool in the fight against cancer. © 2025 Elsevier Ltd

Maintained by Central Library NITK | DSpace software copyright © 2002-2026 LYRASIS
