Faculty Publications

Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736

Publications by NITK Faculty

Search Results

Now showing 1 - 9 of 9
  • Item
    Multi-Modal Medical Image Fusion with Adaptive Weighted Combination of NSST Bands Using Chaotic Grey Wolf Optimization
    (Institute of Electrical and Electronics Engineers Inc., 2019) Asha, C.S.; Lal, S.; Gurupur, V.P.; Saxena, P.U.P.
    Recently, medical image fusion has emerged as an impressive technique for merging medical images of different modalities. The fused image assists the physician in disease diagnosis and effective treatment planning. The fusion process combines multi-modal images to yield a single image of excellent quality that retains the information of the original images. This paper proposes a multi-modal medical image fusion through a weighted blending of the high-frequency subbands of the nonsubsampled shearlet transform (NSST) domain via a chaotic grey wolf optimization algorithm. As an initial step, the NSST is applied to the source images to decompose them into multi-scale and multi-directional components. The low-frequency bands are fused based on a simple max rule to sustain the energy of each individual band. The texture details of the input images are preserved by an adaptively weighted combination of the high-frequency images, with the weights chosen by a recent chaotic grey wolf optimization algorithm to minimize the distance between the fused image and the source images. The entire process emphasizes retaining the energy of the low-frequency band and transferring texture features from the source images to the fused image. Finally, the fused image is formed using the inverse NSST of the merged low- and high-frequency bands. The experiments are carried out on eight different disease datasets obtained from Brain Atlas, consisting of MR-T1 and MR-T2, MR and SPECT, MR and PET, and MR and CT pairs. The effectiveness of the proposed method is validated on more than 100 pairs of images using subjective and objective quality assessment. The experimental results confirm that the proposed method outperforms current state-of-the-art image fusion techniques in terms of entropy, VIFF, and FMI. Hence, the proposed method will be helpful for disease diagnosis, medical treatment planning, and surgical procedures. © 2013 IEEE.
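The two fusion rules the abstract describes can be sketched as follows. This is a minimal illustration only: it assumes the NSST subbands have already been computed by some decomposition step, and it takes the blending weight `w` as given rather than searching for it with chaotic grey wolf optimization as the paper does.

```python
import numpy as np

def fuse_bands(low_a, low_b, high_a, high_b, w):
    """Illustrative fusion of precomputed NSST subbands from two sources.

    low_a/low_b   : low-frequency bands of the two source images
    high_a/high_b : matching high-frequency (detail) bands
    w             : blending weight in [0, 1] (in the paper it is chosen
                    per band by chaotic grey wolf optimization; here given)
    """
    # Max rule: keep the coefficient with the larger magnitude, which
    # retains the energy of each individual low-frequency band.
    fused_low = np.where(np.abs(low_a) >= np.abs(low_b), low_a, low_b)
    # Adaptively weighted combination of the detail (texture) bands.
    fused_high = w * high_a + (1.0 - w) * high_b
    return fused_low, fused_high
```

The fused image would then be reconstructed by applying the inverse NSST to the merged low- and high-frequency bands.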
  • Item
    Novel color normalization method for hematoxylin eosin stained histopathology images
    (Institute of Electrical and Electronics Engineers Inc., 2019) Roy, S.; Lal, S.; Kini, J.R.
    With the advent of computer-assisted diagnosis (CAD), the accuracy of cancer detection from histopathology images has increased significantly. However, color variation in the CAD system is inevitable due to variability in stain concentration and manual tissue sectioning. Even a small variation in color may lead to the misclassification of cancer cells. Therefore, color normalization is an essential step prior to segmentation and classification, as it reduces the inter-image variability of background color among a set of source images. In this paper, a novel color normalization method is proposed for Hematoxylin and Eosin stained histopathology images. The conventional Reinhard algorithm is modified in the proposed method by incorporating fuzzy logic. Moreover, it is proved mathematically that the proposed method satisfies all three hypotheses of color normalization. Furthermore, several quality metrics are estimated locally to evaluate the performance of various color normalization methods. The experimental results reveal that the proposed method outperforms all other benchmark methods. © 2019 IEEE.
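For context, the core of the conventional Reinhard algorithm that this paper modifies is a per-channel mean/standard-deviation transfer from a target (reference) image to a source image. The sketch below shows only that classical step, applied channel-wise for brevity (Reinhard performs it in the decorrelated l-alpha-beta color space); the paper's fuzzy-logic modification is not reproduced here.

```python
import numpy as np

def reinhard_normalize(source, target):
    """Channel-wise mean/std transfer: shift and scale each source
    channel so its statistics match the target image's statistics."""
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std()
        t_mu, t_sd = tgt[..., c].mean(), tgt[..., c].std()
        # Center on the source mean, rescale to the target spread,
        # then re-center on the target mean.
        out[..., c] = (src[..., c] - s_mu) * (t_sd / (s_sd + 1e-8)) + t_mu
    return out
```

After this transfer, the normalized image's per-channel mean and spread match the reference, which is what reduces inter-image background color variability.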
  • Item
    NucleiSegNet: Robust deep learning architecture for the nuclei segmentation of liver cancer histopathology images
    (Elsevier Ltd, 2021) Lal, S.; Das, D.; Alabhya, K.; Kanfade, A.; Kumar, A.; Kini, J.R.
    The nuclei segmentation of hematoxylin and eosin (H&E) stained histopathology images is an important prerequisite in designing a computer-aided diagnostics (CAD) system for cancer diagnosis and prognosis. Automated nuclei segmentation methods enable the qualitative and quantitative analysis of tens of thousands of nuclei within H&E stained histopathology images. However, a major challenge during nuclei segmentation is the segmentation of variable-sized, touching nuclei. To address this challenge, we present NucleiSegNet, a robust deep learning network architecture for the nuclei segmentation of H&E stained liver cancer histopathology images. The proposed architecture comprises three blocks: a robust residual block, a bottleneck block, and an attention decoder block. The robust residual block is a newly proposed block for the efficient extraction of high-level semantic maps. The attention decoder block uses a new attention mechanism for efficient object localization and improves the architecture's performance by reducing false positives. When applied to nuclei segmentation on H&E stained histopathology images from two datasets, the proposed architecture yielded superior results, and our comprehensive experiments show that it outperforms state-of-the-art nuclei segmentation methods. As part of this work, we also introduce a new liver dataset (KMC liver dataset) of H&E stained liver cancer histopathology image tiles, containing 80 images with annotated nuclei procured from Kasturba Medical College (KMC), Mangalore, Manipal Academy of Higher Education (MAHE), Manipal, Karnataka, India. The proposed model's source code is available at https://github.com/shyamfec/NucleiSegNet. © 2020 Elsevier Ltd
  • Item
    Efficient and robust deep learning architecture for segmentation of kidney and breast histopathology images
    (Elsevier Ltd, 2021) Chanchal, A.K.; Kumar, A.; Lal, S.; Kini, J.
    Image segmentation remains an important task in computer vision and medical image analysis. The purpose of the proposed method is the analysis and diagnosis of histopathology images using efficient algorithms that separate hematoxylin and eosin-stained nuclei. In this paper, we propose a deep learning model that automatically segments the complex nuclei present in histology images by implementing an effective encoder–decoder architecture with a separable convolution pyramid pooling network (SCPP-Net). The SCPP unit focuses on two aspects: first, it increases the receptive field by varying four different dilation rates while keeping the kernel size fixed, and second, it reduces the number of trainable parameters by using depth-wise separable convolution. The model was evaluated on three publicly available histopathology image datasets. The proposed SCPP-Net provides better segmentation results than other existing deep learning models when evaluated in terms of F1-score and aggregated Jaccard index. © 2021 Elsevier Ltd
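The receptive-field idea behind the SCPP unit can be illustrated with a plain dilated convolution: with kernel size k and dilation rate d, the effective field grows to k + (k - 1)(d - 1) while the parameter count stays fixed. The single-channel NumPy sketch below is a hypothetical illustration of that mechanism only; it does not reproduce the depth-wise separable convolutions or the four-rate pyramid of the actual SCPP-Net.

```python
import numpy as np

def dilated_conv2d(img, kernel, dilation=1):
    """Naive 'same'-padded 2D convolution with a dilation rate.
    Larger dilation samples the kernel taps farther apart, so the
    filter sees more context at the same parameter count."""
    k = kernel.shape[0]                      # assume square, odd kernel
    eff = k + (k - 1) * (dilation - 1)       # effective receptive field
    pad = eff // 2
    padded = np.pad(img, pad)
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(k):
        for j in range(k):
            di, dj = i * dilation, j * dilation
            out += kernel[i, j] * padded[di:di + img.shape[0],
                                         dj:dj + img.shape[1]]
    return out
```

For a 3x3 kernel, dilation rates 1, 2, 4, and 8 give effective fields of 3, 5, 9, and 17 pixels, which is how a pyramid of rates captures nuclei of varying sizes.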
  • Item
    Deep structured residual encoder-decoder network with a novel loss function for nuclei segmentation of kidney and breast histopathology images
    (Springer, 2022) Chanchal, A.K.; Lal, S.; Kini, J.
    To improve the diagnosis and treatment of cancer, automatic segmentation of haematoxylin and eosin (H&E) stained cell nuclei from histopathology images is the first step in digital pathology. The proposed deep structured residual encoder-decoder network (DSREDN) focuses on two aspects: first, it effectively utilizes residual connections throughout the network and provides a wide and deep encoder-decoder path, which helps capture relevant context and more localized features. Second, the vanishing boundaries of detected nuclei are addressed by an efficient loss function that better trains the proposed model and reduces false predictions, which are especially undesirable in healthcare applications. The proposed architecture was evaluated on three publicly available H&E stained histopathological datasets: (I) Kidney (RCC), (II) Triple Negative Breast Cancer (TNBC), and (III) MoNuSeg-2018. We considered F1-score, Aggregated Jaccard Index (AJI), the total number of parameters, and FLOPs (floating point operations), which are the most commonly preferred performance metrics for comparing nuclei segmentation methods. The evaluated scores indicate that the proposed architecture outperforms five state-of-the-art deep learning models by a considerable margin on three different histopathology datasets. Visual segmentation results show that the proposed DSREDN model segments nuclear regions more accurately than the state-of-the-art methods. © 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
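The F1-score and Jaccard index mentioned above have standard definitions for binary masks, sketched below. Note that the Aggregated Jaccard Index (AJI) used in the paper additionally matches predicted instances to ground-truth instances and aggregates over them; only the plain per-mask versions are shown here.

```python
import numpy as np

def f1_binary(pred, gt):
    """F1 (Dice) score for binary masks: 2*TP / (|pred| + |gt|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * tp / denom if denom else 1.0

def jaccard_binary(pred, gt):
    """Plain Jaccard index: intersection over union of the two masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0
```

Both scores lie in [0, 1], with 1 meaning a perfect match between prediction and ground truth.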
  • Item
    Novel edge detection method for nuclei segmentation of liver cancer histopathology images
    (Springer Science and Business Media Deutschland GmbH, 2023) Roy, S.; Das, D.; Lal, S.; Kini, J.
    In automatic cancer detection, nuclei segmentation is an essential step that makes the classification task simpler and computationally more efficient. However, automatic nuclei detection is fraught with problems of inter-class variability in nuclei size and shape. In this research article, a novel unsupervised edge detection technique is proposed for segmenting the nuclei regions in liver cancer Hematoxylin and Eosin (H&E) stained histopathology images. In this technique, the notion of computing the local standard deviation is incorporated instead of computing gradients. Since the local standard deviation is correlated with the edge information of an image, this method can extract nuclei edges efficiently, even at multiple scales. The edge-detected image is further converted into a binary image by employing Otsu's thresholding operation (IEEE Trans Syst Man Cybern 9(1):62–66, 1979). Subsequently, an adaptive morphological filter is employed to refine the final segmented image. The proposed nuclei segmentation method is also tested on a well-recognized multi-organ dataset to check its effectiveness over a wide variety of data. The visual results on both datasets indicate that the proposed segmentation method overcomes the limitations of existing unsupervised methods; moreover, its performance is comparable with that of recent deep neural models such as DIST and HoverNet. Furthermore, three quality metrics are computed to measure the performance of several nuclei segmentation methods quantitatively. The mean values of the quality metrics reveal that the proposed segmentation method indeed outperforms the other existing nuclei segmentation methods. © 2021, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
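The pipeline described above, a local-standard-deviation edge measure followed by Otsu thresholding, can be sketched in pure NumPy. This is a minimal illustration under assumed window sizes, not the paper's implementation; the subsequent adaptive morphological refinement is omitted.

```python
import numpy as np

def local_std(img, k=3):
    """Local standard deviation in a k x k window: high where intensity
    varies (edges), near zero in flat regions -- a gradient-free edge
    measure."""
    img = img.astype(np.float64)
    pad = k // 2
    p = np.pad(img, pad, mode='reflect')
    win = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                    for i in range(k) for j in range(k)])
    return win.std(axis=0)

def otsu_threshold(values, bins=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the value histogram."""
    hist, edges = np.histogram(values, bins=bins)
    prob = hist.astype(np.float64) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(prob)                # weight of the low class
    mu = np.cumsum(prob * centers)      # cumulative mean
    mu_t = mu[-1]
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros_like(w0)
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]
```

Binarizing the local-std map with `edge_map > otsu_threshold(edge_map.ravel())` would yield the edge mask that the morphological filter then refines.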
  • Item
    FPGA implementation of deep learning architecture for kidney cancer detection from histopathological images
    (Springer, 2024) Lal, S.; Chanchal, A.K.; Kini, J.; Upadhyay, G.K.
    Kidney cancer is one of the most commonly diagnosed cancers, and designing an automated system to accurately classify the cancer grade from histopathological kidney cancer images is of paramount importance for a better prognosis of the disease. The application of deep learning neural networks (DLNNs) to histopathological image classification is thriving, and the implementation of these networks on edge devices has correspondingly been gaining ground due to high computational power and low latency requirements. This paper designs an automated system that classifies histopathological kidney cancer images. For experimentation, we collected kidney histopathological images of non-cancerous and cancerous tissue, together with their respective grades of Renal Cell Carcinoma (RCC), from Kasturba Medical College (KMC), Mangalore, Karnataka, India. We implemented and analyzed the performance of deep learning architectures on a Field Programmable Gate Array (FPGA) board. The results show that the Inception-V3 network provides better accuracy for kidney cancer detection than other deep learning models on kidney histopathological images. Further, the DenseNet-169 network provides better accuracy for kidney cancer grading than other existing deep learning architectures on the FPGA board. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023.
  • Item
    Classification and grade prediction of kidney cancer histological images using deep learning
    (Springer, 2024) Chanchal, A.K.; N, S.; Lal, S.; Kumar, S.; Saxena, P.U.P.
    Renal Cell Carcinoma (RCC) is the most common malignant tumor of kidney cancer (85%) and has a complex histological pattern and nuclear structure. The manual diagnosis of kidney cancer, or any other cancer, from histopathology images depends on the knowledge and experience of pathologists, and the pathologist's experience influences the results. According to studies, the kind of histology in kidney cancer is related to the prognosis and course of treatment. Since the kind of histology, molecular profile, and stage of the disease all affect how the disease is treated, there is an essential need for an automated system that can precisely analyze the histopathological images of the disease. This work demonstrates how a deep learning framework can be used to predict and classify the associated grades of RCC from given haematoxylin and eosin (H&E) images. The proposed model focuses on two important tasks: first, to capture and extract the associated features from H&E images of five different grades; second, to classify a new set of unseen H&E images into the five grades using the obtained features. The proposed architecture was tested on two independent datasets containing H&E stained histopathology images and examined using the following performance metrics: precision, recall, F1-score, accuracy, floating-point operations (FLOPs), and the total number of parameters. The obtained results show that the proposed architecture outperforms seven state-of-the-art deep learning architectures on the two H&E stained histopathology image datasets. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
  • Item
    Development of Robust CNN Architecture for Grading and Classification of Renal Cell Carcinoma Histology Images
    (Institute of Electrical and Electronics Engineers Inc., 2025) Chanchal, C.A.; Lal, S.; Suresh, S.
    Kidney cancer has been a commonly diagnosed cancer in recent years, and Renal Cell Carcinoma (RCC) is the most common kidney cancer, responsible for 80% to 85% of all renal tumors. The diagnosis of kidney cancer requires manual examination and analysis of histopathological images of the affected tissue. This process is time-consuming, prone to human error, and depends heavily on the expertise of a pathologist. Early detection and grading of kidney cancer tissues enable doctors and practitioners to decide the further course of treatment. Therefore, quick and precise analysis of kidney cancer tissue images is extremely important for proper diagnosis. Recently, deep learning algorithms have proved to be very efficient and accurate in histopathology image analysis. In this paper, we propose a computationally efficient deep learning architecture based on convolutional neural networks (CNNs) to automate the grading and classification of kidney cancer tissue. The proposed Robust CNN (RoCNN) architecture is capable of learning features at varying convolutional filter sizes because of the inception modules employed in it. Squeeze-and-Excitation (SE) blocks are used to remove unnecessary contributions from noisy channels and improve model accuracy. Concatenating samples from three different parts of the architecture allows the encompassing of varied features, further improving grading and classification accuracy. To demonstrate that the proposed model is generalized and independent of the dataset, it was evaluated on two well-known datasets: the KMC kidney dataset with five grades and the TCGA dataset with four classes. Compared to the best-performing state-of-the-art model, the accuracy of RoCNN shows significant improvements of about 4.22% and 3.01% on the two datasets, respectively. © 2013 IEEE.
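The channel-recalibration mechanism of an SE block, which lets the network suppress noisy channels, follows the standard squeeze-excite-rescale pattern sketched below in NumPy. The weight matrices here are hypothetical placeholders; in a trained network like RoCNN they are learned parameters.

```python
import numpy as np

def se_block(feat, w1, w2):
    """Squeeze-and-Excitation recalibration on a (C, H, W) feature map.

    w1 : (C//r, C) reduction weights   (hypothetical, normally learned)
    w2 : (C, C//r) expansion weights   (hypothetical, normally learned)
    Channels whose gate value is near 0 are suppressed, which is how
    SE blocks damp the contribution of noisy channels.
    """
    c = feat.shape[0]
    squeeze = feat.reshape(c, -1).mean(axis=1)      # global average pool
    hidden = np.maximum(w1 @ squeeze, 0.0)          # FC + ReLU (reduce)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # FC + sigmoid (expand)
    return feat * gate[:, None, None]               # channel-wise rescale
```

With zero weights the gate is sigmoid(0) = 0.5 for every channel, i.e. a uniform rescale; training shapes the gate so informative channels pass and noisy ones are attenuated.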