Faculty Publications

Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736

Publications by NITK Faculty

Search Results

Now showing 1 - 4 of 4
  • Item
    An efficient cuckoo search algorithm based multilevel thresholding for segmentation of satellite images using different objective functions
    (Elsevier Ltd, 2016) Suresh, S.; Lal, S.
    Satellite image segmentation is challenging due to the presence of weakly correlated and ambiguous multiple regions of interest. Several bio-inspired algorithms have been developed to generate optimum threshold values for segmenting such images efficiently. Their exhaustive search nature makes them computationally expensive when extended to multilevel thresholding. In this paper, we propose a computationally efficient image segmentation algorithm, called CSMcCulloch, incorporating McCulloch's method for Lévy flight generation in the Cuckoo Search (CS) algorithm. We have also investigated the impact of Mantegna's method for Lévy flight generation in the CS algorithm (CSMantegna) by comparing it with the conventional CS algorithm, which uses a simplified version of the same. The CSMantegna algorithm resulted in improved segmentation quality at the expense of computational time. The performance of the proposed CSMcCulloch algorithm is compared with other bio-inspired algorithms such as the Particle Swarm Optimization (PSO) algorithm, Darwinian Particle Swarm Optimization (DPSO) algorithm, Artificial Bee Colony (ABC) algorithm, Cuckoo Search (CS) algorithm and CSMantegna algorithm, using Otsu's method, Kapur entropy and Tsallis entropy as objective functions. Experimental results were validated by measuring PSNR, MSE, FSIM and CPU running time for all the cases investigated. The proposed CSMcCulloch algorithm proved to be the most promising and computationally efficient for segmenting satellite images. Convergence rate analysis also reveals that the proposed algorithm outperforms the others in attaining stable global optimum thresholds. These experimental results encourage related research in computer vision, remote sensing and image processing applications. © 2016 Elsevier Ltd. All rights reserved.
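The abstract above contrasts Mantegna's and McCulloch's methods for Lévy-flight step generation and scores candidate thresholds with objective functions such as Otsu's method. As an illustrative sketch only (not the authors' implementation; the function names and the toy histogram are assumptions), Mantegna's step formula and the multilevel Otsu between-class-variance objective can be written as:

```python
import math
import random

def mantegna_levy_step(beta=1.5, rng=random):
    """One Lévy-flight step via Mantegna's algorithm, for beta in (1, 2]."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma)   # numerator sample ~ N(0, sigma^2)
    v = rng.gauss(0, 1)       # denominator sample ~ N(0, 1)
    return u / abs(v) ** (1 / beta)

def otsu_between_class_variance(hist, thresholds):
    """Between-class variance of a grey-level histogram split at `thresholds`.

    Multilevel Otsu thresholding searches for the threshold set that
    maximises this value; a CS-style optimiser uses it as the fitness.
    """
    total = sum(hist)
    probs = [h / total for h in hist]
    bounds = [0] + sorted(thresholds) + [len(hist)]
    mu_total = sum(i * p for i, p in enumerate(probs))
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(probs[lo:hi])                      # class probability mass
        if w > 0:
            mu = sum(i * probs[i] for i in range(lo, hi)) / w
            var += w * (mu - mu_total) ** 2
    return var
```

In a cuckoo-search loop, each candidate threshold set would be perturbed by a Lévy step and kept only if its between-class variance improves.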
  • Item
    High-resolution deep transferred ASPPU-Net for nuclei segmentation of histopathology images
    (Springer Science and Business Media Deutschland GmbH, 2021) Chanchal, A.K.; Lal, S.; Kini, J.
    Purpose: Increasing cancer incidence worldwide has become a major public health issue. Manual histopathological analysis is a common diagnostic method for cancer detection. Due to the complex structure and wide variability in the texture of histopathology images, it has been challenging for pathologists to diagnose those images manually. Automatic segmentation of histopathology images for cancer diagnosis has been a continuous field of exploration in recent times. The purpose of the proposed method is the segmentation and analysis of histopathology images for diagnosis using an efficient deep learning algorithm. Method: To improve segmentation performance, we proposed a deep learning framework that consists of a high-resolution encoder path, an atrous spatial pyramid pooling bottleneck module, and a powerful decoder. Compared to the benchmark segmentation models, which have a deep and thin path, our network is wide and deep, effectively leveraging the strength of residual learning as well as encoder–decoder architecture. Results: We performed careful experimentation and analysis on three publicly available datasets, namely the kidney dataset, the Triple Negative Breast Cancer (TNBC) dataset, and the MoNuSeg histopathology image dataset. We used the two most preferred performance metrics, the F1 score and the aggregated Jaccard index (AJI), to evaluate the performance of the proposed model. The measured F1 and AJI scores are (0.9684, 0.9394), (0.8419, 0.7282), and (0.8344, 0.7169) on the kidney dataset, TNBC histopathology dataset, and MoNuSeg dataset, respectively.
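The F1 score reported above can be computed pixel-wise from binary segmentation masks; the AJI additionally matches predicted and ground-truth nuclei instances, which is omitted here for brevity. A minimal sketch (the function name and example masks are assumptions, not taken from the paper):

```python
def f1_score(pred, gt):
    """Pixel-wise F1 (Dice) between two flat binary masks (sequences of 0/1)."""
    tp = sum(1 for p, g in zip(pred, gt) if p and g)          # true positives
    fp = sum(1 for p, g in zip(pred, gt) if p and not g)      # false positives
    fn = sum(1 for p, g in zip(pred, gt) if g and not p)      # false negatives
    # F1 = 2TP / (2TP + FP + FN); two empty masks count as a perfect match
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
```

For real images the masks would be flattened 2-D arrays; the formula is unchanged.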
  • Item
    Development and evaluation of deep neural networks for the classification of subtypes of renal cell carcinoma from kidney histopathology images
    (Nature Research, 2025) Chanchal, A.K.; Lal, S.; Suresh, S.
    Kidney cancer is a leading cause of cancer-related mortality, with renal cell carcinoma (RCC) being the most prevalent form, accounting for 80–85% of all renal tumors. Traditional diagnosis of kidney cancer requires manual examination and analysis of histopathology images, which is time-consuming, error-prone, and depends on the pathologist's expertise. Recently, deep learning algorithms have gained significant attention in histopathology image analysis. In this study, we developed an efficient and robust deep learning architecture called RenalNet for the classification of subtypes of RCC from kidney histopathology images. RenalNet is designed to capture cross-channel and inter-spatial features at three different scales simultaneously and combine them. Cross-channel features refer to the relationships and dependencies between different data channels, while inter-spatial features refer to patterns within small spatial regions. The architecture contains a CNN module called multiple channel residual transformation (MCRT), which focuses on the most relevant morphological features of RCC by fusing the information from multiple paths. Further, to improve the network's representation power, a CNN module called Group Convolutional Deep Localization (GCDL) has been introduced, which effectively integrates three different feature descriptors. As part of this study, we also introduced a novel benchmark dataset for the classification of subtypes of RCC from kidney histopathology images. We obtained digital hematoxylin and eosin (H&E) stained WSIs from The Cancer Genome Atlas (TCGA) and acquired regions of interest (ROIs) under the supervision of experienced pathologists, resulting in the creation of patches. To demonstrate that the proposed model is generalized and independent of the dataset, it was evaluated on three well-known datasets. Compared to the best-performing state-of-the-art model, RenalNet achieves accuracies of 91.67%, 97.14%, and 97.24% on the three datasets. Additionally, the proposed method significantly reduces the number of parameters and FLOPs, demonstrating computational efficiency with 2.71× FLOPs and 0.2131× parameters. © The Author(s) 2025.
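The parameter savings attributed above to the GCDL module plausibly come from group convolution. Assuming standard grouped 2-D convolutions (a general illustration, not the paper's exact configuration), the weight count scales down with the number of groups:

```python
def conv2d_params(c_in, c_out, k, groups=1):
    """Weight count of a k x k 2-D convolution with optional grouping (bias ignored).

    Each of the `groups` branches sees only c_in/groups input channels,
    so the weight tensor has (c_in/groups) * k * k * c_out entries.
    """
    assert c_in % groups == 0 and c_out % groups == 0
    return (c_in // groups) * k * k * c_out

# e.g. a 3x3, 64 -> 64 convolution:
#   full conv:       conv2d_params(64, 64, 3)            -> 36864 weights
#   4-group conv:    conv2d_params(64, 64, 3, groups=4)  ->  9216 weights
```

The 4-group variant uses a quarter of the weights, which is the mechanism behind parameter-efficient grouped designs in general.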
  • Item
    Multi head attention based deep learning framework for waxberry fruit object segmentation from high resolution remote sensing images
    (Nature Research, 2025) Vaghela, R.; Sravya, N.; Lal, S.; Sarda, J.; Thakkar, A.; Patil, S.
    In some Asian countries, the waxberry is a specialty fruit whose harvest demands substantial labour each season. To ease this burden, automated fruit-picking equipment has seen extensive development over the past decade. However, accurately segmenting waxberries in orchards remains challenging due to complex environments with overlapping fruits, foliage occlusions, and variable lighting conditions. Most existing segmentation methods are optimized for controlled environments with steady lighting and unobstructed views of the fruit, which limits their effectiveness in real-world scenarios. This paper introduces a fully convolutional neural network, the Multi-Attention Waxberry Network (MAWNet), which effectively addresses challenges such as occlusions, overlapping fruits and variable lighting conditions. MAWNet is a UNet-based architecture consisting of an enhanced residual block, a transformer block, an Atrous Spatial Pyramid Pooling (ASPP) block, and a newly introduced Multiple Dilation Convolutional (MDC) block. The experimental results validate that the proposed MAWNet model surpasses several state-of-the-art (SOTA) architectures, achieving an accuracy of 99.63%, an Intersection over Union (IoU) of 96.77%, and a Dice coefficient of 98.34%. © The Author(s) 2025.
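The ASPP and MDC blocks mentioned above rely on dilated (atrous) convolutions, which enlarge the receptive field without adding weights. A small sketch of the standard effective-kernel-size formula (illustrative only; the function name is an assumption):

```python
def effective_kernel(k, dilation):
    """Effective receptive-field size of a k x k kernel at a given dilation rate.

    A dilated convolution inserts (dilation - 1) gaps between kernel taps,
    so the kernel spans k + (k - 1) * (dilation - 1) pixels per axis
    while keeping only k * k weights.
    """
    return k + (k - 1) * (dilation - 1)

# A 3x3 kernel at dilation rates 1, 2, and 6 spans 3, 5, and 13 pixels,
# which is why ASPP-style blocks stack several rates in parallel to
# capture context at multiple scales.
```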