Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
Search Results (8 items)
Item Dense refinement residual network for road extraction from aerial imagery data (Institute of Electrical and Electronics Engineers Inc., 2019) Eerapu, K.K.; Ashwath, B.; Lal, S.; Dell’Acqua, F.; Narasimha Dhan, A.V.

Extraction of roads from high-resolution aerial images with a high degree of accuracy is a prerequisite in various applications. In aerial images, road pixels and background pixels are generally in a ratio of one to tens, which implies a class imbalance problem. Existing semantic segmentation architectures generally do well in road-dominated cases but fail in background-dominated scenarios. This paper proposes a dense refinement residual network (DRR Net) for semantic segmentation of aerial imagery data. The proposed architecture is composed of multiple DRR modules that extract diversified roads while alleviating the class imbalance problem. Each module utilizes dense convolutions at various scales, only in the encoder, for feature learning. Residual connections in each module provide a guided learning path by propagating the combined features to subsequent DRR modules. Segmentation maps undergo various levels of refinement depending on the number of DRR modules used in the architecture. To place greater emphasis on small object instances, the proposed architecture has been trained with a composite loss function. Qualitative and quantitative results are reported on the Massachusetts roads dataset. The experimental results show that the proposed architecture provides better results than other recent architectures. © 2019 Institute of Electrical and Electronics Engineers Inc.
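The class-imbalance-aware composite loss mentioned for DRR Net can be illustrated with a generic combination of binary cross-entropy and Dice loss; the specific terms and the equal weighting below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def bce_loss(pred, target, eps=1e-7):
    """Mean pixel-wise binary cross-entropy over flat probability/label lists."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(pred, target)) / len(pred)

def dice_loss(pred, target, eps=1e-7):
    """1 - Dice coefficient; sensitive to small foreground regions."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 1 - (2 * inter + eps) / (sum(pred) + sum(target) + eps)

def composite_loss(pred, target, w_bce=0.5, w_dice=0.5):
    """Weighted sum of a per-pixel loss (BCE) and a region-overlap loss (Dice)."""
    return w_bce * bce_loss(pred, target) + w_dice * dice_loss(pred, target)

# Toy mask: 1 road pixel among 7 background pixels (class imbalance).
pred   = [0.9, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]  # predicted probabilities
target = [1,   0,   0,   0,   0,   0,   0,   0]    # ground-truth labels
loss = composite_loss(pred, target)
```

The Dice term keeps the single foreground pixel influential even though BCE alone is dominated by the seven background pixels.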
Item Efficient and robust deep learning architecture for segmentation of kidney and breast histopathology images (Elsevier Ltd, 2021) Chanchal, A.K.; Kumar, A.; Lal, S.; Kini, J.

Image segmentation is consistently an important task in computer vision and the analysis of medical images. The purpose of our proposed method is the analysis and diagnosis of histopathology images using efficient algorithms that separate hematoxylin- and eosin-stained nuclei. In this paper, we propose a deep learning model that automatically segments the complex nuclei present in histology images by implementing an effective encoder–decoder architecture with a separable convolution pyramid pooling network (SCPP-Net). The SCPP unit focuses on two aspects: first, it increases the receptive field by varying four different dilation rates while keeping the kernel size fixed; second, it reduces the number of trainable parameters by using depth-wise separable convolution. Our deep learning model was evaluated on three publicly available histopathology image datasets. The proposed SCPP-Net provides better segmentation results than other existing deep learning models, evaluated in terms of F1-score and aggregated Jaccard index. © 2021 Elsevier Ltd

Item DPPNet: An Efficient and Robust Deep Learning Network for Land Cover Segmentation From High-Resolution Satellite Images (Institute of Electrical and Electronics Engineers Inc., 2023) Sravya, N.; Priyanka; Lal, S.; Nalini, J.; Chintala, C.S.; Dell’Acqua, F.

Visual understanding of land cover is an important task in information extraction from high-resolution satellite images, an operation often involved in remote sensing applications. Multi-class semantic segmentation of high-resolution satellite images has turned out to be an important research topic because of its wide range of real-life applications.
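Both the SCPP unit and depth-wise designs such as DPPNet's rely on depth-wise separable convolution; a quick parameter count shows where the savings come from (the channel and kernel sizes below are arbitrary examples, not the papers' exact layers).

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depth-wise k x k conv (one filter per input channel)
    followed by a 1x1 point-wise conv that mixes channels."""
    return c_in * k * k + c_in * c_out

# Example: 128 -> 128 channels with a 3x3 kernel.
standard = conv_params(128, 128, 3)                   # 147456 parameters
separable = depthwise_separable_params(128, 128, 3)   # 1152 + 16384 = 17536
ratio = standard / separable                          # roughly 8x fewer
```

The saving grows with channel count, which is why depth-wise separable blocks suit models aiming at lower computational cost.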
Although the scientific literature reports several deep learning methods that provide good results in segmenting remotely sensed images, these are generally computationally expensive. An open challenge remains in developing a robust deep learning model that improves performance while requiring less computation. In this article, we propose a new model termed DPPNet (Depth-wise Pyramid Pooling Network), which uses a newly designed Depth-wise Pyramid Pooling (DPP) block and a dense block with multi-dilated depth-wise residual connections. The proposed DPPNet model is evaluated and compared with benchmark semantic segmentation models on the Land-cover and WHDLD high-resolution space-borne sensor (HRS) datasets. The proposed model provides Dice coefficient (DC), intersection over union (IoU), overall accuracy (OA), and Kappa (Ka) scores of (88.81%, 78.29%), (76.35%, 60.92%), (87.15%, 81.02%), and (77.86%, 72.73%) on the Land-cover and WHDLD HRS datasets, respectively. Results show that the proposed DPPNet model performs better, in both quantitative and qualitative terms, on these standard benchmark datasets than current state-of-the-art methods. © 2017 IEEE.

Item Evolution of LiverNet 2.x: Architectures for automated liver cancer grade classification from H&E stained liver histopathological images (Springer, 2024) Chanchal, A.K.; Lal, S.; Barnwal, D.; Sinha, P.; Arvavasu, S.; Kini, J.

Recently, the automation of disease identification has become quite popular in the field of medical diagnosis. The rise of Convolutional Neural Networks (CNNs) for training on and generalizing from medical image data has proven quite efficient in detecting and identifying the types and sub-types of various diseases. Since the classification of large datasets of Hematoxylin & Eosin (H&E) stained histopathology images by experts can be expensive and time-consuming, automated processes using deep learning have been encouraged for the past decade.
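The segmentation metrics reported for DPPNet (Dice, IoU, overall accuracy, Kappa) all derive from the confusion matrix; a generic sketch with made-up counts, using the standard definitions rather than any paper-specific variant:

```python
def seg_metrics(tp, fp, fn, tn):
    """Binary segmentation metrics from confusion-matrix counts."""
    n = tp + fp + fn + tn
    dice = 2 * tp / (2 * tp + fp + fn)         # Dice coefficient (= F1)
    iou = tp / (tp + fp + fn)                  # intersection over union
    oa = (tp + tn) / n                         # overall accuracy
    # Cohen's kappa: accuracy corrected for chance agreement.
    p_pos = ((tp + fp) / n) * ((tp + fn) / n)
    p_neg = ((fn + tn) / n) * ((fp + tn) / n)
    pe = p_pos + p_neg
    kappa = (oa - pe) / (1 - pe)
    return dice, iou, oa, kappa

# Arbitrary example counts for a 200-pixel image.
dice, iou, oa, kappa = seg_metrics(tp=80, fp=10, fn=10, tn=100)
```

Note that Dice is always at least as large as IoU for the same counts, which matches the ordering of the DC and IoU figures quoted above.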
This paper introduces the LiverNet 2.x models by modifying the previous LiverNet architecture. The proposed model uses two different improvements of the Atrous Spatial Pyramid Pooling (ASPP) block to extract the clinically defined features of hepatocellular carcinoma (HCC) from liver histopathology images. LiverNet 2.0 uses a modified form of the ASPP block known as DenseASPP, in which all the atrous convolution outputs are densely connected, whereas LiverNet 2.1 uses fewer concatenations while maintaining a large receptive field by stacking the dilated convolutional blocks in a tree-like fashion. This paper also discusses the trade-off between LiverNet 2.0 and LiverNet 2.1 in terms of accuracy and computational complexity. All comparison models and the proposed model are trained and tested on patches of two different histopathological datasets. The experimental results show that the proposed model performs better than the reference models. For the KMC Liver dataset, LiverNet 2.0 and LiverNet 2.1 achieved accuracies of 97.50% and 97.14%, respectively; for the TCGA Liver dataset, they achieved accuracies of 94.37% and 97.14%. © 2023, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.

Item RMDNet-Deep Learning Paradigms for Effective Malware Detection and Classification (Institute of Electrical and Electronics Engineers Inc., 2024) S, S.; Lal, S.; Pratap Singh, M.; Raghavendra, B.S.

Malware analysis and detection remain essential for maintaining the security of networks and computer systems, even as the threat landscape shifts. Traditional approaches are insufficient to keep pace with the rapidly evolving nature of malware. Artificial Intelligence (AI) plays a significant role in advancing malware detection to unprecedented levels, and various Machine Learning (ML) based malware detection systems have been developed to combat the ever-changing characteristics of malware.
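The receptive-field behaviour behind ASPP-style blocks such as those in LiverNet 2.x follows from standard dilated-convolution arithmetic; the dilation rates (1, 6, 12, 18) below are the classic ASPP choices, used here only as an assumed example rather than the papers' actual rates.

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 dilated convolutions.
    Each layer widens the field by (k - 1) * d pixels."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Parallel ASPP-style branches: each 3x3 branch sees its own dilation rate.
aspp = [receptive_field([3], [d]) for d in (1, 6, 12, 18)]   # per-branch fields
# Stacked (DenseASPP-like) variant: dilations compound across layers.
stacked = receptive_field([3, 3, 3, 3], [1, 6, 12, 18])
```

Stacking the same rates yields a far larger combined field than any single branch, which is the motivation for densely connecting or tree-stacking dilated blocks.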
Consequently, there is growing interest in exploring advanced techniques that leverage the power of Deep Learning (DL) to effectively analyze and detect malicious software; DL models demonstrate enhanced capabilities for analyzing extensive sequences of system calls. This paper proposes a Robust Malware Detection Network (RMDNet) for effective malware detection and classification. The proposed RMDNet model branches the input and performs depth-wise convolution and concatenation operations. The proposed RMDNet and existing DL models are evaluated on a dataset of 48,240 malware binary-visualization images in RGB format, as well as on the multi-class malimg and dumpware-10 datasets in grayscale format. The experimental results on each of these datasets demonstrate that the proposed RMDNet model can effectively and accurately categorize malware, outperforming the most recent benchmark DL algorithms. © 2013 IEEE.

Item A Robust CNN Framework for Change Detection Analysis From Bitemporal Remote Sensing Images (Institute of Electrical and Electronics Engineers Inc., 2024) Sravya, N.; Bhaduka, K.; Lal, S.; Nalini, J.; Chintala, C.S.

Deep learning (DL) algorithms are currently the most effective methods for change detection (CD) from high-resolution multispectral (MS) remote-sensing (RS) images. Because a variety of satellites can provide large volumes of data, it is now easy to find changes using efficient DL models. Current CD methods favor simple structures and combine the features obtained at all stages together rather than extracting multiscale features from a single stage, which may lead to information loss and an imbalanced contribution of features at different stages. This in turn results in misclassification of small changed areas and poor preservation of the edges and shapes of changed areas. This article introduces an enhanced RSCD network (ERSCDNet) for CD from bitemporal aerial and MS images.
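Grayscale malware images like those in the malimg and dumpware-10 datasets used for RMDNet are conventionally produced by interpreting a binary's raw bytes as pixel intensities; a minimal sketch of that byte-plot idea, which is not necessarily those datasets' exact pipeline:

```python
def bytes_to_grayscale(data: bytes, width: int):
    """Reshape raw bytes into rows of `width` pixels (0-255 grayscale),
    zero-padding the final row, as in byte-plot malware visualization."""
    rows = []
    for i in range(0, len(data), width):
        row = list(data[i:i + width])
        row += [0] * (width - len(row))   # pad the last, partial row
        rows.append(row)
    return rows

# First bytes of a PE-like header (MZ magic), plotted 4 pixels wide.
img = bytes_to_grayscale(b"\x4d\x5a\x90\x00\x03\x00", width=4)
```

Once binaries are rendered this way, malware-family classification becomes an image classification problem that CNNs handle directly.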
The proposed encoder–decoder-based ERSCDNet model uses attention-based encoder and decoder blocks and a new modified spatial pyramid pooling block at each stage of the decoder, which effectively utilizes features from each encoder stage and prevents information loss. The LEVIR-CD (learning, vision, and remote sensing CD), OSCD (Onera satellite change detection), and SYSU-CD (Sun Yat-sen University CD) datasets are used to evaluate the ERSCDNet model. ERSCDNet performs better than all the models used for comparison in this article: it gives an F1 score, Kappa coefficient, and Jaccard index of (0.9306, 0.9282, 0.8703), (0.8945, 0.8887, 0.8091), and (0.7581, 0.6876, 0.6103) on the OSCD, LEVIR-CD, and SYSU-CD datasets, respectively. © 2024 The Authors. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.

Item Development of Robust CNN Architecture for Grading and Classification of Renal Cell Carcinoma Histology Images (Institute of Electrical and Electronics Engineers Inc., 2025) Chanchal, C.A.; Lal, S.; Suresh, S.

Kidney cancer is a commonly diagnosed cancer in recent years, and Renal Cell Carcinoma (RCC) is the most common kidney cancer, responsible for 80% to 85% of all renal tumors. The diagnosis of kidney cancer requires manual examination and analysis of histopathological images of the affected tissue. This process is time-consuming, prone to human error, and highly dependent on the expertise of a pathologist. Early detection and grading of kidney cancer tissues enable doctors and practitioners to decide the further course of treatment. Therefore, quick and precise analysis of kidney cancer tissue images is extremely important for proper diagnosis. Recently, deep learning algorithms have proved to be very efficient and accurate in histopathology image analysis.
In this paper, we propose a computationally efficient deep-learning architecture based on convolutional neural networks (CNNs) to automate the grading and classification of kidney cancer tissue. The proposed Robust CNN (RoCNN) architecture is capable of learning features at varying convolutional filter sizes because of the inception modules employed in it. Squeeze-and-Excitation (SE) blocks are used to remove unnecessary contributions from noisy channels and improve model accuracy. Concatenating samples from three different parts of the architecture allows varied features to be encompassed, further improving grading and classification accuracy. To demonstrate that the proposed model generalizes and is independent of the dataset, it was evaluated on two well-known datasets: the KMC kidney dataset of five different grades and the TCGA dataset of four classes. Compared to the best-performing state-of-the-art model, the accuracy of RoCNN shows a significant improvement of about 4.22% and 3.01% on the two datasets, respectively. © 2013 IEEE.

Item MDEANet: modified detail-enhanced convolution and attention-based network for dehazing of remote sensing images (Springer, 2025) Sravya, S.; K.s, B.; Lal, S.

Image de-hazing aims to improve the quality and restore the clarity of hazy images. When airborne particles such as dust and smoke absorb light, the result is haze, a typical meteorological phenomenon that degrades color accuracy, picture contrast, and overall visual perception. Numerous applications, including environmental monitoring, disaster management, and remote sensing, rely heavily on satellite imaging. However, haze and airborne debris may considerably reduce the clarity and quality of satellite images, which can influence how well they can be used and interpreted. This paper proposes MDEANet (Modified Detail-Enhanced convolution and Attention-based Network), a deep learning-based algorithm for de-hazing of remote sensing images.
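Channel gating of the squeeze-and-excitation kind used in RoCNN (and channel attention more generally, as in MDEANet) can be sketched in plain Python; the tiny weights below are arbitrary assumptions chosen so that one channel is visibly suppressed.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_gate(feature_maps, w1, w2):
    """Squeeze-and-excitation channel gating.
    feature_maps: C channels, each a 2D list of activations.
    w1: bottleneck weights (hidden x C), w2: expansion weights (C x hidden)."""
    # Squeeze: global average pool, one scalar per channel.
    squeezed = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
                for ch in feature_maps]
    # Excitation: bottleneck MLP with ReLU, then sigmoid gates in (0, 1).
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed))) for row in w1]
    gates = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w2]
    # Re-scale: channels with low gates contribute less downstream.
    return [[[v * g for v in row] for row in ch]
            for ch, g in zip(feature_maps, gates)]

# Two 2x2 channels; the toy weights drive channel 1 toward a low gate.
fmaps = [[[1.0, 1.0], [1.0, 1.0]], [[2.0, 2.0], [2.0, 2.0]]]
w1 = [[1.0, 0.0]]        # one hidden unit fed by two channels
w2 = [[1.0], [-1.0]]     # channel 0 kept, channel 1 suppressed
out = se_gate(fmaps, w1, w2)
```

In a trained network the weights are learned, so the gating adapts per input, which is how noisy channels get down-weighted.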
Because haze is unevenly distributed, this model has pixel-attention and channel-attention blocks that treat the pixels and channels of an image differently depending on the haze distribution, giving more flexibility for de-hazing. Difference convolution (DC) captures gradients and improves the representation and adaptation abilities of the CNN. The proposed model is trained on the RESIDE-OTS dataset. It achieves an average PSNR of 29.411, SSIM of 0.9495, and MVR of 0.0335 on the RESIDE-OTS test images, and an average MVR of 0.0264 on satellite images, the best values among the compared existing models. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
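The PSNR figures quoted for MDEANet follow the standard definition in terms of mean squared error; a minimal sketch (the sample pixel values are arbitrary):

```python
import math

def psnr(clean, restored, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-size images (flat lists)."""
    mse = sum((a - b) ** 2 for a, b in zip(clean, restored)) / len(clean)
    if mse == 0:
        return float("inf")   # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# A restored patch within a few gray levels of the clean one.
clean = [100, 120, 140, 160]
restored = [101, 118, 143, 158]
value = psnr(clean, restored)
```

Higher PSNR means lower pixel-wise error, so a reported average of about 29.4 dB indicates substantially larger residual error per pixel than this tiny example.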
