Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
Search Results
7 results
Item A benchmark study of automated intra-retinal cyst segmentation algorithms using optical coherence tomography B-scans (Elsevier Ireland Ltd, 2018) Girish, G.N.; Anima, V.A.; Kothari, A.R.; Sudeep, P.V.; Roychowdhury, S.; Rajan, J.
Background and objectives: Retinal cysts are formed by an accumulation of fluid in the retina caused by leakages from inflammation or vitreous fractures. Analysis of retinal cystic spaces is significant in the detection and treatment of several ocular diseases such as age-related macular degeneration and diabetic macular edema. Segmentation of intra-retinal cysts and quantification of cystic spaces are therefore vital for detecting retinal pathology and assessing its severity. In recent years, automated segmentation of intra-retinal cysts from optical coherence tomography B-scans has gained significant importance in retinal image analysis. The objective of this paper is to compare different intra-retinal cyst segmentation algorithms for benchmarking purposes.
Methods: In this work, we employ a modular approach to standardize the different segmentation algorithms. Further, we analyze the variations in automated cyst segmentation performance and method scalability across image acquisition systems using the publicly available OPTIMA cyst segmentation challenge dataset.
Results: Several key automated methods are comparatively analyzed in quantitative and qualitative experiments. Our analysis demonstrates the impact of variations in signal-to-noise ratio (SNR), retinal layer morphology and post-processing steps on automated cyst segmentation.
Conclusion: This benchmarking study provides insights into the scalability of automated methods across vendor-specific imaging modalities, offering guidance for retinal pathology diagnostics and treatment.
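The abstract above does not name a specific evaluation metric; the Dice similarity coefficient is a common choice for benchmarking cyst segmentation masks against ground truth. A minimal sketch (the function and toy masks are illustrative, not taken from the paper):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:          # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * intersection / total

# Toy 4x4 masks: predicted vs. ground-truth cyst pixels on one B-scan
pred  = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(round(dice_coefficient(pred, truth), 3))   # 2*3 / (4+3) ≈ 0.857
```

A benchmark of several algorithms would simply report this score (and analogous overlap metrics) per method and per vendor dataset.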
© 2017 Elsevier B.V.
Item Segmentation of intima media complex from carotid ultrasound images using wind driven optimization technique (Elsevier Ltd, 2018) Yamanakkanavar, Y.; Madipalli, P.; Rajan, J.; Kumar, P.K.; Narasimhadhan, A.V.
Cardiovascular diseases are a leading cause of death worldwide. An early indication of the possible onset of a cardiovascular disease is atherosclerosis, the accumulation of plaque on the arterial wall. The intima-media thickness (IMT) of the common carotid artery is an early marker of the development of cardiovascular disease, and the computation of the IMT and the delineation of carotid plaque are significant predictors for the clinical diagnosis of stroke risk. For a robust diagnosis, carotid ultrasound images must be free from speckle noise; to address this problem, we use state-of-the-art despeckling and enhancement methods in this work. Many edge-based methods for IMT estimation have been proposed to overcome the limitations of manual segmentation. In this paper, we present a fully automated region-of-interest (ROI) extraction and a threshold-based segmentation of the intima media complex (IMC) using a wind driven optimization (WDO) technique. A quantitative evaluation is carried out on 90 carotid ultrasound images from two different datasets, and the results are compared with those of state-of-the-art techniques such as a model-based approach, a dynamic programming method, and a snake segmentation method. The experimental analysis shows that the proposed method is robust in measuring the IMT in carotid ultrasound images.
© 2017 Elsevier Ltd
Item A visual attention guided unsupervised feature learning for robust vessel delineation in retinal images (Elsevier Ltd, 2018) Srinidhi, C.L.; Aparna, P.; Rajan, J.
Background and objective: Accurate segmentation of retinal vessels from color fundus images plays a significant role in the early diagnosis of various ocular, systemic and neuro-degenerative diseases.
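Wind driven optimization, used for threshold selection in the carotid IMC entry above, is a population-based metaheuristic in which "air parcels" move under friction, gravitation and a pressure-gradient pull toward the best-ranked parcel. A heavily simplified 1-D sketch (the parameters and toy objective are illustrative assumptions; the paper's actual formulation and segmentation objective are not reproduced here):

```python
import random

def wdo_maximize(objective, lo, hi, n_parcels=20, iters=50,
                 alpha=0.4, g=0.2, rt=3.0, vmax=0.3):
    """Toy 1-D wind driven optimization: 'air parcels' move under
    friction (alpha), gravitation toward the interval centre (g) and a
    pressure-gradient pull toward the best parcel (rt), scaled by rank."""
    random.seed(0)                      # deterministic for the demo
    span, centre = hi - lo, (lo + hi) / 2
    pos = [random.uniform(lo, hi) for _ in range(n_parcels)]
    vel = [0.0] * n_parcels
    best = max(pos, key=objective)
    for _ in range(iters):
        order = sorted(range(n_parcels), key=lambda i: -objective(pos[i]))
        for rank, i in enumerate(order, start=1):
            vel[i] = ((1 - alpha) * vel[i]
                      - g * (pos[i] - centre)
                      + rt * (1.0 / rank) * (best - pos[i]))
            vel[i] = max(-vmax * span, min(vmax * span, vel[i]))  # speed cap
            pos[i] = max(lo, min(hi, pos[i] + vel[i]))            # stay in bounds
        cand = max(pos, key=objective)
        if objective(cand) > objective(best):
            best = cand
    return best

# Toy objective with its maximum at x = 0.65
best = wdo_maximize(lambda x: -(x - 0.65) ** 2, 0.0, 1.0)
print(0.6 < best < 0.7)
```

In the paper's setting the objective would score a candidate intensity threshold for IMC segmentation rather than this toy quadratic.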
Segmenting retinal vessels is challenging due to the varying nature of vessel caliber, the proximal presence of pathological lesions, strong central vessel reflex and relatively low-contrast images. Most existing methods rely on carefully designed hand-crafted features to model the local geometrical appearance of vasculature, which often lack the discriminative capability to segment vessels from a noisy and cluttered background.
Methods: We propose a novel visual attention guided unsupervised feature learning (VA-UFL) approach to automatically learn the most discriminative features for segmenting vessels in retinal images. Our VA-UFL approach captures both the visual attention mechanism and multi-scale contextual information to selectively attend to the most relevant part of the structure in a given local patch. This allows us to encode rich hierarchical information into unsupervised filter learning and generate a set of highly discriminative features that aid accurate segmentation of vessels, even against a cluttered background.
Results: The proposed method is validated on five publicly available retinal datasets: DRIVE, STARE, CHASE_DB1, IOSTAR and RC-SLO. The experimental results show that the proposed approach significantly outperforms state-of-the-art methods in terms of sensitivity, accuracy and area under the receiver operating characteristic curve across all five datasets. Specifically, the method achieves an average sensitivity greater than 0.82, which is 7% higher than all existing approaches validated on the DRIVE, CHASE_DB1, IOSTAR and RC-SLO datasets, and even outperforms the second human observer. The method is robust in segmenting thin vessels, strong central vessel reflex and complex crossover structures, and fares well on abnormal cases.
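The VA-UFL details are specific to the paper, but the generic idea of unsupervised filter learning from image patches can be sketched with plain k-means on normalised patches (everything below, including the toy "vessel" image, is an illustrative stand-in rather than the authors' method):

```python
import numpy as np

def learn_patch_filters(image, patch=5, n_filters=4, iters=10, seed=0):
    """Learn a small bank of filters from random image patches with
    k-means, a generic stand-in for unsupervised feature learning."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # sample patches and normalise each to zero mean / unit norm
    ys = rng.integers(0, h - patch, 200)
    xs = rng.integers(0, w - patch, 200)
    P = np.stack([image[y:y + patch, x:x + patch].ravel()
                  for y, x in zip(ys, xs)])
    P = P - P.mean(axis=1, keepdims=True)
    P /= np.linalg.norm(P, axis=1, keepdims=True) + 1e-8
    # plain k-means: the centroids act as learned filters
    C = P[rng.choice(len(P), n_filters, replace=False)]
    for _ in range(iters):
        labels = np.argmax(P @ C.T, axis=1)        # cosine assignment
        for k in range(n_filters):
            members = P[labels == k]
            if len(members):
                c = members.mean(axis=0)
                C[k] = c / (np.linalg.norm(c) + 1e-8)
    return C.reshape(n_filters, patch, patch)

# Toy "fundus" image with a bright vertical stripe as a vessel proxy
img = np.zeros((32, 32))
img[:, 14:17] = 1.0
filters = learn_patch_filters(img)
print(filters.shape)   # (4, 5, 5)
```

Convolving the image with such learned filters yields the feature maps that a downstream vessel/background classifier would consume.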
Conclusions: The discriminative features learned via the visual attention mechanism are superior to hand-crafted features and are easily adaptable to various kinds of datasets where abundant training images are scarce. Hence, our approach can be integrated into large-scale retinal screening programs where expensive labelled annotations are often unavailable.
© 2018 Elsevier Ltd
Item Automated Method for Retinal Artery/Vein Separation via Graph Search Metaheuristic Approach (Institute of Electrical and Electronics Engineers Inc., 2019) Srinidhi, C.L.; Aparna, P.; Rajan, J.
Separation of the vascular tree into arteries and veins is a fundamental prerequisite for the automatic diagnosis of retinal biomarkers associated with systemic and neurodegenerative diseases. In this paper, we present a novel graph search metaheuristic approach for automatic separation of arteries/veins (A/V) in color fundus images. Our method exploits local information to disentangle the complex vascular tree into multiple subtrees, and global information to label these subtrees as arteries or veins. Given a binary vessel map, a graph representation of the vascular network is constructed that captures the topological and spatial connectivity of the vascular structures. Based on the anatomical uniqueness of vessel crossing and branching points, the vascular tree is split into multiple subtrees containing arteries and veins. Finally, the identified vessel subtrees are labeled as A/V using a set of hand-crafted features trained with a random forest classifier. The proposed method has been tested on four different publicly available retinal datasets, achieving accuracies of 94.7%, 93.2%, 96.8%, and 90.2% on the AV-DRIVE, CT-DRIVE, INSPIRE-AVR, and WIDE datasets, respectively. These results demonstrate that our approach outperforms state-of-the-art methods for A/V separation.
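The graph construction in the A/V separation entry above starts from a binary vessel map; one standard way to find the graph's nodes is to classify skeleton pixels by their 8-connected neighbour count (a generic sketch with a toy skeleton, not the paper's exact procedure):

```python
import numpy as np

def classify_skeleton(skel):
    """Classify pixels of a skeletonised binary vessel map by their
    8-connected neighbour count: 1 neighbour is an endpoint, 2 an
    ordinary vessel pixel, 3 or more a branching/crossing point."""
    skel = (np.asarray(skel) > 0).astype(int)
    padded = np.pad(skel, 1)
    # sum of the 8 shifted copies = per-pixel ON-neighbour count;
    # the zero pad absorbs np.roll's wraparound at the borders
    nbrs = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))[1:-1, 1:-1]
    endpoints = (skel == 1) & (nbrs == 1)
    branches = (skel == 1) & (nbrs >= 3)
    return endpoints, branches

# Toy Y-shaped skeleton: two branches meeting a stem at pixel (1, 1)
s = np.zeros((5, 5), dtype=int)
for y, x in [(0, 0), (0, 2), (1, 1), (2, 1), (3, 1)]:
    s[y, x] = 1
endpoints, branches = classify_skeleton(s)
print(int(branches.sum()), int(endpoints.sum()))   # 1 branch point, 3 endpoints
```

Branch and crossing pixels become graph nodes, and the vessel runs between them become edges, from which subtrees can then be extracted and labeled.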
© 1992-2012 IEEE.
Item Stroke classification from computed tomography scans using 3D convolutional neural network (Elsevier Ltd, 2022) Neethi, A.S.; Niyas, S.; Kannath, S.K.; Mathew, J.; Anzar, A.M.; Rajan, J.
Stroke is a cerebrovascular condition with significant morbidity and mortality that causes physical disabilities in survivors. Once symptoms are identified, stroke requires time-critical diagnosis with the most commonly available imaging techniques; computed tomography (CT) scans are used worldwide for preliminary stroke diagnosis. Identifying the stroke type, which is critical for initiating treatment, demands the expertise and experience of a radiologist. This work attempts to capture those domain skills and build a model that diagnoses stroke from CT scans. A non-contrast computed tomography (NCCT) scan of the brain comprises volumetric images, i.e., a 3D stack of image slices, so a model that targets a single 2D slice may fail to exploit this volumetric nature. We propose a 3D fully convolutional classification model that identifies stroke cases from CT images while taking into account the contextual longitudinal composition of volumetric data, together with a custom pre-processing module that enhances the scans and helps improve classification performance. Significant challenges for 3D CNNs are the small number of training samples and a scan distribution biased in favor of normal patients; in this work, the limitations of insufficient training volumes and class-imbalanced data are addressed with a strided slicing approach. The proposed network follows a block-wise design, with the initial part adjusting dimensionality while retaining features; the accumulated feature maps are then learned effectively using bundled convolutions and skip connections.
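The strided slicing idea mentioned above, cutting each scan into overlapping fixed-depth sub-volumes to multiply the number of training samples, can be sketched as follows (the depth and stride values are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def strided_subvolumes(volume, depth=8, stride=4):
    """Cut a 3-D scan of shape (slices, H, W) into overlapping
    fixed-depth sub-volumes; the overlap multiplies the number of
    training samples available from each scan."""
    n_slices = volume.shape[0]
    return [volume[s:s + depth]
            for s in range(0, n_slices - depth + 1, stride)]

scan = np.zeros((32, 64, 64), dtype=np.float32)   # toy NCCT volume
subs = strided_subvolumes(scan)
print(len(subs), subs[0].shape)   # 7 sub-volumes of shape (8, 64, 64)
```

Class imbalance can then be reduced by sampling more (or more densely overlapping) sub-volumes from the under-represented stroke class than from normal scans.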
The results of the proposed method were compared against 3D CNN stroke classification models on NCCT, various 3D CNN architectures on other brain imaging modalities, and 3D extensions of several classical CNN architectures. The proposed method achieved an improvement of 14.28% in F1-score over the state-of-the-art 3D CNN stroke classification model.
© 2022 Elsevier Ltd
Item A Dual-Stage Semi-Supervised Pre-Training Approach for Medical Image Segmentation (Institute of Electrical and Electronics Engineers Inc., 2024) Aralikatti, R.C.; Pawan, S.J.; Rajan, J.
Deep neural networks have played a vital role in automated methods for medical image segmentation, but their reliance on labeled data limits their practicality. Semi-supervised learning is gaining attention for its intrinsic ability to extract valuable information from both labeled and unlabeled data with improved performance. Recently, consistency regularization methods have drawn interest due to their efficient learning procedures; they are, however, confined to either data-level or network-level perturbations, negating the benefit of having both forms in a single framework. In light of this, we ask an intriguing but unexplored question: can we have both network-level and data-level perturbation in a semi-supervised framework? To this end, we present a holistic approach that integrates data-level perturbation in the model pre-training stage, followed by implicit network-level perturbation in the fine-tuning stage. Furthermore, we incorporate manifold learning paradigms throughout training to facilitate robust data representations by ensuring local and global semantic affinities, adhering to the theory of consensus. Notably, this may be the first attempt in semi-supervised medical image segmentation to combine data- and network-level perturbation with a model pre-training strategy.
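Data-level perturbation consistency, one ingredient of the framework above, rewards a model for predicting the same output on an input and on a noised copy of it; no labels are required, so the signal also applies to unlabeled scans. A minimal numpy sketch (the toy "model" and noise level are illustrative assumptions):

```python
import numpy as np

def consistency_loss(model, x, rng, sigma=0.1):
    """Data-level perturbation consistency: the mean squared difference
    between the model's outputs on an input and on a noised copy of it.
    No labels are needed, so the loss applies to unlabeled data."""
    noisy = x + rng.normal(0.0, sigma, x.shape)
    return float(np.mean((model(x) - model(noisy)) ** 2))

rng = np.random.default_rng(0)
model = lambda x: 1.0 / (1.0 + np.exp(-x))   # stand-in for a segmentation net
x = rng.normal(size=(4, 4))                  # stand-in for an image batch
print(consistency_loss(model, x, rng) >= 0.0)   # True; shrinks as sigma -> 0
```

Network-level perturbation works analogously, except the two predictions come from two differently perturbed networks (e.g., with dropout) rather than two versions of the input.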
We extensively validated the efficacy of the proposed framework on three benchmark datasets, namely the Automated Cardiac Diagnosis Challenge (ACDC), ISIC-2018, and Left Atrial (LA) Segmentation Challenge datasets, under severely limited labeled data. Notably, with 4% (ACDC), 5% (ISIC-2018), and 6% (LA) labeled cases, the proposed method outperforms the second-best method by 2.95%, 1.31%, and 0.71% in the Dice similarity metric.
© 2023 IEEE.
Item An automated deep learning pipeline for detecting user errors in spirometry test (Elsevier Ltd, 2024) Bonthada, S.; Pariserum Perumal, S.P.; Naik, P.P.; Mahesh, M.A.; Rajan, J.
The spirometer is a major diagnostic tool for obstructive airway diseases and a monitoring tool for therapy response and disease staging over time. It is a sophisticated medical device that quantifies the flow and volume of air exhaled by a subject during a testing period. The metrics obtained from a spirometry test play a crucial role in enabling healthcare professionals to thoroughly evaluate the respiratory health of the individual under examination. Several spirometer measurements, including Forced Vital Capacity (FVC) and Forced Expiratory Volume (FEV), serve as guidelines for the diagnosis and prognosis of Chronic Obstructive Pulmonary Disease (COPD) and asthma. However, user errors arising from various causes, including improper handling of the equipment and poor performance of the expiratory maneuvers, lead to incorrect treatment directions. To ensure accurate results, spirometry tests traditionally require the presence of a skilled professional to identify and address these errors promptly. This paper proposes a novel machine learning approach to automatically identify four such user errors from Volume-Time and Flow-Volume graphs.
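The FVC and FEV metrics named above come straight from the volume-time curve. A hand-written check like the one below illustrates one kind of user error (early termination of the blow) that the learned model would detect automatically; the 6-second rule, function names and toy exhalation curve are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

def spirometry_metrics(t, volume):
    """FVC (total exhaled volume) and FEV1 (volume exhaled in the first
    second) from a volume-time curve, plus a simple early-termination
    flag for blows shorter than the commonly advised 6 seconds."""
    fvc = float(volume[-1])
    fev1 = float(np.interp(1.0, t, volume))   # volume at t = 1 s
    too_short = bool(t[-1] < 6.0)
    return fvc, fev1, too_short

t = np.linspace(0.0, 6.0, 61)            # time in seconds
vol = 4.0 * (1.0 - np.exp(-1.2 * t))     # litres, toy exhalation curve
fvc, fev1, short = spirometry_metrics(t, vol)
print(round(fev1 / fvc, 2), short)       # ≈0.7 FEV1/FVC ratio, not too short
```

The proposed pipeline replaces such hand-written rules with classifiers trained on the Volume-Time and Flow-Volume curves themselves.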
By detecting specific errors and providing immediate feedback to patients, the reliability and accuracy of spirometry results can be improved and the need for trained professionals reduced, facilitating the widespread adoption of spirometry, particularly in low-resource telemedicine settings. This work implements a binary classification model that distinguishes between normal and erroneous test samples with a prediction accuracy of 93%, and a 4-way classification model that identifies the individual error sub-types with a prediction accuracy of 94%.
© 2023 Elsevier Ltd
