Faculty Publications

Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736

Publications by NITK Faculty


Now showing 1 - 8 of 8
  • Item
    Enhancement and bias removal of optical coherence tomography images: An iterative approach with adaptive bilateral filtering
    (Elsevier Ltd, 2016) Sudeep, P.V.; Issac Niwas, S.; Ponnusamy, P.; Rajan, J.; Xiaojun, Y.; Wang, X.; Luo, Y.; Liu, L.
    Optical coherence tomography (OCT) has continually evolved into one of the most valuable routine tests in ophthalmology. However, speckle noise in the acquired images degrades image quality and makes analysis difficult. In this paper, an iterative approach based on bilateral filtering is proposed for speckle reduction in multiframe OCT data. A Gamma noise model is assumed for the observed OCT image. First, an adaptive version of the conventional bilateral filter is applied to enhance the multiframe OCT data, and then the noise-induced bias is removed from each of the filtered frames. These unbiased filtered frames are then refined using an iterative approach. Finally, the refined frames are averaged to produce the denoised OCT image. Experimental results on phantom images and real OCT retinal images demonstrate the effectiveness of the proposed filter. © 2016 Elsevier Ltd.
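The multiframe pipeline the abstract describes (filter each frame, then average across frames) can be sketched in plain NumPy. This is an illustrative simplification, not the authors' implementation: it uses a fixed-parameter rather than adaptive bilateral filter, omits the Gamma-model bias correction, and the names `radius`, `sigma_s`, and `sigma_r` are hypothetical.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Plain (non-adaptive) bilateral filter for a 2-D float image."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # spatial kernel
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range kernel: down-weight pixels that differ from the centre
            rng = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

def denoise_multiframe(frames, n_iter=2):
    """Filter each frame iteratively, then average across frames."""
    filtered = []
    for f in frames:
        g = f.astype(float)
        for _ in range(n_iter):
            g = bilateral_filter(g)
        filtered.append(g)
    return np.mean(filtered, axis=0)
```

Averaging several independently filtered frames suppresses residual speckle further than filtering a single frame, which is the intuition behind the multiframe step above.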
  • Item
    A benchmark study of automated intra-retinal cyst segmentation algorithms using optical coherence tomography B-scans
    (Elsevier Ireland Ltd, 2018) Girish, G.N.; Anima, V.A.; Kothari, A.R.; Sudeep, P.V.; Roychowdhury, S.; Rajan, J.
    Background and objectives: Retinal cysts are formed by the accumulation of fluid in the retina caused by leakages from inflammation or vitreous fractures. Analysis of retinal cystic spaces is significant for the detection and treatment of several ocular diseases such as age-related macular degeneration and diabetic macular edema. Thus, segmentation of intra-retinal cysts and quantification of cystic spaces are vital for retinal pathology and severity assessment. In recent years, automated segmentation of intra-retinal cysts from optical coherence tomography B-scans has gained significant importance in the field of retinal image analysis. The objective of this paper is to compare different intra-retinal cyst segmentation algorithms for benchmarking purposes. Methods: In this work, we employ a modular approach to standardize the different segmentation algorithms. Further, we analyze variations in automated cyst segmentation performance and method scalability across image acquisition systems using the publicly available OPTIMA cyst segmentation challenge dataset. Results: Several key automated methods are comparatively analyzed through quantitative and qualitative experiments. Our analysis demonstrates the influence of variations in signal-to-noise ratio (SNR), retinal layer morphology, and post-processing steps on automated cyst segmentation. Conclusion: This benchmarking study provides insights into the scalability of automated methods across vendor-specific imaging modalities, offering guidance for retinal pathology diagnostics and treatment. © 2017 Elsevier B.V.
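Benchmarking studies of this kind typically score each algorithm's output against grader annotations with an overlap metric such as the Dice coefficient. A minimal sketch follows; it is illustrative only and not tied to the challenge's official evaluation code.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice overlap between two binary masks (1 = cyst, 0 = background)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Convention: two empty masks agree perfectly
    return 1.0 if denom == 0 else 2.0 * inter / denom
```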
  • Item
    A visual attention guided unsupervised feature learning for robust vessel delineation in retinal images
    (Elsevier Ltd, 2018) Srinidhi, C.L.; Aparna., P.; Rajan, J.
    Background and objective: Accurate segmentation of retinal vessels from color fundus images plays a significant role in the early diagnosis of various ocular, systemic, and neuro-degenerative diseases. Segmenting retinal vessels is challenging due to the varying nature of vessel caliber, the proximal presence of pathological lesions, strong central vessel reflex, and relatively low image contrast. Most existing methods rely on carefully designed hand-crafted features to model the local geometrical appearance of vasculature structures, which often lack the discriminative capability to segment vessels from a noisy and cluttered background. Methods: We propose a novel visual attention guided unsupervised feature learning (VA-UFL) approach to automatically learn the most discriminative features for segmenting vessels in retinal images. Our VA-UFL approach combines a visual attention mechanism with multi-scale contextual information to selectively attend to the most relevant part of the structure in a given local patch. This allows us to encode rich hierarchical information into unsupervised filter learning, generating a set of highly discriminative features that aid accurate vessel segmentation even in the presence of cluttered background. Results: The proposed method is validated on five publicly available retinal datasets: DRIVE, STARE, CHASE_DB1, IOSTAR, and RC-SLO. The experimental results show that the proposed approach significantly outperforms state-of-the-art methods in terms of sensitivity, accuracy, and area under the receiver operating characteristic curve across all five datasets. Specifically, the method achieves an average sensitivity greater than 0.82, which is 7% higher than all existing approaches validated on the DRIVE, CHASE_DB1, IOSTAR, and RC-SLO datasets, and even outperforms the second human observer. The method is robust to thin vessels, strong central vessel reflex, and complex crossover structures, and fares well on abnormal cases. Conclusions: The discriminative features learned via the visual attention mechanism are superior to hand-crafted features and are easily adaptable to various kinds of datasets where training images are often scarce. Hence, our approach can be readily integrated into large-scale retinal screening programs where expensive labelled annotation is often unavailable. © 2018 Elsevier Ltd
  • Item
    Segmentation of intra-retinal cysts from optical coherence tomography images using a fully convolutional neural network model
    (Institute of Electrical and Electronics Engineers Inc., 2019) Girish, G.N.; Thakur, B.; Chowdhury, S.R.; Kothari, A.R.; Rajan, J.
    Optical coherence tomography (OCT) is an imaging modality used extensively for ophthalmic diagnosis, near-histological visualization, and quantification of retinal abnormalities such as cysts, exudates, and retinal layer disorganization. Intra-retinal cysts (IRCs) occur in several macular disorders such as diabetic macular edema, retinal vascular disorders, age-related macular degeneration, and inflammatory disorders. Automated segmentation of IRCs poses challenges owing to variations in acquisition system scan intensities, speckle noise, and imaging artifacts. Several segmentation methods have been proposed in the literature for IRC segmentation on vendor-specific OCT images, but they lack generalizability across imaging systems. In this paper, we propose a fully convolutional network (FCN) model for vendor-independent IRC segmentation. The proposed method counteracts image noise variability and trains FCN models on OCT sub-images from the OPTIMA cyst segmentation challenge dataset (comprising images from four vendors: Cirrus, Nidek, Spectralis, and Topcon). Further, optimal data augmentation and model hyperparametrization are shown to prevent over-fitting for IRC area segmentation. The proposed method is evaluated on the test dataset with recall/precision rates of 0.66/0.79 across imaging vendors. With a Dice coefficient of 0.71 across vendors, the proposed method outperforms the published algorithms in the OPTIMA cyst segmentation challenge. © 2013 IEEE.
  • Item
    Automated Method for Retinal Artery/Vein Separation via Graph Search Metaheuristic Approach
    (Institute of Electrical and Electronics Engineers Inc., 2019) Srinidhi, C.L.; Aparna., P.; Rajan, J.
    Separation of the vascular tree into arteries and veins is a fundamental prerequisite in the automatic diagnosis of retinal biomarkers associated with systemic and neurodegenerative diseases. In this paper, we present a novel graph search metaheuristic approach for automatic separation of arteries/veins (A/V) in color fundus images. Our method exploits local information to disentangle the complex vascular tree into multiple subtrees, and global information to label these vessel subtrees as arteries and veins. Given a binary vessel map, a graph representation of the vascular network is constructed, capturing the topological and spatial connectivity of the vascular structures. Based on the anatomical uniqueness at vessel crossing and branching points, the vascular tree is split into multiple subtrees containing arteries and veins. Finally, the identified vessel subtrees are labeled as A/V using a set of hand-crafted features trained with a random forest classifier. The proposed method has been tested on four publicly available retinal datasets, achieving average accuracies of 94.7%, 93.2%, 96.8%, and 90.2% on the AV-DRIVE, CT-DRIVE, INSPIRE-AVR, and WIDE datasets, respectively. These results demonstrate the superiority of our approach over state-of-the-art methods for A/V separation. © 1992-2012 IEEE.
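The first step the abstract describes, locating crossing and branching points in the binary vessel map, can be approximated by counting 8-connected neighbours on a skeletonized map: endpoints have degree 1, ordinary vessel pixels degree 2, and branch/cross candidates degree 3 or more. A minimal NumPy sketch (illustrative only; `vessel_node_degrees` is a hypothetical helper, and the paper's full graph construction and random-forest labeling are not reproduced here):

```python
import numpy as np

def vessel_node_degrees(skeleton):
    """Count the 8-connected neighbours of every skeleton pixel.
    Degree-1 pixels are vessel endpoints; degree >= 3 pixels are
    candidate branching/crossing points of the vascular tree."""
    sk = skeleton.astype(bool)
    p = np.pad(sk, 1).astype(int)
    deg = np.zeros(sk.shape, dtype=int)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            # shifted view of the padded map adds one neighbour direction
            deg += p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
    return deg * sk  # keep degrees only at skeleton pixels
```

From these degrees, a graph can be built whose nodes are the endpoints and branch points and whose edges are the vessel segments between them, which is the structure the subtree-splitting step operates on.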
  • Item
    Marker controlled watershed transform for intra-retinal cysts segmentation from optical coherence tomography B-scans
    (Elsevier B.V., 2020) Girish, G.N.; Kothari, A.R.; Rajan, J.
    Retinal cysts have pathological significance in several eye disorders. Detecting and quantifying such cysts from optical coherence tomography (OCT) scans is currently tedious and requires expertise. To aid the diagnostic process, an automatic intra-retinal cyst segmentation method using a marker-controlled watershed transform on OCT B-scans is proposed in this paper. The proposed method has two stages: a k-means clustering technique is used to identify cysts in the form of markers, followed by a topography-based watershed transform for the final segmentation. Qualitative and quantitative evaluation of the proposed method was carried out against ground truth obtained from two graders on the OPTIMA cyst segmentation challenge dataset. The method efficiently segments cystic structures with mean recall and precision rates of 0.67 and 0.78, respectively, while maintaining a high correlation coefficient of 0.95 against the ground truth from both graders. The results show that the proposed method outperformed other existing methods. © 2017 Elsevier B.V.
  • Item
    Capsule Network–based architectures for the segmentation of sub-retinal serous fluid in optical coherence tomography images of central serous chorioretinopathy
    (Springer Science and Business Media Deutschland GmbH, 2021) Pawan, S.J.; Sankar, R.; Jain, A.; Jain, M.; Darshan, D.V.; Anoop, B.N.; Kothari, A.R.; Venkatesan, M.; Rajan, J.
    Central serous chorioretinopathy (CSCR) is a chorioretinal disorder of the eye characterized by serous detachment of the neurosensory retina at the posterior pole of the eye. CSCR results from the accumulation of subretinal fluid (SRF) due to idiopathic defects at the level of the retinal pigment epithelium (RPE) that allow serous fluid from the choriocapillaris to diffuse into the subretinal space between the RPE and neurosensory retinal layers. This condition is presently investigated by clinicians using invasive angiography or non-invasive optical coherence tomography (OCT) imaging. OCT images provide a representation of the fluid underlying the retina and, in the absence of automated segmentation tools, only a qualitative assessment is currently used to follow the progression of the disease. Automated segmentation of the SRF can prove extremely useful for assessing progression and for the timely management of CSCR. In this paper, we adopt an existing architecture called SegCaps, based on the recently introduced Capsule Networks concept, for the segmentation of SRF from CSCR OCT images. Furthermore, we propose an enhancement to SegCaps, termed DRIP-Caps, that utilizes Dilation, Residual Connections, Inception Blocks, and Capsule Pooling to address the defined problem. The proposed model outperforms the benchmark UNet architecture while reducing the number of trainable parameters by 54.21%. Moreover, it reduces the computational complexity of SegCaps, cutting trainable parameters by 37.85% with competitive performance. The experiments demonstrate the generalizability of the proposed model, as evidenced by its remarkable performance even with a limited number of training samples. © 2021, International Federation for Medical and Biological Engineering.
  • Item
    A Deep Ensemble Learning-Based CNN Architecture for Multiclass Retinal Fluid Segmentation in OCT Images
    (Institute of Electrical and Electronics Engineers Inc., 2023) Rahil, M.; Anoop, B.N.; Girish, G.N.; Kothari, A.R.; Koolagudi, S.G.; Rajan, J.
    Retinal fluid collections develop when fluid accumulates in the retina, which may be caused by several retinal disorders and can lead to loss of vision. Optical coherence tomography (OCT) provides non-invasive cross-sectional images of the retina and enables the visualization of different retinal abnormalities. The identification and segmentation of retinal cysts from OCT scans is gaining immense attention, since manual analysis of OCT data is time-consuming and requires an experienced ophthalmologist. Identification and categorization of retinal cysts aids in establishing the pathophysiology of various retinal diseases, such as macular edema, diabetic macular edema, and age-related macular degeneration. Hence, an automatic algorithm for the segmentation and detection of retinal cysts would be of great value to ophthalmologists. In this study, we propose a convolutional neural network-based deep ensemble architecture that segments the three different types of retinal fluid from retinal OCT images. The quantitative and qualitative performance of the model was evaluated on the publicly available RETOUCH challenge dataset. The proposed model outperformed the state-of-the-art methods, with an overall improvement of 1.8%. © 2013 IEEE.
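A common way to fuse an ensemble of segmentation networks, and one simple reading of "deep ensemble" here, is pixel-wise majority voting over the per-model label maps. The sketch below is illustrative only; the paper's actual fusion strategy may differ, and `ensemble_vote` is a hypothetical name.

```python
import numpy as np

def ensemble_vote(label_maps):
    """Pixel-wise majority vote over per-model label maps
    (0 = background, 1..K = fluid classes). Ties resolve to the
    lowest class index via argmax."""
    stack = np.stack(label_maps)                   # (models, H, W)
    n_classes = int(stack.max()) + 1
    # per-class vote counts at every pixel, then pick the winner
    votes = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)
```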