Faculty Publications

Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736

Publications by NITK Faculty

Search Results

Now showing 1 - 10 of 11
  • Item
    Enhancement and bias removal of optical coherence tomography images: An iterative approach with adaptive bilateral filtering
    (Elsevier Ltd, 2016) Sudeep, P.V.; Issac Niwas, S.; Ponnusamy, P.; Rajan, J.; Xiaojun, Y.; Wang, X.; Luo, Y.; Liu, L.
Optical coherence tomography (OCT) has continually evolved and expanded as one of the most valuable routine tests in ophthalmology. However, noise (speckle) in the acquired images degrades their quality and makes them difficult to analyze. In this paper, an iterative approach based on bilateral filtering is proposed for speckle reduction in multiframe OCT data. A Gamma noise model is assumed for the observed OCT image. First, an adaptive version of the conventional bilateral filter is applied to enhance the multiframe OCT data, and then the bias due to noise is removed from each of the filtered frames. These unbiased filtered frames are then refined using an iterative approach. Finally, the refined frames are averaged to produce the denoised OCT image. Experimental results on phantom images and real retinal OCT images demonstrate the effectiveness of the proposed filter. © 2016 Elsevier Ltd.
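The abstract's pipeline (adaptive bilateral filtering of each frame, bias removal, iterative refinement, frame averaging) can be sketched as below. The paper's adaptive parameter selection and Gamma-model bias estimate are not given in the abstract, so this sketch uses a fixed-parameter brute-force bilateral filter and a hypothetical scalar `bias` argument.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Brute-force bilateral filter (illustrative, not optimized)."""
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # fixed spatial kernel
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range kernel: weight neighbors by intensity similarity
            rng_w = np.exp(-((patch - img[i, j])**2) / (2 * sigma_r**2))
            wgt = spatial * rng_w
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

def denoise_frames(frames, iterations=2, bias=0.0):
    """Filter each frame iteratively, subtract an (assumed) noise bias,
    then average the refined frames into one denoised image."""
    refined = []
    for f in frames:
        g = f.astype(float)
        for _ in range(iterations):
            g = bilateral_filter(g)
        refined.append(g - bias)
    return np.mean(refined, axis=0)
```

In the actual method the filter parameters adapt to local image statistics and the bias term is derived from the Gamma noise model; both are fixed placeholders here.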
  • Item
    Accurate lumen diameter measurement in curved vessels in carotid ultrasound: an iterative scale-space and spatial transformation approach
    (Springer Verlag, 2017) Krishna Kumar, P.; Araki, T.; Rajan, J.; Saba, L.; Lavra, F.; Ikeda, N.; Sharma, A.M.; Shafique, S.; Nicolaïdes, A.; Laird, J.R.; Gupta, A.; Suri, J.S.
    Monitoring of cerebrovascular diseases via carotid ultrasound has started to become routine. The measurement of image-based lumen diameter (LD) or inter-adventitial diameter (IAD) is a promising approach for quantifying the degree of stenosis. Manual measurements of LD/IAD are unreliable, subjective, and slow. The curvature of the vessels, along with non-uniformity in plaque growth, poses further challenges. This study uses a novel and generalized approach for automated LD and IAD measurement based on a combination of spatial transformation and scale-space. In this iterative procedure, scale-space is first used to obtain the lumen axis, which is then used with a spatial image transformation paradigm to obtain a transformed image. Scale-space is then reapplied to retrieve the lumen region and boundary in the transformed framework. Finally, the inverse transformation is applied to display the results in the original image framework. B-mode ultrasound images of the left and right common carotid arteries of 202 patients (404 carotid images) were retrospectively analyzed. The validation of our algorithm was done against two manual expert tracings. The coefficients of correlation for LD against the two manual tracings were 0.98 (p < 0.0001) and 0.99 (p < 0.0001), respectively. The precision of merit between the manual expert tracings and the automated system was 97.7% and 98.7%, respectively. The experimental analysis demonstrated superior performance of the proposed method over conventional approaches. Several statistical tests demonstrated the stability and reliability of the automated system. © 2016, International Federation for Medical and Biological Engineering.
  • Item
    A benchmark study of automated intra-retinal cyst segmentation algorithms using optical coherence tomography B-scans
    (Elsevier Ireland Ltd, 2018) Girish, G.N.; Anima, V.A.; Kothari, A.R.; Sudeep, P.V.; Roychowdhury, S.; Rajan, J.
    Background and objectives: Retinal cysts are formed by accumulation of fluid in the retina caused by leakages from inflammation or vitreous fractures. Analysis of the retinal cystic spaces holds significance in the detection and treatment of several ocular diseases such as age-related macular degeneration and diabetic macular edema. Thus, segmentation of intra-retinal cysts and quantification of cystic spaces are vital for retinal pathology and severity detection. In recent years, automated segmentation of intra-retinal cysts from optical coherence tomography B-scans has gained significant importance in the field of retinal image analysis. The objective of this paper is to benchmark different intra-retinal cyst segmentation algorithms through comparative analysis. Methods: In this work, we employ a modular approach for standardizing the different segmentation algorithms. Further, we analyze the variations in automated cyst segmentation performance and method scalability across image acquisition systems by using the publicly available cyst segmentation challenge dataset (OPTIMA cyst segmentation challenge). Results: Several key automated methods are comparatively analyzed using quantitative and qualitative experiments. Our analysis demonstrates the significance of variations in signal-to-noise ratio (SNR), retinal layer morphology, and post-processing steps on the automated cyst segmentation processes. Conclusion: This benchmarking study provides insights into the scalability of automated processes across vendor-specific imaging modalities, to guide retinal pathology diagnostics and treatment processes. © 2017 Elsevier B.V.
  • Item
    Stack generalized deep ensemble learning for retinal layer segmentation in Optical Coherence Tomography images
    (Elsevier Sp. z o.o., 2020) Anoop, B.N.; Pavan, R.; Girish, G.N.; Kothari, A.R.; Rajan, J.
    Segmentation of retinal layers is a vital step in computerized processing and the study of retinal Optical Coherence Tomography (OCT) images. However, automatic segmentation of retinal layers is challenging due to the presence of noise, widely varying reflectivity of image components, and variations in the morphology and alignment of layers in the presence of retinal diseases. In this paper, we propose a Fully Convolutional Network (FCN) termed DelNet, based on a deep ensemble learning approach, to selectively segment retinal layers from OCT scans. The proposed model is tested on the publicly available DUKE DME dataset. Comparative analysis on a benchmark dataset shows that the performance of DelNet is superior to that of other state-of-the-art methods. © 2020 Nalecz Institute of Biocybernetics and Biomedical Engineering of the Polish Academy of Sciences
  • Item
    A cascaded convolutional neural network architecture for despeckling OCT images
    (Elsevier Ltd, 2021) Anoop, B.N.; Kalmady, K.S.; Udathu, A.; Siddharth, V.; Girish, G.N.; Kothari, A.R.; Rajan, J.
    Optical Coherence Tomography (OCT) is an imaging technique widely used for medical imaging. Noise in an OCT image generally degrades its quality, obscuring clinical features and making automated segmentation suboptimal. Obtaining higher quality images requires sophisticated equipment available only in selected research settings and is expensive. Developing effective denoising methods to improve the quality of images acquired on systems currently in use has the potential to vastly improve image quality and automated quantitative analysis. Noise characteristics in images acquired from machines of different makes and models may vary. Our experiments show that no single state-of-the-art method for noise reduction performs equally well on images from various sources. Therefore, detailed analysis is required to determine the exact noise type in images acquired using different OCT machines. In this work, we studied noise characteristics in the publicly available DUKE and OPTIMA datasets to build a more efficient model for noise reduction. These datasets contain OCT images acquired using machines from different manufacturers. We further propose a patch-wise training methodology to build a system that effectively denoises OCT images. We have performed an extensive range of experiments to show that the proposed method performs better than other state-of-the-art methods. © 2021 Elsevier Ltd
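The patch-wise training idea can be illustrated with a simple overlapping-patch extractor. The patch size and stride here are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def extract_patches(img, patch=32, stride=16):
    """Slide a window over a 2-D image and collect overlapping patches,
    e.g. as training samples for a patch-wise denoising network."""
    h, w = img.shape
    patches = []
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            patches.append(img[i:i + patch, j:j + patch])
    return np.stack(patches)
```

Training on small overlapping patches rather than whole B-scans both multiplies the number of samples and keeps the network input size fixed across scanners with different image dimensions.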
  • Item
    Capsule Network–based architectures for the segmentation of sub-retinal serous fluid in optical coherence tomography images of central serous chorioretinopathy
    (Springer Science and Business Media Deutschland GmbH, 2021) Pawan, S.J.; Sankar, R.; Jain, A.; Jain, M.; Darshan, D.V.; Anoop, B.N.; Kothari, A.R.; Venkatesan, M.; Rajan, J.
    Central serous chorioretinopathy (CSCR) is a chorioretinal disorder of the eye characterized by serous detachment of the neurosensory retina at the posterior pole of the eye. CSCR results from the accumulation of subretinal fluid (SRF) due to idiopathic defects at the level of the retinal pigment epithelium (RPE) that allow serous fluid from the choriocapillaris to diffuse into the subretinal space between the RPE and neurosensory retinal layers. This condition is presently investigated by clinicians using invasive angiography or non-invasive optical coherence tomography (OCT) imaging. OCT images provide a representation of the fluid underlying the retina and, in the absence of automated segmentation tools, only a qualitative assessment is currently used to follow the progression of the disease. Automated segmentation of the SRF can prove to be extremely useful for the assessment of progression and the timely management of CSCR. In this paper, we adopt an existing architecture called SegCaps, based on the recently introduced Capsule Networks concept, for the segmentation of SRF from CSCR OCT images. Furthermore, we propose an enhancement to SegCaps, which we have termed DRIP-Caps, that utilizes the concepts of Dilation, Residual Connections, Inception Blocks, and Capsule Pooling to address the defined problem. The proposed model outperforms the benchmark UNet architecture while reducing the number of trainable parameters by 54.21%. Moreover, it reduces the computational complexity of SegCaps by reducing the number of trainable parameters by 37.85%, with competitive performance. The experiments demonstrate the generalizability of the proposed model, as evidenced by its remarkable performance even with a limited number of training samples. © 2021, International Federation for Medical and Biological Engineering.
  • Item
    Segmentation of focal cortical dysplasia lesions from magnetic resonance images using 3D convolutional neural networks
    (Elsevier Ltd, 2021) Niyas, S.; Chethana Vaisali, S.; Show, I.; Chandrika, T.G.; Vinayagamani, S.; Kesavadas, C.; Rajan, J.
    Computer-aided diagnosis using advanced Artificial Intelligence (AI) techniques has become very popular over the last few years. This work automates the segmentation of Focal Cortical Dysplasia (FCD) lesions from three-dimensional (3D) Magnetic Resonance (MR) images. FCD is a type of neuronal malformation in the brain cortex and is the leading cause of intractable epilepsy, irrespective of gender or age. Since the neuron-related abnormalities are usually resistant to drug therapy, surgical resection has been the main treatment approach for patients with intractable epilepsy. Automating the identification and segmentation of FCD is useful for neuroradiologists in pre-surgical evaluations. Convolutional Neural Networks (CNNs) have the ability to learn appropriate features from the training data without any human intervention. However, most of the state-of-the-art FCD segmentation approaches use two-dimensional (2D) CNN models despite the availability of 3D Magnetic Resonance Imaging (MRI) volumes, and hence fail to leverage the inter-slice information present in the MRI volumes. The major hurdles in adopting a 3D CNN model are the need for a large 3D dataset, big memory, and high computation cost. A deep 3D CNN segmentation model, which can extract inter-slice information and overcomes the drawbacks of conventional 3D CNN methods to an extent, is proposed in this paper. The model uses a 3D version of U-Net with residual blocks that works on shallow-depth 3D sub-volumes generated from MRI volumes. The proposed method shows superior performance over the state-of-the-art FCD segmentation methods in both qualitative and quantitative analysis. © 2021 Elsevier Ltd
  • Item
    Crossover based technique for data augmentation
    (Elsevier Ireland Ltd, 2022) Raj, R.; Mathew, J.; Kannath, S.K.; Rajan, J.
    Background and Objective: Medical image classification problems are frequently constrained by the availability of datasets. "Data augmentation" has emerged as a data enhancement and enrichment solution to the challenge of limited data. Traditionally, data augmentation techniques are based on linear, label-preserving transformations; however, recent works have demonstrated that even non-linear, non-label-preserving techniques can be unexpectedly effective. This paper proposes a non-linear data augmentation technique for the medical domain and explores its results. Methods: This paper introduces the "Crossover technique", a new data augmentation technique for Convolutional Neural Networks in medical image classification problems. Our technique synthesizes a pair of samples by applying two-point crossover to the already available training dataset, creating N new samples from N training samples. The proposed crossover-based data augmentation technique, although non-label-preserving, performed significantly better in terms of increased accuracy and reduced loss across all the tested datasets and architectures. Results: The proposed method was tested on three publicly available medical datasets with various network architectures. For the mini-MIAS database of mammograms, our method improved the accuracy by 1.47%, achieving 80.15% using the VGG-16 architecture. Our method works well for both gray-scale and RGB images: on the PH2 database for skin cancer, it improved the accuracy by 3.57%, achieving 85.71% using the VGG-19 architecture. In addition, our technique improved accuracy on the brain tumor dataset by 0.40%, achieving 97.97% using the VGG-16 architecture. Conclusion: The proposed novel crossover technique for training a Convolutional Neural Network (CNN) is painless to implement: two-point crossover is applied to two images to form new images.
The method would go a long way in tackling the challenges of limited datasets and problems of class imbalances in medical image analysis. Our code is available at https://github.com/rishiraj-cs/Crossover-augmentation © 2022
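Two-point crossover on a pair of images might look like the sketch below. The abstract does not specify the crossover axis, so this sketch assumes the crossover points cut across image rows; the authors' released code (linked above) is the authoritative implementation.

```python
import numpy as np

def two_point_crossover(img_a, img_b, rng=None):
    """Swap the band of rows between two random cut points of a pair of
    images, producing two new synthetic samples (non-label-preserving)."""
    if rng is None:
        rng = np.random.default_rng()
    h = img_a.shape[0]
    # pick two distinct cut points, sorted so p1 < p2
    p1, p2 = sorted(rng.choice(h, size=2, replace=False))
    child_a, child_b = img_a.copy(), img_b.copy()
    child_a[p1:p2], child_b[p1:p2] = img_b[p1:p2], img_a[p1:p2]
    return child_a, child_b
```

Applying this once per pair of training images yields N new samples from N originals, as the abstract describes.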
  • Item
    Stroke classification from computed tomography scans using 3D convolutional neural network
    (Elsevier Ltd, 2022) Neethi, A.S.; Niyas, S.; Kannath, S.K.; Mathew, J.; Anzar, A.M.; Rajan, J.
    Stroke is a cerebrovascular condition with significant morbidity and mortality rates that causes physical disabilities for survivors. Once the symptoms are identified, it requires a time-critical diagnosis with the help of the most commonly available imaging techniques. Computed tomography (CT) scans are used worldwide for preliminary stroke diagnosis. Identifying the stroke type, which is critical for initiating treatment, demands the expertise and experience of a radiologist. This work attempts to capture those domain skills and build a model that diagnoses stroke from CT scans. The non-contrast computed tomography (NCCT) scan of the brain comprises volumetric images, i.e., a 3D stack of image slices, so a model that targets a single 2D slice fails to address this volumetric nature. We propose a 3D fully convolutional classification model to identify stroke cases from CT images that takes into account the contextual longitudinal composition of volumetric data. We formulate a custom pre-processing module to enhance the scans and help improve classification performance. Among the significant challenges faced by 3D CNNs are the limited number of training samples and a scan distribution heavily biased in favor of normal patients. In this work, the limitations of insufficient training volume and class-imbalanced data have been addressed with the help of a strided slicing approach. A block-wise design was used to formulate the proposed network, with the initial part focusing on adjusting the dimensionality while retaining the features. Later, the accumulated feature maps are effectively learned using bundled convolutions and skip connections. The results of the proposed method were compared against 3D CNN stroke classification models on NCCT, various 3D CNN architectures on other brain imaging modalities, and 3D extensions of some of the classical CNN architectures.
The proposed method achieved an improvement of 14.28% in the F1-score over the state-of-the-art 3D CNN stroke classification model. © 2022 Elsevier Ltd
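The strided slicing idea, i.e., cutting a CT volume into overlapping shallow sub-volumes so that each scan yields many training samples, can be sketched as follows. The depth and stride values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def strided_subvolumes(volume, depth=8, stride=4):
    """Cut a CT volume of shape (slices, H, W) into overlapping
    shallow sub-volumes; a stride smaller than the depth makes the
    windows overlap, multiplying the number of training samples."""
    n = volume.shape[0]
    starts = range(0, n - depth + 1, stride)
    return np.stack([volume[i:i + depth] for i in starts])
```

Under-represented classes can be sliced with a smaller stride than over-represented ones, which is one simple way such an approach can also mitigate class imbalance.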
  • Item
    A novel deep classifier framework for automated molecular subtyping of breast carcinoma using immunohistochemistry image analysis
    (Elsevier Ltd, 2022) Mathew, T.; Niyas, S.; Johnpaul, C.I.; Kini, J.; Rajan, J.
    Breast carcinoma has various subtypes based on the genetic factors involved in the pathogenesis of the malignancy. Identifying the exact subtype and providing targeted treatment can improve the patient's survival chances. Molecular subtyping through immunohistochemistry analysis is a pathology procedure to determine the subtype of breast cancer. The existing manual procedure is tedious and involves assessing the status of four vital molecular biomarkers present in the tumor tissues. In this paper, a deep learning-based framework for automated molecular subtyping of breast cancer is proposed. Digital slide images of the four biomarkers are separately processed by the proposed framework. In the preprocessing stage, the non-informative background regions are separated from the images. The patches extracted from the foreground regions are classified into target classes using convolutional neural network models trained for this purpose. Classification results are post-processed to predict the status of all four biomarkers. The predictions for the individual biomarkers are finally consolidated as per clinical guidelines to determine the subtype of the cancer. The proposed system is evaluated on individual biomarker status prediction and patient-level subtype classification. For patient-level evaluation of the biomarkers ER, PR, Ki67, and HER2, the proposed method gives F1 scores of 1.00, 1.00, 0.90, and 0.94, respectively, whereas for molecular subtyping an F1 score of 0.89 is obtained. In both these aspects, the proposed framework has given significant results that show the effectiveness of our approach. © 2022 Elsevier Ltd
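As an illustration of the final consolidation step, the four biomarker statuses can be mapped to a surrogate molecular subtype with a simple rule table. The rules below follow the widely used St. Gallen surrogate definitions and are an assumption for illustration only; the paper's exact clinical guidelines may differ.

```python
def molecular_subtype(er, pr, her2, ki67_high):
    """Map four boolean biomarker statuses to a surrogate subtype
    (St. Gallen-style rules, assumed here for illustration)."""
    if her2:
        # HER2-positive: subtype depends on hormone-receptor status
        return "Luminal B (HER2+)" if (er or pr) else "HER2-enriched"
    if er or pr:
        # Hormone-receptor-positive, HER2-negative: split on Ki67
        return "Luminal B" if ki67_high else "Luminal A"
    return "Triple-negative"
```

The deep classifier's per-biomarker predictions would feed such a rule table to produce the patient-level subtype reported in the abstract.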