Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
8 results
Search Results
Item
Segmentation of intra-retinal cysts from optical coherence tomography images using a fully convolutional neural network model (Institute of Electrical and Electronics Engineers Inc., 2019)
Girish, G.N.; Thakur, B.; Chowdhury, S.R.; Kothari, A.R.; Rajan, J.
Optical coherence tomography (OCT) is an imaging modality used extensively for ophthalmic diagnosis, near-histological visualization, and quantification of retinal abnormalities such as cysts, exudates, and retinal layer disorganization. Intra-retinal cysts (IRCs) occur in several macular disorders, such as diabetic macular edema, retinal vascular disorders, age-related macular degeneration, and inflammatory disorders. Automated segmentation of IRCs poses challenges owing to variations in acquisition-system scan intensities, speckle noise, and imaging artifacts. Several segmentation methods have been proposed in the literature for IRC segmentation on vendor-specific OCT images, but they lack generalizability across imaging systems. In this paper, we propose a fully convolutional network (FCN) model for vendor-independent IRC segmentation. The proposed method counteracts image noise variabilities and trains FCN models on OCT sub-images from the OPTIMA cyst segmentation challenge dataset (with images from four vendors: Cirrus, Nidek, Spectralis, and Topcon). Further, optimal data augmentation and model hyperparametrization are shown to prevent over-fitting for IRC area segmentation. The proposed method is evaluated on the test dataset with a recall/precision rate of 0.66/0.79 across imaging vendors. With a Dice rate of 0.71 across vendors, the proposed method outperforms the published algorithms in the OPTIMA cyst segmentation challenge.
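The Dice coefficient used as the evaluation metric above can be computed directly from binary masks; a minimal NumPy sketch with hypothetical toy masks (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity between two binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection) / (pred.sum() + gt.sum() + eps)

# Toy 4x4 cyst masks: prediction vs. ground truth.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt = np.array([[0, 1, 1, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
print(round(dice_coefficient(pred, gt), 3))  # 2*3 / (4+3) ≈ 0.857
```

A Dice rate of 1.0 would indicate a perfect overlap between predicted and annotated cyst regions; 0.71 across vendors means roughly 71% agreement by this measure.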
© 2013 IEEE.
Item
Automatic detection and localization of Focal Cortical Dysplasia lesions in MRI using fully convolutional neural network (Elsevier Ltd, 2019)
Bijay Dev, K.M.; Pawan, P.S.; Niyas, S.; Vinayagamani, S.; Kesavadas, C.; Rajan, J.
Focal cortical dysplasia (FCD) is the leading cause of drug-resistant epilepsy in both children and adults. At present, the only therapeutic approach in patients with drug-resistant epilepsy is surgery. Hence, the quantification of FCD via non-invasive imaging techniques helps physicians decide on surgical interventions. Properties such as non-invasiveness and the capability to produce high-resolution images make magnetic resonance imaging an ideal tool for detecting FCD to an extent. FCD lesions vary in size, shape, and location across patients, which makes manual detection time-consuming and sensitive to the experience of the observer. Automatic segmentation of FCD lesions is challenging due to differences in signal strength between images acquired with different machines, noise, and other kinds of distortions such as motion artifacts. Most of the methods proposed in the literature use conventional machine learning and image processing techniques, whose accuracy relies on the trained features. Hence, feature extraction must be done precisely, which requires human expertise. The ability to learn appropriate features/representations from the training data without any human intervention makes the convolutional neural network (CNN) a suitable method for addressing these drawbacks. As far as we are aware, this work is the first to use a CNN-based model to solve the aforementioned problem using only MRI FLAIR images. We customized the popular U-Net architecture and trained the proposed model from scratch (using MRI images acquired with 1.5T and 3T scanners). The FCD detection rate (recall) of the proposed model is 82.5% (33/40 patients detected correctly).
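The patient-level detection rate (recall) reported above can be sketched as follows; the any-overlap hit criterion and all names here are illustrative assumptions, not the authors' evaluation protocol:

```python
import numpy as np

def detection_rate(pred_masks, gt_masks) -> float:
    """Fraction of patients whose lesion is detected: any overlap
    between predicted and ground-truth masks counts as a hit."""
    hits = sum(
        int(np.logical_and(p > 0, g > 0).any())
        for p, g in zip(pred_masks, gt_masks)
    )
    return hits / len(gt_masks)

# Toy cohort of 4 patients; one prediction misses the lesion entirely.
gt = [np.ones((8, 8)) for _ in range(4)]
preds = [np.ones((8, 8)), np.ones((8, 8)), np.ones((8, 8)), np.zeros((8, 8))]
print(detection_rate(preds, gt))  # 0.75
```

Under this counting, the reported 82.5% corresponds to 33 of 40 patients having at least one correctly flagged lesion.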
© 2019
Item
Segmentation of focal cortical dysplasia lesions from magnetic resonance images using 3D convolutional neural networks (Elsevier Ltd, 2021)
Niyas, S.; Chethana Vaisali, S.; Show, I.; Chandrika, T.G.; Vinayagamani, S.; Kesavadas, C.; Rajan, J.
Computer-aided diagnosis using advanced Artificial Intelligence (AI) techniques has become very popular over the last few years. This work automates the segmentation of Focal Cortical Dysplasia (FCD) lesions from three-dimensional (3D) Magnetic Resonance (MR) images. FCD is a type of neuronal malformation in the brain cortex and is the leading cause of intractable epilepsy, irrespective of gender or age. Since the neuron-related abnormalities are usually resistant to drug therapy, surgical resection has been the main treatment approach for patients with intractable epilepsy. Automating the identification and segmentation of FCD is useful for neuroradiologists in pre-surgical evaluations. Convolutional Neural Networks (CNNs) can learn appropriate features from the training data without any human intervention. However, most state-of-the-art FCD segmentation approaches use two-dimensional (2D) CNN models despite the availability of 3D Magnetic Resonance Imaging (MRI) volumes, and hence fail to leverage the inter-slice information present in those volumes. The major hurdles in adopting a 3D CNN model are the need for a large 3D dataset, large memory, and high computation cost. This paper proposes a deep 3D CNN segmentation model that extracts inter-slice information and overcomes the drawbacks of conventional 3D CNN methods to an extent. The model uses a 3D version of U-Net with residual blocks that works on shallow-depth 3D sub-volumes generated from MRI volumes. The proposed method shows superior performance over state-of-the-art FCD segmentation methods in both qualitative and quantitative analysis.
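The shallow-depth 3D sub-volumes that such a model trains on can be generated by slicing an MRI volume along its depth axis; a minimal NumPy sketch under assumed shapes (the sub-volume depth and stride used in the paper may differ):

```python
import numpy as np

def extract_subvolumes(volume: np.ndarray, depth: int = 8, stride: int = 8):
    """Split a (D, H, W) volume into shallow (depth, H, W) sub-volumes,
    stepping `stride` slices at a time along the depth axis."""
    d_total = volume.shape[0]
    return [
        volume[start:start + depth]
        for start in range(0, d_total - depth + 1, stride)
    ]

# Toy MRI volume: 32 slices of 16x16 voxels -> four 8-slice sub-volumes.
vol = np.random.rand(32, 16, 16)
subs = extract_subvolumes(vol, depth=8, stride=8)
print(len(subs), subs[0].shape)  # 4 (8, 16, 16)
```

Training on such sub-volumes preserves inter-slice context while keeping memory and computation far below what full-volume 3D CNNs require.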
© 2021 Elsevier Ltd
Item
An empirical study of the impact of masks on face recognition (Elsevier Ltd, 2022)
Jeevan, G.; Zacharias, G.C.; Nair, M.S.; Rajan, J.
Face recognition has a wide range of applications, such as video surveillance, security, and access control. Over the past decade, the field of face recognition has matured and grown in step with the latest advancements in technology, particularly deep learning. Convolutional Neural Networks have surpassed human accuracy in face recognition on popular evaluation tests such as LFW. However, most existing models evaluate their performance under the assumption that full facial information is available. The COVID-19 pandemic has challenged this assumption, and with it the performance of existing methods and leading-edge algorithms in the field of face recognition, in the wake of an explosive increase in the number of people wearing face masks. The reduced amount of facial information available to a recognition system from a masked face impairs its discrimination ability. In this context, we design and conduct a series of experiments comparing the masked-face recognition performance of CNN architectures available in the literature and exploring possible alterations in loss functions, architectures, and training methods that can enable existing methods to fully extract and leverage the limited facial information available in a masked face. We evaluate existing CNN-based face recognition systems on datasets composed entirely of masked faces, in contrast to standard evaluations, where masked or occluded faces are a rare occurrence. The study also presents evidence that network depth has a greater impact on performance than in standard face recognition. Our observations indicate that substantial performance gains can be achieved by introducing masked faces into the training set.
The study also found that various parameter settings deemed suitable for standard face recognition are not ideal for masked face recognition. Through empirical analysis, we derived new recommended values for these parameters and settings.
© 2021 Elsevier Ltd
Item
Crossover based technique for data augmentation (Elsevier Ireland Ltd, 2022)
Raj, R.; Mathew, J.; Kannath, S.K.; Rajan, J.
Background and Objective: Medical image classification problems are frequently constrained by the availability of datasets. "Data augmentation" has emerged as a data enhancement and enrichment solution to the challenge of limited data. Traditionally, data augmentation techniques are based on linear, label-preserving transformations; however, recent works have demonstrated that even non-linear, non-label-preserving techniques can be unexpectedly effective. This paper proposes a non-linear data augmentation technique for the medical domain and explores its results. Methods: This paper introduces the "Crossover technique", a new data augmentation technique for Convolutional Neural Networks in medical image classification problems. Our technique synthesizes a pair of samples by applying two-point crossover to the already available training dataset. With this technique, we create N new samples from N training samples. The proposed crossover-based data augmentation technique, although non-label-preserving, performed significantly better in terms of increased accuracy and reduced loss for all the tested datasets over varied architectures. Results: The proposed method was tested on three publicly available medical datasets with various network architectures. For the mini-MIAS database of mammograms, our method improved the accuracy by 1.47%, achieving 80.15% with the VGG-16 architecture. Our method works for both grayscale and RGB images: on the PH2 database for skin cancer, it improved the accuracy by 3.57%, achieving 85.71% with the VGG-19 architecture.
In addition, our technique improved accuracy on the brain tumor dataset by 0.40%, achieving 97.97% with the VGG-16 architecture. Conclusion: The proposed novel crossover technique for training the Convolutional Neural Network (CNN) is simple to implement: two-point crossover is applied to two images to form new images. The method would go a long way toward tackling the challenges of limited datasets and class imbalance in medical image analysis. Our code is available at https://github.com/rishiraj-cs/Crossover-augmentation
© 2022
Item
Stroke classification from computed tomography scans using 3D convolutional neural network (Elsevier Ltd, 2022)
Neethi, A.S.; Niyas, S.; Kannath, S.K.; Mathew, J.; Anzar, A.M.; Rajan, J.
Stroke is a cerebrovascular condition with significant morbidity and mortality and causes physical disabilities for survivors. Once the symptoms are identified, it requires a time-critical diagnosis with the help of the most commonly available imaging techniques. Computed tomography (CT) scans are used worldwide for preliminary stroke diagnosis. Identifying the stroke type, which is critical for initiating treatment, demands the expertise and experience of a radiologist. This work attempts to capture those domain skills and build a model that diagnoses stroke from CT scans. A non-contrast computed tomography (NCCT) scan of the brain comprises volumetric images, i.e., a 3D stack of image slices, so a model that targets a single 2D slice fails to address this volumetric nature. We propose a 3D fully convolutional classification model to identify stroke cases from CT images that takes into account the contextual longitudinal composition of volumetric data. We formulate a custom pre-processing module to enhance the scans and help improve classification performance.
Significant challenges for 3D CNNs are the limited number of training samples and the fact that the available scans are usually biased in favor of normal patients. In this work, the limitations of insufficient training volumes and class-imbalanced data have been addressed with the help of a strided slicing approach. A block-wise design was used to formulate the proposed network, with the initial part focusing on adjusting the dimensionality while retaining the features. The accumulated feature maps were then effectively learned using bundled convolutions and skip connections. The results of the proposed method were compared against 3D CNN stroke classification models on NCCT, various 3D CNN architectures on other brain imaging modalities, and 3D extensions of some classical CNN architectures. The proposed method achieved an improvement of 14.28% in the F1-score over the state-of-the-art 3D CNN stroke classification model.
© 2022 Elsevier Ltd
Item
Forecasting Land-Use and Land-Cover Change Using Hybrid CNN-LSTM Model (Institute of Electrical and Electronics Engineers Inc., 2024)
Varma, B.; Naik, N.; Chandrasekaran, K.; Venkatesan, M.; Rajan, J.
Land-use and land-cover (LULC) information helps analyze future trends and is essential for environmental management and sustainable planning. Time-series satellite images are employed in this study to forecast changes in LULC. Deep-learning (DL) frameworks have been widely used for modeling dynamic LULC changes at the regional level; however, the accuracy of existing prediction models needs improvement. This letter proposes an integrated convolutional neural network (CNN) and long short-term memory network (LSTM), known as a hybrid CNN-LSTM model, to address the fine-scale LULC prediction requirement. The efficiency of the proposed approach was examined using LULC data for the Dakshina Kannada District of Karnataka State, India.
The proposed model achieved an overall accuracy of 95.11% and a kappa coefficient of 0.92 against the ground-truth data for 2014. The model's predictions for 2035, based on data from 2005 to 2014, revealed that urbanization exhibited a pattern of rapid expansion and growth. The integrated CNN-LSTM model extracted spatial and temporal features to effectively predict LULC changes. Infrastructure development, population density, and enhanced economic activity were the major driving factors of LULC change in the study region. Robust LULC change forecasting will strengthen LULC evaluations, aid in understanding complex land-use systems, and empower decision-makers to formulate effective land management strategies in the coming years.
© 2004-2012 IEEE.
Item
A Dual-Stage Semi-Supervised Pre-Training Approach for Medical Image Segmentation (Institute of Electrical and Electronics Engineers Inc., 2024)
Aralikatti, R.C.; Pawan, S.J.; Rajan, J.
Deep neural networks have played a vital role in developing automated methods for medical image segmentation. However, their reliance on labeled data impedes practicability. Semi-supervised learning is gaining attention for its intrinsic ability to extract valuable information from labeled and unlabeled data with improved performance. Recently, consistency regularization methods have gained interest due to their efficient learning procedures. They are, however, confined to either data-level or network-level perturbations, negating the benefit of having both forms in a single framework. In light of this, we ask an intriguing but unexplored question: can we have both network-level and data-level perturbation in a semi-supervised framework? To this end, we present a holistic approach that integrates data-level perturbation in the model pre-training stage, followed by implicit network-level perturbation in the fine-tuning stage.
Furthermore, we incorporate manifold learning paradigms throughout training to facilitate the formation of robust data representations by ensuring local and global semantic affinities, adhering to the theory of consensus. Notably, this may be the first attempt in semi-supervised medical image segmentation to combine data- and network-level perturbation with a model pre-training strategy. We extensively validated the efficacy of the proposed framework on three benchmark datasets, namely the Automated Cardiac Diagnosis Challenge (ACDC), ISIC-2018, and Left Atrial (LA) Segmentation Challenge datasets, under severely low-sampled labeled data. Notably, with ACDC (4%), ISIC-2018 (5%), and LA (6%) labeled cases, the proposed method outperforms the second-best method by 2.95%, 1.31%, and 0.71%, respectively, in the Dice Similarity Metric.
© 2023 IEEE.
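The data-level perturbation idea underlying consistency regularization — predictions on an input and a perturbed copy of it should agree — reduces to a simple penalty term; a toy NumPy sketch with a stand-in linear "model" (purely illustrative, not the networks or loss used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Stand-in for a segmentation network: linear map + sigmoid."""
    return 1.0 / (1.0 + np.exp(-x @ w))

def consistency_loss(x: np.ndarray, w: np.ndarray, noise_std: float = 0.1) -> float:
    """Mean-squared disagreement between predictions on x and on a
    noise-perturbed copy of x (a data-level perturbation)."""
    x_perturbed = x + rng.normal(0.0, noise_std, size=x.shape)
    return float(np.mean((model(x, w) - model(x_perturbed, w)) ** 2))

x = rng.normal(size=(16, 4))   # a batch of unlabeled inputs
w = rng.normal(size=(4, 1))
print(consistency_loss(x, w) >= 0.0)  # True; driven toward 0 in training
```

Minimizing this term on unlabeled data is what lets such frameworks learn from images without annotations; the paper's contribution is pairing this data-level signal (in pre-training) with network-level perturbation (in fine-tuning).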
