Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
14 results
Search Results
Item Depthwise Separable Convolutional Neural Network Model for Intra-Retinal Cyst Segmentation (Institute of Electrical and Electronics Engineers Inc., 2019)
Girish, G.N.; Saikumar, B.; Roychowdhury, S.; Kothari, A.R.; Rajan, J.
Intra-retinal cysts (IRCs) are significant in detecting several ocular and retinal pathologies. Segmentation and quantification of IRCs from optical coherence tomography (OCT) scans is a challenging task due to the presence of speckle noise and scan intensity variations across vendors. This work proposes a convolutional neural network (CNN) model with an encoder-decoder architecture for IRC segmentation across cross-vendor OCT scans. Since deep CNN models have high computational complexity owing to their large number of parameters, the proposed use of depthwise separable convolutional filters aids model generalizability and prevents over-fitting. The swish activation function is also employed to counter the vanishing gradient problem. The OPTIMA cyst segmentation challenge (OCSC) dataset, comprising scans from four different vendor OCT devices, is used to evaluate the proposed model. Our model achieves a mean Dice score of 0.74 and a mean recall/precision of 0.72/0.82 across imaging vendors, outperforming existing algorithms on the OCSC dataset. © 2019 IEEE.

Item Cross Task Temporal Consistency for Semi-supervised Medical Image Segmentation (Springer Science and Business Media Deutschland GmbH, 2022)
Jeevan, G.; Pawan, S.J.; Rajan, J.
Semi-supervised deep learning for medical image segmentation is an intriguing area of research, given the requirement for an adequate amount of labeled data. In this context, we propose Cross Task Temporal Consistency, a novel Semi-Supervised Learning framework that combines a self-ensembled learning strategy with cross-consistency constraints derived from the implicit perturbations between the incongruous tasks of multi-headed architectures.
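As an aside on the depthwise separable entry above: the parameter saving from factoring a standard convolution into a depthwise filter plus a 1x1 pointwise projection, and the swish activation used alongside it, can be sketched as follows (the layer sizes are illustrative, not taken from the paper):

```python
import math

def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel + 1x1 pointwise projection."""
    return k * k * c_in + c_in * c_out

def swish(x):
    """Swish activation: x * sigmoid(x)."""
    return x / (1.0 + math.exp(-x))

# Illustrative 3x3 layer mapping 64 -> 128 channels
print(conv_params(3, 64, 128))                 # 73728
print(depthwise_separable_params(3, 64, 128))  # 8768
```

For this hypothetical layer, the separable factorization uses roughly 8x fewer parameters, which is the generalization/over-fitting argument the abstract makes.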
More specifically, the Signed Distance Map output of a teacher model is transformed into an approximate segmentation map, which acts as a pseudo target for the student model. Simultaneously, the teacher's segmentation output is used as the objective for the student's Signed Distance Map-derived segmentation output. The proposed framework is intuitively simple and can be plugged into existing segmentation architectures with minimal computational overhead. Our work focuses on improving segmentation performance at very low labeled-data proportions and demonstrates marked superiority in performance and stability over existing SSL techniques, as evidenced by extensive evaluations on two standard datasets: ACDC and LA. © 2022, Springer Nature Switzerland AG.

Item Segmentation of intra-retinal cysts from optical coherence tomography images using a fully convolutional neural network model (Institute of Electrical and Electronics Engineers Inc., 2019)
Girish, G.N.; Thakur, B.; Chowdhury, S.R.; Kothari, A.R.; Rajan, J.
Optical coherence tomography (OCT) is an imaging modality used extensively for ophthalmic diagnosis, near-histological visualization, and quantification of retinal abnormalities such as cysts, exudates, and retinal layer disorganization. Intra-retinal cysts (IRCs) occur in several macular disorders, such as diabetic macular edema, retinal vascular disorders, age-related macular degeneration, and inflammatory disorders. Automated segmentation of IRCs poses challenges owing to variations in acquisition-system scan intensities, speckle noise, and imaging artifacts. Several segmentation methods proposed in the literature for IRC segmentation on vendor-specific OCT images lack generalizability across imaging systems. In this paper, we propose a fully convolutional network (FCN) model for vendor-independent IRC segmentation.
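The Signed-Distance-Map-to-segmentation transform mentioned in the Cross Task Temporal Consistency entry is commonly approximated with a steep sigmoid; a minimal sketch, assuming negative distances inside the object and a sharpness constant k that is illustrative rather than the paper's value:

```python
import math

def sdm_to_soft_mask(sdm_values, k=1500.0):
    """Approximate a segmentation probability map from signed distances
    (negative inside the object) via a steep sigmoid, sigmoid(-k * d)."""
    probs = []
    for d in sdm_values:
        z = max(min(-k * d, 500.0), -500.0)  # clamp to avoid overflow in exp
        probs.append(1.0 / (1.0 + math.exp(-z)))
    return probs

# Points inside (d < 0) map near 1, outside (d > 0) near 0, boundary to 0.5
print(sdm_to_soft_mask([-0.01, 0.0, 0.01]))
```

The steep slope makes the soft mask nearly binary while remaining differentiable, which is why this family of transforms is used to turn a distance-map head into a pseudo segmentation target.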
The proposed method counteracts image-noise variability by training FCN models on OCT sub-images from the OPTIMA cyst segmentation challenge dataset (with images from four different vendors, namely Cirrus, Nidek, Spectralis, and Topcon). Further, optimal data augmentation and model hyperparameterization are shown to prevent over-fitting for IRC area segmentation. The proposed method is evaluated on the test dataset, achieving a recall/precision of 0.66/0.79 across imaging vendors. With a Dice rate of 0.71 across vendors, the proposed method outperforms the published algorithms in the OPTIMA cyst segmentation challenge. © 2013 IEEE.

Item Automatic detection and localization of Focal Cortical Dysplasia lesions in MRI using fully convolutional neural network (Elsevier Ltd, 2019)
Bijay Dev, K.M.; Pawan, P.S.; Niyas, S.; Vinayagamani, S.; Kesavadas, C.; Rajan, J.
Focal cortical dysplasia (FCD) is the leading cause of drug-resistant epilepsy in both children and adults. At present, the only therapeutic approach for patients with drug-resistant epilepsy is surgery. Hence, quantification of FCD via non-invasive imaging techniques helps physicians decide on surgical interventions. Properties such as non-invasiveness and the capability to produce high-resolution images make magnetic resonance imaging an ideal tool for detecting FCD to an extent. FCD lesions vary in size, shape, and location across patients, making manual detection time-consuming and sensitive to the experience of the observer. Automatic segmentation of FCD lesions is challenging due to differences in signal strength between images acquired with different machines, noise, and other distortions such as motion artifacts. Most methods proposed in the literature use conventional machine learning and image processing techniques, whose accuracy relies on the extracted features.
Hence, feature extraction must be done precisely, which requires human expertise. The ability to learn appropriate features/representations from the training data without human intervention makes the convolutional neural network (CNN) a suitable method for addressing these drawbacks. As far as we are aware, this work is the first to use a CNN-based model to solve the aforementioned problem using only MRI FLAIR images. We customized the popular U-Net architecture and trained the proposed model from scratch (using MRI images acquired with 1.5T and 3T scanners). The FCD detection rate (recall) of the proposed model is 82.5% (33/40 patients detected correctly). © 2019

Item A cascaded convolutional neural network architecture for despeckling OCT images (Elsevier Ltd, 2021)
Anoop, B.N.; Kalmady, K.S.; Udathu, A.; Siddharth, V.; Girish, G.N.; Kothari, A.R.; Rajan, J.
Optical Coherence Tomography (OCT) is an imaging technique widely used in medical imaging. Noise in an OCT image generally degrades its quality, obscuring clinical features and making automated segmentation suboptimal. Obtaining higher-quality images requires sophisticated equipment and technology, available only in selected research settings and expensive to acquire. Developing effective denoising methods for images acquired on systems currently in use therefore has the potential to vastly improve image quality and automated quantitative analysis. Noise characteristics in images acquired from machines of different makes and models may vary. Our experiments show that no single state-of-the-art noise-reduction method performs equally well on images from various sources. Therefore, detailed analysis is required to determine the exact noise type in images acquired with different OCT machines. In this work, we studied noise characteristics in the publicly available DUKE and OPTIMA datasets to build a more efficient model for noise reduction.
These datasets contain OCT images acquired using machines from different manufacturers. We further propose a patch-wise training methodology to build a system that effectively denoises OCT images. We have performed an extensive range of experiments showing that the proposed method outperforms other state-of-the-art methods. © 2021 Elsevier Ltd

Item Multi-Res-Attention UNet: A CNN Model for the Segmentation of Focal Cortical Dysplasia Lesions from Magnetic Resonance Images (Institute of Electrical and Electronics Engineers Inc., 2021)
Thomas, E.; Pawan, S.J.; Kumar, S.; Horo, A.; Niyas, S.; Vinayagamani, S.; Kesavadas, C.; Rajan, J.
In this work, we focus on the segmentation of Focal Cortical Dysplasia (FCD) regions from MRI images. FCD is a congenital malformation of brain development considered the most common cause of intractable epilepsy in adults and children. To our knowledge, the latest work on automatic segmentation of FCD used a fully convolutional neural network (FCN) model based on UNet. While the model undoubtedly outperformed conventional image processing techniques by a considerable margin, it suffers from several pitfalls. First, it does not account for the large semantic gap in feature maps passed from the encoder to the decoder through the long skip connections. Second, it fails to leverage the salient features that represent complex FCD lesions and to suppress irrelevant features in the input sample. We propose Multi-Res-Attention UNet, a novel hybrid skip-connection-based FCN architecture that addresses these drawbacks. We trained it from scratch for the detection of FCD from 3T MRI 3D FLAIR images and conducted 5-fold cross-validation to evaluate the model. An FCD detection rate (recall) of 92% was achieved in patient-wise analysis.
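The patch-wise training methodology mentioned in the OCT despeckling entry above can be sketched with a simple overlapping sliding window (the patch size and stride here are illustrative assumptions, not the paper's settings):

```python
def extract_patches(image, patch_size, stride):
    """Slide a patch_size x patch_size window over a 2-D image
    (list of rows) with the given stride, collecting training patches."""
    h, w = len(image), len(image[0])
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append([row[x:x + patch_size]
                            for row in image[y:y + patch_size]])
    return patches

# A 4x4 image split into non-overlapping 2x2 patches yields 4 patches
img = [[r * 4 + c for c in range(4)] for r in range(4)]
print(len(extract_patches(img, 2, 2)))  # 4
```

Training on many small patches rather than whole B-scans multiplies the effective number of training samples, which is the usual motivation for patch-wise denoising pipelines.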
© 2013 IEEE.

Item An empirical study of the impact of masks on face recognition (Elsevier Ltd, 2022)
Jeevan, G.; Zacharias, G.C.; Nair, M.S.; Rajan, J.
Face recognition has a wide range of applications, such as video surveillance, security, and access control. Over the past decade, the field of face recognition has matured and grown in step with the latest advancements in technology, particularly deep learning. Convolutional Neural Networks have surpassed human accuracy in face recognition on popular evaluation tests such as LFW. However, most existing models evaluate their performance assuming the availability of full facial information. The COVID-19 pandemic has challenged this assumption, and with it the performance of existing methods and leading-edge algorithms in face recognition, in the wake of an explosive increase in the number of people wearing face masks. The reduced amount of facial information available to a recognition system from a masked face impacts its discrimination ability. In this context, we design and conduct a series of experiments comparing the masked-face recognition performance of CNN architectures available in the literature, and we explore possible alterations in loss functions, architectures, and training methods that can enable existing methods to fully extract and leverage the limited facial information available in a masked face. We evaluate existing CNN-based face recognition systems against datasets composed entirely of masked faces, in contrast to standard evaluations where masked or occluded faces are a rare occurrence. The study also presents evidence of an increased impact of network depth on performance compared to standard face recognition. Our observations indicate that substantial performance gains can be achieved by introducing masked faces into the training set.
The study also found that various parameter settings suitable for standard face recognition are not ideal for masked face recognition. Through empirical analysis, we derived new recommended values for these parameters and settings. © 2021 Elsevier Ltd

Item Crossover based technique for data augmentation (Elsevier Ireland Ltd, 2022)
Raj, R.; Mathew, J.; Kannath, S.K.; Rajan, J.
Background and Objective: Medical image classification problems are frequently constrained by the availability of datasets. "Data augmentation" has emerged as a data-enhancement and data-enrichment solution to the challenge of limited data. Traditionally, data augmentation techniques are based on linear, label-preserving transformations; however, recent works have demonstrated that even non-linear, non-label-preserving techniques can be unexpectedly effective. This paper proposes a non-linear data augmentation technique for the medical domain and explores its results. Methods: This paper introduces the "Crossover technique", a new data augmentation technique for Convolutional Neural Networks in medical image classification problems. Our technique synthesizes a pair of samples by applying two-point crossover to the available training dataset, creating N new samples from N training samples. The proposed crossover-based data augmentation technique, although non-label-preserving, performed significantly better in terms of increased accuracy and reduced loss for all tested datasets over varied architectures. Results: The proposed method was tested on three publicly available medical datasets with various network architectures. For the mini-MIAS database of mammograms, our method improved accuracy by 1.47%, achieving 80.15% with the VGG-16 architecture. Our method works for both gray-scale and RGB images: on the PH2 database for skin cancer, it improved accuracy by 3.57%, achieving 85.71% with the VGG-19 architecture.
In addition, our technique improved accuracy on the brain tumor dataset by 0.40%, achieving 97.97% with the VGG-16 architecture. Conclusion: The proposed novel crossover technique for training Convolutional Neural Networks (CNNs) is painless to implement, applying two-point crossover to two images to form new images. The method would go a long way toward tackling the challenges of limited datasets and class imbalance in medical image analysis. Our code is available at https://github.com/rishiraj-cs/Crossover-augmentation © 2022

Item Stroke classification from computed tomography scans using 3D convolutional neural network (Elsevier Ltd, 2022)
Neethi, A.S.; Niyas, S.; Kannath, S.K.; Mathew, J.; Anzar, A.M.; Rajan, J.
Stroke is a cerebrovascular condition with significant morbidity and mortality rates that causes physical disabilities in survivors. Once symptoms are identified, stroke requires time-critical diagnosis using the most commonly available imaging techniques. Computed tomography (CT) scans are used worldwide for preliminary stroke diagnosis. Identifying the stroke type, which is critical for initiating treatment, demands the expertise and experience of a radiologist. This work attempts to capture those domain skills and build a model that diagnoses stroke from CT scans. A non-contrast computed tomography (NCCT) scan of the brain comprises volumetric images, or a 3D stack of image slices, so a model targeting a single 2D slice may fail to address the volumetric nature of the data. We propose a 3D fully convolutional classification model to identify stroke cases from CT images that takes into account the contextual longitudinal composition of volumetric data. We formulate a custom pre-processing module to enhance the scans and help improve classification performance.
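The two-point crossover described in the data augmentation entry above can be sketched on flattened images; a minimal sketch, assuming two equal-length 1-D pixel sequences and randomly chosen cut points:

```python
import random

def two_point_crossover(img_a, img_b, rng=None):
    """Swap the segment between two random cut points of two equal-length
    flattened images, producing two new (non-label-preserving) samples."""
    assert len(img_a) == len(img_b)
    rng = rng or random.Random()
    i, j = sorted(rng.sample(range(len(img_a) + 1), 2))
    child_a = img_a[:i] + img_b[i:j] + img_a[j:]
    child_b = img_b[:i] + img_a[i:j] + img_b[j:]
    return child_a, child_b

a = [0] * 8
b = [1] * 8
c1, c2 = two_point_crossover(a, b, random.Random(0))
# The pixel multiset of the pair is preserved: every pixel of a and b
# appears exactly once across the two children.
print(sorted(c1 + c2) == sorted(a + b))  # True
```

Whichever cut points are drawn, the two children together contain exactly the pixels of the two parents, which matches the abstract's claim of creating N new samples from N training samples when applied pairwise over the dataset.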
Significant challenges for 3D CNNs include the small number of training samples and scan counts that are biased in favor of normal patients. In this work, the limitations of insufficient training data and class imbalance are addressed with the help of a strided slicing approach. A block-wise design was used to formulate the proposed network, with the initial part focusing on adjusting dimensionality while retaining features. The accumulated feature maps were then effectively learned using bundled convolutions and skip connections. The results of the proposed method were compared against 3D CNN stroke classification models on NCCT, various 3D CNN architectures on other brain imaging modalities, and 3D extensions of some classical CNN architectures. The proposed method achieved an improvement of 14.28% in F1-score over the state-of-the-art 3D CNN stroke classification model. © 2022 Elsevier Ltd

Item Deep learning-based automated mitosis detection in histopathology images for breast cancer grading (John Wiley and Sons Inc, 2022)
Mathew, T.; Ajith, B.; Kini, J.; Rajan, J.
Cancer grade is an indicator of the aggressiveness of a cancer and is used for prognosis and treatment decisions. Conventionally, cancer grading is performed manually by experienced pathologists via microscopic examination of pathology slides. Among the three factors involved in breast cancer grading (mitosis count, nuclear atypia, and tubule formation), mitotic cell counting is the most challenging task for pathologists. It is possible to automate this task by applying computational algorithms to pathology slide images. The lack of sufficiently large datasets and the class imbalance between mitotic and non-mitotic cells in slide images are the two major challenges in developing effective deep learning-based methods for mitosis detection. In this paper, we propose a new approach, and a method based on it, to address these challenges.
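The strided slicing approach named in the stroke classification entry above is not detailed in the abstract; one plausible reading, sketched under the assumption that a volume of D slices is split into `stride` thinner sub-volumes by sampling every stride-th slice at each offset (multiplying samples while preserving longitudinal coverage):

```python
def strided_subvolumes(volume, stride):
    """Split a 3D stack (list of slices) into `stride` thinner sub-volumes
    by taking every stride-th slice starting at each offset."""
    return [volume[offset::stride] for offset in range(stride)]

# A 10-slice volume with stride 2 yields two 5-slice sub-volumes
vol = list(range(10))
print(strided_subvolumes(vol, 2))  # [[0, 2, 4, 6, 8], [1, 3, 5, 7, 9]]
```

Each sub-volume spans the full longitudinal extent at reduced resolution, so a stride of s turns one training volume into s samples without cropping away context.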
The high training-data requirement of the advanced deep neural network is met by combining two datasets from different sources after a color-normalization process. Class imbalance is addressed by augmenting the mitotic samples in a context-preserving manner. Finally, a customized convolutional neural network classifier classifies the candidate cells into the target classes. We used the publicly available MITOS-ATYPIA and MITOS datasets for the experiments. Our method outperforms most recent methods based on independent datasets while offering adaptability to combinations of datasets from different sources. © 2022 Wiley Periodicals LLC.
