Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
Search Results (9 items)
Item: Segmentation of focal cortical dysplasia lesions from magnetic resonance images using 3D convolutional neural networks (Elsevier Ltd, 2021)
Niyas, S.; Chethana Vaisali, S.; Show, I.; Chandrika, T.G.; Vinayagamani, S.; Kesavadas, C.; Rajan, J.
Computer-aided diagnosis using advanced Artificial Intelligence (AI) techniques has become popular over the last few years. This work automates the segmentation of Focal Cortical Dysplasia (FCD) lesions from three-dimensional (3D) Magnetic Resonance (MR) images. FCD is a type of neuronal malformation in the brain cortex and is the leading cause of intractable epilepsy, irrespective of gender or age. Since neuron-related abnormalities are usually resistant to drug therapy, surgical resection has been the main treatment approach for patients with intractable epilepsy. Automating the identification and segmentation of FCD is useful for neuroradiologists in pre-surgical evaluations. Convolutional Neural Networks (CNNs) can learn appropriate features from the training data without human intervention. However, most state-of-the-art FCD segmentation approaches use two-dimensional (2D) CNN models despite the availability of 3D Magnetic Resonance Imaging (MRI) volumes, and hence fail to leverage the inter-slice information present in those volumes. The major hurdles in adopting a 3D CNN model are the need for a large 3D dataset, large memory, and high computation cost. This paper proposes a deep 3D CNN segmentation model that extracts inter-slice information and overcomes the drawbacks of conventional 3D CNN methods to an extent. The model uses a 3D version of U-Net with residual blocks that works on shallow-depth 3D sub-volumes generated from MRI volumes. The proposed method shows superior performance over state-of-the-art FCD segmentation methods in both qualitative and quantitative analyses.
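The sub-volume generation step this abstract describes can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the sub-volume depth of 8 slices and the zero-padding of the final block are assumptions made here for the example.

```python
import numpy as np

def extract_subvolumes(volume, depth=8):
    """Split a 3D MRI volume (slices, H, W) into shallow-depth 3D
    sub-volumes along the slice axis, zero-padding the tail so the
    slice count divides evenly (depth=8 is an assumed value)."""
    n_slices, h, w = volume.shape
    pad = (-n_slices) % depth  # extra slices needed to fill the last block
    if pad:
        volume = np.concatenate([volume, np.zeros((pad, h, w), volume.dtype)])
    return volume.reshape(-1, depth, h, w)  # (n_subvolumes, depth, H, W)

# A 20-slice volume yields ceil(20 / 8) = 3 sub-volumes of depth 8.
subs = extract_subvolumes(np.random.rand(20, 64, 64))
```

Each shallow sub-volume can then be fed to a 3D network independently, which is what keeps the memory footprint below that of processing the full volume at once.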
© 2021 Elsevier Ltd

Item: An empirical study of the impact of masks on face recognition (Elsevier Ltd, 2022)
Jeevan, G.; Zacharias, G.C.; Nair, M.S.; Rajan, J.
Face recognition has a wide range of applications such as video surveillance, security, and access control. Over the past decade, the field of face recognition has matured and grown in step with the latest advancements in technology, particularly deep learning. Convolutional Neural Networks have surpassed human accuracy in face recognition on popular evaluation benchmarks such as LFW. However, most existing models evaluate their performance under the assumption that full facial information is available. The COVID-19 pandemic has challenged this assumption, and with it the performance of existing methods and leading-edge algorithms in face recognition, in the wake of an explosive increase in the number of people wearing face masks. The reduced amount of facial information available to a recognition system from a masked face impacts its discrimination ability. In this context, we design and conduct a series of experiments comparing the masked face recognition performance of CNN architectures available in the literature, and explore possible alterations in loss functions, architectures, and training methods that can enable existing methods to fully extract and leverage the limited facial information available in a masked face. We evaluate existing CNN-based face recognition systems on datasets composed entirely of masked faces, in contrast to standard evaluations where masked or occluded faces are a rare occurrence. The study also presents evidence of an increased impact of network depth on performance compared to standard face recognition. Our observations indicate that substantial performance gains can be achieved by introducing masked faces into the training set.
The study also inferred that various parameter settings considered suitable for standard face recognition are not ideal for masked face recognition. Through empirical analysis we derived new value recommendations for these parameters and settings.
© 2021 Elsevier Ltd

Item: Crossover based technique for data augmentation (Elsevier Ireland Ltd, 2022)
Raj, R.; Mathew, J.; Kannath, S.K.; Rajan, J.
Background and Objective: Medical image classification problems are frequently constrained by the availability of datasets. "Data augmentation" has emerged as a data enhancement and enrichment solution to the challenge of limited data. Traditionally, data augmentation techniques are based on linear and label-preserving transformations; however, recent works have demonstrated that even non-linear, non-label-preserving techniques can be unexpectedly effective. This paper proposes a non-linear data augmentation technique for the medical domain and explores its results. Methods: This paper introduces the "Crossover technique", a new data augmentation technique for Convolutional Neural Networks in medical image classification problems. Our technique synthesizes a pair of samples by applying two-point crossover on the already available training dataset, creating N new samples from N training samples. The proposed crossover-based data augmentation technique, although non-label-preserving, performed significantly better in terms of increased accuracy and reduced loss on all tested datasets over varied architectures. Results: The proposed method was tested on three publicly available medical datasets with various network architectures. For the mini-MIAS database of mammograms, our method improved the accuracy by 1.47%, achieving 80.15% using the VGG-16 architecture. Our method works for both gray-scale and RGB images: on the PH2 database for skin cancer, it improved the accuracy by 3.57%, achieving 85.71% using the VGG-19 architecture.
In addition, our technique improved accuracy on the brain tumor dataset by 0.40%, achieving 97.97% using the VGG-16 architecture. Conclusion: The proposed novel crossover technique for training the Convolutional Neural Network (CNN) is painless to implement, applying two-point crossover on two images to form new images. The method would go a long way in tackling the challenges of limited datasets and class imbalance in medical image analysis. Our code is available at https://github.com/rishiraj-cs/Crossover-augmentation
© 2022

Item: Stroke classification from computed tomography scans using 3D convolutional neural network (Elsevier Ltd, 2022)
Neethi, A.S.; Niyas, S.; Kannath, S.K.; Mathew, J.; Anzar, A.M.; Rajan, J.
Stroke is a cerebrovascular condition with significant morbidity and mortality that causes physical disabilities for survivors. Once the symptoms are identified, it requires a time-critical diagnosis with the help of the most commonly available imaging techniques. Computed tomography (CT) scans are used worldwide for preliminary stroke diagnosis. Identifying the stroke type, which is critical for initiating treatment, demands the expertise and experience of a radiologist. This work attempts to capture those domain skills and build a model that diagnoses stroke from CT scans. A non-contrast computed tomography (NCCT) scan of the brain comprises volumetric images, i.e., a 3D stack of image slices, so a model that targets a 2D slice may fail to capture this volumetric nature. We propose a 3D fully convolutional classification model to identify stroke cases from CT images that takes into account the contextual longitudinal composition of volumetric data. We formulate a custom pre-processing module to enhance the scans and aid in improving the classification performance.
Significant challenges for 3D CNNs are the limited number of training samples and the fact that the available scans are mostly biased in favor of normal patients. In this work, the limitations of insufficient training volumes and class-imbalanced data have been addressed with the help of a strided slicing approach. A block-wise design was used to formulate the proposed network, with the initial part focusing on adjusting the dimensionality while retaining the features. The accumulated feature maps were then effectively learned utilizing bundled convolutions and skip connections. The results of the proposed method were compared against 3D CNN stroke classification models on NCCT, various 3D CNN architectures on other brain imaging modalities, and 3D extensions of some classical CNN architectures. The proposed method achieved an improvement of 14.28% in the F1-score over the state-of-the-art 3D CNN stroke classification model.
© 2022 Elsevier Ltd

Item: StrokeViT with AutoML for brain stroke classification (Elsevier Ltd, 2023)
Raj, R.; Mathew, J.; Kannath, S.K.; Rajan, J.
Stroke, categorized under cardiovascular and circulatory diseases, is considered the second foremost cause of death worldwide, causing approximately 11% of deaths annually. Stroke diagnosis using a Computed Tomography (CT) scan is considered ideal for identifying whether a stroke is hemorrhagic or ischemic. However, most methods for stroke classification are based on a single slice-level prediction mechanism, meaning that the most informative CT slice has to be manually selected by the radiologist from the original CT volume. This paper proposes an integration of a Convolutional Neural Network (CNN), Vision Transformers (ViT), and AutoML to obtain slice-level as well as patient-wise prediction results. While the CNN with its inductive bias captures local features, the transformer captures long-range dependencies between sequences.
This collaborative local-global feature extractor improves upon the slice-wise predictions of the CT volume. We propose stroke-specific feature extraction from each slice-wise prediction to obtain the patient-wise prediction using AutoML. While the slice-wise predictions help the radiologist verify close and corner cases, the patient-wise predictions make the outcome clinically relevant and closer to real-world scenarios. The proposed architecture achieved an accuracy of 87% for single slice-level prediction and 92% for patient-wise prediction. For comparative analysis of slice-level predictions, standalone VGG-16, VGG-19, ResNet50, and ViT architectures were considered; the proposed architecture outperformed them by 9% in terms of accuracy. For patient-wise predictions, AutoML considers 13 different ML algorithms, of which 3 achieve an accuracy of more than 90%. The proposed architecture reduces the manual effort required of the radiologist to select the most informative CT slice from the original CT volume and shows improvement over standalone architectures for classification tasks. It can be generalized to volumetric scans, aiding patient diagnosis of the head and neck, lungs, diseases of the hepatobiliary tract, genitourinary diseases, women's imaging including breast cancer, and various musculoskeletal diseases. Code for the proposed stroke-specific feature extraction, with the pre-trained weights of the trained model, is available at: https://github.com/rishiraj-cs/StrokeViT_With_AutoML.
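The patient-wise step described above — turning a stack of per-slice predictions into features for a patient-level classifier — can be sketched as follows. The specific features here (mean, max, fraction of positive slices, longest contiguous positive run) are illustrative assumptions; the abstract does not detail the paper's actual stroke-specific features.

```python
import numpy as np

def longest_positive_run(mask):
    """Length of the longest contiguous run of positive slices;
    lesions tend to span adjacent slices, so runs are informative."""
    best = cur = 0
    for m in mask:
        cur = cur + 1 if m else 0
        best = max(best, cur)
    return best

def patient_features(slice_probs):
    """Aggregate per-slice stroke probabilities of one CT volume into
    simple patient-level features (hypothetical feature set)."""
    p = np.asarray(slice_probs, dtype=float)
    return {
        "mean_prob": p.mean(),
        "max_prob": p.max(),
        "frac_positive": (p > 0.5).mean(),       # share of flagged slices
        "longest_run": longest_positive_run(p > 0.5),
    }

feats = patient_features([0.1, 0.2, 0.8, 0.9, 0.7, 0.3])
```

A feature dictionary like this, computed per patient, is the kind of tabular input an AutoML system can then fit its candidate classifiers on.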
© 2022 Elsevier Ltd

Item: WideCaps: a wide attention-based capsule network for image classification (Springer Science and Business Media Deutschland GmbH, 2023)
Pawan, S.J.; Sharma, R.; Reddy, H.; Vani, M.; Rajan, J.
The capsule network is a distinct and promising member of the neural network family that has drawn attention for its unique ability to maintain equivariance by preserving spatial relationships among features. The capsule network has attained unprecedented success in image classification on datasets such as MNIST and affNIST by encoding characteristic features into capsules and building a parse-tree structure. However, on datasets involving complex foreground and background regions, such as CIFAR-10 and CIFAR-100, its performance is suboptimal due to its naive data routing policy and inability to extract complex features. This paper proposes a new design strategy for capsule network architectures to deal efficiently with complex images. The proposed method incorporates the optimal placement of the novel wide bottleneck residual block and squeeze-and-excitation attention blocks into the capsule network, supported by the modified factorized machines routing algorithm, to address the defined problem. This setup captures channel interdependencies at almost no computational cost, thereby enhancing the representation ability of capsules on complex images. We extensively evaluate the proposed model on five publicly available datasets: CIFAR-10, Fashion MNIST, Brain Tumor, SVHN, and CIFAR-100. The proposed method outperformed the top five capsule network-based methods on Fashion MNIST, CIFAR-10, SVHN, and Brain Tumor, and gave a highly competitive performance on the CIFAR-100 dataset.
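The squeeze-and-excitation attention mentioned above has a compact core: global-average-pool each channel ("squeeze"), pass the result through a small two-layer bottleneck ("excitation"), and rescale the channels by the resulting gates. A minimal NumPy sketch, with randomly initialized weights standing in for learned parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feature_maps, w1, w2):
    """Squeeze-and-excitation over feature_maps of shape (C, H, W).
    w1: (C, C//r) and w2: (C//r, C) form the bottleneck (r = reduction)."""
    squeeze = feature_maps.mean(axis=(1, 2))            # (C,) channel descriptors
    excite = sigmoid(np.maximum(squeeze @ w1, 0) @ w2)  # ReLU bottleneck, sigmoid gate
    return feature_maps * excite[:, None, None]         # per-channel reweighting

C, r = 8, 4
rng = np.random.default_rng(0)
x = rng.standard_normal((C, 16, 16))
out = se_block(x, rng.standard_normal((C, C // r)),
               rng.standard_normal((C // r, C)))
```

Because the gate is a single scalar per channel in (0, 1), the block adds channel attention at negligible cost relative to the convolutions it augments, which is the property the abstract points to.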
© 2023, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.

Item: Forecasting Land-Use and Land-Cover Change Using Hybrid CNN-LSTM Model (Institute of Electrical and Electronics Engineers Inc., 2024)
Varma, B.; Naik, N.; Chandrasekaran, K.; Venkatesan, M.; Rajan, J.
Land-use and land-cover (LULC) information helps analyze future trends and is essential for environmental management and sustainable planning. Time-series satellite images are employed in this study to forecast changes in LULC. Deep-learning (DL) frameworks have been widely used for modeling dynamic LULC changes at the regional level; however, the accuracy of existing prediction models needs improvement. This letter proposes an integrated convolutional neural network (CNN) and long short-term memory network (LSTM), known as a hybrid CNN-LSTM model, to address the fine-scale LULC prediction requirement. The efficiency of the proposed approach was examined using LULC data for the Dakshina Kannada District of Karnataka State, India. The proposed model achieved an overall accuracy of 95.11% and a kappa coefficient of 0.92 based on the ground-truth data for 2014. The model's predictions for 2035, based on data from 2005 to 2014, revealed that urbanization exhibited a pattern of rapid expansion and increased growth. The integrated CNN-LSTM model extracted spatial and temporal features to effectively predict LULC changes. Infrastructure development, population density, and increased economic activity were the major drivers of LULC change in the study region. Robust LULC change forecasting will strengthen LULC evaluations, aid in understanding complex land-use systems, and empower decision-makers to formulate effective land management strategies in the coming years.
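The kappa coefficient reported above (0.92 alongside 95.11% accuracy) is Cohen's kappa: observed agreement corrected for agreement expected by chance. As a worked sketch with a toy two-class confusion matrix (the example numbers are invented, not from the letter):

```python
import numpy as np

def kappa_coefficient(confusion):
    """Cohen's kappa from a square confusion matrix: observed agreement
    (overall accuracy) corrected for chance agreement implied by the
    row and column marginals."""
    cm = np.asarray(confusion, dtype=float)
    total = cm.sum()
    observed = np.trace(cm) / total                       # overall accuracy
    expected = (cm.sum(0) * cm.sum(1)).sum() / total**2   # chance agreement
    return (observed - expected) / (1.0 - expected)

# Toy 2-class example (e.g. built-up vs. non-built-up pixels)
cm = [[40, 5],
      [5, 50]]
kappa = kappa_coefficient(cm)   # accuracy 0.90, but kappa ≈ 0.798
```

The gap between accuracy and kappa in the toy example shows why LULC studies report both: kappa discounts the agreement a random labeler with the same class proportions would achieve.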
© 2004-2012 IEEE.

Item: A Dual-Stage Semi-Supervised Pre-Training Approach for Medical Image Segmentation (Institute of Electrical and Electronics Engineers Inc., 2024)
Aralikatti, R.C.; Pawan, S.J.; Rajan, J.
Deep neural networks have played a vital role in developing automated methods for medical image segmentation. However, their reliance on labeled data impedes their practicality. Semi-supervised learning is gaining attention for its intrinsic ability to extract valuable information from labeled and unlabeled data with improved performance. Recently, consistency regularization methods have gained interest due to their efficient learning procedures. They are, however, confined to either data-level or network-level perturbations, negating the benefit of having both forms in a single framework. In light of this, we ask an intriguing but unexplored question: can we have both network-level and data-level perturbation in a semi-supervised framework? To this end, we present a holistic approach that integrates data-level perturbation in the model pre-training stage, followed by implicit network-level perturbation in the fine-tuning stage. Furthermore, we incorporate networks with manifold learning paradigms throughout the training to facilitate the formation of robust data representations by ensuring local and global semantic affinities, adhering to the theory of consensus. Notably, this may be the first attempt in semi-supervised medical image segmentation to combine data- and network-level perturbation with a model pre-training strategy. We extensively validated the efficacy of the proposed framework on three benchmark datasets, namely the Automated Cardiac Diagnosis Challenge (ACDC), ISIC-2018, and Left Atrial (LA) Segmentation Challenge datasets, subjected to severely low-sampled labeled data. Notably, with ACDC (4%), ISIC-2018 (5%), and LA (6%) labeled cases, the proposed method outperforms the second-best method by 2.95%, 1.31%, and 0.71% in the Dice Similarity Metric.
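The consistency-regularization idea underlying this line of work can be sketched with its core unlabeled-data term: penalize disagreement between predictions on a clean input and a perturbed view of it. The MSE-between-softmax form below is one common choice and an assumption here; the abstract does not specify the paper's exact losses.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def consistency_loss(logits_clean, logits_perturbed):
    """Mean squared error between the class distributions predicted for
    a clean input and a perturbed view — needs no labels, so it can be
    computed on unlabeled data."""
    p, q = softmax(logits_clean), softmax(logits_perturbed)
    return float(np.mean((p - q) ** 2))

rng = np.random.default_rng(1)
logits = rng.standard_normal((4, 3))                  # batch of 4, 3 classes
noisy = logits + 0.1 * rng.standard_normal((4, 3))    # stand-in for a data-level perturbation
loss_same = consistency_loss(logits, logits)          # identical views: zero loss
loss_diff = consistency_loss(logits, noisy)           # disagreement is penalized
```

Data-level perturbation varies the input (as in the pre-training stage above); network-level perturbation instead varies the model, e.g. via dropout or dual branches, while the loss form stays the same.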
© 2023 IEEE.

Item: An automated deep learning pipeline for detecting user errors in spirometry test (Elsevier Ltd, 2024)
Bonthada, S.; Pariserum Perumal, S.P.; Naik, P.P.; Mahesh, M.A.; Rajan, J.
The spirometer is used as a major diagnostic tool for obstructive airway diseases and as a monitoring tool for therapy response and disease staging over time. It is a sophisticated medical device employed to quantify the flow and volume of air exhaled by a subject during a specific testing period. The essential metrics obtained from the spirometry test play a crucial role in enabling healthcare professionals to thoroughly evaluate the respiratory health and condition of the individual under examination. Several spirometer measurements, including Forced Vital Capacity (FVC) and Forced Expiratory Volume (FEV), serve as guidelines for the diagnosis and prognosis of Chronic Obstructive Pulmonary Disease (COPD) and asthma. However, user errors caused by various factors, including improper handling of the equipment and poor performance of the expiratory maneuvers, lead to incorrect treatment directions. To ensure accurate results, spirometry tests traditionally require the presence of a skilled professional to identify and address these errors promptly. A novel machine learning approach is proposed in this paper to automatically identify four such user errors based on Volume-Time and Flow-Volume graphs. By detecting specific errors and providing immediate feedback to patients, the reliability and accuracy of spirometry results are improved and the need for trained professionals is reduced. This facilitates the widespread adoption of spirometry, particularly in low-resource telemedicine settings. This work implements a binary classification model distinguishing between normal and error test samples, achieving a prediction accuracy of 93%.
Additionally, a 4-way classification model is presented for identifying individual error sub-types, demonstrating a prediction accuracy of 94%.
© 2023 Elsevier Ltd
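The FVC and FEV metrics mentioned in this abstract come directly from the Volume-Time curve the pipeline analyzes. A simplified illustration of how they are read off a sampled curve (the sample values are invented; real spirometry applies ATS/ERS acceptability criteria, not this bare computation):

```python
import numpy as np

def fvc_fev1(time_s, volume_l):
    """FVC (total exhaled volume) and FEV1 (volume exhaled in the first
    second) from a sampled volume-time curve. Illustrative only."""
    t = np.asarray(time_s, dtype=float)
    v = np.asarray(volume_l, dtype=float)
    fvc = v[-1]                      # plateau at end of the maneuver
    fev1 = np.interp(1.0, t, v)      # linear interpolation at t = 1 s
    return fvc, fev1

# Toy maneuver: most air leaves in the first second (a healthy pattern).
t = [0.0, 0.5, 1.0, 2.0, 4.0]
v = [0.0, 2.5, 3.4, 3.9, 4.0]
fvc, fev1 = fvc_fev1(t, v)
ratio = fev1 / fvc                   # FEV1/FVC; values below ~0.7 suggest obstruction
```

User errors such as a premature stop or a slow start distort exactly this curve shape, which is why the paper's models classify the Volume-Time and Flow-Volume graphs rather than the summary numbers alone.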
