Browsing by Author "Dodia, S."
Now showing 1 - 16 of 16
Item 3D AttU-NET for Brain Tumor Segmentation with a Novel Loss Function (Institute of Electrical and Electronics Engineers Inc., 2023) Roy, R.; Annappa, B.; Dodia, S. In the United States of America (USA), 150,000 patients are registered every year with a secondary brain tumor, i.e., one that does not originate in the brain. This necessitates early brain tumor detection, which in turn helps patients live longer. For clinical evaluation and treatment, precise segmentation of brain tumors in MRI images is required. This process can be aided by machine learning and efficient image processing, as manual imaging can be time-consuming. In this study, we aim to develop an automated 3D segmentation algorithm with a novel loss function. A 3D attention U-Net CNN model was trained using the novel loss function, calculated as the weighted average of Dice loss and focal loss to overcome class imbalance. Results show an enhancement in the segmentation performance of the attention U-Net model, with an average increase of 5% in the Dice coefficient for all three classes. However, the model's performance was not as strong for the enhancing and core tumor classes, and further research may be needed to optimize performance in these areas. © 2023 IEEE.

Item A Novel Artificial Intelligence-Based Lung Nodule Segmentation and Classification System on CT Scans (Springer Science and Business Media Deutschland GmbH, 2022) Dodia, S.; Annappa, A.; Mahesh, M.A. Major innovations in deep neural networks have helped optimize the functionality of tasks such as detection, classification, and segmentation in medical imaging. Although Computer-Aided Diagnosis (CAD) systems created using classic deep architectures have significantly improved performance, the pipeline operation remains unclear.
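The novel loss in the 3D AttU-NET item above is a weighted average of Dice loss and focal loss; the abstract does not state the weight or the focal parameter, so `w` and `gamma` below are illustrative. A minimal NumPy sketch for a binary mask:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P.T| / (|P| + |T|)."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-6):
    """Focal loss down-weights easy examples via (1 - p_t)^gamma."""
    pred = np.clip(pred, eps, 1.0 - eps)
    p_t = np.where(target == 1, pred, 1.0 - pred)
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))

def combined_loss(pred, target, w=0.5):
    """Weighted average of Dice and focal loss (w is illustrative)."""
    return w * dice_loss(pred, target) + (1.0 - w) * focal_loss(pred, target)
```

The Dice term directly rewards overlap with the ground-truth mask, while the focal term keeps abundant easy background voxels from dominating the gradient, which is how such a combination addresses class imbalance.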
In this work, in comparison to state-of-the-art deep learning architectures, we developed a novel pipeline for performing lung nodule detection and classification, resulting in fewer parameters, better analysis, and improved performance. Histogram equalization, an image enhancement technique, is used as an initial preprocessing step to improve the contrast of the lung CT scans. A novel Elagha initialization-based Fuzzy C-Means clustering (EFCM) is introduced in this work to perform nodule segmentation from the preprocessed CT scan. Following this, a Convolutional Neural Network (CNN) is used for feature extraction to perform nodule classification instead of customary classification. Another set of features considered in this work is Bag-of-Visual-Words (BoVW); these features are encoded representations of the detected nodule images. This work also examines a blend of intermediate features extracted from the CNN and BoVW features, which resulted in higher performance than either feature representation individually. A Support Vector Machine (SVM) is used to distinguish detected nodules into benign and malignant. The achieved results clearly show an improvement in nodule detection and classification performance compared to state-of-the-art architectures. The model is evaluated on the popular publicly available LUNA16 dataset and verified by an expert pulmonologist. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.

Item A Novel Bi-level Lung Cancer Classification System on CT Scans (Springer Science and Business Media Deutschland GmbH, 2022) Dodia, S.; Annappa, A.; Mahesh, M.A. Purpose: Lung cancer is a life-threatening disease that affects both men and women. Accurate identification of lung cancer has been a challenging task for decades. The aim of this work is to perform a bi-level classification of lung cancer nodules.
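The EFCM segmentation in the lung nodule item above builds on Fuzzy C-Means; the Elagha initialization itself is not detailed in the abstract, so the sketch below uses random membership initialization and shows only the classic FCM centroid/membership updates on flattened intensities:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    """Classic Fuzzy C-Means on data X of shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships sum to 1
    for _ in range(iters):
        Um = U ** m                              # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        U = 1.0 / (d ** (2.0 / (m - 1.0)))       # inverse-distance update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers
```

For nodule segmentation, `X` would hold per-voxel intensities of the preprocessed CT scan, and thresholding the memberships of the brighter cluster yields a candidate nodule mask.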
In Level-1, candidates are classified into nodules and non-nodules, and in Level-2, the detected nodules are further classified into benign and malignant. Methods: A new preprocessing method, named Boosted Bilateral Histogram Equalization (BBHE), is applied to the input scans prior to feeding them to the neural networks. A novel Cauchy Black Widow Optimization-based Convolutional Neural Network (CBWO-CNN) is introduced for Level-1 classification. The weight update in the CBWO-CNN is performed using Cauchy mutation, which minimizes the error rate and in turn improves accuracy with less computation time. A novel hybrid Convolutional Neural Network (CNN) model with shared parameters is introduced for Level-2 classification. The second model proposed in this work is a fusion of the Squeeze-and-Excitation Network (SE-Net) and Xception, abbreviated as “SE-Xception”. The weight parameters for the SE-Xception model are shared from the trained CBWO-CNN, i.e., a knowledge transfer approach is adopted. Results: The recognition accuracy obtained from the CBWO-CNN for Level-1 classification is 96.37%, with a reduced False Positive Rate (FPR) of 0.033. The SE-Xception model achieved a sensitivity of 96.14%, an accuracy of 94.75%, and a specificity of 92.83% for Level-2 classification. Conclusion: The proposed method's performance is better than existing deep learning architectures, and it outperformed the individual SE-Net and Xception models with fewer parameters. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.

Item A novel receptive field-regularized V-net and nodule classification network for lung nodule detection (John Wiley and Sons Inc, 2022) Dodia, S.; Annappa, B.; Mahesh, M. Recent advancements in deep learning have achieved great success in building a reliable computer-aided diagnosis (CAD) system.
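The CBWO-CNN item above updates weights via Cauchy mutation; the full black-widow optimizer is beyond what the abstract details, so this sketch shows only the Cauchy perturbation step applied to a weight vector (the scale parameter is an illustrative assumption):

```python
import numpy as np

def cauchy_mutate(weights, scale=0.1, seed=0):
    """Perturb weights with heavy-tailed Cauchy noise.

    The Cauchy distribution's heavy tails occasionally produce large
    jumps, which helps an evolutionary optimizer escape local minima;
    `scale` here is an illustrative choice, not the paper's value.
    """
    rng = np.random.default_rng(seed)
    noise = scale * rng.standard_cauchy(size=np.shape(weights))
    return np.asarray(weights, dtype=float) + noise
```

In an optimizer loop, each candidate weight set would be mutated this way and kept only if it lowers the validation error.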
In this work, a novel deep-learning architecture, named receptive field-regularized V-Net (RFR V-Net), is proposed for detecting lung cancer nodules with reduced false positives (FPs). The method applies receptive-field regularization to the convolution layers of the encoder block and the deconvolution layers of the decoder block in the V-Net model. Further, nodule classification is performed using a new combination of SqueezeNet and ResNet, named the nodule classification network (NCNet). Postprocessing image enhancement is performed on the 2D slices by increasing image intensity through added pseudo-color or fluorescence contrast. The proposed RFR V-Net resulted in a Dice similarity coefficient of 95.01% and an intersection-over-union of 0.83. The proposed NCNet achieved a sensitivity of 98.38% and 2.3 FPs/scan for 3D representations, a considerable improvement over existing CAD systems. © 2021 Wiley Periodicals LLC.

Item An Efficient Deep Transfer Learning Approach for Classification of Skin Cancer Images (Springer Science and Business Media Deutschland GmbH, 2023) Naik, P.P.; Annappa, B.; Dodia, S. Prolonged exposure to the sun can likely cause skin cancer, an abnormal proliferation of skin cells. The early detection of this illness necessitates the classification of dermatoscopic images, making it an enticing research problem. Deep learning is playing a crucial role in efficient dermoscopic analysis. A modified version of MobileNetV2 is proposed for the classification of skin cancer images into seven classes. The proposed deep learning model employs transfer learning and various data augmentation techniques to classify skin lesions more accurately than existing models. To improve the performance of the classifier, data augmentation techniques are applied to the “HAM10000” (Human Against Machine) dataset to classify seven different kinds of skin cancer.
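The RFR V-Net results above are reported as a Dice similarity coefficient and an intersection-over-union; for binary segmentation masks these two standard metrics can be computed as:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-6):
    """Dice = 2|P.T| / (|P| + |T|) for binary masks."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-6):
    """Intersection-over-Union (Jaccard index) for binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)
```

Dice weighs the overlap against the average mask size, while IoU weighs it against the union, so Dice is always at least as large as IoU for the same prediction.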
The proposed model obtained a training accuracy of 96.56% and a testing accuracy of 93.11%, with fewer parameters than existing methods. The aim of the study is to aid dermatologists in making more accurate diagnoses of skin lesions in the clinic and in the early detection of skin cancer. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.

Item Automatic Abnormality Detection in Musculoskeletal Radiographs Using Ensemble of Pre-trained Networks (Springer Science and Business Media Deutschland GmbH, 2023) Verma, R.; Jain, S.; Saritha, S.K.; Dodia, S. Musculoskeletal disorders (MSDs) are injuries that affect the movement or musculoskeletal system of the human body. Worldwide, they are the second leading cause of physical disability. Musculoskeletal disorders worsen over time and can result in long-term discomfort and severe disability. As a result, early detection and diagnosis of these anomalies is essential, but the diagnosis process is time-consuming, error-prone, and requires diagnostic professionals. Deep learning algorithms have recently been applied in medical imaging, providing a robust platform with very reliable outcomes, and the development of Computer-Aided Detection (CAD) systems greatly speeds up the diagnosis process. In this paper, a weighted ensemble model is proposed that combines three pre-trained models (DenseNet169, MobileNet, and XceptionNet). The weighted ensemble model is tested on the MURA dataset, a large public dataset provided by the Stanford ML Group. Our model achieved a Cohen's kappa score of 0.739, with a precision of 0.885 and a recall of 0.854, which is higher than many existing approaches such as the DenseNet169 and Ensemble200 models.
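The weighted ensemble above combines DenseNet169, MobileNet, and XceptionNet predictions; the abstract does not give the weights, so those in the usage below are illustrative. The combination itself is a weighted average of each model's class probabilities:

```python
import numpy as np

def weighted_ensemble(prob_list, weights):
    """Combine per-model class probabilities by weighted averaging.

    prob_list: list of arrays, each (n_samples, n_classes).
    weights:   one non-negative weight per model (normalized here).
    Returns predicted labels and the combined probabilities.
    """
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()                      # normalize to sum to 1
    stacked = np.stack(prob_list)                 # (n_models, n_samples, n_classes)
    combined = np.tensordot(weights, stacked, axes=1)
    return combined.argmax(axis=1), combined
```

In practice the weights are often tuned on a validation split so that stronger base models contribute more to the final vote.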
© 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

Item Bankruptcy Prediction Using Bi-Level Classification Technique (Springer Science and Business Media Deutschland GmbH, 2023) Antani, A.; Annappa, B.; Dodia, S.; Manoj Kumar, M.V. Bankruptcy is a legal proceeding involving a person or a business that is unable to pay their debts. Financial investors, banks, money lenders, and the government seek to know the bankruptcy status of firms, as it carries huge financial risk, and its prediction helps all the stakeholders of a company. To model bankruptcy prediction, traditional statistical methods like multiple discriminant analysis and Machine Learning (ML) models like Decision Trees, Support Vector Machines, and ensembles have been utilized. In existing works, homogeneous base estimators are used while developing ensemble algorithms. This study uses a bi-level classification technique (a heterogeneous ensemble ML technique) to predict bankruptcy. The features extracted to train the classifier are Altman z-score parameters and market-based measures, and unlike previous studies, this study also uses an indicator of corporate governance as a feature. The outcome is an improvement in ML model performance using the bi-level classification technique: an F1-score of 0.98 and an accuracy of 97.8% are achieved with features including Tobin's Q, outperforming the 96% accuracy of the random forest algorithm. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

Item Classification of Skin Cancer Images using Lightweight Convolutional Neural Network (Institute of Electrical and Electronics Engineers Inc., 2023) Sandeep Kumar, T.; Annappa, B.; Dodia, S. The skin is the human body's most powerful protective organ, shielding the internal organs from external attacks.
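The bankruptcy item above uses Altman z-score parameters as features. The classic Altman (1968) Z-score for public manufacturing firms is a fixed linear combination of five financial ratios, sketched here; the bi-level ensemble in the study would consume such ratios as inputs rather than this single score:

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    """Classic Altman (1968) Z-score for public manufacturing firms.

    Conventionally, Z > 2.99 places a firm in the 'safe' zone and
    Z < 1.81 in the 'distress' zone.
    """
    a = working_capital / total_assets
    b = retained_earnings / total_assets
    c = ebit / total_assets
    d = market_value_equity / total_liabilities
    e = sales / total_assets
    return 1.2 * a + 1.4 * b + 3.3 * c + 0.6 * d + 1.0 * e
```
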
This important organ is attacked by a diverse range of microbes such as viruses, fungi, and bacteria, causing significant damage to the skin; apart from these microbes, even dust plays an important role in damaging it. Every year, many people around the world suffer from skin diseases, which can be contagious and spread very fast. Because there are many varieties of skin diseases, it takes considerable practice for a doctor to distinguish them and provide treatment. To automate this process, several deep learning models have been used in recent years. This paper demonstrates an efficient, lightweight modified SqueezeNet deep learning model on the HAM10000 dataset for skin cancer classification. This model has outperformed state-of-the-art models with fewer parameters: compared to existing deep learning models, this SqueezeNet variant achieved train, validation, and test accuracies of 99.7%, 97.7%, and 97.04%, respectively, using only 0.13 million parameters. © 2023 IEEE.

Item COVID-19: Automatic detection from X-ray images by utilizing deep learning methods (Elsevier Ltd, 2021) Nigam, B.; Nigam, A.; Jain, R.; Dodia, S.; Arora, N.; Annappa, B. In recent months, a novel virus named Coronavirus has emerged to become a pandemic. The virus infects not only humans but also animals. The first case of Coronavirus was registered in the city of Wuhan, Hubei province of China, on 31 December 2019. Coronavirus-infected patients display symptoms very similar to pneumonia, and the virus attacks the respiratory organs of the body, causing difficulty in breathing. The disease is diagnosed using a Real-Time Reverse Transcriptase Polymerase Chain Reaction (RT-PCR) kit and requires laboratory time to confirm the presence of the virus. Due to insufficient availability of the kits, suspected patients cannot be treated in time, which in turn increases the chance of spreading the disease.
To overcome this limitation, radiologists observe the changes appearing in radiological images such as X-ray and CT scans. Using deep learning algorithms on a suspected patient's X-ray or Computed Tomography (CT) scan, a healthy person can be differentiated from a patient affected by Coronavirus. In this paper, popular deep learning architectures are used to develop Coronavirus diagnostic systems. The architectures used are VGG16, DenseNet121, Xception, NASNet, and EfficientNet. Multiclass classification is performed with three classes: COVID-19 positive patients, normal patients, and an "other" class comprising chest X-ray images of pneumonia, influenza, and other illnesses related to the chest region. The accuracies obtained for VGG16, DenseNet121, Xception, NASNet, and EfficientNet are 79.01%, 89.96%, 88.03%, 85.03%, and 93.48%, respectively. Deep learning with radiologic images is necessary in this critical situation, as it provides radiologists with a fast and accurate second opinion. These deep learning Coronavirus detection systems can also be useful in regions where expert physicians and well-equipped clinics are not easily accessible. © 2021 Elsevier Ltd

Item Diabetic Retinopathy Detection Using Novel Loss Function in Deep Learning (Springer Science and Business Media Deutschland GmbH, 2024) Singh, S.; Annappa, B.; Dodia, S. Globally, the number of diabetics has significantly increased in recent years, affecting several age groups. Diabetic Retinopathy (DR) affects those who have had diabetes for a long time; it is a side effect of diabetes, caused by high blood sugar levels, that damages the blood vessels of the retina. Therefore, early detection and treatment are preferred. The most complex problems are the difficulty of manual recognition and a lack of technology support for ophthalmologists.
Nowadays, Deep Learning (DL) based approaches are widely used for creating DR detection systems because of the ongoing development of Artificial Intelligence (AI) techniques. This paper uses the APTOS dataset of retina images to train four deep Convolutional Neural Network (CNN) models using a novel loss function. The four DL models used are VGG16, ResNet50, DenseNet121, and DenseNet169, chosen to exploit their rich feature representations and improve classification across the different stages of DR. The experimental results of this study demonstrate that VGG16 produced the lowest accuracy of 73.26% on the APTOS dataset, while DenseNet169-based detection gives the best result of 96.68% accuracy among the four approaches. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.

Item Infant Brain MRI Segmentation Using Deep Volumetric U-Net with Gamma Transformation (Springer Science and Business Media Deutschland GmbH, 2023) Yeshwanth, G.S.; Annappa, B.; Dodia, S.; Manoj Kumar, M.V. The growth of the brain from infancy to adolescence is complex and takes a long time, involving many lengthy processes such as myelination, migration, and neural induction. This makes it necessary to develop automatic tools to study brain development. The brain consists mainly of three parts: white matter, gray matter, and cerebrospinal fluid. Quantitative tools that segment brain MRI images into these three parts would therefore be a great boon for the medical community. Although tools exist for segmenting adult MRI images, segmenting the brain of a 6-month-old infant is challenging, as white matter and gray matter are almost indistinguishable at that stage of brain development. Segmentation of brain MRI images can identify specific patterns that contribute to healthy brain development.
The dataset used to address this problem was taken from the iSeg-2019 challenge conducted by MICCAI. MRI segmentation normally requires expert doctors; advancements in computer vision techniques can replace this time-consuming manual work. This paper proposes a deep learning model for image segmentation using a three-dimensional U-Net. The proposed model gives Dice values of 93.75, 88.24, and 85.64 for cerebrospinal fluid, gray matter, and white matter, respectively. This paper also presents various experimental results of U-Net and attention U-Net with different modifications. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

Item KAC SegNet: A Novel Kernel-Based Active Contour Method for Lung Nodule Segmentation and Classification Using Dense AlexNet Framework (World Scientific, 2024) Dodia, S.; Annappa, B.; Mahesh, P.A. Lung cancer is known to be one of the leading causes of death worldwide, and there is a chance of increasing patients' survival rate if it is detected at an early stage. Computed Tomography (CT) scans are prominently used to detect and classify lung cancer nodules/tumors in the thoracic region, so there is a need to develop an efficient and reliable computer-aided diagnosis model to detect lung cancer nodules accurately from CT scans. This work proposes a novel kernel-based active-contour (KAC) SegNet deep learning model to perform lung cancer nodule detection from CT scans. The active contour uses a snake method to detect the internal and external boundaries of curves, which is used to extract the Region Of Interest (ROI) from the CT scan. From the extracted ROI, the nodules are further classified into benign and malignant using a Dense AlexNet deep learning model.
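The gamma transformation named in the infant brain MRI item's title is the standard power-law intensity mapping; the abstract does not give the exponent used, so the value below is illustrative:

```python
import numpy as np

def gamma_transform(image, gamma=0.8):
    """Power-law intensity mapping: out = in**gamma on a [0, 1] scale.

    gamma < 1 brightens dark regions and gamma > 1 darkens them;
    the value 0.8 here is illustrative, not the paper's setting.
    """
    img = np.asarray(image, dtype=float)
    lo, hi = img.min(), img.max()
    norm = (img - lo) / (hi - lo + 1e-9)   # rescale intensities to [0, 1]
    return norm ** gamma
```

Such a mapping can stretch the low-contrast range where infant white and gray matter intensities overlap before the volume is fed to the 3D U-Net.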
The key contributions of this work are the fusion of an edge detection method with a deep learning segmentation method, which enhances lung nodule segmentation performance, and an ensemble of state-of-the-art deep learning classifiers, which combines the advantages of both DenseNet and AlexNet to learn more discriminative information from the detected lung nodules. The experimental outcome shows that the proposed segmentation approach achieves a Dice Score Coefficient of 97.8% and an Intersection-over-Union of 92.96%. The classification performance resulted in an accuracy of 95.65%, a False Positive Rate of 0.0572, and a False Negative Rate of 0.0289. The proposed model is robust compared to existing state-of-the-art methods. © 2024 World Scientific Publishing Company.

Item Light-weight Deep Learning Model for Cataract Detection using Novel Activation Function (Institute of Electrical and Electronics Engineers Inc., 2023) Singh, P.; Naveen, B.; Mohapatra, A.R.; Annappa, B.; Dodia, S. In cataracts, the natural lens behind the iris and pupil becomes cloudy, distorting or blocking the light passing through it and causing blurry or dim vision. About 50% of all cases of blindness worldwide are caused by cataracts, according to the WHO and the National Library of Medicine. A timely diagnosis of cataracts can help prevent vision loss and other disease-related complications. Several recent developments in machine learning have significantly impacted medical science; however, most existing approaches for cataract detection are based on traditional machine learning techniques. There have been a few attempts to use deep learning in recent years; the models have delivered decent outcomes but require much computational power. Reducing ophthalmologists' time can improve patient outcomes, increase access to care, lower costs, address workforce shortages, and improve healthcare efficiency.
It allows ophthalmologists to see more patients and provide more accurate, timely diagnoses and treatments. Using lightweight deep learning algorithms, this paper proposes a solution that delivers rapid and precise results without requiring high-end hardware. A novel activation function is also proposed that significantly improves performance. The proposed lightweight model achieved 95.8% accuracy using only 16,874 parameters. © 2023 IEEE.

Item Machine Learning-based Automated System for Subjective Answer Evaluation (Institute of Electrical and Electronics Engineers Inc., 2023) Dodia, S.; Spoorthy, V.; Chandak, T. An examination is a useful tool for assessing students' knowledge, but evaluating exams is a difficult and time-consuming process. Automatic examination of answer scripts makes this task easier for teachers, reducing the effort and time required. A number of methods have been proposed in the existing literature for evaluating responses to objective questions using machine learning; however, more work needs to be done on evaluating answers to descriptive questions. This study proposes a way to evaluate students' answers to descriptive questions without teachers using traditional paper-and-pencil grading: instead, a computer acts as the teacher and grades the students' submissions. The primary objective is to score subjective responses using Bidirectional Encoder Representations from Transformers (BERT), cosine similarity, and Jaccard distance. The proposed model achieved an accuracy of 91%, an error of 9.01, a precision of 83%, and a recall of 79%, providing the best results in comparison with state-of-the-art systems. © 2023 IEEE.

Item Machine learning-based detection and classification of lung cancer (Elsevier, 2022) Dodia, S.; Annappa, A. Cancer is one of the most life-threatening diseases in the world.
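The subjective-answer item above scores a student answer against a reference using BERT embeddings together with cosine and Jaccard measures. BERT itself is too heavyweight for a listing sketch, so the snippet below substitutes plain vectors for the embeddings and shows only the two similarity measures:

```python
import numpy as np

def cosine_similarity(u, v, eps=1e-9):
    """Cosine of the angle between two vectors (embedding stand-ins)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def jaccard_similarity(text_a, text_b):
    """|A . B| / |A . B| over lowercase word sets of the two answers."""
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    return len(a & b) / len(a | b) if a | b else 1.0
```

In such a system, the cosine term captures semantic closeness of the embedded answers while the Jaccard term captures literal word overlap; a weighted combination of the two would yield the final score.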
Among the various types of cancer, the highest recorded mortality and morbidity rates are from lung cancer. Computer-aided diagnosis (CAD) systems are used to identify lung cancer nodules, and the development of reliable automated algorithms is important to provide doctors with a second opinion. A lung cancer diagnosis is performed in two steps: lung cancer nodule detection and classification. In nodule detection, nodules and non-nodules are identified from a given computed tomography (CT) scan; the detected nodules are then classified as cancerous or noncancerous. This work explores various machine learning classifiers for lung cancer classification, using a majority voting scheme to classify nodules, and presents an in-depth analysis of the performance of different machine learning algorithms. © 2023 Elsevier Inc. All rights reserved.

Item Optimizing Super-Resolution Generative Adversarial Networks (Springer Science and Business Media Deutschland GmbH, 2023) Jain, V.; Annappa, B.; Dodia, S. Image super-resolution is an ill-posed problem because many possible high-resolution solutions exist for a single low-resolution (LR) image. Traditional methods to solve this problem are fast and straightforward, but they fail when the scale factor is high or the data are noisy. With the development of machine learning algorithms, their application in this field has been studied, and they perform better than traditional methods. Many Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs) have been developed for this problem, and the Super-Resolution Generative Adversarial Network (SRGAN) has proved to be significant in this area. Although SRGAN produces good results with 4× upscaling, it has some shortcomings. This paper proposes an improved version of SRGAN with reduced computational complexity and training time.
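The majority voting scheme in the lung cancer chapter above combines the label predictions of several classifiers; a minimal sketch:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the most common label per sample across classifiers.

    `predictions` is a list of per-classifier label lists, all the same
    length; ties go to the label encountered first (Counter order).
    """
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*predictions)]
```

Each inner `votes` tuple holds one label per classifier for the same nodule, so the ensemble's decision is simply the modal label.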
The proposed model achieved a PSNR of 29.72 and an SSIM of 0.86, outperforming most recently developed systems. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
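The PSNR reported for the SRGAN variant above is derived from the mean squared error between the super-resolved image and the ground truth; a sketch for images on a [0, 1] intensity scale (SSIM involves local luminance/contrast statistics and is omitted here):

```python
import numpy as np

def psnr(reference, reconstructed, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((np.asarray(reference, float) -
                   np.asarray(reconstructed, float)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher is better: halving the per-pixel error raises PSNR by about 6 dB, so the reported 29.72 corresponds to a small average reconstruction error.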
