Conference Papers
Permanent URI for this collection: https://idr.nitk.ac.in/handle/123456789/28506
6 results
Item: Infant Brain MRI Segmentation Using Deep Volumetric U-Net with Gamma Transformation (Springer Science and Business Media Deutschland GmbH, 2023). Yeshwanth, G.S.; Annappa, B.; Dodia, S.; Manoj Kumar, M.V.
The growth of the brain from infancy to adolescence is complex and unfolds over a long period, involving processes such as neural induction, migration, and myelination. Studying this development therefore calls for automatic tools. The brain consists mainly of three tissue types: white matter, gray matter, and cerebrospinal fluid. Quantitative tools that segment brain MRI images into these three parts would thus be a great boon to the medical community. Although tools exist for segmenting adult MRI images, segmentation at six months of age is challenging because white matter and gray matter are almost indistinguishable at that stage of brain development. Segmentation of brain MRI images can identify specific patterns that contribute to healthy brain development. The dataset used to address this problem was taken from the iSeg-2019 challenge conducted by MICCAI. Manual segmentation of MRI requires expert doctors; advances in computer vision techniques can replace this time-consuming work. This paper proposes a deep learning model for image segmentation using a three-dimensional U-Net. The proposed model gives Dice scores of 93.75, 88.24, and 85.64 for cerebrospinal fluid, gray matter, and white matter, respectively. The paper also presents experimental results for U-Net and attention U-Net with different modifications.
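The two ingredients named in the title and results above, gamma transformation for intensity preprocessing and the Dice score for evaluation, can be sketched in plain Python. This is an illustrative reconstruction, not the authors' code; the `gamma` value is an assumed example, as the paper does not state the exponent used.

```python
def gamma_transform(voxels, gamma=0.8):
    """Gamma intensity transformation: rescale intensities to [0, 1],
    then raise each to the power gamma to stretch tissue contrast.
    gamma=0.8 is an illustrative choice, not taken from the paper."""
    lo, hi = min(voxels), max(voxels)
    span = (hi - lo) or 1.0  # avoid division by zero on constant input
    return [((v - lo) / span) ** gamma for v in voxels]

def dice_score(pred, target):
    """Dice similarity coefficient between two binary masks (flat lists)."""
    inter = sum(1 for p, t in zip(pred, target) if p and t)
    return 2.0 * inter / (sum(pred) + sum(target))
```

Per-class Dice (as reported for CSF, gray matter, and white matter) is obtained by binarizing the predicted label map one class at a time and scoring each mask against the ground truth.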
© 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

Item: Classification of Skin Cancer Images using Lightweight Convolutional Neural Network (Institute of Electrical and Electronics Engineers Inc., 2023). Sandeep Kumar, T.; Annappa, B.; Dodia, S.
The skin is the body's most powerful shield, protecting the internal organs from external attack. This important organ is attacked by a diverse range of microbes such as viruses, fungi, and bacteria, which cause extensive damage to the skin; even dust plays an important role in damaging it. Every year, many people worldwide suffer from skin diseases, some of which are contagious and spread very fast. Because skin diseases are so varied, distinguishing them and providing treatment requires considerable practice on the doctor's part. To automate this process, several deep learning models have been applied in recent years. This paper demonstrates an efficient, lightweight modified SqueezeNet deep learning model on the HAM10000 dataset for skin cancer classification. The model outperforms state-of-the-art models with fewer parameters: compared to existing deep learning models, this SqueezeNet variant achieves 99.7%, 97.7%, and 97.04% train, validation, and test accuracy, respectively, using only 0.13 million parameters. © 2023 IEEE.

Item: An Efficient Deep Transfer Learning Approach for Classification of Skin Cancer Images (Springer Science and Business Media Deutschland GmbH, 2023). Naik, P.P.; Annappa, B.; Dodia, S.
Prolonged exposure to the sun can cause skin cancer, an abnormal proliferation of skin cells. Early detection of this illness requires the classification of dermatoscopic images, making it an enticing research problem. Deep learning plays a crucial role in efficient dermoscopic analysis.
A modified version of MobileNetV2 is proposed for classifying skin cancer images into seven classes. The proposed deep learning model employs transfer learning and various data augmentation techniques to classify skin lesions more accurately than existing models. To improve the classifier's performance, data augmentation is applied to the HAM10000 (Human Against Machine) dataset, which covers seven different kinds of skin cancer. The proposed model obtained a training accuracy of 96.56% and a testing accuracy of 93.11%, with fewer parameters than existing methods. The aim of the study is to aid clinical dermatologists in making more accurate diagnoses of skin lesions and in the early detection of skin cancer. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.

Item: Optimizing Super-Resolution Generative Adversarial Networks (Springer Science and Business Media Deutschland GmbH, 2023). Jain, V.; Annappa, B.; Dodia, S.
Image super-resolution is an ill-posed problem because many possible high-resolution solutions exist for a single low-resolution (LR) image. Traditional methods for this problem are fast and straightforward, but they fail when the scale factor is high or the data is noisy. With the development of machine learning, many Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs) have been applied to this field and perform better than traditional methods; among them, the Super-Resolution Generative Adversarial Network (SRGAN) has proved significant. Although SRGAN produces good results at 4× upscaling, it has some shortcomings. This paper proposes an improved version of SRGAN with reduced computational complexity and training time. The proposed model achieved a PSNR of 29.72 and an SSIM value of 0.86.
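The PSNR and SSIM figures quoted above follow standard definitions that can be sketched in plain Python. This is a minimal illustration, not the paper's evaluation code: it treats images as flat lists scaled to [0, 1], and the SSIM here uses global image statistics rather than the sliding-window form usually reported for super-resolution benchmarks.

```python
import math

def psnr(ref, out, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = sum((r - o) ** 2 for r, o in zip(ref, out)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=1.0):
    """Structural similarity from global statistics (a simplification of
    the windowed SSIM); 1.0 means the images are identical."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For 8-bit images, the same formulas apply with `max_val=255`.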
The proposed work outperforms most recently developed systems. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

Item: Cross-Database Facial Expression Recognition using CNN with Attention Mechanism (Institute of Electrical and Electronics Engineers Inc., 2023). Chandra, J.; Annappa, B.; Rashmi Adyapady, R.
Facial expression is one of the most effective and universal ways to express emotions and intentions; it reflects what a person is thinking or experiencing. Expression recognition is therefore a key aspect of understanding non-verbal communication and interpreting emotions in social interactions. Some emotions are easily confused, and separating their features is difficult because they share the same feature space; for example, distinguishing fear, anger, and disgust is confusing. This work improves the model's class-wise performance so that each class is detected correctly. A distinct combination of deep learning models, such as ResNet, XceptionNet, and DenseNet, is used to evaluate performance. The Real-world Affective Faces Database (RAF-DB), the Japanese Female Facial Expression (JAFFE) dataset, and Facial Expression Recognition 2013 Plus (FER+) are used to evaluate the model. The proposed model achieved better results and overcame the limitations of previous work: CDE's performance on the RAF-DB and FER+ evaluations was significantly better than current state-of-the-art (SOTA) methods, with accuracy improvements of 5.18% and 3.98%, respectively. © 2023 IEEE.

Item: Abdominal Multi-Organ Segmentation Using Federated Learning (Institute of Electrical and Electronics Engineers Inc., 2024). Yadav, G.; Annappa, B.; Sachin, D.N.
Multi-organ segmentation refers to precisely delineating and identifying multiple organs or structures within medical images, such as Computed Tomography (CT) scans or Magnetic Resonance Imaging (MRI), accurately outlining the boundaries and regions of each organ.
Medical imaging is crucial to understanding and diagnosing a wide range of illnesses, and accurate multi-organ image segmentation is often required for successful analysis. Because medical data are delicate, traditional methods for multi-organ segmentation that centralize data present serious privacy problems: the centralized training strategy raises concerns about patient confidentiality, data security, and regulatory compliance, impeding innovation and collaborative efforts in healthcare. The development of deep-learning-based image segmentation algorithms has also been hindered by the lack of fully annotated datasets, an issue exacerbated in multi-organ segmentation. Federated Learning (FL) addresses these privacy concerns by enabling model training across decentralized institutions without sharing raw data. Our proposed FL-based model for CT scans ensures data privacy while achieving accurate multi-organ segmentation. Leveraging FL techniques, this paper collaboratively trains segmentation models on local datasets held by distinct medical institutions. The expected outcomes encompass achieving high Dice Similarity Coefficient (DSC) metrics and validating the efficacy of the proposed FL approach in attaining precise and accurate segmentation across diverse medical imaging datasets. © 2024 IEEE.
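Federated training of the kind described above typically aggregates client models with federated averaging (FedAvg): each institution trains locally and sends only parameters to a server, which averages them weighted by local dataset size. The sketch below is a minimal illustration of that aggregation step, with model parameters represented as flat lists; it is not the paper's implementation.

```python
def fedavg(client_params, client_sizes):
    """Federated averaging: combine per-client model parameters into a
    global model, weighting each client by its local dataset size, so
    raw patient scans never leave the institution."""
    total = float(sum(client_sizes))
    agg = [0.0] * len(client_params[0])
    for params, n in zip(client_params, client_sizes):
        weight = n / total
        for i, p in enumerate(params):
            agg[i] += weight * p
    return agg
```

In a full training round, the server broadcasts `agg` back to every institution as the new global model before the next round of local training, and the cycle repeats until convergence.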
