Faculty Publications

Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736

Publications by NITK Faculty

  • Item
    Deep neural models for automated multi-task diagnostic scan management - Quality enhancement, view classification and report generation
    (IOP Publishing Ltd, 2022) Karthik, K.; Kamath S., S.
    The detailed physiological perspectives captured by medical imaging provide actionable insights that help doctors manage comprehensive patient care. However, the quality of such diagnostic image modalities is often affected by mismanagement of the image capturing process by poorly trained technicians and by older or poorly maintained imaging equipment. Further, a patient is often scanned at different orientations to capture the frontal, lateral and sagittal views of the affected areas. Due to the large volume of diagnostic scans performed at a modern hospital, adequate documentation of such additional perspectives is mostly overlooked, even though it is an essential element of quality diagnostic systems and predictive analytics systems. Another crucial challenge affecting effective medical image data management is that diagnostic scans are essentially stored as unstructured data, lacking a well-defined processing methodology for enabling intelligent image data management to support applications such as similar-patient retrieval and automated disease prediction. One solution is to incorporate automated diagnostic image descriptions of the observations/findings by leveraging computer vision and natural language processing. In this work, we present multi-task neural models that address these critical challenges. We apply ESRGAN, an image enhancement technique, to improve the quality and visualization of medical chest X-ray images, thereby substantially improving the potential for accurate diagnosis, automatic detection and region-of-interest segmentation. We also propose a CNN-based model called ViewNet for predicting the view orientation of the X-ray image, and generate medical reports using an Xception network, thus facilitating a robust medical image management system for intelligent diagnosis applications.
    Experimental results are reported using standard metrics such as BRISQUE, PIQE and BLEU scores, indicating that the proposed models achieve excellent performance. Further, the proposed deep learning approaches enable diagnosis in less time, and their hybrid architecture shows significant potential for supporting many intelligent diagnosis applications. © 2021 IOP Publishing Ltd.
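    The three-stage pipeline this abstract describes (quality enhancement, then view classification, then report generation) could be organised as a simple sequence of stages. The sketch below is purely illustrative pure-Python scaffolding, not the paper's models: the stage bodies are stubs standing in for the ESRGAN, ViewNet and Xception components, and every name (`ScanRecord`, `pipeline`, the intensity threshold) is a hypothetical placeholder.

```python
from dataclasses import dataclass

@dataclass
class ScanRecord:
    pixels: list          # raw scan data (placeholder for an image array)
    view: str = ""        # predicted orientation: frontal / lateral / ...
    report: str = ""      # generated findings text

def enhance(scan: ScanRecord) -> ScanRecord:
    # stub for GAN-based enhancement (the paper uses ESRGAN)
    scan.pixels = [min(p * 1.1, 255) for p in scan.pixels]
    return scan

def classify_view(scan: ScanRecord) -> ScanRecord:
    # stub for a CNN view classifier; here a trivial rule on mean intensity
    mean = sum(scan.pixels) / len(scan.pixels)
    scan.view = "frontal" if mean > 100 else "lateral"
    return scan

def generate_report(scan: ScanRecord) -> ScanRecord:
    # stub for an encoder-decoder report generator (the paper uses Xception)
    scan.report = f"{scan.view} chest x-ray; no automated findings (stub)."
    return scan

def pipeline(pixels):
    """Run the three stages in order on one scan."""
    scan = ScanRecord(pixels=pixels)
    for stage in (enhance, classify_view, generate_report):
        scan = stage(scan)
    return scan
```

    The value of this shape is that each stage can be swapped for a trained model independently while downstream stages keep consuming the same record.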
  • Item
    Swarm optimisation-based bag of visual words model for content-based X-ray scan retrieval
    (Inderscience Publishers, 2022) Karthik, K.; Kamath S., S.
    Classification and retrieval of medical images (MedIR) are emerging applications of computer vision for enabling intelligent medical diagnostics. Medical images are multi-dimensional and require specialised processing to extract features from their manifold underlying content. Existing models often fail to consider the inherent characteristics of the data and have thus often fallen short when applied to medical images. In this paper, we present a MedIR approach based on the bag of visual words (BoVW) model for content-based medical image retrieval. Dataset imbalance is a common issue for medical models; hence, the approach also considers drawing a balanced set of categories from an imbalanced dataset. The proposed BoVW model extracts features from each image, which are used to train a supervised machine learning classifier for X-ray medical image classification and retrieval. During experimental validation, the proposed model performed well, with a classification accuracy of 89.73% and good retrieval results using our filter-based approach. © 2022 Inderscience Enterprises Ltd.
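    The BoVW representation mentioned above follows a standard two-step recipe: cluster local image descriptors into a visual vocabulary (codebook), then describe each image as a normalised histogram of its nearest visual words. The plain-Python sketch below illustrates that generic technique only; it is not the paper's swarm-optimised implementation, descriptor extraction and the downstream classifier are omitted, and the function names are hypothetical.

```python
import random
from collections import Counter

def kmeans(points, k, iters=20, seed=0):
    """Build a visual vocabulary by clustering local descriptors."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # assign each descriptor to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(p, centroids[i])))
            clusters[idx].append(p)
        # recompute each centroid as the mean of its cluster
        for i, c in enumerate(clusters):
            if c:
                centroids[i] = tuple(sum(col) / len(c) for col in zip(*c))
    return centroids

def bovw_histogram(descriptors, codebook):
    """Quantise one image's descriptors into a visual-word histogram."""
    k = len(codebook)
    counts = Counter(
        min(range(k),
            key=lambda i: sum((a - b) ** 2 for a, b in zip(d, codebook[i])))
        for d in descriptors)
    total = sum(counts.values()) or 1
    return [counts.get(i, 0) / total for i in range(k)]
```

    The resulting fixed-length histograms are what a supervised classifier (an SVM, for instance) would then be trained on for classification and retrieval.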
  • Item
    Multi-task deep neural network models for learning COVID-19 disease representations from multimodal data
    (Inderscience Publishers, 2023) Mayya, V.; Karthik, K.; Karadka, K.P.; Kamath S., S.S.
    Over the continued course of the COVID-19 pandemic, a significant volume of expert-written diagnosis reports has accumulated, capturing a multitude of symptoms and observations on diagnosed COVID-19 cases, along with expert-validated chest X-ray scans. The utility of the rich, latent information embedded in such unstructured expert-written diagnosis reports, and its importance as a source of valuable disease-specific information, has been explored to only a very limited extent. In this work, a convolutional attention-based dense (CAD) neural model for COVID-19 prediction is proposed. The model is trained on rich disease-specific parameters extracted from chest X-ray images and expert-written diagnostic text reports to support evidence-based diagnosis. Scalability is ensured by incorporating content-based learning models for automatically generating diagnosis reports for identified COVID-19 cases, reducing radiologists' cognitive burden. Experimental evaluation showed that multimodal patient data plays a vital role in diagnosing early-stage cases, thus helping hasten the diagnosis process. © 2023 Inderscience Enterprises Ltd.
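    The multimodal idea the abstract describes (combining features from the X-ray image with features from the diagnosis-report text before a dense prediction layer) can be illustrated with a minimal early-fusion sketch. Everything below is an illustrative stand-in, not the paper's CAD model: the symptom vocabulary, the image feature vector and the weights are hypothetical examples.

```python
def text_features(report: str, vocab: list) -> list:
    """Bag-of-words indicator features over a small symptom vocabulary."""
    words = set(report.lower().split())
    return [1.0 if term in words else 0.0 for term in vocab]

def fuse(image_feats: list, text_feats: list) -> list:
    """Early fusion: concatenate the two modality vectors."""
    return list(image_feats) + list(text_feats)

def dense_score(features: list, weights: list, bias: float = 0.0) -> float:
    """One dense unit: weighted sum + bias (a logit for 'COVID-19 positive')."""
    return sum(f * w for f, w in zip(features, weights)) + bias
```

    In a trained model the image features would come from a CNN and the weights from gradient descent; the point here is only the shape of the fusion, where both modalities contribute to a single prediction.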