Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
Search Results
3 results
Item Deep neural models for automated multi-task diagnostic scan management - quality enhancement, view classification and report generation (NLM (Medline), 2021) Karthik, K.; Kamath S., S.

The detailed physiological perspectives captured by medical imaging provide actionable insights that help doctors manage comprehensive patient care. However, the quality of such diagnostic image modalities is often degraded by mismanagement of the image-capturing process by poorly trained technicians and by older or poorly maintained imaging equipment. Further, a patient is often scanned at different orientations to capture the frontal, lateral and sagittal views of the affected areas. Due to the large volume of diagnostic scans performed at a modern hospital, adequate documentation of these additional perspectives is mostly overlooked, even though it is an essential element of quality diagnostic and predictive analytics systems. Another crucial challenge for effective medical image data management is that diagnostic scans are essentially stored as unstructured data, lacking a well-defined processing methodology for intelligent image data management to support applications such as similar-patient retrieval, automated disease prediction, etc. One solution is to incorporate automated diagnostic image descriptions of the observations/findings by leveraging computer vision and natural language processing. In this work, we present multi-task neural models capable of addressing these critical challenges. We propose an ESRGAN-based image enhancement technique for improving the quality and visualization of medical chest x-ray images, thereby substantially improving the potential for accurate diagnosis, automatic detection and region-of-interest segmentation.
We also propose a CNN-based model called ViewNet for predicting the view orientation of the x-ray image, and generate the medical report using an Xception net, thus facilitating a robust medical image management system for intelligent diagnosis applications. Experimental results are reported using standard metrics such as BRISQUE, PIQE and BLEU scores, indicating that the proposed models achieve excellent performance. Further, the proposed deep learning approaches enable diagnosis in less time, and their hybrid architecture shows significant potential for supporting many intelligent diagnosis applications. © 2021 IOP Publishing Ltd.

Item Deep neural models for automated multi-task diagnostic scan management - Quality enhancement, view classification and report generation (IOP Publishing Ltd, 2022) Karthik, K.; Kamath S., S.

Item Multi-task deep neural network models for learning COVID-19 disease representations from multimodal data (Inderscience Publishers, 2023) Mayya, V.; Karthik, K.; Karadka, K.P.; Kamath S., S.S.

Over the continued course of the COVID-19 pandemic, a significant volume of expert-written diagnosis reports has accumulated, capturing a multitude of symptoms and observations on diagnosed COVID-19 cases, along with expert-validated chest X-ray scans. The rich latent information embedded in such unstructured expert-written diagnosis reports, and its importance as a source of valuable disease-specific information, has been explored only to a limited extent. In this work, a convolutional attention-based dense (CAD) neural model for COVID-19 prediction is proposed.
The model is trained on rich disease-specific parameters extracted from chest X-ray images and expert-written diagnostic text reports to support evidence-based diagnosis. Scalability is ensured by incorporating content-based learning models that automatically generate diagnosis reports for identified COVID-19 cases, reducing radiologists' cognitive burden. Experimental evaluation showed that multimodal patient data plays a vital role in diagnosing early-stage cases, thus helping to hasten the diagnosis process. © 2023 Inderscience Enterprises Ltd.
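The scan-management papers above report BLEU scores for the generated reports (alongside BRISQUE and PIQE, which are image-quality metrics and need an actual image). As a rough illustration of what BLEU measures, here is a minimal, self-contained sketch of the metric; the papers' reported results would use standard evaluation tooling, and the sentences below are invented examples, not outputs of the proposed models.

```python
import math
from collections import Counter

def bleu(reference, candidate, max_n=4):
    """Geometric mean of clipped n-gram precisions times a brevity penalty."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    log_p = 0.0
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())  # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        log_p += math.log(max(overlap, 1e-9) / total)  # avoid log(0)
    # Penalize candidates shorter than the reference.
    brevity = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return brevity * math.exp(log_p / max_n)

# Invented example sentences for illustration only.
ref = "no acute cardiopulmonary abnormality is seen".split()
print(round(bleu(ref, ref), 3))  # an identical report scores 1.0
```

Dropping a word from the candidate lowers both the n-gram precisions and the brevity penalty, so the score falls below 1.0.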

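The third listing trains on features drawn from both chest X-rays and diagnostic text reports. The CAD model's actual architecture is not detailed in the abstract; the following is only a generic sketch of multimodal feature fusion with a linear classification head, with all dimensions, values and weights invented for illustration.

```python
import numpy as np

# Invented dimensions and random values stand in for real extracted features.
rng = np.random.default_rng(0)
image_features = rng.normal(size=64)  # e.g. a CNN embedding of the chest X-ray
text_features = rng.normal(size=32)   # e.g. an encoding of the report text

# Simple early fusion: concatenate the two modalities into one vector.
fused = np.concatenate([image_features, text_features])

# A linear head with a softmax over {negative, positive} stands in for the
# paper's dense classification layers; W here is random, not trained.
W = rng.normal(size=(2, fused.size)) * 0.1
logits = W @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)  # (2,)
```

Concatenation is the simplest fusion choice; attention-based models such as the CAD network instead learn how much weight each modality's features should receive.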