Please use this identifier to cite or link to this item:
Title: AI-Based Clinical Decision Support Systems Using Multimodal Healthcare Data
Authors: Mayya, Veena
Supervisors: S, Sowmya Kamath
Keywords: Clinical Decision Support Systems;Natural Language Processing;Computer Vision;Machine Learning;Healthcare Informatics
Issue Date: 2022
Publisher: National Institute of Technology Karnataka, Surathkal
Abstract: Healthcare analytics is a branch of data science that examines underlying patterns in healthcare data to identify ways in which clinical care can be improved, in terms of patient care, cost optimization, and hospital management. Towards this end, Clinical Decision Support Systems (CDSS) have received extensive research attention over the years. CDSS are intended to influence clinical decision making during patient care, and can be defined as "a link between health observations and health-related knowledge that influences treatment choices by clinicians for improved healthcare delivery". A CDSS aids physicians and other healthcare professionals with clinical decision-making tasks based on automated analysis of patient data and other sources of information. CDSS are evolving systems with the potential for wide applicability to improve patient outcomes and healthcare resource utilization. Recent breakthroughs in healthcare analytics show an emerging trend of applying artificial intelligence approaches to essential applications such as disease prediction, disease code assignment, disease phenotyping, and disease-related lesion segmentation. Despite the significant benefits offered by CDSS, several issues need to be overcome to achieve their full potential. There is substantial scope for improvement in patient data modelling methodologies and prediction models, particularly for unstructured clinical data.

This thesis discusses several approaches for developing decision support systems towards patient-centric predictive analytics on large multimodal healthcare data. Clinical data in the form of unstructured text, which is rich in patient-specific information, has largely remained unexplored and could potentially be used to facilitate effective CDSS development.
Effective code assignment for patient clinical records in a hospital plays a significant role in standardizing medical records, mainly for streamlining clinical care delivery, billing, and managing insurance claims. The current practice is manual coding, usually carried out by trained medical coders, making the process subjective, error-prone, inexact, and time-consuming. To alleviate this cost-intensive process, intelligent coding systems built on patients' unstructured electronic medical records (EMR) are critical. Towards this, various deep learning models have been proposed for improving diagnostic coding performance using patient clinical reports and discharge summaries. The approach involves multi-channel convolutional networks and label attention transformer architectures for automatic assignment of diagnostic codes. The label attention mechanism enables the direct extraction of textual evidence in medical documents that maps to the diagnostic codes.

Medical imaging data such as ultrasound, magnetic resonance imaging, computed tomography, positron emission tomography, X-ray, retinal photography, and slit-lamp microscopy play an important role in the early detection, diagnosis, and treatment of diseases. Presently, most imaging modalities are manually interpreted by expert clinicians for disease diagnosis. With the exponential increase in the volume of chronic patients, this manual inspection and interpretation increases the cognitive and diagnostic burden on healthcare professionals. Recently, machine learning and deep learning techniques have been utilized for designing computer-based analysis systems for medical images. Ophthalmology, pathology, radiology, and oncology are a few fields where deep learning techniques have been successfully leveraged for interpreting imaging data. Ophthalmology was the first field in healthcare to be revolutionized, and remains among the most explored.
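The label attention mechanism for diagnostic coding described above can be illustrated with a minimal NumPy sketch. This is not the thesis's implementation: the array names, dimensions, and the use of a single dot-product query per code are illustrative assumptions; the idea shown is only the core operation, in which each candidate diagnostic code attends over token representations to pool its own evidence vector.

```python
import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def label_attention(H, U):
    """Per-label attention pooling over token representations.

    H: (n_tokens, d)  token features, e.g. from a convolutional encoder
    U: (n_labels, d)  one learnable query vector per diagnostic code
    Returns V (n_labels, d), one document vector per label, and the
    (n_tokens, n_labels) attention map that highlights textual evidence.
    """
    scores = H @ U.T            # (n_tokens, n_labels) relevance scores
    A = softmax(scores, axis=0) # normalize over tokens, per label
    V = A.T @ H                 # (n_labels, d) label-specific pooling
    return V, A

rng = np.random.default_rng(0)
H = rng.normal(size=(30, 8))    # 30 tokens, 8-dim features (toy sizes)
U = rng.normal(size=(5, 8))     # 5 hypothetical candidate codes
V, A = label_attention(H, U)
print(V.shape, A.shape)         # (5, 8) (30, 5)
```

Because the attention map A is normalized per label, the highest-weighted tokens for a given code can be read off directly, which is what enables the evidence extraction mentioned above.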
Towards this, various deep learning models have been proposed for improving the performance of ocular disease detection systems that make use of fundoscopy and slit-lamp microscopy imaging data.

Patient data is recorded in multiple formats, including unstructured clinical notes, structured EHRs, and diagnostic images, resulting in multimodal data that together accounts for patients' demographic information, past history of illness, medical procedures performed, diseases diagnosed, etc. Most existing works limit their models to a single modality, such as structured text, unstructured text, or medical images; very few have utilized multimodal medical data. To address this, various deep learning models were designed that learn disease representations from multimodal patient data for early disease prediction. Scalability is ensured by incorporating content-based learning models that automatically generate diagnosis reports for identified lung diseases, reducing radiologists' cognitive burden.
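The abstract does not specify how the modalities are combined. A common baseline for multimodal disease prediction is late fusion: embed each modality separately, concatenate the embeddings, and apply a multi-label classification head. The sketch below assumes this baseline with random toy features; the feature sizes, label count, and function names are illustrative, not the thesis's architecture.

```python
import numpy as np

def fuse_and_predict(img_feat, txt_feat, W, b):
    """Late fusion: concatenate per-modality embeddings, then a linear
    head with sigmoid gives independent per-disease probabilities."""
    z = np.concatenate([img_feat, txt_feat])  # joint representation
    logits = W @ z + b
    return 1.0 / (1.0 + np.exp(-logits))      # multi-label sigmoid

rng = np.random.default_rng(1)
img_feat = rng.normal(size=16)   # e.g. CNN embedding of a chest X-ray
txt_feat = rng.normal(size=8)    # e.g. text-encoder embedding of a note
W = rng.normal(size=(4, 24))     # 4 hypothetical disease labels
b = np.zeros(4)

probs = fuse_and_predict(img_feat, txt_feat, W, b)
print(probs.shape)               # (4,)
```

Sigmoid rather than softmax is the natural choice here because a patient can have several diseases at once, so each label is scored independently.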
Appears in Collections: 1. Ph.D Theses

Files in This Item:
File: 187054IT004-Veena Mayya.pdf
Size: 79.44 MB
Format: Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.