Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
Item Classification of cardiac abnormalities using heart rate signals (2004) Acharya, A.U.; Kumar, A.; Subbanna Bhat, P.; Lim, C.M.; Iyengar, S.S.; Kannathal, N.; Krishnan, S.M.
The heart rate is a non-stationary signal, and its variation can contain indicators of current disease or warnings about impending cardiac disease. These indicators may be present at all times or may occur at random, during certain intervals of the day. However, studying and pinpointing abnormalities in large quantities of data collected over several hours is strenuous and time-consuming. Hence, heart rate variation measurement (instantaneous heart rate against time) has become a popular, non-invasive tool for assessing the autonomic nervous system. Computer-based analytical tools for the in-depth study and classification of data over day-long intervals can be very useful in diagnostics. This paper deals with the classification of cardiac rhythms using an artificial neural network and fuzzy relationships. The results indicate a high level of efficacy of the tools used, with an accuracy of 80-85%. © IFMBE: 2004.

Item Automated identification of diabetic retinopathy stages using digital fundus images (2008) Nayak, J.; Subbanna Bhat, P.S.; Acharya, R.; Lim, C.M.; Kagathi, M.
Diabetic retinopathy (DR) is caused by damage to the small blood vessels of the retina in the posterior part of the eye of the diabetic patient. The main stages of diabetic retinopathy are non-proliferative diabetic retinopathy (NPDR) and proliferative diabetic retinopathy (PDR). Retinal fundus photographs are widely used in the diagnosis and treatment of various eye diseases in clinics, and are also one of the main resources for mass screening of diabetic retinopathy. In this work, we propose a computer-based approach for the detection of diabetic retinopathy stages using fundus images.
Image preprocessing, morphological processing techniques and texture analysis methods are applied to the fundus images to extract features such as the area of hard exudates, the area of blood vessels and the contrast. Our protocol uses a total of 140 subjects, covering two stages of DR and normal cases. The extracted features are statistically significant (p<0.0001), with distinct mean±SD as shown in Table 1. These features are then used as input to an artificial neural network (ANN) for automatic classification. The detection results are validated by comparison with expert ophthalmologists. We demonstrate a classification accuracy of 93%, a sensitivity of 90% and a specificity of 100%. © 2007 Springer Science+Business Media, LLC.

Item Analysis of cortical rhythms in intracranial EEG by temporal difference operators during epileptic seizures (Elsevier Ltd, 2016) Malali, A.; Chaitanya, G.; Gowda, S.; Majumdar, K.
Brain oscillations have traditionally been studied by time-frequency analysis of electrophysiological signals. In this work we demonstrate the usefulness of two nonlinear combinations of differential operators on intracranial EEG (iEEG) recordings for studying abnormal oscillations in the human brain during intractable focal epileptic seizures. Each one-dimensional time-domain signal is visualized as the trajectory of a particle moving in a force field with one degree of freedom. The design of the temporal difference operators applied to the signals was inspired by the principles of classical Newtonian mechanics. The efficiency of one nonlinear combination of the operators in distinguishing the seizure segment from the background signal and artifacts was established, particularly for long seizures. The resulting automatic detection algorithm runs in linear time and detects a seizure with an average delay of 5.02 s after the electrographic onset, with a mean false positive rate of 0.05/h and 94% detection accuracy.
The area under the ROC curve was 0.959. Another nonlinear combination of differential operators detects spikes (peaks) and inverted spikes (troughs) in a signal irrespective of their shape and size. In a majority of cases, the simultaneous occurrence of spikes and inverted spikes across the focal channels was greater after the seizure offset than during the seizure, where the duration after the offset was taken to be equal to the duration of the seizure. This is explained in terms of GABAergic inhibition at seizure termination. © 2016 Elsevier Ltd. All rights reserved.

Item Recent Advancements in Retinal Vessel Segmentation (Springer New York LLC, 2017) Srinidhi, C.L.; Aparna, P.; Rajan, J.
Retinal vessel segmentation is a key step towards the accurate visualization, diagnosis, early treatment and surgery planning of ocular diseases. Over the last two decades, a tremendous amount of research has been dedicated to developing automated methods for segmenting blood vessels in retinal fundus images. Despite this, segmentation of retinal vessels remains a challenging task due to the presence of abnormalities, varying size and shape of the vessels, non-uniform illumination and anatomical variability between subjects. In this paper, we carry out a systematic review of the most recent advancements in retinal vessel segmentation methods published in the last five years. The objectives of this study are as follows: first, we discuss the most crucial preprocessing steps involved in accurate segmentation of vessels. Second, we review the most recent state-of-the-art retinal vessel segmentation techniques, classified into categories based on their main principle. Third, we quantitatively analyse these methods in terms of their sensitivity, specificity, accuracy and area under the curve, and discuss newly introduced performance metrics in the current literature.
Fourth, we discuss the advantages and limitations of the existing segmentation techniques. Finally, we provide insight into open problems and possible future directions towards building a successful computer-aided diagnostic system. © 2017, Springer Science+Business Media New York.

Item Combined radiogrammetry and texture analysis for early diagnosis of osteoporosis using Indian and Swiss data (Elsevier Ltd, 2018) Areeckal, A.S.; Kamath, J.; Zawadynski, S.; Kocher, M.; Sumam David, S.
Osteoporosis is a bone disorder characterized by bone loss and decreased bone strength. The most widely used technique for detecting osteoporosis is the measurement of bone mineral density (BMD) using dual-energy X-ray absorptiometry (DXA). However, DXA scans are expensive and not widely available in low-income economies. In this paper, we propose a low-cost pre-screening tool for the detection of low bone mass, using cortical radiogrammetry of the third metacarpal bone and trabecular texture analysis of the distal radius from hand and wrist radiographs. A segmentation algorithm is proposed to automatically locate and segment the third metacarpal bone and the distal radius region of interest (ROI). Cortical measurements such as combined cortical thickness (CCT), cortical area (CA), percent cortical area (PCA) and the Barnett-Nordin index (BNI) were taken from the shaft of the third metacarpal bone. Texture analysis of the trabecular network at the distal radius was performed using features obtained from the histogram, the gray-level co-occurrence matrix (GLCM) and the morphological gradient method (MGM). The significant cortical and texture features were selected using an independent-sample t-test and used to train classifiers to distinguish healthy subjects from people with low bone mass. The proposed pre-screening tool was validated on two ethnic groups: an Indian sample population and a Swiss sample population. Data from 134 subjects in the Indian sample and 65 subjects in the Swiss sample were analysed.
The proposed automatic segmentation approach shows a detection accuracy of 86% in locating the third metacarpal bone shaft and 90% in locating the distal radius ROI. Comparison of the automatic radiogrammetry with the ground truth provided by experts shows a mean absolute error of 0.04 mm for the cortical width of the healthy group, 0.12 mm for the cortical width of the low bone mass group, 0.22 mm for the medullary width of the healthy group, and 0.26 mm for the medullary width of the low bone mass group. An independent-sample t-test was used to select the most discriminant features as input for training the classifiers. Pearson correlation analysis of the extracted features against the DXA-BMD of the lumbar spine (DXA-LS) shows significantly high correlation values. Classifiers were trained with the most significant features in the Indian and Swiss sample data. A weighted KNN classifier shows the best test accuracy: 78% on the Indian sample data and 100% on the Swiss sample data. Hence, combined automatic radiogrammetry and texture analysis is shown to be an effective low-cost pre-screening tool for early diagnosis of osteoporosis. © 2018 Elsevier Ltd

Item Estimation of tumor parameters using neural networks for inverse bioheat problem (Elsevier Ireland Ltd, 2021) Majdoubi, J.; Iyer, A.S.; Ashique, A.M.; Arumuga Perumal, D.A.; Mahrous, Y.M.; Rahimi-Gorji, M.; Issakhov, A.
Background and objective: Some types of cancer cause rapid cell growth, while others cause cells to grow and divide at a slower rate. Certain forms of cancer result in visible growths called tumors. This work proposes an inverse estimation of the size and location of a tumor using a feedforward neural network (FFNN) model. Methods: The forward model is a 3D model of the breast induced with a tumor of various sizes at different locations within the breast, and it is solved using the Pennes equation. The data obtained from the simulation of the bioheat transfer are used to train the neural network.
To optimize the neural network architecture, the number of neurons in the hidden layer is varied to find the best fit, creating a relationship between the temperature profile and the tumor parameters that can be used to estimate the tumor parameters from a given temperature profile. Results: The simulations produced a temperature distribution profile that can be used to locate and determine the parameters of a cancerous tumor within the breast. The prediction accuracy demonstrated the capacity of the trained feedforward neural network to estimate the unknown parameters within an acceptable range of error. The model validation uses the root mean square error to quantify and minimize the prediction error. Conclusions: In this work, a non-intrusive method for the diagnosis of breast cancer was modelled, yielding conclusive results for the estimation of the tumor parameters. © 2021

Item LiverNet: efficient and robust deep learning model for automatic diagnosis of sub-types of liver hepatocellular carcinoma cancer from H&E stained liver histopathology images (Springer Science and Business Media Deutschland GmbH, 2021) Aatresh, A.A.; Alabhya, K.; Lal, S.; Kini, J.; Saxena, P.P.
Purpose: Liver cancer is one of the most common cancers in Asia, with a high mortality rate. A common method for liver cancer diagnosis is the manual examination of histopathology images. Owing to its laborious nature, we focus on deep learning methods for automatic diagnosis, which offer significant advantages over manual methods. In this paper, we propose a novel deep learning framework for multi-class cancer classification of liver hepatocellular carcinoma (HCC) tumor histopathology images, which improves on other competitive methods in both inference speed and classification quality. Method: The BreastNet architecture proposed by Togacar et al.
shows great promise in using convolutional block attention modules (CBAM) for effective cancer classification in H&E stained breast histopathology images. As part of our experiments with this framework, we studied the addition of atrous spatial pyramid pooling (ASPP) blocks to effectively capture multi-scale features in H&E stained liver histopathology data. We classify liver histopathology data into four classes: the non-cancerous class, low sub-type liver HCC tumor, medium sub-type liver HCC tumor, and high sub-type liver HCC tumor. To demonstrate the robustness and efficacy of our models, we report results on two liver histopathology datasets: a novel KMC dataset and the TCGA dataset. Results: Our proposed architecture outperforms state-of-the-art architectures for multi-class cancer classification of HCC histopathology images, not just in quality of classification but also in computational efficiency, on the novel KMC liver dataset and the publicly available TCGA-LIHC dataset. We considered precision, recall, F1-score, intersection over union (IoU), accuracy, number of parameters, and FLOPs as comparison metrics. Our experiments show improved classification performance along with added efficiency. LiverNet outperforms all other frameworks in every metric under comparison, with an approximate improvement of 2% in accuracy and F1-score on the KMC and TCGA-LIHC datasets. Conclusion: To the best of our knowledge, our work is among the first to provide concrete proof and demonstrate results for a successful deep learning architecture handling multi-class HCC histopathology image classification among various sub-types of liver HCC tumor. Our method achieves a high accuracy of 90.93% on the proposed KMC liver dataset, requiring only 0.5739 million parameters and 1.1934 million floating-point operations (FLOPs).
© 2021, CARS.

Item Deep neural models for automated multi-task diagnostic scan management - Quality enhancement, view classification and report generation (IOP Publishing Ltd, 2022) Karthik, K.; Kamath S., S.
The detailed physiological perspectives captured by medical imaging provide actionable insights that help doctors manage comprehensive patient care. However, the quality of such diagnostic image modalities is often degraded by mismanagement of the image capture process by poorly trained technicians and by older or poorly maintained imaging equipment. Further, a patient is often scanned at different orientations to capture the frontal, lateral and sagittal views of the affected areas. Given the large volume of diagnostic scans performed at a modern hospital, adequate documentation of such additional perspectives is mostly overlooked, even though it is an essential element of quality diagnostic and predictive analytics systems. Another crucial challenge affecting effective medical image data management is that diagnostic scans are essentially stored as unstructured data, lacking a well-defined processing methodology for intelligent image data management to support applications such as similar-patient retrieval and automated disease prediction. One solution is to incorporate automated diagnostic image descriptions of the observations/findings by leveraging computer vision and natural language processing. In this work, we present multi-task neural models capable of addressing these critical challenges. We employ ESRGAN, an image enhancement technique, to improve the quality and visualization of medical chest X-ray images, thereby substantially improving the potential for accurate diagnosis, automatic detection and region-of-interest segmentation.
We also propose a CNN-based model called ViewNet for predicting the view orientation of an X-ray image, and generate medical reports using an Xception network, thus facilitating a robust medical image management system for intelligent diagnosis applications. Experimental results are reported using standard metrics such as BRISQUE, PIQE and BLEU scores, indicating that the proposed models achieve excellent performance. Further, the proposed deep learning approaches enable diagnosis in less time, and their hybrid architecture shows significant potential for supporting many intelligent diagnosis applications. © 2021 IOP Publishing Ltd.

Item Swarm optimisation-based bag of visual words model for content-based X-ray scan retrieval (Inderscience Publishers, 2022) Karthik, K.; Kamath S., S.
Classification and retrieval of medical images (MedIR) are emerging applications of computer vision for enabling intelligent medical diagnostics. Medical images are multi-dimensional and require specialised processing to extract features from their manifold underlying content. Existing models often fail to consider the inherent characteristics of the data and have thus fallen short when applied to medical images. In this paper, we present a MedIR approach based on the bag of visual words (BoVW) model for content-based medical image retrieval. Dataset imbalance is a common issue in medical imaging models; hence, a balanced set of categories is drawn from an otherwise imbalanced dataset. The proposed BoVW model extracts features from each image, which are used to train a supervised machine learning classifier for X-ray medical image classification and retrieval. In experimental validation, the proposed model performed well, with a classification accuracy of 89.73% and good retrieval results using our filter-based approach. © 2022 Inderscience Enterprises Ltd.
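To illustrate the bag of visual words pipeline described in the last item, the following is a minimal sketch: local descriptors are clustered into a visual vocabulary, each image is represented as a histogram of visual-word occurrences, and a supervised classifier is trained on the histograms. The patch descriptors, vocabulary size and kNN classifier here are illustrative assumptions, not the configuration used in the paper.

```python
# Hedged sketch of a bag-of-visual-words (BoVW) image classifier.
# Raw intensity patches stand in for real local descriptors (e.g. keypoint
# features), and synthetic images stand in for X-ray data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def extract_patches(img, size=8):
    """Split a grayscale image into flattened non-overlapping patches."""
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, size)
                     for j in range(0, w - size + 1, size)])

def bovw_histogram(img, vocab):
    """Quantize patches against the visual vocabulary; return a normalized histogram."""
    words = vocab.predict(extract_patches(img).astype(np.float64))
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(np.float64)
    return hist / hist.sum()

rng = np.random.default_rng(0)
# Synthetic stand-in data: class 0 = dark images, class 1 = bright images.
train_imgs = [rng.normal(40 if y == 0 else 200, 10, (32, 32)) for y in (0, 1) * 10]
train_lbls = [0, 1] * 10

# Build the visual vocabulary by clustering all training patches.
vocab = KMeans(n_clusters=16, n_init=10, random_state=0).fit(
    np.vstack([extract_patches(im) for im in train_imgs]))

# Train a classifier on the BoVW histograms.
X = np.array([bovw_histogram(im, vocab) for im in train_imgs])
clf = KNeighborsClassifier(n_neighbors=3).fit(X, train_lbls)

# Classify an unseen bright image.
test_img = rng.normal(200, 10, (32, 32))
print("predicted class:", clf.predict([bovw_histogram(test_img, vocab)])[0])
```

The same histogram representation supports retrieval directly: ranking stored images by distance between their BoVW histograms and the query's histogram yields a content-based search over the collection.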

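Several items above (the diabetic retinopathy and osteoporosis studies) rely on gray-level co-occurrence matrix (GLCM) texture features such as contrast. The following is a minimal sketch of how such features are computed; the offset, quantization level and feature set are illustrative assumptions, not the exact settings used in those papers.

```python
# Hedged sketch of GLCM texture features (contrast and homogeneity).
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized co-occurrence counts of quantized gray levels at offset (dy, dx)."""
    q = (img.astype(np.float64) / img.max() * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[q[i, j], q[i + dy, j + dx]] += 1
    return m / m.sum()

def contrast(p):
    """Sum of squared gray-level differences, weighted by co-occurrence probability."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

def homogeneity(p):
    """Closeness of the co-occurrence distribution to the GLCM diagonal."""
    i, j = np.indices(p.shape)
    return float(np.sum(p / (1.0 + np.abs(i - j))))

flat = np.full((16, 16), 100)                      # uniform texture
stripes = np.arange(256).reshape(16, 16) % 2 * 255  # alternating columns
print(contrast(glcm(flat)), contrast(glcm(stripes)))
```

A uniform region yields zero contrast and maximal homogeneity, while a rapidly alternating texture yields high contrast; it is this kind of separation that lets GLCM statistics distinguish, for example, healthy from degraded trabecular bone texture.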