Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
Search Results
2 results
Item: Machine learning for mobile wound assessment (SPIE, 2018)
Kamath, S.; Sirazitdinova, E.; Deserno, T.M.

Chronic wounds affect millions of people around the world. In particular, elderly persons in home care may develop decubitus. Here, mobile image acquisition and analysis can provide valuable assistance. We develop a system for mobile wound capture using mobile devices such as smartphones. The photographs are acquired with the integrated camera of the device, then calibrated and processed to determine the size of the various tissue types present in a wound, i.e., necrotic, sloughy, and granular tissue. A random forest classifier based on various color and texture features is used for this task. These features are Sobel, Hessian, membrane projections, variance, mean, median, anisotropic diffusion, and bilateral as well as Kuwahara filters. The resulting probability output is thresholded using the Otsu technique. The similarity between manual ground-truth labeling and the classification is measured. The results are compared to those achieved with a basic color-thresholding technique, as well as those produced by an SVM classifier. The fast random forest was found to produce better results, and its performance improves further when the method is applied only to the wound regions with the background subtracted. Mean similarity is 0.89, 0.39, and 0.44 for necrotic, sloughy, and granular tissue, respectively. Although the training phase is time consuming, the trained classifier runs fast enough to be implemented on the mobile device. This will allow comprehensive monitoring of skin lesions and wounds. © 2018 SPIE.

Item: Multistage Image Reconstruction and Attention-Based Semi-Supervised Learning for Medical Image Segmentation (SAGE Publications Ltd, 2025)
Gawas, P.; Kamath S, S.; Singh, A.; Gurupur, V.

Automated segmentation of medical images is critical in detecting and diagnosing various conditions. In recent years, supervised deep learning (DL) techniques have been widely researched; however, their application is often limited by the availability of annotated data in the medical domain. To address this, recent studies have explored semi-supervised techniques, though very few of these works focus on skin-lesion segmentation. In addition, they struggle to effectively capture the contextual features needed to delineate the region of interest from the surrounding tissue, which is crucial for accurate segmentation. In this article, a semi-supervised approach for medical image segmentation called MIRA (Medical Image Reconstruction and Analysis) is proposed, which uses an adaptive-attention U-Net (AA-U-Net) trained on pseudo-labels generated with a lightweight feature-consistent encoder-decoder network (FCED-Net) to address these challenges. A case study focusing on the precise segmentation of malignant skin lesions is considered for our experiments, as the scarcity of extensive annotated dermatology data limits the effectiveness of traditional DL models. The proposed pipeline is validated and tested on two standard datasets, ISIC2016 and PH2. With only 50% annotated samples, the proposed approach demonstrated promising performance, with DSC, IoU, and accuracy of 0.96, 0.92, and 0.85 on ISIC2016 and 0.93, 0.88, and 0.93 on cross-data testing with the PH2 dataset. When benchmarked against leading-edge models trained on 100% labeled data, MIRA achieved promising results and even outperformed them in some cases. These findings show that it can significantly reduce manual annotation requirements while achieving segmentation performance comparable to models trained on fully annotated skin-lesion data. © The Author(s) 2025
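The first abstract mentions thresholding the random forest's probability output with the Otsu technique. As a point of reference only (this is a generic NumPy sketch of Otsu's method, not the authors' implementation), the step can be illustrated as follows:

```python
import numpy as np

def otsu_threshold(probs, bins=256):
    """Otsu's threshold over a 1-D array of probabilities in [0, 1].

    Picks the bin center that maximizes the between-class variance
    of the background/foreground split.
    """
    hist, edges = np.histogram(probs, bins=bins, range=(0.0, 1.0))
    hist = hist.astype(np.float64)
    centers = (edges[:-1] + edges[1:]) / 2.0
    total = hist.sum()

    w_b = np.cumsum(hist)                 # cumulative background weight
    m_b = np.cumsum(hist * centers)       # cumulative background mass
    w_f = total - w_b                     # foreground weight
    valid = (w_b > 0) & (w_f > 0)

    mean_b = m_b / np.where(w_b == 0, 1.0, w_b)
    mean_f = (m_b[-1] - m_b) / np.where(w_f == 0, 1.0, w_f)
    between = w_b * w_f * (mean_b - mean_f) ** 2
    between[~valid] = -1.0                # exclude degenerate splits
    return centers[np.argmax(between)]
```

Pixels whose class probability exceeds the returned threshold would then be assigned to the corresponding tissue mask.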

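The second abstract's pipeline trains AA-U-Net on pseudo-labels produced by a teacher network (FCED-Net). The confidence-filtering idea common to pseudo-label methods can be sketched generically; the function name, threshold value, and array shapes below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def select_pseudo_labels(probs, tau=0.9):
    """Keep only unlabeled samples whose teacher confidence reaches tau.

    probs: (n_samples, n_classes) teacher softmax outputs.
    Returns the indices of retained samples and their hard pseudo-labels,
    which a student network would then train on as if they were ground truth.
    """
    confidence = probs.max(axis=1)        # top class probability per sample
    labels = probs.argmax(axis=1)         # hard label per sample
    keep = np.flatnonzero(confidence >= tau)
    return keep, labels[keep]
```

In a segmentation setting the same filtering is typically applied per pixel rather than per sample, discarding low-confidence regions from the pseudo-label mask.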