Conference Papers

Permanent URI for this collection: https://idr.nitk.ac.in/handle/123456789/28506

Search Results

Now showing 1 - 10 of 12
  • Item
    Medical image segmentation using improved mountain clustering technique version-2
    (2010) Verma, N.K.; Roy, A.; Vasikarla, S.
This paper proposes Improved Mountain Clustering version-2 (IMC-2) based medical image segmentation. The proposed technique is a more powerful approach to diagnosing diseases such as brain tumors, tooth decay, lung cancer, and tuberculosis from medical images. The IMC-2 based segmentation approach has been applied to various categories of images, including MRI images, dental X-rays, and chest X-rays, and compared with some widely used segmentation techniques such as K-means, FCM, and EM, as well as with IMC-1. The performance of all these segmentation approaches is compared using a widely accepted validation measure, the Global Silhouette Index. The segments obtained from the above-mentioned approaches have also been evaluated visually. © 2010 IEEE.
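The Global Silhouette Index used as the validation measure here can be sketched as follows. This is a minimal illustrative implementation (not the authors' code), assuming the common definition: per-sample silhouette (b - a) / max(a, b), averaged per cluster and then over clusters.

```python
import numpy as np

def global_silhouette(X, labels):
    """Global Silhouette Index: mean over clusters of the average per-sample
    silhouette (b - a) / max(a, b), where a is the mean intra-cluster distance
    and b is the mean distance to the nearest other cluster."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    n = len(X)
    # Pairwise Euclidean distances between all samples
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    clusters = np.unique(labels)
    s = np.zeros(n)
    for i in range(n):
        same = labels == labels[i]
        others = same & (np.arange(n) != i)
        a = D[i, others].mean() if others.any() else 0.0
        b = min(D[i, labels == c].mean() for c in clusters if c != labels[i])
        s[i] = 0.0 if max(a, b) == 0 else (b - a) / max(a, b)
    # Average silhouette per cluster, then average over clusters
    return float(np.mean([s[labels == c].mean() for c in clusters]))
```

Values close to 1 indicate compact, well-separated segments, which is how the index ranks the competing segmentation methods in the paper.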
  • Item
    Efficient fuzzy clustering based approach to brain tumor segmentation on MR images
    (2011) Arakeri, M.P.; Guddeti, G.
Image segmentation is one of the most vital steps in medical applications. Conventional fuzzy c-means (FCM) clustering is the most widely used unsupervised clustering method for brain tumor segmentation on magnetic resonance (MR) images. However, the major limitations of conventional FCM are its high computational cost and its sensitivity to the initial cluster centers. In this paper, we present a novel, efficient FCM algorithm that eliminates these drawbacks. The proposed algorithm incorporates the distribution of gray-level information in the image and a new objective function that ensures better stability and compactness of clusters. Experiments are conducted on brain MR images to investigate the effectiveness of the proposed method in segmenting brain tumors. The conventional FCM and the proposed method are compared to explore the efficiency and accuracy of the proposed method. © 2011 Springer-Verlag.
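For reference, the conventional FCM baseline that this paper improves on can be sketched as below. This is the textbook algorithm (random initialization, fuzzifier m, alternating center and membership updates), not the paper's modified variant.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means: returns cluster centers and the
    fuzzy membership matrix U of shape (c, n)."""
    rng = np.random.default_rng(seed)
    X = np.atleast_2d(np.asarray(X, dtype=float))
    if X.shape[0] == 1:          # accept 1-D intensity vectors
        X = X.T
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)           # memberships of each sample sum to 1
    for _ in range(iters):
        Um = U ** m
        # Centers are membership-weighted means
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=-1) + 1e-12
        # Membership update: u_ik proportional to d_ik^(-2/(m-1))
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)
    return centers, U
```

The random initialization of U is exactly the sensitivity the abstract criticizes: different seeds can converge to different local minima, which motivates incorporating the gray-level distribution instead.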
  • Item
    A parallel segmentation of brain tumor from magnetic resonance images
    (2012) Dessai, V.S.; Arakeri, M.P.; Guddeti, G.
Medical image segmentation is nowadays at the core of medical image analysis and supports computer-aided diagnosis, surgical planning, intra-operative guidance, and postoperative assessment. Considerable research effort has gone into developing effective brain MR (magnetic resonance) image tumor segmentation methods in past years. However, the algorithms proposed so far are time consuming because they involve a large amount of mathematical computation, and serial segmentation of multiple MRI slices (usually required for 3D visualization) scales poorly with the number of slices. This creates a need for improvement as far as time complexity is concerned. This paper proposes a methodology that combines K-means clustering and morphological operations for parallel segmentation of multiple MRI slices from a single patient. Segmentation of multiple MRI slices for tumor extraction plays a major role in 3D (three-dimensional) visualization and serves as its input. The proposed framework follows the SIMD (Single Instruction Multiple Data) model: since the segmentation of each slice is independent of the others, the slices can be processed in parallel, and multithreading speeds up the entire process. The framework also involves no inter-process communication, which saves further time. © 2012 IEEE.
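The slice-parallel structure described above can be sketched as follows. This is an illustrative reading of the idea, not the authors' implementation: a plain 1-D K-means on pixel intensities, mapped over independent slices by a thread pool (a real deployment might prefer processes, since CPython threads share the GIL and numpy only partially releases it).

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def kmeans_labels(pixels, k=3, iters=20, seed=0):
    """Plain 1-D K-means on pixel intensities; returns a label per pixel."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(np.unique(pixels), size=k, replace=False).astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute centers
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return labels

def segment_slices_parallel(slices, k=3):
    """Each MRI slice is segmented independently (SIMD-style), so the slices
    map cleanly onto workers with no inter-worker communication."""
    def seg(sl):
        flat = sl.ravel().astype(float)
        return kmeans_labels(flat, k=k).reshape(sl.shape)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(seg, slices))
```

Because no state is shared between slices, the speedup comes purely from running independent segmentations concurrently, matching the abstract's no-inter-process-communication claim.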
  • Item
    A hybrid algorithm for disparity calculation from sparse disparity estimates based on stereo vision
    (Institute of Electrical and Electronics Engineers Inc., 2014) Mukherjee, S.; Guddeti, G.R.M.
In this paper, we propose a novel method for stereo disparity estimation that combines the existing block-based and region-based stereo matching approaches. Our method can generate dense disparity maps from disparity measurements at only 18% of the pixels of either the left or the right image of a stereo pair. It works by segmenting the lightness values of image pixels using a fast implementation of K-means clustering. It then refines those segment boundaries by morphological filtering and connected-components analysis, removing many redundant boundary pixels. This is followed by determining the boundaries' disparities with the SAD cost function. Lastly, we reconstruct the entire disparity map of the scene from the boundaries' disparities through disparity propagation along the scan lines and disparity prediction in regions of uncertainty using the disparities of neighboring regions. Experimental results on the Middlebury stereo vision dataset demonstrate that the proposed method outperforms traditional disparity determination methods like SAD and NCC by up to 30%, and achieves an improvement of 2.6% over a recent approach based on the absolute difference (AD) cost function for disparity calculation [1]. © 2014 IEEE.
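The SAD cost function used here for the boundary pixels (and as a baseline) can be sketched as a winner-takes-all block matcher. This is a generic textbook version for rectified pairs, not the paper's pipeline, and the window size and disparity range are illustrative defaults.

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=2):
    """Winner-takes-all SAD block matching on a rectified stereo pair.
    win is the half-window size; disparities are searched leftward in
    the right image, the usual convention for the left-view disparity map."""
    h, w = left.shape
    pad = win
    L = np.pad(left.astype(float), pad, mode='edge')
    R = np.pad(right.astype(float), pad, mode='edge')
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            patch = L[y:y + 2 * pad + 1, x:x + 2 * pad + 1]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                cand = R[y:y + 2 * pad + 1, x - d:x - d + 2 * pad + 1]
                cost = np.abs(patch - cand).sum()   # sum of absolute differences
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Evaluating this cost only at segment-boundary pixels (about 18% of the image) and propagating the result along scan lines is what lets the paper avoid the full dense search shown here.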
  • Item
    Depth image super-resolution with local medians and bilateral filtering
    (Institute of Electrical and Electronics Engineers Inc., 2016) Balure, C.S.; Ramesh Kini, M.; Bhavsar, A.
In this paper, we propose an approach for depth image super-resolution (SR). Given a noisy low-resolution (LR) depth image and its corresponding registered high-resolution (HR) colour image, our approach improves the resolution of the LR image while suppressing noise. We use the segmentation of the HR colour image as a cue for depth image super-resolution. Our method begins with a highly over-segmented colour image (using well-known segmentation approaches such as mean shift (MS) or simple linear iterative clustering (SLIC)) and an interpolated LR depth image. We then use a combination of the local medians in the depth image (corresponding to the colour segments) and bicubic interpolation, followed by bilateral filtering, to compute the SR depth image. We performed experiments at magnification factors of 4 and 8 on the Middlebury depth image dataset and evaluated the SR performance using the PSNR and SSIM metrics. The experimental results show that the proposed method (including some variants), while relatively simple, achieves average improvements of 1.2 dB and 1.7 dB on noiseless and noisy data, respectively, over the popular guided image filtering (GIF) method at an upsampling factor of 8. © 2016 IEEE.
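The per-segment local-median step can be sketched as below. This is a simplified reading of that one stage (the actual method combines it with bicubic interpolation and bilateral filtering): given an upsampled depth image and a colour-segment label map, each segment's depth is replaced by the segment's median, which suppresses noise while respecting colour-edge boundaries.

```python
import numpy as np

def segment_median_depth(depth_up, segments):
    """Replace each colour-segment's depth values by that segment's median.
    depth_up: interpolated LR depth at HR size; segments: HR label map."""
    out = depth_up.astype(float).copy()
    for s in np.unique(segments):
        mask = segments == s
        out[mask] = np.median(depth_up[mask])   # robust to outlier/noisy depths
    return out
```

Because depth discontinuities tend to coincide with colour edges, snapping depths to per-segment medians sharpens boundaries that plain interpolation would blur.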
  • Item
    Damage identification and assessment using image processing on post-disaster satellite imagery
    (Institute of Electrical and Electronics Engineers Inc., 2017) Joshi, A.R.; Tarte, I.; Suresh, S.; Koolagudi, S.G.
Natural disasters such as earthquakes and tsunamis often have a devastating effect on human life and cause extensive damage to infrastructure. Active research is ongoing to mitigate the impact of these catastrophes and limit the economic losses. Existing methods that utilize pre-event and post-event images not only require the immediate and guaranteed availability of an appropriate data set but are also encumbered by manual mapping of the images, which necessitates indicating corresponding control points in the two images. This paper highlights the use of only post-event imagery, in the absence of reference data, to produce damage maps with a more timely delivery; this eliminates the need for manual georeferencing of images. Our method uses simple linear iterative clustering (SLIC) to segment the images into uniform superpixels and extracts 62 features for each superpixel. We evaluated various classifiers, of which the Random Forest classifier gave a comparatively high accuracy of 90.4%. To evaluate the accuracy of the proposed method, we used 1500 data regions, of which 80% were used for training and 20% for testing. Aerial images taken by GeoEye-1 after the 2011 Christchurch earthquake and the 2011 Japan earthquake and tsunami are utilized in this study to detect building damage. When ground truth is available, we compare the histograms of the pre- and post-event imagery to quantify similarity as an SSD (Sum of Squared Distances) value; our approach thus produces as output a map displaying the extent of damage in the area covered by each superpixel. We consider six levels of damage, ranging from 1 (no damage) to 6 (maximum damage). © 2017 IEEE.
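The histogram-SSD similarity used when ground truth is available can be sketched as follows. This is an illustrative interpretation, assuming normalized intensity histograms compared bin-by-bin; the abstract does not specify bin count or normalization, so those are assumptions here.

```python
import numpy as np

def histogram_ssd(img_a, img_b, bins=32):
    """Sum of squared distances between normalized intensity histograms of
    two image patches -- 0 for identical distributions, larger for dissimilar."""
    ha, _ = np.histogram(img_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(img_b, bins=bins, range=(0, 256))
    ha = ha / ha.sum()   # normalize so patch size does not matter
    hb = hb / hb.sum()
    return float(((ha - hb) ** 2).sum())
```

Comparing the histograms of corresponding pre- and post-event superpixels this way gives a cheap, registration-light proxy for how much a region's appearance changed.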
  • Item
    Machine learning for mobile wound assessment
    (SPIE, 2018) Kamath, S.; Sirazitdinova, E.; Deserno, T.M.
Chronic wounds affect millions of people around the world. In particular, elderly persons in home care may develop decubitus ulcers. Here, mobile image acquisition and analysis can provide valuable assistance. We develop a system for mobile wound capture using mobile devices such as smartphones. The photographs are acquired with the integrated camera of the device and then calibrated and processed to determine the size of the various tissues present in a wound, i.e., necrotic, sloughy, and granular tissue. A random forest classifier based on various color and texture features is used for this task. These features include Sobel, Hessian, membrane projections, variance, mean, median, anisotropic diffusion, and bilateral as well as Kuwahara filters. The resulting probability output is thresholded using the Otsu technique. The similarity between manual ground-truth labeling and the classification is measured. The acquired results are compared to those achieved with a basic color-thresholding technique, as well as to those produced by an SVM classifier. The fast random forest was found to produce better results, and its performance improves further when the method is applied only to the wound regions with the background subtracted. Mean similarity is 0.89, 0.39, and 0.44 for necrotic, sloughy, and granular tissue, respectively. Although the training phase is time consuming, the trained classifier runs fast enough to be implemented on the mobile device. This will allow comprehensive monitoring of skin lesions and wounds. © 2018 SPIE.
  • Item
    Depthwise Separable Convolutional Neural Network Model for Intra-Retinal Cyst Segmentation
    (Institute of Electrical and Electronics Engineers Inc., 2019) Girish, G.N.; Saikumar, B.; Roychowdhury, S.; Kothari, A.R.; Rajan, J.
Intra-retinal cysts (IRCs) are significant in detecting several ocular and retinal pathologies. Segmentation and quantification of IRCs from optical coherence tomography (OCT) scans is a challenging task due to the presence of speckle noise and scan intensity variations across vendors. This work proposes a convolutional neural network (CNN) model with an encoder-decoder architecture for IRC segmentation across OCT scans from different vendors. Since deep CNN models have high computational complexity due to their large number of parameters, the proposed use of depthwise separable convolutional filters aids model generalizability and prevents over-fitting. The swish activation function is also employed to mitigate the vanishing gradient problem. The OPTIMA cyst segmentation challenge (OCSC) dataset, with scans from four different OCT device vendors, is used to evaluate the proposed model. Our model achieves a mean Dice score of 0.74 and a mean recall/precision of 0.72/0.82 across the imaging vendors, and it outperforms existing algorithms on the OCSC dataset. © 2019 IEEE.
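The two ingredients named here, depthwise separable convolution and the swish activation, can be sketched in plain numpy. This is a didactic single-layer version with 'valid' padding, not the paper's network: a per-channel k×k depthwise convolution followed by a 1×1 pointwise mix, which needs k²·C + C·C_out parameters instead of k²·C·C_out for a standard convolution.

```python
import numpy as np

def swish(x):
    """swish(x) = x * sigmoid(x); smooth and non-saturating for large x."""
    return x / (1.0 + np.exp(-x))

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """x: (H, W, C_in) input. dw_kernels: (k, k, C_in), one spatial filter
    per input channel. pw_weights: (C_in, C_out) 1x1 pointwise mixing.
    Returns swish-activated output of shape (H-k+1, W-k+1, C_out)."""
    H, W, C = x.shape
    k = dw_kernels.shape[0]
    Ho, Wo = H - k + 1, W - k + 1
    dw = np.zeros((Ho, Wo, C))
    for c in range(C):            # depthwise: each channel filtered separately
        for i in range(Ho):
            for j in range(Wo):
                dw[i, j, c] = (x[i:i + k, j:j + k, c] * dw_kernels[:, :, c]).sum()
    return swish(dw @ pw_weights)  # pointwise 1x1 mixing across channels
```

The factorized form is what keeps the parameter count, and hence the over-fitting risk on a small cross-vendor dataset, low.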
  • Item
    Brain tumor segmentation based on 3D residual U-Net
    (Springer, 2020) Bhalerao, M.; Thakur, S.
We propose a deep-learning-based approach for automatic brain tumor segmentation utilizing a three-dimensional U-Net extended with residual connections. In this work, we did not incorporate architectural modifications to the existing 3D U-Net, but rather evaluated different training strategies for potential improvement of performance. Our model was trained on the dataset of the International Brain Tumor Segmentation (BraTS) challenge 2019, which comprises multi-parametric magnetic resonance imaging (mpMRI) scans from 335 patients diagnosed with a glial tumor. Furthermore, our model was evaluated on the BraTS 2019 independent validation data, which consisted of another 125 brain tumor mpMRI scans. On the BraTS 2019 test data, our 3D Residual U-Net obtained mean Dice scores of 0.697, 0.828, and 0.772 and Hausdorff95 distances of 25.56, 14.64, and 26.69 for enhancing tumor, whole tumor, and tumor core, respectively. © Springer Nature Switzerland AG 2020.
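The Dice score reported above is the standard overlap metric for segmentation and can be computed as follows; this is the generic definition, not code from the paper.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice = 2|P ∩ T| / (|P| + |T|): 1 for perfect overlap, 0 for none.
    eps avoids division by zero when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

In BraTS the score is computed separately per region (enhancing tumor, whole tumor, tumor core) and averaged over cases, which is how the three figures above arise.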
  • Item
    Survey of Leukemia Cancer Cell Detection Using Image Processing
    (Springer Science and Business Media Deutschland GmbH, 2022) Devi, T.G.; Patil, N.; Rai, S.; Philipose, C.S.
Cancer is the development of abnormal cells that divide uncontrollably at an abnormal pace. Cancerous cells have the ability to destroy other normal tissues, can spread throughout the body, and can develop in various parts of the body. This paper focuses on leukemia, which is a type of blood cancer. Blood cancer usually starts in the bone marrow, where the blood is produced. The types of blood cancer are leukemia, non-Hodgkin lymphoma, Hodgkin lymphoma, and multiple myeloma. Leukemia occurs when the body produces an abnormal number of white blood cells that hinder the bone marrow from creating red blood cells and platelets. Several detection methods to identify cancerous cells have been proposed. Identification of cancer cells through cell image processing is very complex; the use of computer-aided image processing allows the images to be viewed in 2D and 3D, making it easier to identify the cancerous cells. The cells have to undergo segmentation and classification in order to identify the cancerous tumours. Several papers propose segmentation methods, classification methods, or both. The purpose of this survey is to review various papers that use either conventional methods or machine learning methods to classify cells as cancerous or non-cancerous. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.