Please use this identifier to cite or link to this item: https://idr.nitk.ac.in/jspui/handle/123456789/11421
Full metadata record
dc.contributor.author: Upadhya, A.H.K.
dc.contributor.author: Talawar, B.
dc.contributor.author: Rajan, J.
dc.date.accessioned: 2020-03-31T08:31:21Z
dc.date.available: 2020-03-31T08:31:21Z
dc.date.issued: 2017
dc.identifier.citation: Journal of Real-Time Image Processing, 2017, Vol. 13, No. 1, pp. 181-192 (en_US)
dc.identifier.uri: http://idr.nitk.ac.in/jspui/handle/123456789/11421
dc.description.abstract: Magnetic resonance imaging (MRI) is a widely deployed medical imaging technique used for applications such as neuroimaging, cardiovascular imaging and musculoskeletal imaging. However, MR images degrade in quality due to noise. Magnitude MRI data acquired with single-coil systems generally follows a Rician distribution in the presence of noise. Among the methods proposed in the literature for denoising MR images corrupted with Rician noise, the non-local maximum likelihood (NLML) method and its variants are popular. Despite their denoising quality, the NLML algorithm suffers from a high time complexity of O(m³N³) for a 3D image, where m³ and N³ represent the search-window and image sizes, respectively. This makes the algorithm challenging to deploy in real-time applications where fast results are required. A viable solution to this shortcoming is a data-parallel processing framework such as Nvidia CUDA, which exploits the fact that the computationally intensive calculations are mutually independent. The GPU-based implementation of NLML-based image denoising achieves a significant speedup over the serial implementation. This paper describes the first successful attempt to implement a GPU-accelerated version of the NLML algorithm. The main focus of the research was the parallelization and acceleration of one computationally intensive section of the algorithm, demonstrating the improvement in execution time obtained by applying parallel-processing concepts on a GPU. Our results suggest that practical deployment of NLML and its variants for MRI denoising is feasible. © 2016, Springer-Verlag Berlin Heidelberg. (en_US)
dc.title: GPU implementation of non-local maximum likelihood estimation method for denoising magnetic resonance images (en_US)
dc.type: Article (en_US)
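
No full text is attached to this record, so the following is only an illustrative sketch, not the authors' implementation. It shows how the per-voxel search-window traversal that dominates the O(m³N³) cost described in the abstract could be mapped onto CUDA with one thread per voxel. The kernel name nlmlSearchKernel, the one-thread-per-voxel mapping and the plain sum-of-squares accumulation are assumptions made for the sketch; the NLML similarity weighting and the Rician maximum-likelihood estimation step are omitted.

```cuda
// Illustrative sketch only -- not the code from the paper. Assumes a 3D image of
// size N^3 stored as a flat float array and a cubic search window of side m.
// Each thread independently gathers the squared intensities inside its voxel's
// search window, which is the mutually independent, compute-heavy step the
// abstract attributes to the O(m^3 N^3) complexity.
#include <cuda_runtime.h>

__global__ void nlmlSearchKernel(const float *image, float *sumSq, int N, int m)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    int z = blockIdx.z * blockDim.z + threadIdx.z;
    if (x >= N || y >= N || z >= N) return;

    int half = m / 2;
    float acc = 0.0f;
    // Accumulate squared magnitudes over the m^3 search window (clamped at borders).
    for (int dz = -half; dz <= half; ++dz)
        for (int dy = -half; dy <= half; ++dy)
            for (int dx = -half; dx <= half; ++dx) {
                int xx = min(max(x + dx, 0), N - 1);
                int yy = min(max(y + dy, 0), N - 1);
                int zz = min(max(z + dz, 0), N - 1);
                float v = image[(zz * N + yy) * N + xx];
                acc += v * v;
            }
    // A later step (host side or a second kernel) would turn this into the ML estimate.
    sumSq[(z * N + y) * N + x] = acc;
}

int main()
{
    const int N = 64, m = 3;   // small demo volume and a 3x3x3 search window
    size_t bytes = (size_t)N * N * N * sizeof(float);

    float *dImage, *dSumSq;
    cudaMalloc(&dImage, bytes);
    cudaMalloc(&dSumSq, bytes);
    cudaMemset(dImage, 0, bytes);   // a real run would copy the noisy MR volume here

    dim3 block(8, 8, 8);
    dim3 grid((N + 7) / 8, (N + 7) / 8, (N + 7) / 8);
    nlmlSearchKernel<<<grid, block>>>(dImage, dSumSq, N, m);
    cudaDeviceSynchronize();

    cudaFree(dImage);
    cudaFree(dSumSq);
    return 0;
}
```

Because every voxel's window traversal is independent of all others, no inter-thread synchronization is needed, which is what makes this portion of the algorithm a natural fit for the data-parallel CUDA model the abstract refers to.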
Appears in Collections: 1. Journal Articles

Files in This Item:
There are no files associated with this item.
