Browsing by Author "Sankar, R."
Now showing 1 - 2 of 2
Item
Capsule Network–based architectures for the segmentation of sub-retinal serous fluid in optical coherence tomography images of central serous chorioretinopathy (Springer Science and Business Media Deutschland GmbH, 2021)
Pawan, S.J.; Sankar, R.; Jain, A.; Jain, M.; Darshan, D.V.; Anoop, B.N.; Kothari, A.R.; Venkatesan, M.; Rajan, J.
Central serous chorioretinopathy (CSCR) is a chorioretinal disorder of the eye characterized by serous detachment of the neurosensory retina at the posterior pole of the eye. CSCR results from the accumulation of subretinal fluid (SRF) due to idiopathic defects at the level of the retinal pigment epithelium (RPE) that allow serous fluid from the choriocapillaris to diffuse into the subretinal space between the RPE and the neurosensory retinal layers. The condition is currently investigated by clinicians using invasive angiography or non-invasive optical coherence tomography (OCT) imaging. OCT images provide a representation of the fluid underlying the retina, and in the absence of automated segmentation tools, only a qualitative assessment of this fluid is currently used to follow the progression of the disease. Automated segmentation of the SRF can therefore be extremely useful for assessing progression and for the timely management of CSCR. In this paper, we adopt an existing architecture called SegCaps, based on the recently introduced Capsule Network concept, for the segmentation of SRF from CSCR OCT images. Furthermore, we propose an enhancement to SegCaps, termed DRIP-Caps, that uses Dilation, Residual Connections, Inception Blocks, and Capsule Pooling to address the problem. The proposed model outperforms the benchmark UNet architecture while reducing the number of trainable parameters by 54.21%. It also reduces the computational complexity of SegCaps, with 37.85% fewer trainable parameters, while delivering competitive performance. The experiments demonstrate the generalizability of the proposed model, as evidenced by its strong performance even with a limited number of training samples. © 2021, International Federation for Medical and Biological Engineering.

Item
Image Colorization Using GANs and Perceptual Loss (Institute of Electrical and Electronics Engineers Inc., 2020)
Sankar, R.; Nair, A.; Abhinav, P.; Mothukuri, S.K.P.; Koolagudi, S.G.
Image colorization is useful for several applications, such as restoring old photographs and storing images in grayscale, which requires less space, for later colorization. The problem is difficult, however, because many plausible color combinations exist for a given grayscale image. Recent deep learning approaches address the problem, but to achieve good performance they require highly processed inputs along with additional elements such as semantic maps. In this paper, an attempt is made to generalize the colorization procedure using a conditional Deep Convolutional Generative Adversarial Network (DCGAN) augmented with a perceptual loss. The network is trained on the CIFAR-100 dataset. The results of the proposed generative model with perceptual loss are compared with existing systems: a standard GAN model and a U-Net convolutional model. © 2020 IEEE.
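
The first item above describes DRIP-Caps as an enhancement of SegCaps built from Dilation, Residual Connections, Inception Blocks, and Capsule Pooling. The sketch below illustrates only the dilated residual Inception-style portion of that idea using plain convolutions; the channel widths, the layer layout, and the omission of SegCaps' capsule routing and capsule pooling are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal PyTorch sketch of a dilated residual Inception-style block in the
# spirit of the DRIP-Caps components listed above. All sizes are illustrative
# assumptions; the capsule-specific routing/pooling of SegCaps is omitted.
import torch
import torch.nn as nn

class DilatedResidualInceptionBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Parallel branches with different dilation rates (Inception-style).
        self.branch1 = nn.Conv2d(channels, channels // 2, kernel_size=3,
                                 padding=1, dilation=1)
        self.branch2 = nn.Conv2d(channels, channels // 2, kernel_size=3,
                                 padding=2, dilation=2)
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate the dilated branches, fuse them, and add the residual input.
        out = torch.cat([self.branch1(x), self.branch2(x)], dim=1)
        out = self.fuse(out)
        return self.act(out + x)

if __name__ == "__main__":
    block = DilatedResidualInceptionBlock(channels=32)
    # A single-channel OCT B-scan would first be lifted to 32 feature maps by
    # an initial convolution (hypothetical stem, not shown here).
    print(block(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```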

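The second item combines a conditional DCGAN adversarial objective with a perceptual loss. The sketch below shows one common way such a generator loss can be assembled; the choice of VGG16 features up to relu3_3, the MSE feature distance, and the weight lam are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal PyTorch sketch of an "adversarial + perceptual" generator loss for
# colorization. Layer choice, distances, and weights are assumptions.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # Frozen VGG16 features up to relu3_3 act as the perceptual extractor.
        # Inputs are expected to be ImageNet-normalized RGB tensors.
        self.features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
        for p in self.features.parameters():
            p.requires_grad = False
        self.mse = nn.MSELoss()

    def forward(self, fake_rgb: torch.Tensor, real_rgb: torch.Tensor) -> torch.Tensor:
        return self.mse(self.features(fake_rgb), self.features(real_rgb))

def generator_loss(disc_on_fake, fake_rgb, real_rgb, perceptual, lam=10.0):
    # Adversarial term: the generator wants the discriminator to call its
    # colorized output "real" (label 1).
    adv = nn.functional.binary_cross_entropy_with_logits(
        disc_on_fake, torch.ones_like(disc_on_fake))
    # Perceptual term: match deep VGG features of the colorized and true images.
    return adv + lam * perceptual(fake_rgb, real_rgb)
```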