l,r-Stitch Unit: Encoder-Decoder-CNN Based Image-Mosaicing Mechanism for Stitching Non-Homogeneous Image Sequences

dc.contributor.author: Chilukuri, P.K.
dc.contributor.author: Padala, P.
dc.contributor.author: Padala, P.
dc.contributor.author: Desanamukula, V.S.
dc.contributor.author: Pvgd, P.R.
dc.date.accessioned: 2026-02-05T09:27:43Z
dc.date.issued: 2021
dc.description.abstract: Image-stitching (or mosaicing) is an active research topic with numerous use-cases in the computer-vision, AR/VR, and computer-graphics domains, but maintaining homogeneity among the input image sequences during the stitching/mosaicing process remains a primary limitation. To tackle this limitation, this article introduces a robust and reliable image-stitching methodology (l,r-Stitch Unit), which takes multiple non-homogeneous image sequences as input and generates a reliable, panoramically stitched wide view as the final output. The l,r-Stitch Unit consists of pre-processing and post-processing sub-modules and an l,r-PanoED-network, where each sub-module is a robust ensemble of several deep-learning and computer-vision image-handling techniques. This article also introduces a novel convolutional encoder-decoder deep neural network (l,r-PanoED-network) with a unique split-encoding-network methodology to stitch non-coherent left/right stereo input image pairs. The encoder network of the proposed l,r-PanoED extracts semantically rich deep feature-maps from the input to stitch/map them into a wide panoramic domain; the feature-extraction and feature-mapping operations are performed simultaneously in the l,r-PanoED's encoder network based on the split-encoding-network methodology. The decoder network of l,r-PanoED adaptively reconstructs the output panoramic view from the encoder network's bottleneck feature-maps. The proposed l,r-Stitch Unit has been rigorously benchmarked against alternative image-stitching methodologies on our custom-built traffic dataset and several other public datasets. Multiple evaluation metrics (SSIM, PSNR, MSE, L_{\alpha,\beta,\gamma}, FM-rate, average latency time) and wild conditions (rotational/color/intensity variances, noise, etc.) were considered during the benchmarking analysis; based on the results, the proposed method outperformed the other image-stitching methodologies and proved effective even on wild non-homogeneous inputs. © 2013 IEEE.
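To illustrate the split-encoding idea described in the abstract, the following is a minimal, hypothetical numpy sketch (not the authors' l,r-PanoED implementation): each stereo view is encoded independently, the two bottleneck feature-maps are joined, and a decoder upsamples the joint representation into one wide view. Average pooling stands in for the convolutional encoder layers and nearest-neighbour upsampling for the decoder; all function names here are illustrative.

```python
import numpy as np

def encode(img, levels=2):
    # Toy "encoder": repeated 2x2 average pooling stands in for
    # strided convolutional layers that produce a bottleneck feature-map.
    f = img
    for _ in range(levels):
        h, w = f.shape[0] // 2 * 2, f.shape[1] // 2 * 2
        f = f[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return f

def decode(bottleneck, levels=2):
    # Toy "decoder": nearest-neighbour upsampling stands in for
    # transposed-convolution layers that reconstruct the panorama.
    f = bottleneck
    for _ in range(levels):
        f = f.repeat(2, axis=0).repeat(2, axis=1)
    return f

def stitch(left, right):
    # Split encoding: each view is encoded separately, the bottlenecks
    # are concatenated along the width axis, and the result is decoded
    # into a single wide panoramic view.
    bl, br = encode(left), encode(right)
    return decode(np.concatenate([bl, br], axis=1))

left = np.ones((8, 8))   # stand-in for the left stereo image
right = np.zeros((8, 8)) # stand-in for the right stereo image
pano = stitch(left, right)
print(pano.shape)  # (8, 16): twice the input width
```

In the actual network, a learned decoder would blend the two halves rather than tile them side by side; this sketch only shows the data flow of the split-encoder/shared-decoder design.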
dc.identifier.citation: IEEE Access, 2021, 9, pp. 16761-16782
dc.identifier.uri: https://doi.org/10.1109/ACCESS.2021.3052474
dc.identifier.uri: https://idr.nitk.ac.in/handle/123456789/23489
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.subject: Bottles
dc.subject: Computer graphics
dc.subject: Computer vision
dc.subject: Decoding
dc.subject: Deep learning
dc.subject: Deep neural networks
dc.subject: Encoding (symbols)
dc.subject: Network coding
dc.subject: Convolutional encoders
dc.subject: Evaluation metrics
dc.subject: Handling technique
dc.subject: Image stitching
dc.subject: Network methodologies
dc.subject: Research topics
dc.subject: Stereo image pairs
dc.subject: Stitching/mosaicing
dc.subject: Stereo image processing
dc.title: l,r-Stitch Unit: Encoder-Decoder-CNN Based Image-Mosaicing Mechanism for Stitching Non-Homogeneous Image Sequences