Effect of Batch Normalization and Stacked LSTMs on Video Captioning

dc.contributor.authorSarathi, V.
dc.contributor.authorMujumdar, A.
dc.contributor.authorNaik, D.
dc.date.accessioned2026-02-06T06:35:56Z
dc.date.issued2021
dc.description.abstractIntegrating visual content with natural language to generate image or video descriptions has been a challenging task for many years. Recent research in image captioning using Long Short-Term Memory (LSTM) networks has motivated its application to video captioning, where a video is converted into an array of frames, or images, and this array, along with the captions for the video, is used to train the LSTM network to associate the video with sentences. However, very little is known about how fine-tuning techniques such as batch normalization, or stacked LSTM models, affect performance in video captioning. For this project, we compare the performance of the base model described in [1] against variants with batch normalization and stacked LSTMs, using the base model as our reference. © 2021 IEEE.
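The abstract's core normalization idea can be sketched briefly. This is a minimal, hypothetical illustration of batch normalization applied to a batch of per-frame feature vectors, not the authors' implementation: each feature dimension is centered and scaled across the batch. The `batch_norm` function name and the sample data are assumptions for illustration; real models additionally learn per-feature scale and shift parameters.

```python
import math

def batch_norm(batch, eps=1e-5):
    """Normalize each feature dimension across the batch:
    (x - mean) / sqrt(variance + eps).
    Sketch only; omits the learnable gamma/beta parameters."""
    n = len(batch)
    dims = len(batch[0])
    means = [sum(row[d] for row in batch) / n for d in range(dims)]
    variances = [sum((row[d] - means[d]) ** 2 for row in batch) / n
                 for d in range(dims)]
    return [[(row[d] - means[d]) / math.sqrt(variances[d] + eps)
             for d in range(dims)]
            for row in batch]

# Hypothetical batch of four 2-dimensional frame feature vectors
normalized = batch_norm([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]])
```

After normalization, each feature dimension has approximately zero mean and unit variance across the batch, which is the property the paper evaluates as a fine-tuning aid for the captioning LSTM.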
dc.identifier.citationProceedings - 5th International Conference on Computing Methodologies and Communication, ICCMC 2021, 2021, pp. 820-825
dc.identifier.urihttps://doi.org/10.1109/ICCMC51019.2021.9418036
dc.identifier.urihttps://idr.nitk.ac.in/handle/123456789/30151
dc.publisherInstitute of Electrical and Electronics Engineers Inc.
dc.subjectAttention
dc.subjectBidirectional LSTM
dc.subjectVideo Captioning
dc.titleEffect of Batch Normalization and Stacked LSTMs on Video Captioning