Loss Optimised Video Captioning using Deep-LSTM, Attention Mechanism and Weighted Loss Metrices
| dc.contributor.author | Yadav, N. | |
| dc.contributor.author | Naik, D. | |
| dc.date.accessioned | 2026-02-06T06:36:05Z | |
| dc.date.issued | 2021 | |
| dc.description.abstract | The aim of the video captioning task is to describe video content with natural-language sentences. Videos combine photographic, graphical, and auditory data. Our goal is to investigate and recognize the video's visual features and to generate a caption so that anyone can grasp the video's content within seconds. Although encoder-decoder models have made significant progress, they still leave considerable room for improvement. In the present work, we enhance the top-down architecture with Bahdanau attention, Deep Long Short-Term Memory (Deep-LSTM), and a weighted loss function. VGG16 is used to extract features from the frames, and the Deep-LSTM is paired with an attention mechanism to capture the actions in the video. On the MSVD dataset, our model shows a significant improvement over other state-of-the-art models. © 2021 IEEE. | |
| dc.identifier.citation | 2021 12th International Conference on Computing Communication and Networking Technologies, ICCCNT 2021, 2021 | |
| dc.identifier.uri | https://doi.org/10.1109/ICCCNT51525.2021.9579925 | |
| dc.identifier.uri | https://idr.nitk.ac.in/handle/123456789/30218 | |
| dc.publisher | Institute of Electrical and Electronics Engineers Inc. | |
| dc.subject | Computer Vision | |
| dc.subject | Convolutional Neural network | |
| dc.subject | Deep Neural Network | |
| dc.subject | Image Captioning | |
| dc.subject | NLP | |
| dc.subject | Recurrent Neural Network | |
| dc.subject | Video Captioning | |
| dc.title | Loss Optimised Video Captioning using Deep-LSTM, Attention Mechanism and Weighted Loss Metrices |
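
The abstract pairs the LSTM decoder with Bahdanau (additive) attention over per-frame VGG16 features. Below is a minimal, dependency-free sketch of that attention step only; the weight matrices, dimensions, and inputs are illustrative assumptions, not values from the paper:

```python
import math

def bahdanau_attention(query, keys, w_q, w_k, v):
    """Additive (Bahdanau) attention: score_i = v . tanh(W_q q + W_k k_i).

    query: decoder hidden state (list of floats)
    keys:  per-frame feature vectors (list of lists), e.g. VGG16 outputs
    w_q, w_k: projection matrices (lists of rows); v: scoring vector
    All shapes here are illustrative, not taken from the paper.
    """
    def matvec(mat, vec):
        # Plain matrix-vector product, kept dependency-free for the sketch.
        return [sum(w * x for w, x in zip(row, vec)) for row in mat]

    q_proj = matvec(w_q, query)
    scores = []
    for k in keys:
        k_proj = matvec(w_k, k)
        hidden = [math.tanh(a + b) for a, b in zip(q_proj, k_proj)]
        scores.append(sum(vi * hi for vi, hi in zip(v, hidden)))

    # Softmax over frame scores -> attention weights (numerically stabilised).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]

    # Context vector: attention-weighted sum of the frame features,
    # which would be fed into the Deep-LSTM decoder at each step.
    dim = len(keys[0])
    context = [sum(w * k[d] for w, k in zip(weights, keys)) for d in range(dim)]
    return weights, context
```

In the full model the context vector would be concatenated with the decoder input at every time step; here only the scoring and pooling are shown.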
