Yadav, N.; Naik, D. (2021). Loss Optimised Video Captioning using Deep-LSTM, Attention Mechanism and Weighted Loss Metrices. 2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT 2021).
https://doi.org/10.1109/ICCCNT51525.2021.9579925
https://idr.nitk.ac.in/handle/123456789/30218

Abstract: The aim of the video captioning task is to describe video content with multiple natural-language sentences. Videos carry photographic, graphical, and auditory data. Our goal is to investigate and recognize a video's visual features and to generate a caption so that anyone can grasp the video's information within a second. Although encoder-decoder models have made significant progress, they still need many improvements. In the present work, we enhance the top-down architecture with Bahdanau attention, Deep Long Short-Term Memory (Deep-LSTM), and a weighted loss function. VGG16 is used to extract features from the frames. To understand the actions in the video, the Deep-LSTM is paired with an attention mechanism. On the MSVD dataset, we analysed the efficiency of our model, which shows a major improvement over other state-of-the-art models. © 2021 IEEE.

Keywords: Computer Vision; Convolutional Neural Network; Deep Neural Network; Image Captioning; NLP; Recurrent Neural Network; Video Captioning
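The core mechanism the abstract names, Bahdanau (additive) attention over per-frame VGG16 features attended by the decoder state, can be sketched as below. This is a minimal illustration under assumed shapes and randomly initialised parameters, not the authors' implementation; the names `W_enc`, `W_dec`, and `v` are hypothetical.

```python
# Sketch of Bahdanau (additive) attention over per-frame features.
# Shapes and parameter names are illustrative assumptions, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

num_frames, feat_dim, hid_dim, att_dim = 8, 4096, 512, 256  # 4096 = VGG16 fc layer size

# Hypothetical learned parameters of the attention module
W_enc = rng.normal(0, 0.01, (att_dim, feat_dim))  # projects frame features
W_dec = rng.normal(0, 0.01, (att_dim, hid_dim))   # projects decoder LSTM state
v = rng.normal(0, 0.01, att_dim)                  # scoring vector

def bahdanau_attention(frame_feats, dec_state):
    """Return a context vector and attention weights over the frames.

    frame_feats: (num_frames, feat_dim) per-frame features (e.g. from VGG16)
    dec_state:   (hid_dim,) current decoder hidden state
    """
    # Additive score: e_i = v^T tanh(W_enc f_i + W_dec s)
    scores = np.tanh(frame_feats @ W_enc.T + dec_state @ W_dec.T) @ v
    # Softmax over frames gives a distribution of attention weights
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Context vector: attention-weighted sum of frame features
    context = weights @ frame_feats
    return context, weights

frame_feats = rng.normal(size=(num_frames, feat_dim))  # stand-in VGG16 features
dec_state = rng.normal(size=hid_dim)                   # stand-in Deep-LSTM state
context, weights = bahdanau_attention(frame_feats, dec_state)
```

At each decoding step, `context` would be concatenated with the previous word embedding and fed into the Deep-LSTM, so the decoder attends to different frames while generating each caption word.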