Comparative Study of GRU and LSTM Cells Based Video Captioning Models

Date

2021

Journal Title

Journal ISSN

Volume Title

Publisher

Institute of Electrical and Electronics Engineers Inc.

Abstract

Video captioning involves generating descriptive text for the events and objects in a video: the input is a video, i.e. a sequence of frames, and the output is one or more sentences (sequences of words) describing it. A lot of research has been done in this area, most of it based on Long Short-Term Memory (LSTM) units to avoid the vanishing gradient problem. In this work, we propose a video captioning model built on Gated Recurrent Units (GRUs), an attention mechanism and word embeddings, and we compare its behaviour and results with traditional models that use LSTMs or Recurrent Neural Networks (RNNs). We train and test our model on the standard MSVD (Microsoft Research Video Description Corpus) dataset and evaluate performance with a wide range of metrics: BLEU, METEOR, ROUGE-1, ROUGE-2 and ROUGE-L. © 2021 IEEE.
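To make the comparison concrete, the sketch below (not the authors' code; the class name, feature dimensions and hyper-parameters are hypothetical placeholders) shows in PyTorch a minimal attention-based caption decoder in which the recurrent cell can be switched between a GRU and an LSTM, along the lines of the sequence-to-sequence setup the abstract describes.

# A minimal sketch, assuming pre-extracted CNN frame features and a fixed vocabulary;
# it illustrates the GRU-vs-LSTM comparison described in the abstract, not the paper's
# actual implementation.
import torch
import torch.nn as nn

class AttentionCaptionDecoder(nn.Module):
    """Single-layer recurrent decoder with additive attention over video frame
    features. `cell_type` switches between the GRU and LSTM variants."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512,
                 feat_dim=2048, cell_type="gru"):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)       # word embeddings
        self.attn_feat = nn.Linear(feat_dim, hidden_dim)       # project frame features
        self.attn_hid = nn.Linear(hidden_dim, hidden_dim)      # project decoder state
        self.attn_score = nn.Linear(hidden_dim, 1)             # scalar attention energy
        rnn_cls = nn.GRUCell if cell_type == "gru" else nn.LSTMCell
        self.cell = rnn_cls(embed_dim + feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)
        self.cell_type = cell_type

    def forward(self, frame_feats, captions):
        # frame_feats: (batch, n_frames, feat_dim); captions: (batch, seq_len) token ids
        batch = frame_feats.size(0)
        h = frame_feats.new_zeros(batch, self.cell.hidden_size)
        c = torch.zeros_like(h)                                 # used only by the LSTM cell
        logits = []
        for t in range(captions.size(1)):
            # additive attention over frames, conditioned on the previous hidden state
            energy = self.attn_score(torch.tanh(
                self.attn_feat(frame_feats) + self.attn_hid(h).unsqueeze(1)))
            alpha = torch.softmax(energy, dim=1)                # (batch, n_frames, 1)
            context = (alpha * frame_feats).sum(dim=1)          # weighted frame summary
            step_in = torch.cat([self.embed(captions[:, t]), context], dim=-1)
            if self.cell_type == "gru":
                h = self.cell(step_in, h)
            else:
                h, c = self.cell(step_in, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                       # (batch, seq_len, vocab)

# Hypothetical usage: 30 frames of 2048-d features, captions from a 10k-word vocabulary.
decoder = AttentionCaptionDecoder(vocab_size=10000, cell_type="gru")
feats = torch.randn(4, 30, 2048)
caps = torch.randint(0, 10000, (4, 12))
print(decoder(feats, caps).shape)  # torch.Size([4, 12, 10000])

Switching cell_type between "gru" and "lstm" leaves the rest of the pipeline unchanged, which mirrors the kind of controlled comparison the abstract describes.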

Description

Keywords

Attention, BLEU, Encoders, Gated Recurrent Units, METEOR, RNN, ROUGE, Sequence-to-sequence model, Video Captioning

Citation

2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT 2021), 2021.
