A Semantics-Assisted Video Captioning Model Trained with Scheduled Sampling

08/31/2019
by Haoran Chen, et al.

Given the features of a video, a recurrent neural network can be used to generate a caption for it automatically. Existing methods for video captioning have at least three limitations. First, although semantic information has been widely applied to boost the performance of video captioning models, existing networks often fail to provide meaningful semantic features. Second, the Teacher Forcing algorithm is often used to optimize video captioning models, but different strategies guide word generation during training and inference, and this mismatch leads to poor performance. Third, current video captioning models tend to generate relatively short captions that express video content inadequately. We make three corresponding improvements to address these problems. First, we use both static spatial features and dynamic spatio-temporal features as input to a semantic detection network (SDN) so that it produces meaningful semantic features for videos. Second, we propose a scheduled sampling strategy that gradually shifts training from a teacher-guided manner to a more self-teaching manner. Finally, the ordinary log-probability loss is weighted by sentence length so that the inclination toward short sentences is alleviated. Our model achieves state-of-the-art results on the Youtube2Text dataset and is competitive with the state-of-the-art models on the MSR-VTT dataset.
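
As a rough illustration of the first improvement, the sketch below (PyTorch-style, with hypothetical module names and dimensions) fuses static spatial features and dynamic spatio-temporal features and maps them to per-tag probabilities for semantic detection; the actual SDN architecture in the paper may differ.

    import torch
    import torch.nn as nn

    class SemanticDetectionNetwork(nn.Module):
        """Hypothetical sketch: static spatial features (e.g. from a 2D CNN)
        and dynamic spatio-temporal features (e.g. from a 3D CNN) are
        concatenated and mapped to a probability for each semantic tag
        (multi-label prediction)."""
        def __init__(self, spatial_dim, temporal_dim, num_tags, hidden_dim=512):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(spatial_dim + temporal_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, num_tags),
            )

        def forward(self, spatial_feat, temporal_feat):
            fused = torch.cat([spatial_feat, temporal_feat], dim=-1)
            return torch.sigmoid(self.mlp(fused))  # per-tag probabilities in [0, 1]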
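
The scheduled sampling strategy can be pictured with a minimal decoding loop such as the one below; `decoder.init_hidden` and `decoder.step` are assumed helper methods, and `sample_prob` would be annealed from 0 toward 1 over training so the model increasingly consumes its own predictions instead of ground-truth tokens. This is a sketch of the general technique, not the paper's exact implementation.

    import torch

    def decode_with_scheduled_sampling(decoder, features, captions, sample_prob):
        """At each step, feed the ground-truth token (teacher forcing) with
        probability 1 - sample_prob, or the model's previous prediction with
        probability sample_prob."""
        batch_size, max_len = captions.shape
        hidden = decoder.init_hidden(features)          # assumed helper
        inputs = captions[:, 0]                         # <BOS> tokens
        logits_per_step = []
        for t in range(1, max_len):
            logits, hidden = decoder.step(inputs, hidden, features)  # assumed API
            logits_per_step.append(logits)
            use_model_token = torch.rand(batch_size, device=captions.device) < sample_prob
            predicted = logits.argmax(dim=-1)
            inputs = torch.where(use_model_token, predicted, captions[:, t])
        return torch.stack(logits_per_step, dim=1)      # (batch, max_len - 1, vocab)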
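
A minimal sketch of a length-weighted log-probability loss is given below, assuming padded target sequences and per-caption lengths; the exact weighting scheme used in the paper may differ, but the idea is that normalizing by caption length keeps the loss from favoring short outputs.

    import torch

    def length_weighted_nll(logits, targets, lengths, pad_idx=0):
        """Hypothetical sketch: token-level negative log-likelihood,
        normalized by caption length."""
        log_probs = torch.log_softmax(logits, dim=-1)
        token_nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # (batch, T)
        mask = (targets != pad_idx).float()
        per_caption = (token_nll * mask).sum(dim=1) / lengths.clamp(min=1).float()
        return per_caption.mean()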

