Diverse Video Captioning Through Latent Variable Expansion with Conditional GAN

10/26/2019
by Huanhou Xiao, et al.

Automatically describing video content with natural language is a challenging but important task that has attracted considerable attention in the computer vision community. Previous works mainly strive for the accuracy of the generated sentences while ignoring their diversity, which is inconsistent with human behavior. In this paper, we aim to caption each video with multiple descriptions and propose a novel framework. Concretely, for a given video, the intermediate latent variables of a conventional encoder-decoder pipeline are fed into a conditional generative adversarial network (CGAN) to generate diverse sentences. We adopt a combination of LSTMs and CNNs as the generator, which produces descriptions conditioned on the latent variables, and CNNs as the discriminator, which assesses the quality of the generated sentences. We evaluate our method on benchmark datasets, where it demonstrates the ability to generate diverse descriptions and achieves competitive or even superior results compared with other state-of-the-art methods.
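The abstract does not specify the exact architecture, so the following is only a minimal, hypothetical sketch of the general idea: an LSTM generator conditioned on a video latent vector plus a noise vector (so that different noise samples yield different captions), and a CNN discriminator that scores sentence quality. All module names, layer sizes, and hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only (not the paper's exact model): a CGAN-style
# caption generator and discriminator. Different noise vectors for the same
# video latent produce diverse candidate captions.
import torch
import torch.nn as nn


class CaptionGenerator(nn.Module):
    """LSTM decoder conditioned on (video latent, noise)."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512,
                 latent_dim=512, noise_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Map the concatenated condition to the LSTM's initial states.
        self.init_h = nn.Linear(latent_dim + noise_dim, hidden_dim)
        self.init_c = nn.Linear(latent_dim + noise_dim, hidden_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, latent, noise, captions):
        # latent: (B, latent_dim), noise: (B, noise_dim), captions: (B, T)
        cond = torch.cat([latent, noise], dim=1)
        h0 = torch.tanh(self.init_h(cond)).unsqueeze(0)
        c0 = torch.tanh(self.init_c(cond)).unsqueeze(0)
        emb = self.embed(captions)              # (B, T, embed_dim)
        hidden, _ = self.lstm(emb, (h0, c0))    # (B, T, hidden_dim)
        return self.out(hidden)                 # (B, T, vocab_size) logits


class SentenceDiscriminator(nn.Module):
    """1-D CNN over word embeddings that scores a caption as real/fake,
    conditioned on the same video latent vector."""
    def __init__(self, vocab_size, embed_dim=256, latent_dim=512,
                 num_filters=128, kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes)
        self.score = nn.Linear(num_filters * len(kernel_sizes) + latent_dim, 1)

    def forward(self, captions, latent):
        emb = self.embed(captions).transpose(1, 2)   # (B, embed_dim, T)
        feats = [torch.relu(c(emb)).max(dim=2).values for c in self.convs]
        feats = torch.cat(feats + [latent], dim=1)
        return self.score(feats)                     # (B, 1) real/fake logit


# Usage sketch: sampling several noise vectors for one video latent would
# yield several distinct captions (teacher-forced inputs shown for brevity).
G = CaptionGenerator(vocab_size=10000)
D = SentenceDiscriminator(vocab_size=10000)
latent = torch.randn(2, 512)                  # stands in for encoder output
captions = torch.randint(0, 10000, (2, 12))   # dummy token ids
logits = G(latent, torch.randn(2, 100), captions)
score = D(captions, latent)
```

In a full adversarial setup the generator would be trained to fool the discriminator (typically with a policy-gradient or Gumbel-softmax workaround for the discrete sampling step), while the discriminator is trained to separate ground-truth captions from generated ones; those training details are beyond this sketch.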
