Video Captioning with Guidance of Multimodal Latent Topics

08/31/2017
by Shizhe Chen, et al.

The topic diversity of open-domain videos leads to a wide range of vocabularies and linguistic expressions for describing video contents, which makes the video captioning task even more challenging. In this paper, we propose a unified captioning framework, M&M TGM, which mines multimodal topics from data in an unsupervised fashion and guides the caption decoder with these topics. Compared to pre-defined topics, the mined multimodal topics are more semantically and visually coherent and better reflect the topic distribution of videos. We formulate topic-aware caption generation as a multi-task learning problem, adding a parallel topic prediction task alongside the captioning task. For the topic prediction task, we use the mined topics as a teacher to train a student topic prediction model, which learns to predict the latent topics from the multimodal contents of videos. The topic prediction task provides intermediate supervision for the learning process. For the captioning task, we propose a novel topic-aware decoder that generates more accurate and detailed video descriptions under the guidance of the latent topics. The entire learning procedure is end-to-end, optimizing both tasks simultaneously. Results from extensive experiments on the MSR-VTT and Youtube2Text datasets demonstrate the effectiveness of the proposed model: M&M TGM not only outperforms prior state-of-the-art methods on multiple evaluation metrics and on both benchmark datasets, but also achieves better generalization ability.
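Below is a minimal, hypothetical PyTorch sketch of the multi-task setup the abstract describes: a shared multimodal encoder feeds both a student topic prediction head (supervised by the mined latent topics acting as teacher) and a topic-aware caption decoder conditioned on the predicted topic distribution, with both losses optimized jointly. The module names, dimensions, feature-fusion scheme, and loss weighting are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch of a topic-guided, multi-task video captioner.
# Not the paper's implementation; dimensions and fusion choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopicGuidedCaptioner(nn.Module):
    def __init__(self, feat_dim=2048, hid_dim=512, num_topics=20,
                 vocab_size=10000, emb_dim=300):
        super().__init__()
        # Shared encoder over pooled multimodal video features
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hid_dim), nn.ReLU())
        # Student topic prediction head (teacher signal = mined latent topics)
        self.topic_head = nn.Linear(hid_dim, num_topics)
        # Topic-aware decoder: LSTM conditioned on video and topic context
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.decoder = nn.LSTM(emb_dim + hid_dim + num_topics, hid_dim,
                               batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, video_feats, captions):
        # video_feats: (B, feat_dim) pooled multimodal features
        # captions:    (B, T) token ids, used with teacher forcing
        h = self.encoder(video_feats)                       # (B, hid)
        topic_logits = self.topic_head(h)                   # (B, K)
        topic_dist = torch.softmax(topic_logits, dim=-1)    # guidance signal
        T = captions.size(1)
        ctx = torch.cat([h, topic_dist], dim=-1)            # (B, hid + K)
        ctx = ctx.unsqueeze(1).expand(-1, T, -1)            # repeat per step
        emb = self.embed(captions)                          # (B, T, emb)
        dec_out, _ = self.decoder(torch.cat([emb, ctx], dim=-1))
        word_logits = self.out(dec_out)                     # (B, T, V)
        return word_logits, topic_logits

def joint_loss(word_logits, targets, topic_logits, mined_topics, alpha=0.5):
    # Caption cross-entropy plus KL between predicted and mined topic distributions
    cap_loss = F.cross_entropy(
        word_logits.reshape(-1, word_logits.size(-1)), targets.reshape(-1))
    topic_loss = F.kl_div(
        F.log_softmax(topic_logits, dim=-1), mined_topics, reduction="batchmean")
    return cap_loss + alpha * topic_loss

if __name__ == "__main__":
    # Smoke test with random stand-ins for features, captions, and mined topics
    model = TopicGuidedCaptioner()
    feats = torch.randn(4, 2048)
    caps = torch.randint(0, 10000, (4, 12))
    mined = torch.softmax(torch.randn(4, 20), dim=-1)
    word_logits, topic_logits = model(feats, caps)
    loss = joint_loss(word_logits, caps, topic_logits, mined)
    loss.backward()
```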
