Self-supervised pre-training and contrastive representation learning for multiple-choice video QA

09/17/2020
by Seonhoon Kim, et al.

Video Question Answering (Video QA) requires fine-grained understanding of both the video and language modalities to answer a given question. In this paper, we propose novel training schemes for multiple-choice video question answering: a self-supervised pre-training stage, and supervised contrastive learning as an auxiliary task in the main training stage. In the self-supervised pre-training stage, we transform the original problem of predicting the correct answer into one of predicting the relevant question, which provides the model with broader contextual inputs without requiring any additional dataset or annotation. For contrastive learning in the main stage, we add masking noise to the input corresponding to the ground-truth answer, treat the original input of the ground-truth answer as a positive sample, and treat the rest as negative samples. By mapping the positive sample closer to the masked input, we show that model performance improves. We further employ locally aligned attention to focus more effectively on the video frames that are particularly relevant to the corresponding subtitle sentences. We evaluate our model on highly competitive multiple-choice video QA benchmarks: TVQA, TVQA+, and DramaQA. Experimental results show that our model achieves state-of-the-art performance on all three datasets, and further analyses validate our approaches.
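The contrastive auxiliary objective described above can be sketched as an InfoNCE-style loss: the masked ground-truth input acts as the anchor, the unmasked ground-truth input is the positive, and the remaining answer candidates are negatives. This is a minimal illustration under assumed details; the paper's actual encoders, similarity function, and temperature are not specified here, and the plain-Python vectors stand in for learned representations.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors (stand-ins for encoder outputs).
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: pull the positive (unmasked ground-truth answer
    representation) toward the masked anchor, push negatives away.
    `temperature` is an assumed hyperparameter, not taken from the paper."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    # Cross-entropy with the positive always at index 0.
    return -math.log(exps[0] / sum(exps))

# Hypothetical 2-d representations, purely for illustration.
anchor = [1.0, 0.0]                      # masked ground-truth answer input
positive = [0.9, 0.1]                    # original (unmasked) ground-truth input
negatives = [[-1.0, 0.0], [0.0, 1.0]]    # other answer candidates
loss = contrastive_loss(anchor, positive, negatives)
```

The loss is small when the positive is close to the masked anchor and the negatives are far from it, which is exactly the "mapping the positive sample closer to the masked input" behavior the abstract describes.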

Related research

01/05/2021  End-to-End Video Question-Answer Generation with Generator-Pretester Network
We study a novel task, Video Question-Answer Generation (VQAG), for chal...

12/12/2022  Momentum Contrastive Pre-training for Question Answering
Existing pre-training methods for extractive Question Answering (QA) gen...

10/27/2022  Robust Data2vec: Noise-robust Speech Representation Learning for ASR by Combining Regression and Improved Contrastive Learning
Self-supervised pre-training methods based on contrastive learning or re...

10/10/2022  Contrastive Video-Language Learning with Fine-grained Frame Sampling
Despite recent progress in video and language representation learning, t...

03/02/2023  QAID: Question Answering Inspired Few-shot Intent Detection
Intent detection with semantically similar fine-grained intents is a cha...

04/01/2021  CUPID: Adaptive Curation of Pre-training Data for Video-and-Language Representation Learning
This work concerns video-language pre-training and representation learni...

09/27/2021  Context-guided Triple Matching for Multiple Choice Question Answering
The task of multiple choice question answering (MCQA) refers to identify...
