A dataset and exploration of models for understanding video data through fill-in-the-blank question-answering

11/23/2016
by Tegan Maharaj, et al.

While deep convolutional neural networks frequently approach or exceed human-level performance on benchmark tasks involving static images, extending this success to moving images is not straightforward. Models that learn to understand video are of interest for many applications, including content recommendation, prediction, summarization, event/object detection, and understanding human visual perception, but many domains lack sufficient data to explore and perfect video models. To address the need for a simple, quantitative benchmark for developing and understanding video models, we present MovieFIB, a fill-in-the-blank question-answering dataset with over 300,000 examples, based on descriptive video annotations for the visually impaired. In addition to presenting statistics and a description of the dataset, we perform a detailed analysis of the predictions of five different models and compare these with human performance. We investigate the relative importance of language, static (2D) visual features, and moving (3D) visual features, as well as the effects of dataset size, the number of frames sampled, and vocabulary size. We show that this task is not solvable by a language model alone, that our model combining 2D and 3D visual information gives the best results, and that all models perform significantly below human level. We provide human evaluations of the responses given by different models and find that accuracy on the MovieFIB evaluation corresponds well with human judgement. We suggest avenues for improving video models, and hope that the proposed dataset will be useful for measuring and encouraging progress in this very interesting field.
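To make the combination of language, static (2D), and moving (3D) visual cues concrete, the sketch below shows one way a fill-in-the-blank video model of this kind could be wired up. It is not the authors' exact architecture: all module names, feature dimensions, and the multiplicative fusion scheme are illustrative assumptions. An LSTM encodes the question containing a special BLANK token, mean-pooled per-frame 2D features and clip-level 3D features are each projected into the question space, and a classifier over the answer vocabulary predicts the missing word.

```python
# Minimal sketch (not the paper's exact model) of a fill-in-the-blank
# video QA scorer fusing a question encoding with 2D and 3D visual features.
# Dimensions and the fusion scheme are assumptions for illustration.
import torch
import torch.nn as nn

class FITBVideoModel(nn.Module):
    def __init__(self, vocab_size, answer_vocab_size,
                 emb_dim=300, hid_dim=512,
                 feat2d_dim=2048, feat3d_dim=4096):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.question_rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.proj2d = nn.Linear(feat2d_dim, hid_dim)   # static (frame-level) features
        self.proj3d = nn.Linear(feat3d_dim, hid_dim)   # motion (clip-level) features
        self.classifier = nn.Linear(hid_dim, answer_vocab_size)

    def forward(self, question_ids, frame_feats, clip_feats):
        # question_ids: (B, T) token ids, with a special BLANK token at the gap
        # frame_feats:  (B, N, feat2d_dim) per-frame 2D CNN features
        # clip_feats:   (B, feat3d_dim) clip-level 3D CNN features
        _, (h, _) = self.question_rnn(self.embed(question_ids))
        q = h[-1]                                    # (B, hid_dim) question encoding
        v2d = self.proj2d(frame_feats.mean(dim=1))   # temporal mean pooling of frames
        v3d = self.proj3d(clip_feats)
        fused = q * v2d + q * v3d                    # simple multiplicative fusion
        return self.classifier(fused)                # logits over the answer vocabulary
```

A language-only baseline would drop the two visual streams and classify from the question encoding alone; the abstract's finding is that such a model is insufficient, while combining 2D and 3D visual information performs best among the models compared.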


