A Restricted Visual Turing Test for Deep Scene and Event Understanding

12/06/2015
by   Hang Qi, et al.

This paper presents a restricted visual Turing test (VTT) for story-line based deep understanding of long-term, multi-camera captured videos. Given a set of videos of a scene (such as a multi-room office, a garden, or a parking lot) and a sequence of story-line based queries, the task is to provide answers either in binary form "true/false" (for a polar query) or as an accurate natural language description (for a non-polar query). Queries, polar or non-polar, consist of view-based queries, which can be answered from a particular camera view, and scene-centered queries, which involve joint inference across different cameras. The story lines are collected to cover spatial, temporal, and causal understanding of the input videos. The data and queries distinguish our VTT from recently proposed visual question answering on images and video captioning. A vision system is proposed to perform joint video and query parsing, integrating different vision modules, a knowledge base, and a query engine. The system provides unified interfaces between modules so that individual modules can be reconfigured to test a new method. We provide a benchmark dataset and a toolkit for ontology-guided story-line query generation, consisting of about 93.5 hours of video captured in four different locations and 3,426 queries split into 127 story lines. We also provide a baseline implementation and result analyses.
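The query taxonomy described above (polar vs. non-polar form, view-based vs. scene-centered scope) can be illustrated with a minimal sketch. All names here (`QueryForm`, `QueryScope`, `StoryLineQuery`, `answer`) are hypothetical, and the dictionary lookup stands in for the paper's actual video-parsing and inference pipeline, which is far more involved:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Union

class QueryForm(Enum):
    POLAR = "polar"            # answered with true/false
    NON_POLAR = "non_polar"    # answered with a natural-language description

class QueryScope(Enum):
    VIEW_BASED = "view_based"          # answerable from a single camera view
    SCENE_CENTERED = "scene_centered"  # requires joint inference across cameras

@dataclass
class StoryLineQuery:
    text: str
    form: QueryForm
    scope: QueryScope

def answer(query: StoryLineQuery, knowledge_base: dict) -> Union[bool, str]:
    """Toy dispatch: a real engine would run inference over parsed video,
    not a dictionary lookup against pre-stored facts."""
    fact = knowledge_base.get(query.text)
    if query.form is QueryForm.POLAR:
        return bool(fact)
    return fact if fact is not None else "unknown"

# Hypothetical usage: one stored fact answers both query forms.
kb = {"a person entered the garden": "a person walked in through the east gate"}
q = StoryLineQuery("a person entered the garden",
                   QueryForm.POLAR, QueryScope.VIEW_BASED)
print(answer(q, kb))  # True
```

The point of the sketch is only the interface: a single engine entry point that branches on the query form, returning a boolean for polar queries and a description for non-polar ones.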


Related research

- 10/27/2020, Co-attentional Transformers for Story-Based Video Understanding: Inspired by recent trends in vision and language learning, we explore ap...
- 07/04/2017, DeepStory: Video Story QA by Deep Embedded Memory Networks: Question-answering (QA) on video contents is a significant challenge for...
- 08/29/2013, Joint Video and Text Parsing for Understanding Events and Answering Queries: We propose a framework for parsing video and text jointly for understand...
- 05/07/2020, DramaQA: Character-Centered Video Story Understanding with Hierarchical QA: Despite recent progress on computer vision and natural language processi...
- 08/10/2022, Exploring Anchor-based Detection for Ego4D Natural Language Query: In this paper we provide the technique report of Ego4D natural language ...
- 07/21/2018, A Pipeline for Creative Visual Storytelling: Computational visual storytelling produces a textual description of even...
- 10/05/2016, Recognizing and Presenting the Storytelling Video Structure with Deep Multimodal Networks: This paper presents a novel approach for temporal and semantic segmentat...
