VideoMCC: a New Benchmark for Video Comprehension
While there is broad agreement that future technology for organizing, browsing and searching videos hinges on methods for high-level semantic understanding of video, no consensus has yet been reached on the best way to train and assess models for this task. Casting video understanding as a form of action or event categorization is problematic because it is not clear what the semantic classes or abstractions in this domain should be. Language has been used to sidestep the problem of defining video categories by formulating video understanding as a captioning or description task. However, language is highly complex, redundant and sometimes ambiguous: many different captions may express the same semantic concept. To account for this ambiguity, quantitative evaluation of video description requires sophisticated metrics, whose scores are typically hard for humans to interpret. This paper makes four contributions to this problem. First, we formulate Video Multiple Choice Caption (VideoMCC) as a new, well-defined task with an easy-to-interpret performance measure. Second, we describe a general semi-automatic procedure for creating benchmarks for this task. Third, we publicly release a large-scale video benchmark created with an implementation of this procedure, along with a human study that assesses human performance on our dataset. Finally, we propose and test a varied collection of approaches on this benchmark to gain a better understanding of the new challenges posed by video comprehension.
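The appeal of a multiple-choice formulation is that the performance measure reduces to plain accuracy: the fraction of videos for which a model selects the ground-truth caption among the candidates. The sketch below illustrates such an evaluation in Python; the dictionary-based data format, the video IDs, and the function name are hypothetical illustrations, not the benchmark's actual release format.

def evaluate_mcc(predictions, ground_truth):
    """Compute multiple-choice accuracy.

    predictions: dict mapping video_id -> index of the caption the model chose.
    ground_truth: dict mapping video_id -> index of the correct caption.
    Returns the fraction of videos answered correctly.
    """
    correct = sum(
        1 for vid, answer in ground_truth.items()
        if predictions.get(vid) == answer
    )
    return correct / len(ground_truth)

# Example with made-up IDs: the model picks one caption out of K candidates
# per video, so chance performance is 1/K and the score needs no further
# normalization or reference-matching metric to interpret.
preds = {"vid_001": 2, "vid_002": 0, "vid_003": 1}
truth = {"vid_001": 2, "vid_002": 3, "vid_003": 1}
print(evaluate_mcc(preds, truth))  # prints 0.666...

Unlike caption-similarity metrics such as BLEU or CIDEr, a number like 0.67 here has a direct reading: the model chose the right caption for two of the three videos.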