CogME: A Novel Evaluation Metric for Video Understanding Intelligence

07/21/2021
by Minjung Shin, et al.

Developing video understanding intelligence is challenging because it requires the holistic integration of images, scripts, and sounds, grounded in natural language processing, temporal dependency, and reasoning. Recently, substantial efforts have been made to construct large-scale video datasets with associated question answering (QA) tasks. However, existing evaluation metrics for video question answering (VideoQA) do not provide a meaningful analysis of what a model actually understands. To make progress, we argue that a well-designed framework, grounded in the way humans understand stories, is required to explain and evaluate understanding performance in detail. We therefore propose a top-down evaluation system for VideoQA based on the human cognitive process and story elements: Cognitive Modules for Evaluation (CogME). CogME is composed of three cognitive modules: targets, contents, and thinking. The interaction among the modules in the understanding procedure can be expressed in one sentence: "I understand the CONTENT of the TARGET through a way of THINKING." Each module has sub-components derived from the story elements, and annotating individual questions with these sub-components specifies the aspects of understanding each question requires. CogME thus provides a framework for an elaborated specification of VideoQA datasets. To examine the suitability of a VideoQA dataset for validating video understanding intelligence, we evaluated the baseline model of the DramaQA dataset by applying CogME. The evaluation reveals that story elements are unevenly reflected in the existing dataset and that a model trained on it may make biased predictions. Although this study covers only a narrow range of stories, we expect it offers a first step toward assessing the video understanding intelligence of both humans and AI in terms of human cognitive processes.
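The module-wise scoring that CogME enables can be made concrete with a short sketch. The following Python snippet is illustrative only: the three modules (TARGET, CONTENT, THINKING) come from the abstract, while the sub-component labels (e.g., character, causality, recall) and the aggregation code are hypothetical stand-ins for the paper's actual annotation scheme.

```python
from collections import defaultdict

# Illustrative CogME-style annotations: each question is tagged with the
# sub-components it requires under the three modules named in the paper.
# The sub-component labels here are hypothetical examples, not the
# authors' taxonomy.
questions = [
    {"qid": "q1", "tags": {"TARGET": ["character"], "CONTENT": ["emotion"],   "THINKING": ["recall"]}},
    {"qid": "q2", "tags": {"TARGET": ["object"],    "CONTENT": ["causality"], "THINKING": ["reasoning"]}},
    {"qid": "q3", "tags": {"TARGET": ["character"], "CONTENT": ["causality"], "THINKING": ["reasoning"]}},
]

# Hypothetical model outcomes: question id -> whether the prediction was correct.
results = {"q1": True, "q2": False, "q3": True}

def cogme_scores(questions, results):
    """Accuracy and question count per (module, sub-component) pair."""
    correct, total = defaultdict(int), defaultdict(int)
    for q in questions:
        for module, subs in q["tags"].items():
            for sub in subs:
                total[(module, sub)] += 1
                correct[(module, sub)] += int(results[q["qid"]])
    return {key: (correct[key] / total[key], total[key]) for key in total}

for (module, sub), (acc, n) in sorted(cogme_scores(questions, results).items()):
    print(f"{module}/{sub}: accuracy {acc:.2f} over {n} question(s)")
```

The per-component counts also expose the kind of uneven coverage the abstract reports: if most questions exercise the same few sub-components, a high overall accuracy can mask a model that was never tested on the rest.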


Related research

05/07/2020 · DramaQA: Character-Centered Video Story Understanding with Hierarchical QA
Despite recent progress on computer vision and natural language processi...

10/27/2020 · Co-attentional Transformers for Story-Based Video Understanding
Inspired by recent trends in vision and language learning, we explore ap...

04/01/2019 · Constructing Hierarchical Q&A Datasets for Video Story Understanding
Video understanding is emerging as a new paradigm for studying human-lik...

12/04/2017 · SERKET: An Architecture for Connecting Stochastic Models to Realize a Large-Scale Cognitive Model
To realize human-like robot intelligence, a large-scale cognitive archit...

10/08/2021 · Toward a Human-Level Video Understanding Intelligence
We aim to develop an AI agent that can watch video clips and have a conv...

12/01/2019 · AntNet: Deep Answer Understanding Network for Natural Reverse QA
This study refers to a reverse question answering (reverse QA) procedure,...

08/30/2019 · The OMG-Empathy Dataset: Evaluating the Impact of Affective Behavior in Storytelling
Processing human affective behavior is important for developing intellig...
