Hybrid Reasoning Network for Video-based Commonsense Captioning

08/05/2021
by Weijiang Yu, et al.

The task of video-based commonsense captioning aims to generate event-wise captions while also providing multiple commonsense descriptions (e.g., attribute, effect, and intention) about the underlying event in the video. Prior works generate commonsense captions with a separate network for each commonsense type, which is time-consuming and fails to model the interactions among the different commonsense types. In this paper, we propose a Hybrid Reasoning Network (HybridNet) that endows neural networks with both semantic-level reasoning and word-level reasoning. First, we develop multi-commonsense learning for semantic-level reasoning by jointly training all commonsense types in a unified network, which encourages interaction among the clues from multiple commonsense descriptions, event-wise captions, and videos. Word-level reasoning is then achieved in two steps: (1) a memory module records the sequences predicted during previous generation steps; (2) a memory-routed multi-head attention (MMHA) module updates the word-level attention maps by incorporating this history from the memory module into the transformer decoder. Moreover, multimodal features are used to exploit diverse knowledge for commonsense reasoning. Experiments and extensive analysis on the large-scale Video-to-Commonsense benchmark show that HybridNet achieves state-of-the-art performance compared with other methods.
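To make the word-level reasoning concrete, below is a minimal PyTorch sketch of how a memory-routed multi-head attention layer could blend ordinary decoder self-attention with attention over a memory of previously predicted sequences. The class name, parameter names, and the sigmoid-gated fusion are illustrative assumptions for exposition, not the authors' exact formulation.

import torch
import torch.nn as nn

class MemoryRoutedMultiHeadAttention(nn.Module):
    """Sketch of a memory-routed multi-head attention (MMHA) layer.

    The memory holds embeddings recorded from previous generation steps;
    attention over the current decoder states is blended with attention
    routed through that memory, so generation history can reshape the
    word-level attention maps. The gating scheme is an assumption.
    """

    def __init__(self, d_model: int = 512, num_heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.mem_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)  # blends the two attention outputs

    def forward(self, x: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        # x:      (batch, seq_len, d_model) current decoder states
        # memory: (batch, mem_len, d_model) history recorded by the memory module
        self_out, _ = self.self_attn(x, x, x)          # ordinary word-level attention
        mem_out, _ = self.mem_attn(x, memory, memory)  # attention routed via the memory
        g = torch.sigmoid(self.gate(torch.cat([self_out, mem_out], dim=-1)))
        return g * self_out + (1.0 - g) * mem_out      # gated fusion of the two views

if __name__ == "__main__":
    mmha = MemoryRoutedMultiHeadAttention()
    states = torch.randn(2, 10, 512)    # decoder states for 10 generated words
    history = torch.randn(2, 20, 512)   # embeddings of previously predicted sequences
    print(mmha(states, history).shape)  # torch.Size([2, 10, 512])

The per-position gate lets each word decide how strongly the recorded history should influence its attention output, mirroring the idea of incorporating history information from the memory module into the transformer decoder.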


