Learning to Agree on Vision Attention for Visual Commonsense Reasoning

02/04/2023
by Zhenyang Li, et al.

Visual Commonsense Reasoning (VCR) remains a significant yet challenging problem in visual reasoning. A VCR model must answer a textual question about an image and then predict the rationale behind that answer. Although these two processes are sequential and intertwined, existing methods typically treat them as two independent matching-based instances. They therefore ignore the pivotal relationship between the two processes, leading to sub-optimal model performance. This paper presents a novel visual attention alignment method that handles the two processes in a unified framework. To achieve this, we first design a re-attention module to aggregate the vision attention map produced in each process. The resulting two sets of attention maps are then carefully aligned so that both processes base their decisions on the same image regions. We apply this method to both conventional attention and recent Transformer models, and conduct extensive experiments on the VCR benchmark dataset. The results show that, with the attention alignment module, our method achieves a considerable improvement over the baseline methods, demonstrating both the feasibility of coupling the two processes and the effectiveness of the proposed method.
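The abstract describes aligning two sets of vision attention maps so that answering and rationale prediction attend to the same image regions. The paper does not spell out the alignment objective here, but one common way to realize such an alignment is a symmetric KL-divergence penalty between the two attention distributions. The sketch below is an illustrative assumption, not the authors' exact formulation: `scores_qa` and `scores_ar` are hypothetical per-region attention logits from the answering and rationale processes.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention_alignment_loss(scores_qa, scores_ar, eps=1e-12):
    """Symmetric KL divergence between the vision attention
    distribution of the answering process (scores_qa) and that of
    the rationale process (scores_ar). Minimizing this term pushes
    both processes to attend to the same image regions.
    NOTE: this objective is an assumption for illustration; the
    paper's re-attention module and loss may differ."""
    p = softmax(np.asarray(scores_qa, dtype=float))
    q = softmax(np.asarray(scores_ar, dtype=float))
    kl_pq = np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)
    kl_qp = np.sum(q * (np.log(q + eps) - np.log(p + eps)), axis=-1)
    return float(np.mean(0.5 * (kl_pq + kl_qp)))
```

When the two processes attend identically the loss is zero; the more their attention maps diverge, the larger the penalty added to the training objective.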


Related research

- Joint Answering and Explanation for Visual Commonsense Reasoning (02/25/2022): Visual Commonsense Reasoning (VCR), deemed as one challenging extension ...
- Interpretable Visual Understanding with Cognitive Attention Network (08/06/2021): While image understanding on recognition-level has achieved remarkable a...
- Attention Mechanism based Cognition-level Scene Understanding (04/17/2022): Given a question-image input, the Visual Commonsense Reasoning (VCR) mod...
- Heterogeneous Graph Learning for Visual Commonsense Reasoning (10/25/2019): Visual commonsense reasoning task aims at leading the research field int...
- Hybrid Reasoning Network for Video-based Commonsense Captioning (08/05/2021): The task of video-based commonsense captioning aims to generate event-wi...
- Visual Commonsense Graphs: Reasoning about the Dynamic Context of a Still Image (04/22/2020): Even from a single frame of a still image, people can reason about the d...
- R-Cut: Enhancing Explainability in Vision Transformers with Relationship Weighted Out and Cut (07/18/2023): Transformer-based models have gained popularity in the field of natural ...
