From Shallow to Deep: Compositional Reasoning over Graphs for Visual Question Answering

06/25/2022
by Zihao Zhu, et al.

To achieve a general visual question answering (VQA) system, it is essential to learn to answer deeper questions that require compositional reasoning over the image and external knowledge. At the same time, the reasoning process should be explicit and explainable so that the model's working mechanism can be understood; this is effortless for humans but challenging for machines. In this paper, we propose a Hierarchical Graph Neural Module Network (HGNMN) that reasons over multi-layer graphs with neural modules to address these issues. Specifically, we first encode the image as multi-layer graphs from the visual, semantic, and commonsense views, since the clues that support the answer may exist in different modalities. Our model consists of several well-designed neural modules that perform specific functions over graphs and can be composed to conduct multi-step reasoning within and across the different graphs. Compared to existing modular networks, we extend visual reasoning from a single graph to multiple graphs, and the reasoning process can be explicitly traced through the module weights and graph attentions. Experiments show that our model not only achieves state-of-the-art performance on the CRIC dataset but also produces explicit and explainable reasoning procedures.
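The abstract mentions two mechanisms that make the reasoning traceable: attention over graph nodes and per-step module weights. The following is a minimal illustrative sketch, not the authors' code; the function names, dot-product scoring, and soft module mixing are all assumptions chosen to show how such weights yield an inspectable trace.

```python
# Illustrative sketch (NOT the paper's implementation) of two ideas from the
# abstract: (1) soft attention over graph nodes, whose distribution makes the
# step on each graph traceable, and (2) soft module selection, whose weights
# show which operation dominated a reasoning step. All names are hypothetical.
import math

def softmax(scores):
    # Numerically stable softmax over a list of floats.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(node_feats, query):
    """Score each graph node against the query, normalize, and return the
    attended feature plus the attention distribution (the explainable trace)."""
    scores = [sum(f * q for f, q in zip(feat, query)) for feat in node_feats]
    attn = softmax(scores)
    dim = len(query)
    attended = [sum(attn[i] * node_feats[i][d] for i in range(len(node_feats)))
                for d in range(dim)]
    return attended, attn

def select_module(module_outputs, module_scores):
    """Mix candidate modules' outputs by softmaxed controller scores; the
    returned weights expose which module was effectively chosen."""
    weights = softmax(module_scores)
    dim = len(module_outputs[0])
    mixed = [sum(w * out[d] for w, out in zip(weights, module_outputs))
             for d in range(dim)]
    return mixed, weights
```

A full model would learn these scores with neural networks and run several such steps within and across the visual, semantic, and commonsense graphs; the distributions returned by `attend` and `select_module` correspond to the graph attentions and module weights the abstract says make the reasoning explicit.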


Related research

- Cross-modal Knowledge Reasoning for Knowledge-based Visual Question Answering (08/31/2020)
- Explainable and Explicit Visual Reasoning over Scene Graphs (12/05/2018)
- Explainable Neural Computation via Stack Neural Module Networks (07/23/2018)
- Show Why the Answer is Correct! Towards Explainable AI using Compositional Temporal Attention (05/15/2021)
- Visual Question Reasoning on General Dependency Tree (03/31/2018)
- Meta Module Network for Compositional Visual Reasoning (10/08/2019)
- From Two Graphs to N Questions: A VQA Dataset for Compositional Reasoning on Vision and Commonsense (08/08/2019)
