VisQA: X-raying Vision and Language Reasoning in Transformers

04/02/2021
by Theo Jaunet, et al.

Visual Question Answering systems target answering open-ended textual questions given input images. They are a testbed for learning high-level reasoning, with a primary use in HCI, for instance assistance for the visually impaired. Recent research has shown that state-of-the-art models tend to produce answers by exploiting biases and shortcuts in the training data, and sometimes do not even look at the input image, rather than performing the required reasoning steps. We present VisQA, a visual analytics tool that explores this question of reasoning vs. bias exploitation. It exposes the key element of state-of-the-art neural models: attention maps in transformers. Our working hypothesis is that the reasoning steps leading to model predictions are observable from attention distributions, which are particularly useful for visualization. The design process of VisQA was motivated by well-known bias examples from the fields of deep learning and vision-language reasoning, and the tool was evaluated in two ways. First, as a result of a collaboration across three fields (machine learning, vision and language reasoning, and data analytics), the work led to a direct impact on the design and training of a neural model for VQA, improving model performance as a consequence. Second, we report on the design of VisQA and a goal-oriented evaluation with multiple experts targeting the analysis of a model's decision process, providing evidence that it makes the inner workings of models accessible to users.
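To make the notion of an attention map concrete, here is a minimal sketch (not taken from the paper's code) of the scaled dot-product attention distribution a transformer head computes, here between question tokens and image regions; all embeddings and dimensions below are made up for illustration.

```python
import numpy as np

def attention_map(queries, keys):
    """Scaled dot-product attention weights: softmax(Q K^T / sqrt(d))."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # for numerical stability
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)

# Toy, made-up inputs: 3 question-token embeddings attending to 4 image-region embeddings.
rng = np.random.default_rng(0)
question_tokens = rng.normal(size=(3, 64))
image_regions = rng.normal(size=(4, 64))

A = attention_map(question_tokens, image_regions)  # shape (3, 4)
print(A.round(3))  # each row is a distribution over regions and sums to 1
```

Each row of the resulting matrix is a probability distribution over image regions for one question token; inspecting such matrices, per layer and per attention head, is the kind of view a tool like VisQA builds on.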

Related research

04/08/2021 · How Transferable are Reasoning Patterns in VQA?
Since its inception, Visual Question Answering (VQA) is notoriously know...

05/17/2022 · Gender and Racial Bias in Visual Question Answering Datasets
Vision-and-language tasks have increasingly drawn more attention as a me...

11/02/2016 · Dual Attention Networks for Multimodal Reasoning and Matching
We propose Dual Attention Networks (DANs) which jointly leverage visual ...

11/02/2020 · Reasoning Over History: Context Aware Visual Dialog
While neural models have been shown to exhibit strong performance on sin...

01/11/2022 · On the Efficacy of Co-Attention Transformer Layers in Visual Question Answering
In recent years, multi-modal transformers have shown significant progres...

07/10/2017 · Learning Visual Reasoning Without Strong Priors
Achieving artificial visual reasoning - the ability to answer image-rela...

02/14/2020 · Transformers as Soft Reasoners over Language
AI has long pursued the goal of having systems reason over *explicitly p...
