Do Explanations Make VQA Models More Predictable to a Human?

10/29/2018
by Arjun Chandrasekaran, et al.

A rich line of research attempts to make deep neural networks more transparent by generating human-interpretable 'explanations' of their decision process, especially for interactive tasks like Visual Question Answering (VQA). In this work, we analyze whether existing explanations indeed make a VQA model, both its responses and its failures, more predictable to a human. Surprisingly, we find that they do not. On the other hand, we find that human-in-the-loop approaches that treat the model as a black box do.



Related research

04/05/2019
Lucid Explanations Help: Using a Human-AI Image-Guessing Game to Evaluate Machine Explanation Helpfulness
While there have been many proposals on how to make AI algorithms more t...

01/25/2023
Towards a Unified Model for Generating Answers and Explanations in Visual Question Answering
Providing explanations for visual question answering (VQA) has gained mu...

11/09/2022
Towards Reasoning-Aware Explainable VQA
The domain of joint vision-language understanding, especially in the con...

07/02/2020
The Impact of Explanations on AI Competency Prediction in VQA
Explainability is one of the key elements for building trust in AI syste...

06/28/2020
Improving VQA and its Explanations by Comparing Competing Explanations
Most recent state-of-the-art Visual Question Answering (VQA) systems are...

09/08/2018
Faithful Multimodal Explanation for Visual Question Answering
AI systems' ability to explain their reasoning is critical to their util...

03/01/2020
A Study on Multimodal and Interactive Explanations for Visual Question Answering
Explainability and interpretability of AI models is an essential factor ...
