The Impact of Explanations on AI Competency Prediction in VQA

07/02/2020
by Kamran Alipour, et al.

Explainability is a key element in building trust in AI systems. Despite numerous attempts to make AI explainable, quantifying the effect of explanations on human-AI collaborative tasks remains a challenge. Beyond predicting the overall behavior of an AI system, in many applications users need to understand an AI agent's competency in different aspects of the task domain. In this paper, we evaluate the impact of explanations on the user's mental model of AI agent competency within the task of visual question answering (VQA). We quantify users' understanding of competency based on the correlation between the actual system performance and user rankings. We introduce an explainable VQA system that uses spatial and object features and is powered by the BERT language model. Each group of users sees only one kind of explanation when ranking the competencies of the VQA model. The proposed model is evaluated through between-subjects experiments to probe the impact of explanations on the user's perception of competency. A comparison between two VQA models shows that BERT-based explanations and the use of object features improve the user's prediction of the model's competencies.
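
The competency measure described in the abstract can be made concrete with a short sketch. This is a hypothetical illustration, not the paper's code: it assumes per-category model accuracies and a single user's competency ranking, and it uses Spearman's rank correlation as one plausible instantiation of "correlation between the actual system performance and user rankings"; all names and values below are invented.

```python
# Hypothetical sketch: score how well a user predicts an AI agent's
# competency by correlating the model's actual per-category accuracy
# with the user's competency ranking of those categories.
from scipy.stats import spearmanr

# Actual per-category accuracy of the VQA model (illustrative values).
model_accuracy = {
    "counting": 0.48,
    "color": 0.71,
    "spatial": 0.55,
    "object_presence": 0.83,
}

# One user's competency ranking per category (1 = judged most competent).
user_ranking = {
    "object_presence": 1,
    "color": 2,
    "spatial": 3,
    "counting": 4,
}

categories = sorted(model_accuracy)
accuracies = [model_accuracy[c] for c in categories]
# Negate the ranks so that a better rank (smaller number) aligns with
# higher accuracy; a rho near +1 then means a well-calibrated mental model.
ranks = [-user_ranking[c] for c in categories]

rho, p_value = spearmanr(accuracies, ranks)
print(f"competency-prediction score (Spearman rho): {rho:.2f}")
```

Averaging such a score over all participants in a condition (e.g., one explanation type per group) would then allow the between-subjects comparison the abstract describes.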

Related research

03/01/2020 · A Study on Multimodal and Interactive Explanations for Visual Question Answering
Explainability and interpretability of AI models is an essential factor ...

04/05/2019 · Lucid Explanations Help: Using a Human-AI Image-Guessing Game to Evaluate Machine Explanation Helpfulness
While there have been many proposals on how to make AI algorithms more t...

10/29/2018 · Do Explanations make VQA Models more Predictable to a Human?
A rich line of research attempts to make deep neural networks more trans...

11/09/2022 · Towards Reasoning-Aware Explainable VQA
The domain of joint vision-language understanding, especially in the con...

07/14/2022 · Beware the Rationalization Trap! When Language Model Explainability Diverges from our Mental Models of Language
Language models learn and represent language differently than humans; th...

11/30/2022 · Optimizing Explanations by Network Canonization and Hyperparameter Search
Explainable AI (XAI) is slowly becoming a key component for many AI appl...

11/22/2019 · Culture-Based Explainable Human-Agent Deconfliction
Law codes and regulations help organise societies for centuries, and as ...
