Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction

07/23/2020
by Eric Chu, et al.

We present a randomized controlled trial for a model-in-the-loop regression task, with the goal of measuring the extent to which (1) good explanations of model predictions increase human accuracy, and (2) faulty explanations decrease human trust in the model. We study explanations based on visual saliency in an image-based age prediction task for which humans and learned models are individually capable but not highly proficient and frequently disagree. Our experimental design separates model quality from explanation quality, making it possible to compare treatments involving a variety of explanations of varying quality. We find that presenting model predictions improves human accuracy. However, visual explanations of various kinds fail to significantly alter human accuracy or trust in the model, regardless of whether the explanations characterize an accurate model, an inaccurate one, or are generated randomly and independently of the input image. These findings suggest the need for greater evaluation of explanations in downstream decision-making tasks, better design-based tools for presenting explanations to users, and better approaches for generating explanations.
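As a minimal sketch of the kind of saliency-based explanation studied here, consider gradient saliency, where each input pixel is scored by the magnitude of the prediction's gradient with respect to that pixel. The toy linear "age regressor" below is a hypothetical stand-in (the paper uses a learned image model); for a linear model, the gradient saliency is simply the absolute value of the per-pixel weights.

```python
import numpy as np

# Hypothetical stand-in for a learned age regressor: a linear model
# with one weight per pixel of a toy 8x8 "image".
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))     # per-pixel regression weights
x = rng.normal(size=(8, 8))     # a toy input image

y_hat = float((W * x).sum())    # predicted "age"

# Gradient saliency: |d y_hat / d x|. For this linear model the
# gradient is constant and equals the weight matrix itself.
saliency = np.abs(W)
saliency /= saliency.max()      # normalize to [0, 1] for display as a heatmap
```

For a deep network, `W` would be replaced by the model's input gradient at `x` (e.g. via automatic differentiation); the randomized-explanation condition in the study corresponds to replacing this map with one generated independently of the input.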


