Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning

01/30/2022
by Wencan Zhang, et al.

Model explanations such as saliency maps can improve user trust in AI by highlighting important features for a prediction. However, these explanations become distorted and misleading when they are computed for images subject to systematic error (bias). Furthermore, the distortions persist even after model fine-tuning on images biased by different factors (blur, color temperature, day/night). We present Debiased-CAM to recover explanation faithfulness across various bias types and levels by training a multi-input, multi-task model with auxiliary tasks for explanation and bias level predictions. In simulation studies, the approach not only enhanced prediction accuracy but also generated highly faithful explanations of these predictions, as if the images were unbiased. In user studies, debiased explanations improved user task performance, perceived truthfulness, and perceived helpfulness. Debiased training can provide a versatile platform for robust performance and explanation faithfulness across a wide range of applications with data biases.
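The multi-task setup described in the abstract can be pictured as a single CNN backbone with three output heads: class prediction, a CAM whose training target is the explanation of the corresponding unbiased image, and bias-level regression. The PyTorch sketch below is a minimal illustration under stated assumptions, not the authors' released implementation: DebiasedCAMNet, debiased_loss, the ResNet-18 backbone, and the loss weights w_cam and w_bias are all hypothetical names and choices made for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class DebiasedCAMNet(nn.Module):
    """CNN with a class head, a bias-level head, and an inline CAM output."""
    def __init__(self, num_classes=10):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # Keep everything up to the last conv block; drop avgpool and fc.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.classifier = nn.Linear(512, num_classes)  # primary prediction task
        self.bias_head = nn.Linear(512, 1)             # auxiliary bias-level task

    def forward(self, x):
        fmap = self.features(x)                 # (B, 512, H, W) feature maps
        pooled = fmap.mean(dim=(2, 3))          # global average pooling
        logits = self.classifier(pooled)
        bias_level = self.bias_head(pooled).squeeze(1)
        # CAM for the predicted class: class weights applied to feature maps.
        w = self.classifier.weight[logits.argmax(dim=1)]    # (B, 512)
        cam = F.relu(torch.einsum('bc,bchw->bhw', w, fmap))
        # Normalize each CAM to [0, 1] so it is comparable to a target CAM.
        b, h, wd = cam.shape
        flat = cam.view(b, -1)
        lo = flat.min(dim=1, keepdim=True).values
        hi = flat.max(dim=1, keepdim=True).values
        cam = ((flat - lo) / (hi - lo + 1e-8)).view(b, h, wd)
        return logits, cam, bias_level

def debiased_loss(model, biased_x, labels, target_cam, true_bias,
                  w_cam=1.0, w_bias=0.1):
    """Multi-task loss: classify, match the unbiased CAM, predict bias level."""
    logits, cam, bias_pred = model(biased_x)
    loss_cls = F.cross_entropy(logits, labels)
    loss_cam = F.mse_loss(cam, target_cam)        # explanation-faithfulness task
    loss_bias = F.mse_loss(bias_pred, true_bias)  # bias-level regression task
    return loss_cls + w_cam * loss_cam + w_bias * loss_bias
```

In such a scheme, target_cam would be the CAM produced for the corresponding unbiased image, so the model learns to explain biased inputs as if they were unbiased while the auxiliary bias head encourages it to encode the bias level explicitly.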


Related research

12/10/2020 - Debiased-CAM for bias-agnostic faithful visual explanations of deep convolutional networks
Class activation maps (CAMs) explain convolutional neural network predic...

03/09/2020 - Explanation-Based Tuning of Opaque Machine Learners with Application to Paper Recommendation
Research in human-centered AI has shown the benefits of machine-learning...

03/17/2023 - Iterative Partial Fulfillment of Counterfactual Explanations: Benefits and Risks
Counterfactual (CF) explanations, also known as contrastive explanations...

08/09/2021 - The Weighted Average Illusion: Biases in Perceived Mean Position in Scatterplots
Scatterplots can encode a third dimension by using additional channels l...

06/14/2018 - Neural Stethoscopes: Unifying Analytic, Auxiliary and Adversarial Network Probing
Model interpretability and systematic, targeted model adaptation present...

10/12/2020 - The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?
There is a recent surge of interest in using attention as explanation of...

07/23/2020 - Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction
We present a randomized controlled trial for a model-in-the-loop regress...
