Can counterfactual explanations of AI systems' predictions skew lay users' causal intuitions about the world? If so, can we correct for that?

05/12/2022
by Marko Tesic et al.

Counterfactual (CF) explanations have been employed as one of the modes of explainability in explainable AI, both to increase the transparency of AI systems and to provide recourse. Cognitive science and psychology, however, have pointed out that people regularly use CFs to express causal relationships. Most AI systems are only able to capture associations or correlations in data, so interpreting them as causal would not be justified. In this paper, we present two experiments (total N = 364) exploring the effects of CF explanations of an AI system's predictions on lay people's causal beliefs about the real world. In Experiment 1 we found that providing CF explanations of an AI system's predictions does indeed (unjustifiably) affect people's causal beliefs about the factors/features the AI uses: people become more likely to view those factors as causal in the real world. Inspired by the literature on misinformation and health warning messaging, Experiment 2 tested whether we can correct for this unjustified change in causal beliefs. We found that pointing out that AI systems capture correlations and not necessarily causal relationships can attenuate the effects of CF explanations on people's causal beliefs.
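To make the object of study concrete, below is a minimal sketch (not the paper's experimental materials) of how a counterfactual explanation can be generated from a purely correlational model. The model, data, and feature names are illustrative assumptions; the point is that the reported "if feature X had been different" statement is a property of the model's decision boundary, not a causal claim about the world.

```python
# Illustrative sketch: a counterfactual explanation for a simple classifier,
# found by perturbing one feature at a time until the prediction flips.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: two features that merely correlate with the label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def counterfactual(x, model, feature_names, step=0.1, max_steps=100):
    """Return a statement describing a minimal single-feature change to x
    that flips the model's prediction (or report that none was found).

    Note: the change describes the model's decision boundary only; it does
    not imply that the feature causes the outcome in the real world.
    """
    original = model.predict(x.reshape(1, -1))[0]
    for i in range(len(x)):
        for direction in (+1, -1):
            cf = x.copy()
            for _ in range(max_steps):
                cf[i] += direction * step
                if model.predict(cf.reshape(1, -1))[0] != original:
                    delta = cf[i] - x[i]
                    return (f"If {feature_names[i]} had been {delta:+.2f} units "
                            f"different, the model's prediction would change.")
    return "No single-feature counterfactual found within the search range."

x = np.array([0.2, -0.1])
print(counterfactual(x, model, ["feature_A", "feature_B"]))
```

A correction of the kind tested in Experiment 2 would accompany such an explanation with an explicit note that the model captures correlations, not necessarily causal relationships.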


Related research

04/21/2022 - Features of Explainability: How users understand counterfactual and causal explanations for categorical and continuous features in XAI
Counterfactual explanations are increasingly used to address interpretab...

11/17/2022 - Explainability Via Causal Self-Talk
Explaining the behavior of AI systems is an important problem that, in p...

01/31/2022 - Causal Explanations and XAI
Although standard Machine Learning models are optimized for making predi...

12/28/2021 - Towards Relatable Explainable AI with the Perceptual Process
Machine learning models need to provide contrastive explanations, since ...

04/30/2021 - Using Small MUSes to Explain How to Solve Pen and Paper Puzzles
Pen and paper puzzles like Sudoku, Futoshiki and Skyscrapers are hugely ...

10/02/2022 - AI-Assisted Discovery of Quantitative and Formal Models in Social Science
In social science, formal and quantitative models, such as ones describi...

11/12/2021 - Explainable AI for Psychological Profiling from Digital Footprints: A Case Study of Big Five Personality Predictions from Spending Data
Every step we take in the digital world leaves behind a record of our be...
