A Turing Test for Transparency

06/21/2021
by Felix Biessmann, et al.

A central goal of explainable artificial intelligence (XAI) is to improve the trust relationship in human-AI interaction. One assumption underlying research on transparent AI systems is that explanations help humans better assess the predictions of machine learning (ML) models, for instance by enabling them to identify wrong predictions more efficiently. Recent empirical evidence, however, shows that explanations can have the opposite effect: when explanations of ML predictions are presented, humans often tend to trust those predictions even when they are wrong. Experimental evidence suggests that this effect can be attributed to how intuitive, or human, an AI or an explanation appears. This effect challenges the very goal of XAI and implies that responsible use of transparent AI methods must account for humans' ability to distinguish machine-generated from human explanations. Here we propose a quantitative metric for XAI methods based on Turing's imitation game: a Turing Test for Transparency. A human interrogator is asked to judge whether an explanation was generated by a human or by an XAI method. An XAI method passes the test if its explanations cannot be detected by humans above chance performance in this binary classification task. Detecting such explanations is a prerequisite for assessing and calibrating the trust relationship in human-AI interaction. We present experimental results on a crowd-sourced text classification task demonstrating that even for basic ML models and XAI approaches, most participants were not able to distinguish human from machine-generated explanations. We discuss the ethical and practical implications of our results for applications of transparent ML.
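To make the pass criterion concrete, here is a minimal sketch (not from the paper) of how one might operationalize "not detectable above chance": an exact one-sided binomial test on the interrogators' human-vs-machine judgments, where the XAI method passes if detection accuracy is not significantly better than 50%. The function names, the significance threshold, and the example counts below are hypothetical illustrations, not the authors' implementation.

```python
import math

def detection_pvalue(correct: int, total: int, chance: float = 0.5) -> float:
    """One-sided exact binomial test: probability of observing at least
    `correct` correct human-vs-machine judgments if the interrogator
    were guessing at the `chance` level."""
    return sum(
        math.comb(total, k) * chance**k * (1 - chance) ** (total - k)
        for k in range(correct, total + 1)
    )

def passes_transparency_turing_test(correct: int, total: int,
                                    alpha: float = 0.05) -> bool:
    """The XAI method 'passes' if interrogators cannot detect its explanations
    above chance, i.e. detection accuracy is not significantly above 0.5."""
    return detection_pvalue(correct, total) >= alpha

# Hypothetical example: interrogators judge 57 of 100 explanations correctly.
print(detection_pvalue(57, 100))                 # ~0.097
print(passes_transparency_turing_test(57, 100))  # True -> passes under this sketch
```

In this reading, the test is simply a hypothesis test against the null of chance-level detection; with 100 trials, 57 correct judgments are still consistent with guessing, whereas, say, 65 would not be.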
