How model accuracy and explanation fidelity influence user trust

07/26/2019
by Andrea Papenmeier, et al.

Machine learning systems have become popular in fields such as marketing, finance, and data mining. While highly accurate, complex machine learning systems pose challenges for engineers and users alike: their inherent complexity makes it difficult to judge their fairness and the correctness of the statistically learned relations between variables and classes. Explainable AI aims to address this challenge by modelling explanations alongside the classifiers, potentially improving user trust and acceptance. However, users should not be fooled by persuasive yet untruthful explanations. We therefore conduct a user study investigating the effects of model accuracy and explanation fidelity, i.e. how truthfully the explanation represents the underlying model, on user trust. Our findings show that accuracy matters more for user trust than explainability. Adding an explanation to a classification result can even harm trust, e.g. when the explanation is nonsensical. We also found that high-fidelity explanations cannot trick users into trusting a bad classifier. Furthermore, we found a mismatch between observed (implicit) and self-reported (explicit) trust.
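To make the notion of explanation fidelity concrete, here is a minimal sketch (not the authors' code) of one common way to quantify it: train an interpretable surrogate on a black-box model's predictions, then measure how often the two agree on held-out data. The dataset, the scikit-learn models, and the simple 0/1 agreement metric are all illustrative assumptions.

```python
# Sketch: fidelity of a surrogate explanation, measured as agreement
# between the surrogate and the black-box model it is meant to explain.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box model whose behaviour the explanation should represent.
black_box = RandomForestClassifier(n_estimators=100, random_state=0)
black_box.fit(X_train, y_train)

# Interpretable surrogate trained on the black box's *predictions*,
# not on the ground-truth labels: it explains the model, not the task.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: fraction of held-out instances on which the surrogate
# agrees with the black box. High fidelity = a truthful explanation.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"explanation fidelity: {fidelity:.3f}")
```

In the study's terms, a low-fidelity explanation would be a surrogate that looks plausible but disagrees with the underlying model on many inputs, i.e. a persuasive yet untruthful explanation.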
