Play MNIST For Me! User Studies on the Effects of Post-Hoc, Example-Based Explanations and Error Rates on Debugging a Deep Learning, Black-Box Classifier

09/10/2020
by   Courtney Ford, et al.

This paper reports two experiments (N=349) on the impact of post-hoc explanations-by-example and error rates on people's perceptions of a black-box classifier. Both experiments show that when people are given case-based explanations, from an implemented ANN-CBR twin system, they perceive misclassifications to be more correct. They also show that as error rates increase above 4%, people perceive the classifier as less correct, less reasonable, and less trustworthy. The implications of these results for XAI are discussed.
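The case-based explanation strategy described above can be illustrated with a minimal sketch: a prediction is "explained" by retrieving the most similar training case in the network's feature space. This is only an illustration of the general twin-system idea, not the paper's implementation; the feature vectors, function names, and data below are hypothetical stand-ins.

```python
import numpy as np

def explain_by_example(query_feat, train_feats, train_labels):
    """Post-hoc explanation-by-example: return the index and label of the
    nearest training case to the query in feature space (Euclidean distance).
    In a twin-system setup, these features would come from the ANN's
    penultimate layer; here they are synthetic stand-ins."""
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    idx = int(np.argmin(dists))
    return idx, train_labels[idx]

# Toy data standing in for learned MNIST feature vectors (hypothetical).
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(100, 16))       # 100 training cases, 16-d features
train_labels = rng.integers(0, 10, size=100)   # digit labels 0-9

# A query that is a slightly perturbed copy of training case 42:
query = train_feats[42] + 0.01 * rng.normal(size=16)
idx, label = explain_by_example(query, train_feats, train_labels)
# idx identifies the training example shown to the user as the explanation
```

In a real twin system the retrieved training image would be displayed alongside the classifier's prediction, letting users judge whether a (mis)classification "makes sense" given the most similar case the network was trained on.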


