Don't Explain without Verifying Veracity: An Evaluation of Explainable AI with Video Activity Recognition

05/05/2020
by   Mahsan Nourani, et al.

Explainable machine learning and artificial intelligence models are used to justify a model's decision-making process. This added transparency aims to improve user performance and understanding of the underlying model. In practice, however, explainable systems face many open questions and challenges. In particular, designers may reduce a deep learning model's complexity in order to provide interpretability, but the explanations generated by such simplified models may not faithfully reflect the underlying model's behavior. Unfaithful explanations can further confuse users, who may not find them meaningful with respect to the model's predictions, and understanding how these explanations affect user behavior remains an open challenge. In this paper, we explore how explanation veracity affects user performance and agreement in intelligent systems. Through a controlled user study with an explainable activity recognition system, we compare variations in explanation veracity for a video review and querying task. The results suggest that low-veracity explanations significantly decrease user performance and agreement compared to both accurate explanations and a system without explanations. These findings demonstrate the importance of accurate and understandable explanations, and caution that poor explanations can sometimes be worse than no explanations with respect to their effect on user performance and reliance on an AI system.


