Towards Reconciling Usability and Usefulness of Explainable AI Methodologies

01/13/2023
by Pradyumna Tambwekar, et al.

Interactive Artificial Intelligence (AI) agents are becoming increasingly prevalent in society. However, applying such systems without understanding them can be problematic. Black-box AI systems can lead to liability and accountability issues when they produce an incorrect decision. Explainable AI (XAI) seeks to bridge the knowledge gap between developers and end-users by offering insights into how an AI algorithm functions. Many modern approaches focus on making the AI model "transparent", i.e., unveiling the inherent functionality of the agent in a simpler format. However, these approaches do not cater to end-users of these systems, as users may not possess the requisite knowledge to understand such explanations in a reasonable amount of time. Therefore, to develop suitable XAI methods, we need to understand the factors that influence subjective perception and objective usability. In this paper, we present a novel user study of four differing XAI modalities commonly employed in prior work for explaining AI behavior, including Decision Trees, Text, and Programs. We study these XAI modalities in the context of explaining the actions of a self-driving car on a highway, as driving is an easily understandable real-world task and self-driving cars are an area of keen interest within the AI community. Our findings highlight internal consistency issues: participants perceived language explanations to be significantly more usable, yet they were better able to objectively understand the car's decision-making process through a decision-tree explanation. Our work also provides further evidence of the importance of integrating user-specific and situational criteria into the design of XAI systems. Our findings show that factors such as computer science experience, as well as whether participants watched the car succeed or fail, can impact the perception and usefulness of an explanation.
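To make the contrast between modalities concrete, the sketch below is a minimal, hypothetical illustration (not the study's actual stimuli) of how the same highway-driving decision might be explained as a decision tree (the branching logic of decide) versus as text (explain_as_text). The state features, thresholds, and function names are assumptions introduced purely for illustration.

```python
# Illustrative sketch only: a toy highway-driving policy whose branching
# structure doubles as a decision-tree explanation, plus a plain-language
# paraphrase of the same decision. All names and thresholds are hypothetical.

from dataclasses import dataclass


@dataclass
class HighwayState:
    """Hypothetical features an explanation might reference."""
    gap_ahead_m: float      # distance to the car directly ahead, in meters
    left_lane_clear: bool   # whether the left lane has room to merge
    speed_kmh: float        # current speed of the ego vehicle


def decide(state: HighwayState) -> str:
    """Toy policy; its if/else structure is the decision-tree explanation."""
    if state.gap_ahead_m < 30:
        if state.left_lane_clear:
            return "change_lane_left"
        return "brake"
    if state.speed_kmh < 100:
        return "accelerate"
    return "keep_lane"


def explain_as_text(state: HighwayState, action: str) -> str:
    """Language-modality explanation of the same decision."""
    if action == "change_lane_left":
        return ("The car changed lanes because the gap ahead was under 30 m "
                "and the left lane was clear.")
    if action == "brake":
        return ("The car braked because the gap ahead was under 30 m "
                "and no adjacent lane was clear.")
    if action == "accelerate":
        return "The car accelerated because it was below its target speed of 100 km/h."
    return "The car kept its lane because the road ahead was clear and it was at speed."


if __name__ == "__main__":
    s = HighwayState(gap_ahead_m=22.0, left_lane_clear=True, speed_kmh=95.0)
    a = decide(s)
    print(a)                      # change_lane_left
    print(explain_as_text(s, a))  # the text-modality version of the same action
```

The two functions present identical decision logic in different formats, which is the kind of presentation difference whose effect on perceived usability versus objective understanding the study measures.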

