Explainable AI and Adoption of Algorithmic Advisors: an Experimental Study

01/05/2021
by   Daniel Ben David, et al.

Machine learning is becoming a commonplace part of our technological experience. The notion of explainable AI (XAI) is attractive when regulatory or usability considerations necessitate the ability to back decisions with a coherent explanation. A large body of research has addressed algorithmic methods of XAI, but it remains unclear which of them are best suited to fostering human cooperation with, and adoption of, automatic systems. Here we develop an experimental methodology in which participants play a web-based game, during which they receive advice from either a human or an algorithmic advisor, accompanied by explanations that vary in nature across experimental conditions. Using a reference-dependent decision-making framework, we evaluate the game results over time and in various key situations to determine whether the different types of explanations affect readiness to adopt, willingness to pay for, and trust in a financial AI consultant. We find that the types of explanations that promote adoption during a first encounter differ from those that are most successful following failure or when cost is involved. Furthermore, participants are willing to pay more for AI advice that includes explanations. These results add to the literature on the importance of XAI for algorithmic adoption and trust.

