The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations

07/28/2021
by Upol Ehsan, et al.

Explainability of AI systems is critical for users to take informed actions and hold systems accountable. While "opening the opaque box" is important, understanding who opens the box can govern whether the Human-AI interaction is effective. In this paper, we conduct a mixed-methods study of how two different groups of "whos" (people with and without a background in AI) perceive different types of AI explanations. These groups were chosen to examine how disparities in AI background can exacerbate the creator-consumer gap. Quantitatively, we report perceptions along five dimensions: confidence, intelligence, understandability, second chance, and friendliness. Qualitatively, we highlight how AI background influences each group's interpretations and elucidate why the differences might exist through the lenses of appropriation and cognitive heuristics. We find that (1) both groups had unwarranted faith in numbers, to different extents and for different reasons; (2) each group found explanatory value in explanations beyond the usage we designed them for; and (3) each group had different requirements for what counts as a humanlike explanation. Using our findings, we discuss potential negative consequences, such as harmful manipulation of user trust, and propose design interventions to mitigate them. By bringing conscious awareness to how and why AI backgrounds shape the perceptions of potential creators and consumers in XAI, our work takes a formative step toward advancing a pluralistic Human-centered Explainable AI discourse.

