
Why Don't You Do Something About It? Outlining Connections between AI Explanations and User Actions

by Gennie Mansi et al.
Georgia Institute of Technology

A core assumption of explainable AI systems is that explanations change what users know, thereby enabling them to act within their complex socio-technical environments. Despite the centrality of action, explanations are often organized and evaluated based on technical aspects. Prior work varies widely in the connections it traces between information provided in explanations and resulting user actions. An important first step in centering action in evaluations is understanding what the XAI community collectively recognizes as the range of information that explanations can present and what actions are associated with them. In this paper, we present our framework, which maps prior work on information presented in explanations and user action, and we discuss the gaps we uncovered about the information presented to users.


Related research:
- Metrics for Explainable AI: Challenges and Prospects
- Explainable Object-induced Action Decision for Autonomous Vehicles
- Subgoal-Based Explanations for Unreliable Intelligent Decision Support Systems
- Towards Providing Explanations for AI Planner Decisions
- Introspection-based Explainable Reinforcement Learning in Episodic and Non-episodic Scenarios
- Explainability Pitfalls: Beyond Dark Patterns in Explainable AI
- Generating User-friendly Explanations for Loan Denials using GANs