Subgoal-Based Explanations for Unreliable Intelligent Decision Support Systems

01/11/2022
by Devleena Das et al.

Intelligent decision support (IDS) systems leverage artificial intelligence techniques to generate recommendations that guide human users through the decision-making phases of a task. However, a key challenge is that IDS systems are not perfect, and in complex real-world scenarios may produce incorrect output or fail to work altogether. The field of explainable AI planning (XAIP) has sought to develop techniques that make the decision making of sequential decision-making AI systems more explainable to end-users. Critically, prior work applying XAIP techniques to IDS systems has assumed that the plan proposed by the planner is always optimal, and therefore that the action or plan recommended as decision support to the user is always correct. In this work, we examine novice user interactions with a non-robust IDS system – one that occasionally recommends the wrong action, and one that may become unavailable after users have become accustomed to its guidance. We introduce a novel explanation type, subgoal-based explanations, for planning-based IDS systems, which supplements traditional IDS output with information about the subgoal toward which the recommended action would contribute. We demonstrate that subgoal-based explanations lead to improved user task performance, improve users' ability to distinguish optimal and suboptimal IDS recommendations, are preferred by users, and enable more robust user performance in the case of IDS failure.
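To make the idea concrete, the sketch below illustrates one way a planner's recommended action could be paired with the subgoal it contributes toward. This is a minimal illustration, not the authors' implementation; the names `Subgoal`, `Recommendation`, and `explain_with_subgoal` are hypothetical, and the toy plan assumes subgoals are simply groups of planner actions.

```python
# Minimal sketch (hypothetical, not from the paper) of attaching a
# subgoal-based explanation to an IDS action recommendation.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Subgoal:
    description: str        # e.g. "Attach bracket to frame"
    actions: List[str]      # planner actions that realize this subgoal


@dataclass
class Recommendation:
    action: str             # the action the IDS suggests next
    subgoal: Optional[str]  # the subgoal-based explanation, if any


def explain_with_subgoal(recommended_action: str,
                         subgoals: List[Subgoal]) -> Recommendation:
    """Pair a recommended action with the subgoal it contributes toward.

    If the action appears in no subgoal (for instance, because the
    recommendation is wrong), no subgoal explanation is attached, which
    can itself cue the user that the recommendation is suspect.
    """
    for subgoal in subgoals:
        if recommended_action in subgoal.actions:
            return Recommendation(recommended_action, subgoal.description)
    return Recommendation(recommended_action, None)


# Toy usage with a two-subgoal plan:
plan_subgoals = [
    Subgoal("Gather required parts", ["pick_screw", "pick_bracket"]),
    Subgoal("Attach bracket to frame", ["align_bracket", "tighten_screw"]),
]
print(explain_with_subgoal("align_bracket", plan_subgoals))
```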


