
The Emerging Landscape of Explainable AI Planning and Decision Making

by Tathagata Chakraborti, et al.
Arizona State University

In this paper, we provide a comprehensive outline of the different threads of work in Explainable AI Planning (XAIP) that have emerged as a focus area in the last couple of years, and contrast them with earlier efforts in the field in terms of techniques, target users, and delivery mechanisms. We hope that this survey will give new researchers in automated planning guidance on the role of explanations in the effective design of human-in-the-loop systems, and offer established researchers some perspective on the evolution of the exciting world of explainable planning.

