
Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy

01/28/2017
by   Tathagata Chakraborti, et al.

When AI systems interact with humans in the loop, they are often called on to explain their plans and behavior. Past work on plan explanations primarily involved the AI system explaining the correctness of its plan and the rationale for its decisions in terms of its own model. Such soliloquy is wholly inadequate in most realistic scenarios, where humans have domain and task models that differ significantly from the one used by the AI system. We posit that explanations are best studied in light of these differing models. In particular, we show how explanation can be seen as a "model reconciliation problem" (MRP), where the AI system in effect suggests changes to the human's model so as to make its plan optimal with respect to that changed human model. We study the properties of such explanations, present algorithms for automatically computing them, and evaluate the performance of the algorithms.
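The core idea of model reconciliation can be sketched concretely. A minimal illustration, with hypothetical names and a deliberately simplified planning abstraction (models reduced to per-action cost tables, plan optimality checked against a given set of alternative plans), is to search over subsets of the differences between the AI's model and the human's model for the smallest update that makes the AI's plan optimal in the updated human model:

```python
from itertools import combinations

def plan_cost(plan, model):
    # Cost of a plan: sum of per-action costs given by the model;
    # actions unknown to the model are treated as inexecutable.
    return sum(model.get(action, float("inf")) for action in plan)

def is_optimal(plan, model, candidate_plans):
    # A plan is optimal if no candidate plan is cheaper under the model.
    cost = plan_cost(plan, model)
    return all(cost <= plan_cost(p, model) for p in candidate_plans)

def minimal_explanation(robot_model, human_model, plan, candidate_plans):
    """Smallest-first search over subsets of model differences: return
    the fewest updates to the human model that make the robot's plan
    optimal (a toy sketch of the model reconciliation idea)."""
    diffs = [(a, c) for a, c in robot_model.items()
             if human_model.get(a) != c]
    for k in range(len(diffs) + 1):
        for subset in combinations(diffs, k):
            updated = dict(human_model)
            updated.update(dict(subset))
            if is_optimal(plan, updated, candidate_plans):
                return dict(subset)
    return None  # no explanation suffices

# Hypothetical example: the human underestimates the cost of action "b",
# so the robot's plan ["a", "c"] (cost 2) looks suboptimal to the human
# next to the alternative ["b"] (cost 1 in the human's model).
robot_model = {"a": 1, "b": 5, "c": 1}
human_model = {"a": 1, "b": 1, "c": 1}
plan = ["a", "c"]
alternatives = [["b"], ["a", "c"]]
print(minimal_explanation(robot_model, human_model, plan, alternatives))
# prints {'b': 5}: telling the human the true cost of "b" suffices
```

This captures the flavor of the approach, namely that the explanation is the minimal model change that restores optimality of the plan in the human's eyes, while the paper's algorithms operate over full planning models rather than cost tables.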
