Towards an Explanation Space to Align Humans and Explainable-AI Teamwork

by Garrick Cabour, et al.

Providing meaningful and actionable explanations to end-users is a fundamental prerequisite for implementing explainable intelligent systems in the real world. Explainability is a situated interaction between a user and the AI system rather than a set of static design principles. The content of explanations is context-dependent and must be defined by evidence about the user and their context. This paper seeks to operationalize this concept by proposing a formative architecture that defines the explanation space from a user-inspired perspective. The architecture comprises five intertwined components that outline the explanation requirements for a task: (1) the end-users' mental models, (2) the end-users' cognitive process, (3) the user interface, (4) the human-explainer agent, and (5) the agent process. We first define each component of the architecture. We then present the Abstracted Explanation Space, a modeling tool that aggregates the architecture's components to support designers in systematically aligning explanations with end-users' work practices, needs, and goals. It guides the specification of what needs to be explained (content: end-users' mental model), why the explanation is necessary (context: end-users' cognitive process), how to explain it (format: human-explainer agent and user interface), and when the explanations should be given. We then exemplify the tool's use in an ongoing case study in the aircraft-maintenance domain. Finally, we discuss possible contributions of the tool, known limitations and areas for improvement, and future work.




