From Philosophy to Interfaces: an Explanatory Method and a Tool Inspired by Achinstein's Theory of Explanation

09/09/2021
by Francesco Sovrano, et al.

We propose a new method for explanations in Artificial Intelligence (AI) and a tool to test its expressive power within a user interface. To bridge the gap between philosophy and human-computer interfaces, we present a new approach for generating interactive explanations, based on a pipeline of AI algorithms that structures natural language documents into knowledge graphs and answers questions effectively and satisfactorily. Among the mainstream philosophical theories of explanation, we identified the one that, in our view, is most readily applicable as a practical model for user-centric tools: Achinstein's Theory of Explanation. With this work we aim to show that Achinstein's theory can actually be adapted for implementation in a concrete software application, as an interactive question-answering process. To this end, we found a way to handle the generic (archetypal) questions that implicitly characterise an explanatory process as preliminary overviews rather than as answers to explicit questions, as commonly understood. To demonstrate the expressive power of this approach, we designed and implemented a pipeline of AI algorithms that generates interactive explanations in the form of overviews, focusing on this aspect of explanation rather than on existing interfaces and presentation-logic layers for question answering. We tested our hypothesis on a well-known XAI-powered credit approval system by IBM, comparing CEM, a static tool for post-hoc explanations, with an extension we developed that adds interactive explanations based on our model. A user study involving more than 100 participants showed that our proposed solution produced a statistically significant improvement in effectiveness (U=931.0, p=0.036) over the baseline, providing evidence in favour of our theory.
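The reported statistic (U=931.0, p=0.036) is characteristic of a Mann-Whitney U test comparing the two groups' effectiveness scores. As an illustration only (not the authors' code, and with hypothetical sample data), the statistic and a normal-approximation two-sided p-value can be computed in pure Python as:

```python
import math

def mann_whitney_u(sample_a, sample_b):
    """Mann-Whitney U statistic for sample_a vs. sample_b.
    Counts, over all pairs, how often a value from sample_a
    exceeds one from sample_b; ties count as 0.5."""
    u = 0.0
    for x in sample_a:
        for y in sample_b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

def two_sided_p(u, n_a, n_b):
    """Two-sided p-value via the normal approximation
    (no tie correction; adequate for large samples)."""
    mu = n_a * n_b / 2.0
    sigma = math.sqrt(n_a * n_b * (n_a + n_b + 1) / 12.0)
    z = (u - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))

# Hypothetical effectiveness scores for two groups (illustration only).
treatment = [5, 4, 5, 3, 4, 5]
control = [3, 2, 4, 3, 2, 3]
u = mann_whitney_u(treatment, control)
p = two_sided_p(u, len(treatment), len(control))
print(u, round(p, 3))
```

With the study's group sizes (over 100 participants in total), a library implementation such as `scipy.stats.mannwhitneyu` would typically be used instead; the sketch above only shows what the reported U and p values measure.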

