A Measure of Explanatory Effectiveness

05/20/2023
by Dylan Cope et al.

In most conversations about explanation and AI, the recipient of the explanation (the explainee) is conspicuously absent, despite the problem being ultimately communicative in nature. We pose the problem of explaining AI systems in terms of a two-player cooperative game in which each agent seeks to maximise our proposed measure of explanatory effectiveness. This measure serves as a foundation for the automated assessment of explanations, in terms of the effects that any given action in the game has on the internal state of the explainee.
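The abstract does not reproduce the paper's formal definitions, so the following is only a minimal, hypothetical Python sketch of the setup it describes: an explainer and an explainee play a cooperative game, the explainee's internal state is modelled as a belief distribution over candidate task rules, and effectiveness is proxied by the explainee's post-explanation task accuracy. All names here (RULES, Explainee, effectiveness) and the accuracy proxy are illustrative assumptions, not the paper's actual measure.

```python
import random

# Hypothetical candidate rules the explainee entertains for a toy labelling task.
RULES = {
    "even": lambda x: x % 2 == 0,
    "positive": lambda x: x > 0,
    "small": lambda x: abs(x) < 5,
}

class Explainee:
    """Assumed internal state: a belief distribution over candidate rules."""

    def __init__(self):
        self.beliefs = {name: 1.0 / len(RULES) for name in RULES}

    def observe(self, x, label):
        # Update the internal state: discard rules inconsistent with the
        # explainer's labelled example, then renormalise.
        for name, rule in RULES.items():
            if rule(x) != label:
                self.beliefs[name] = 0.0
        total = sum(self.beliefs.values()) or 1.0
        self.beliefs = {n: w / total for n, w in self.beliefs.items()}

    def predict(self, x):
        # Act according to the currently most-believed rule.
        best = max(self.beliefs, key=self.beliefs.get)
        return RULES[best](x)

def effectiveness(explainee, true_rule, test_inputs):
    """Proxy measure: the explainee's task accuracy after the explanation,
    i.e. an observable effect of the game on its internal state."""
    correct = sum(explainee.predict(x) == true_rule(x) for x in test_inputs)
    return correct / len(test_inputs)

# One round of the cooperative game: the explainer chooses labelled examples
# (its "explanatory actions") and the explainee updates its beliefs.
true_rule = RULES["even"]
student = Explainee()
for x in (2, 3, -4):
    student.observe(x, true_rule(x))

tests = [random.randint(-10, 10) for _ in range(100)]
print(f"explanatory effectiveness = {effectiveness(student, true_rule, tests):.2f}")
```

In the paper's framing, both players would select actions to maximise this quantity; the accuracy proxy above merely stands in for whatever state-dependent measure the full text defines.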


