Do Human Rationales Improve Machine Explanations?

05/31/2019
by Julia Strout, et al.

Work on "learning with rationales" shows that humans providing explanations to a machine learning system can improve the system's predictive accuracy. However, this work has not been connected to work in "explainable AI" which concerns machines explaining their reasoning to humans. In this work, we show that learning with rationales can also improve the quality of the machine's explanations as evaluated by human judges. Specifically, we present experiments showing that, for CNN- based text classification, explanations generated using "supervised attention" are judged superior to explanations generated using normal unsupervised attention.


Related research

- Harnessing Explanations to Bridge AI and Humans (03/16/2020): Machine learning models are increasingly integrated into societally crit...
- A Turing Test for Transparency (06/21/2021): A central goal of explainable artificial intelligence (XAI) is to improv...
- DAX: Deep Argumentative eXplanation for Neural Networks (12/10/2020): Despite the rapid growth in attention on eXplainable AI (XAI) of late, e...
- Towards Analogy-Based Explanations in Machine Learning (05/23/2020): Principles of analogical reasoning have recently been applied in the con...
- Reflective-Net: Learning from Explanations (11/27/2020): Humans possess a remarkable capability to make fast, intuitive decisions...
- Towards Human-Understandable Visual Explanations: Imperceptible High-frequency Cues Can Better Be Removed (04/16/2021): Explainable AI (XAI) methods focus on explaining what a neural network h...
- Teach Me to Explain: A Review of Datasets for Explainable NLP (02/24/2021): Explainable NLP (ExNLP) has increasingly focused on collecting human-ann...
