Diagnostics-Guided Explanation Generation

09/08/2021
by Pepa Atanasova, et al.

Explanations shed light on a machine learning model's rationales and can aid in identifying deficiencies in its reasoning process. Explanation generation models are typically trained in a supervised way given human explanations. When such annotations are not available, explanations are often selected as those portions of the input that maximise a downstream task's performance, which corresponds to optimising an explanation's Faithfulness to a given model. Faithfulness is one of several so-called diagnostic properties, which prior work has identified as useful for gauging the quality of an explanation without requiring annotations. Other diagnostic properties are Data Consistency, which measures how similar explanations are for similar input instances, and Confidence Indication, which shows whether the explanation reflects the confidence of the model. In this work, we show how to directly optimise for these diagnostic properties when training a model to generate sentence-level explanations, which markedly improves explanation quality, agreement with human rationales, and downstream task performance on three complex reasoning tasks.
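
To make the three diagnostic properties concrete, below is a minimal PyTorch-style sketch of how such property terms could be added to a downstream task loss when training a sentence-level explanation model. This is an illustration under stated assumptions, not the paper's exact formulation: the names (SketchExplainer, diagnostics_loss, the linear confidence probe) and the loss weights are all hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SketchExplainer(nn.Module):
    """Toy stand-in: scores each sentence, pools, and predicts a label."""
    def __init__(self, hidden=16, n_classes=3):
        super().__init__()
        self.scorer = nn.Linear(hidden, 1)        # per-sentence relevance score
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, sent_embeds):               # sent_embeds: (batch, n_sent, hidden)
        scores = self.scorer(sent_embeds).squeeze(-1)          # (batch, n_sent)
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)  # attention over sentences
        pooled = (weights * sent_embeds).sum(dim=1)            # (batch, hidden)
        return self.classifier(pooled), scores

def diagnostics_loss(model, x, x_perturbed, labels, probe,
                     l_faith=1.0, l_cons=0.5, l_conf=0.5):
    # Hypothetical joint objective: task loss plus one term per diagnostic
    # property; the weights l_faith/l_cons/l_conf are illustrative.
    logits, scores = model(x)
    task = F.cross_entropy(logits, labels)        # standard downstream objective

    # Faithfulness: predicting from the top-scored sentences alone should
    # agree with the full-input prediction.
    k = max(1, scores.size(1) // 2)
    top = scores.topk(k, dim=-1).indices
    mask = torch.zeros_like(scores).scatter_(1, top, 1.0).unsqueeze(-1)
    logits_expl, _ = model(x * mask)
    faith = F.kl_div(F.log_softmax(logits_expl, dim=-1),
                     F.softmax(logits.detach(), dim=-1),
                     reduction="batchmean")

    # Data Consistency: a lightly perturbed copy of the instance should
    # receive similar sentence-level explanation scores.
    _, scores_pert = model(x_perturbed)
    cons = F.mse_loss(scores, scores_pert)

    # Confidence Indication: a small probe should recover the model's
    # confidence (max class probability) from the explanation scores alone.
    confidence = F.softmax(logits, dim=-1).max(dim=-1).values.detach()
    conf = F.mse_loss(probe(scores).squeeze(-1), confidence)

    return task + l_faith * faith + l_cons * cons + l_conf * conf

# Usage on random stand-in sentence embeddings (8 sentences per instance):
model, probe = SketchExplainer(), nn.Linear(8, 1)
x = torch.randn(4, 8, 16)
loss = diagnostics_loss(model, x, x + 0.01 * torch.randn_like(x),
                        torch.randint(0, 3, (4,)), probe)
loss.backward()

In this sketch, faithfulness is approximated by comparing the prediction made from only the top-scored sentences against the full-input prediction, consistency penalises diverging scores for a perturbed copy of the same instance, and confidence indication trains a small probe to predict the model's confidence from the explanation scores alone.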


