
Improving Neural Model Performance through Natural Language Feedback on Their Explanations

04/18/2021
by Aman Madaan, et al.

A class of explainable NLP models for reasoning tasks supports its decisions by generating free-form or structured explanations, but what happens when these supporting structures contain errors? Our goal is to allow users to interactively correct explanation structures through natural language feedback. We introduce MERCURIE, an interactive system that refines its explanations for a given reasoning task by soliciting human feedback in natural language. Our approach generates graphs that have 40% fewer inconsistencies than those of the off-the-shelf system. Further, simply appending the corrected explanation structures to the output yields a 1.2-point gain in accuracy on defeasible reasoning across all three domains. We release a dataset of over 450k graphs for defeasible reasoning generated by our system at https://tinyurl.com/mercurie .
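The refinement loop described above can be sketched as follows. This is a minimal, hypothetical illustration of an interactive generate-critique-refine cycle; `generate_graph`, `get_feedback`, and `refine` are stand-in callables, not the paper's actual API, and the toy graph is a list of edges rather than MERCURIE's real explanation structures.

```python
# Hypothetical sketch of an interactive feedback loop: generate an
# explanation graph, collect natural language feedback, and refine
# until no inconsistencies remain (or a round budget is exhausted).

def refine_with_feedback(task, generate_graph, get_feedback, refine, max_rounds=3):
    """Iteratively repair an explanation graph using NL feedback."""
    graph = generate_graph(task)           # off-the-shelf generator
    for _ in range(max_rounds):
        feedback = get_feedback(graph)     # human critique, or None if clean
        if feedback is None:
            break
        graph = refine(graph, feedback)    # regenerate conditioned on feedback
    return graph

# Toy usage: the "graph" is a list of cause->effect edges, and the
# feedback asks to drop one inconsistent edge.
def toy_generate(task):
    return [("rain", "wet ground"), ("rain", "sunny")]

def toy_feedback(graph):
    return "remove: rain -> sunny" if ("rain", "sunny") in graph else None

def toy_refine(graph, feedback):
    return [e for e in graph if e != ("rain", "sunny")]

result = refine_with_feedback("demo", toy_generate, toy_feedback, toy_refine)
print(result)  # [('rain', 'wet ground')]
```

The loop terminates either when the (here simulated) human reviewer has no remaining objections or after a fixed number of rounds, mirroring the paper's framing of iterative correction rather than one-shot generation.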


Related research:

NILE: Natural Language Inference with Faithful Natural Language Explanations (05/25/2020)

e-CARE: a New Dataset for Exploring Explainable Causal Reasoning (05/12/2022)

Learning to Explain: Answering Why-Questions via Rephrasing (06/04/2019)

LIREx: Augmenting Language Inference with Relevant Explanation (12/16/2020)

Impact of Feedback Type on Explanatory Interactive Learning (09/26/2022)

WT5?! Training Text-to-Text Models to Explain their Predictions (04/30/2020)

InterFair: Debiasing with Natural Language Feedback for Fair Interpretable Predictions (10/14/2022)