Improving Neural Model Performance through Natural Language Feedback on Their Explanations

by   Aman Madaan, et al.

A class of explainable NLP models for reasoning tasks supports its decisions by generating free-form or structured explanations, but what happens when these supporting structures contain errors? Our goal is to allow users to interactively correct explanation structures through natural language feedback. We introduce MERCURIE, an interactive system that refines its explanations for a given reasoning task by getting human feedback in natural language. Our approach generates graphs that have 40% fewer inconsistencies than the off-the-shelf system. Further, simply appending the corrected explanation structures to the output leads to a gain of 1.2 points of accuracy on defeasible reasoning across all three domains. We release a dataset of over 450k graphs for defeasible reasoning generated by our system.
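The abstract describes an iterative loop: the system emits an explanation graph, a human critiques it in natural language, and the system revises the graph until no errors remain. A minimal sketch of that loop is below; the function and callback names are illustrative assumptions, not the authors' actual API.

```python
# Hypothetical sketch of a MERCURIE-style refinement loop (names are
# illustrative, not the paper's implementation).

def refine_explanation(graph, get_feedback, corrector, max_rounds=3):
    """Iteratively repair an explanation graph using natural language feedback.

    get_feedback(graph) returns a natural-language critique, or None when the
    human reports no remaining errors; corrector(graph, feedback) returns a
    revised graph.
    """
    for _ in range(max_rounds):
        feedback = get_feedback(graph)   # human critique in natural language
        if feedback is None:             # no errors reported: stop refining
            break
        graph = corrector(graph, feedback)  # model revises the graph
    return graph


# Toy usage: the feedback flags one wrong edge label, the corrector fixes it,
# and the next round reports no errors.
def toy_feedback(graph):
    return "edge label should be 'weakens'" if graph["label"] == "strengthens" else None

def toy_corrector(graph, feedback):
    return {**graph, "label": "weakens"}

fixed = refine_explanation({"label": "strengthens"}, toy_feedback, toy_corrector)
```

In this sketch `fixed["label"]` ends up as `"weakens"` after a single correction round; a real system would replace the toy callbacks with a human-in-the-loop interface and a learned graph corrector.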



