
Teach Me to Explain: A Review of Datasets for Explainable NLP

by Sarah Wiegreffe et al.

Explainable NLP (ExNLP) has increasingly focused on collecting human-annotated explanations. These explanations are used downstream in three ways: as data augmentation to improve performance on a predictive task, as a loss signal to train models to produce explanations for their predictions, and as a means to evaluate the quality of model-generated explanations. In this review, we identify three predominant classes of explanations (highlights, free-text, and structured), organize the literature on annotating each type, point to what has been learned to date, and give recommendations for collecting ExNLP datasets in the future.
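For the third use case, evaluating model-generated explanations against human annotations, highlight-style explanations are often scored by token overlap. As a minimal illustrative sketch (the exact metric and function name here are assumptions; individual datasets define their own scoring protocols), token-level F1 between a predicted highlight and a gold annotation can be computed as:

```python
def highlight_f1(predicted, gold):
    """Token-level F1 between a model highlight and a human-annotated one.

    Both arguments are sets of token indices marking rationale spans.
    Illustrative only: real ExNLP benchmarks vary in how they score overlap.
    """
    if not predicted and not gold:
        # Both empty: treat as perfect agreement.
        return 1.0
    overlap = len(predicted & gold)
    if overlap == 0:
        return 0.0
    precision = overlap / len(predicted)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

# Example: the model highlights tokens 2-5, the annotator marked 3-6.
score = highlight_f1({2, 3, 4, 5}, {3, 4, 5, 6})  # 3 overlapping tokens -> F1 = 0.75
```

Free-text and structured explanations require different evaluation strategies (e.g., text-generation metrics or human judgment), which is part of why the review treats the three explanation classes separately.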

