ERASER: A Benchmark to Evaluate Rationalized NLP Models

11/08/2019
by Jay DeYoung, et al.

State-of-the-art models in NLP are now predominantly based on deep neural networks that are generally opaque in terms of how they come to specific predictions. This limitation has led to increased interest in designing more interpretable deep models for NLP that can reveal the "reasoning" underlying model outputs. But work in this direction has been conducted on different datasets and tasks with correspondingly unique aims and metrics; this makes it difficult to track progress. We propose the Evaluating Rationales And Simple English Reasoning (ERASER) benchmark to advance research on interpretable models in NLP. This benchmark comprises multiple datasets and tasks for which human annotations of "rationales" (supporting evidence) have been collected. We propose several metrics that aim to capture how well the rationales provided by models align with human rationales, and also how faithful these rationales are (i.e., the degree to which provided rationales influenced the corresponding predictions). Our hope is that releasing this benchmark facilitates progress on designing more interpretable NLP systems. The benchmark, code, and documentation are available at: www.eraserbenchmark.com.
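To make the two kinds of metrics in the abstract concrete, below is a minimal Python sketch (not the official ERASER implementation): token-level F1 as one way to measure agreement between predicted and human rationales, and a faithfulness score in the spirit of "comprehensiveness", i.e. how much the model's confidence in its prediction drops when the rationale tokens are removed. The function names and the `predict_proba` callable are illustrative assumptions, not part of the released benchmark code.

```python
from typing import Callable, List, Sequence


def rationale_token_f1(predicted: Sequence[int], human: Sequence[int]) -> float:
    """Token-level F1 between predicted and human rationale token indices."""
    pred, gold = set(predicted), set(human)
    if not pred or not gold:
        return 0.0
    overlap = len(pred & gold)
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def comprehensiveness(
    tokens: List[str],
    rationale_idx: Sequence[int],
    label: int,
    predict_proba: Callable[[List[str]], List[float]],
) -> float:
    """Drop in predicted probability for `label` after deleting the rationale.

    A larger drop suggests the rationale tokens genuinely influenced the
    prediction, i.e. the rationale is more faithful to the model's decision.
    `predict_proba` is assumed to map a token list to class probabilities.
    """
    full_prob = predict_proba(tokens)[label]
    kept = [t for i, t in enumerate(tokens) if i not in set(rationale_idx)]
    reduced_prob = predict_proba(kept)[label]
    return full_prob - reduced_prob
```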


