Few-Shot Self-Rationalization with Natural Language Prompts

11/16/2021
by Ana Marasović, et al.

Self-rationalization models that predict task labels and generate free-text elaborations for their predictions could enable more intuitive interaction with NLP systems. These models are, however, currently trained with large amounts of human-written free-text explanations for each task, which hinders their broader usage. We propose to study a more realistic setting of self-rationalization using few training examples. We present FEB – a standardized collection of four existing English-language datasets and associated metrics. We identify the right prompting approach by extensively exploring natural language prompts on FEB. Then, by using this prompt and scaling the model size, we demonstrate that making progress on few-shot self-rationalization is possible. We show there is still ample room for improvement in this task: the average plausibility of generated explanations assessed by human annotators is at most 51%, while the plausibility of human explanations is 76%. We hope that our results spur the community to take on the few-shot self-rationalization challenge.
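As an illustration of what a natural-language prompt for self-rationalization might look like, here is a minimal sketch using a T5-style seq2seq model from Hugging Face transformers on an e-SNLI-style example. The model name, template wording, and generation settings are assumptions for illustration, not the paper's exact prompts or released code.

```python
# Illustrative sketch: prompt a seq2seq model to produce a task label followed
# by a free-text explanation ("<label>, because <explanation>").
# Model choice and template phrasing are assumptions, not the paper's setup.
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = "t5-base"  # assumption; the paper studies scaling to larger models
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

premise = "A man in a blue shirt is playing a guitar on stage."
hypothesis = "A musician is performing."

# Natural-language prompt whose target is expected to read
# "true, because ..." / "false, because ..." / "neither, because ...".
prompt = (
    f"explain nli premise: {premise} hypothesis: {hypothesis} "
    "Is the hypothesis true, false, or neither?"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In the few-shot setting, a handful of labeled examples with gold explanations would be used to fine-tune the model on prompts of this form before generating label-plus-explanation outputs for new inputs.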

