ZARA: Improving Few-Shot Self-Rationalization for Small Language Models

05/12/2023
by Wei-Lin Chen, et al.

Language models (LMs) that jointly generate end-task answers and free-text rationales are known as self-rationalization models. Recent work demonstrates substantial performance gains for self-rationalization from few-shot prompting LMs with rationale-augmented exemplars. However, the ability to benefit from explanations emerges only in large-scale LMs, which are difficult to access. In this work, we explore the less-studied setting of leveraging explanations to improve few-shot self-rationalization in small LMs. We first revisit the relationship between rationales and answers. Inspired by the implicit mental process by which humans assess explanations, we present a novel approach, Zero-shot Augmentation of Rationale-Answer pairs (ZARA), which automatically constructs pseudo-parallel data for self-training by reducing the problem of plausibility judgement to natural language inference. Experimental results show that ZARA achieves state-of-the-art performance on the FEB benchmark, on both task accuracy and the explanation metric. In addition, we conduct human and quantitative evaluations that validate ZARA's ability to automatically identify plausible and accurate rationale-answer pairs.
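
To make the core idea concrete, the sketch below shows one way plausibility judgement can be reduced to NLI with an off-the-shelf entailment model: a generated rationale is kept for self-training only if it entails its predicted answer. This is a minimal illustration based on the abstract, not the paper's implementation; the model choice (roberta-large-mnli), the premise/hypothesis templates, and the confidence threshold are all illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed off-the-shelf NLI model; ZARA's actual model choice may differ.
MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def entailment_prob(premise: str, hypothesis: str) -> float:
    """Probability that the premise entails the hypothesis."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1)[0]
    # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
    return probs[2].item()

def keep_pair(question: str, rationale: str, answer: str,
              threshold: float = 0.9) -> bool:
    """Keep a generated (rationale, answer) pair only if the rationale
    plausibly supports the answer, judged via NLI. The templates and
    threshold here are illustrative assumptions, not the paper's setup."""
    premise = f"{question} {rationale}"
    hypothesis = f"The answer is {answer}."
    return entailment_prob(premise, hypothesis) >= threshold

if __name__ == "__main__":
    q = "Is a whale a mammal?"
    r = "Whales are warm-blooded, breathe air, and nurse their young."
    a = "yes"
    print(keep_pair(q, r, a))
```

Pairs that pass this check would form the pseudo-parallel rationale-answer data on which the small LM is then self-trained.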


Related research

11/16/2021
Few-Shot Self-Rationalization with Natural Language Prompts
Self-rationalization models that predict task labels and generate free-t...

04/05/2022
Can language models learn from explanations in context?
Large language models can perform new tasks by adapting to a few in-cont...

12/21/2022
Crowd Score: A Method for the Evaluation of Jokes using Large Language Model AI Voters as Judges
This paper presents the Crowd Score, a novel method to assess the funnin...

07/06/2023
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large l...

06/05/2023
Few Shot Rationale Generation using Self-Training with Dual Teachers
Self-rationalizing models that also generate a free-text explanation for...

05/31/2023
Majority Rule: better patching via Self-Consistency
Large Language models (LLMs) can be induced to solve non-trivial problem...

11/17/2022
ProtSi: Prototypical Siamese Network with Data Augmentation for Few-Shot Subjective Answer Evaluation
Subjective answer evaluation is a time-consuming and tedious task, and t...
