FLamE: Few-shot Learning from Natural Language Explanations

06/13/2023
by Yangqiaoyu Zhou, et al.

Natural language explanations have the potential to provide rich information that in principle guides model reasoning. Yet, recent work by Lampinen et al. (2022) has shown limited utility of natural language explanations in improving classification. To effectively learn from explanations, we present FLamE, a two-stage few-shot learning framework that first generates explanations using GPT-3, and then finetunes a smaller model (e.g., RoBERTa) with the generated explanations. Our experiments on natural language inference demonstrate effectiveness over strong baselines, increasing accuracy by 17.6% over GPT-3 Babbage and 5.7% over GPT-3 Davinci. Despite improving classification performance, human evaluation surprisingly reveals that the majority of generated explanations does not adequately justify classification decisions. Additional analyses point to the important role of label-specific cues (e.g., "not know" for the neutral label) in generated explanations.
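As a concrete illustration of the two-stage setup described in the abstract, here is a minimal sketch in Python, not the authors' released code. Stage 1 is stubbed out with a placeholder generate_explanation function (the paper uses GPT-3 for this step), and stage 2 finetunes RoBERTa on inputs that append the generated explanation to the premise-hypothesis pair. The function names, toy example, and training loop are all illustrative assumptions.

```python
# Minimal sketch of a two-stage explain-then-classify pipeline.
# Stage 1 (explanation generation) is a stub standing in for a GPT-3 call;
# stage 2 finetunes RoBERTa conditioned on the generated explanation.
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

LABELS = {"entailment": 0, "neutral": 1, "contradiction": 2}

def generate_explanation(premise: str, hypothesis: str) -> str:
    # Placeholder for the stage-1 model that produces a free-text
    # explanation for the (premise, hypothesis) pair.
    return "The premise does not say whether the hypothesis holds."

# Toy few-shot training data: (premise, hypothesis, label).
EXAMPLES = [
    ("A man plays guitar on stage.", "A musician performs.", "entailment"),
]

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=3
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

model.train()
for premise, hypothesis, label in EXAMPLES:
    explanation = generate_explanation(premise, hypothesis)  # stage 1
    # Stage 2: pair the premise/hypothesis with the explanation so the
    # classifier can condition on it when predicting the NLI label.
    enc = tokenizer(
        premise + " " + hypothesis,
        explanation,
        return_tensors="pt",
        truncation=True,
    )
    out = model(**enc, labels=torch.tensor([LABELS[label]]))
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

One sensible design choice in this sketch is feeding the explanation as the second segment of the tokenizer's input pair, so the smaller model sees it as distinct context rather than part of the premise; how exactly FLamE formats explanations for finetuning is detailed in the paper itself.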


