Continued Pretraining for Better Zero- and Few-Shot Promptability

10/19/2022
by   Zhaofeng Wu, et al.

Recently introduced language model prompting methods can achieve high accuracy in zero- and few-shot settings while requiring few to no learned task-specific parameters. Nevertheless, these methods still often trail behind full model finetuning. In this work, we investigate whether a dedicated continued pretraining stage could improve "promptability", i.e., zero-shot performance with natural language prompts or few-shot performance with prompt tuning. We reveal settings where existing continued pretraining methods lack promptability. We also identify current methodological gaps, which we fill with thorough large-scale experiments. We demonstrate that a simple recipe, continued pretraining that incorporates a trainable prompt during multi-task learning, leads to improved promptability in both zero- and few-shot settings compared to existing methods, up to 31% relative. On the other hand, we find that continued pretraining using MAML-style meta-learning, a method that directly optimizes few-shot promptability, yields subpar performance. We validate our findings with two prompt tuning methods, and, based on our results, we provide concrete recommendations to optimize promptability for different use cases.
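
The core recipe in the abstract, a trainable soft prompt prepended to the input and updated jointly with the model during multi-task continued pretraining, can be sketched roughly as follows. This is a minimal illustration assuming a HuggingFace T5 backbone and standard prompt-tuning mechanics; the model name, prompt length, optimizer settings, and the `training_step` helper are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of continued pretraining with a
# trainable soft prompt updated jointly with the model on a task mixture.
# Base model ("t5-base"), prompt length, and hyperparameters are assumptions.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "t5-base"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

prompt_len = 100
# Initialize the soft prompt from randomly sampled vocabulary embeddings.
vocab_embeds = model.get_input_embeddings().weight.detach()
init_ids = torch.randint(0, vocab_embeds.size(0), (prompt_len,))
soft_prompt = torch.nn.Parameter(vocab_embeds[init_ids].clone())

# During continued pretraining, both the model and the prompt are trained;
# for few-shot prompt tuning one would freeze the model and update only
# soft_prompt.
optimizer = torch.optim.AdamW(list(model.parameters()) + [soft_prompt], lr=1e-4)

def training_step(input_texts, target_texts):
    enc = tokenizer(input_texts, return_tensors="pt", padding=True, truncation=True)
    labels = tokenizer(target_texts, return_tensors="pt", padding=True,
                       truncation=True).input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss

    inputs_embeds = model.get_input_embeddings()(enc.input_ids)
    batch_size = inputs_embeds.size(0)
    # Prepend the shared soft prompt to every example in the batch.
    prompted = torch.cat(
        [soft_prompt.unsqueeze(0).expand(batch_size, -1, -1), inputs_embeds], dim=1)
    prompt_mask = torch.ones(batch_size, prompt_len, dtype=enc.attention_mask.dtype)
    attention_mask = torch.cat([prompt_mask, enc.attention_mask], dim=1)

    loss = model(inputs_embeds=prompted, attention_mask=attention_mask,
                 labels=labels).loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A training loop would call `training_step` on batches drawn from a mixture of prompted tasks; at few-shot evaluation time the same prompt-prepending mechanics apply, with only the prompt parameters left trainable.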


