Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models

06/24/2021
by Robert L. Logan IV, et al.

Prompting language models (LMs) with training examples and task descriptions has been seen as critical to recent successes in few-shot learning. In this work, we show that finetuning LMs in the few-shot setting can considerably reduce the need for prompt engineering. In fact, one can use null prompts, prompts that contain neither task-specific templates nor training examples, and achieve competitive accuracy to manually-tuned prompts across a wide range of tasks. While finetuning LMs does introduce new parameters for each downstream task, we show that this memory overhead can be substantially reduced: finetuning only the bias terms can achieve comparable or better accuracy than standard finetuning while only updating 0.1% of the parameters. All in all, we recommend finetuning LMs for few-shot learning as it is more accurate, robust to different prompts, and can be made nearly as efficient as using frozen LMs.
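The "0.1% of the parameters" figure comes from finetuning only the bias terms of the model. A minimal sketch of how that fraction arises, using hypothetical parameter shapes for a single transformer block with hidden size 768 (the names and sizes below are illustrative assumptions, not the paper's code):

```python
def select_bias_parameters(param_sizes):
    """Return the parameter names that stay trainable under
    bias-only finetuning: everything named `*.bias`."""
    return [name for name in param_sizes if name.endswith(".bias")]

def trainable_fraction(param_sizes):
    """Fraction of all parameter elements that bias-only
    finetuning actually updates."""
    trainable = sum(param_sizes[n] for n in select_bias_parameters(param_sizes))
    return trainable / sum(param_sizes.values())

# Hypothetical per-block parameter element counts (hidden size 768).
params = {
    "attn.qkv.weight": 768 * 3 * 768,
    "attn.qkv.bias": 3 * 768,
    "attn.out.weight": 768 * 768,
    "attn.out.bias": 768,
    "mlp.fc1.weight": 768 * 3072,
    "mlp.fc1.bias": 3072,
    "mlp.fc2.weight": 3072 * 768,
    "mlp.fc2.bias": 768,
}
print(f"biases are {trainable_fraction(params):.4%} of parameters")
```

Because every bias vector is smaller than its weight matrix by a factor of the hidden size, the trainable fraction for these shapes works out to roughly 0.1%, consistent with the memory-overhead claim in the abstract.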


