Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models

by Robert L. Logan IV et al.

Prompting language models (LMs) with training examples and task descriptions has been seen as critical to recent successes in few-shot learning. In this work, we show that finetuning LMs in the few-shot setting can considerably reduce the need for prompt engineering. In fact, one can use null prompts, prompts that contain neither task-specific templates nor training examples, and achieve competitive accuracy to manually-tuned prompts across a wide range of tasks. While finetuning LMs does introduce new parameters for each downstream task, we show that this memory overhead can be substantially reduced: finetuning only the bias terms can achieve comparable or better accuracy than standard finetuning while only updating 0.1% of the parameters. All in all, we recommend finetuning LMs for few-shot learning as it is more accurate, robust to different prompts, and can be made nearly as efficient as using frozen LMs.
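The bias-only finetuning described above can be sketched in plain PyTorch: freeze every parameter whose name does not end in "bias" so that only bias terms receive gradient updates. This is a minimal illustration on a toy feed-forward model (the paper's experiments use pretrained LMs); the helper name `freeze_all_but_bias` is our own, not from the paper.

```python
import torch.nn as nn

def freeze_all_but_bias(model: nn.Module) -> float:
    """Freeze all parameters except bias terms; return the trainable fraction."""
    trainable, total = 0, 0
    for name, param in model.named_parameters():
        # Only parameters named "...bias" will be updated by the optimizer.
        param.requires_grad = name.endswith("bias")
        total += param.numel()
        if param.requires_grad:
            trainable += param.numel()
    return trainable / total

# Toy model standing in for a transformer feed-forward block.
model = nn.Sequential(nn.Linear(768, 3072), nn.ReLU(), nn.Linear(3072, 768))
frac = freeze_all_but_bias(model)
print(f"trainable fraction: {frac:.4%}")
```

On this toy block the bias terms are under 0.1% of all parameters, in line with the memory savings the abstract reports for full LMs.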



