True Few-Shot Learning with Language Models

05/24/2021
by Ethan Perez et al.

Pretrained language models (LMs) perform well on many tasks even when learning from a few examples, but prior work uses many held-out examples to tune various aspects of learning, such as hyperparameters, training objectives, and natural language templates ("prompts"). Here, we evaluate the few-shot ability of LMs when such held-out examples are unavailable, a setting we call true few-shot learning. We test two model selection criteria, cross-validation and minimum description length, for choosing LM prompts and hyperparameters in the true few-shot setting. On average, both marginally outperform random selection and greatly underperform selection based on held-out examples. Moreover, selection criteria often prefer models that perform significantly worse than randomly-selected ones. We find similar results even when taking into account our uncertainty in a model's true performance during selection, as well as when varying the amount of computation and number of examples used for selection. Overall, our findings suggest that prior work significantly overestimated the true few-shot ability of LMs given the difficulty of few-shot model selection.
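The two selection criteria in the abstract can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's implementation: `score_fn` and `logprob_fn` are assumed hooks that query an LM with a candidate prompt, and the fold/block splitting is a generic K-fold scheme over only the few labeled examples (no held-out set).

```python
import random
from statistics import mean

def cross_validation_select(examples, prompts, score_fn, k=4, seed=0):
    """Pick the prompt with the best K-fold cross-validation score,
    using only the few-shot training examples (no held-out set)."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    folds = [shuffled[i::k] for i in range(k)]

    def cv_score(prompt):
        scores = []
        for i in range(k):
            held_out = folds[i]
            train = [ex for j, f in enumerate(folds) if j != i for ex in f]
            # score_fn (hypothetical) conditions the LM on `train` formatted
            # with `prompt` and returns accuracy on `held_out`.
            scores.append(score_fn(prompt, train, held_out))
        return mean(scores)

    return max(prompts, key=cv_score)

def mdl_select(examples, prompts, logprob_fn, k=4, seed=0):
    """Pick the prompt minimizing a prequential description length:
    the summed negative log-probability of each block of labels
    given the blocks seen so far."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    blocks = [shuffled[i::k] for i in range(k)]

    def description_length(prompt):
        total, seen = 0.0, []
        for block in blocks:
            # logprob_fn (hypothetical) returns the LM's total
            # log p(labels | prompt, seen) for the block.
            total -= logprob_fn(prompt, seen, block)
            seen.extend(block)
        return total

    return min(prompts, key=description_length)
```

Both criteria reuse the same few examples for training and validation, which is exactly why, as the abstract reports, they are noisy and only marginally better than random selection.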


06/24/2021

Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models

Prompting language models (LMs) with training examples and task descript...
11/26/2021

True Few-Shot Learning with Prompts – A Real-World Perspective

Prompt-based approaches are strong at few-shot learning. However, Perez ...
06/03/2021

Reordering Examples Helps during Priming-based Few-Shot Learning

The ability to learn from limited data, or few-shot learning, is a desir...
11/04/2021

CLUES: Few-Shot Learning Evaluation in Natural Language Understanding

Most recent progress in natural language understanding (NLU) has been dr...
02/15/2021

Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm

Prevailing methods for mapping large generative language models to super...
02/19/2021

Calibrate Before Use: Improving Few-Shot Performance of Language Models

GPT-3 can perform numerous tasks when provided a natural language prompt...
07/07/2022

Diagnosing and Remedying Shot Sensitivity with Cosine Few-Shot Learners

Few-shot recognition involves training an image classifier to distinguis...

Code Repositories

true_few_shot

Code for the paper "True Few-Shot Learning in Language Models" (https://arxiv.org/abs/2105.11447)
