What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning

05/16/2023
by Jane Pan, et al.

Large language models (LLMs) exploit in-context learning (ICL) to solve tasks with only a few demonstrations, but its mechanisms are not yet well understood. Some works suggest that LLMs only recall concepts already learned during pre-training, while others hint that ICL performs implicit learning over the demonstrations. We characterize two ways in which ICL leverages demonstrations. Task recognition (TR) captures the extent to which LLMs can recognize a task through demonstrations – even without ground-truth labels – and apply their pre-trained priors, whereas task learning (TL) is the ability to capture new input-label mappings unseen in pre-training. Using a wide range of classification datasets and three LLM families (GPT-3, LLaMA, and OPT), we design controlled experiments to disentangle the roles of TR and TL in ICL. We show that (1) models can achieve non-trivial performance with TR alone, and TR does not improve further with larger models or more demonstrations; (2) LLMs acquire TL as the model scales, and TL performance consistently improves with more demonstrations in context. Our findings reveal two distinct forces behind ICL, and we advocate for distinguishing between them in future ICL research given their different nature.
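To make the TR/TL distinction concrete, the controlled comparisons hinge on which labels the in-context demonstrations carry: correct natural-language labels, randomly assigned labels (no usable input-label mapping, so only TR can help), or a correct mapping onto semantically empty symbols (a mapping unseen in pre-training, so success requires TL). Below is a minimal sketch of how such prompts might be constructed; the sentiment snippets, label words, and abstract symbols are illustrative placeholders, not the authors' exact datasets or configuration.

import random

# Sketch of the three demonstration settings used to separate task
# recognition (TR) from task learning (TL). All data below is illustrative.

GOLD_LABELS = ["negative", "positive"]   # natural-language labels
ABSTRACT_LABELS = ["@", "#"]             # semantically empty symbols

demos = [
    ("the movie was a delight", 1),
    ("a tedious, overlong mess", 0),
    ("sharp writing and great acting", 1),
    ("i wanted my two hours back", 0),
]

def build_prompt(demos, query, setting, seed=0):
    """Format k demonstrations followed by a test input.

    setting:
      "gold"     - correct natural-language labels (TR and TL both possible)
      "random"   - labels drawn uniformly at random (isolates TR)
      "abstract" - correct mapping onto meaningless symbols (isolates TL)
    """
    rng = random.Random(seed)
    lines = []
    for text, y in demos:
        if setting == "gold":
            label = GOLD_LABELS[y]
        elif setting == "random":
            label = rng.choice(GOLD_LABELS)
        elif setting == "abstract":
            label = ABSTRACT_LABELS[y]
        else:
            raise ValueError(f"unknown setting: {setting}")
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

if __name__ == "__main__":
    for setting in ("gold", "random", "abstract"):
        print(f"--- {setting} ---")
        print(build_prompt(demos, "an instant classic", setting))
        print()

Comparing a model's accuracy across the three settings then separates the two forces: performance that survives random labels points to TR, while the ability to follow the abstract mapping indicates TL.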


Related research

02/25/2022 · Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?
Large language models (LMs) are able to in-context learn – perform a new...

06/16/2022 · Self-Generated In-Context Learning: Leveraging Auto-regressive Language Models as a Demonstration Generator
Large-scale pre-trained language models (PLMs) are well-known for being ...

07/23/2023 · In-Context Learning in Large Language Models Learns Label Relationships but Is Not Conventional Learning
The performance of Large Language Models (LLMs) on downstream tasks ofte...

05/25/2022 · Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations
Despite recent explosion in research interests, in-context learning and ...

07/11/2023 · Towards Understanding In-Context Learning with Contrastive Demonstrations and Saliency Maps
We investigate the role of various demonstration components in the in-co...

01/27/2023 · Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Learning
In recent years, pre-trained large language models have demonstrated rem...

05/24/2023 · Coverage-based Example Selection for In-Context Learning
In-context learning (ICL), the ability of large language models to perfo...
