Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers

12/20/2022
by Damai Dai, et al.

Large pretrained language models have shown a surprising in-context learning (ICL) ability: given a few demonstration input-label pairs, they can predict the label for an unseen input without any additional parameter updates. Despite its strong empirical performance, the working mechanism of ICL remains an open problem. To better understand how ICL works, this paper explains language models as meta-optimizers and interprets ICL as a kind of implicit finetuning. Theoretically, we show that Transformer attention has a dual form of gradient-descent-based optimization. Building on this, we understand ICL as follows: GPT first produces meta-gradients from the demonstration examples, and these meta-gradients are then applied to the original GPT to build an ICL model. Experimentally, we comprehensively compare the behavior of ICL and explicit finetuning on real tasks to provide empirical evidence that supports our understanding. The results show that ICL behaves similarly to explicit finetuning at the prediction level, the representation level, and the attention-behavior level. Further, inspired by our meta-optimization view, we design a momentum-based attention by analogy with the momentum-based gradient descent algorithm. Its consistently better performance over vanilla attention supports our understanding from another angle and, more importantly, shows the potential of using our understanding for future model design.
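To make the "dual form" claim concrete, below is a minimal NumPy sketch of the idea under the paper's relaxation that softmax attention is approximated by linear attention. The variable names and sizes (W_q, W_k, W_v, X_demo, x_query, d_model, n_demo) are illustrative assumptions, not the authors' code: attention over the demonstration tokens can be rewritten as an implicit outer-product weight update (a "meta-gradient") applied to the query.

```python
# Hedged sketch: dual form of (relaxed) linear attention as an implicit
# gradient-descent-style weight update. All names and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_demo = 8, 4

# Random projections standing in for a trained attention head.
W_q = rng.standard_normal((d_model, d_model))
W_k = rng.standard_normal((d_model, d_model))
W_v = rng.standard_normal((d_model, d_model))

X_demo = rng.standard_normal((d_model, n_demo))   # demonstration tokens (columns)
x_query = rng.standard_normal((d_model, 1))       # the query token

q = W_q @ x_query

# Linear attention over the demonstrations (softmax removed):
#   F(q) = W_v X_demo (W_k X_demo)^T q
attn_out = W_v @ X_demo @ (W_k @ X_demo).T @ q

# Dual form: the same result expressed as an outer-product weight update
# accumulated from the demonstrations, analogous to a gradient-descent step:
#   Delta_W = sum_i (W_v x_i) (W_k x_i)^T
Delta_W = sum(
    np.outer(W_v @ X_demo[:, i], W_k @ X_demo[:, i]) for i in range(n_demo)
)
dual_out = Delta_W @ q

# The attention view and the "meta-gradient update" view coincide exactly.
assert np.allclose(attn_out, dual_out)
```

By the same analogy, the momentum-based attention mentioned in the abstract transfers the momentum idea from SGD to the attention computation; the sketch above covers only the basic dual form.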


