Transformers learn to implement preconditioned gradient descent for in-context learning

06/01/2023
by Kwangjun Ahn, et al.

Motivated by the striking ability of transformers to perform in-context learning, several works have demonstrated that transformers can implement algorithms like gradient descent. By a careful construction of weights, these works show that multiple layers of transformers are expressive enough to simulate gradient descent iterations. Going beyond the question of expressivity, we ask: Can transformers learn to implement such algorithms by training over random problem instances? To our knowledge, we make the first theoretical progress toward this question via an analysis of the loss landscape for linear transformers trained over random instances of linear regression. For a single attention layer, we prove that the global minimum of the training objective implements a single iteration of preconditioned gradient descent. Notably, the preconditioning matrix not only adapts to the input distribution but also to the variance induced by data inadequacy. For a transformer with k attention layers, we prove that certain critical points of the training objective implement k iterations of preconditioned gradient descent. Our results call for future theoretical studies on learning algorithms by training transformers.
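As a rough illustration of the kind of construction the abstract refers to, the sketch below checks numerically that a single linear self-attention layer with hand-built weights reproduces one step of preconditioned gradient descent on an in-context linear-regression prompt. The concrete choices here are assumptions made for illustration, not the paper's trained weights: the preconditioner A = Sigma^{-1}, the mask M that lets the query token attend only to context tokens, and the placement of A inside the key-query matrix W_KQ (with W_PV writing into the label coordinate) are one hand-crafted instantiation of the expressivity direction.

import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 20

# One random in-context linear-regression instance (illustrative setup).
Sigma = np.diag(rng.uniform(0.5, 2.0, size=d))             # input covariance
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)    # context inputs, shape (n, d)
w_star = rng.normal(size=d)
y = X @ w_star                                              # noiseless context labels
x_q = rng.multivariate_normal(np.zeros(d), Sigma)           # query input

# Hypothetical preconditioner; the paper's point is that the *learned* one
# adapts to the input distribution (and, with imperfect data, to the induced variance).
A = np.linalg.inv(Sigma)

# One step of preconditioned gradient descent from w0 = 0 on
#   L(w) = (1/2n) * sum_i (w^T x_i - y_i)^2,  so  grad L(0) = -(1/n) X^T y.
w1 = A @ (X.T @ y) / n                       # w1 = w0 - A @ grad L(w0)
pred_gd = x_q @ w1

# The same prediction from one linear attention layer with hand-built weights.
# Tokens: z_i = (x_i, y_i) for the context, z_query = (x_q, 0).
Z = np.column_stack([np.vstack([X.T, y]), np.append(x_q, 0.0)])   # shape (d+1, n+1)
M = np.diag(np.append(np.ones(n), 0.0))      # mask: the query attends only to context tokens
W_KQ = np.zeros((d + 1, d + 1))
W_KQ[:d, :d] = A.T                           # key-query weights carry the preconditioner
W_PV = np.zeros((d + 1, d + 1))
W_PV[d, d] = 1.0                             # value/projection weights write to the label row
Z_out = Z + W_PV @ (Z @ M @ Z.T / n) @ W_KQ @ Z    # linear attention update (no softmax)
pred_attn = Z_out[d, n]                      # label coordinate of the query token

print(pred_gd, pred_attn)                    # the two predictions coincide (up to float error)

In this noiseless setting, one such preconditioned step with A close to Sigma^{-1} already predicts well because A @ (X.T @ X / n) is close to the identity for large n; the paper's contribution, per the abstract, is showing that training over random instances actually drives a linear transformer toward implementing steps of this form.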
