Scan and Snap: Understanding Training Dynamics and Token Composition in 1-layer Transformer

05/25/2023
by Yuandong Tian, et al.

Transformer architecture has shown impressive performance across multiple research domains and has become the backbone of many neural network models. However, there is limited understanding of how it works. In particular, with a simple predictive loss, how the representation emerges from the gradient training dynamics remains a mystery. In this paper, for a 1-layer transformer with one self-attention layer plus one decoder layer, we analyze its SGD training dynamics for the task of next-token prediction in a mathematically rigorous manner. We open the black box of the dynamic process in which the self-attention layer combines input tokens, and reveal the nature of the underlying inductive bias. More specifically, under the assumptions that (a) there is no positional encoding, (b) the input sequence is long, and (c) the decoder layer learns faster than the self-attention layer, we prove that self-attention acts as a discriminative scanning algorithm: starting from uniform attention, it gradually attends more to the distinct key tokens for a specific next token to be predicted, and pays less attention to common key tokens that occur across different next tokens. Among distinct tokens, it progressively drops attention weights in order of low to high co-occurrence between the key and the query token in the training set. Interestingly, this procedure does not lead to winner-takes-all, but decelerates due to a phase transition that is controllable by the learning rates of the two layers, leaving an (almost) fixed token combination. We verify this scan and snap dynamics on synthetic and real-world data (WikiText).
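As a toy illustration of the scanning behavior described above, the following sketch is a deliberately simplified, assumption-laden setup (not the paper's actual model or data): a single softmax-attention logit vector plus a linear decoder are trained with SGD on synthetic sequences in which each next token co-occurs with one distinct key token and one common key token shared across both classes. With the decoder given a faster learning rate, the attention logits on the distinct keys should rise while the common key's logit falls.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 4                    # toy vocab: tokens 0,1 = distinct keys, 2 = common key, 3 = query
Z = np.zeros(V)          # attention logits of the query token over key tokens (uniform start)
U = np.zeros((2, V))     # decoder: maps attended embedding to logits over the 2 next tokens

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# assumption (c) of the abstract: decoder learns faster than self-attention
lr_attn, lr_dec = 0.05, 0.5

for step in range(2000):
    y = rng.integers(2)              # next token to predict (class 0 or 1)
    keys = np.array([y, 2])          # sequence: one distinct key (= y) + the common key 2
    a = softmax(Z[keys])             # attention weights over this sequence's keys
    ctx = np.zeros(V)
    for k, w in zip(keys, a):
        ctx[k] += w                  # attention-weighted sum of one-hot key embeddings
    p = softmax(U @ ctx)             # decoder's predicted distribution over next tokens
    # cross-entropy gradient at the decoder logits
    d_logit = p.copy()
    d_logit[y] -= 1.0
    U -= lr_dec * np.outer(d_logit, ctx)
    # backprop into the attention logits through the softmax
    d_ctx = U.T @ d_logit
    d_a = np.array([d_ctx[k] for k in keys])
    d_z = a * (d_a - (a * d_a).sum())
    for k, g in zip(keys, d_z):
        Z[k] -= lr_attn * g

# scanning effect: attention logits on distinct keys (0, 1) end up above the common key (2)
print(Z[0] > Z[2], Z[1] > Z[2])
```

In this sketch the common key carries no information about the next token, so the decoder weights attached to it stay near zero and the gradient steadily drains attention away from it toward the discriminative keys, mirroring (very loosely) the "scan" phase the abstract describes.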

Related research

Synthesizer: Rethinking Self-Attention in Transformer Models (05/02/2020)
The dot product self-attention is known to be central and indispensable ...

Predicting Token Impact Towards Efficient Vision Transformer (05/24/2023)
Token filtering to reduce irrelevant tokens prior to self-attention is a...

On the Importance of Local Information in Transformer Based Models (08/13/2020)
The self-attention module is a key component of Transformer-based models...

Attention is Not All You Need: Pure Attention Loses Rank Doubly Exponentially with Depth (03/05/2021)
Attention-based architectures have become ubiquitous in machine learning...

The Brownian motion in the transformer model (07/12/2021)
Transformer is the state of the art model for many language and visual t...

Transformers as Support Vector Machines (08/31/2023)
Since its inception in "Attention Is All You Need", transformer architec...

On Improving Adversarial Transferability of Vision Transformers (06/08/2021)
Vision transformers (ViTs) process input images as sequences of patches ...
