Thinking Like Transformers

06/13/2021
by Gail Weiss, et al.

What is the computational model behind a Transformer? Where recurrent neural networks have direct parallels in finite state machines, allowing clear discussion and thought around architecture variants or trained models, Transformers have no such familiar parallel. In this paper we aim to change that, proposing a computational model for the transformer-encoder in the form of a programming language. We map the basic components of a transformer-encoder – attention and feed-forward computation – into simple primitives, around which we form a programming language: the Restricted Access Sequence Processing Language (RASP). We show how RASP can be used to program solutions to tasks that could conceivably be learned by a Transformer, and how a Transformer can be trained to mimic a RASP solution. In particular, we provide RASP programs for histograms, sorting, and Dyck-languages. We further use our model to relate their difficulty in terms of the number of required layers and attention heads: analyzing a RASP program implies a maximum number of heads and layers necessary to encode a task in a transformer. Finally, we see how insights gained from our abstraction might be used to explain phenomena seen in recent works.
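To make the mapping concrete, the following is a minimal Python sketch, not the paper's reference implementation, of two RASP-style primitives named in the abstract's description (an attention-like select and its derived selector_width, plus the averaging aggregate), together with the histogram program the abstract mentions. The function names and exact semantics here are illustrative assumptions rather than the paper's formal definitions.

# Sketch of RASP-style primitives, simulated over plain Python lists.
# select   : builds a boolean "attention pattern" from keys, queries, and a predicate.
# aggregate: averages values over each position's selected set (attention-like pooling).
# selector_width: counts how many positions each query selects.

def select(keys, queries, predicate):
    """sel[q][k] is True iff predicate(keys[k], queries[q])."""
    return [[predicate(k, q) for k in keys] for q in queries]

def aggregate(selector, values, default=0.0):
    """For each query position, average `values` over its selected key positions."""
    out = []
    for row in selector:
        picked = [v for flag, v in zip(row, values) if flag]
        out.append(sum(picked) / len(picked) if picked else default)
    return out

def selector_width(selector):
    """Number of key positions selected by each query."""
    return [sum(row) for row in selector]

def histogram(tokens):
    """hist[i] = number of occurrences of tokens[i] in the whole sequence."""
    same_tok = select(tokens, tokens, lambda k, q: k == q)
    return selector_width(same_tok)

print(histogram(list("hello")))  # [1, 1, 2, 2, 1]

In this reading, each select/aggregate pair plays the role of one attention head, and elementwise Python expressions play the role of feed-forward computation, which is how counting the select/aggregate steps in a program suggests an upper bound on the layers and heads a transformer would need for the task.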
