Transformer Feed-Forward Layers Are Key-Value Memories

12/29/2020
by Mor Geva, et al.

Feed-forward layers constitute two-thirds of a transformer model's parameters, yet their role in the network remains under-explored. We show that feed-forward layers in transformer-based language models operate as key-value memories, where each key correlates with textual patterns in the training examples, and each value induces a distribution over the output vocabulary. Our experiments show that the learned patterns are human-interpretable, and that lower layers tend to capture shallow patterns, while upper layers learn more semantic ones. The values complement the keys' input patterns by inducing output distributions that concentrate probability mass on tokens likely to appear immediately after each pattern, particularly in the upper layers. Finally, we demonstrate that the output of a feed-forward layer is a composition of its memories, which is subsequently refined throughout the model's layers via residual connections to produce the final output distribution.
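To make the key-value reading concrete, here is a minimal PyTorch sketch, not taken from the paper: the dimensions, random weights, and the ReLU activation (standing in for the model's actual nonlinearity) are illustrative assumptions. It implements the view FF(x) = f(x·K^T)·V, where the rows of the first matrix act as keys, the rows of the second as values, and the output is a weighted composition of values added back through the residual connection.

```python
import torch
import torch.nn.functional as F

# Hypothetical dimensions, chosen only for illustration.
d_model, d_ff, vocab = 768, 3072, 50257

# In the key-value reading, each of the d_ff memory cells pairs a key
# (a row of the first FFN matrix) with a value (a row of the second).
K = torch.randn(d_ff, d_model)   # key k_i   = K[i]
V = torch.randn(d_ff, d_model)   # value v_i = V[i]

def ffn_as_memory(x):
    """FF(x) = f(x @ K.T) @ V, biases omitted for simplicity.
    Each memory coefficient m_i = f(x . k_i) measures how strongly the
    input matches key i and weights the contribution of value v_i."""
    m = F.relu(x @ K.T)          # memory coefficients, shape (d_ff,)
    return m @ V                 # output = weighted composition of values

x = torch.randn(d_model)         # hidden state at one position
y = ffn_as_memory(x) + x         # residual connection refines the stream

# A value can be inspected as a distribution over the output vocabulary
# by projecting it through the output embedding matrix (random here).
E = torch.randn(vocab, d_model)
p0 = torch.softmax(E @ V[0], dim=-1)   # distribution induced by value v_0
```

The last two lines mirror the paper's claim that each value induces a distribution over the vocabulary: projecting a single value vector through the output embedding and applying softmax reveals which tokens that memory cell promotes.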

Related research

03/28/2022 · Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space
Transformer-based language models (LMs) are at the core of modern NLP, b...

12/23/2021 · Forward Composition Propagation for Explainable Neural Reasoning
This paper proposes an algorithm called Forward Composition Propagation ...

02/01/2023 · Feed-Forward Blocks Control Contextualization in Masked Language Models
Understanding the inner workings of neural network models is a crucial s...

09/04/2023 · One Wide Feedforward is All You Need
The Transformer architecture has two main non-embedding components: Atte...

08/09/2021 · Paint Transformer: Feed Forward Neural Painting with Stroke Prediction
Neural painting refers to the procedure of producing a series of strokes...

10/23/2022 · SC-wLS: Towards Interpretable Feed-forward Camera Re-localization
Visual re-localization aims to recover camera poses in a known environme...

09/17/2022 · Introspective Learning: A Two-Stage Approach for Inference in Neural Networks
In this paper, we advocate for two stages in a neural network's decision...
