
Transformer Feed-Forward Layers Are Key-Value Memories

by Mor Geva et al.

Feed-forward layers constitute two-thirds of a transformer model's parameters, yet their role in the network remains under-explored. We show that feed-forward layers in transformer-based language models operate as key-value memories, where each key correlates with textual patterns in the training examples, and each value induces a distribution over the output vocabulary. Our experiments show that the learned patterns are human-interpretable, and that lower layers tend to capture shallow patterns, while upper layers learn more semantic ones. The values complement the keys' input patterns by inducing output distributions that concentrate probability mass on tokens likely to appear immediately after each pattern, particularly in the upper layers. Finally, we demonstrate that the output of a feed-forward layer is a composition of its memories, which is subsequently refined throughout the model's layers via residual connections to produce the final output distribution.
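The key-value reading maps directly onto the standard two-matrix feed-forward block, FF(x) = f(x·K^T)·V: rows of the first matrix act as keys that detect input patterns, rows of the second matrix act as values, and projecting a value through the output embedding exposes the vocabulary distribution it induces. Below is a minimal, self-contained sketch of this view (not the authors' code); the toy dimensions, random weights, GELU activation, and the readout embedding matrix E are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Toy sizes for illustration; real models use e.g. d_model=768, d_ff=3072.
d_model, d_ff, vocab_size = 64, 256, 1000

K = torch.randn(d_ff, d_model) * 0.02        # keys: one per FF hidden unit
V = torch.randn(d_ff, d_model) * 0.02        # values: one per FF hidden unit
E = torch.randn(vocab_size, d_model) * 0.02  # output embedding used as readout (assumption)

x = torch.randn(d_model)                     # a single token's hidden state

# Memory coefficients: how strongly each key's pattern fires on this input.
m = F.gelu(x @ K.T)                          # shape: (d_ff,)

# The layer's output is a weighted composition of the value vectors.
ff_out = m @ V                               # shape: (d_model,)

# Each value can be read as a distribution over the vocabulary by projecting
# it through the output embedding, which is how values are interpreted here.
value_logits = V @ E.T                       # shape: (d_ff, vocab_size)
top_tokens_per_value = value_logits.topk(5, dim=-1).indices

# A residual connection then carries this output forward, where later layers
# continue to refine it into the final output distribution.
x_next = x + ff_out
```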




Related papers:
- Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space
- Forward Composition Propagation for Explainable Neural Reasoning
- Feed-Forward Blocks Control Contextualization in Masked Language Models
- One Wide Feedforward is All You Need
- Paint Transformer: Feed Forward Neural Painting with Stroke Prediction
- SC-wLS: Towards Interpretable Feed-forward Camera Re-localization
- Introspective Learning: A Two-Stage Approach for Inference in Neural Networks