A Mathematical Theory of Attention

by James Vuckovic et al.

Attention is a powerful component of modern neural networks across a wide variety of domains. However, despite its ubiquity in machine learning, there is a gap in our understanding of attention from a theoretical point of view. We propose a framework to fill this gap by building a mathematically equivalent model of attention using measure theory. With this model, we are able to interpret self-attention as a system of self-interacting particles, we shed light on self-attention from a maximum entropy perspective, and we show that attention is actually Lipschitz-continuous (with an appropriate metric) under suitable assumptions. We then apply these insights to the problem of mis-specified input data; infinitely-deep, weight-sharing self-attention networks; and more general Lipschitz estimates for a specific type of attention studied in concurrent work.
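To ground the discussion, here is a minimal NumPy sketch of the standard scaled dot-product self-attention the paper models (the function names and random weights are illustrative, not from the paper). The measure-theoretic view corresponds to the fact that each row of the attention matrix is a probability distribution, so each output token is an expectation of the value vectors under that distribution.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention on a sequence X of shape (n, d).

    Each output row is a convex combination of the value vectors, i.e. an
    expectation under the softmax-normalized score distribution -- loosely,
    the probability-measure view the paper formalizes.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (n, n) interaction scores
    A = softmax(scores, axis=-1)              # each row is a probability measure
    return A @ V, A

rng = np.random.default_rng(0)
n, d = 5, 4
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out, A = self_attention(X, Wq, Wk, Wv)
```

Under this framing, the "self-interacting particles" interpretation reads each token as a particle whose update depends on its attention measure over all other tokens; the paper's Lipschitz estimates then bound how much the output measure can move when the input is perturbed.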

