Transformers Can Be Expressed In First-Order Logic with Majority

10/06/2022
by William Merrill, et al.

Characterizing the implicit structure of the computation within neural networks is a foundational problem in the area of deep learning interpretability. Can the inner decision process of neural networks be captured symbolically in some familiar logic? We show that any fixed-precision transformer neural network can be translated into an equivalent fixed-size FO(M) formula, i.e., a first-order logic formula that, in addition to standard universal and existential quantifiers, may also contain majority-vote quantifiers. The proof idea is to design highly uniform Boolean threshold circuits that can simulate transformers, and then leverage known theoretical connections between circuits and logic. Our results reveal a surprisingly simple formalism for capturing the behavior of transformers, show that simple problems like integer division are "transformer-hard", and provide valuable insights for comparing transformers to other models like RNNs. Our results suggest that first-order logic with majority may be a useful language for expressing programs extracted from transformers.
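To make the majority quantifier concrete, here is a minimal toy sketch in Python (illustrative only, not drawn from the paper; the helper names majority and exists are hypothetical). In FO(M), a formula Mx. φ(x) evaluated over an input of length n is true iff φ(x) holds at strictly more than half of the n positions, alongside the standard universal and existential quantifiers.

    # Toy evaluator for quantifiers over string positions (illustrative only).
    # In FO(M), Mx. phi(x) is true iff phi holds at strictly more than half
    # of the n input positions; exists is the standard FO quantifier.

    def majority(n, phi):
        """True iff phi(x) holds for strictly more than n/2 positions x."""
        return 2 * sum(1 for x in range(n) if phi(x)) > n

    def exists(n, phi):
        return any(phi(x) for x in range(n))

    # Example: "more than half of the input symbols are 'a'", a property
    # expressible in FO(M) but not in plain first-order logic over strings.
    w = "abaaab"
    print(majority(len(w), lambda x: w[x] == "a"))  # True (4 of 6 positions)

This counting ability is what separates FO(M) from plain first-order logic, and it mirrors the threshold gates in the circuit construction the proof relies on.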


Related research

07/02/2022  Log-Precision Transformers are Constant-Depth Uniform Threshold Circuits
We prove that transformer neural networks with logarithmic precision in ...

08/06/2023  Average-Hard Attention Transformers are Constant-Depth Uniform Threshold Circuits
Transformers have emerged as a widely used neural network model for vari...

01/25/2023  Tighter Bounds on the Expressivity of Transformer Encoders
Characterizing neural networks in terms of better-understood formal syst...

06/30/2021  On the Power of Saturated Transformers: A View from Circuit Complexity
Transformers have become a standard architecture for many NLP problems. ...

02/25/2021  How to represent part-whole hierarchies in a neural network
This paper does not describe a working system. Instead, it presents a si...

05/05/2023  A technical note on bilinear layers for interpretability
The ability of neural networks to represent more features than neurons m...

05/24/2023  Can Transformers Learn to Solve Problems Recursively?
Neural networks have in recent years shown promise for helping software ...
