Formal Language Recognition by Hard Attention Transformers: Perspectives from Circuit Complexity

04/13/2022
by Yiding Hao, et al.

This paper analyzes three formal models of Transformer encoders that differ in the form of their self-attention mechanism: unique hard attention (UHAT); generalized unique hard attention (GUHAT), which generalizes UHAT; and averaging hard attention (AHAT). We show that UHAT and GUHAT Transformers, viewed as string acceptors, can only recognize formal languages in the complexity class AC^0, the class of languages recognizable by families of Boolean circuits of constant depth and polynomial size. This upper bound subsumes Hahn's (2020) results that GUHAT cannot recognize the DYCK languages or the PARITY language, since those languages are outside AC^0 (Furst et al., 1984). In contrast, the non-AC^0 languages MAJORITY and DYCK-1 are recognizable by AHAT networks, implying that AHAT can recognize languages that UHAT and GUHAT cannot.
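To make the contrast concrete, the sketch below illustrates the usual intuition for why averaging hard attention can handle the non-AC^0 language MAJORITY: uniform (averaging) attention over all positions yields the mean of the value vectors, i.e. the fraction of 1s in the input, which a final threshold compares to 1/2. This is a simplified illustration only, not the paper's formal construction; the function name and the 1-dimensional values are hypothetical.

```python
# Minimal sketch (assumed, simplified model): an averaging-hard-attention head
# that recognizes MAJORITY over binary strings.

def majority_via_averaging_attention(s: str) -> bool:
    """Return True iff the binary string s contains more 1s than 0s."""
    if not s:
        return False
    # Hypothetical 1-dimensional value for each input symbol.
    values = [1.0 if c == "1" else 0.0 for c in s]
    # Averaging hard attention with uniform weights: every position attends
    # equally to all positions, so the head outputs the mean of the values,
    # i.e. the fraction of 1s in the input.
    attended = sum(values) / len(values)
    # A final layer thresholds the averaged value at 1/2.
    return attended > 0.5

# Usage examples.
assert majority_via_averaging_attention("11010")      # three 1s vs two 0s
assert not majority_via_averaging_attention("10010")  # two 1s vs three 0s
```

Unique hard attention, by contrast, attends to a single position per head, so no fixed number of heads and layers can aggregate a global count in this way; this is the informal gap that the AC^0 upper bound makes precise.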
