Overcoming a Theoretical Limitation of Self-Attention

02/24/2022
by David Chiang, et al.

Although transformers are remarkably effective for many tasks, there are some surprisingly easy-looking regular languages that they struggle with. Hahn shows that for languages where acceptance depends on a single input symbol, a transformer's classification decisions become less and less confident (that is, with cross-entropy approaching 1 bit per string) as input strings get longer and longer. We examine this limitation using two languages: PARITY, the language of bit strings with an odd number of 1s, and FIRST, the language of bit strings starting with a 1. We demonstrate three ways of overcoming the limitation suggested by Hahn's lemma. First, we settle an open question by constructing a transformer that recognizes PARITY with perfect accuracy, and similarly for FIRST. Second, we use layer normalization to bring the cross-entropy of both models arbitrarily close to zero. Third, when transformers need to focus on a single position, as for FIRST, we find that they can fail to generalize to longer strings; we offer a simple remedy to this problem that also improves length generalization in machine translation.
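To make the two benchmark languages concrete, here is a minimal sketch (not from the paper) that checks membership in PARITY and FIRST and generates labeled bit strings of a chosen length; the helper names `in_parity`, `in_first`, and `make_examples` are illustrative. Such fixed-length samples are the kind of input one would use to probe whether a classifier's confidence degrades as strings get longer, as the abstract describes.

```python
import random

def in_parity(s: str) -> bool:
    """PARITY: bit strings containing an odd number of 1s."""
    return s.count("1") % 2 == 1

def in_first(s: str) -> bool:
    """FIRST: bit strings whose first symbol is a 1."""
    return len(s) > 0 and s[0] == "1"

def make_examples(n: int, length: int, lang=in_parity, seed: int = 0):
    """Generate (string, label) pairs of a fixed length for a given language."""
    rng = random.Random(seed)
    return [
        (s, lang(s))
        for s in ("".join(rng.choice("01") for _ in range(length)) for _ in range(n))
    ]

if __name__ == "__main__":
    assert in_parity("1101")      # three 1s -> odd count -> accepted
    assert not in_parity("1001")  # two 1s -> even count -> rejected
    assert in_first("10")         # starts with 1 -> accepted
    assert not in_first("01")     # starts with 0 -> rejected
    print(make_examples(3, 8))    # three random length-8 strings with PARITY labels
```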
