Self-Attention Networks Can Process Bounded Hierarchical Languages

05/24/2021, by Shunyu Yao, et al.

Despite their impressive performance in NLP, self-attention networks were recently proved to be limited for processing formal languages with hierarchical structure, such as 𝖣𝗒𝖼𝗄_k, the language consisting of well-nested parentheses of k types. This suggested that either natural language can be approximated well with models that are too weak for formal languages, or that the role of hierarchy and recursion in natural language might be limited. We qualify this implication by proving that self-attention networks can process 𝖣𝗒𝖼𝗄_{k,D}, the subset of 𝖣𝗒𝖼𝗄_k with nesting depth bounded by D, which arguably better captures the bounded hierarchical structure of natural language. Specifically, we construct a hard-attention network with D+1 layers and O(log k) memory size (per token per layer) that recognizes 𝖣𝗒𝖼𝗄_{k,D}, and a soft-attention network with two layers and O(log k) memory size that generates 𝖣𝗒𝖼𝗄_{k,D}. Experiments show that self-attention networks trained on 𝖣𝗒𝖼𝗄_{k,D} generalize to longer inputs with near-perfect accuracy, and also verify the theoretical memory advantage of self-attention networks over recurrent networks.
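
To make the target language concrete, here is a minimal reference recognizer for 𝖣𝗒𝖼𝗄_{k,D}, i.e. well-nested brackets over k types whose nesting depth never exceeds D. This is only an illustrative sketch of the language definition, not the paper's attention-based construction; the function name and the token format (strings like "(3" and ")3" for an open/close bracket of type 3) are assumptions made for this example.

```python
def is_dyck_k_d(tokens, k, D):
    """Return True iff `tokens` is a well-nested bracket string over k types
    whose nesting depth never exceeds D (membership in Dyck_{k,D})."""
    stack = []
    for tok in tokens:
        kind, typ = tok[0], int(tok[1:])   # e.g. "(3" -> ("(", 3)
        if not (1 <= typ <= k):
            return False                   # unknown bracket type
        if kind == "(":
            stack.append(typ)
            if len(stack) > D:             # depth bound D violated
                return False
        elif kind == ")":
            if not stack or stack.pop() != typ:
                return False               # unmatched or mismatched close
        else:
            return False
    return not stack                       # every open bracket must be closed

# Examples: a depth-2 string over 2 types, and a depth-3 string rejected for D=2.
print(is_dyck_k_d(["(1", "(2", ")2", ")1"], k=2, D=2))                # True
print(is_dyck_k_d(["(1", "(1", "(1", ")1", ")1", ")1"], k=1, D=2))    # False
```

The explicit stack here is only a specification device: the paper's point is that once the depth is bounded by D, a self-attention network with D+1 layers (hard attention) or two layers (soft attention) can perform the same check or generation without an unbounded stack.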
