Updater-Extractor Architecture for Inductive World State Representations

04/12/2021
by Arseny Moskvichev, et al.

Developing NLP models traditionally involves two stages: training and application. Retention of information acquired after training (at application time) is architecturally limited by the size of the model's context window (in the case of transformers), or by the practical difficulties associated with long sequences (in the case of RNNs). In this paper, we propose a novel transformer-based Updater-Extractor architecture and a training procedure that together can work with sequences of arbitrary length and refine the model's knowledge about the world based on linguistic inputs. We explicitly train the model to incorporate incoming information into its world state representation, obtaining strong inductive generalization and the ability to handle extremely long-range dependencies. We prove a lemma that provides a theoretical basis for our approach. The result also provides insight into the success and failure modes of models trained with variants of Truncated Back-Propagation Through Time (such as Transformer XL). Empirically, we investigate the model's performance on three different tasks, demonstrating its promise. This preprint is still a work in progress. At present, we have focused on easily interpretable tasks, leaving practical NLP applications of the proposed ideas for future work.
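The abstract describes the Updater-Extractor split only at a high level, so the following is a minimal, hypothetical PyTorch sketch rather than the authors' implementation: an Updater that cross-attends a fixed set of state slots over each incoming chunk of input, and an Extractor that answers queries from the resulting state alone. The module names, dimensions, slot-based state, and classification head are all assumptions made for the sake of a runnable example.

import torch
import torch.nn as nn


class Updater(nn.Module):
    # Folds an incoming chunk of token embeddings into a fixed-size world-state memory.
    def __init__(self, d_model=128, n_heads=4, n_state_slots=16, n_layers=2):
        super().__init__()
        self.init_slots = nn.Parameter(torch.randn(n_state_slots, d_model) * 0.02)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.cross = nn.TransformerDecoder(layer, num_layers=n_layers)

    def initial_state(self, batch_size):
        return self.init_slots.unsqueeze(0).expand(batch_size, -1, -1)

    def forward(self, state, chunk_emb):
        # state: (B, n_state_slots, d_model); chunk_emb: (B, chunk_len, d_model)
        # The state slots cross-attend over the new chunk; the result is the updated state.
        return self.cross(tgt=state, memory=chunk_emb)


class Extractor(nn.Module):
    # Answers a query from the current world state alone, without re-reading the input.
    def __init__(self, d_model=128, n_heads=4, n_answers=10, n_layers=2):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.cross = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_answers)

    def forward(self, query_emb, state):
        # query_emb: (B, query_len, d_model); the query attends to the state to produce an answer.
        out = self.cross(tgt=query_emb, memory=state)
        return self.head(out.mean(dim=1))


# Toy usage: refine the state chunk by chunk, then query it.
updater, extractor = Updater(), Extractor()
state = updater.initial_state(batch_size=2)
for chunk in torch.randn(5, 2, 32, 128):  # 5 chunks of 32 token embeddings each
    state = updater(state, chunk)          # sequence length is unbounded in principle
logits = extractor(torch.randn(2, 1, 128), state)
print(logits.shape)                        # torch.Size([2, 10])

In this sketch the state has constant size, so memory and compute per chunk do not grow with the total sequence length; this is the property that, per the abstract, allows the approach to handle arbitrarily long inputs and extremely long-range dependencies.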
