
Temporal Latent Bottleneck: Synthesis of Fast and Slow Processing Mechanisms in Sequence Learning

05/30/2022
by Aniket Didolkar, et al.

Recurrent neural networks have a strong inductive bias towards learning temporally compressed representations, as the entire history of a sequence is represented by a single vector. By contrast, Transformers have little inductive bias towards learning temporally compressed representations, as they allow for attention over all previously computed elements in a sequence. Having a more compressed representation of a sequence may be beneficial for generalization, as a high-level representation may be more easily re-used and re-purposed and will contain fewer irrelevant details. At the same time, excessive compression of representations comes at the cost of expressiveness. We propose a solution that divides computation into two streams. A slow stream that is recurrent in nature aims to learn a specialized and compressed representation by forcing chunks of K time steps into a single representation, which is divided into multiple vectors. At the same time, a fast stream, parameterized as a Transformer, processes chunks of K time steps conditioned on the information in the slow stream. With the proposed approach we hope to gain the expressiveness of the Transformer while encouraging better compression and structuring of representations in the slow stream. We show the benefits of the proposed method in terms of improved sample efficiency and generalization performance as compared to various competitive baselines for visual perception and sequential decision-making tasks.
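
To make the two-stream mechanism concrete, the following is a minimal sketch assuming a PyTorch setup: the fast stream is a Transformer layer applied within each chunk of K time steps and conditioned on the slow stream via cross-attention, while the slow stream is a small set of recurrent state vectors updated once per chunk. The class name, chunk size, and number of slow-state vectors are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of the fast/slow two-stream idea described in the abstract.
# Hyperparameters and module names are illustrative assumptions.
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Residual cross-attention block: queries attend over a context set."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, queries, context):
        out, _ = self.attn(queries, context, context)
        return self.norm(queries + out)

class TemporalLatentBottleneckSketch(nn.Module):
    def __init__(self, dim=128, chunk_size=8, num_slow_vectors=16, heads=4):
        super().__init__()
        self.chunk_size = chunk_size
        # Slow stream: a small set of state vectors, updated once per chunk.
        self.slow_init = nn.Parameter(torch.randn(num_slow_vectors, dim))
        # Fast stream: a Transformer layer applied within each chunk.
        self.fast_self = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        # The fast stream reads from (is conditioned on) the slow stream...
        self.read_slow = CrossAttention(dim, heads)
        # ...and the slow stream is updated by attending over the processed chunk.
        self.write_slow = CrossAttention(dim, heads)

    def forward(self, x):
        # x: (batch, seq_len, dim); seq_len assumed to be a multiple of chunk_size.
        b, t, _ = x.shape
        slow = self.slow_init.unsqueeze(0).expand(b, -1, -1)
        outputs = []
        for start in range(0, t, self.chunk_size):
            chunk = x[:, start:start + self.chunk_size]
            # Fast stream: self-attention within the chunk, conditioned on the slow state.
            fast = self.fast_self(chunk)
            fast = self.read_slow(fast, slow)
            # Slow stream: compress the processed chunk into the recurrent state vectors.
            slow = self.write_slow(slow, fast)
            outputs.append(fast)
        return torch.cat(outputs, dim=1), slow

# Usage: model = TemporalLatentBottleneckSketch(); y, state = model(torch.randn(2, 32, 128))
```

Updating the slow state only once every chunk_size steps is what creates the temporal bottleneck in this sketch: information can persist across chunks only if it is written into the small set of slow vectors.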


Related research

02/27/2021 · Transformers with Competitive Ensembles of Independent Mechanisms
An important development in deep learning from the earliest MLPs has bee...

06/12/2022 · InBiaseD: Inductive Bias Distillation to Improve Generalization and Robustness through Shape-awareness
Humans rely less on spurious correlations and trivial cues, such as text...

12/11/2020 · Unsupervised Learning of slow features for Data Efficient Regression
Research in computational neuroscience suggests that the human brain's u...

02/21/2020 · Accessing Higher-level Representations in Sequential Transformers with Feedback Memory
Transformers are feedforward networks that can process input tokens in p...

03/05/2021 · Slow-Fast Auditory Streams For Audio Recognition
We propose a two-stream convolutional network for audio recognition, tha...

03/30/2023 · Learning in Factored Domains with Information-Constrained Visual Representations
Humans learn quickly even in tasks that contain complex visual informati...

11/02/2019 · FCEM: A Novel Fast Correlation Extract Model For Real Time Steganalysis of VoIP Stream via Multi-head Attention
Extracting correlation features between codes-words with high computatio...