Recasting Self-Attention with Holographic Reduced Representations

05/31/2023
by Mohammad Mahmudul Alam, et al.

In recent years, self-attention has become the dominant paradigm for sequence modeling in a variety of domains. However, in domains with very long sequence lengths, the 𝒪(T^2) memory and 𝒪(T^2 H) compute costs can make using transformers infeasible. Motivated by problems in malware detection, where sequence lengths of T ≥ 100,000 are a roadblock to deep learning, we recast self-attention using the neuro-symbolic approach of Holographic Reduced Representations (HRR). In doing so, we follow the same high-level strategy as standard self-attention: a set of queries is matched against a set of keys, and a weighted response of the values is returned for each key. Implemented as a “Hrrformer”, we obtain several benefits, including 𝒪(T H log H) time complexity, 𝒪(T H) space complexity, and convergence in 10× fewer epochs. Despite these efficiency gains, the Hrrformer achieves near state-of-the-art accuracy on the Long Range Arena (LRA) benchmarks, and we are able to learn with just a single layer. Combined, these benefits make our Hrrformer the first viable Transformer for such long malware classification sequences and up to 280× faster to train on LRA. Code is available at <https://github.com/NeuromorphicComputationResearchProgram/Hrrformer>
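To make the idea concrete, below is a minimal sketch of the HRR primitives the abstract alludes to, written in plain NumPy. The function names (hrr_bind, hrr_unbind, hrr_style_attention) are illustrative and not taken from the authors' repository, and the attention step omits the weighting, normalization, multi-head, and learned-projection details of the actual Hrrformer; see the linked code for the real implementation.

```python
# Illustrative sketch only: HRR binding/unbinding and an HRR-style attention step.
import numpy as np

def hrr_bind(a, b):
    # Binding is circular convolution, computed in O(H log H) via the FFT.
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=a.shape[-1])

def hrr_inverse(a):
    # Approximate (pseudo-)inverse under circular convolution: index reversal.
    return np.roll(a[..., ::-1], 1, axis=-1)

def hrr_unbind(s, a):
    # Retrieve a noisy copy of whatever was bound with `a` inside the superposition `s`.
    return hrr_bind(s, hrr_inverse(a))

def hrr_style_attention(q, k, v):
    # Illustrative composition (not the exact Hrrformer formulation): bind each key
    # to its value, superpose all bindings into one H-dimensional vector, then let
    # every query unbind its own approximate response.
    # Cost: O(T H log H) time and O(T H) memory for inputs of shape (T, H).
    s = hrr_bind(k, v).sum(axis=0, keepdims=True)   # (1, H) superposition
    return hrr_unbind(s, q)                          # (T, H) noisy value responses

# Toy usage: HRR theory assumes vectors drawn from N(0, 1/H).
T, H = 8, 64
rng = np.random.default_rng(0)
k = rng.normal(0.0, 1.0 / np.sqrt(H), size=(T, H))
v = rng.normal(0.0, 1.0 / np.sqrt(H), size=(T, H))
q = k                                        # queries that exactly match the keys
responses = hrr_style_attention(q, k, v)     # each row approximates the matching v
```

The point of the sketch is that the superposition has a fixed size H regardless of the sequence length T, which is where the 𝒪(T H) memory and 𝒪(T H log H) time claims in the abstract come from.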

Related research

- Long-Short Transformer: Efficient Transformers for Language and Vision (07/05/2021)
- Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator (05/24/2023)
- Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention (02/07/2021)
- MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning (11/17/2019)
- Classifying Sequences of Extreme Length with Constant Memory Applied to Malware Detection (12/17/2020)
- FNet: Mixing Tokens with Fourier Transforms (05/09/2021)
- Self-attention Does Not Need O(n^2) Memory (12/10/2021)
