RITA: Group Attention is All You Need for Timeseries Analytics

06/02/2023
by Jiaming Liang, et al.

Timeseries analytics is of great importance in many real-world applications. Recently, the Transformer model, popular in natural language processing, has been leveraged to learn high-quality feature embeddings from timeseries, which are core to the performance of various timeseries analytics tasks. However, the quadratic time and space complexity of self-attention limits the Transformer's scalability, especially on long timeseries. To address this, we develop RITA, a timeseries analytics tool built on a novel attention mechanism named group attention. Group attention dynamically clusters the objects into a small number of groups based on their similarity and approximately computes the attention at the coarse granularity of groups. It thus significantly reduces the time and space complexity while providing a theoretical guarantee on the quality of the computed attention. RITA's dynamic scheduler continuously adapts the number of groups and the batch size during training, ensuring that group attention always uses the fewest groups needed to meet the approximation quality requirement. Extensive experiments on various timeseries datasets and analytics tasks demonstrate that RITA outperforms the state-of-the-art in accuracy and is significantly faster, with speedups of up to 63X.
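To make the idea of attending at group granularity concrete, the sketch below is a minimal NumPy illustration, not RITA's actual implementation: it uses plain k-means as a stand-in for RITA's dynamic grouping, ignores the dynamic scheduler and the error-bound machinery, and the function name group_attention and all parameters are hypothetical. The key point it shows is that scoring queries against G group centroids instead of n individual keys cuts the attention cost from O(n^2) to O(nG).

```python
import numpy as np

def group_attention(Q, K, V, n_groups=8, n_iters=5, seed=0):
    """Approximate softmax attention by clustering the keys into groups
    and attending to group centroids instead of individual keys.

    Q, K, V: (n, d) arrays. Returns an (n, d) approximation of
    softmax(Q K^T / sqrt(d)) V.
    """
    rng = np.random.default_rng(seed)
    n, d = K.shape

    # --- simple k-means over the key vectors (stand-in for RITA's grouping) ---
    centroids = K[rng.choice(n, size=n_groups, replace=False)]
    for _ in range(n_iters):
        dists = ((K[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)  # (n, G)
        assign = dists.argmin(axis=1)                                    # (n,)
        for g in range(n_groups):
            members = K[assign == g]
            if len(members) > 0:
                centroids[g] = members.mean(axis=0)

    # group sizes and per-group mean value vector
    counts = np.array([(assign == g).sum() for g in range(n_groups)])    # (G,)
    V_bar = np.stack([V[assign == g].mean(axis=0) if counts[g] > 0
                      else np.zeros(d) for g in range(n_groups)])        # (G, d)

    # --- attention at group granularity: O(n * G) scores instead of O(n^2) ---
    scores = Q @ centroids.T / np.sqrt(d)                                 # (n, G)
    # every member of a group is treated as if it had the centroid's score,
    # so each group's weight is its (unnormalized) score times its size
    weights = np.exp(scores - scores.max(axis=1, keepdims=True)) * counts
    weights /= weights.sum(axis=1, keepdims=True)                         # (n, G)

    # the weighted sum over keys collapses to a weighted sum over group means
    return weights @ V_bar

# usage: compare against exact attention on random toy embeddings
n, d = 256, 32
rng = np.random.default_rng(1)
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
approx = group_attention(Q, K, V, n_groups=16)
s = Q @ K.T / np.sqrt(d)
p = np.exp(s - s.max(1, keepdims=True))
exact = (p / p.sum(1, keepdims=True)) @ V
print("mean absolute error:", np.abs(approx - exact).mean())
```

Under this approximation, all keys in a group share their centroid's attention score, which is why the per-key weighted sum reduces to a weighted sum over group means; the quality of the result then hinges on how tight the groups are, which is exactly what RITA's adaptive choice of the number of groups is meant to control.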
