Large Scale Time-Series Representation Learning via Simultaneous Low and High Frequency Feature Bootstrapping

04/24/2022
by   Vandan Gorade, et al.

Learning representations from unlabeled time-series data is a challenging problem. Most existing self-supervised and unsupervised approaches in the time-series domain do not capture low- and high-frequency features at the same time. Furthermore, some of these methods employ large-scale models such as transformers, or rely on computationally expensive techniques such as contrastive learning. To tackle these problems, we propose a non-contrastive self-supervised learning approach that efficiently captures low- and high-frequency time-varying features in a cost-effective manner. Our method takes raw time-series data as input and creates two different augmented views for the two branches of the model by randomly sampling augmentations from the same family. Following the terminology of BYOL, the two branches are called the online and target networks, and they allow bootstrapping of the latent representation. In contrast to BYOL, where a backbone encoder is followed by multilayer perceptron (MLP) heads, the proposed model contains additional temporal convolutional network (TCN) heads. As the augmented views pass through the large-kernel convolution blocks of the encoder, the subsequent combination of MLP and TCN heads enables an effective representation of both low- and high-frequency time-varying features, owing to their different receptive fields. The two modules (MLP and TCN) act in a complementary manner: we train the online network so that each module learns to predict the output of the corresponding module in the target network branch. To demonstrate the robustness of our model, we performed extensive experiments and ablation studies on five real-world time-series datasets, achieving state-of-the-art performance on all five.
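The BYOL-style training scheme described above can be sketched in miniature: sample two augmentations from the same family, encode each view with a separate branch, and update the target branch as an exponential moving average of the online branch. The sketch below is a toy illustration under stated assumptions, not the authors' implementation; the function names (`jitter`, `scale`, `ema_update`) are hypothetical, and a single dot product stands in for the real encoder with its MLP and TCN heads.

```python
import math
import random

def jitter(series, sigma=0.1):
    """Add Gaussian noise -- one augmentation from the sampled family."""
    return [x + random.gauss(0.0, sigma) for x in series]

def scale(series, sigma=0.1):
    """Multiply by a random factor -- another augmentation from the family."""
    factor = random.gauss(1.0, sigma)
    return [x * factor for x in series]

AUGMENTATIONS = [jitter, scale]

def two_views(series):
    """Randomly sample one augmentation per branch, as in the paper."""
    aug_a = random.choice(AUGMENTATIONS)
    aug_b = random.choice(AUGMENTATIONS)
    return aug_a(series), aug_b(series)

def encode(weights, series):
    """Toy stand-in for encoder + MLP/TCN heads: a single dot product."""
    return sum(w * x for w, x in zip(weights, series))

def ema_update(target_w, online_w, tau=0.99):
    """Target weights are an exponential moving average of the online ones."""
    return [tau * t + (1.0 - tau) * o for t, o in zip(target_w, online_w)]

random.seed(0)
series = [math.sin(0.1 * t) for t in range(64)]
view_online, view_target = two_views(series)

online_w = [random.gauss(0.0, 0.1) for _ in range(64)]
target_w = list(online_w)

# One "training step": the online branch predicts the target branch's
# output; only the bootstrapping (EMA) update is shown -- no gradient
# machinery in this sketch.
pred = encode(online_w, view_online)
tgt = encode(target_w, view_target)
loss = (pred - tgt) ** 2
target_w = ema_update(target_w, online_w)
print(round(loss, 6))
```

In the full method this prediction loss would be computed per head (one MLP pair and one TCN pair), and gradients would flow only through the online branch while the target branch is updated solely via the EMA rule.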

