Cross Reconstruction Transformer for Self-Supervised Time Series Representation Learning

05/20/2022
by Wenrui Zhang, et al.

Unsupervised/self-supervised representation learning for time series is critical, since labeled samples are usually scarce in real-world scenarios. Existing approaches mainly leverage the contrastive learning framework, which automatically learns to distinguish similar and dissimilar data pairs. However, they are limited by the prior knowledge required to construct pairs, cumbersome sampling policies, and unstable performance under sampling bias. Moreover, few works have focused on effectively modeling temporal-spectral relations to extend the capacity of representations. In this paper, we aim to learn representations for time series from a new perspective and propose the Cross Reconstruction Transformer (CRT) to solve the aforementioned problems in a unified way. CRT achieves time series representation learning through a cross-domain dropping-reconstruction task. Specifically, we transform each time series into the frequency domain and randomly drop certain parts in both the time and frequency domains. Compared with cropping and masking, dropping maximally preserves the global context. A transformer architecture is then utilized to capture the cross-domain correlations between temporal and spectral information by reconstructing the data in both domains, which we call Dropped Temporal-Spectral Modeling. To discriminate representations in the global latent space, we propose an Instance Discrimination Constraint that reduces the mutual information between different time series and sharpens the decision boundaries. Additionally, we propose a curriculum learning strategy that optimizes CRT by progressively increasing the dropping ratio during training.
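To make the pipeline concrete, here is a minimal, hypothetical PyTorch sketch of the cross-domain dropping-reconstruction idea described above. All names (`CRTSketch`, `drop_ratio`), sizes, the mask-token decoder, and the curriculum schedule are illustrative assumptions rather than the authors' implementation, and the Instance Discrimination Constraint is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CRTSketch(nn.Module):
    """Illustrative sketch of cross-domain dropping-reconstruction.

    Sizes, names, and the mask-token decoder are assumptions made for
    this example; they are not the paper's implementation details.
    """
    def __init__(self, patch_len=16, n_patches=32, d_model=128):
        super().__init__()
        self.p, self.n = patch_len, n_patches
        self.embed = nn.Linear(patch_len, d_model)          # patch embedding
        self.pos = nn.Parameter(0.02 * torch.randn(2 * n_patches, d_model))
        self.domain = nn.Embedding(2, d_model)              # 0 = time, 1 = frequency
        self.mask_tok = nn.Parameter(torch.zeros(d_model))  # stand-in for dropped patches
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        dec_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers=2)
        self.head = nn.Linear(d_model, patch_len)           # patch reconstruction head

    def forward(self, x, drop_ratio):
        # x: (B, L) univariate series with L == patch_len * n_patches
        B, L = x.shape
        spec = torch.fft.fft(x, dim=1, norm="ortho").abs()  # magnitude spectrum, length L
        # Patch both domains: first n tokens are time patches, next n are spectral.
        target = torch.stack([x, spec], dim=1).reshape(B, 2 * self.n, self.p)
        dom = torch.arange(2 * self.n, device=x.device) // self.n
        tok = self.embed(target) + self.pos + self.domain(dom)
        # Drop: keep a random subset of patches per domain; kept patches retain
        # their positional indices, preserving global context (unlike cropping).
        n_keep = max(1, int(self.n * (1.0 - drop_ratio)))
        keep = torch.cat([torch.randperm(self.n)[:n_keep],
                          self.n + torch.randperm(self.n)[:n_keep]]).to(x.device)
        enc = self.encoder(tok[:, keep])                    # joint temporal-spectral encoding
        # Reconstruct both domains: scatter encoded tokens back into the full
        # grid, fill dropped slots with a learned mask token, and decode.
        full = self.mask_tok.expand(B, 2 * self.n, -1).clone()
        full[:, keep] = enc
        recon = self.head(self.decoder(full + self.pos))
        return F.mse_loss(recon, target)

model = CRTSketch()
x = torch.randn(8, 16 * 32)                     # batch of toy series
for epoch in range(100):
    ratio = min(0.7, 0.1 + 0.6 * epoch / 100)   # hypothetical linear ramp-up
    loss = model(x, ratio)
```

The training loop reflects the curriculum strategy from the abstract, with the dropping ratio growing as training progresses; the schedule endpoints here are placeholders.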

Related research

03/02/2023 · Multi-Task Self-Supervised Time-Series Representation Learning
Time-series representation learning can extract representations from dat...

02/08/2022 · Unsupervised Time-Series Representation Learning with Iterative Bilinear Temporal-Spectral Fusion
Unsupervised/self-supervised time series representation learning is a ch...

02/02/2023 · SimMTM: A Simple Pre-Training Framework for Masked Time-Series Modeling
Time series analysis is widely used in extensive areas. Recently, to red...

11/27/2020 · Self-Supervised Time Series Representation Learning by Inter-Intra Relational Reasoning
Self-supervised learning achieves superior performance in many domains b...

03/17/2022 · Mixing Up Contrastive Learning: Self-Supervised Representation Learning for Time Series
The lack of labeled data is a key challenge for learning useful represen...

02/08/2022 · Spectral Propagation Graph Network for Few-shot Time Series Classification
Few-shot Time Series Classification (few-shot TSC) is a challenging prob...

09/14/2023 · Learning Beyond Similarities: Incorporating Dissimilarities between Positive Pairs in Self-Supervised Time Series Learning
By identifying similarities between successive inputs, Self-Supervised L...
