Multi-Task Self-Supervised Time-Series Representation Learning

03/02/2023
by Heejeong Choi, et al.

Time-series representation learning extracts useful representations from data with temporal dynamics and sparse labels. When labeled data are scarce but unlabeled data are abundant, contrastive learning, a framework that learns a latent space in which similar samples lie close together while dissimilar ones lie far apart, has shown outstanding performance. Depending on how positive pairs are selected and which contrastive loss is used, this strategy encourages different kinds of consistency in the learned time-series representations. We propose a new time-series representation learning method that combines the advantages of self-supervised tasks targeting contextual, temporal, and transformation consistency, allowing the network to learn general representations that transfer across downstream tasks and domains. Specifically, we first apply data preprocessing to generate positive and negative pairs for each self-supervised task. The model then performs contextual, temporal, and transformation contrastive learning and is optimized jointly on their contrastive losses. We further investigate an uncertainty weighting approach that enables effective multi-task learning by accounting for the contribution of each consistency objective. We evaluate the proposed framework on three downstream tasks: time-series classification, forecasting, and anomaly detection. Experimental results show that our method not only outperforms benchmark models on these downstream tasks but is also effective in cross-domain transfer learning.
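The abstract does not spell out the preprocessing used to build pairs for each task, but positive-pair construction for transformation and temporal consistency is commonly done with simple augmentations and crops. The sketch below is a minimal, hypothetical PyTorch version; the function names, the (B, T, C) shape convention, and the noise levels are illustrative assumptions, not the paper's pipeline.

```python
import torch

def jitter(x, sigma=0.03):
    # Additive Gaussian noise: a typical "transformation" view of x (B, T, C).
    return x + sigma * torch.randn_like(x)

def scale(x, sigma=0.1):
    # Random per-sample, per-channel amplitude scaling: another transformation view.
    factor = 1.0 + sigma * torch.randn(x.size(0), 1, x.size(2), device=x.device)
    return x * factor

def random_crops(x, crop_len):
    # Two random crops of the same series: a simple temporal positive pair.
    max_start = x.size(1) - crop_len
    s1, s2 = torch.randint(0, max_start + 1, (2,)).tolist()
    return x[:, s1:s1 + crop_len], x[:, s2:s2 + crop_len]
```

Views of the same series form positive pairs, while views drawn from other series in the batch serve as negatives.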
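Each of the three consistency objectives is a contrastive loss over such pairs. A standard instantiation is an InfoNCE-style loss, sketched below in PyTorch; the function name and temperature value are assumptions, and the paper's exact losses may differ per task.

```python
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.1):
    # z_a, z_b: (B, D) embeddings where row i of each forms a positive pair;
    # the remaining rows of z_b act as negatives for z_a[i].
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature          # (B, B) cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)       # diagonal entries are positives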
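For the joint optimization, the abstract mentions an uncertainty weighting approach. A common choice is the homoscedastic uncertainty formulation of Kendall et al. (2018), shown here in simplified form; whether the paper uses exactly this variant is an assumption.

```python
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    # Each task loss is scaled by a learned precision exp(-s_i), with s_i added
    # as a regularizer so the precisions cannot collapse to zero.
    def __init__(self, num_tasks=3):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))  # s_i = log sigma_i^2

    def forward(self, losses):
        total = torch.zeros((), device=self.log_vars.device)
        for i, loss in enumerate(losses):
            total = total + torch.exp(-self.log_vars[i]) * loss + self.log_vars[i]
        return total
```

In use, something like `UncertaintyWeighting(3)([ctx_loss, temp_loss, trans_loss])` yields a single scalar whose backward pass updates both the encoder and the per-task log-variances, so each consistency's contribution is learned rather than hand-tuned.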

Related research

08/13/2022 · Self-supervised Contrastive Representation Learning for Semi-supervised Time-Series Classification
Learning time-series representations when only unlabeled data or few lab...

06/11/2023 · Learning Robust and Consistent Time Series Representations: A Dilated Inception-Based Approach
Representation learning for time series has been an important research a...

11/13/2021 · Evaluating Contrastive Learning on Wearable Timeseries for Downstream Clinical Outcomes
Vast quantities of person-generated health data (wearables) are collecte...

05/20/2022 · Cross Reconstruction Transformer for Self-Supervised Time Series Representation Learning
Unsupervised/self-supervised representation learning in time series is c...

12/08/2021 · Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework
As a seminal tool in self-supervised representation learning, contrastiv...

08/22/2022 · Anatomy-Aware Contrastive Representation Learning for Fetal Ultrasound
Self-supervised contrastive representation learning offers the advantage...

10/17/2020 · i-Mix: A Strategy for Regularizing Contrastive Representation Learning
Contrastive representation learning has shown to be an effective way of ...
