Self-Supervised Contrastive Pre-Training For Time Series via Time-Frequency Consistency

06/17/2022
by Xiang Zhang, et al.

Pre-training on time series poses a unique challenge due to the potential mismatch between pre-training and target domains, such as shifts in temporal dynamics, fast-evolving trends, and long-range and short-cyclic effects, which can lead to poor downstream performance. While domain adaptation methods can mitigate these shifts, most methods need examples directly from the target domain, making them suboptimal for pre-training. To address this challenge, methods need to accommodate target domains with different temporal dynamics and be capable of doing so without seeing any target examples during pre-training. Relative to other modalities, in time series we expect that time-based and frequency-based representations of the same example are located close together in the time-frequency space. To this end, we posit that time-frequency consistency (TF-C), namely embedding a time-based neighborhood of a particular example close to its frequency-based neighborhood and vice versa, is desirable for pre-training. Motivated by TF-C, we define a decomposable pre-training model in which the self-supervised signal is provided by the distance between time and frequency components, each individually trained by contrastive estimation. We evaluate the new method on eight datasets, including electrodiagnostic testing, human activity recognition, mechanical fault detection, and physical status monitoring. Experiments against eight state-of-the-art methods show that TF-C outperforms baselines by 15.4% (F1 score) in one-to-one settings (e.g., fine-tuning an EEG-pretrained model on EMG data) and by up to 8.4% (F1 score) in challenging one-to-many settings, reflecting the breadth of scenarios that arise in real-world applications. The source code and datasets are available at https://anonymous.4open.science/r/TFC-pretraining-6B07.
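To make the idea concrete, below is a minimal sketch of what a TF-C-style objective could look like: a contrastive term within the time domain, a contrastive term within the frequency domain, and a consistency term that pulls the time-based and frequency-based embeddings of the same example together. The names (`nt_xent`, `tfc_loss`), the weighting `lam`, and the MSE form of the consistency term are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# Illustrative sketch of a TF-C-style pre-training loss (not the authors' code).
# Assumes each example has been encoded twice: from the raw time-domain signal
# and from a frequency-domain view (e.g., FFT magnitudes), each with an
# augmented counterpart for contrastive learning.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.2):
    """Simplified InfoNCE-style contrastive loss between two batches of views
    (cross-view negatives only)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                 # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)

def tfc_loss(z_t, z_t_aug, z_f, z_f_aug, lam=0.5):
    """Hypothetical decomposable objective: contrastive terms in the time and
    frequency branches, plus a consistency term that keeps the two embeddings
    of the same example close in the shared time-frequency space."""
    loss_time = nt_xent(z_t, z_t_aug)                  # time-domain contrastive term
    loss_freq = nt_xent(z_f, z_f_aug)                  # frequency-domain contrastive term
    loss_consistency = F.mse_loss(F.normalize(z_t, dim=1),
                                  F.normalize(z_f, dim=1))
    return loss_time + loss_freq + lam * loss_consistency

# Example usage with random embeddings standing in for encoder outputs.
B, D = 32, 128
z_t, z_t_aug = torch.randn(B, D), torch.randn(B, D)
z_f, z_f_aug = torch.randn(B, D), torch.randn(B, D)
print(tfc_loss(z_t, z_t_aug, z_f, z_f_aug))
```

The decomposition mirrors the abstract's description: each branch is trained by contrastive estimation on its own views, while the distance between the time and frequency components supplies the cross-domain self-supervised signal.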

Related research:

02/06/2023 · Domain Adaptation for Time Series Under Feature and Label Shifts
The transfer of models trained on labeled datasets in a source domain to...

08/26/2022 · Self-Supervised Human Activity Recognition with Localized Time-Frequency Contrastive Representation Learning
In this paper, we propose a self-supervised learning solution for human...

02/01/2022 · Understanding Cross-Domain Few-Shot Learning: An Experimental Study
Cross-domain few-shot learning has drawn increasing attention for handli...

09/11/2023 · Examining the Effect of Pre-training on Time Series Classification
Although the pre-training followed by fine-tuning paradigm is used exten...

03/17/2021 · Self-Supervised Learning of Audio Representations from Permutations with Differentiable Ranking
Self-supervised pre-training using so-called "pretext" tasks has recentl...

03/29/2022 · Learning neural audio features without supervision
Deep audio classification, traditionally cast as training a deep neural...

09/12/2023 · Frequency-Aware Masked Autoencoders for Multimodal Pretraining on Biosignals
Leveraging multimodal information from biosignals is vital for building...
