Lead-agnostic Self-supervised Learning for Local and Global Representations of Electrocardiogram

03/14/2022
by   JungWoo Oh, et al.

In recent years, self-supervised learning methods have shown significant improvements when pre-training with unlabeled data and have proven helpful for electrocardiogram signals. However, most previous pre-training methods for the electrocardiogram focus on capturing only global contextual representations. This prevents the models from learning fruitful representations of the electrocardiogram, which results in poor performance on downstream tasks. Additionally, such models cannot be fine-tuned on an arbitrary set of electrocardiogram leads unless they were pre-trained on the same set of leads. In this work, we propose an ECG pre-training method that learns both local and global contextual representations for better generalizability and performance on downstream tasks. In addition, we propose random lead masking as an ECG-specific augmentation method that makes the model robust to an arbitrary set of leads. Experimental results on two downstream tasks, cardiac arrhythmia classification and patient identification, show that our proposed approach outperforms other state-of-the-art methods.
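The abstract does not include implementation details, but the random lead masking augmentation is concrete enough to sketch. The snippet below is a minimal illustration rather than the authors' code: it assumes a (num_leads, num_samples) NumPy array for a 12-lead ECG and a hypothetical per-lead masking probability mask_prob. During pre-training, each lead would be independently zeroed out with that probability, so the encoder learns to produce useful representations from any subset of leads.

    import numpy as np

    def random_lead_masking(ecg, mask_prob=0.5, rng=None):
        # ecg: array of shape (num_leads, num_samples), e.g. (12, 5000)
        # mask_prob: hypothetical probability that each lead is masked
        rng = rng or np.random.default_rng()
        masked = ecg.copy()
        lead_mask = rng.random(ecg.shape[0]) < mask_prob
        # Keep at least one lead visible so the input is never all zeros
        if lead_mask.all():
            lead_mask[rng.integers(ecg.shape[0])] = False
        masked[lead_mask] = 0.0  # zero out entire masked leads
        return masked

Under this reading, masking whole leads (rather than time segments) is what makes the augmentation lead-agnostic: a model trained this way can later be fine-tuned on recordings with fewer or different leads.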


