COCOA: Cross Modality Contrastive Learning for Sensor Data

07/31/2022
by Shohreh Deldari, et al.

Self-Supervised Learning (SSL) is a new paradigm for learning discriminative representations without labelled data, and it has matched, and in some cases surpassed, its supervised counterparts. Contrastive Learning (CL) is one of the best-known SSL approaches and aims to learn general, informative representations of data. CL methods have mostly been developed for computer vision and natural language processing, where only a single sensor modality is used. Most pervasive computing applications, however, exploit data from a range of different sensor modalities. While existing CL methods are limited to learning from one or two data sources, we propose COCOA (Cross mOdality COntrastive leArning), a self-supervised model that employs a novel objective function to learn quality representations from multisensor data by computing the cross-correlation between different data modalities and minimizing the similarity between irrelevant instances. We evaluate the effectiveness of COCOA against eight recently introduced state-of-the-art self-supervised models and two supervised baselines across five public datasets, and show that COCOA achieves superior classification performance to all other approaches. COCOA is also far more label-efficient than the other baselines, including the fully supervised model, while using only one-tenth of the available labelled data.
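The idea of contrasting aligned windows across modalities can be illustrated with a minimal sketch. The code below is not the authors' implementation and COCOA's exact objective differs; it shows a generic InfoNCE-style cross-modality loss in which embeddings of the same time window from two sensor modalities form positive pairs, while all other (irrelevant) instances act as negatives whose similarity is pushed down. The function name and the `temperature` parameter are illustrative assumptions.

```python
import numpy as np

def cross_modal_contrastive_loss(z_a, z_b, temperature=0.1):
    """Illustrative cross-modality contrastive loss (InfoNCE-style).

    z_a, z_b: (N, D) embeddings of the same N time windows from two
    sensor modalities. Row i of z_a and row i of z_b are a positive
    pair; every other row serves as a negative.
    """
    # L2-normalise so dot products become cosine similarities
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)

    # (N, N) cross-correlation matrix between the two modalities
    sim = z_a @ z_b.T / temperature

    # Softmax cross-entropy with the diagonal (aligned windows) as targets:
    # maximises agreement of positives, minimises similarity to negatives.
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

With correctly aligned modalities the loss is low, because each window's closest match in the other modality is its own counterpart; misaligning the pairing (a proxy for irrelevant instances) raises the loss.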

Related research

10/31/2020  A Survey on Contrastive Self-supervised Learning
02/07/2022  data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language
05/20/2022  Contrastive Learning with Cross-Modal Knowledge Mining for Multimodal Human Activity Recognition
03/30/2021  Contrastive Learning of Single-Cell Phenotypic Representations for Treatment Classification
04/01/2022  WavFT: Acoustic model finetuning with labelled and unlabelled data
06/21/2021  Visual Probing: Cognitive Framework for Explaining Self-Supervised Image Representations
04/21/2022  PreTraM: Self-Supervised Pre-training via Connecting Trajectory and Map
