
Contrastive learning, multi-view redundancy, and linear models

by Christopher Tosh et al.

Self-supervised learning is an empirically successful approach to unsupervised learning based on creating artificial supervised learning problems. A popular self-supervised approach to representation learning is contrastive learning, which leverages naturally occurring pairs of similar and dissimilar data points, or multiple views of the same data. This work provides a theoretical analysis of contrastive learning in the multi-view setting, where two views of each datum are available. The main result is that linear functions of the learned representations are nearly optimal on downstream prediction tasks whenever the two views provide redundant information about the label.
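The contrastive objective described above can be illustrated with a minimal sketch: given embeddings of two views of the same datum as a positive pair, and all mismatched pairings as negatives, an InfoNCE-style loss rewards representations that align the positives. This is a generic illustration of the contrastive setup, not the paper's specific estimator; the function name and temperature parameter are illustrative choices.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE-style contrastive loss for paired views.

    z1[i] and z2[i] are embeddings of two views of the same datum
    (a positive pair); all other pairings in the batch act as negatives.
    """
    # Normalize embeddings to unit length so similarity is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    # Similarity matrix: entry (i, j) compares view 1 of datum i
    # with view 2 of datum j.
    sim = z1 @ z2.T / temperature
    # Cross-entropy with the diagonal (the positive pairs) as targets.
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(32, 16))
# Perfectly aligned views incur a lower loss than mismatched pairings.
aligned = info_nce_loss(z, z)
mismatched = info_nce_loss(z, z[::-1])
```

Minimizing such a loss over an encoder yields the learned representations; the paper's result concerns what a simple linear predictor on top of those representations can achieve on downstream tasks when the two views are redundant about the label.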


Demystifying Self-Supervised Learning: An Information-Theoretical Framework

Self-supervised representation learning adopts self-defined signals as s...

Contrastive Spatial Reasoning on Multi-View Line Drawings

Spatial reasoning on multi-view line drawings by state-of-the-art superv...

Contrastive estimation reveals topic posterior information to linear models

Contrastive learning is an approach to representation learning that util...

Self-supervised Contrastive Learning of Multi-view Facial Expressions

Facial expression recognition (FER) has emerged as an important componen...

Multispectral Self-Supervised Learning with Viewmaker Networks

Contrastive learning methods have been applied to a range of domains and...

Contrastive Learning from Demonstrations

This paper presents a framework for learning visual representations from...

Self-supervised Representation Learning on Electronic Health Records with Graph Kernel Infomax

Learning Electronic Health Records (EHRs) representation is a preeminent...