On minimal variations for unsupervised representation learning

11/07/2022
by Vivien Cabannes et al.

Unsupervised representation learning aims to describe raw data efficiently so that various downstream tasks can be solved. It has been approached with many techniques, such as manifold learning, diffusion maps, and, more recently, self-supervised learning. These techniques arguably all rest on the same underlying assumption: target functions, associated with future downstream tasks, have low variations in densely populated regions of the input space. Unveiling minimal variations as a guiding principle behind unsupervised representation learning paves the way to better practical guidelines for self-supervised learning algorithms.
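To make the low-variation assumption concrete, here is a minimal sketch (not the paper's code) of the shared objective behind manifold learning and diffusion maps: find a representation with low Dirichlet energy on a neighborhood graph, i.e. one that changes little between points connected through dense regions. The two-cluster toy data, Gaussian-kernel affinities, and bandwidth below are illustrative assumptions, not choices taken from the paper.

    # Minimal sketch: "low variation in dense regions" as Dirichlet-energy
    # minimization on a data graph (the Laplacian-eigenmaps objective).
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: two dense clusters in the plane (hypothetical example).
    X = np.vstack([
        rng.normal(loc=(-2.0, 0.0), scale=0.3, size=(100, 2)),
        rng.normal(loc=(+2.0, 0.0), scale=0.3, size=(100, 2)),
    ])

    # Gaussian-kernel affinities: large weights between nearby points,
    # so edges concentrate in densely populated regions.
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq_dists / (2 * 0.5 ** 2))
    np.fill_diagonal(W, 0.0)

    # Unnormalized graph Laplacian L = D - W.
    L = np.diag(W.sum(axis=1)) - W

    def dirichlet_energy(f):
        # sum_ij w_ij (f_i - f_j)^2 = 2 f^T L f: small when f varies
        # little where the data is dense.
        return f @ L @ f

    # Minimizers orthogonal to the constant vector are the bottom
    # eigenvectors of L, i.e. the Laplacian-eigenmaps embedding.
    eigvals, eigvecs = np.linalg.eigh(L)
    embedding = eigvecs[:, 1]  # skip the constant eigenvector

    # Compare against a random unit-norm representation.
    f_rand = rng.normal(size=len(X))
    f_rand /= np.linalg.norm(f_rand)
    print("energy of embedding:", dirichlet_energy(embedding))
    print("energy of random f: ", dirichlet_energy(f_rand))

On this toy example the bottom eigenvector has far lower energy than a random direction, and it separates the two clusters: a function that respects downstream cluster labels is exactly one with minimal variations on the dense regions.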

