MetaMask: Revisiting Dimensional Confounder for Self-Supervised Learning

09/16/2022
by   Jiangmeng Li, et al.

As a successful approach to self-supervised learning, contrastive learning aims to learn the invariant information shared among distortions of the input sample. While contrastive learning has yielded continuous advancements in sampling strategies and architecture design, two persistent defects remain: the interference of task-irrelevant information and sample inefficiency, both of which are related to the recurring emergence of trivial constant solutions. From the perspective of dimensional analysis, we find that dimensional redundancy and the dimensional confounder are the intrinsic issues behind these phenomena, and we provide experimental evidence to support this viewpoint. We further propose a simple yet effective approach, MetaMask, short for the dimensional Mask learned by Meta-learning, to learn representations that are robust against dimensional redundancy and the confounder. MetaMask adopts a redundancy-reduction technique to tackle the dimensional redundancy issue and introduces a dimensional mask that reduces the gradient effects of the specific dimensions containing the confounder; the mask is trained under a meta-learning paradigm with the objective of improving the performance of masked representations on a typical self-supervised task. We provide theoretical analyses proving that MetaMask obtains tighter risk bounds for downstream classification than typical contrastive methods. Empirically, our method achieves state-of-the-art performance on various benchmarks.
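To make the two ingredients named in the abstract concrete, below is a minimal PyTorch sketch: a Barlow-Twins-style redundancy-reduction term and a learnable soft mask over representation dimensions. The names (`MaskedEncoder`, `info_nce`, `redundancy_loss`) and the simplified alternating update used in place of the full bi-level meta-gradient are our illustrative assumptions, not the authors' released code.

```python
# Illustrative sketch of a dimensional mask trained alongside a contrastive
# encoder. All names and the alternating-update scheme are assumptions; the
# paper's actual method uses a full meta-learning (bi-level) objective.
import torch
import torch.nn.functional as F


def info_nce(z1, z2, temperature=0.5):
    """Standard InfoNCE over two batches of L2-normalized embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature            # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)


def redundancy_loss(z1, z2, lam=5e-3):
    """Barlow-Twins-style redundancy reduction on the cross-correlation matrix."""
    B = z1.size(0)
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.t() @ z2) / B                         # (D, D) cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag


class MaskedEncoder(torch.nn.Module):
    """Wraps any encoder with a learnable soft mask over output dimensions."""

    def __init__(self, encoder, dim):
        super().__init__()
        self.encoder = encoder
        # One learnable logit per representation dimension; the sigmoid keeps
        # the soft mask in (0, 1) so confounded dimensions can be down-weighted.
        self.mask_logits = torch.nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        z = self.encoder(x)
        return z * torch.sigmoid(self.mask_logits)


def train_step(model, opt_enc, opt_mask, x1, x2):
    # 1) Update the encoder on the redundancy-reduced contrastive objective.
    z1, z2 = model(x1), model(x2)
    loss_enc = info_nce(z1, z2) + redundancy_loss(z1, z2)
    opt_enc.zero_grad()
    loss_enc.backward()
    opt_enc.step()

    # 2) Update the mask so that *masked* representations perform better on
    #    the self-supervised task. The paper's bi-level scheme differentiates
    #    through the encoder update; this sketch uses a first-order shortcut.
    z1, z2 = model(x1), model(x2)
    meta_loss = info_nce(z1, z2)
    opt_mask.zero_grad()
    meta_loss.backward()
    opt_mask.step()
    return loss_enc.item(), meta_loss.item()
```

With, for example, a ResNet backbone as `encoder` and two augmented views `x1, x2` per batch (one optimizer over the encoder parameters, one over `mask_logits`), `train_step` alternates the two updates; second-order meta-gradient terms are deliberately omitted for brevity.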

Related research:

08/19/2021 · Self-Supervised Video Representation Learning with Meta-Contrastive Network
Self-supervised learning has been successfully applied to pre-train vide...

03/01/2021 · A Brief Summary of Interactions Between Meta-Learning and Self-Supervised Learning
This paper briefly reviews the connections between meta-learning and sel...

01/11/2022 · Bootstrapping Informative Graph Augmentation via A Meta Learning Approach
Recent works explore learning graph representations in a self-supervised...

10/31/2022 · A picture of the space of typical learnable tasks
We develop a technique to analyze representations learned by deep networ...

06/09/2023 · Exploring Effective Mask Sampling Modeling for Neural Image Compression
Image compression aims to reduce the information redundancy in images. M...

05/12/2022 · The Mechanism of Prediction Head in Non-contrastive Self-supervised Learning
Recently the surprising discovery of the Bootstrap Your Own Latent (BYOL...

06/29/2021 · MAML is a Noisy Contrastive Learner
Model-agnostic meta-learning (MAML) is one of the most popular and widel...
