How Mask Matters: Towards Theoretical Understandings of Masked Autoencoders

10/15/2022, by Qi Zhang, et al.

Masked Autoencoders (MAE), built on a reconstruction task, have risen to be a promising paradigm for self-supervised learning (SSL) and achieve state-of-the-art performance across benchmark datasets. Despite this impressive empirical success, however, theoretical understanding of MAE remains limited. In this paper, we develop a theoretical understanding of how masking enables MAE to learn meaningful features. We establish a close connection between MAE and contrastive learning, showing that MAE implicitly aligns mask-induced positive pairs. Building on this connection, we derive the first downstream guarantees for MAE methods and analyze the effect of the mask ratio. Furthermore, as a consequence of this implicit alignment, we identify a dimensional collapse issue in MAE and propose a Uniformity-enhanced MAE (U-MAE) loss that effectively addresses it and brings significant improvements on real-world datasets, including CIFAR-10, ImageNet-100, and ImageNet-1K. Code is available at https://github.com/zhangq327/U-MAE.
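Since the abstract describes the U-MAE loss only at a high level, a minimal PyTorch sketch may help illustrate the idea: the usual MAE reconstruction loss on masked patches, plus a uniformity regularizer that penalizes pairwise feature similarity to counteract dimensional collapse. The function name `u_mae_loss`, the weight `lam`, and the exact form of the regularizer are illustrative assumptions, not the authors' implementation; see the linked repository for the official code.

```python
import torch
import torch.nn.functional as F

def u_mae_loss(pred, target, mask, features, lam=0.01):
    """Sketch of a U-MAE-style objective (hypothetical helper, not the
    authors' exact code): MAE reconstruction on masked patches plus a
    uniformity regularizer on encoder features.

    pred, target: (B, L, D) per-patch predictions / ground-truth pixels
    mask:         (B, L) float, 1 where a patch was masked out
    features:     (B, d) pooled encoder features
    lam:          assumed weight on the uniformity term
    """
    # Standard MAE loss: mean squared error, averaged over masked patches only.
    recon = ((pred - target) ** 2).mean(dim=-1)            # (B, L)
    recon = (recon * mask).sum() / mask.sum().clamp(min=1.0)

    # Uniformity term: penalize squared cosine similarity between features
    # of *different* samples, discouraging dimensional collapse.
    z = F.normalize(features, dim=-1)                      # (B, d)
    sim = z @ z.t()                                        # (B, B)
    n = z.size(0)
    off_diag = sim - torch.diag(sim.diagonal())            # zero the diagonal
    unif = (off_diag ** 2).sum() / max(n * (n - 1), 1)

    return recon + lam * unif
```

In practice, `pred`, `target`, and `mask` would come from a standard MAE encoder-decoder pipeline; the weighting and precise form of the uniformity term in the released U-MAE code may differ from this sketch.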


