Masked Frequency Modeling for Self-Supervised Visual Pre-Training

06/15/2022
by   Jiahao Xie, et al.

We present Masked Frequency Modeling (MFM), a unified frequency-domain approach for self-supervised pre-training of visual models. Instead of randomly inserting mask tokens into the input embeddings in the spatial domain, we shift the perspective to the frequency domain. Specifically, MFM first masks out a portion of the frequency components of the input image and then predicts the missing frequencies on the frequency spectrum. Our key insight is that, owing to heavy spatial redundancy, predicting masked components in the frequency domain reveals underlying image patterns better than predicting masked patches in the spatial domain. Our findings suggest that with a properly configured mask-and-predict strategy, both the structural information within high-frequency components and the low-level statistics among low-frequency counterparts are useful for learning good representations. For the first time, MFM demonstrates that, for both ViT and CNN, a simple non-Siamese framework can learn meaningful representations using none of the following: (i) extra data, (ii) an extra model, (iii) mask tokens. Experimental results on ImageNet and several robustness benchmarks show the competitive performance and improved robustness of MFM compared with recent masked image modeling approaches. Furthermore, we comprehensively investigate the effectiveness of classical image restoration tasks for representation learning from a unified frequency perspective and reveal their intriguing relations with our MFM approach. Project page: https://www.mmlab-ntu.com/project/mfm/index.html.
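The masking step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a circular low-/high-pass mask on the shifted 2D FFT spectrum (the function name `mask_frequencies` and the `radius` parameter are hypothetical), producing a corrupted image from which a model would predict the removed frequencies.

```python
import numpy as np

def mask_frequencies(image, radius=16, keep="low"):
    """Mask out frequency components of a single-channel image.

    Illustrative sketch of MFM-style frequency masking (assumed details):
    apply a circular low-pass (keep="low") or high-pass (keep="high")
    filter on the fftshift-ed 2D spectrum, then invert back to pixels.
    Returns the corrupted image and the boolean mask of kept frequencies.
    """
    h, w = image.shape
    # Shift the zero-frequency component to the center of the spectrum.
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    # Distance of each frequency bin from the spectrum center.
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    keep_mask = dist <= radius if keep == "low" else dist > radius
    # Zero out the masked frequencies and transform back to the image.
    masked_spectrum = spectrum * keep_mask
    corrupted = np.fft.ifft2(np.fft.ifftshift(masked_spectrum)).real
    return corrupted, keep_mask
```

A pre-training objective would then task the network with predicting the frequencies in the complement of `keep_mask` (i.e., the masked portion of the spectrum) given only the corrupted image.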


