The Devil is in the Frequency: Geminated Gestalt Autoencoder for Self-Supervised Visual Pre-Training

04/18/2022
by Hao Liu, et al.

The self-supervised Masked Image Modeling (MIM) schema, which follows the "mask-and-reconstruct" pipeline of recovering contents from a masked image, has recently attracted increasing interest in the multimedia community owing to its excellent ability to learn visual representations from unlabeled data. Aiming to learn representations with high-level semantic abstraction, one group of works attempts to reconstruct non-semantic pixels with a large-ratio masking strategy, which may suffer from the "over-smoothing" problem, while others directly infuse semantics into the targets in an offline way that requires extra data. Different from them, we shift the perspective to the Fourier domain, which naturally has a global view, and present a new MIM method, termed Geminated Gestalt Autoencoder (Ge^2-AE), for visual pre-training. Specifically, we equip our model with geminated decoders in charge of reconstructing image contents from both pixel and frequency space, where each serves as not only a complement to but also a reciprocal constraint on the other. In this way, more robust representations can be learned by the pre-trained encoder, whose effectiveness is confirmed by comparative experimental results on downstream recognition tasks. We also conduct several quantitative and qualitative experiments to investigate the learning behavior of our method. To the best of our knowledge, this is the first MIM work to approach visual pre-training through the lens of the frequency domain.
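To make the "geminated decoders" idea concrete, below is a minimal PyTorch-style sketch of a masked autoencoder with one decoder regressing masked pixels and a second decoder regressing a Fourier-spectrum target for the same content. The class and parameter names (GeminatedAutoencoderSketch, patch_dim, the per-patch 1-D FFT target, equal loss weighting) are illustrative assumptions, not the authors' architecture, which uses ViT-style blocks and the 2-D image spectrum.

```python
# Minimal sketch of a dual-decoder ("geminated") masked autoencoder with pixel- and
# frequency-space reconstruction targets. Illustrative only: the MLP stand-ins for the
# encoder/decoders, the 1-D per-patch FFT target, and the equal loss weighting are
# assumptions made for brevity, not the paper's released design.
import torch
import torch.nn as nn


class GeminatedAutoencoderSketch(nn.Module):
    def __init__(self, patch_dim=768, embed_dim=256):
        super().__init__()
        # Stand-in encoder/decoders; the paper uses ViT-style transformer blocks.
        self.encoder = nn.Sequential(nn.Linear(patch_dim, embed_dim), nn.GELU())
        self.pixel_decoder = nn.Linear(embed_dim, patch_dim)      # reconstructs raw pixels
        self.freq_decoder = nn.Linear(embed_dim, 2 * patch_dim)   # reconstructs real+imag spectrum

    def forward(self, patches, mask):
        # patches: (B, N, patch_dim) flattened image patches; mask: (B, N), 1 = masked.
        latent = self.encoder(patches * (1 - mask).unsqueeze(-1))
        pred_pix = self.pixel_decoder(latent)
        pred_freq = self.freq_decoder(latent)

        # Frequency target: FFT of each flattened patch, split into real and imaginary
        # parts (simplified stand-in for the 2-D Fourier spectrum used in the paper).
        target_freq = torch.fft.fft(patches, dim=-1)
        target_freq = torch.cat([target_freq.real, target_freq.imag], dim=-1)

        # Mean squared error averaged over masked patches only, as in MAE-style training.
        denom = mask.sum().clamp(min=1)
        loss_pix = (((pred_pix - patches) ** 2).mean(dim=-1) * mask).sum() / denom
        loss_freq = (((pred_freq - target_freq) ** 2).mean(dim=-1) * mask).sum() / denom
        return loss_pix + loss_freq  # equal weighting is an assumption


# Toy usage: 4 images, 196 patches of dimension 768, 75% of patches masked.
if __name__ == "__main__":
    model = GeminatedAutoencoderSketch()
    x = torch.randn(4, 196, 768)
    mask = (torch.rand(4, 196) < 0.75).float()
    print(model(x, mask).item())
```

In this sketch the two reconstruction losses constrain the same latent representation, which is the mechanism the abstract describes: the pixel and frequency branches complement and regularize each other during pre-training, and only the encoder is kept for downstream tasks.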
