Context Autoencoder for Self-Supervised Representation Learning

02/07/2022
by Xiaokang Chen, et al.

We present a novel masked image modeling (MIM) approach, the context autoencoder (CAE), for self-supervised representation learning. We randomly partition each image into two sets of patches: visible patches and masked patches. The CAE architecture consists of four components:

(i) an encoder that takes the visible patches as input and outputs their latent representations;
(ii) a latent context regressor that predicts the masked-patch representations from the visible-patch representations, which are not updated within the regressor;
(iii) a decoder that takes the estimated masked-patch representations as input and predicts the content of the masked patches; and
(iv) an alignment module that aligns the estimated masked-patch representations with the representations obtained by feeding the masked patches through the encoder.

In contrast to previous MIM methods that couple the encoding and decoding roles, e.g., in a single module as in BEiT, our approach separates the encoding role (content understanding) from the decoding role (making predictions for masked patches) using different modules, improving the content-understanding capability. Moreover, our approach makes predictions from the visible patches to the masked patches in the latent representation space, which is expected to take on semantics. We also explain why contrastive pretraining and supervised pretraining perform similarly and why MIM potentially performs better. We demonstrate the effectiveness of CAE through superior transfer performance on downstream tasks: semantic segmentation, object detection, and instance segmentation.
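Below is a minimal PyTorch sketch of the four-component flow described in the abstract. It is a sketch under assumptions, not the authors' implementation: the hidden size, layer counts, shared positional embeddings, the per-patch regression head (head), and the MSE alignment loss are all illustrative choices, and the input is assumed to be already-embedded patches.

import torch
import torch.nn as nn
import torch.nn.functional as F

def gather_patches(x, idx):
    # Select per-image patch subsets. x: (B, N, D); idx: (B, K) -> (B, K, D).
    return torch.gather(x, 1, idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))

class CAESketch(nn.Module):
    # Sketch of the four CAE components; all sizes and depths are illustrative.
    def __init__(self, num_patches=196, dim=256, enc_depth=4, reg_depth=2,
                 dec_depth=2, target_dim=768):
        super().__init__()
        def block():
            return nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(block(), num_layers=enc_depth)
        # (ii) cross-attention regressor: mask queries read from the visible
        # latents, which serve as keys/values only and are not updated here
        self.regressor = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
             for _ in range(reg_depth)])
        self.decoder = nn.TransformerEncoder(block(), num_layers=dec_depth)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))
        self.head = nn.Linear(dim, target_dim)  # per-patch target head (assumption)

    def forward(self, patch_emb, vis_idx, mask_idx):
        b = patch_emb.size(0)
        x = patch_emb + self.pos                            # add positional embeddings
        z_vis = self.encoder(gather_patches(x, vis_idx))    # (i) encode visible patches
        pos_mask = gather_patches(self.pos.expand(b, -1, -1), mask_idx)
        q = self.mask_token.expand(b, mask_idx.size(1), -1) + pos_mask
        for attn in self.regressor:                         # (ii) predict masked latents
            q = q + attn(q, z_vis, z_vis, need_weights=False)[0]
        pred = self.head(self.decoder(q))                   # (iii) decode masked content
        with torch.no_grad():                               # (iv) alignment target from
            z_mask = self.encoder(gather_patches(x, mask_idx))  # the same encoder
        return pred, q, z_mask

# Toy usage: 196 patch embeddings per image, half of them masked.
model = CAESketch()
emb = torch.randn(2, 196, 256)                   # assumes patches are already embedded
perm = torch.rand(2, 196).argsort(dim=1)         # random visible/masked partition
vis_idx, mask_idx = perm[:, :98], perm[:, 98:]
pred, z_pred, z_target = model(emb, vis_idx, mask_idx)
loss = F.mse_loss(z_pred, z_target)              # alignment term; a reconstruction
                                                 # loss on pred would be added as well

The point the sketch tries to capture is the separation of roles: only the mask queries are updated inside the regressor, so the encoder is pushed to produce visible-patch representations that are directly predictive of the masked ones.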


Related research

- Mask Hierarchical Features For Self-Supervised Learning (04/01/2023): This paper shows that Masking the Deep hierarchical features is an effic...
- Patch-level Representation Learning for Self-supervised Vision Transformers (06/16/2022): Recent self-supervised learning (SSL) methods have shown impressive resu...
- Masked Modeling Duo: Learning Representations by Encouraging Both Networks to Model the Input (10/26/2022): Masked Autoencoders is a simple yet powerful self-supervised learning me...
- HindSight: A Graph-Based Vision Model Architecture For Representing Part-Whole Hierarchies (04/08/2021): This paper presents a model architecture for encoding the representation...
- Hierarchical discriminative learning improves visual representations of biomedical microscopy (03/02/2023): Learning high-quality, self-supervised, visual representations is essent...
- Location-Aware Self-Supervised Transformers (12/05/2022): Pixel-level labels are particularly expensive to acquire. Hence, pretrai...
- PEDENet: Image Anomaly Localization via Patch Embedding and Density Estimation (10/29/2021): A neural network targeting at unsupervised image anomaly localization, c...
