Contrastive Tuning: A Little Help to Make Masked Autoencoders Forget

04/20/2023
by Johannes Lehner, et al.

Masked Image Modeling (MIM) methods, like Masked Autoencoders (MAE), efficiently learn a rich representation of the input. However, for adapting to downstream tasks they require a sufficient amount of labeled data, since their rich features capture not only objects but also less relevant image background. In contrast, Instance Discrimination (ID) methods focus on objects. In this work, we study how to combine the efficiency and scalability of MIM with the ability of ID to perform downstream classification in the absence of large amounts of labeled data. To this end, we introduce Masked Autoencoder Contrastive Tuning (MAE-CT), a sequential approach that applies Nearest Neighbor Contrastive Learning (NNCLR) to a pre-trained MAE. MAE-CT tunes the rich features such that they form semantic clusters of objects without using any labels. Applied to large and huge Vision Transformer (ViT) models, MAE-CT matches or exceeds previous self-supervised methods trained on ImageNet in linear probing, k-NN and low-shot classification accuracy, as well as in unsupervised clustering accuracy. Notably, similar results can be achieved without additional image augmentations. While ID methods generally rely on hand-crafted augmentations to avoid shortcut learning, we find that nearest neighbor lookup is sufficient and that this data-driven augmentation effect improves with model size. MAE-CT is compute efficient. For instance, starting from a MAE pre-trained ViT-L/16, MAE-CT substantially improves both the ImageNet 1% low-shot accuracy (67.7% before tuning) and the k-NN accuracy (60.6% before tuning).
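
The contrastive-tuning step described above follows the NNCLR recipe: embeddings from the pre-trained MAE encoder are projected, each sample's positive is swapped for its nearest neighbor in a support queue, and an InfoNCE objective is applied. The following PyTorch sketch illustrates that loss under assumed details; the function name, tensor shapes, and placeholder inputs are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def nnclr_loss(z1, z2, queue, temperature=0.1):
    """NNCLR-style loss sketch.
    z1, z2: projected embeddings of two views, shape (batch, dim).
    queue: support set of past embeddings, shape (queue_len, dim)."""
    # Normalize embeddings and the support queue to the unit sphere.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    queue = F.normalize(queue, dim=1)

    # Nearest-neighbor lookup: replace each embedding of view 1 with its
    # most similar queue entry (the data-driven "augmentation").
    nn_idx = (z1 @ queue.T).argmax(dim=1)        # (batch,)
    nn_z1 = queue[nn_idx]                        # (batch, dim)

    # InfoNCE: the retrieved neighbor should match view 2 of the same image,
    # with the other samples in the batch acting as negatives.
    logits = nn_z1 @ z2.T / temperature          # (batch, batch)
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

# Toy usage: in MAE-CT the embeddings would come from projector/predictor heads
# on top of the pre-trained MAE ViT encoder; here they are random placeholders.
batch, dim, queue_len = 32, 256, 1024
z1, z2 = torch.randn(batch, dim), torch.randn(batch, dim)
queue = torch.randn(queue_len, dim)
loss = nnclr_loss(z1, z2, queue)
```

In the full method the loss is typically symmetrized over both views and the queue is continuously refreshed with the latest embeddings; those details are omitted here for brevity.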


Related research

04/29/2021 · With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations
Self-supervised learning algorithms based on instance discrimination tra...

02/24/2020 · Deep Nearest Neighbor Anomaly Detection
Nearest neighbors is a successful and long-standing technique for anomal...

04/05/2023 · Self-Supervised Siamese Autoencoders
Fully supervised models often require large amounts of labeled training ...

11/25/2021 · Semantic-Aware Generation for Self-Supervised Visual Representation Learning
In this paper, we propose a self-supervised visual representation learni...

08/18/2021 · Self-Supervised Visual Representations Learning by Contrastive Mask Prediction
Advanced self-supervised visual representation learning methods rely on ...

06/16/2022 · Beyond Supervised vs. Unsupervised: Representative Benchmarking and Analysis of Image Representation Learning
By leveraging contrastive learning, clustering, and other pretext tasks,...

01/27/2023 · Leveraging the Third Dimension in Contrastive Learning
Self-Supervised Learning (SSL) methods operate on unlabeled data to lear...
