Emerging Properties in Self-Supervised Vision Transformers

by Mathilde Caron, et al.

In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the fact that adapting self-supervised methods to this architecture works particularly well, we make the following observations: first, self-supervised ViT features contain explicit information about the semantic segmentation of an image, which does not emerge as clearly with supervised ViTs, nor with convnets. Second, these features are also excellent k-NN classifiers, reaching 78.3% top-1 on ImageNet with a small ViT. Our study also underlines the importance of momentum encoder, multi-crop training, and the use of small patches with ViTs. We implement our findings into a simple self-supervised method, called DINO, which we interpret as a form of self-distillation with no labels. We show the synergy between DINO and ViTs by achieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base.
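The two ingredients the abstract highlights, self-distillation with no labels and a momentum encoder, can be sketched in a few lines. Below is a minimal numpy illustration (not the paper's implementation): the teacher's output is centered and sharpened before a cross-entropy against the student's prediction, and the teacher's weights are an exponential moving average (EMA) of the student's. Function names, temperatures, and the momentum value are illustrative assumptions.

```python
import numpy as np

def softmax(x, temp):
    # Temperature-scaled softmax; subtract max for numerical stability.
    z = x / temp
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dino_loss(student_logits, teacher_logits, center, t_student=0.1, t_teacher=0.04):
    # Teacher targets: centered, then sharpened with a low temperature.
    # Centering and sharpening together help avoid collapse to a trivial solution.
    targets = softmax(teacher_logits - center, t_teacher)
    preds = softmax(student_logits, t_student)
    # Cross-entropy between teacher targets and student predictions.
    return -(targets * np.log(preds + 1e-12)).sum(axis=-1).mean()

def ema_update(teacher_params, student_params, momentum=0.996):
    # Momentum encoder: teacher weights track an EMA of the student weights;
    # no gradients flow to the teacher.
    return momentum * teacher_params + (1.0 - momentum) * student_params
```

In training, the loss is computed across multi-crop views (global crops through the teacher, all crops through the student), and the center itself is updated as a running mean of teacher outputs.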




Related Research

Attention Distillation: self-supervised vision transformer students need more guidance

Self-supervised learning has been widely applied to train high-quality v...

PatchRot: A Self-Supervised Technique for Training Vision Transformers

Vision transformers require a huge amount of labeled data to outperform ...

An Empirical Study of Training Self-Supervised Vision Transformers

This paper does not describe a novel method. Instead, it studies a strai...

Self-Supervised Leaf Segmentation under Complex Lighting Conditions

As an essential prerequisite task in image-based plant phenotyping, leaf...

Where are my Neighbors? Exploiting Patches Relations in Self-Supervised Vision Transformer

Vision Transformers (ViTs) enabled the use of transformer architecture o...

Uncovering the Inner Workings of STEGO for Safe Unsupervised Semantic Segmentation

Self-supervised pre-training strategies have recently shown impressive r...

Position Labels for Self-Supervised Vision Transformer

Position encoding is important for vision transformer (ViT) to capture t...

Code Repositories


PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO



Retrieve Steam games with similar store banners, with Facebook's DINO.



Replicating and improving Facebook AI's self-supervised DINO and semi-supervised PAWS



Collection of Transformers research in Computer Vision

