dino
PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO
In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the fact that adapting self-supervised methods to this architecture works particularly well, we make the following observations: first, self-supervised ViT features contain explicit information about the semantic segmentation of an image, which does not emerge as clearly with supervised ViTs, nor with convnets. Second, these features are also excellent k-NN classifiers, reaching 78.3% top-1 on ImageNet with a small ViT. Our study also underlines the importance of momentum encoder, multi-crop training, and the use of small patches with ViTs. We implement our findings into a simple self-supervised method, called DINO, which we interpret as a form of self-distillation with no labels. We show the synergy between DINO and ViTs by achieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base.
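The abstract describes the training recipe at a high level: a student network is trained to match the output of a momentum (EMA) teacher across multiple crops of the same image, with no labels. Below is a minimal PyTorch sketch of that objective; the function names, temperatures, and momentum values are illustrative assumptions rather than the paper's exact implementation (see the official repo above for that).

```python
import torch
import torch.nn.functional as F

def dino_loss(student_out, teacher_out, center, tau_s=0.1, tau_t=0.04):
    """Cross-entropy between centered, sharpened teacher outputs and student outputs.

    student_out: list of logit tensors, one per crop seen by the student (global + local).
    teacher_out: list of logit tensors, one per global crop seen by the teacher.
    center:      running mean of teacher logits, subtracted to help avoid collapse.
    """
    teacher_probs = [F.softmax((t - center) / tau_t, dim=-1).detach() for t in teacher_out]
    loss, n_terms = 0.0, 0
    for i_t, t in enumerate(teacher_probs):
        for i_s, s in enumerate(student_out):
            if i_s == i_t:  # skip pairs where student and teacher see the same global crop
                continue
            loss += torch.sum(-t * F.log_softmax(s / tau_s, dim=-1), dim=-1).mean()
            n_terms += 1
    return loss / n_terms

@torch.no_grad()
def ema_update(student, teacher, momentum=0.996):
    """Momentum encoder: teacher weights are an exponential moving average of the student's."""
    for p_s, p_t in zip(student.parameters(), teacher.parameters()):
        p_t.data.mul_(momentum).add_(p_s.data, alpha=1.0 - momentum)

@torch.no_grad()
def update_center(center, teacher_out, m=0.9):
    """Update the running center of teacher logits with momentum m."""
    batch_center = torch.cat(teacher_out).mean(dim=0, keepdim=True)
    return center * m + batch_center * (1 - m)
```

In training, the teacher only processes the global crops while the student processes all crops, so the student learns "local-to-global" correspondences; the combination of centering and temperature sharpening is what keeps this label-free setup from collapsing to a trivial solution.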
Retrieve Steam games with similar store banners, using Facebook's DINO (see the retrieval sketch after this list).
Replicating and improving facebookai's self-supervised DINO and semi-supervised PAWS.
A collection of Transformers research in Computer Vision.
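The Steam-banner repository above illustrates a common downstream use of DINO features, in the spirit of the k-NN results reported in the abstract: nearest-neighbor image retrieval on frozen features. A minimal sketch, assuming the official torch.hub entry point (`facebookresearch/dino:main`, `dino_vits16`) and hypothetical banner file names:

```python
import torch
import torchvision.transforms as T
from PIL import Image

# Pretrained DINO ViT-S/16 backbone from the official repo's torch.hub entry point.
model = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
model.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])

@torch.no_grad()
def embed(paths):
    """Return L2-normalized DINO [CLS] features for a list of image paths."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    feats = model(batch)  # (N, 384) for ViT-S/16
    return torch.nn.functional.normalize(feats, dim=-1)

# Hypothetical banner files: rank the database by cosine similarity to a query banner.
db_paths = ["banner_001.jpg", "banner_002.jpg", "banner_003.jpg"]
db = embed(db_paths)
query = embed(["query_banner.jpg"])
scores = query @ db.T  # cosine similarity, since features are L2-normalized
ranking = scores.argsort(descending=True).squeeze(0).tolist()
print([db_paths[i] for i in ranking])
```

Because the features are normalized, the dot product is cosine similarity, so retrieval reduces to a single matrix multiply over the database embeddings; no fine-tuning of the backbone is needed.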