Towards Self-Supervised Learning of Global and Object-Centric Representations

03/11/2022
by   Federico Baldassarre, et al.

Self-supervision allows learning meaningful representations of natural images which usually contain one central object. How well does it transfer to multi-entity scenes? We discuss key aspects of learning structured object-centric representations with self-supervision and validate our insights through several experiments on the CLEVR dataset. Regarding the architecture, we confirm the importance of competition for attention-based object discovery, where each image patch is exclusively attended by one object. For training, we show that contrastive losses equipped with matching can be applied directly in a latent space, avoiding pixel-based reconstruction. However, such an optimization objective is sensitive to false negatives (recurring objects) and false positives (matching errors). Thus, careful consideration is required around data augmentation and negative sample selection.
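The competition mechanism described above can be sketched minimally: unlike standard cross-attention, the softmax is taken over the slot axis, so object slots compete for each image patch and every patch is (softly) exclusively assigned. This is a hedged illustration in the style of Slot Attention, not the authors' implementation; all names and shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def competitive_attention(slots, patches):
    """One round of slot-vs-patch competitive attention (illustrative).

    slots:   (K, D) array of object-slot queries
    patches: (N, D) array of image-patch features
    """
    K, D = slots.shape
    logits = slots @ patches.T / np.sqrt(D)  # (K, N) similarity scores
    # Key difference from ordinary cross-attention: normalize over the
    # slot axis (axis=0), so the K slots compete for each patch and the
    # attention weights for a single patch sum to 1 across slots.
    attn = softmax(logits, axis=0)           # (K, N)
    # Update each slot with a weighted mean of the patches it won
    # (re-normalized per slot over the patch axis).
    w = attn / (attn.sum(axis=1, keepdims=True) + 1e-8)
    updates = w @ patches                    # (K, D)
    return updates, attn
```

Because each patch's attention mass is shared out across slots rather than duplicated, a patch strongly claimed by one slot is correspondingly unavailable to the others, which is what drives the exclusive patch-to-object assignment the abstract refers to.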


Related research

- 12/10/2021: Learning Representations with Contrastive Self-Supervised Learning for Histopathology Applications
- 03/09/2021: Self-Supervision by Prediction for Object Discovery in Videos
- 10/25/2022: Learning Explicit Object-Centric Representations with Vision Transformers
- 01/18/2023: ViT-AE++: Improving Vision Transformer Autoencoder for Self-supervised Medical Image Representations
- 06/16/2021: PatchNet: Unsupervised Object Discovery based on Patch Embedding
- 06/07/2023: Coarse Is Better? A New Pipeline Towards Self-Supervised Learning with Uncurated Images
- 04/18/2022: Inductive Biases for Object-Centric Representations of Complex Textures
