Spatially Consistent Representation Learning

03/10/2021
by Byungseok Roh, et al.

Self-supervised learning has been widely used to obtain transferable representations from unlabeled images. In particular, recent contrastive learning methods have shown impressive performance on downstream image classification tasks. While these contrastive methods mainly focus on generating invariant global representations at the image level under semantic-preserving transformations, they tend to overlook the spatial consistency of local representations and are therefore limited as pretraining for localization tasks such as object detection and instance segmentation. Moreover, the aggressively cropped views used in existing contrastive methods can minimize representation distances between semantically different regions of a single image. In this paper, we propose a spatially consistent representation learning algorithm (SCRL) for multi-object and location-specific tasks. In particular, we devise a novel self-supervised objective that tries to produce coherent spatial representations of a randomly cropped local region under geometric translations and zooming operations. On various downstream localization tasks with benchmark datasets, the proposed SCRL shows significant performance improvements over image-level supervised pretraining as well as state-of-the-art self-supervised learning methods.
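The core idea of aligning representations of the same image region across two differently cropped views can be sketched as follows. This is an illustrative simplification, not the paper's implementation: the function names are ours, and the actual SCRL pools region features with RoIAlign inside a BYOL-style online/target network rather than with the simple average pooling shown here.

```python
import numpy as np

def box_intersection(box_a, box_b):
    # Boxes are (x1, y1, x2, y2) in original-image coordinates.
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    if x2 <= x1 or y2 <= y1:
        return None  # the two crops do not overlap
    return (x1, y1, x2, y2)

def pool_region(feat, crop_box, region_box):
    # feat: (C, H, W) feature map of one cropped view. Map the shared
    # region (image coordinates) into this crop's feature grid and
    # average-pool it into a single C-dimensional vector.
    C, H, W = feat.shape
    cx1, cy1, cx2, cy2 = crop_box
    sx, sy = W / (cx2 - cx1), H / (cy2 - cy1)
    fx1 = int(np.floor((region_box[0] - cx1) * sx))
    fy1 = int(np.floor((region_box[1] - cy1) * sy))
    fx2 = max(fx1 + 1, int(np.ceil((region_box[2] - cx1) * sx)))
    fy2 = max(fy1 + 1, int(np.ceil((region_box[3] - cy1) * sy)))
    return feat[:, fy1:fy2, fx1:fx2].mean(axis=(1, 2))

def spatial_consistency_loss(feat_a, box_a, feat_b, box_b):
    # Pool the overlapping region from both views and penalize the
    # distance between the two region representations.
    inter = box_intersection(box_a, box_b)
    if inter is None:
        return None  # no shared region, nothing to align
    za = pool_region(feat_a, box_a, inter)
    zb = pool_region(feat_b, box_b, inter)
    za, zb = za / np.linalg.norm(za), zb / np.linalg.norm(zb)
    # For unit vectors, 2 - 2*cos equals the squared L2 distance.
    return 2.0 - 2.0 * float(za @ zb)
```

For identical views of the same crop the loss is zero, and it grows as the pooled representations of the shared region diverge, which is the consistency the method enforces under translation and zoom.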


