Aligning Pretraining for Detection via Object-Level Contrastive Learning

06/04/2021
by Fangyun Wei, et al.

Image-level contrastive representation learning has proven highly effective as a generic pretraining strategy for transfer learning. This generality, however, comes at the cost of specificity when a particular downstream task is of interest. We argue that this can be sub-optimal and therefore advocate a design principle that encourages alignment between the self-supervised pretext task and the downstream task. In this paper, we follow this principle with a pretraining method designed specifically for object detection. We attain alignment in three aspects: 1) object-level representations are introduced via selective search bounding boxes used as object proposals; 2) the pretraining network architecture incorporates the same dedicated modules used in the detection pipeline (e.g., FPN); 3) the pretraining is equipped with object detection properties such as object-level translation invariance and scale invariance. Our method, called Selective Object COntrastive learning (SoCo), achieves state-of-the-art transfer performance on COCO detection with a Mask R-CNN framework. Code and models will be made available.
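To make the notion of "object-level" contrastive learning concrete, the sketch below computes a plain InfoNCE loss over per-proposal embeddings: row i of each input embeds the same object proposal (e.g. a selective-search box) under two different augmented views, so matching rows are positives and all other rows in the batch are negatives. This is an illustrative simplification, not SoCo's actual objective (SoCo uses a BYOL-style loss with a momentum encoder); the function name and shapes are assumptions for the sketch.

```python
import numpy as np

def object_level_infonce(z1, z2, tau=0.1):
    """Illustrative object-level InfoNCE loss (not SoCo's exact BYOL-style loss).

    z1, z2: (N, D) arrays of per-object embeddings from two augmented views;
    row i of z1 and row i of z2 embed the same object proposal (positive pair),
    while all other rows act as negatives. tau is the softmax temperature.
    """
    # L2-normalise so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / tau                       # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positive pairs sit on the diagonal of the similarity matrix
    return -np.mean(np.diag(log_prob))
```

Because the positives are per-box rather than per-image, cropping and rescaling the boxes before encoding is what bakes the object-level translation and scale invariance mentioned in the abstract into the learned representation.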

Related research

- CoDo: Contrastive Learning with Downstream Background Invariance for Detection (05/10/2022): The prior self-supervised learning researches mainly select image-level ...
- Instance Localization for Self-supervised Detection Pretraining (02/16/2021): Prior research on self-supervised learning has led to considerable progr...
- What makes instance discrimination good for transfer learning? (06/11/2020): Unsupervised visual pretraining based on the instance discrimination pre...
- A Study on Self-Supervised Object Detection Pretraining (07/09/2022): In this work, we study different approaches to self-supervised pretraini...
- Contrastive View Design Strategies to Enhance Robustness to Domain Shifts in Downstream Object Detection (12/09/2022): Contrastive learning has emerged as a competitive pretraining method for...
- DETReg: Unsupervised Pretraining with Region Priors for Object Detection (06/08/2021): Unsupervised pretraining has recently proven beneficial for computer vis...
- Unsupervised Pretraining for Object Detection by Patch Reidentification (03/08/2021): Unsupervised representation learning achieves promising performances in ...
