Contrastive Object-level Pre-training with Spatial Noise Curriculum Learning

11/26/2021
by Chenhongyi Yang, et al.

The goal of contrastive learning based pre-training is to leverage large quantities of unlabeled data to produce a model that can be readily adapted downstream. Current approaches revolve around solving an image discrimination task: given an anchor image, an augmented counterpart of that image, and some other images, the model must produce representations such that the distance between the anchor and its counterpart is small, and the distances between the anchor and the other images are large. There are two significant problems with this approach: (i) by contrasting representations at the image-level, it is hard to generate detailed object-sensitive features that are beneficial to downstream object-level tasks such as instance segmentation; (ii) the augmentation strategy of producing an augmented counterpart is fixed, making learning less effective at the later stages of pre-training. In this work, we introduce Curricular Contrastive Object-level Pre-training (CCOP) to tackle these problems: (i) we use selective search to find rough object regions and use them to build an inter-image object-level contrastive loss and an intra-image object-level discrimination loss into our pre-training objective; (ii) we present a curriculum learning mechanism that adaptively augments the generated regions, which allows the model to consistently acquire a useful learning signal, even in the later stages of pre-training. Our experiments show that our approach improves on the MoCo v2 baseline by a large margin on multiple object-level tasks when pre-training on multi-object scene image datasets. Code is available at https://github.com/ChenhongyiYang/CCOP.
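To make the two ingredients concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' released code): jitter_boxes adds spatial noise to the rough boxes produced by selective search, noise_schedule is an assumed linear curriculum that increases the jitter strength as pre-training progresses so positives remain challenging late in training, and object_info_nce is a standard InfoNCE loss over per-object embeddings, standing in for the inter-image object-level contrastive term. All function names, the schedule, and the hyperparameter values are illustrative assumptions.

    # Hypothetical sketch of the two ingredients described in the abstract:
    # (1) spatial-noise jittering of rough object boxes under a curriculum
    #     schedule, and (2) an object-level InfoNCE contrastive loss.
    # Names, schedule, and values are assumptions, not the authors' exact code.
    import torch
    import torch.nn.functional as F

    def jitter_boxes(boxes, noise_scale, image_size):
        """Add spatial noise to (x1, y1, x2, y2) boxes.

        noise_scale is a fraction of each box's width/height; larger values
        make the positive pair harder to match.
        """
        w = (boxes[:, 2] - boxes[:, 0]).clamp(min=1.0)
        h = (boxes[:, 3] - boxes[:, 1]).clamp(min=1.0)
        scale = torch.stack([w, h, w, h], dim=1)
        noise = (torch.rand_like(boxes) * 2 - 1) * noise_scale * scale
        jittered = boxes + noise
        jittered[:, [0, 2]] = jittered[:, [0, 2]].clamp(0, image_size[1])
        jittered[:, [1, 3]] = jittered[:, [1, 3]].clamp(0, image_size[0])
        return jittered

    def noise_schedule(step, total_steps, start=0.1, end=0.5):
        """Assumed linear curriculum: jitter strength grows with training progress."""
        t = min(step / max(total_steps, 1), 1.0)
        return start + t * (end - start)

    def object_info_nce(q, k, queue, tau=0.2):
        """InfoNCE over L2-normalised per-object embeddings.

        q, k:   (N, D) region features from the query / key encoders,
                matched object-by-object across the two augmented views.
        queue:  (K, D) negatives, e.g. region features from other images.
        """
        q = F.normalize(q, dim=1)
        k = F.normalize(k, dim=1)
        queue = F.normalize(queue, dim=1)
        pos = (q * k).sum(dim=1, keepdim=True)             # (N, 1) positive logits
        neg = q @ queue.t()                                 # (N, K) negative logits
        logits = torch.cat([pos, neg], dim=1) / tau
        labels = torch.zeros(q.size(0), dtype=torch.long)   # positives sit at index 0
        return F.cross_entropy(logits, labels)

    # Toy usage: 8 proposal boxes in a 224x224 image at step 5,000 of 100,000.
    boxes = torch.rand(8, 4) * 200
    boxes[:, 2:] += boxes[:, :2]  # ensure x2 > x1 and y2 > y1
    sigma = noise_schedule(step=5000, total_steps=100000)
    noisy_boxes = jitter_boxes(boxes, sigma, image_size=(224, 224))

    q = torch.randn(8, 128)
    k = torch.randn(8, 128)
    queue = torch.randn(4096, 128)
    loss = object_info_nce(q, k, queue)

In the paper's setting, q and k would come from RoI-pooled features of the jittered regions in two augmented views, encoded by the MoCo v2 query and momentum encoders; the random tensors above only demonstrate the expected shapes.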
