Colorization as a Proxy Task for Visual Understanding

03/11/2017
by Gustav Larsson et al.

We investigate and improve self-supervision as a drop-in replacement for ImageNet pretraining, focusing on automatic colorization as the proxy task. Self-supervised training has been shown to be more promising for utilizing unlabeled data than other, traditional unsupervised learning methods. We build on this success and evaluate the ability of our self-supervised network in several contexts. On VOC segmentation and classification tasks, we present results that are state-of-the-art among methods not using ImageNet labels for pretraining representations. Moreover, we present the first in-depth analysis of self-supervision via colorization, concluding that the formulation of the loss, training details, and network architecture all play important roles in its effectiveness. This investigation is further expanded by revisiting the ImageNet pretraining paradigm, asking questions such as: How much training data is needed? How many labels are needed? How much do features change when fine-tuned? We relate these questions back to self-supervision by showing that colorization provides a supervisory signal comparable in strength to various flavors of ImageNet pretraining.
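To make the proxy-task setup concrete, below is a minimal sketch (assuming PyTorch) of the general pattern the abstract describes: the grayscale channel of an image is the input, its discarded color channels are the free supervisory signal, and the backbone trained this way stands in for an ImageNet-pretrained one when fine-tuning. The small network, regression loss, and training step are illustrative placeholders, not the architecture or loss formulation analyzed in the paper.

```python
# Minimal sketch of colorization as a self-supervised proxy task (assumes PyTorch).
# The backbone, loss, and data here are illustrative placeholders, not the exact
# setup studied in the paper.
import torch
import torch.nn as nn

class Backbone(nn.Module):
    """Feature extractor trained on colorization, later reused for downstream tasks."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.features(x)

class ColorizationHead(nn.Module):
    """Predicts the two chrominance channels (e.g. ab in Lab space) from features."""
    def __init__(self, in_ch=256):
        super().__init__()
        self.predict = nn.Conv2d(in_ch, 2, 1)

    def forward(self, feats):
        return self.predict(feats)

def pretrain_step(backbone, head, optimizer, gray, chroma):
    """One self-supervised step: grayscale in, discarded color channels as the target."""
    optimizer.zero_grad()
    pred = head(backbone(gray))
    loss = nn.functional.mse_loss(pred, chroma)  # simple regression loss as a placeholder
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    backbone, head = Backbone(), ColorizationHead()
    opt = torch.optim.SGD(list(backbone.parameters()) + list(head.parameters()), lr=0.01)
    # Fake batch standing in for real images split into luminance and chrominance.
    gray = torch.randn(4, 1, 64, 64)
    chroma = torch.randn(4, 2, 64, 64)
    print("colorization loss:", pretrain_step(backbone, head, opt, gray, chroma))
    # After pretraining, `backbone` would be fine-tuned on a labeled task
    # (e.g. VOC segmentation or classification) in place of ImageNet-pretrained weights.
```

In the paper's setting, the same pattern is applied at scale and the pretrained features are then fine-tuned and evaluated on VOC segmentation and classification.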


Related research

Representation Learning via Invariant Causal Mechanisms (10/15/2020)
Efficient Visual Pretraining with Contrastive Detection (03/19/2021)
How Useful is Self-Supervised Pretraining for Visual Tasks? (03/31/2020)
PASS: An ImageNet replacement for self-supervised pretraining without humans (09/27/2021)
Selfie: Self-supervised Pretraining for Image Embedding (06/07/2019)
Better Self-training for Image Classification through Self-supervision (09/02/2021)
SimDETR: Simplifying self-supervised pretraining for DETR (07/28/2023)
