Tracking Emerges by Colorizing Videos

06/25/2018
by Carl Vondrick et al.

We use large amounts of unlabeled video to learn models for visual tracking without manual human supervision. We leverage the natural temporal coherency of color to create a model that learns to colorize gray-scale videos by copying colors from a reference frame. Quantitative and qualitative experiments suggest that this task causes the model to automatically learn to track visual regions. Although the model is trained without any ground-truth labels, our method learns to track well enough to outperform optical-flow-based methods. Finally, our results suggest that failures to track are correlated with failures to colorize, indicating that advancing video colorization may further improve self-supervised visual tracking.
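The copy-from-reference idea in the abstract can be sketched as a soft pointer: each gray-scale target pixel attends over reference-frame pixels via embedding similarity and copies a convex combination of their colors. The sketch below is illustrative only; the function name, shapes, and temperature value are assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def copy_colors(ref_emb, tgt_emb, ref_colors, temperature=0.5):
    """Predict target-frame colors by softly copying from the reference frame.

    ref_emb:    (N, D) learned per-pixel embeddings of the reference frame
    tgt_emb:    (M, D) embeddings of the gray-scale target frame
    ref_colors: (N, C) reference colors (at test time, labels or masks
                can be propagated the same way, which yields tracking)
    """
    # Similarity between every target pixel and every reference pixel.
    sim = tgt_emb @ ref_emb.T / temperature   # (M, N)
    attn = softmax(sim, axis=1)               # pointer weights; each row sums to 1
    return attn @ ref_colors                  # (M, C) copied colors

# Toy usage: 4 reference pixels, 3 target pixels, 8-dim embeddings, RGB colors.
rng = np.random.default_rng(0)
ref_emb = rng.normal(size=(4, 8))
tgt_emb = rng.normal(size=(3, 8))
ref_colors = rng.uniform(size=(4, 3))
pred = copy_colors(ref_emb, tgt_emb, ref_colors)
```

Because the prediction is a convex combination of reference colors, replacing `ref_colors` with a segmentation mask at test time propagates that mask to later frames, which is how colorization doubles as tracking.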


Related research

- Self-supervised Learning for Video Correspondence Flow (05/02/2019)
- The Sound of Pixels (04/09/2018)
- Learning Correspondence from the Cycle-Consistency of Time (03/18/2019)
- SoundNet: Learning Sound Representations from Unlabeled Video (10/27/2016)
- Temporal Interpolation as an Unsupervised Pretraining Task for Optical Flow Estimation (09/21/2018)
- CoTracker: It is Better to Track Together (07/14/2023)
- Online Deep Clustering with Video Track Consistency (06/07/2022)
