CroCo: Self-Supervised Pre-training for 3D Vision Tasks by Cross-View Completion

10/19/2022
by Philippe Weinzaepfel, et al.

Masked Image Modeling (MIM) has recently been established as a potent pre-training paradigm. A pretext task is constructed by masking patches in an input image, and this masked content is then predicted by a neural network using the visible patches as its sole input. This pre-training leads to state-of-the-art performance when fine-tuned for high-level semantic tasks, e.g. image classification and object detection. In this paper we instead seek to learn representations that transfer well to a wide variety of 3D vision and lower-level geometric downstream tasks, such as depth prediction or optical flow estimation. Inspired by MIM, we propose an unsupervised representation learning task trained from pairs of images showing the same scene from different viewpoints. More precisely, we propose the pretext task of cross-view completion, where the first input image is partially masked and the masked content has to be reconstructed from the visible content and the second image. In single-view MIM, the masked content often cannot be inferred precisely from the visible portion alone, so the model essentially learns a prior shaped by high-level semantics. With cross-view completion, in contrast, this ambiguity can be resolved using the second, unmasked image, provided the model is able to understand the spatial relationship between the two images. Our experiments show that our pretext task leads to significantly improved performance on monocular 3D vision downstream tasks such as depth estimation. In addition, our model can be directly applied to binocular downstream tasks like optical flow or relative camera pose estimation, for which we obtain competitive results without bells and whistles, i.e., using a generic architecture without any task-specific design.
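To make the pretext task concrete, below is a minimal PyTorch sketch of cross-view completion as described in the abstract: patches of the first view are heavily masked, both views are encoded, and a decoder reconstructs the masked patches from the visible view-1 tokens together with the full second view. All names, dimensions, the masking ratio, and the use of plain Transformer encoder/decoder blocks are illustrative assumptions and deliberate simplifications; they do not reproduce the actual CroCo architecture or training setup.

```python
# Illustrative sketch only; hyper-parameters and modules are assumptions,
# not the CroCo implementation.
import torch
import torch.nn as nn


class CrossViewCompletion(nn.Module):
    def __init__(self, num_patches=196, patch_dim=16 * 16 * 3, dim=256, mask_ratio=0.9):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.patch_embed = nn.Linear(patch_dim, dim)   # embed flattened 16x16 RGB patches
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        # Shared encoder applied to the visible patches of view 1 and to the full view 2.
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=4)
        # Simplified decoder: self-attention over the concatenation of both views' tokens
        # (the paper's decoder cross-attends to the second view instead).
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=2)
        self.head = nn.Linear(dim, patch_dim)          # predict raw pixels per patch

    def forward(self, patches1, patches2):
        # patches1, patches2: (B, N, patch_dim) flattened patches of two views of one scene.
        B, N, D = patches1.shape
        dim = self.pos_embed.size(-1)
        tokens1 = self.patch_embed(patches1) + self.pos_embed
        tokens2 = self.patch_embed(patches2) + self.pos_embed

        # Randomly keep a small fraction of view-1 patches; the rest are masked out.
        num_keep = int(N * (1 - self.mask_ratio))
        ids = torch.rand(B, N, device=patches1.device).argsort(dim=1)
        keep, masked = ids[:, :num_keep], ids[:, num_keep:]
        visible1 = torch.gather(tokens1, 1, keep.unsqueeze(-1).expand(-1, -1, dim))

        enc1 = self.encoder(visible1)   # encode visible view-1 patches
        enc2 = self.encoder(tokens2)    # encode the full, unmasked view 2

        # Mask tokens carry only positional information for the missing view-1 patches.
        pos = self.pos_embed.expand(B, -1, -1)
        mask_pos = torch.gather(pos, 1, masked.unsqueeze(-1).expand(-1, -1, dim))
        mask_tok = self.mask_token.expand(B, N - num_keep, -1) + mask_pos

        # The decoder fills in the masked patches using tokens from both views.
        dec_out = self.decoder(torch.cat([enc1, mask_tok, enc2], dim=1))
        pred = self.head(dec_out[:, num_keep:N])       # predictions at masked positions

        # Reconstruction loss against the ground-truth masked patches of view 1.
        target = torch.gather(patches1, 1, masked.unsqueeze(-1).expand(-1, -1, D))
        return ((pred - target) ** 2).mean()


# Toy usage with random tensors standing in for two views of the same scene.
model = CrossViewCompletion()
view1 = torch.randn(2, 196, 16 * 16 * 3)
view2 = torch.randn(2, 196, 16 * 16 * 3)
loss = model(view1, view2)
loss.backward()
```

The key difference from single-view MIM is visible in the decoder input: alongside the visible and masked view-1 tokens, it receives the tokens of the second, unmasked view, so the missing content can be recovered by relating the two viewpoints rather than by relying on a learned semantic prior alone.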


Related research

- Improved Cross-view Completion Pre-training for Stereo Matching (11/18/2022): Despite impressive performance for high-level downstream tasks, self-sup...
- Architecture-Agnostic Masked Image Modeling – From ViT back to CNN (05/27/2022): Masked image modeling (MIM), an emerging self-supervised pre-training me...
- Images Speak in Images: A Generalist Painter for In-Context Visual Learning (12/05/2022): In-context learning, as a new paradigm in NLP, allows the model to rapid...
- Can Language Understand Depth? (07/03/2022): Besides image classification, Contrastive Language-Image Pre-training (C...
- Unsupervised Path Representation Learning with Curriculum Negative Sampling (06/17/2021): Path representations are critical in a variety of transportation applica...
- The Surprising Effectiveness of Diffusion Models for Optical Flow and Monocular Depth Estimation (06/02/2023): Denoising diffusion probabilistic models have transformed image generati...
- Improvements to Self-Supervised Representation Learning for Masked Image Modeling (05/21/2022): This paper explores improvements to the masked image modeling (MIM) para...
