A Closer Look at Invariances in Self-supervised Pre-training for 3D Vision

07/11/2022
by   Lanxiao Li, et al.

Self-supervised pre-training for 3D vision has drawn increasing research interest in recent years. To learn informative representations, many previous works exploit invariances of 3D features, e.g., perspective-invariance between views of the same scene, modality-invariance between depth and RGB images, and format-invariance between point clouds and voxels. Although they have achieved promising results, previous works lack a systematic and fair comparison of these invariances. To address this issue, our work, for the first time, introduces a unified framework under which various pre-training methods can be investigated. We conduct extensive experiments and provide a closer look at the contributions of different invariances in 3D pre-training. We also propose a simple but effective method that jointly pre-trains a 3D encoder and a depth map encoder using contrastive learning. Models pre-trained with our method gain a significant performance boost in downstream tasks. For instance, a pre-trained VoteNet outperforms previous methods on the SUN RGB-D and ScanNet object detection benchmarks by a clear margin.
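The cross-modal contrastive pre-training described in the abstract typically pairs the two encoders' outputs for the same scene as positives and all other scenes in the batch as negatives. A common objective for this setup is a symmetric InfoNCE loss; the NumPy sketch below illustrates the idea only (the function names, temperature value, and embedding shapes are assumptions for illustration, not the paper's exact implementation):

```python
import numpy as np

def log_softmax(x):
    """Row-wise log-softmax with the usual max-subtraction for stability."""
    x = x - x.max(axis=1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))

def info_nce_loss(z_3d, z_depth, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of embeddings.

    z_3d, z_depth: (N, d) arrays; row i of each is a positive pair,
    e.g. features of the same scene from a 3D (point cloud) encoder
    and a depth-map encoder. Other rows in the batch act as negatives.
    """
    # L2-normalize so the dot product is a cosine similarity
    z_3d = z_3d / np.linalg.norm(z_3d, axis=1, keepdims=True)
    z_depth = z_depth / np.linalg.norm(z_depth, axis=1, keepdims=True)

    # (N, N) similarity matrix, scaled by the temperature
    logits = z_3d @ z_depth.T / temperature

    # Cross-entropy with the diagonal (matching pairs) as targets,
    # averaged over both matching directions (3D->depth and depth->3D)
    log_p_ab = log_softmax(logits)
    log_p_ba = log_softmax(logits.T)
    return -(np.diag(log_p_ab).mean() + np.diag(log_p_ba).mean()) / 2
```

Minimizing this loss pulls the two modalities' embeddings of the same scene together while pushing apart embeddings of different scenes, which is what lets a single framework compare perspective-, modality-, and format-invariances by simply swapping which two views feed the loss.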
