Point-GCC: Universal Self-supervised 3D Scene Pre-training via Geometry-Color Contrast

05/31/2023
by   Guofan Fan, et al.

Geometry and color information provided by point clouds are both crucial for 3D scene understanding. The two types of information characterize different aspects of point clouds, but existing methods lack an elaborate design for their discrimination and relevance. Hence we explore a 3D self-supervised paradigm that better exploits the relations between the two kinds of point cloud information. Specifically, we propose a universal 3D scene pre-training framework via Geometry-Color Contrast (Point-GCC), which aligns geometry and color information using a Siamese network. To accommodate actual application tasks, we design (i) hierarchical supervision with point-level contrast and reconstruction, plus object-level contrast based on a novel deep clustering module, to close the gap between pre-training and downstream tasks; and (ii) an architecture-agnostic backbone that adapts to various downstream models. Benefiting from object-level representations associated with downstream tasks, Point-GCC can directly evaluate model performance, and the results demonstrate the effectiveness of our method. Transfer learning results on a wide range of tasks also show consistent improvements across all datasets, e.g., new state-of-the-art object detection results on the SUN RGB-D and S3DIS datasets. Code will be released at https://github.com/Asterisci/Point-GCC.
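
To make the point-level geometry-color contrast described above more concrete, the following is a minimal sketch of an InfoNCE-style loss between per-point geometry and color embeddings from the two Siamese branches. The tensor shapes, the function name point_level_contrast, and the temperature value are illustrative assumptions, not the authors' released implementation, which additionally includes point-level reconstruction and object-level contrast via deep clustering.

```python
# Minimal sketch (assumptions noted above): point-level geometry-color contrast.
# geo_feat and color_feat stand in for the outputs of hypothetical geometry and
# color encoders that map each point of a scene to a D-dimensional embedding.
import torch
import torch.nn.functional as F


def point_level_contrast(geo_feat, color_feat, temperature=0.07):
    """Symmetric InfoNCE between geometry and color embeddings of the same points.

    geo_feat, color_feat: (B, N, D) per-point features from the two branches.
    Corresponding points are positives; all other points in the scene are negatives.
    """
    B, N, _ = geo_feat.shape
    g = F.normalize(geo_feat, dim=-1)                         # (B, N, D)
    c = F.normalize(color_feat, dim=-1)                       # (B, N, D)
    logits = torch.bmm(g, c.transpose(1, 2)) / temperature    # (B, N, N)
    targets = torch.arange(N, device=g.device).expand(B, N)   # diagonal is positive
    # Symmetric loss: geometry-to-color and color-to-geometry directions.
    loss_gc = F.cross_entropy(logits.reshape(B * N, N), targets.reshape(-1))
    loss_cg = F.cross_entropy(logits.transpose(1, 2).reshape(B * N, N),
                              targets.reshape(-1))
    return 0.5 * (loss_gc + loss_cg)


if __name__ == "__main__":
    # Toy usage with random features standing in for encoder outputs.
    geo = torch.randn(2, 1024, 64)
    col = torch.randn(2, 1024, 64)
    print(point_level_contrast(geo, col).item())
```

In the full framework, a loss of this kind would be one term of the hierarchical supervision; the object-level contrast operates analogously over cluster-level representations rather than individual points.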


Related research

10/28/2022 | Self-Supervised Learning with Multi-View Rendering for 3D Point Cloud Analysis
Recently, great progress has been made in 3D deep learning with the emer...

07/01/2022 | Masked Autoencoders for Self-Supervised Learning on Automotive Point Clouds
Masked autoencoding has become a successful pre-training paradigm for Tr...

07/11/2023 | Self-supervised adversarial masking for 3D point cloud representation learning
Self-supervised methods have been proven effective for learning deep rep...

01/09/2023 | Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling
We identify and overcome two key obstacles in extending the success of B...

03/17/2022 | DATA: Domain-Aware and Task-Aware Pre-training
The paradigm of training models on massive data without label through se...

05/19/2023 | PointGPT: Auto-regressively Generative Pre-training from Point Clouds
Large language models (LLMs) based on the generative pre-training transf...

12/06/2022 | GD-MAE: Generative Decoder for MAE Pre-training on LiDAR Point Clouds
Despite the tremendous progress of Masked Autoencoders (MAE) in developi...
