CLIP2Point: Transfer CLIP to Point Cloud Classification with Image-Depth Pre-training

10/03/2022
by Tianyu Huang, et al.

Pre-training across 3D vision and language remains under development because of limited training data. Recent works attempt to transfer vision-language pre-training models to 3D vision. PointCLIP converts point cloud data to multi-view depth maps and adopts CLIP for shape classification. However, its performance is restricted by the domain gap between rendered depth maps and natural images, as well as by the diversity of depth distributions. To address these issues, we propose CLIP2Point, an image-depth pre-training method that transfers CLIP to the 3D domain via contrastive learning and adapts it to point cloud classification. We introduce a new depth rendering setting that yields visually more faithful depth maps, and then render 52,460 pairs of images and depth maps from ShapeNet for pre-training. The CLIP2Point pre-training scheme combines cross-modality learning, which constrains depth features to capture expressive visual and textual cues, with intra-modality learning, which enhances the invariance of depth aggregation. Additionally, we propose a novel Dual-Path Adapter (DPA) module, i.e., a dual-path structure with simplified adapters for few-shot learning. The dual-path structure allows the joint use of CLIP and CLIP2Point, and the simplified adapters fit few-shot tasks well without post-search. Experimental results show that CLIP2Point is effective in transferring CLIP knowledge to 3D vision: it outperforms PointCLIP and other self-supervised 3D networks, achieving state-of-the-art results on zero-shot and few-shot classification.
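To make the combined objective concrete, below is a minimal PyTorch sketch of an image-depth contrastive pre-training loss of the kind the abstract describes: a cross-modality term aligning depth embeddings with frozen CLIP image embeddings, plus an intra-modality term aligning two depth renderings of the same shape. The function names (`info_nce`, `clip2point_pretrain_loss`), the symmetric InfoNCE form, and the equal weighting of the two terms are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired embeddings.

    a, b: (N, D) L2-normalised embeddings; row i of `a` is the
    positive for row i of `b`, all other rows are negatives.
    """
    logits = a @ b.t() / temperature          # (N, N) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def clip2point_pretrain_loss(image_encoder, depth_encoder,
                             images, depth_v1, depth_v2):
    """Hypothetical combined pre-training objective.

    The cross-modality term pulls a depth embedding toward the CLIP
    embedding of its paired rendered image; the intra-modality term
    pulls two depth renderings of the same shape toward each other,
    encouraging invariance of the aggregated depth features.
    """
    with torch.no_grad():                     # keep CLIP's image tower frozen
        img = F.normalize(image_encoder(images), dim=-1)
    d1 = F.normalize(depth_encoder(depth_v1), dim=-1)
    d2 = F.normalize(depth_encoder(depth_v2), dim=-1)
    cross = info_nce(d1, img)                 # depth <-> image alignment
    intra = info_nce(d1, d2)                  # depth view <-> depth view
    return cross + intra                      # equal weighting assumed
```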
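The dual-path idea can be sketched in the same spirit: two frozen encoders, the original CLIP visual encoder and the image-depth pre-trained one, each followed by a lightweight adapter, with their class logits against CLIP text features fused at the end. The `SimplifiedAdapter` layout, the residual mixing weight `alpha`, and the `fuse` coefficient below are hypothetical placeholders; the paper's actual DPA design may differ.

```python
import torch
import torch.nn as nn

class SimplifiedAdapter(nn.Module):
    """Lightweight residual MLP adapter (illustrative layout)."""
    def __init__(self, dim, hidden=256, alpha=0.5):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))
        self.alpha = alpha                    # residual mixing weight

    def forward(self, x):
        return self.alpha * self.mlp(x) + (1 - self.alpha) * x

class DualPathAdapter(nn.Module):
    """Dual-path few-shot head: one path keeps the frozen CLIP visual
    encoder, the other the image-depth pre-trained depth encoder; each
    gets its own trainable adapter, and their logits are fused."""
    def __init__(self, clip_encoder, depth_encoder, text_features, dim=512):
        super().__init__()
        self.clip_encoder = clip_encoder.eval()      # frozen path 1
        self.depth_encoder = depth_encoder.eval()    # frozen path 2
        self.adapt_clip = SimplifiedAdapter(dim)
        self.adapt_depth = SimplifiedAdapter(dim)
        # (C, dim) L2-normalised CLIP text embeddings of the class prompts
        self.register_buffer('text', text_features)

    def forward(self, depth_maps, fuse=0.5):
        with torch.no_grad():                 # only the adapters are trained
            f1 = self.clip_encoder(depth_maps)
            f2 = self.depth_encoder(depth_maps)
        f1 = nn.functional.normalize(self.adapt_clip(f1), dim=-1)
        f2 = nn.functional.normalize(self.adapt_depth(f2), dim=-1)
        return fuse * f1 @ self.text.t() + (1 - fuse) * f2 @ self.text.t()
```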


Related research

12/04/2021 · PointCLIP: Point Cloud Understanding by CLIP
Recently, zero-shot and few-shot learning via Contrastive Vision-Languag...

05/25/2023 · DiffCLIP: Leveraging Stable Diffusion for Language Grounded 3D Classification
Large pre-trained models have had a significant impact on computer visio...

07/27/2023 · Take-A-Photo: 3D-to-2D Generative Pre-training of Point Cloud Models
With the overwhelming trend of mask image modeling led by MAE, generativ...

02/27/2023 · Joint-MAE: 2D-3D Joint Masked Autoencoders for 3D Point Cloud Pre-training
Masked Autoencoders (MAE) have shown promising performance in self-super...

11/21/2022 · PointCLIP V2: Adapting CLIP for Powerful 3D Open-world Learning
Contrastive Language-Image Pre-training (CLIP) has shown promising open-...

12/29/2022 · Self-Supervised Pre-training for 3D Point Clouds via View-Specific Point-to-Image Translation
The past few years have witnessed the prevalence of self-supervised repr...

07/01/2021 · Learning to See before Learning to Act: Visual Pre-training for Manipulation
Does having visual priors (e.g. the ability to detect objects) facilitat...
