CLIP-FO3D: Learning Free Open-world 3D Scene Representations from 2D Dense CLIP

03/08/2023
by Junbo Zhang, et al.

Training a 3D scene understanding model requires complex human annotations, which are laborious to collect and yield a model that encodes only closed-set object semantics. In contrast, vision-language pre-training models (e.g., CLIP) have shown remarkable open-world reasoning properties. To this end, we propose directly transferring CLIP's feature space to a 3D scene understanding model without any form of supervision. We first modify CLIP's input and forwarding process so that it can extract dense pixel features for 3D scene contents. We then project multi-view image features onto the point cloud and train a 3D scene understanding model with feature distillation. Without any annotations or additional training, our model achieves promising annotation-free semantic segmentation results on open-vocabulary semantics and long-tailed concepts. Moreover, serving as a cross-modal pre-training framework, our method improves data efficiency during fine-tuning. Our model outperforms previous SOTA methods on various zero-shot and data-efficient learning benchmarks. Most importantly, it successfully inherits CLIP's richly structured knowledge, allowing 3D scene understanding models to recognize not only object concepts but also open-world semantics.
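To make the pipeline concrete, below is a minimal PyTorch sketch of the three steps the abstract describes: fusing multi-view dense CLIP features onto the point cloud, distilling them into a 3D network, and querying the distilled features with text embeddings for annotation-free segmentation. All function names, tensor shapes, and the pixel-to-point index map are illustrative assumptions, not the paper's actual implementation; the real design is detailed in the full text.

```python
import torch
import torch.nn.functional as F

def project_multiview_features(pixel_feats, pix2point, num_points):
    """Fuse dense per-pixel CLIP features onto a point cloud by averaging
    over every view in which each 3D point is visible.

    pixel_feats: (V, H*W, D) dense CLIP features for V posed views
    pix2point:   (V, H*W) long tensor giving the index of the 3D point
                 each pixel projects to, or -1 if the pixel hits no point
    """
    dim = pixel_feats.shape[-1]
    point_feats = torch.zeros(num_points, dim)
    counts = torch.zeros(num_points, 1)
    for feats, idx in zip(pixel_feats, pix2point):
        valid = idx >= 0
        vidx = idx[valid]
        point_feats.index_add_(0, vidx, feats[valid])
        counts.index_add_(0, vidx, torch.ones(len(vidx), 1))
    return point_feats / counts.clamp(min=1)  # points seen by no view stay zero

def distillation_loss(student_feats, target_feats):
    """Cosine distillation: pull the 3D network's per-point features toward
    the projected CLIP features. No labels are involved at any stage."""
    return 1.0 - F.cosine_similarity(student_feats, target_feats, dim=-1).mean()

@torch.no_grad()
def zero_shot_segment(point_feats, text_feats):
    """Annotation-free segmentation: assign each point the class whose CLIP
    text embedding is most similar to the point's distilled feature."""
    sim = F.normalize(point_feats, dim=-1) @ F.normalize(text_feats, dim=-1).T
    return sim.argmax(dim=-1)  # (N,) predicted class index per point
```

In this sketch, `text_feats` would come from encoding class-name prompts with CLIP's text encoder, which is what lets the vocabulary be swapped at query time without retraining the 3D model.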



