official implementation of "CLIP2Point: Transfer CLIP to Point Cloud Classification with Image-Depth Pre-training"
Pre-training across 3D vision and language remains under development because of limited training data. Recent works attempt to transfer vision-language pre-training models to 3D vision. PointCLIP converts point cloud data to multi-view depth maps, adopting CLIP for shape classification. However, its performance is restricted by the domain gap between rendered depth maps and images, as well as the diversity of depth distributions. To address this issue, we propose CLIP2Point, an image-depth pre-training method by contrastive learning to transfer CLIP to the 3D domain, and adapt it to point cloud classification. We introduce a new depth rendering setting that forms a better visual effect, and then render 52,460 pairs of images and depth maps from ShapeNet for pre-training. The pre-training scheme of CLIP2Point combines cross-modality learning to enforce the depth features for capturing expressive visual and textual features and intra-modality learning to enhance the invariance of depth aggregation. Additionally, we propose a novel Dual-Path Adapter (DPA) module, i.e., a dual-path structure with simplified adapters for few-shot learning. The dual-path structure allows the joint use of CLIP and CLIP2Point, and the simplified adapter can well fit few-shot tasks without post-search. Experimental results show that CLIP2Point is effective in transferring CLIP knowledge to 3D vision. Our CLIP2Point outperforms PointCLIP and other self-supervised 3D networks, achieving state-of-the-art results on zero-shot and few-shot classification.
Vision-language (V-L) pre-training has achieved great success in computer vision. Benefiting from large-scale data, V-L pre-trained models (Radford et al., 2021; Yao et al., 2021) transfer language knowledge to visual understanding and can be fine-tuned for multiple downstream tasks. However, pre-training across 3D vision and language remains an open question, due to the lack of sufficient training data. For example, Contrastive Language-Image Pre-training (CLIP) (Radford et al., 2021) takes more than 400M image-text pairs as training data. In contrast, few studies have addressed pre-training across 3D vision and language; even the conventional 3D pre-training method PointContrast (Xie et al., 2020) is trained on ScanNet (Dai et al., 2017) with only 100k pairs of point clouds from 1,513 scenes. Due to this limitation of 3D pre-training, most existing 3D deep networks (Qi et al., 2017; Wang et al., 2019) are trained from scratch on specific downstream datasets.
One remedy is to leverage an existing successful V-L pre-trained model for 3D vision tasks. To this end, one may first convert the 3D point clouds to multi-view 2D depth maps (Su et al., 2015; Goyal et al., 2021; Wang et al., 2022). By simply treating 2D depth maps as images, PointCLIP (Zhang et al., 2022) applies CLIP to 3D tasks, providing zero-shot and few-shot settings for point cloud classification with textual prompting. However, its results are still limited, since the rendered depth maps differ greatly from the image domain of the CLIP training dataset. Moreover, the sparsity and disorder of point cloud data produce varying depth distributions across views, further confusing the aggregation in CLIP. Existing pre-training works focus on the domain gap (Afham et al., 2022) or multi-view consistency (Xie et al., 2020) of point clouds, whereas we tackle similar issues based on depth maps. In addition, a solution for adapting the pre-trained knowledge to downstream tasks should be included in the V-L transfer.
In order to transfer CLIP to the 3D domain, we propose CLIP2Point, a pre-training scheme with two learning mechanisms: 1) cross-modality learning for the contrastive alignment of RGB images and depth maps, and 2) intra-modality learning in the depth modality to enhance the invariance of depth aggregation. In particular, the image encoder is initialized directly from CLIP weights and is frozen during pre-training, while the depth encoder is trained to 1) align depth features with CLIP image features in cross-modality learning and 2) encourage the depth aggregation to be invariant to view changes in intra-modality learning. With pre-training, the depth features can then be well aligned with the visual CLIP features. As for the training data, we do not adopt the depth maps in existing RGB-D datasets, as they are densely sampled and contradict the sparsity of rendered depth maps. Instead, we reconstruct multi-view images and depth maps from 3D models directly. Specifically, we render 10 views of RGB images from ShapeNet (Chang et al., 2015), which covers 52,460 3D models across 55 object categories. Meanwhile, we generate corresponding depth maps with a new rendering setting that forms a better visual effect for CLIP encoding. Experiments show that CLIP2Point can significantly improve the performance of zero-shot point cloud classification.
To further adapt CLIP2Point to few-shot learning, we propose a novel Dual-Path Adapter (DPA) module. Since our pre-training aligns instance-level depth maps, it is complementary to CLIP pre-training knowledge, which focuses on category-level discrimination. We propose a dual-path structure in which both our pre-trained depth encoder and the CLIP visual encoder are utilized. A learnable simplified adapter is attached to each encoder to extract a global feature from multiple views, and the final logits are calculated by combining the two encoders.
Our main contributions can be summarized as follows:
We propose a CLIP2Point method by contrastive learning to transfer CLIP knowledge to the 3D domain. For the training data, we pre-process ShapeNet, reconstructing 52,460 pairs of rendered images and depth maps with a better depth rendering setting. Experiments show that CLIP2Point significantly improves the performance of zero-shot point cloud classification.
We propose a novel Dual-Path Adapter (DPA) module, a dual-path structure with a simplified adapter for extending CLIP2Point to few-shot classification.
Extensive experiments are conducted on ModelNet10, ModelNet40, and ScanObjectNN. In comparison to PointCLIP and self-supervised 3D networks, CLIP2Point achieves state-of-the-art results on both zero-shot and few-shot point cloud classification tasks.
Vision-language (V-L) pre-training has attracted growing interest in multi-modal tasks. Pre-trained on large-scale image-text (Chen et al., 2020b) or video-text (Sun et al., 2019) pairs, such models can be applied to multiple downstream tasks, e.g., visual question answering, image/video captioning, and text-to-image generation. CLIP (Radford et al., 2021) further leverages V-L pre-training to transfer cross-modal knowledge, allowing natural language to understand visual concepts. Nonetheless, pre-training across 3D vision and language is restricted by insufficient 3D-text data pairs, and 3D downstream tasks like shape retrieval (Han et al., 2019) and text-guided shape generation (Liu et al., 2022) suffer from limited performance. Given this gap between 3D vision and language, we attempt to transfer CLIP's pre-trained knowledge to the 3D domain, making language applicable to point cloud classification.
Self-supervised pre-training has become an important issue in computer vision. Since task-related annotations are not required, it can leverage large-scale data and pretext tasks to learn general representations. In particular, contrastive learning (He et al., 2020; Chen et al., 2020a) and masked auto-encoding (He et al., 2022; Zhou et al., 2021; Devlin et al., 2018) are two popular self-supervised schemes. Beyond directly applying masked auto-encoding to 3D point completion (Yu et al., 2022; Pang et al., 2022), Li and Heizmann (2022) show that contrastive learning in 3D vision can vary in granularity (point/instance/scene) or modality (point/depth/image). In this work, we adopt image-depth contrastive learning to bridge the domain gap between depth features and visual CLIP features, thereby allowing us to transfer CLIP knowledge to the 3D domain.
Fine-tuning has been widely used in downstream tasks to fit pre-trained weights to specific training datasets (Zhai et al., 2019; Lin et al., 2014; Zhou et al., 2017). One common practice is to update the entire set of parameters during training, but it may overfit when the scale of the training data is limited. Instead, partial tuning (Cai et al., 2020; Zhang et al., 2021) is a data-efficient way to fit downstream data. Recently, prompt tuning has been applied to language (Brown et al., 2020; Li and Liang, 2021) and vision (Dosovitskiy et al., 2020; Jia et al., 2022) models. Prompt tuning provides several learnable token sequences and specific task heads for the adaptation, without fully tuning the pre-trained parameters. Note that pre-trained models in 3D vision are still in early exploration, and existing deep networks for point clouds (Qi et al., 2017; Wang et al., 2019; Mohammadi et al., 2021) all follow a full-tuning paradigm. In contrast, we propose a novel Dual-Path Adapter module for lightweight fine-tuning. With CLIP textual prompts, a few-shot setting is available by tuning the simplified adapters only.
CLIP (Radford et al., 2021) is a vision-language pre-training method that matches images and texts by contrastive learning. It contains two individual encoders, a visual encoder and a language encoder, to respectively extract image features $f_I \in \mathbb{R}^C$ and textual features $f_T \in \mathbb{R}^C$. Here, $C$ is the embedding dimension of the encoders. For zero-shot transfer, the cosine similarity of $f_I$ and $f_T$ implies the matching results. Taking a $K$-category classification task as an example, textual prompts are generated from the category names and then encoded by CLIP, extracting a list of textual features $\{t_k\}_{k=1}^{K}$. For each image feature $f_I$, we can calculate the classification as follows,

$$p_k = \frac{\exp(\cos(f_I, t_k)/\tau)}{\sum_{j=1}^{K} \exp(\cos(f_I, t_j)/\tau)}, \quad (1)$$

where $p_k$ denotes the predicted probability of the $k$-th category and $\tau$ is a temperature coefficient.
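As a concrete illustration, Eq. (1) can be sketched in a few lines of NumPy. The feature dimension and temperature value below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def zero_shot_probs(image_feat, text_feats, temperature=0.01):
    """Predicted category probabilities as in Eq. (1).

    image_feat: (C,) image feature; text_feats: (K, C) textual features,
    one per category. The temperature value is illustrative only.
    """
    # Cosine similarity between the image feature and each text feature.
    img = image_feat / np.linalg.norm(image_feat)
    txt = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    sims = txt @ img                 # (K,) cosine similarities
    # Softmax over categories with a temperature coefficient.
    logits = sims / temperature
    logits -= logits.max()           # numerical stability
    e = np.exp(logits)
    return e / e.sum()
```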
PointCLIP (Zhang et al., 2022) applies CLIP to 3D point cloud data. It renders multi-view depth maps from point clouds, and then extracts the depth map features $\{f_i\}_{i=1}^{M}$ with the CLIP visual encoder, where $M$ is the number of views. Logits of the zero-shot classification can be calculated similarly to Eq. (1), while the multi-view features are gathered with searched weights. PointCLIP also proposes an inter-view adapter for few-shot classification. It adopts a residual form, which concatenates the multi-view features into a global representation $f_g$ and then adds it back to extract adapted features $f_i^a$. The adapter can be formulated as,

$$f_g = \phi_1\big(\mathrm{Cat}(f_1, \dots, f_M)\big), \quad (2)$$
$$f_i^a = f_i + T_i\big(\phi_2(f_g)\big), \quad (3)$$
$$\mathrm{logits}_k = \sum_{i=1}^{M} \alpha_i \cos(f_i^a, t_k), \quad (4)$$

where $\mathrm{Cat}$ denotes the concatenation on channel dimensions, $\phi_1$ and $\phi_2$ are two-layer MLPs, and $T_i$ and $\alpha_i$ denote the view transformation and the summation weight of the $i$-th view, respectively. $\phi_1$, $\phi_2$, and $T_i$ are learnable during the few-shot learning, and $\alpha_i$ is post-searched.
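The inter-view adapter described above can be sketched as follows. The shapes, the ReLU nonlinearity, and the per-view matrix transforms are assumptions of this sketch, not details confirmed by the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def inter_view_adapter(view_feats, W1a, W1b, W2a, W2b, T, alpha):
    """PointCLIP-style inter-view adapter (Eqs. (2)-(4)), sketched.

    view_feats: (M, C) multi-view depth features. (W1a, W1b) and
    (W2a, W2b) stand in for the two two-layer MLPs, T is a stack of
    per-view (C, C) transforms, and alpha holds the view weights that
    PointCLIP post-searches. All shapes are illustrative assumptions.
    """
    M, C = view_feats.shape
    # Eq. (2): global representation from concatenated view features.
    f_g = relu(view_feats.reshape(-1) @ W1a) @ W1b
    # Eq. (3): residual adaptation, adding the transformed global
    # feature back to each view feature.
    adapted = view_feats + np.stack(
        [T[i] @ (relu(f_g @ W2a) @ W2b) for i in range(M)])
    # Eq. (4): weighted summation over views (here over features; the
    # actual logits would dot this against the textual features).
    return (alpha[:, None] * adapted).sum(axis=0)
```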
However, depth maps are representations of geometry information, which lack natural texture information. Therefore, it is inappropriate to directly apply CLIP visual encoder for the extraction of depth features, leaving some leeway for boosting zero-shot point cloud classification.
Instead of directly applying the CLIP visual encoder to depth maps, we propose learning a depth encoder that aligns depth features with CLIP visual features. In other words, we expect the extracted features of a rendered depth map to be consistent with the CLIP visual features of the corresponding image. Then, CLIP textual prompts can be directly adopted to match the depth features. Moreover, since depth maps are presented in multiple views, the consistency of the depth distribution needs to be maintained as well.
Contrastive learning is a self-supervised pre-training method that aligns features of each sample with its positive samples, and satisfies our expectations of minimizing the distance between image and depth features, as well as enhancing the consistency of multi-view depth features. We reconstruct a pre-training dataset from ShapeNet, which consists of pairs of rendered RGB images and corresponding depth maps. To generate depth maps in a better visual effect for CLIP encoding, a new depth rendering setting is adopted. We propose a self-supervised pre-training scheme with intra-modality and cross-modality contrastive learning. The pre-trained depth encoder can well adapt to CLIP prompts. In the following, we explain these modules in more detail.
To convert point cloud data into rendered depth maps, we need to project 3D coordinates to 2D coordinates in a specific view. Here we take rendering from the front view as an example to illustrate the projection. Specifically, a point at $(x, y, z)$ can be matched to the pixel at $(u, v)$ by perspective projection. However, two issues remain: 1) multiple points can be projected to the same pixel in a specific plane; 2) a large area of the rendered depth map remains blank, since no points lie in the background. For the first issue, existing works (Goyal et al., 2021; Zhang et al., 2022) prefer a weighted summation over the overlapping points,

$$d(u, v) = \sum_{p \in S(u, v)} w_p z_p, \quad (5)$$

where $S(u, v)$ is the set of points matching pixel $(u, v)$, $z_p$ is the depth of point $p$, and $w_p$ is a normalization weight. We argue that the minimum depth value of those points is more intuitive in 2D vision, as we cannot see through an object with the naked eye. For the second issue, few pixels are covered due to the sparsity of point clouds. We extend each point to its neighborhood pixels, so that the visual continuity of the depth values is refined. We set the dilation rate $r$ to 2, thus obtaining the final rendered value as follows:

$$d(u, v) = \min\big\{\, z_p \mid p \in \mathcal{P},\ \max(|u_p - u|, |v_p - v|) < r \,\big\}, \quad (6)$$

where $\min$ denotes the minimum value of the input set and $\mathcal{P}$ denotes the point cloud. We visualize the rendering process at the bottom of Fig. 1, where we take the value of the red point in the airplane as the depth in Eq. (6), while previous works additionally consider all the blue points.
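The minimum-depth rendering with dilation can be sketched as follows. The square-neighborhood reading of the dilation rate, the pre-projected input coordinates, and the zero background value are assumptions of this sketch; the paper itself uses perspective projection.

```python
import numpy as np

def render_min_depth(points, size=224, dilation=2):
    """Minimum-depth rendering sketch in the spirit of Eq. (6).

    points: (N, 3) points already projected to pixel coordinates
    (u, v, z), with u, v in [0, size). The direct use of projected
    coordinates is a simplification of the perspective projection.
    """
    depth = np.full((size, size), np.inf)
    for u, v, z in points:
        u, v = int(u), int(v)
        # Dilate each point to its neighborhood pixels and keep the
        # minimum depth, so nearer points occlude farther ones.
        for du in range(-dilation + 1, dilation):
            for dv in range(-dilation + 1, dilation):
                uu, vv = u + du, v + dv
                if 0 <= uu < size and 0 <= vv < size:
                    depth[uu, vv] = min(depth[uu, vv], z)
    # Background pixels are set to 0 here (an arbitrary choice for
    # this sketch); in the paper they simply stay blank.
    depth[np.isinf(depth)] = 0.0
    return depth
```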
As shown in Fig. 1, our pre-training network includes a depth encoder $E_D$ and an image encoder $E_I$. Given the input dataset $\{(I_i, D_i)\}$, where $I_i$ is the $i$-th rendered image in a random camera view, we render the corresponding depth maps $D_i^1$ and $D_i^2$ in the same view angle but at different distances $d^1$ and $d^2$. We first adopt an intra-modality aggregation between $D_i^1$ and $D_i^2$ with $E_D$, and then extract image features from $I_i$ with $E_I$, enforcing the aggregated depth features to stay consistent with the image features in a cross-modality aspect. $E_D$ and $E_I$ are both initialized with the weights of the visual encoder in CLIP. We freeze the parameters of $E_I$ during training, while $E_D$ is learnable.
Intra-modality Learning. Considering the sparsity and disorder of point clouds in 3D space, the distribution of depth values varies greatly across views, even when we render depth maps at the same distance. To keep the aggregation in $E_D$ invariant to distance, intra-modality contrastive learning is adopted. For each input depth map $D_i$, we randomly modify the distance of the camera view while keeping the view angle, generating two augmented depth maps $D_i^1$ and $D_i^2$. $D_i^1$ and $D_i^2$ are then fed into $E_D$, extracting depth features $z_i^1$ and $z_i^2$. Following the NT-Xent loss in SimCLR (Chen et al., 2020a), the intra-modality contrastive loss can be formulated as,

$$\ell(u, v) = -\log\frac{\exp(\mathrm{sim}(u, v)/\tau)}{\sum_{w \neq u}\exp(\mathrm{sim}(u, w)/\tau)}, \qquad \mathcal{L}_{\mathrm{intra}} = \frac{1}{2B}\sum_{i=1}^{B}\big[\ell(z_i^1, z_i^2) + \ell(z_i^2, z_i^1)\big],$$

where $B$ denotes the batch size, the denominator sums over all other features in the batch, $\mathrm{sim}(\cdot,\cdot)$ denotes the cosine similarity, and $\tau$ denotes the temperature coefficient. The final depth feature $z_i$ is the mean of $z_i^1$ and $z_i^2$.

Cross-modality Learning. For a set of rendered RGB-D data, cross-modality contrastive learning aims to minimize the distance between rendered images and depth maps in the same pair, while maximizing the distance of the others. For each input image $I_i$, we extract the image features $v_i = E_I(I_i)$, which are exactly the CLIP visual features. Together with the depth features $z_i$, we have the cross-modality contrastive loss as follows,

$$\mathcal{L}_{\mathrm{cross}} = \frac{1}{2B}\sum_{i=1}^{B}\big[\ell(v_i, z_i) + \ell(z_i, v_i)\big].$$
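Both contrastive terms follow the same NT-Xent shape, which can be sketched as below. The temperature value is illustrative, and the batched in-batch-negatives formulation is the standard SimCLR one rather than a detail taken from the paper.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.1):
    """NT-Xent contrastive loss, as used for both the intra- and
    cross-modality terms. z1, z2: (B, C) batches of paired features
    (augmented depth pairs, or image/depth pairs). tau is illustrative.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2])           # (2B, C)
    sim = z @ z.T / tau                    # all pairwise similarities
    np.fill_diagonal(sim, -np.inf)         # exclude self-comparisons
    B = len(z1)
    # Positive of sample i is its pair at index i + B (and vice versa).
    pos = np.concatenate([np.arange(B, 2 * B), np.arange(B)])
    # Negative log-probability assigned to each positive pair.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float((logsumexp - sim[np.arange(2 * B), pos]).mean())
```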
With newly rendered depth maps and a pre-trained depth encoder, we can obtain better zero-shot classification performance with a pipeline similar to PointCLIP. Since pre-training has narrowed the gap between depth maps and images, the depth features share a similar distribution with the image features, and we can simply use "image of a [class name]" as the textual prompt. After extracting the multi-view depth features $\{f_i\}_{i=1}^{M}$, we calculate the average logits over all views as follows,

$$\mathrm{logits}_k = \frac{1}{M}\sum_{i=1}^{M} \cos(f_i, t_k).$$
Note that PointCLIP exploits post-search to find a set of view weights that achieves the highest accuracy. We argue that post-search is a time-consuming procedure, ill-suited to zero-/few-shot tasks that require efficiency. Hence, we avoid post-search during training and evaluation, replacing it with the mean of the multi-view logits.
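The mean-over-views aggregation that replaces post-search amounts to a few lines; cosine-normalized features are assumed here.

```python
import numpy as np

def averaged_zero_shot_logits(depth_feats, text_feats):
    """Zero-shot logits averaged over views, replacing post-search.

    depth_feats: (M, C) multi-view depth features; text_feats: (K, C)
    textual features. Returns (K,) mean cosine-similarity logits.
    """
    d = depth_feats / np.linalg.norm(depth_feats, axis=1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    per_view = d @ t.T            # (M, K) logits, one row per view
    return per_view.mean(axis=0)  # uniform mean instead of searched weights
```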
Although zero-shot learning is an efficient transfer pipeline for downstream tasks, lightweight few-shot learning is also very useful for further refining prediction accuracy. For example, PointCLIP improves classification accuracy from 23.78% to 87.20% with 16-shot learning. However, the few-shot pipeline in PointCLIP still depends on post-search. To avoid it, we simplify its adapter and propose a Dual-Path Adapter (DPA) module for few-shot learning. DPA combines the pre-trained knowledge of CLIP and of our pre-training, thereby enhancing the adaptation ability of CLIP2Point.
The inter-view adapter in PointCLIP adopts a residual structure, which maintains the dimension of the input multi-view depth features. However, the per-view expansion in Eq. (3) requires extra weights, which can easily overfit in few-shot learning. Besides, the summation weights remain a problem, since the features exist in multiple views: simply averaging the logits of the multi-view depth features (Eq. (12)) is not competitive with the post-search that we have dismissed (Eq. (4)). Instead, we keep only the global feature $f_g$ of the multiple views, which can be directly used to estimate a global logits vector. With this simplification, we reduce the learnable parameters and avoid post-search. The simplified adapter can be formulated as,

$$f_g = \phi\big(\mathrm{Cat}(f_1, \dots, f_M)\big), \qquad \mathrm{logits}_k = \cos(f_g, t_k).$$
CLIP2Point achieves a significant improvement on zero-shot point cloud classification, as our pre-training narrows the domain gap between depth maps and images. In few-shot learning, lightweight adapters also help transfer domains in a more direct way, focusing on minimizing category-level distances; that is why PointCLIP enjoys a promising accuracy in few-shot classification. However, the domain transfer in our pre-training is based on instance-level discrimination, extracting and comparing global features. Thus, our pre-trained depth encoder and the CLIP visual encoder can be complementary: the depth encoder adjusts to an appropriate feature domain, while the visual encoder pays more attention to category selection. We design a dual-path structure with these two encoders, attaching an independent adapter to each path. Finally, DPA can be formulated as,

$$\mathrm{logits}_k = \cos\big(\phi^{C}(\mathrm{Cat}(f_1^{C}, \dots, f_M^{C})),\, t_k\big) + \cos\big(\phi^{P}(\mathrm{Cat}(f_1^{P}, \dots, f_M^{P})),\, t_k\big),$$

where the superscripts $C$ and $P$ of the features/weights relate to CLIP and our pre-training, respectively. We use the cross-entropy (De Boer et al., 2005) loss for supervision.
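The dual-path combination of simplified adapters can be sketched as follows. The ReLU MLP, the shapes, and the equal-weight sum of the two paths' logits are assumptions of this sketch.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def simplified_adapter(view_feats, Wa, Wb):
    """Simplified adapter: concatenate M view features and map them to
    one global feature, with no residual expansion and no post-searched
    view weights. Shapes and the ReLU MLP are illustrative."""
    return relu(view_feats.reshape(-1) @ Wa) @ Wb

def dpa_logits(feats_clip, feats_pre, text_feats, params_clip, params_pre):
    """Dual-Path Adapter sketch: one adapter on the CLIP visual
    encoder's view features, one on the pre-trained depth encoder's,
    with the two paths' cosine logits summed."""
    g_c = simplified_adapter(feats_clip, *params_clip)
    g_p = simplified_adapter(feats_pre, *params_pre)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    g_c = g_c / np.linalg.norm(g_c)
    g_p = g_p / np.linalg.norm(g_p)
    return g_c @ t.T + g_p @ t.T          # (K,) combined logits
```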
Pre-training Datasets. Numerous RGB-D datasets are available, but the depth images in those datasets cannot replace rendered depth maps, as they are densely annotated. To align images with sparsely marked depth maps, we have to convert 3D point clouds to depth maps directly. ShapeNet (Chang et al., 2015) is a large-scale dataset of 3D shapes, with 52,460 3D models in 55 categories. Previous works (Xu et al., 2019; Choy et al., 2016) render a subset of ShapeNet in limited views. Instead, we render RGB images in 10 views, with shape and texture information, from the complete set of ShapeNet. The implementation follows MVTN (Hamdi et al., 2021) on PyTorch3D (Lassner and Zollhöfer, 2020). Meanwhile, we sample 1,024 points of the corresponding 3D models by farthest point sampling, and then render those points to depth maps as in Eq. (6). To match the CLIP representation, rendered images and depth maps are sized $224 \times 224$. Following the separation of the classification benchmark on ShapeNet, we have 41,943 pairs for training and 10,517 pairs for validation. For each training sample in the batch, we randomly choose one of the ten views. To evaluate the rendering quality, we conduct zero-shot classification experiments: the accuracies of RGB images and depth maps on our validation set are 54.21% and 19.98%, respectively.
Downstream Datasets. Following PointCLIP, we evaluate zero-shot classification on ModelNet10 (Wu et al., 2015), ModelNet40 (Wu et al., 2015), and ScanObjectNN (Uy et al., 2019), and 16-shot classification on ModelNet40. ModelNet is a synthetic indoor 3D dataset, and ModelNet10 and ModelNet40 are both subsets of it built for classification. ModelNet10 contains 4,899 orientation-aligned CAD models from 10 categories, with 3,991 for training and 908 for testing, while ModelNet40 contains 12,311 CAD models from 40 categories, with 9,843 for training and 2,468 for testing. Since the original ModelNet40 is not aligned in orientation, we use the aligned version (Sedaghat et al., 2016). ScanObjectNN is a real-world dataset containing 2,902 point cloud samples from 15 categories. Different from the clean CAD models in ModelNet, objects in ScanObjectNN are partially presented and attached to backgrounds, which makes it much harder than ModelNet. For all three datasets, we sample 1,024 points of each model as the input point cloud.
We implement our framework on PyTorch (Paszke et al., 2019) and use the base version of the Vision Transformer (Dosovitskiy et al., 2020) with a patch size of 32 (namely ViT-B/32) as our visual encoders. In pre-training, we use the LAMB (You et al., 2019) optimizer with weight decay. Pre-training takes 100 epochs with a batch size of 256, and we choose the checkpoint with the highest accuracy on our evaluation set as the final weights for downstream tasks. In few-shot learning, we use the AdamW (Loshchilov and Hutter, 2017) optimizer with weight decay, and the training batch size is 32. Following PointCLIP, we use 6 orthogonal views (front, back, left, right, top, and bottom) for zero-shot classification, and add four corner views for pre-training and few-shot learning. The view distance is initialized as 1, and is randomly perturbed within a fixed range during pre-training.
To the best of our knowledge, PointCLIP is the only prior attempt to conduct zero-shot classification on a whole 3D dataset. Previous works (Cheraghian et al., 2019, 2022) divide 3D datasets into "seen" and "unseen" categories; models are trained on the former and evaluated on the latter, which is easier than the zero-shot setting in PointCLIP. To evaluate the effectiveness of our depth rendering setting and pre-training transfer, we compare with PointCLIP on ModelNet10, ModelNet40, and ScanObjectNN.
As shown in Tab. 1, even without pre-training, our method still outperforms PointCLIP simply by using the newly rendered depth maps; on ModelNet40 alone we gain almost 10% in accuracy. After pre-training, the accuracy improves significantly on ModelNet10 and ModelNet40, by 36.12% and 19.67%, and a 5.14% gain is also attained on ScanObjectNN. The improvement on ScanObjectNN is relatively small, we believe, because our pre-training dataset is generated from ShapeNet, a clean synthetic dataset like ModelNet.
Method                   ModelNet10   ModelNet40   ScanObjectNN
Ours w/o pre-training    30.51        29.71        18.18
Ours w/ pre-training     66.63        49.38        23.32
To further evaluate the transfer ability of our pre-training and to verify our few-shot pipeline, we compare with PointCLIP, as well as two self-supervised pre-training methods, CrossPoint (Afham et al., 2022) and Point-MAE (Pang et al., 2022), on 16-shot classification. We choose a DGCNN (Wang et al., 2019) backbone for CrossPoint and a 12-layer Transformer for Point-MAE. Although we only adopt ViT-B/32 as our encoder, PointCLIP with a ResNet-101 backbone is also included in the experiments.
We present the quantitative results of our few-shot experiments in Tab. 2. Initialized with CLIP weights, our few-shot pipeline without pre-training already outperforms the other methods, thanks to the simplified adapter and the dual-path structure. Our pre-trained version reaches an accuracy of 89.21%, which is close to traditional supervised networks such as PointNet++ (Qi et al., 2017).
Depth Rendering. To analyze the depth rendering setting, we evaluate several settings for zero-shot classification in Tab. 3. "Weighted" and "Minimum" represent the depth values described in Eq. (5) and Eq. (6), respectively. "Dilation Rate" is $r$ in Eq. (6), for which 1, 2, and 4 are selected in the ablation studies. The range definition of $S(u, v)$ in Eq. (5) is the same as in Eq. (6) when a dilation rate is added.
As shown in Tab. 3, using the minimum depth value yields much higher accuracy in CLIP zero-shot classification; we believe this is because the resulting visual effect is closer to CLIP's pre-training images. Meanwhile, larger is not always better for the dilation rate: a too-large dilation rate blurs the depth maps, especially near the corners of objects. According to the zero-shot classification results, we finally choose "Minimum" with a dilation rate of 2 as our depth rendering setting. The visualization in Fig. 3 further demonstrates that our setting has the best visual effect.
Intra-modality Learning. To evaluate the effectiveness of our intra-modality learning, we conduct a pre-training experiment with cross-modality learning only. Disregarding random view distances, we simply extract the features of the original depth maps as the depth features, so the final loss reduces to the cross-modality term alone (Eq. (10)). We keep the same pre-training setting, yet the zero-shot classification accuracy of this variant is only 38.29%, i.e., 11.09% lower than the version with intra-modality learning. Our intra-modality contrastive learning allows the depth encoder to keep depth invariance across different camera views; without randomized distances and the corresponding contrastive restriction, the encoder may easily fail when depth values vary greatly across views.
Dual-Path Adapter. To evaluate the design of our Dual-Path Adapter module, we compare our simplified adapter, as well as the dual-path structure, with PointCLIP. For the experiments with the original adapter, we calculate the average logits, similarly to Eq. (12).
As shown in Tab. 4, the simplified adapter surpasses the original one in PointCLIP. As explained above, the extra weights in the original adapter make few-shot training much more prone to overfitting, and the gathering of multi-view features is another problem for the original adapter, since post-search is avoided in our evaluation. Additionally, the dual-path structure improves performance as well, especially with our simplified adapter. The results demonstrate that our pre-trained encoder and the CLIP visual encoder are complementary. The improvement with the original adapter is relatively small; we suspect that multi-view gathering within a single encoder may disturb the combination of the two encoders.
In this paper, we propose CLIP2Point, which pre-trains a depth encoder to adapt CLIP knowledge to the 3D domain. We introduce a depth-image pre-training method, consisting of both intra-modality and cross-modality contrastive learning, to bridge the domain gap between the depth features from the depth encoder and the image features from the CLIP visual encoder, and to maintain the invariance of the multi-view depth distribution. For the pre-training data, we render 52,460 images from 3D models in ShapeNet, and meanwhile generate corresponding depth maps with a new depth rendering setting. After pre-training, the performance of zero-shot point cloud classification is significantly improved. To further adapt our pre-trained weights to 3D downstream tasks, we propose the Dual-Path Adapter for few-shot classification. With a simplified adapter and a dual-path structure, we achieve state-of-the-art results in comparison with PointCLIP and other pre-trained 3D networks.
Although CLIP2Point successfully transfers CLIP knowledge to 3D vision, we observe that the training data greatly influence the quality of our pre-training and the performance of downstream tasks; e.g., synthetic pre-training data yields limited improvement on real-world downstream datasets. In future work, we will improve the rendered data and explore more real-world 3D tasks with CLIP2Point.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9902–9912.
International Conference on Machine Learning, pp. 1597–1607.
MVTN: multi-view transformation network for 3D shape recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1–11.
Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 126–133.
Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16000–16009.
Microsoft COCO: common objects in context. In European Conference on Computer Vision, pp. 740–755.
Masked autoencoders for point cloud self-supervised learning. arXiv preprint arXiv:2203.06604.
PyTorch: an imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32, pp. 8026–8037.
Multi-view convolutional neural networks for 3D shape recognition. In Proceedings of the IEEE International Conference on Computer Vision, pp. 945–953.
Scene parsing through ADE20K dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 633–641.
CLIP2Point is a transfer learning paradigm in 3D vision that achieves state-of-the-art results on zero-shot and few-shot downstream tasks. It is also applicable to supervised tasks with the same pipeline as in few-shot learning. We conduct a supervised classification experiment on ModelNet40, comparing with 4 related state-of-the-art networks: MVCNN (Su et al., 2015), SimpleView (Goyal et al., 2021), MVTN (Hamdi et al., 2021), and P2P (Wang et al., 2022). Similar to CLIP2Point, these networks convert point cloud data to 2D depth maps, leveraging 2D pre-trained backbones to extract the corresponding shape features.
As shown in Tab. 5, CLIP2Point matches the accuracy of P2P (HorNet-L), but with much lower input requirements and evaluation computation costs. Since P2P infers a single view at a time, its evaluation cost must be multiplied by its number of views (40). Additionally, our training only fine-tunes the learnable adapters, which is more efficient than those full-tuning methods.
Methods              Data Type (points, views)   Acc. (%)   Eval. MACs (G)   Tr. Param. (M)
P2P: ResNet-101      4,096, 40                   93.1       11.96            0.25
P2P: ConvNeXt-L      4,096, 40                   93.2       38.51            0.14
P2P: HorNet-L        4,096, 40                   94.0       38.72            1.01
CLIP2Point (Ours)    1,024, 10                   94.0       88.23            5.78
Following MVTN (Hamdi et al., 2021), we render 3D models to RGB images with PyTorch3D (Lassner and Zollhöfer, 2020). We first load mesh objects with texture information from ShapeNetCore v2. We choose 10 views in a spherical configuration, and then use the MeshRasterizer and HardPhongShader in PyTorch3D, with both the background and light colors set to white. We visualize the ten views of an airplane in Fig. 4.