Primitive3D: 3D Object Dataset Synthesis from Randomly Assembled Primitives

05/25/2022
by   Xinke Li, et al.

Numerous advancements in deep learning can be attributed to access to large-scale, well-annotated datasets. However, such datasets are prohibitively expensive to build in 3D computer vision due to the substantial cost of data collection. To alleviate this issue, we propose a cost-effective method for automatically generating a large number of annotated 3D objects. In particular, we synthesize objects simply by assembling multiple random primitives. These objects are thus auto-annotated with part labels originating from the primitives. This allows us to perform multi-task learning by combining supervised segmentation with unsupervised reconstruction. Considering the large overhead of learning on the generated dataset, we further propose a dataset distillation strategy to remove samples that are redundant with respect to a target dataset. We conduct extensive experiments on the downstream task of 3D object classification. The results indicate that our dataset, together with multi-task pretraining on its annotations, achieves the best performance compared with other commonly used datasets. Further study suggests that our strategy can improve model performance under a pretraining and fine-tuning scheme, especially for small-scale target datasets. In addition, pretraining with the proposed dataset distillation method saves 86% of the pretraining time with negligible performance degradation. We expect that our attempt provides a new data-centric perspective for training 3D deep models.
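
To make the synthesis idea concrete, here is a minimal, hypothetical sketch of assembling random primitives into an auto-annotated point cloud: a few primitives (spheres and boxes in this illustration) are sampled, placed at random poses, and their surface points inherit the index of the source primitive as a part label. The primitive set, pose distribution, and point budget are illustrative assumptions, not the paper's exact generation pipeline.

```python
import numpy as np

def sample_sphere(n, radius):
    # Uniform points on a sphere surface of the given radius.
    v = np.random.normal(size=(n, 3))
    return radius * v / np.linalg.norm(v, axis=1, keepdims=True)

def sample_box(n, half_extents):
    # Random points on the surface of an axis-aligned box: clamp one
    # coordinate of each point to a face of the unit cube, then scale.
    pts = np.random.uniform(-1.0, 1.0, size=(n, 3))
    face_axis = np.random.randint(0, 3, size=n)
    face_sign = np.random.choice([-1.0, 1.0], size=n)
    pts[np.arange(n), face_axis] = face_sign
    return pts * half_extents

def random_rotation():
    # Random rotation from the QR decomposition of a Gaussian matrix,
    # with the sign fixed so the result is a proper rotation.
    q, _ = np.linalg.qr(np.random.normal(size=(3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]
    return q

def synthesize_object(num_primitives=4, points_per_primitive=512):
    points, labels = [], []
    for part_id in range(num_primitives):
        if np.random.rand() < 0.5:
            pts = sample_sphere(points_per_primitive, radius=np.random.uniform(0.2, 0.5))
        else:
            pts = sample_box(points_per_primitive, half_extents=np.random.uniform(0.2, 0.5, size=3))
        # Random pose so the primitives overlap into a single assembled object;
        # every point keeps its source primitive's index as a free part label.
        pts = pts @ random_rotation().T + np.random.uniform(-0.5, 0.5, size=3)
        points.append(pts)
        labels.append(np.full(points_per_primitive, part_id))
    return np.concatenate(points), np.concatenate(labels)  # (N, 3) points, (N,) labels

xyz, part_labels = synthesize_object()
```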
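The multi-task pretraining objective described in the abstract can likewise be sketched as a per-point part-segmentation loss (supervised by the free primitive labels) combined with an unsupervised point-cloud reconstruction loss. The Chamfer formulation and the weighting factor `alpha` below are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def chamfer_distance(pred, target):
    # Symmetric Chamfer distance between two point sets of shape (B, N, 3) and (B, M, 3).
    d = torch.cdist(pred, target)                     # (B, N, M) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

def multitask_loss(seg_logits, part_labels, recon_points, input_points, alpha=0.5):
    # seg_logits: (B, N, num_parts) per-point predictions; part_labels: (B, N) primitive indices.
    seg_loss = F.cross_entropy(seg_logits.transpose(1, 2), part_labels)
    recon_loss = chamfer_distance(recon_points, input_points)
    return seg_loss + alpha * recon_loss
```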

