VPP: Efficient Conditional 3D Generation via Voxel-Point Progressive Representation

07/28/2023
by Zekun Qi, et al.

Conditional 3D generation is undergoing significant advances, enabling the free creation of 3D content from inputs such as text or 2D images. However, previous approaches have suffered from low inference efficiency, limited generation categories, and restricted downstream applications. In this work, we revisit the impact of different 3D representations on generation quality and efficiency. We propose a progressive generation method through Voxel-Point Progressive Representation (VPP). VPP leverages the structured voxel representation in the proposed Voxel Semantic Generator and the sparsity of the unstructured point representation in the Point Upsampler, enabling efficient generation of multi-category objects. VPP can generate high-quality 8K point clouds within 0.2 seconds. Additionally, the masked generation Transformer allows for various 3D downstream tasks, such as generation, editing, completion, and pre-training. Extensive experiments demonstrate that VPP efficiently generates high-fidelity and diverse 3D shapes across different categories, while also exhibiting excellent representation transfer performance. Code will be released at https://github.com/qizekun/VPP.
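The coarse-to-fine flow described above can be sketched as a minimal toy pipeline: a conditional voxel stage produces a coarse occupancy grid, and a point stage samples points only inside occupied voxels, exploiting the sparsity of the point representation. All function names and shapes here are illustrative assumptions, not the authors' actual API or models.

```python
# Hypothetical sketch of the two-stage voxel -> point pipeline
# (Voxel Semantic Generator followed by Point Upsampler).
# The networks are replaced with deterministic toy stand-ins.
import numpy as np

def voxel_semantic_generator(condition: np.ndarray, grid: int = 16) -> np.ndarray:
    """Stand-in for the conditional voxel stage: maps a condition
    embedding to a coarse, sparse occupancy grid (toy rule, not a model)."""
    seed = abs(int(condition.sum() * 1e6)) % (2**32)
    rng = np.random.default_rng(seed)
    return (rng.random((grid, grid, grid)) > 0.9).astype(np.float32)

def point_upsampler(voxels: np.ndarray, points_per_voxel: int = 8) -> np.ndarray:
    """Stand-in for the point stage: samples points only inside
    occupied voxels, so work scales with occupancy, not grid volume."""
    occupied = np.argwhere(voxels > 0.5)            # (M, 3) occupied indices
    rng = np.random.default_rng(0)
    offsets = rng.random((len(occupied), points_per_voxel, 3))
    pts = occupied[:, None, :] + offsets            # jitter inside each voxel
    return (pts / voxels.shape[0]).reshape(-1, 3)   # normalize to [0, 1]^3

cond = np.ones(32) * 0.5                            # toy condition embedding
cloud = point_upsampler(voxel_semantic_generator(cond))
print(cloud.shape)                                  # (N, 3) point cloud
```

The design point this illustrates is why the progression is efficient: the dense-but-coarse voxel grid fixes global semantics cheaply, and the expensive fine-grained sampling touches only the sparse occupied region.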

Related research

06/20/2022 · Voxel-MAE: Masked Autoencoders for Pre-training Large-scale Point Clouds
Mask-based pre-training has achieved great success for self-supervised l...

07/01/2022 · Masked Autoencoders for Self-Supervised Learning on Automotive Point Clouds
Masked autoencoding has become a successful pre-training paradigm for Tr...

08/23/2021 · Voxel-based Network for Shape Completion by Leveraging Edge Generation
Deep learning technique has yielded significant improvements in point cl...

04/03/2022 · POS-BERT: Point Cloud One-Stage BERT Pre-Training
Recently, the pre-training paradigm combining Transformer and masked lan...

05/19/2023 · PointGPT: Auto-regressively Generative Pre-training from Point Clouds
Large language models (LLMs) based on the generative pre-training transf...

11/29/2022 · One is All: Bridging the Gap Between Neural Radiance Fields Architectures with Progressive Volume Distillation
Neural Radiance Fields (NeRF) methods have proved effective as compact, ...

03/26/2023 · Learning Versatile 3D Shape Generation with Improved AR Models
Auto-Regressive (AR) models have achieved impressive results in 2D image...
