Few-shot 3D Shape Generation

05/19/2023
by Jingyuan Zhu et al.

Realistic and diverse 3D shape generation benefits a wide variety of applications such as virtual reality, gaming, and animation. Modern generative models, such as GANs and diffusion models, learn from large-scale datasets and generate new samples following similar data distributions. However, when training data is limited, deep neural generative networks overfit and tend to replicate training samples. Prior works focus on few-shot image generation to produce high-quality and diverse results using only a few target images. Unfortunately, abundant 3D shape data is likewise hard to obtain. In this work, we make the first attempt to realize few-shot 3D shape generation by adapting generative models pre-trained on large source domains to target domains using limited data. To relieve overfitting and preserve considerable diversity, we propose to maintain the probability distributions of the pairwise relative distances between adapted samples at both the feature level and the shape level during domain adaptation. Our approach needs only the silhouettes of few-shot target samples as training data to learn target geometry distributions, and generates shapes with diverse topology and textures. Moreover, we introduce several metrics to evaluate the quality and diversity of few-shot 3D shape generation. The effectiveness of our approach is demonstrated qualitatively and quantitatively under a series of few-shot 3D shape adaptation setups.
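The core idea — keeping the probability distributions of pairwise relative distances consistent between the source and adapted models — can be illustrated with a small sketch. The abstract does not give the exact loss formulation, so the following is an assumption-laden illustration: it uses cosine similarity as the pairwise distance, turns each sample's similarities to the other samples in a batch into a softmax distribution, and penalizes the KL divergence between the source-model and adapted-model distributions. The function names (`pairwise_sim`, `distance_consistency_loss`) are hypothetical, not from the paper.

```python
import math

def pairwise_sim(vectors):
    """For each vector, cosine similarities to every OTHER vector in the batch."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb + 1e-8)
    n = len(vectors)
    return [[cos(vectors[i], vectors[j]) for j in range(n) if j != i]
            for i in range(n)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q):
    """KL divergence KL(p || q) with a small epsilon for numerical safety."""
    return sum(pi * math.log((pi + 1e-8) / (qi + 1e-8)) for pi, qi in zip(p, q))

def distance_consistency_loss(source_feats, adapted_feats):
    """Mismatch between pairwise-similarity distributions of the two models.

    Each sample's similarities to the rest of the batch are normalized into a
    probability distribution; the loss is the mean KL divergence between the
    source-model and adapted-model distributions. Zero when the relative
    geometry of the batch is perfectly preserved.
    """
    loss = 0.0
    for s_row, a_row in zip(pairwise_sim(source_feats),
                            pairwise_sim(adapted_feats)):
        loss += kl(softmax(s_row), softmax(a_row))
    return loss / len(source_feats)
```

In the paper's setting this kind of consistency term would be applied both to intermediate features and to the generated shapes themselves ("feature level and shape level"); the sketch above shows only the generic mechanism on one set of vectors.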


