PODIA-3D: Domain Adaptation of 3D Generative Model Across Large Domain Gap Using Pose-Preserved Text-to-Image Diffusion

04/04/2023
by Gwanghyun Kim, et al.

Recently, significant advancements have been made in 3D generative models; however, training these models across diverse domains is challenging and requires a huge amount of training data and knowledge of the pose distribution. Text-guided domain adaptation methods allow a generator to be adapted to target domains using text prompts, obviating the need to assemble large datasets. Recently, DATID-3D has demonstrated impressive sample quality in text-guided domains, preserving the diversity expressed in text by leveraging text-to-image diffusion. However, adapting 3D generators to domains with a significant gap from the source domain remains challenging, owing to the following issues in current text-to-image diffusion models: 1) the shape-pose trade-off in diffusion-based translation, 2) pose bias, and 3) instance bias in the target domain, which result in inferior 3D shapes, low text-image correspondence, and low intra-domain diversity in the generated samples. To address these issues, we propose a novel pipeline called PODIA-3D, which uses pose-preserved text-to-image diffusion-based domain adaptation for 3D generative models. We construct a pose-preserved text-to-image diffusion model that allows the use of extremely high-level noise for significant domain changes. We also propose specialized-to-general sampling strategies to improve the details of the generated samples. Moreover, to overcome instance bias, we introduce a text-guided debiasing method that improves intra-domain diversity. Consequently, our method successfully adapts 3D generators across significant domain gaps. Our qualitative results and a user study demonstrate that our approach outperforms existing text-guided 3D domain adaptation methods in terms of text-image correspondence, realism, diversity of rendered images, and sense of depth of the 3D shapes in the generated samples.
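To make the pipeline concrete, the sketch below illustrates one plausible reading of the core translation step: a source render is forward-diffused to an extremely high noise level (enabling large domain changes), then denoised first by a pose-conditioned "specialized" model to fix coarse structure and finally by a general text-to-image model to refine details, i.e., specialized-to-general sampling. This is a minimal sketch, not the authors' implementation; the callables `eps_specialized` and `eps_general`, the DDIM-style update, and the thresholds `t_start` and `t_switch` are all illustrative placeholders.

```python
# Hypothetical sketch of pose-preserved, specialized-to-general translation.
# All model interfaces and hyperparameters are assumptions for illustration.
import torch

def pose_preserved_translate(
    x_src,              # source render from the 3D generator, (B, C, H, W)
    pose_map,           # pose signal (e.g., a depth map) from the same camera
    eps_specialized,    # pose-conditioned denoiser: (x_t, t, pose) -> noise
    eps_general,        # general text-to-image denoiser: (x_t, t) -> noise
    alphas_cumprod,     # diffusion schedule, shape (T,)
    t_start=0.98,       # extremely high noise level for large domain shifts
    t_switch=0.30,      # below this, hand off to the general model for details
):
    T = alphas_cumprod.shape[0]
    t0 = int(t_start * (T - 1))

    # Forward-diffuse the source image to a very noisy state (SDEdit-style).
    a0 = alphas_cumprod[t0]
    x_t = a0.sqrt() * x_src + (1 - a0).sqrt() * torch.randn_like(x_src)

    # Deterministic DDIM-style reverse steps.
    for t in range(t0, 0, -1):
        a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t - 1]
        if t > int(t_switch * (T - 1)):
            # Early (coarse) steps: pose-conditioned model keeps the pose.
            eps = eps_specialized(x_t, t, pose_map)
        else:
            # Late (detail) steps: general model sharpens appearance.
            eps = eps_general(x_t, t)
        x0_pred = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        x_t = a_prev.sqrt() * x0_pred + (1 - a_prev).sqrt() * eps
    return x_t
```

In a full pipeline of this kind, the translated image and its known pose would then serve as a training pair for fine-tuning the 3D generator, as in DATID-3D; the sketch above covers only the data-translation stage.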


Related research

11/29/2022 · DATID-3D: Diversity-Preserved Domain Adaptation Using Text-to-Image Diffusion for 3D Generative Model
12/08/2022 · Diffusion Guided Domain Adaptation of Image Generators
08/02/2021 · StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators
08/01/2023 · Domain Adaptation based on Human Feedback for Enhancing Generative Model Denoising Abilities
05/19/2023 · Few-shot 3D Shape Generation
07/23/2023 · Improving Out-of-Distribution Robustness of Classifiers via Generative Interpolation
08/21/2018 · Text-to-image Synthesis via Symmetrical Distillation Networks
