MVDream: Multi-view Diffusion for 3D Generation

08/31/2023
by Yichun Shi, et al.

We propose MVDream, a multi-view diffusion model that generates geometrically consistent multi-view images from a given text prompt. By leveraging image diffusion models pre-trained on large-scale web datasets together with a multi-view dataset rendered from 3D assets, the resulting multi-view diffusion model achieves both the generalizability of 2D diffusion and the consistency of 3D data. Such a model can thus be applied as a multi-view prior for 3D generation via Score Distillation Sampling, where it greatly improves the stability of existing 2D-lifting methods by solving the 3D consistency problem. Finally, we show that the multi-view diffusion model can also be fine-tuned in a few-shot setting for personalized 3D generation, i.e., the DreamBooth3D application, where consistency is maintained after learning the subject identity.
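The abstract does not spell out the Score Distillation Sampling objective it builds on; for context, in the SDS literature (introduced in DreamFusion) the gradient applied to the 3D scene parameters θ typically takes the form:

```latex
\nabla_{\theta}\,\mathcal{L}_{\mathrm{SDS}}
= \mathbb{E}_{t,\epsilon}\!\left[
    w(t)\,\bigl(\hat{\epsilon}_{\phi}(x_t;\, y,\, t) - \epsilon\bigr)\,
    \frac{\partial x}{\partial \theta}
  \right]
```

where $x$ is an image rendered from the 3D representation, $x_t$ its noised version at timestep $t$, $y$ the text condition, $\hat{\epsilon}_{\phi}$ the diffusion model's noise prediction, and $w(t)$ a timestep weighting. Per the abstract, MVDream's contribution is to supply $\hat{\epsilon}_{\phi}$ from a multi-view diffusion model, so that the distilled noise predictions across camera views share consistent geometry.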


