Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction

04/13/2023
by   Hansheng Chen, et al.

3D-aware image synthesis encompasses a variety of tasks, such as scene generation and novel view synthesis from images. Despite numerous task-specific methods, developing a comprehensive model remains challenging. In this paper, we present SSDNeRF, a unified approach that employs an expressive diffusion model to learn a generalizable prior of neural radiance fields (NeRF) from multi-view images of diverse objects. Previous studies have used two-stage approaches that rely on pretrained NeRFs as real data to train diffusion models. In contrast, we propose a new single-stage training paradigm with an end-to-end objective that jointly optimizes a NeRF auto-decoder and a latent diffusion model, enabling simultaneous 3D reconstruction and prior learning, even from sparsely available views. At test time, we can directly sample the diffusion prior for unconditional generation, or combine it with arbitrary observations of unseen objects for NeRF reconstruction. SSDNeRF demonstrates robust results comparable to or better than leading task-specific methods in unconditional generation and single/sparse-view 3D reconstruction.
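The core idea in the abstract — a single objective that jointly optimizes per-object NeRF latents (reconstruction) and a latent diffusion model (prior learning) — can be illustrated with a deliberately tiny numpy sketch. Everything below is an assumption for illustration, not the paper's architecture: a linear map stands in for differentiable volume rendering, another linear map stands in for the diffusion denoiser, and a single noise level replaces the full diffusion schedule.

```python
import numpy as np

# Toy sketch of a single-stage objective in the spirit of SSDNeRF.
# Simplified stand-ins (NOT the paper's components):
#   z[i]      : per-object latent code (the paper uses triplane NeRF latents)
#   W_render  : fixed linear map standing in for volume rendering
#   W_denoise : linear "denoiser" standing in for the diffusion network
# The point is the training structure: one loss updates BOTH the per-object
# latents and the diffusion model parameters in a single stage.
rng = np.random.default_rng(0)
n_obj, d_lat, d_img = 4, 8, 16
z = rng.normal(size=(n_obj, d_lat))          # per-object latents (optimized)
W_render = rng.normal(size=(d_lat, d_img))   # frozen toy renderer
W_denoise = np.zeros((d_lat, d_lat))         # diffusion params (optimized)
images = rng.normal(size=(n_obj, d_img))     # observed views of each object

def render_loss(z):
    # Rendering term: decoded latents should reproduce the observed views.
    return np.mean((z @ W_render - images) ** 2)

lam, lr, sigma = 0.1, 0.05, 0.5
l_rend_before = render_loss(z)
for step in range(300):
    grad_z = 2 * (z @ W_render - images) @ W_render.T / images.size
    # Diffusion term (single noise level, noise-prediction): the denoiser
    # should predict the noise added to the latents. Its gradient flows into
    # both the denoiser weights and the latents -- the "single-stage" part.
    eps = rng.normal(size=z.shape)
    z_noisy = z + sigma * eps
    resid = z_noisy @ W_denoise - eps
    grad_W = 2 * z_noisy.T @ resid / resid.size
    grad_z += lam * 2 * resid @ W_denoise.T / resid.size
    z -= lr * grad_z
    W_denoise -= lr * lam * grad_W
l_rend_after = render_loss(z)
```

In a two-stage pipeline, the latents would first be fit with the rendering loss alone and then frozen as "data" for the diffusion model; here the diffusion gradient regularizes the latents during reconstruction, which is what lets the method cope with sparse views.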


Related research:

- SyncDreamer: Generating Multiview-consistent Images from a Single-view Image (09/07/2023)
- Parallel Vertex Diffusion for Unified Visual Grounding (03/13/2023)
- DiffRF: Rendering-Guided 3D Radiance Field Diffusion (12/02/2022)
- Viewpoint Textual Inversion: Unleashing Novel View Synthesis with Pretrained 2D Diffusion Models (09/14/2023)
- Pix2NeRF: Unsupervised Conditional π-GAN for Single Image to Neural Radiance Fields Translation (02/26/2022)
- Learning a Diffusion Prior for NeRFs (04/27/2023)
- Viewset Diffusion: (0-)Image-Conditioned 3D Generative Models from 2D Data (06/13/2023)
