DiffRF: Rendering-Guided 3D Radiance Field Diffusion

12/02/2022
by Norman Müller, et al.

We introduce DiffRF, a novel approach for 3D radiance field synthesis based on denoising diffusion probabilistic models. While existing diffusion-based methods operate on images, latent codes, or point cloud data, we are the first to directly generate volumetric radiance fields. To this end, we propose a 3D denoising model that operates directly on an explicit voxel grid representation. However, because radiance fields fitted to a set of posed images can be ambiguous and contain artifacts, obtaining ground-truth radiance field samples is non-trivial. We address this challenge by pairing the denoising formulation with a rendering loss, enabling our model to learn a deviated prior that favours good image quality rather than replicating fitting errors such as floating artifacts. In contrast to 2D diffusion models, our model learns multi-view consistent priors, enabling free-view synthesis and accurate shape generation. Compared to 3D GANs, our diffusion-based approach naturally supports conditional generation, such as masked completion or single-view 3D synthesis, at inference time.
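The core idea above — a standard DDPM denoising objective on a voxel grid, augmented with a rendering loss on the estimated clean field — can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the `model`, the simple axis-aligned emission-absorption renderer, and the loss weight `lam` are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def volume_render(density, color, step=1.0):
    """Toy emission-absorption compositing along the z axis (axis 0).

    density: (D, H, W) non-negative densities; color: (D, H, W, 3) RGB.
    Returns a (H, W, 3) image. A real renderer would march camera rays.
    """
    alpha = 1.0 - np.exp(-np.maximum(density, 0.0) * step)
    trans = np.cumprod(
        np.concatenate([np.ones_like(alpha[:1]), 1.0 - alpha[:-1]], axis=0),
        axis=0)
    weights = alpha * trans
    return (weights[..., None] * color).sum(axis=0)

def diffusion_step_loss(f0, model, t, alpha_bar, render_gt, lam=0.1):
    """One training-step loss on a radiance-field grid f0 of shape (D, H, W, 4).

    Channel 0 is density, channels 1:4 are RGB. `model(f_t, t)` predicts the
    noise eps (placeholder signature). The combined objective is the usual
    DDPM noise-matching term plus a rendering loss on the x0 estimate.
    """
    eps = rng.standard_normal(f0.shape)
    a = alpha_bar[t]
    f_t = np.sqrt(a) * f0 + np.sqrt(1.0 - a) * eps          # forward diffusion
    eps_hat = model(f_t, t)                                 # predicted noise
    f0_hat = (f_t - np.sqrt(1.0 - a) * eps_hat) / np.sqrt(a)  # x0 estimate
    img = volume_render(f0_hat[..., 0], f0_hat[..., 1:])
    l_denoise = np.mean((eps - eps_hat) ** 2)
    l_render = np.mean((img - render_gt) ** 2)
    return l_denoise + lam * l_render

# Toy usage with a zero "model" standing in for the 3D denoising network.
D = H = W = 4
f0 = np.abs(rng.standard_normal((D, H, W, 4)))
alpha_bar = np.linspace(0.999, 0.01, 10)
gt = volume_render(f0[..., 0], f0[..., 1:])
loss = diffusion_step_loss(f0, lambda x, t: np.zeros_like(x),
                           t=3, alpha_bar=alpha_bar, render_gt=gt)
```

The rendering term is what lets the learned prior deviate from imperfect fitted grids: gradients flow through the renderer into the x0 estimate, penalising artifacts that hurt image quality even when they match the (noisy) ground-truth field.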


Related research

04/08/2021 · 3D Shape Generation and Completion through Point-Voxel Diffusion
We propose a novel approach for probabilistic generative modeling of 3D ...

04/13/2023 · Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction
3D-aware image synthesis encompasses a variety of tasks, such as scene g...

10/24/2022 · Learning Neural Radiance Fields from Multi-View Geometry
We present a framework, called MVG-NeRF, that combines classical Multi-V...

04/13/2023 · Learning Controllable 3D Diffusion Models from Single-view Images
Diffusion models have recently become the de-facto approach for generati...

12/01/2022 · SparseFusion: Distilling View-conditioned Diffusion for 3D Reconstruction
We propose SparseFusion, a sparse view 3D reconstruction approach that u...

06/10/2022 · Image Generation with Multimodal Priors using Denoising Diffusion Probabilistic Models
Image synthesis under multi-modal priors is a useful and challenging tas...

03/21/2023 · Vox-E: Text-guided Voxel Editing of 3D Objects
Large scale text-guided diffusion models have garnered significant atten...
