GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis

07/05/2020
by Katja Schwarz, et al.

While 2D generative adversarial networks have enabled high-resolution image synthesis, they largely lack an understanding of the 3D world and the image formation process. Thus, they do not provide precise control over camera viewpoint or object pose. To address this problem, several recent approaches leverage intermediate voxel-based representations in combination with differentiable rendering. However, existing methods either produce low image resolution or fall short in disentangling camera and scene properties, e.g., the object identity may vary with the viewpoint. In this paper, we propose a generative model for radiance fields, which have recently proven successful for novel view synthesis of a single scene. In contrast to voxel-based representations, radiance fields are not confined to a coarse discretization of the 3D space, yet allow for disentangling camera and scene properties while degrading gracefully in the presence of reconstruction ambiguity. By introducing a multi-scale patch-based discriminator, we demonstrate synthesis of high-resolution images while training our model from unposed 2D images alone. We systematically analyze our approach on several challenging synthetic and real-world datasets. Our experiments reveal that radiance fields are a powerful representation for generative image synthesis, leading to 3D consistent models that render with high fidelity.
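To make the idea concrete, the sketch below shows a toy conditional radiance field in the spirit described by the abstract: an MLP maps a 3D position (with NeRF-style positional encoding), a view direction, and latent shape/appearance codes to a color and a volume density. All names, layer sizes, and the single-hidden-layer architecture are illustrative assumptions, not the paper's actual network; the weights here are random, whereas GRAF trains the generator adversarially against a patch-based discriminator.

```python
import numpy as np


def positional_encoding(x, num_freqs=4):
    # Map each coordinate to sin/cos features at multiple frequencies,
    # a standard trick for radiance-field MLPs (frequency count is illustrative).
    freqs = 2.0 ** np.arange(num_freqs)
    angles = x[..., None] * freqs                       # (..., 3, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)               # (..., 3 * 2 * num_freqs)


class ConditionalRadianceField:
    """Toy generator g(x, d, z_shape, z_app) -> (rgb, sigma).

    Hypothetical minimal sketch: one hidden layer, random weights.
    The density head depends only on position and the shape code; the
    color head additionally sees the appearance code and view direction,
    mirroring the disentanglement the abstract describes."""

    def __init__(self, hidden=32, num_freqs=4, z_dim=8, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = 3 * 2 * num_freqs + z_dim              # encoded position + shape code
        self.num_freqs = num_freqs
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.W_sigma = rng.normal(0.0, 0.1, (hidden, 1))
        self.W_rgb = rng.normal(0.0, 0.1, (hidden + z_dim + 3, 3))

    def __call__(self, x, d, z_shape, z_app):
        n = x.shape[0]
        feat = np.concatenate(
            [positional_encoding(x, self.num_freqs),
             np.broadcast_to(z_shape, (n, z_shape.shape[0]))], axis=-1)
        h = np.tanh(feat @ self.W1)
        sigma = np.maximum(h @ self.W_sigma, 0.0)       # non-negative density
        color_in = np.concatenate(
            [h, np.broadcast_to(z_app, (n, z_app.shape[0])), d], axis=-1)
        rgb = 1.0 / (1.0 + np.exp(-(color_in @ self.W_rgb)))  # colors in [0, 1]
        return rgb, sigma


# Evaluate the field at a few sample points along hypothetical camera rays.
rng = np.random.default_rng(1)
field = ConditionalRadianceField()
points = rng.normal(size=(5, 3))
dirs = rng.normal(size=(5, 3))
z_shape, z_app = rng.normal(size=8), rng.normal(size=8)
rgb, sigma = field(points, dirs, z_shape, z_app)
```

In a full pipeline, such per-point colors and densities would be composited along rays by volume rendering, and only sampled image patches (at multiple scales) would be passed to the discriminator, keeping training tractable at high resolution.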


Related research

- VoxGRAF: Fast 3D-Aware Image Synthesis with Sparse Voxel Grids (06/15/2022)
  State-of-the-art 3D-aware generative models rely on coordinate-based MLP...

- High-Resolution Complex Scene Synthesis with Transformers (05/13/2021)
  The use of coarse-grained layouts for controllable synthesis of complex ...

- UrbanGIRAFFE: Representing Urban Scenes as Compositional Generative Neural Feature Fields (03/24/2023)
  Generating photorealistic images with controllable camera pose and scene...

- 3D Object Dense Reconstruction from a Single Depth View (02/01/2018)
  In this paper, we propose a novel approach, 3D-RecGAN++, which reconstru...

- LaTeRF: Label and Text Driven Object Radiance Fields (07/04/2022)
  Obtaining 3D object representations is important for creating photo-real...

- Towards Unsupervised Learning of Generative Models for 3D Controllable Image Synthesis (12/11/2019)
  In recent years, Generative Adversarial Networks have achieved impressiv...

- GaussiGAN: Controllable Image Synthesis with 3D Gaussians from Unposed Silhouettes (06/24/2021)
  We present an algorithm that learns a coarse 3D representation of object...
