A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis

10/29/2021
by Xingang Pan, et al.

The advancement of generative radiance fields has pushed the boundary of 3D-aware image synthesis. Motivated by the observation that a 3D object should look realistic from multiple viewpoints, these methods introduce a multi-view constraint as regularization to learn valid 3D radiance fields from 2D images. Despite the progress, they often fall short of capturing accurate 3D shapes due to the shape-color ambiguity, limiting their applicability in downstream tasks. In this work, we address this ambiguity by proposing a novel shading-guided generative implicit model that is able to learn a starkly improved shape representation. Our key insight is that an accurate 3D shape should also yield a realistic rendering under different lighting conditions. This multi-lighting constraint is realized by modeling illumination explicitly and performing shading with various lighting conditions. Gradients are derived by feeding the synthesized images to a discriminator. To compensate for the additional computational burden of calculating surface normals, we further devise an efficient volume rendering strategy via surface tracking, reducing the training and inference time by 24%. Experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis while capturing accurate underlying 3D shapes. We demonstrate improved performance of our approach on 3D shape reconstruction against existing methods, and show its applicability on image relighting. Our code will be released at https://github.com/XingangPan/ShadeGAN.
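The multi-lighting constraint described above rests on an explicit shading step: the implicit generator produces albedo and density, a surface normal is derived from the density gradient, and each point is shaded under a lighting direction sampled per image before volume rendering and the discriminator. Below is a minimal PyTorch sketch of that shading step, assuming a Lambertian model; the function names, the toy density field, and the ambient/diffuse coefficients k_ambient and k_diffuse are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def normals_from_density(sigma, points):
    # Surface normal as the negated, normalized gradient of the density field.
    grad = torch.autograd.grad(sigma.sum(), points, create_graph=True)[0]
    return -F.normalize(grad, dim=-1)

def lambertian_shading(albedo, normals, light_dir, k_ambient=0.5, k_diffuse=0.5):
    # Shade per-point albedo under a single directional light (Lambertian).
    cos = torch.clamp((normals * light_dir).sum(-1, keepdim=True), min=0.0)
    return albedo * (k_ambient + k_diffuse * cos)

# Toy usage: shade sample points of an implicit field under a random light.
points = torch.rand(1024, 3, requires_grad=True)
sigma = (points ** 2).sum(-1)        # stand-in for the generator's density output
albedo = torch.rand(1024, 3)         # stand-in for the generator's albedo output
light_dir = F.normalize(torch.randn(3), dim=0)
colors = lambertian_shading(albedo, normals_from_density(sigma, points), light_dir)
```

In the full method, such shaded colors would be volume-rendered into an image and scored by the discriminator, so only shapes whose normals yield realistic shading under varied lighting are rewarded during training.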

Related research

- 04/24/2023 | TensoIR: Tensorial Inverse Rendering
  "We propose TensoIR, a novel inverse rendering approach based on tensor f..."
- 11/01/2021 | Generative Occupancy Fields for 3D Surface-Aware Image Synthesis
  "The advent of generative radiance fields has significantly promoted the ..."
- 03/13/2023 | SDF-3DGAN: A 3D Object Generative Method Based on Implicit Signed Distance Function
  "In this paper, we develop a new method, termed SDF-3DGAN, for 3D object ..."
- 03/31/2023 | VDN-NeRF: Resolving Shape-Radiance Ambiguity via View-Dependence Normalization
  "We propose VDN-NeRF, a method to train neural radiance fields (NeRFs) fo..."
- 03/20/2023 | DehazeNeRF: Multiple Image Haze Removal and 3D Shape Reconstruction using Neural Radiance Fields
  "Neural radiance fields (NeRFs) have demonstrated state-of-the-art perfor..."
- 06/30/2023 | MARF: The Medial Atom Ray Field Object Representation
  "We propose Medial Atom Ray Fields (MARFs), a novel neural object represe..."
- 06/18/2022 | GAN2X: Non-Lambertian Inverse Rendering of Image GANs
  "2D images are observations of the 3D physical world depicted with the ge..."
