Image GANs meet Differentiable Rendering for Inverse Graphics and Interpretable 3D Neural Rendering

10/18/2020
by Yuxuan Zhang et al.

Differentiable rendering has paved the way to training neural networks to perform "inverse graphics" tasks such as predicting 3D geometry from monocular photographs. To train high-performing models, most current approaches rely on multi-view imagery, which is not readily available in practice. Recent Generative Adversarial Networks (GANs) that synthesize images, in contrast, seem to acquire 3D knowledge implicitly during training: object viewpoints can be manipulated by simply manipulating the latent codes. However, these latent codes often lack further physical interpretation, and thus GANs cannot easily be inverted to perform explicit 3D reasoning. In this paper, we aim to extract and disentangle 3D knowledge learned by generative models by utilizing differentiable renderers. Key to our approach is to exploit GANs as a multi-view data generator to train an inverse graphics network using an off-the-shelf differentiable renderer, and to use the trained inverse graphics network as a teacher to disentangle the GAN's latent code into interpretable 3D properties. The entire architecture is trained iteratively using cycle consistency losses. We show that our approach significantly outperforms state-of-the-art inverse graphics networks trained on existing datasets, both quantitatively and via user studies. We further showcase the disentangled GAN as a controllable 3D "neural renderer", complementing traditional graphics renderers.
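The abstract describes a loop in which the GAN acts as a multi-view data generator and an inverse graphics network is trained against a differentiable renderer with cycle consistency losses. The sketch below is a minimal, hypothetical PyTorch rendition of that first stage under stated assumptions: the GAN's latent code is assumed to split into a content part and a viewpoint part, and the module names (MultiViewGAN, InverseGraphicsNet, toy_render) and the toy renderer are illustrative placeholders, not the authors' released code or the renderer used in the paper.

```python
# Minimal, hypothetical sketch of the training loop described above, assuming a
# pretrained GAN whose latent code splits into a content part and a viewpoint part.
# All module names and the toy renderer are illustrative placeholders only.
import torch
import torch.nn as nn

IMG, LATENT, VIEW, SHAPE = 64, 128, 6, 256


class MultiViewGAN(nn.Module):
    """Stand-in for a pretrained image GAN used as a multi-view data generator."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT + VIEW, 3 * IMG * IMG), nn.Tanh())

    def forward(self, z_content, z_view):
        x = self.net(torch.cat([z_content, z_view], dim=-1))
        return x.view(-1, 3, IMG, IMG)


class InverseGraphicsNet(nn.Module):
    """Predicts a flat 3D code (standing in for mesh, texture, light) from an image."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * IMG * IMG, SHAPE))

    def forward(self, img):
        return self.net(img)


def toy_render(shape_code, z_view):
    """Placeholder for an off-the-shelf differentiable renderer: any function that
    maps (3D code, viewpoint) to an image while remaining differentiable."""
    shade = (shape_code.mean(dim=1) + z_view.mean(dim=1)).tanh()
    return shade.view(-1, 1, 1, 1).expand(-1, 3, IMG, IMG)


gan, inv_net = MultiViewGAN(), InverseGraphicsNet()
opt = torch.optim.Adam(inv_net.parameters(), lr=1e-4)

for step in range(100):
    z_content = torch.randn(8, LATENT)                  # one object identity...
    views = [torch.randn(8, VIEW) for _ in range(2)]    # ...seen from two viewpoints
    with torch.no_grad():                               # the GAN is only a data source
        imgs = [gan(z_content, v) for v in views]

    codes = [inv_net(img) for img in imgs]

    # Cycle consistency: the same object seen from different viewpoints should map
    # to the same 3D code, and re-rendering that code should reproduce each image.
    consistency = (codes[0] - codes[1]).pow(2).mean()
    recon = sum((toy_render(c, v) - img).abs().mean()
                for c, v, img in zip(codes, views, imgs))

    loss = recon + consistency
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The second stage described in the abstract, in which the trained inverse graphics network is used as a teacher to disentangle the GAN's latent code into interpretable 3D properties, is omitted from this sketch for brevity.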


Related research

Differentiable Rendering for Synthetic Aperture Radar Imagery (04/04/2022)
There is rising interest in integrating signal and image processing pipe...

SimVAE: Simulator-Assisted Training for Interpretable Generative Models (11/19/2019)
This paper presents a simulator-assisted training method (SimVAE) for va...

Inverse Graphics GAN: Learning to Generate 3D Shapes from Unstructured 2D Data (02/28/2020)
Recent work has shown the ability to learn generative models for 3D shap...

NeRF-GAN Distillation for Memory-Efficient 3D-Aware Generation with Convolutions (03/22/2023)
Pose-conditioned convolutional generative models struggle with high-qual...

Deep Convolutional Inverse Graphics Network (03/11/2015)
This paper presents the Deep Convolution Inverse Graphics Network (DC-IG...

3D-Aware Scene Manipulation via Inverse Graphics (08/28/2018)
We aim to obtain an interpretable, expressive and disentangled scene rep...

Modular Primitives for High-Performance Differentiable Rendering (11/06/2020)
We present a modular differentiable renderer design that yields performa...
