Disentangled3D: Learning a 3D Generative Model with Disentangled Geometry and Appearance from Monocular Images

03/29/2022
by   Ayush Tewari, et al.

Learning 3D generative models from a dataset of monocular images enables self-supervised 3D reasoning and controllable synthesis. State-of-the-art 3D generative models are GANs which use neural 3D volumetric representations for synthesis. Images are synthesized by rendering the volumes from a given camera. These models can disentangle the 3D scene from the camera viewpoint in any generated image. However, most models do not disentangle other factors of image formation, such as geometry and appearance. In this paper, we design a 3D GAN which can learn a disentangled model of objects, just from monocular observations. Our model can disentangle the geometry and appearance variations in the scene, i.e., we can independently sample from the geometry and appearance spaces of the generative model. This is achieved using a novel non-rigid deformable scene formulation. A 3D volume which represents an object instance is computed as a non-rigidly deformed canonical 3D volume. Our method learns the canonical volume, as well as its deformations, jointly during training. This formulation also helps us improve the disentanglement between the 3D scene and the camera viewpoints using a novel pose regularization loss defined on the 3D deformation field. In addition, we further model the inverse deformations, enabling the computation of dense correspondences between images generated by our model. Finally, we design an approach to embed real images into the latent space of our disentangled generative model, enabling editing of real images.
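The core formulation can be illustrated with a minimal sketch. This is not the paper's actual architecture: random linear maps stand in for the learned deformation and canonical-volume MLPs, and all latent dimensions are made up. It only shows the structural idea that density is determined by the geometry code alone (via the deformation into the canonical volume), while color is additionally conditioned on an appearance code, so the two can be sampled independently:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper).
GEOM_DIM, APP_DIM = 8, 8

# Random weights stand in for the learned networks.
W_def = rng.normal(size=(3 + GEOM_DIM, 3)) * 0.1     # deformation network
W_sigma = rng.normal(size=(3, 1)) * 0.1              # density head (geometry only)
W_rgb = rng.normal(size=(3 + APP_DIM, 3)) * 0.1      # color head (appearance-conditioned)

def deform(x, z_geom):
    """Map a world-space point into the canonical volume via a per-instance offset."""
    inp = np.concatenate([x, z_geom])
    return x + np.tanh(inp @ W_def)   # x_canonical = x + delta(x, z_geom)

def canonical_field(x_can, z_app):
    """Query the shared canonical volume: density from geometry, color from appearance."""
    sigma = np.log1p(np.exp(x_can @ W_sigma))                          # softplus -> density >= 0
    rgb = 1 / (1 + np.exp(-(np.concatenate([x_can, z_app]) @ W_rgb)))  # sigmoid -> [0, 1]
    return sigma, rgb

# Sample geometry and appearance codes independently.
x = np.array([0.1, -0.2, 0.3])
z_geom = rng.normal(size=GEOM_DIM)
z_app_1 = rng.normal(size=APP_DIM)
z_app_2 = rng.normal(size=APP_DIM)

x_can = deform(x, z_geom)
sigma_1, rgb_1 = canonical_field(x_can, z_app_1)
sigma_2, rgb_2 = canonical_field(x_can, z_app_2)

# Swapping the appearance code changes the color at the point,
# but the density (and hence the shape) is untouched.
print(np.allclose(sigma_1, sigma_2))  # prints True
```

Because every object instance is reached by deforming the same canonical volume, correspondences between two generated images follow by composing one instance's inverse deformation with the other's forward deformation, which is why the paper also models the inverse deformations.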

Related research

12/22/2020
Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video
We present Non-Rigid Neural Radiance Fields (NR-NeRF), a reconstruction ...

11/14/2022
Controllable GAN Synthesis Using Non-Rigid Structure-from-Motion
In this paper, we present an approach for combining non-rigid structure-...

02/02/2023
SceneDreamer: Unbounded 3D Scene Generation from 2D Image Collections
In this work, we present SceneDreamer, an unconditional generative model...

07/07/2023
AutoDecoding Latent 3D Diffusion Models
We present a novel approach to the generation of static and articulated ...

12/02/2022
ObjectStitch: Generative Object Compositing
Object compositing based on 2D images is a challenging problem since it ...

10/03/2022
SinGRAV: Learning a Generative Radiance Volume from a Single Natural Scene
We present a 3D generative model for general natural scenes. Lacking nec...

08/30/2021
Equine Pain Behavior Classification via Self-Supervised Disentangled Pose Representation
Timely detection of horse pain is important for equine welfare. Horses e...
