Pix2Shape – Towards Unsupervised Learning of 3D Scenes from Images using a View-based Representation

03/23/2020
by Sai Rajeswar, et al.

We infer and generate three-dimensional (3D) scene information from a single input image, without supervision. This problem is under-explored, with most prior work relying on supervision from, e.g., 3D ground-truth, multiple images of a scene, image silhouettes, or key-points. We propose Pix2Shape, an approach to solve this problem with four components: (i) an encoder that infers the latent 3D representation from an image, (ii) a decoder that generates an explicit 2.5D surfel-based reconstruction of a scene from the latent code, (iii) a differentiable renderer that synthesizes a 2D image from the surfel representation, and (iv) a critic network trained to discriminate between images generated by the decoder-renderer and those from a training distribution. Pix2Shape can generate complex 3D scenes that scale with the view-dependent on-screen resolution, unlike representations tied to world-space resolution, such as voxels or meshes. We show that Pix2Shape learns a consistent scene representation in its encoded latent space, and that the decoder can then be applied to this latent representation to synthesize the scene from a novel viewpoint. We evaluate Pix2Shape with experiments on the ShapeNet dataset as well as on a novel benchmark we developed, called 3D-IQTT, which evaluates models on their ability to perform 3D spatial reasoning. Qualitative and quantitative evaluations demonstrate Pix2Shape's ability to solve scene reconstruction, generation, and understanding tasks.
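To make the four-component pipeline concrete, below is a minimal PyTorch-style sketch of how the encoder, surfel decoder, differentiable renderer, and critic could be wired together. Every architecture, layer size, and name here is an illustrative assumption (including the Lambertian-shading stand-in for the renderer), not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of the Pix2Shape pipeline described in the abstract.
# All module choices below are assumptions for illustration only.

LATENT_DIM = 256
IMG_CHANNELS = 3

class Encoder(nn.Module):
    """(i) Infers a latent 3D scene code from a single 2D image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(IMG_CHANNELS, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, LATENT_DIM),
        )
    def forward(self, img):
        return self.net(img)

class SurfelDecoder(nn.Module):
    """(ii) Decodes the latent code into a view-dependent 2.5D surfel map:
    per-pixel depth plus surface normals, at screen-space resolution."""
    def __init__(self, out_res=64):
        super().__init__()
        self.out_res = out_res
        # 4 channels per pixel: 1 depth + 3 normal components
        self.net = nn.Linear(LATENT_DIM, 4 * out_res * out_res)
    def forward(self, z):
        surfels = self.net(z).view(-1, 4, self.out_res, self.out_res)
        depth = surfels[:, :1]
        normals = F.normalize(surfels[:, 1:], dim=1)
        return depth, normals

def render(depth, normals, light_dir):
    """(iii) Stand-in differentiable renderer: simple Lambertian shading of
    the surfel normals. The real renderer also models camera and materials."""
    light = light_dir.view(1, 3, 1, 1)
    shading = (normals * light).sum(dim=1, keepdim=True).clamp(min=0.0)
    return shading.expand(-1, IMG_CHANNELS, -1, -1)

class Critic(nn.Module):
    """(iv) Discriminates rendered images from real training images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(IMG_CHANNELS, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, img):
        return self.net(img).mean(dim=(1, 2, 3))

if __name__ == "__main__":
    enc, dec, critic = Encoder(), SurfelDecoder(), Critic()
    real = torch.rand(2, IMG_CHANNELS, 64, 64)         # placeholder batch
    z = enc(real)                                      # infer latent scene code
    depth, normals = dec(z)                            # explicit 2.5D surfels
    fake = render(depth, normals, torch.tensor([0.0, 0.0, 1.0]))
    print(critic(fake).shape)                          # per-image adversarial score
```

In this sketch, the decoder outputs per-pixel surfels rather than a voxel grid or mesh, which is what lets the reconstruction's detail scale with the on-screen resolution of the rendered view rather than with a fixed world-space discretization.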

