Learning single-image 3D reconstruction by generative modelling of shape, pose and shading

01/19/2019
by Paul Henderson, et al.

We present a unified framework tackling two problems: class-specific 3D reconstruction from a single image, and generation of new 3D shape samples. These tasks have received considerable attention recently; however, most existing approaches rely on 3D supervision, annotation of 2D images with keypoints or poses, and/or training with multiple views of each object instance. Our framework is very general: it can be trained in similar settings to existing approaches, while also supporting weaker supervision. Importantly, it can be trained purely from 2D images, without pose annotations, and with only a single view per instance. We employ meshes as an output representation, instead of voxels used in most prior work. This allows us to reason over lighting parameters and exploit shading information during training, which previous 2D-supervised methods cannot. Thus, our method can learn to generate and reconstruct concave object classes. We evaluate our approach in various settings, showing that: (i) it learns to disentangle shape from pose and lighting; (ii) using shading in the loss improves performance compared to just silhouettes; (iii) when using a standard single white light, our model outperforms state-of-the-art 2D-supervised methods, both with and without pose supervision, thanks to exploiting shading cues; (iv) performance improves further when using multiple coloured lights, even approaching that of state-of-the-art 3D-supervised methods; (v) shapes produced by our model capture smooth surfaces and fine details better than voxel-based approaches; and (vi) our approach supports concave classes such as bathtubs and sofas, which methods based on silhouettes cannot learn.
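The shading cue described above can be illustrated with a minimal sketch. This is not the authors' code; all names and parameter values here are hypothetical, and the paper's actual loss is computed through a differentiable mesh renderer within a generative model. The sketch only shows the basic idea that, unlike a silhouette loss, a Lambertian shading loss compares predicted per-pixel intensities (derived from surface normals and a light direction) against the observed image, so it carries information about concave geometry:

```python
# Illustrative sketch (hypothetical names): a Lambertian shading loss of the
# kind the paper exploits beyond silhouettes. Given per-pixel surface normals
# rendered from a predicted mesh, a light direction, and ambient/diffuse
# intensities, shade the pixels and compare against the observed image.

import numpy as np

def lambertian_shading(normals, light_dir, diffuse=0.8, ambient=0.2):
    """Per-pixel Lambertian shading: ambient + diffuse * max(0, n . l).

    normals: (H, W, 3) unit surface normals; light_dir: (3,) light direction.
    Returns an (H, W) array of predicted intensities.
    """
    l = light_dir / np.linalg.norm(light_dir)
    ndotl = np.clip(normals @ l, 0.0, None)  # (H, W) cosine term, clamped
    return ambient + diffuse * ndotl

def shading_loss(normals, light_dir, observed, mask):
    """Mean squared error between shaded prediction and observed intensities,
    restricted to pixels inside the object silhouette (mask)."""
    pred = lambertian_shading(normals, light_dir)
    diff = (pred - observed) * mask
    return np.sum(diff ** 2) / max(mask.sum(), 1)
```

In the paper this kind of photometric term is minimised jointly over shape, pose, and lighting parameters; the sketch above just makes explicit why shading, unlike silhouettes alone, distinguishes a concave surface (normals tilted away from the light) from a flat one.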

Related research

- Learning to Generate and Reconstruct 3D Meshes with only 2D Supervision (07/24/2018)
- Self-Supervised 2D Image to 3D Shape Translation with Disentangled Representations (03/22/2020)
- Spectral reflectance estimation from one RGB image using self-interreflections in a concave object (03/05/2018)
- Learning to Reconstruct Shapes from Unseen Classes (12/28/2018)
- Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency (04/21/2022)
- Matryoshka Networks: Predicting 3D Geometry via Nested Shape Layers (04/29/2018)
- Pix2Vex: Image-to-Geometry Reconstruction using a Smooth Differentiable Renderer (03/26/2019)
