Deep disentangled representations for volumetric reconstruction

10/12/2016
by Edward Grant, et al.

We introduce a convolutional neural network that infers a compact, disentangled graphical description of objects from 2D images, which can be used for volumetric reconstruction. The network comprises an encoder and a twin-tailed decoder. The encoder generates a disentangled graphics code; the first decoder tail generates a volume from this code, and the second reconstructs the input image. A novel training regime allows the graphics code to learn a separate representation of the 3D object and a description of its lighting and pose conditions. We demonstrate the method by generating volumes and disentangled graphical descriptions from images and videos of faces and chairs.
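
The abstract gives no architectural specifics, so the sketch below (in PyTorch) shows one plausible reading of the encoder and twin-tailed decoder. The 64x64 input, the 200-unit code, the split of that code into shape versus lighting-and-pose units, and all names here are illustrative assumptions, not the authors' implementation.

    # A minimal sketch of the encoder / twin-tailed decoder described above.
    # Layer sizes, the 64x64 input, the 200-unit code, and the shape vs.
    # lighting/pose split are illustrative assumptions, not the paper's values.
    import torch
    import torch.nn as nn

    class TwinTailedReconstructor(nn.Module):
        def __init__(self, code_dim=200, shape_dim=160):
            super().__init__()
            self.shape_dim = shape_dim  # code units reserved for the 3D object
            # Encoder: 2D image -> disentangled graphics code.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
                nn.Flatten(),
                nn.Linear(64 * 16 * 16, code_dim),
            )
            # First decoder tail: shape units only -> 32^3 occupancy volume.
            self.volume_decoder = nn.Sequential(
                nn.Linear(shape_dim, 32 ** 3), nn.Sigmoid(),
            )
            # Second decoder tail: full code (shape + lighting/pose) -> image.
            self.image_decoder = nn.Sequential(
                nn.Linear(code_dim, 64 * 64), nn.Sigmoid(),
            )

        def forward(self, x):                       # x: (B, 1, 64, 64)
            code = self.encoder(x)
            shape_code = code[:, : self.shape_dim]  # object identity only
            volume = self.volume_decoder(shape_code).view(-1, 32, 32, 32)
            image = self.image_decoder(code).view(-1, 1, 64, 64)
            return volume, image

Under these assumptions, the volume tail would be supervised with ground-truth voxels while the image tail is supervised with the input pixels, so the lighting-and-pose units receive gradients only through image reconstruction; that is one way the code could be pushed toward the separation the abstract describes.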

Related research

- Deep Convolutional Inverse Graphics Network (03/11/2015)
- Human-Object Interaction Detection via Disentangled Transformer (04/20/2022)
- Learning Disentangled Representations with Semi-Supervised Deep Generative Models (06/01/2017)
- Disentangled Representations in Neural Models (02/07/2016)
- Spatial Broadcast Decoder: A Simple Architecture for Learning Disentangled Representations in VAEs (01/21/2019)
- Adaptive Appearance Rendering (04/24/2021)
- Towards A Controllable Disentanglement Network (01/22/2020)
