Factoring Shape, Pose, and Layout from the 2D Image of a 3D Scene

12/05/2017
by Shubham Tulsiani et al.

The goal of this paper is to take a single 2D image of a scene and recover its 3D structure in terms of a small set of factors: a layout representing the enclosing surfaces, and a set of objects each represented in terms of shape and pose. We propose a convolutional neural network-based approach to predict this representation and benchmark it on a large dataset of indoor scenes. Our experiments evaluate a number of practical design questions, show that we can infer this representation from a single image, and demonstrate its merits over alternate representations, both quantitatively and qualitatively.
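The factored scene representation described above (a layout plus per-object shape and pose) could be sketched as a data structure like the following. Field names, array shapes, and the specific encodings (voxel occupancy for shape, a quaternion/translation/scale for pose, an amodal depth map for layout) are illustrative assumptions, not the paper's exact interface:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ObjectFactor:
    """One object, factored into shape (canonical frame) and pose (scene frame)."""
    shape: np.ndarray        # assumed: (32, 32, 32) voxel occupancies in [0, 1]
    rotation: np.ndarray     # assumed: unit quaternion (w, x, y, z)
    translation: np.ndarray  # (x, y, z) position in scene coordinates
    scale: np.ndarray        # per-axis scale applied to the canonical shape

@dataclass
class SceneFactors:
    """Full scene: enclosing surfaces plus a variable-length set of objects."""
    # Layout of the enclosing surfaces (walls/floor/ceiling), encoded here
    # as a per-pixel amodal depth map -- an assumed encoding.
    layout: np.ndarray       # (H, W) depth of the enclosing surfaces
    objects: list = field(default_factory=list)

# Toy instance: an empty 5m-deep room containing a single object.
scene = SceneFactors(
    layout=np.full((128, 128), 5.0),
    objects=[ObjectFactor(
        shape=np.zeros((32, 32, 32)),
        rotation=np.array([1.0, 0.0, 0.0, 0.0]),  # identity rotation
        translation=np.array([0.0, 0.0, 2.0]),
        scale=np.ones(3),
    )],
)
print(len(scene.objects), scene.layout.shape)
```

A prediction network would then regress each of these factors from the input image: the layout as a dense per-pixel output, and shape/pose per detected object.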


