3D Semantic Scene Completion from a Single Depth Image using Adversarial Training

05/15/2019
by   Yueh-Tung Chen, et al.

We address the task of 3D semantic scene completion: given a single depth image, we predict the semantic labels and occupancy of the voxels in a 3D grid representing the scene. In light of the recently introduced generative adversarial networks (GANs), our goal is to explore the potential of this model class and the effect of several important design choices. Our results show that a conditional GAN outperforms the vanilla GAN setup. We evaluate these architectural designs on several datasets and demonstrate that GANs can outperform a baseline 3D CNN when annotations are clean, but that they suffer from poorly aligned annotations.
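The conditioning idea in the abstract can be sketched as follows: instead of judging a semantic voxel grid in isolation, the discriminator receives the observed input volume concatenated channel-wise with the (predicted or ground-truth) completion, so it scores the pair. This is a minimal PyTorch sketch under assumed shapes and class count; the layer sizes and the 12-class label set are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 12  # assumed NYU-style label set; not taken from the paper


class CondVoxelDiscriminator(nn.Module):
    """Toy conditional 3D discriminator: scores (input volume, completion) pairs."""

    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            # 1 input channel (observed volume) + num_classes (semantic grid)
            nn.Conv3d(1 + num_classes, 16, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(16, 32, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(32, 1),  # single real/fake logit per sample
        )

    def forward(self, input_volume, semantic_volume):
        # input_volume: (B, 1, D, H, W); semantic_volume: (B, C, D, H, W)
        # Channel-wise concatenation implements the conditioning.
        return self.net(torch.cat([input_volume, semantic_volume], dim=1))


disc = CondVoxelDiscriminator()
obs = torch.randn(2, 1, 16, 16, 16)            # observed (depth-derived) volume
pred = torch.randn(2, NUM_CLASSES, 16, 16, 16)  # candidate semantic completion
logits = disc(obs, pred)                        # shape (2, 1)
```

In the vanilla GAN setup, by contrast, the discriminator would see only `semantic_volume`, with no channel carrying the observed input.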
