SynthCam3D: Semantic Understanding With Synthetic Indoor Scenes

05/01/2015
by Ankur Handa et al.

We are interested in automatic scene understanding from geometric cues. To this end, we aim to bring semantic segmentation into the loop of real-time 3D reconstruction. Our semantic segmentation is built on a deep autoencoder stack trained exclusively on synthetic depth data generated from our novel 3D scene library, SynthCam3D. Importantly, our network is able to segment real-world scenes without any noise modelling. We present encouraging preliminary results.
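The core idea of the abstract can be sketched as a per-pixel labelling network whose input is a depth map rather than an RGB image. The following is a minimal, hypothetical illustration of that pipeline, not the paper's actual architecture: a toy encoder-decoder with random weights that maps a synthetic depth patch to per-pixel class logits. All names, sizes, and the number of classes are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions, not from the paper): an 8x8 depth
# patch, 5 semantic classes, a 32-unit bottleneck.
PATCH, CLASSES, HIDDEN = 8, 5, 32

# Random untrained weights stand in for a trained autoencoder stack.
W_enc = rng.normal(0.0, 0.1, (PATCH * PATCH, HIDDEN))
W_dec = rng.normal(0.0, 0.1, (HIDDEN, PATCH * PATCH * CLASSES))


def relu(x):
    return np.maximum(x, 0.0)


def segment(depth_patch):
    """Map a depth patch to a per-pixel semantic label map."""
    z = relu(depth_patch.reshape(-1) @ W_enc)            # encode depth
    logits = (z @ W_dec).reshape(PATCH, PATCH, CLASSES)  # decode to logits
    return logits.argmax(axis=-1)                        # per-pixel labels


# A synthetic depth patch, standing in for depth rendered from a
# 3D scene model (depths in metres).
depth = rng.uniform(0.5, 5.0, (PATCH, PATCH))
labels = segment(depth)
print(labels.shape)  # one class index per pixel: (8, 8)
```

The point of the sketch is the data flow: because only geometry enters the network, the same forward pass applies unchanged to depth maps rendered from synthetic scenes and to depth maps captured by a real sensor.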


Related research

04/01/2018  Real-time Progressive 3D Semantic Segmentation for Indoor Scene
The widespread adoption of autonomous systems such as drones and assista...

10/30/2019  Multi Modal Semantic Segmentation using Synthetic Data
Semantic understanding of scenes in three-dimensional space (3D) is a qu...

02/17/2022  Shift-Memory Network for Temporal Scene Segmentation
Semantic segmentation has achieved great accuracy in understanding spati...

10/13/2015  SemanticPaint: A Framework for the Interactive Segmentation of 3D Scenes
We present an open-source, real-time implementation of SemanticPaint, a ...

11/25/2021  NeSF: Neural Semantic Fields for Generalizable Semantic Segmentation of 3D Scenes
We present NeSF, a method for producing 3D semantic fields from posed RG...

06/18/2019  Active Scene Understanding via Online Semantic Reconstruction
We propose a novel approach to robot-operated active understanding of un...

04/29/2019  Casting Geometric Constraints in Semantic Segmentation as Semi-Supervised Learning
We propose a simple yet effective method to learn to segment new indoor ...
