Configurable, Photorealistic Image Rendering and Ground Truth Synthesis by Sampling Stochastic Grammars Representing Indoor Scenes

04/01/2017
by Chenfanfu Jiang, et al.

We propose the configurable rendering of massive quantities of photorealistic images with ground truth for the purposes of training, benchmarking, and diagnosing computer vision models. In contrast to the conventional (crowd-sourced) manual labeling of ground truth for a relatively modest number of RGB-D images captured by Kinect-like sensors, we devise a non-trivial configurable pipeline of algorithms capable of generating a potentially infinite variety of indoor scenes using a stochastic grammar, specifically, one represented by an attributed spatial And-Or graph. We employ physics-based rendering to synthesize photorealistic RGB images while automatically synthesizing detailed, per-pixel ground truth data, including visible surface depth and normal, object identity and material information, as well as illumination. Our pipeline is configurable inasmuch as it enables the precise customization and control of important attributes of the generated scenes. We demonstrate that our generated scenes achieve performance similar to that of the NYU v2 Dataset on pre-trained deep learning models. By modifying pipeline components in a controllable manner, we furthermore provide diagnostics on common scene understanding tasks, e.g., depth and surface normal prediction and semantic segmentation.
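To make the sampling idea concrete, the following is a minimal sketch of drawing a scene parse from an And-Or grammar: AND nodes expand all of their children, OR nodes select one child according to a branching probability, and leaves are terminal objects. The grammar, symbol names, and probabilities below are illustrative assumptions, not the paper's actual attributed spatial And-Or graph (which additionally carries spatial attributes and relations on its nodes).

```python
import random

# Hypothetical toy grammar: each non-terminal maps to ("AND", children)
# or ("OR", [(child, probability), ...]). Terminals are absent keys.
GRAMMAR = {
    "scene":     ("AND", ["room", "furniture"]),
    "room":      ("OR",  [("bedroom", 0.5), ("livingroom", 0.5)]),
    "furniture": ("AND", ["seat", "table"]),
    "seat":      ("OR",  [("chair", 0.7), ("sofa", 0.3)]),
    "table":     ("OR",  [("desk", 0.6), ("coffee_table", 0.4)]),
}

def sample(symbol, rng=random):
    """Recursively sample a list of terminal objects from GRAMMAR."""
    if symbol not in GRAMMAR:            # terminal: emit the object itself
        return [symbol]
    kind, children = GRAMMAR[symbol]
    if kind == "AND":                    # AND node: expand every child
        out = []
        for child in children:
            out.extend(sample(child, rng))
        return out
    names, weights = zip(*children)      # OR node: pick one child by weight
    choice = rng.choices(names, weights=weights, k=1)[0]
    return sample(choice, rng)

print(sample("scene"))                   # e.g. ['bedroom', 'chair', 'desk']
```

Repeated calls yield distinct scene configurations, which is what lets such a grammar generate a potentially infinite variety of indoor scenes; the generated parse would then drive object placement and physics-based rendering.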


