
Configurable, Photorealistic Image Rendering and Ground Truth Synthesis by Sampling Stochastic Grammars Representing Indoor Scenes

by Chenfanfu Jiang et al.

We propose the configurable rendering of massive quantities of photorealistic images with ground truth for the purposes of training, benchmarking, and diagnosing computer vision models. In contrast to the conventional (crowd-sourced) manual labeling of ground truth for a relatively modest number of RGB-D images captured by Kinect-like sensors, we devise a non-trivial configurable pipeline of algorithms capable of generating a potentially infinite variety of indoor scenes using a stochastic grammar, specifically, one represented by an attributed spatial And-Or graph. We employ physics-based rendering to synthesize photorealistic RGB images while automatically synthesizing detailed, per-pixel ground truth data, including visible surface depth and normal, object identity and material information, as well as illumination. Our pipeline is configurable inasmuch as it enables the precise customization and control of important attributes of the generated scenes. We demonstrate that our generated scenes yield performance similar to that of the NYU v2 Dataset on pre-trained deep learning models. By modifying pipeline components in a controllable manner, we furthermore provide diagnostics on common scene understanding tasks, e.g., depth and surface normal prediction, semantic segmentation, etc.
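The core sampling idea can be illustrated with a toy And-Or graph: And-nodes expand into all of their children (scene composition), while Or-nodes stochastically select one alternative (scene variation). The grammar below is a hypothetical miniature for illustration only, not the paper's actual attributed spatial And-Or graph, which additionally carries geometric and material attributes on its nodes.

```python
import random

# Toy grammar (illustrative, not the paper's actual model):
# And-nodes list all children; Or-nodes list (child, probability) pairs.
# Symbols absent from the table are terminals (concrete objects).
GRAMMAR = {
    "scene": ("and", ["room", "furniture_group"]),
    "room": ("or", [("bedroom", 0.5), ("living_room", 0.5)]),
    "furniture_group": ("and", ["seat", "table"]),
    "seat": ("or", [("chair", 0.7), ("sofa", 0.3)]),
    "table": ("or", [("desk", 0.6), ("coffee_table", 0.4)]),
}

def sample(symbol, rng):
    """Recursively sample one terminal configuration from the grammar."""
    if symbol not in GRAMMAR:            # terminal: a concrete object
        return [symbol]
    kind, children = GRAMMAR[symbol]
    if kind == "and":                    # And-node: expand every child
        out = []
        for child in children:
            out.extend(sample(child, rng))
        return out
    labels, weights = zip(*children)     # Or-node: pick one alternative
    choice = rng.choices(labels, weights=weights)[0]
    return sample(choice, rng)

scene = sample("scene", random.Random(0))
print(scene)  # e.g. a room type plus one seat and one table
```

Repeated sampling with different random states yields the combinatorial variety of scenes that the pipeline then renders and annotates; in the full system each sampled configuration would also receive attributes (pose, size, material) before physics-based rendering.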

Physically-Based Rendering for Indoor Scene Understanding Using Convolutional Neural Networks

Indoor scene understanding is central to applications such as robot navi...

SceneNet RGB-D: 5M Photorealistic Images of Synthetic Indoor Trajectories with Ground Truth

We introduce SceneNet RGB-D, expanding the previous work of SceneNet to ...

IRS: A Large Synthetic Indoor Robotics Stereo Dataset for Disparity and Surface Normal Estimation

Indoor robotics localization, navigation and interaction heavily rely on...

MineNav: An Expandable Synthetic Dataset Based on Minecraft for Aircraft Visual Navigation

We propose a simple method to generate a high-quality synthetic dataset ba...

Sim2Real Docs: Domain Randomization for Documents in Natural Scenes using Ray-traced Rendering

In the past, computer vision systems for digitized documents could rely ...

360° Surface Regression with a Hyper-Sphere Loss

Omnidirectional vision is becoming increasingly relevant as more efficie...

Hypersim: A Photorealistic Synthetic Dataset for Holistic Indoor Scene Understanding

For many fundamental scene understanding tasks, it is difficult or impos...