Adversarially Tuned Scene Generation

01/02/2017
by V S R Veeravasarapu, et al.

Computer vision systems trained on computer graphics (CG) generated data still generalize poorly, owing to the 'domain shift' between virtual and real data. Although simulated data augmented with a few real-world samples has been shown to mitigate domain shift and improve the transferability of trained models, it is desirable to guide or bootstrap virtual data generation with distributions learnt from the target real-world domain, especially for tasks where annotating even a few real images is laborious (such as semantic labeling and intrinsic images). To address this problem in an unsupervised manner, our work combines recent advances in CG (which generate stochastic scene layouts coupled with large collections of 3D object models) and generative adversarial training (which trains generative models by measuring the discrepancy between generated and real data in terms of their separability in the space of a deep, discriminatively trained classifier). Our method iteratively estimates the posterior density of the prior distributions of a generative graphical model within a rejection sampling framework. Initially, we assume uniform priors on the parameters of a scene described by the generative graphical model; as iterations proceed, the priors are updated towards distributions that are closer to the (unknown) distributions of the target data. We demonstrate the utility of adversarially tuned scene generation on two real-world benchmark datasets (CityScapes and CamVid) for traffic scene semantic labeling with a deep convolutional net (DeepLab). Tuning improved the IoU of the DeepLab models trained on the simulated sets by 2.28 and 3.14 points relative to the untuned scene generation models, on CityScapes and CamVid respectively.
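To make the tuning loop concrete, the sketch below is a minimal, purely illustrative Python version of the idea, not the paper's implementation: the one-dimensional scene parameter, the render function, and the discriminator_realness score are toy stand-ins for the paper's generative graphical model of traffic scenes, CG renderer, and deep discriminatively trained classifier, and the prior is refit by simple moment matching rather than full posterior estimation. It only illustrates the loop of sampling from the current priors, scoring generated samples against real data, accepting samples by rejection, and refitting the priors.

    import numpy as np

    rng = np.random.default_rng(0)

    def render(theta):
        # Toy 'renderer': maps a scene parameter to a feature
        # (stand-in for rendering a CG image from scene parameters).
        return theta + 0.1 * rng.normal()

    def discriminator_realness(x, real_features):
        # Toy realness score in [0, 1]: stands in for a deep classifier
        # trained to separate real from simulated data. Samples closer to
        # the real-data mean get higher scores.
        d = abs(x - real_features.mean())
        return float(np.exp(-d ** 2))

    def tune_priors(real_features, n_iters=10, n_samples=2000):
        # The paper initializes with uniform priors on the scene parameters;
        # here the prior family is Gaussian, initialized to match the moments
        # of a broad uniform prior (a simplification for this sketch).
        lo, hi = -5.0, 5.0
        mean, std = 0.0, (hi - lo) / np.sqrt(12.0)

        for it in range(n_iters):
            # 1) Sample scene parameters from the current prior and 'render' them.
            thetas = rng.normal(mean, std, size=n_samples)
            feats = np.array([render(t) for t in thetas])

            # 2) Score each generated sample with the discriminator.
            scores = np.array([discriminator_realness(f, real_features) for f in feats])

            # 3) Rejection step: accept a sample with probability proportional
            #    to its realness score.
            accepted = thetas[scores > rng.uniform(size=n_samples)]
            if len(accepted) < 10:
                continue  # too few survivors; keep the current prior

            # 4) Refit the prior to the accepted samples, moving it towards the
            #    (unknown) distribution of the target data.
            mean, std = accepted.mean(), max(accepted.std(), 1e-3)
            print(f"iter {it}: prior ~ N({mean:.3f}, {std:.3f}), "
                  f"accept rate {len(accepted) / n_samples:.2f}")

        return mean, std

    if __name__ == "__main__":
        # 'Real' target data whose generating distribution is unknown to the sampler.
        real = rng.normal(1.5, 0.3, size=500)
        tune_priors(real)

Over iterations the refit prior concentrates near the target distribution, mirroring how the tuned scene generation model drifts towards the statistics of CityScapes or CamVid before simulated training data is produced from it.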
