VAE-Info-cGAN: Generating Synthetic Images by Combining Pixel-level and Feature-level Geospatial Conditional Inputs

12/08/2020
by Xuerong Xiao, et al.

Training robust supervised deep learning models for many geospatial applications of computer vision is difficult due to a dearth of class-balanced and diverse training data. Moreover, obtaining enough training data for many applications is financially prohibitive or may be infeasible, especially when the application involves modeling rare or extreme events. Synthetically generating data (and labels) using a generative model that can sample from a target distribution and exploit the multi-scale nature of images can be an inexpensive solution to address the scarcity of labeled data. Towards this goal, we present a deep conditional generative model, called VAE-Info-cGAN, that combines a Variational Autoencoder (VAE) with a conditional Information Maximizing Generative Adversarial Network (InfoGAN), for synthesizing semantically rich images simultaneously conditioned on a pixel-level condition (PLC) and a macroscopic feature-level condition (FLC). The PLC is a task-specific input that may differ from the synthesized image only in the channel dimension. The FLC is modeled as an attribute vector in the latent space of the generated image that controls the contributions of various characteristic attributes germane to the target distribution. An interpretation of the attribute vector that systematically generates synthetic images by varying a chosen binary macroscopic feature is also explored. Experiments on a GPS trajectories dataset show that the proposed model can accurately generate various forms of spatio-temporal aggregates across different geographic locations while conditioned only on a raster representation of the road network. The primary intended application of the VAE-Info-cGAN is synthetic data (and label) generation for targeted data augmentation in computer vision-based modeling of problems relevant to geospatial analysis and remote sensing.
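To make the dual-conditioning idea concrete, the following is a minimal NumPy sketch of one plausible way a generator could ingest both inputs: the pixel-level condition (e.g. a rasterized road network) is kept at full spatial resolution, while the feature-level attribute vector is broadcast to every spatial location and concatenated along the channel axis together with latent noise. The function name, channel counts, and the concatenation scheme are illustrative assumptions, not the exact mechanism specified by the paper.

```python
import numpy as np

def combine_conditions(plc, flc, z):
    """Stack a pixel-level condition (PLC), a spatially broadcast
    feature-level condition (FLC), and a latent noise map along the
    channel axis, forming a single generator input tensor.

    plc: (H, W, C_plc) raster, e.g. a rasterized road network
    flc: (K,) attribute vector controlling macroscopic characteristics
    z:   (H, W, C_z) per-pixel latent noise
    """
    h, w, _ = plc.shape
    # Broadcast the attribute vector to every spatial location so each
    # pixel is conditioned on the same macroscopic attributes.
    flc_map = np.broadcast_to(flc, (h, w, flc.shape[0]))
    return np.concatenate([plc, flc_map, z], axis=-1)

rng = np.random.default_rng(0)
plc = rng.random((64, 64, 1))           # 1-channel road-network raster
flc = rng.random(8)                      # 8-dim attribute vector
z = rng.standard_normal((64, 64, 16))    # per-pixel latent noise
x = combine_conditions(plc, flc, z)
print(x.shape)  # (64, 64, 25)
```

Under this scheme, varying a single (e.g. binary) entry of `flc` while holding `plc` and `z` fixed is the natural way to probe how one macroscopic attribute changes the synthesized image, mirroring the attribute-vector interpretation explored in the paper.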

