Bias Correction of Learned Generative Models using Likelihood-Free Importance Weighting

06/23/2019
by Aditya Grover, et al.

A learned generative model often produces biased statistics relative to the underlying data distribution. A standard technique to correct this bias is importance sampling, where samples from the model are weighted by the likelihood ratio between the true and model distributions. When this likelihood ratio is unknown, it can be estimated by training a probabilistic classifier to distinguish samples from the two distributions. In this paper, we employ this likelihood-free importance weighting framework to correct for the bias in state-of-the-art deep generative models. We find that this technique consistently improves standard goodness-of-fit metrics for evaluating the sample quality of state-of-the-art generative models, suggesting reduced bias. Finally, we demonstrate its utility on representative applications in a) data augmentation for classification using generative adversarial networks, and b) model-based policy evaluation using off-policy data.
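The core idea described above can be sketched on a toy one-dimensional problem (a hypothetical setup for illustration, not the paper's experiments): train a binary classifier c(x) to separate data samples from model samples, then use the importance weight w(x) = c(x) / (1 - c(x)) as an estimate of the density ratio p_data(x) / p_model(x). The sketch below uses scikit-learn's LogisticRegression and Gaussian stand-ins for the data and model distributions; all names and parameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy setup: the "data" distribution is N(0, 1); the "model" is a biased
# approximation, N(0.5, 1.2^2). In practice the model samples would come
# from a trained deep generative model.
data_samples = rng.normal(0.0, 1.0, size=(5000, 1))
model_samples = rng.normal(0.5, 1.2, size=(5000, 1))

def features(x):
    # Quadratic features make the logistic classifier well-specified for
    # the Gaussian-vs-Gaussian log-density ratio in this toy example.
    return np.hstack([x, x ** 2])

# Train a probabilistic classifier: label 1 = data, label 0 = model samples.
X = np.vstack([features(data_samples), features(model_samples)])
y = np.concatenate([np.ones(5000), np.zeros(5000)])
clf = LogisticRegression().fit(X, y)

def importance_weight(x, eps=1e-6):
    # w(x) = c(x) / (1 - c(x)) estimates p_data(x) / p_model(x),
    # where c(x) is the classifier's probability that x came from the data.
    c = clf.predict_proba(features(x))[:, 1]
    return c / (1.0 - c + eps)

# Bias-correct a Monte Carlo estimate of E_data[x^2] using only model samples.
xs = rng.normal(0.5, 1.2, size=(20000, 1))
f = xs[:, 0] ** 2
w = importance_weight(xs)
naive = f.mean()                     # biased: estimates E_model[x^2], about 1.69
corrected = (w * f).sum() / w.sum()  # self-normalized weighted estimate of E_data[x^2]
```

Under the true data distribution N(0, 1), E[x^2] = 1, so the self-normalized weighted estimate should land near 1.0 while the naive model-based estimate stays near 0.5^2 + 1.2^2 = 1.69.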


Related research:

- On the Quantitative Analysis of Decoder-Based Generative Models (11/14/2016): The past several years have seen remarkable progress in generative model...
- Sequence-to-Set Generative Models (09/19/2022): In this paper, we propose a sequence-to-set method that can transform an...
- Actively Avoiding Nonsense in Generative Models (02/20/2018): A generative model may generate utter nonsense when it is fit to maximiz...
- Rethinking Importance Weighting for Deep Learning under Distribution Shift (06/08/2020): Under distribution shift (DS) where the training data distribution diffe...
- Importance weighted generative networks (06/07/2018): Deep generative networks can simulate from a complex target distribution...
- Sampling Generative Networks (09/14/2016): We introduce several techniques for sampling and visualizing the latent ...
- Efficient remedies for outlier detection with variational autoencoders (08/19/2021): Deep networks often make confident, yet incorrect, predictions when test...
