Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy

11/14/2016
by Dougal J. Sutherland et al.

We propose a method to optimize the representation and distinguishability of samples from two probability distributions, by maximizing the estimated power of a statistical test based on the maximum mean discrepancy (MMD). This optimized MMD is applied to the setting of unsupervised learning by generative adversarial networks (GAN), in which a model attempts to generate realistic samples, and a discriminator attempts to tell these apart from data samples. In this context, the MMD may be used in two roles: first, as a discriminator, either directly on the samples, or on features of the samples. Second, the MMD can be used to evaluate the performance of a generative model, by testing the model's samples against a reference data set. In the latter role, the optimized MMD is particularly helpful, as it gives an interpretable indication of how the model and data distributions differ, even in cases where individual model samples are not easily distinguished either by eye or by classifier.
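The central quantity throughout is the kernel MMD two-sample statistic. As a rough illustration only (not the paper's implementation), the sketch below computes the standard unbiased estimate of MMD^2 between two samples using a Gaussian RBF kernel in NumPy; the paper's contribution goes further, choosing the kernel (e.g., its bandwidth) to maximize an estimate of the resulting test's power. The function names and the fixed bandwidth here are illustrative placeholders.

```python
import numpy as np

def rbf_kernel(A, B, bandwidth):
    # Gaussian RBF kernel matrix: k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2)).
    sq_dists = (
        np.sum(A**2, axis=1)[:, None]
        + np.sum(B**2, axis=1)[None, :]
        - 2.0 * A @ B.T
    )
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased estimate of squared MMD between samples X (m, d) and Y (n, d)."""
    m, n = len(X), len(Y)
    Kxx = rbf_kernel(X, X, bandwidth)
    Kyy = rbf_kernel(Y, Y, bandwidth)
    Kxy = rbf_kernel(X, Y, bandwidth)
    # Drop diagonal (self-similarity) terms to make the estimator unbiased.
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    term_xy = Kxy.mean()
    return term_xx + term_yy - 2.0 * term_xy

# Toy usage: the statistic is near zero for samples from the same distribution
# and clearly positive when the distributions differ.
X = np.random.randn(200, 2)
Y = np.random.randn(200, 2) + 0.5
print(mmd2_unbiased(X, Y))
```

In the test setting, a statistic like this is compared against a null distribution (e.g., obtained by permutation) to decide whether the two samples come from the same distribution; optimizing the kernel to maximize estimated test power is what makes the resulting MMD both more discriminative and more interpretable.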

Related research

- Self-Diagnosing GAN: Diagnosing Underrepresented Samples in Generative Adversarial Networks (02/24/2021)
  Despite remarkable performance in producing realistic samples, Generativ...

- A Semi-Bayesian Nonparametric Hypothesis Test Using Maximum Mean Discrepancy with Applications in Generative Adversarial Networks (03/05/2023)
  A classic inferential problem in statistics is the two-sample hypothesis...

- New Losses for Generative Adversarial Learning (07/03/2018)
  Generative Adversarial Networks (Goodfellow et al., 2014), a major break...

- Associative Adversarial Networks (11/18/2016)
  We propose a higher-level associative memory for learning adversarial ne...

- A Simple Generative Network (06/17/2021)
  Generative neural networks are able to mimic intricate probability distr...

- Training generative neural networks via Maximum Mean Discrepancy optimization (05/14/2015)
  We consider training a deep neural network to generate samples from an u...

- Interpretable Distribution Features with Maximum Testing Power (05/22/2016)
  Two semimetrics on probability distributions are proposed, given as the ...
