
Proper losses for discrete generative models

by Rafael Frongillo, et al.
University of Colorado Boulder

We initiate the study of proper losses for evaluating generative models in the discrete setting. Unlike traditional proper losses, we treat both the generative model and the target distribution as black boxes, assuming only the ability to draw i.i.d. samples. We define a loss to be black-box proper if the generative distribution that minimizes expected loss is equal to the target distribution. Using techniques from statistical estimation theory, we give a general construction and characterization of black-box proper losses: they must take a polynomial form, and the number of draws from the model and target distribution must exceed the degree of the polynomial. The characterization rules out a loss whose expectation is the cross-entropy between the target distribution and the model. By extending the construction to arbitrary sampling schemes such as Poisson sampling, however, we show that one can construct such a loss.
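To make the polynomial construction concrete, here is a minimal sketch (not taken from the paper) of one black-box proper loss of degree 2: an unbiased collision-based estimate of the squared L2 distance between the target distribution p and the model distribution q, computed from two i.i.d. draws from each black box. Since E[loss] = ||p - q||^2 is minimized exactly when q = p, the loss is black-box proper in the sense defined above. The function names and sampler interface are illustrative assumptions.

```python
def collision_loss(target_sampler, model_sampler):
    """One draw of a degree-2 black-box proper loss (illustrative sketch).

    target_sampler() and model_sampler() are black boxes returning one
    i.i.d. sample each from p and q respectively.  The returned value is
    an unbiased estimate of ||p - q||_2^2, using collision indicators:
      E[1{x1 == x2}] = sum_x p(x)^2   for x1, x2 ~ p i.i.d.
      E[1{y1 == y2}] = sum_x q(x)^2   for y1, y2 ~ q i.i.d.
      E[1{x1 == y1}] = sum_x p(x)q(x) for x1 ~ p, y1 ~ q independent.
    """
    x1, x2 = target_sampler(), target_sampler()
    y1, y2 = model_sampler(), model_sampler()
    pp = 1.0 if x1 == x2 else 0.0  # unbiased for sum_x p(x)^2
    qq = 1.0 if y1 == y2 else 0.0  # unbiased for sum_x q(x)^2
    pq = 1.0 if x1 == y1 else 0.0  # unbiased for sum_x p(x) q(x)
    return pp - 2.0 * pq + qq      # unbiased for ||p - q||^2
```

Note that the loss is a degree-2 polynomial in (p, q) and uses two draws from each distribution, consistent with the characterization that the number of draws must exceed the polynomial degree.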


