Generating the support with extreme value losses

02/08/2019
by Nicholas Guttenberg, et al.

When optimizing against the mean loss over a distribution of predictions in a regression task, the optimal prediction distribution is always a delta function at a single value, even when the targets themselves are distributed. Methods for constructing generative models must overcome this tendency. We consider a simple way of summarizing the prediction error such that the optimal strategy is to output a distribution of predictions whose support matches the support of the distribution of targets: optimizing against the minimal value of the loss over a set of samples from the prediction distribution, rather than the mean. We show that models trained against this loss learn to capture the support of the target distribution and, when combined with an auxiliary classifier-like prediction task, can be projected via rejection sampling to reproduce the full distribution of targets. The resulting method works well compared to other generative modeling approaches, particularly in low-dimensional spaces with highly non-trivial distributions, because mode-collapse solutions are globally suboptimal with respect to the extreme value loss. However, the method is less suited to high-dimensional spaces such as images, because the number of samples needed to accurately estimate the extreme value loss scales poorly as the dimension of the data manifold grows.
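The core idea lends itself to a short sketch. Below is a minimal, hypothetical PyTorch-style illustration (not the authors' code): for each target we draw several candidate predictions from a generator, and only the candidate with the smallest error contributes to the loss, so covering the target's support is rewarded over collapsing to the conditional mean. The names g, n_samples, and accept_prob are assumptions introduced here for illustration.

import torch

def extreme_value_loss(g, y, n_samples=32):
    """Minimum-over-samples regression loss (hypothetical sketch).

    g: generator mapping noise of shape (..., 1) to predictions of shape (..., 1).
    y: (batch,) tensor of scalar targets.
    """
    z = torch.randn(n_samples, y.shape[0], 1)   # one noise draw per candidate
    preds = g(z).squeeze(-1)                    # (n_samples, batch) candidates
    errors = (preds - y.unsqueeze(0)) ** 2      # squared error of each candidate
    # Only the best candidate per target is penalized, so the optimal
    # generator spreads its samples over the support of the targets
    # rather than collapsing to a single value.
    return errors.min(dim=0).values.mean()

def rejection_sample(g, accept_prob, n=1000):
    """Project the support-covering sampler back onto the full target
    distribution via rejection sampling. accept_prob is an auxiliary
    model's acceptance probability, assumed to lie in [0, 1]."""
    z = torch.randn(n, 1)
    x = g(z).squeeze(-1)                        # (n,) candidate samples
    keep = torch.rand(n) < accept_prob(x)       # accept with estimated probability
    return x[keep]

Consistent with the abstract, the number of samples this estimator needs grows with the dimension of the data manifold, which is why the approach is reported to work best in low-dimensional settings.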


Related research

02/17/2021 · Deep Extreme Value Copulas for Estimation and Sampling
We propose a new method for modeling the distribution function of high d...

02/18/2022 · Minimax Rate of Distribution Estimation on Unknown Submanifold under Adversarial Losses
Statistical inference from high-dimensional data with low-dimensional st...

03/23/2023 · Kullback-Leibler divergence for the Fréchet extreme-value distribution
We derive a closed-form solution for the Kullback-Leibler divergence bet...

11/13/2019 · Dynamic Connected Neural Decision Classifier and Regressor with Dynamic Softing Pruning
In the regression problem, L1, L2 are the most commonly-used loss functi...

02/10/2016 · High Dimensional Inference with Random Maximum A-Posteriori Perturbations
This paper presents a new approach, called perturb-max, for high-dimensi...

11/07/2022 · Proper losses for discrete generative models
We initiate the study of proper losses for evaluating generative models ...

01/16/2014 · Learning to Make Predictions In Partially Observable Environments Without a Generative Model
When faced with the problem of learning a model of a high-dimensional en...
