
High Dimensional Inference with Random Maximum A-Posteriori Perturbations
This paper presents a new approach, called perturb-max, for high-dimensi...
02/10/2016 ∙ by Tamir Hazan, et al.

Learning Generative Models across Incomparable Spaces
Generative Adversarial Networks have shown remarkable success in learnin...
05/14/2019 ∙ by Charlotte Bunne, et al.

Learning to Make Predictions In Partially Observable Environments Without a Generative Model
When faced with the problem of learning a model of a high-dimensional en...
01/16/2014 ∙ by Erik Talvitie, et al.

Learning Generative Models with Sinkhorn Divergences
The ability to compare two degenerate probability distributions (i.e. tw...
06/01/2017 ∙ by Aude Genevay, et al.

Importance weighted generative networks
Deep generative networks can simulate from a complex target distribution...
06/07/2018 ∙ by Maurice Diesendruck, et al.

A sequential sampling strategy for extreme event statistics in nonlinear dynamical systems
We develop a method for the evaluation of extreme event statistics assoc...
04/19/2018 ∙ by Mustafa A. Mohamad, et al.

Efficient Robust Mean Value Calculation of 1D Features
A robust mean value is often a good alternative to the standard mean val...
01/29/2016 ∙ by Erik Jonsson, et al.
Generating the support with extreme value losses
When optimizing against the mean loss over a distribution of predictions in a regression task, the optimal prediction distribution is always a delta function at a single value, even when the targets themselves are distributed. Methods of constructing generative models need to overcome this tendency. We consider a simple way of summarizing the prediction error such that the optimal strategy is to output a distribution of predictions whose support matches the support of the distribution of targets: optimizing against the minimal value of the loss over a set of samples from the prediction distribution, rather than the mean. We show that models trained against this loss learn to capture the support of the target distribution and, when combined with an auxiliary classifier-like prediction task, can be projected via rejection sampling to reproduce the full distribution of targets. The resulting method compares well against other generative modeling approaches, particularly in low-dimensional spaces with highly non-trivial distributions, because mode-collapse solutions are globally suboptimal with respect to the extreme value loss. However, the method is less suited to high-dimensional spaces such as images, because the number of samples needed to accurately estimate the extreme value loss grows rapidly as the dimension of the data manifold becomes large.
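The core idea can be sketched numerically. In the hypothetical snippet below (the function names and the 1-D squared-error setting are illustrative assumptions, not the authors' implementation), a standard mean loss scores every prediction sample against every target, so a prediction collapsed at the target mean is optimal; the extreme value loss keeps only the best prediction sample per target, so spreading predictions over the target support wins:

```python
import numpy as np

def mean_loss(preds, targets):
    # Standard loss: average squared error of every prediction sample
    # against every target; the optimum collapses to the target mean.
    return np.mean((preds[:, None] - targets[None, :]) ** 2)

def extreme_value_loss(preds, targets):
    # Extreme value loss: for each target, keep only the minimal squared
    # error over the set of prediction samples, then average over targets.
    return np.mean(np.min((preds[:, None] - targets[None, :]) ** 2, axis=0))

targets = np.array([-1.0, 1.0])    # bimodal target distribution
collapsed = np.array([0.0, 0.0])   # delta function at the target mean
spread = np.array([-1.0, 1.0])     # samples covering the target support

# Under the mean loss, the collapsed prediction is at least as good...
assert mean_loss(collapsed, targets) <= mean_loss(spread, targets)
# ...but under the extreme value loss, covering the support is strictly better.
assert extreme_value_loss(spread, targets) < extreme_value_loss(collapsed, targets)
```

This also makes the abstract's scaling caveat concrete: in higher dimensions, many more prediction samples are needed before the per-target minimum becomes a reliable estimate.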