Rethinking Generative Coverage: A Pointwise Guaranteed Approach

02/13/2019
by Peilin Zhong, et al.

All generative models have to combat missing modes. The conventional wisdom is to reduce a statistical distance (such as an f-divergence) between the generated distribution and the provided data distribution through training. We defy this wisdom. We show that even a small statistical distance does not imply plausible mode coverage, because this distance measures a global similarity between two distributions, but not their similarity in local regions--which is needed to ensure complete mode coverage. From a starkly different perspective, we view the battle against missing modes as a two-player game between a player choosing a data point and an adversary choosing a generator aiming to cover that data point. Enlightened by von Neumann's minimax theorem, we see that if a generative model can approximate a data distribution moderately well under a global statistical distance measure, then we should be able to find a mixture of generators which collectively covers every data point, and thus every mode, with a lower-bounded probability density. A constructive realization of this minimax duality--that is, our proposed algorithm for finding the mixture of generators--is connected to a multiplicative weights update rule. We prove the pointwise coverage guarantee of our algorithm, and our experiments on real and synthetic data confirm better mode coverage over recent approaches that also use a mixture of generators but focus on global statistical distances.
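The mixture construction sketched in the abstract can be illustrated with a minimal multiplicative-weights loop. This is only a schematic, not the paper's actual algorithm: the helpers `fit_generator` (train a generator against a reweighted data distribution) and `covers` (test whether a generator assigns sufficient density to a point) are hypothetical stand-ins for the GAN training and density-estimation steps.

```python
import numpy as np

def mixture_via_mwu(data, fit_generator, covers, rounds=10, eta=0.5):
    """Build a mixture of generators with pointwise coverage via
    multiplicative weights (schematic sketch, not the paper's algorithm).

    data:          sequence of data points
    fit_generator: callable(data, probs) -> generator trained to match the
                   reweighted data distribution (hypothetical)
    covers:        callable(generator, point) -> bool, whether the generator
                   puts enough density on the point (hypothetical)
    """
    n = len(data)
    weights = np.ones(n)          # one multiplicative weight per data point
    generators = []
    for _ in range(rounds):
        probs = weights / weights.sum()
        g = fit_generator(data, probs)   # focus on under-covered points
        generators.append(g)
        # down-weight the points this generator already covers, so later
        # rounds concentrate on the modes still missing
        covered = np.array([covers(g, x) for x in data])
        weights[covered] *= (1.0 - eta)
    # the result is used as a uniform mixture over the trained generators
    return generators
```

With a toy instantiation where each "generator" covers exactly one point, the loop visits every point in turn, mirroring how the reweighting forces later generators toward modes the earlier ones missed.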

