GAT-GMM: Generative Adversarial Training for Gaussian Mixture Models
Generative adversarial networks (GANs) learn the distribution of observed samples through a zero-sum game between two machine players, a generator and a discriminator. While GANs achieve great success in learning the complex distributions of image, sound, and text data, they perform suboptimally on benchmark multi-modal distributions such as Gaussian mixture models (GMMs). In this paper, we propose Generative Adversarial Training for Gaussian Mixture Models (GAT-GMM), a minimax GAN framework for learning GMMs. Motivated by optimal transport theory, we design the zero-sum game in GAT-GMM using a random linear generator and a softmax-based quadratic discriminator architecture, which leads to a nonconvex-concave minimax optimization problem. We show that a gradient descent-ascent (GDA) method converges to an approximate stationary minimax point of the GAT-GMM optimization problem. In the benchmark case of a mixture of two symmetric, well-separated Gaussians, we further show that this stationary point recovers the true parameters of the underlying GMM. We support our theoretical findings with several numerical experiments, which demonstrate that GAT-GMM can perform as well as the expectation-maximization algorithm in learning mixtures of two Gaussians.
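To make the setup concrete, the following is a minimal, hypothetical sketch (not the authors' code) of the kind of game the abstract describes: a linear generator for a symmetric two-component mixture and a softmax (log-sum-exp) discriminator built from quadratic scores, trained with alternating gradient descent-ascent. The specific parameterization here, a fixed identity covariance, a single mean parameter `mu_g`, a discriminator center `v`, and the learning rates and batch size, is assumed for illustration and is not taken from the paper.

```python
# Hypothetical illustration (not the paper's code): gradient descent-ascent (GDA)
# on a GAN-style minimax game for a symmetric two-component Gaussian mixture,
# with a linear generator and a log-sum-exp over quadratic scores as the
# discriminator. All parameter choices are assumptions made for this sketch.
import torch

torch.manual_seed(0)

# Synthetic data: mixture of two symmetric, well-separated unit-covariance Gaussians.
d, n = 2, 4000
mu_true = torch.tensor([3.0, -2.0])
signs = torch.randint(0, 2, (n, 1)).float() * 2 - 1      # +/-1 component labels
data = signs * mu_true + torch.randn(n, d)

# Generator parameter: the (sign-flipped) component mean; covariance is fixed to I.
# Random init avoids the symmetric stationary point at zero.
mu_g = torch.randn(d, requires_grad=True)
# Discriminator parameter: the center of the two quadratic scores.
v = torch.randn(d, requires_grad=True)

def generator_sample(batch):
    # Linear generator: pick a random sign, then add Gaussian noise around +/- mu_g.
    s = torch.randint(0, 2, (batch, 1)).float() * 2 - 1
    return s * mu_g + torch.randn(batch, d)

def discriminator(x):
    # Softmax-based quadratic discriminator: log-sum-exp of two quadratic scores
    # centered at +v and -v (a stand-in for the architecture named in the abstract).
    scores = torch.stack([-0.5 * ((x - v) ** 2).sum(dim=1),
                          -0.5 * ((x + v) ** 2).sum(dim=1)], dim=1)
    return torch.logsumexp(scores, dim=1)

opt_g = torch.optim.SGD([mu_g], lr=0.05)   # descent step (generator)
opt_d = torch.optim.SGD([v], lr=0.05)      # ascent step (discriminator)

def score_gap():
    # Minimax objective: mean discriminator score on real minus fake samples.
    real = data[torch.randint(0, n, (256,))]
    fake = generator_sample(256)
    return discriminator(real).mean() - discriminator(fake).mean()

for step in range(3000):
    # Ascent on the discriminator: widen the real/fake score gap.
    opt_d.zero_grad()
    (-score_gap()).backward()
    opt_d.step()
    # Descent on the generator: shrink the gap.
    opt_g.zero_grad()
    score_gap().backward()
    opt_g.step()

print("generator mean estimate:", mu_g.detach())
print("true component mean (up to sign):", mu_true)
```

This toy alternation is only a stand-in for the GDA analysis in the paper; in this simplified form the final estimate can be compared against `mu_true` (up to sign), but convergence to the true parameters is not guaranteed without the conditions the paper studies.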