GAT-GMM: Generative Adversarial Training for Gaussian Mixture Models

06/18/2020
by Farzan Farnia, et al.

Generative adversarial networks (GANs) learn the distribution of observed samples through a zero-sum game between two machine players, a generator and a discriminator. While GANs achieve great success in learning the complex distributions of image, sound, and text data, they perform suboptimally on benchmark multi-modal distributions such as Gaussian mixture models (GMMs). In this paper, we propose Generative Adversarial Training for Gaussian Mixture Models (GAT-GMM), a minimax GAN framework for learning GMMs. Motivated by optimal transport theory, we design the zero-sum game in GAT-GMM using a random linear generator and a softmax-based quadratic discriminator architecture, which leads to a nonconvex-concave minimax optimization problem. We show that a gradient descent ascent (GDA) method converges to an approximate stationary minimax point of the GAT-GMM optimization problem. In the benchmark case of a mixture of two symmetric, well-separated Gaussians, we further show that this stationary point recovers the true parameters of the underlying GMM. We support our theoretical findings with several numerical experiments, which demonstrate that GAT-GMM can perform as well as the expectation-maximization algorithm in learning mixtures of two Gaussians.
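To make the abstract's setup concrete, here is a minimal one-dimensional sketch of the idea: a linear generator x = s·mu + z (random sign s, Gaussian noise z) plays a zero-sum game against a softmax-style discriminator with a quadratic term, trained by simultaneous gradient descent ascent. This is an illustrative toy, not the paper's exact architecture: the discriminator form log(e^{vx} + e^{-vx}) + a·x², the ridge penalty, the step sizes, and the initialization are all our own simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true = 3.0                       # true component mean: 0.5*N(mu,1) + 0.5*N(-mu,1)
n = 1024                            # batch size per GDA step

def sample_real(n):
    # well-separated symmetric two-Gaussian mixture
    s = rng.choice([-1.0, 1.0], size=n)
    return s * mu_true + rng.standard_normal(n)

def sample_fake(mu, n):
    # "random linear generator": x = s*mu + z, returning s for the chain rule
    s = rng.choice([-1.0, 1.0], size=n)
    return s * mu + rng.standard_normal(n), s

mu, v, a = 1.0, 0.1, 0.0            # illustrative init (mu must be nonzero)
eta_g, eta_d, lam = 0.01, 0.2, 1.0  # generator/discriminator steps, ridge penalty

for _ in range(2000):
    xr = sample_real(n)
    xf, s = sample_fake(mu, n)
    # discriminator ascent on E[D(real)] - E[D(fake)] - (lam/2)(v^2 + a^2),
    # where D(x) = log(exp(v*x) + exp(-v*x)) + a*x^2
    gv = np.mean(np.tanh(v * xr) * xr) - np.mean(np.tanh(v * xf) * xf)
    ga = np.mean(xr**2) - np.mean(xf**2)
    v += eta_d * (gv - lam * v)
    a += eta_d * (ga - lam * a)
    # generator descent: dD/dx = tanh(v*x)*v + 2*a*x, and dx/dmu = s
    gmu = np.mean((np.tanh(v * xf) * v + 2 * a * xf) * s)
    mu += eta_g * gmu               # ascent on E[D(fake)], i.e. descent on the loss

print(abs(mu))                      # typically close to mu_true (up to sign symmetry)
```

The quadratic channel a·x² lets the discriminator compare second moments, which is what pulls |mu| toward mu_true; at the stationary point the mixture is recovered only up to the ±mu label symmetry, matching the abstract's claim for the symmetric two-Gaussian benchmark.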


