SGD Learns One-Layer Networks in WGANs

10/15/2019
by Qi Lei et al.

Generative adversarial networks (GANs) are a widely used framework for learning generative models. Wasserstein GANs (WGANs), one of the most successful variants of GANs, require solving a min-max optimization problem to global optimality, yet in practice they are successfully trained using stochastic gradient descent-ascent. In this paper, we show that, when the generator is a one-layer network, stochastic gradient descent-ascent converges to a global solution in polynomial time and with polynomial sample complexity.
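To make the training procedure concrete, below is a minimal sketch of stochastic gradient descent-ascent on a WGAN-style min-max objective with a one-layer linear generator G(z) = Wz and a quadratic critic f_V(x) = x^T V x. The dimensions, step sizes, and the L2 penalty on the critic (a common surrogate for the Lipschitz constraint) are assumptions of this sketch, not necessarily the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 4, 4                          # data / latent dimensions (illustrative)
W_star = rng.normal(size=(d, k))     # ground-truth one-layer generator

def latent(n):
    """Standard Gaussian latent codes z ~ N(0, I_k)."""
    return rng.normal(size=(n, k))

# WGAN-style min-max objective with a quadratic critic f_V(x) = x^T V x
# and an L2 penalty in place of the Lipschitz constraint (an assumption
# of this sketch):
#   min_W max_V  E[f_V(W_star z)] - E[f_V(W z)] - lam * ||V||_F^2
W = rng.normal(size=(d, k))          # generator weights being learned
V = np.zeros((d, d))                 # critic parameters
eta_g, eta_d, lam, batch = 5e-3, 5e-2, 0.5, 256

for step in range(20000):
    xr = latent(batch) @ W_star.T    # minibatch of real samples
    z = latent(batch)
    xf = z @ W.T                     # minibatch of fakes G(z) = W z
    # stochastic ascent step on the critic:
    #   grad_V = E[x_r x_r^T] - E[x_f x_f^T] - 2 lam V
    grad_V = (xr.T @ xr - xf.T @ xf) / batch - 2.0 * lam * V
    V += eta_d * grad_V
    # stochastic descent step on the generator:
    #   d/dW [ -E z^T W^T V W z ] = -(V + V^T) W E[z z^T]
    grad_W = -(V + V.T) @ W @ (z.T @ z / batch)
    W -= eta_g * grad_W

# With a linear generator and Gaussian latents, only the covariance
# W W^T is identifiable, so compare covariances rather than weights:
print(np.linalg.norm(W @ W.T - W_star @ W_star.T, "fro"))
```

The critic uses a larger step size than the generator, a standard two-timescale choice for simultaneous gradient descent-ascent; since both real and fake distributions are zero-mean Gaussians here, a quadratic critic is the simplest one that can distinguish them.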

Related research

12/01/2020
Convergence and Sample Complexity of SGD in GANs
We provide theoretical convergence guarantees on training Generative Adv...

02/15/2021
WGAN with an Infinitely Wide Generator Has No Spurious Stationary Points
Generative adversarial networks (GAN) are a widely used class of deep ge...

11/23/2018
Kernel-Based Training of Generative Networks
Generative adversarial networks (GANs) are designed with the help of min...

09/25/2019
A Mean-Field Theory for Kernel Alignment with Random Features in Generative Adversarial Networks
We propose a novel supervised learning method to optimize the kernel in ...

10/09/2022
Dissecting adaptive methods in GANs
Adaptive methods are a crucial component widely used for training genera...

07/12/2021
Hidden Convexity of Wasserstein GANs: Interpretable Generative Models with Closed-Form Solutions
Generative Adversarial Networks (GANs) are commonly used for modeling co...

07/07/2019
Fast and Provable ADMM for Learning with Generative Priors
In this work, we propose a (linearized) Alternating Direction Method-of-...
