Convergence and Sample Complexity of SGD in GANs
We provide theoretical convergence guarantees for training Generative Adversarial Networks (GANs) via SGD. We consider learning a target distribution modeled by a 1-layer Generator network with a non-linear activation function ϕ(·) parametrized by a d × d weight matrix 𝐖_*, i.e., f_*(𝐱) = ϕ(𝐖_* 𝐱). Our main result is that training the Generator together with a Discriminator according to the Stochastic Gradient Descent-Ascent (SGDA) iteration proposed by Goodfellow et al. yields a Generator distribution that approaches the target distribution of f_*. Specifically, we can learn the target distribution within total-variation distance ϵ using Õ(d^2/ϵ^2) samples, which is (near-)information-theoretically optimal. Our results apply to a broad class of non-linear activation functions ϕ, including ReLUs, and are enabled by a connection with truncated statistics and an appropriate design of the Discriminator network. Our approach relies on a bilevel optimization framework to show that vanilla SGDA works.
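To make the training procedure concrete, below is a minimal sketch of an SGDA iteration for a one-layer ReLU generator f_W(x) = ReLU(Wx) with Gaussian latent inputs. The simple quadratic moment-matching Discriminator, the loss, and the step sizes used here are illustrative assumptions for exposition only; they are not the specific Discriminator design analyzed in the paper.

```python
# Minimal sketch of Stochastic Gradient Descent-Ascent (SGDA) for a
# one-layer ReLU generator f_W(x) = ReLU(W x), with x ~ N(0, I_d).
# ASSUMPTIONS (not from the paper): the Discriminator is a simple
# first-and-second-moment function D_v(y) = <v, [y; y^2]>, and the
# objective is a plain min-max moment-matching surrogate.

import numpy as np

rng = np.random.default_rng(0)

d = 5                      # data / latent dimension
lr_g, lr_d = 1e-2, 1e-1    # generator / discriminator step sizes (illustrative)
batch, steps = 128, 5000

W_star = rng.standard_normal((d, d)) / np.sqrt(d)   # unknown target weights
W = rng.standard_normal((d, d)) / np.sqrt(d)        # generator weights (learned)
v = np.zeros(2 * d)                                 # discriminator parameters

def gen(Wm, x):
    """One-layer ReLU generator: y = ReLU(W x)."""
    return np.maximum(Wm @ x, 0.0)

def feat(y):
    """Discriminator features: first and second moments of each coordinate."""
    return np.concatenate([y, y ** 2], axis=0)

for t in range(steps):
    x_real = rng.standard_normal((d, batch))   # fresh latents for the target
    x_fake = rng.standard_normal((d, batch))   # fresh latents for the generator

    y_real = gen(W_star, x_real)               # samples from the target distribution
    y_fake = gen(W, x_fake)                    # samples from the current generator

    # Objective: V(W, v) = E[D_v(y_real)] - E[D_v(y_fake)].
    # Discriminator ascends in v, generator descends in W (vanilla SGDA).
    grad_v = feat(y_real).mean(axis=1) - feat(y_fake).mean(axis=1)
    v = v + lr_d * grad_v

    # Generator gradient of V w.r.t. W through the ReLU:
    # dD_v/dy = v[:d] + 2 * v[d:] * y,  dy/dW = 1{W x > 0} x^T.
    dD_dy = v[:d, None] + 2.0 * v[d:, None] * y_fake
    relu_mask = (W @ x_fake > 0).astype(float)
    grad_W = -(dD_dy * relu_mask) @ x_fake.T / batch
    W = W - lr_g * grad_W

# Rough check: compare generator and target first moments on fresh samples.
x_test = rng.standard_normal((d, 10000))
print("mean gap:", np.abs(gen(W, x_test).mean(1) - gen(W_star, x_test).mean(1)).max())
```

This toy Discriminator only matches marginal first and second moments, so the check at the end is a coarse diagnostic; the paper's guarantees in total-variation distance rely on its specific Discriminator construction and the connection to truncated statistics.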