Gradient Layer: Enhancing the Convergence of Adversarial Training for Generative Models

01/07/2018
by   Atsushi Nitanda, et al.

We propose a new technique that boosts the convergence of training generative adversarial networks. In general, the convergence rate of training deep models degrades severely after many iterations. A key reason for this phenomenon is that a deep network is expressed by a highly non-convex finite-dimensional model, so the parameters get stuck in a local optimum. As a consequence, training often suffers not only from slow convergence but also from limitations in the representational power of the trained network. To overcome this issue, we propose an additional layer, called the gradient layer, that seeks a descent direction in an infinite-dimensional function space. Because the layer is constructed in the infinite-dimensional space, it is not restricted by the structure of any particular finite-dimensional model. As a result, we can escape the local optima of finite-dimensional models and move more directly toward the globally optimal function. In this paper, this behavior is explained from the functional gradient method perspective of the gradient layer. Interestingly, the optimization procedure using the gradient layer naturally constructs the deep structure of the network. Moreover, we demonstrate that this procedure can be regarded as a discretization method of the gradient flow that naturally decreases the objective function. Finally, the method is evaluated in several numerical experiments, which show its fast convergence.
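As a rough illustration of the idea, the sketch below appends a non-parametric "gradient layer" on top of a generator: each layer nudges generated samples a small step along the input gradient of a critic, which here stands in for the functional gradient of the objective (an assumption in the spirit of the Wasserstein-GAN setting, not the paper's exact construction; the critic architecture, the step size eta, and the layer sizes are hypothetical choices for illustration). Stacking such layers amounts to taking successive explicit steps of the underlying gradient flow, which is one way to read the claim that the procedure naturally builds a deep network.

```python
import torch
import torch.nn as nn


class GradientLayer(nn.Module):
    """One functional-gradient step applied to generated samples.

    Given a critic D, the layer maps x -> x + eta * grad_x D(x),
    i.e. it pushes each sample a small step in the direction that
    increases the critic's score. This is a sketch of the forward
    refinement step only, not of end-to-end training.
    """

    def __init__(self, critic: nn.Module, eta: float = 0.1):
        super().__init__()
        self.critic = critic
        self.eta = eta  # step size of the explicit gradient-flow step

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Detach so the layer itself has no trainable parameters;
        # we only need the critic's gradient with respect to its input.
        x = x.detach().requires_grad_(True)
        score = self.critic(x).sum()
        (grad,) = torch.autograd.grad(score, x)
        return x + self.eta * grad


if __name__ == "__main__":
    # Hypothetical toy generator and critic for a 2-D target distribution.
    dim = 2
    generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, dim))
    critic = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    z = torch.randn(64, 8)
    samples = generator(z)
    # Appending the layer deepens the generator without adding parameters;
    # stacking several such layers discretizes the gradient flow further.
    refined = GradientLayer(critic, eta=0.1)(samples)
    print(refined.shape)  # torch.Size([64, 2])
```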


