
Kernel-Based Training of Generative Networks

by Kalliopi Basioti et al.
University of Patras

Generative adversarial networks (GANs) are designed around min-max optimization problems that are solved with stochastic gradient-type algorithms, which are known to be non-robust. In this work, we revisit a non-adversarial, kernel-based method that relies on a pure minimization problem, and we propose a simple stochastic gradient algorithm for computing its solution. Using simplified tools from Stochastic Approximation theory, we demonstrate that batch versions of the algorithm and smoothing of the gradient do not improve convergence. These observations allow for the development of a training algorithm that enjoys reduced computational complexity and increased robustness, while exhibiting synthesis characteristics similar to those of classical GANs.
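The abstract does not spell out the kernel or the update rule, but the general recipe it describes (replace the adversarial discriminator with a fixed-kernel distance between generated and real samples, then minimize that distance by plain stochastic gradient descent) can be sketched roughly as follows. The Gaussian kernel, the biased MMD^2 estimator, the one-layer linear generator, and every hyperparameter below are illustrative assumptions for a toy 1-D problem, not the paper's actual choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target distribution: N(2, 0.5^2). The "generator" is the simplest
# possible map g(z) = w*z + b with latent noise z ~ N(0, 1), so a perfect
# fit has b = 2 and |w| = 0.5.
data = 2.0 + 0.5 * rng.standard_normal(256)

def gaussian_kernel(a, b, sigma=1.0):
    """Pairwise Gaussian kernel k(a_i, b_j) = exp(-(a_i-b_j)^2 / (2 sigma^2))
    for 1-D sample vectors; also returns the pairwise differences."""
    d = a[:, None] - b[None, :]
    return np.exp(-d**2 / (2.0 * sigma**2)), d

def mmd2_and_grad(x, y, sigma=1.0):
    """Biased (V-statistic) MMD^2 estimate between generated samples x and
    data y, plus its gradient with respect to each generated sample x_i."""
    n, m = len(x), len(y)
    kxx, dxx = gaussian_kernel(x, x, sigma)
    kxy, dxy = gaussian_kernel(x, y, sigma)
    kyy, _ = gaussian_kernel(y, y, sigma)
    mmd2 = kxx.mean() - 2.0 * kxy.mean() + kyy.mean()
    # d k(a, b) / da = -k(a, b) * (a - b) / sigma^2
    gx = (-2.0 / (n * n * sigma**2)) * (kxx * dxx).sum(axis=1) \
         + (2.0 / (n * m * sigma**2)) * (kxy * dxy).sum(axis=1)
    return mmd2, gx

# Plain SGD on the generator parameters: a pure minimization, no inner
# maximization over a discriminator.
w, b, lr = 1.0, 0.0, 0.2
for step in range(3000):
    z = rng.standard_normal(256)          # fresh latent batch each step
    x = w * z + b                         # generated samples
    mmd2, gx = mmd2_and_grad(x, data)
    # Chain rule through g(z) = w*z + b.
    w -= lr * (gx * z).sum()
    b -= lr * gx.sum()

samples = w * rng.standard_normal(4096) + b
```

After training, `samples` should roughly match the target's mean and spread; the point of the sketch is only that nothing adversarial is needed once the discriminator is replaced by a fixed kernel criterion.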


