Training generative networks using random discriminators

04/22/2019
by Babak Barazandeh, et al.

In recent years, Generative Adversarial Networks (GANs) have drawn a lot of attention for learning the underlying distribution of data in various applications. Despite their wide applicability, training GANs is notoriously difficult. This difficulty stems from the min-max nature of the resulting optimization problem and the lack of proper tools for solving general (non-convex, non-concave) min-max optimization problems. In this paper, we try to alleviate this problem by proposing a new generative network that relies on random discriminators instead of an adversarially trained one. This design avoids the min-max formulation and leads to an optimization problem that is stable and can be solved efficiently. The performance of the proposed method is evaluated on the handwritten digits (MNIST) and fashion products (Fashion-MNIST) data sets. While the resulting images are not as sharp as those from adversarial training, the use of random discriminators leads to a much faster algorithm than its adversarial counterpart. This observation, at a minimum, illustrates the potential of the random discriminator approach as a warm start for training GANs.
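Since the abstract only summarizes the approach, the following is a minimal PyTorch-style sketch of the general idea: a randomly initialized discriminator is kept frozen and used as a feature extractor, and only the generator is trained to match real and generated data in that feature space, so no inner maximization step is needed. The architectures, the feature-matching loss, and names such as `make_random_discriminator` are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

# Illustrative sizes for MNIST-like data (28x28 grayscale, flattened,
# assumed scaled to [-1, 1]).
NOISE_DIM, DATA_DIM, FEAT_DIM = 64, 784, 256

class Generator(nn.Module):
    # Small fully connected generator; the architecture is a placeholder.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 512), nn.ReLU(),
            nn.Linear(512, DATA_DIM), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

def make_random_discriminator():
    # Randomly initialized feature extractor that is never trained:
    # its parameters are frozen, so there is no adversarial max step.
    disc = nn.Sequential(nn.Linear(DATA_DIM, FEAT_DIM), nn.ReLU())
    for p in disc.parameters():
        p.requires_grad_(False)
    return disc

gen = Generator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

def train_step(real_batch, num_random_discs=4):
    # Match mean features of real and generated data under several
    # random discriminators (a simple feature/moment-matching distance;
    # the exact objective in the paper may differ).
    z = torch.randn(real_batch.size(0), NOISE_DIM)
    fake_batch = gen(z)
    loss = 0.0
    for _ in range(num_random_discs):
        disc = make_random_discriminator()
        loss = loss + (disc(real_batch).mean(dim=0)
                       - disc(fake_batch).mean(dim=0)).pow(2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Because the discriminator parameters never receive gradient updates, each training step is a plain minimization over the generator, which is the source of the stability and speed advantage described in the abstract.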

Related research

- A Decentralized Adaptive Momentum Method for Solving a Class of Min-Max Optimization Problems (06/10/2021)
- Designing GANs: A Likelihood Ratio Approach (02/03/2020)
- Understanding Overparameterization in Generative Adversarial Networks (04/12/2021)
- Solving a class of non-convex min-max games using adaptive momentum methods (04/26/2021)
- On the One-sided Convergence of Adam-type Algorithms in Non-convex Non-concave Min-max Optimization (09/29/2021)
- Training GANs with predictive projection centripetal acceleration (10/07/2020)
- Stable Distribution Alignment Using the Dual of the Adversarial Distance (07/13/2017)
