Training Generative Adversarial Networks with Weights

11/06/2018
by Yannis Pantazis, et al.

The impressive success of Generative Adversarial Networks (GANs) is often overshadowed by the difficulties of training them. Despite continuous efforts and improvements, open issues remain regarding their convergence properties. In this paper, we propose a simple training variation in which suitable weights are defined to assist the training of the Generator. We provide theoretical arguments for why the proposed algorithm improves on baseline training, both by speeding up the training process and by producing a stronger Generator. Performance results show that the new algorithm is more accurate on both synthetic and image datasets, with improvements ranging between 5
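To make the idea concrete, here is a minimal sketch of what a weighted Generator update could look like. The specific weighting scheme below (weights derived from the Discriminator's scores on generated samples, emphasizing samples the Discriminator rejects most confidently) is a hypothetical illustration, not the scheme defined in the paper:

```python
import numpy as np

def weighted_generator_loss(d_fake, eps=1e-8):
    """Per-batch Generator loss with per-sample weights.

    d_fake: Discriminator outputs D(G(z)) in (0, 1) for a batch of
    generated samples.
    """
    # Non-saturating per-sample generator loss: -log D(G(z)).
    per_sample = -np.log(d_fake + eps)
    # Hypothetical weighting: upweight samples the Discriminator
    # rejects most confidently (low D(G(z))), normalized to sum to 1.
    w = 1.0 - d_fake
    w = w / w.sum()
    # Baseline training would use a uniform 1/N average instead.
    return float(np.sum(w * per_sample))

d_fake = np.array([0.9, 0.1, 0.5])
baseline = float(np.mean(-np.log(d_fake + 1e-8)))
weighted = weighted_generator_loss(d_fake)
```

Because poorly scored samples carry larger weights, the weighted loss is larger than the uniform average whenever the Discriminator's scores are unequal, so the Generator's gradient concentrates on the samples it currently fools least.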

