Unbalanced GANs: Pre-training the Generator of Generative Adversarial Network using Variational Autoencoder

02/06/2020
by Hyungrok Ham, et al.

We propose Unbalanced GANs, which pre-train the generator of a generative adversarial network (GAN) using a variational autoencoder (VAE). This pre-training guarantees stable training of the generator by preventing the discriminator from converging faster at early epochs. It also balances the generator and the discriminator at early epochs and thus maintains stable GAN training. We apply Unbalanced GANs to well-known public datasets and find that they reduce mode collapse. We also show that Unbalanced GANs outperform ordinary GANs in terms of stable learning, faster convergence, and better image quality at early epochs.
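The core idea, pre-training a VAE and transferring its decoder weights to the GAN generator, can be sketched as follows. This is a minimal illustrative PyTorch sketch, not the paper's implementation: the real architectures are convolutional image models, and all names, dimensions, and hyperparameters below are assumptions.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 32  # illustrative sizes, not the paper's

# The VAE decoder shares its architecture with the GAN generator,
# so its trained weights can be copied over directly.
class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim)
        )

    def forward(self, z):
        return self.net(z)

class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(data_dim, 2 * latent_dim)  # outputs mean and log-variance
        self.dec = Decoder()

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar

vae = VAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
x = torch.randn(64, data_dim)  # stand-in data; real pre-training would use the image dataset

# Brief VAE pre-training: reconstruction loss + KL divergence to the prior.
for _ in range(100):
    recon, mu, logvar = vae(x)
    rec_loss = ((recon - x) ** 2).mean()
    kld = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
    loss = rec_loss + kld
    opt.zero_grad()
    loss.backward()
    opt.step()

# Initialize the GAN generator from the pre-trained decoder; adversarial
# training then starts from this warm start instead of random weights.
generator = Decoder()
generator.load_state_dict(vae.dec.state_dict())
```

The intended effect is that at epoch 0 the generator already produces roughly data-like samples, so the discriminator cannot immediately dominate and training stays balanced.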

