Perturbative GAN: GAN with Perturbation Layers

02/05/2019
by Yuma Kishi, et al.

Perturbative GAN, which replaces the convolution layers of existing convolutional GANs (DCGAN, WGAN-GP, BigGAN, etc.) with perturbation layers that add a fixed noise mask, is proposed. Compared with the convolutional GANs, the number of parameters to be trained is smaller, training converges faster, the Inception Score of the generated images is higher, and the overall training cost is reduced. Algorithmic generation of the noise masks is also proposed, with which both training and generation can be accelerated in hardware. Perturbative GAN is evaluated on conventional datasets (CIFAR10, LSUN, ImageNet), both when a perturbation layer is adopted only in the Generator and when it is introduced into both the Generator and the Discriminator.
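The abstract does not spell out the layer itself; as a rough sketch, a perturbation layer in the spirit of Perturbative Neural Networks adds a fixed, non-trainable noise mask to its input, applies a nonlinearity, and mixes the perturbed channels with a trainable 1x1 convolution. The PyTorch framework, the class name PerturbationLayer, and the noise_level parameter below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a perturbation layer (PNN-style); not the authors' code.
# A fixed random noise mask stands in for the spatial convolution kernel; only the
# 1x1 channel-mixing convolution is trained.
import torch
import torch.nn as nn


class PerturbationLayer(nn.Module):
    def __init__(self, in_channels, out_channels, height, width, noise_level=0.1):
        super().__init__()
        # Fixed noise mask, one per input channel; registered as a buffer so it is
        # saved with the model but never updated by the optimizer.
        noise = noise_level * torch.randn(1, in_channels, height, width)
        self.register_buffer("noise_mask", noise)
        self.act = nn.ReLU(inplace=True)
        # Trainable 1x1 convolution mixes the perturbed channels.
        self.mix = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        # Add the fixed mask, apply the nonlinearity, then mix channels.
        return self.mix(self.act(x + self.noise_mask))


# Usage sketch: drop-in replacement for a 3x3 convolution in a GAN block.
layer = PerturbationLayer(in_channels=64, out_channels=128, height=16, width=16)
y = layer(torch.randn(8, 64, 16, 16))  # -> shape (8, 128, 16, 16)
```

Because the noise mask is registered as a buffer, only the 1x1 convolution contributes trainable parameters, which is consistent with the abstract's claim of a smaller parameter count than a convolutional GAN.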

Related research

FCC-GAN: A Fully Connected and Convolutional Net Architecture for GANs (05/07/2019)
Generative Adversarial Networks (GANs) are a powerful class of generativ...

Which Training Methods for GANs do actually Converge? (01/13/2018)
Recent work has shown local convergence of GAN training for absolutely c...

On the convergence properties of GAN training (01/13/2018)
Recent work has shown local convergence of GAN training for absolutely c...

Linear Discriminant Generative Adversarial Networks (07/25/2017)
We develop a novel method for training of GANs for unsupervised and clas...

Inflating 2D Convolution Weights for Efficient Generation of 3D Medical Images (08/08/2022)
The generation of three-dimensional (3D) medical images can have great a...

Private GANs, Revisited (02/06/2023)
We show that the canonical approach for training differentially private ...

3D generation on ImageNet (03/02/2023)
Existing 3D-from-2D generators are typically designed for well-curated s...
