Generative Adversarial Networks with Inverse Transformation Unit

09/27/2017
by Zhifeng Kong, et al.

In this paper we introduce a new structure for Generative Adversarial Networks by adding an inverse transformation unit behind the generator. We present two theorems that establish the convergence of the model, and two conjectures for non-ideal situations where the transformation is not a bijection. A general survey of models with different transformations was conducted on the MNIST and Fashion-MNIST datasets, which shows that the transformation does not necessarily need to be a bijection. Moreover, with certain transformations that blur an image, our model successfully learned to sharpen images and recover blurred images, which was further verified by our measurement of sharpness.
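As a rough illustration of the structure described in the abstract, the sketch below shows a GAN training step in which the generator output is passed through a fixed, non-learned transformation before it reaches the discriminator, so that matching the transformed samples to the real data pushes the raw generator output toward "pre-transformation" (e.g., sharpened) images. The network architectures, hyperparameters, and the `blur` helper are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch (assumed structure): the generator output G(z) is passed through
# a fixed transformation f (here a simple 3x3 average blur standing in for the
# paper's inverse transformation unit) before being shown to the discriminator.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 64  # illustrative hyperparameter


class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Tanh(),  # MNIST-sized output in [-1, 1]
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, 28, 28)


class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x.view(x.size(0), -1))


def blur(x):
    """Fixed (non-learned) transformation f: a simple 3x3 average blur."""
    kernel = torch.full((1, 1, 3, 3), 1.0 / 9.0, device=x.device)
    return F.conv2d(x, kernel, padding=1)


G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()


def train_step(real):
    """One adversarial update; `real` is a batch of images scaled to [-1, 1]."""
    batch = real.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator: real images vs. f(G(z)), the transformed generator output.
    z = torch.randn(batch, LATENT_DIM)
    fake_transformed = blur(G(z)).detach()
    loss_d = bce(D(real), ones) + bce(D(fake_transformed), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: make f(G(z)) indistinguishable from real data, which pushes
    # the raw G(z) toward "pre-blur" (sharpened) images.
    z = torch.randn(batch, LATENT_DIM)
    loss_g = bce(D(blur(G(z))), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

Because the discriminator only ever compares transformed generator samples against real images, the generator itself is driven toward outputs that the blur would map back onto the data distribution, which corresponds to the sharpening and deblurring behavior the abstract reports.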

Related research

03/13/2021
Unsupervised Image Transformation Learning via Generative Adversarial Networks
In this work, we study the image transformation problem by learning the ...

08/09/2020
Intervention Generative Adversarial Networks
In this paper we propose a novel approach for stabilizing the training p...

01/14/2018
Non-Parametric Transformation Networks
ConvNets have been very effective in many applications where it is requi...

12/12/2019
COEGAN: Evaluating the Coevolution Effect in Generative Adversarial Networks
Generative adversarial networks (GAN) present state-of-the-art results i...

09/27/2018
Morpho-MNIST: Quantitative Assessment and Diagnostics for Representation Learning
Revealing latent structure in data is an active field of research, havin...

07/07/2020
3D Topology Transformation with Generative Adversarial Networks
Generation and transformation of images and videos using artificial inte...

05/04/2020
Transforming and Projecting Images into Class-conditional Generative Networks
We present a method for projecting an input image into the space of a cl...
