Inverting The Generator Of A Generative Adversarial Network

11/17/2016
by Antonia Creswell, et al.

Generative adversarial networks (GANs) learn to synthesise new samples from a high-dimensional distribution by passing samples drawn from a latent space through a generative network. When the high-dimensional distribution describes images of a particular data set, the network should learn to generate visually similar image samples for latent variables that are close to each other in the latent space. For tasks such as image retrieval and image classification, it may be useful to exploit the arrangement of the latent space by projecting images into it, and using this as a representation for discriminative tasks. GANs often consist of multiple layers of non-linear computations, making them very difficult to invert. This paper introduces techniques for projecting image samples into the latent space using any pre-trained GAN, provided that the computational graph is available. We evaluate these techniques on both MNIST digits and Omniglot handwritten characters. In the case of MNIST digits, we show that projections into the latent space maintain information about the style and the identity of the digit. In the case of Omniglot characters, we show that even characters from alphabets that have not been seen during training may be projected well into the latent space; this suggests that this approach may have applications in one-shot learning.
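The core idea of projecting an image into the latent space — given only the generator's computational graph — can be sketched as gradient descent on a latent vector that minimises the reconstruction error. The toy generator below (a single tanh layer with random weights) and all names in it are illustrative assumptions, not the paper's architecture; the paper works with deep convolutional generators trained on MNIST and Omniglot.

```python
import numpy as np

# Hypothetical toy setup: a fixed, differentiable "generator" G(z) = tanh(W z).
# The real generators in the paper are deep CNNs; only the inversion recipe
# (gradient descent on z against a reconstruction loss) is the point here.
rng = np.random.default_rng(0)
W = 0.7 * rng.normal(size=(16, 4))  # maps a 4-d latent vector to a 16-d "image"

def generate(z):
    """Toy generator G(z) = tanh(W z)."""
    return np.tanh(W @ z)

def invert(x_target, steps=3000, lr=0.02):
    """Project x_target into latent space: minimise ||G(z) - x_target||^2 over z."""
    z = np.zeros(4)                      # initialise in the latent space
    for _ in range(steps):
        x = np.tanh(W @ z)
        resid = x - x_target
        # Chain rule: d/dz ||tanh(Wz) - x*||^2 = 2 W^T [resid * (1 - tanh(Wz)^2)]
        grad = 2.0 * W.T @ (resid * (1.0 - x ** 2))
        z -= lr * grad
    return z

z_true = rng.normal(size=4)              # a latent code we pretend not to know
x = generate(z_true)                     # the "image" we want to project back
z_hat = invert(x)                        # recovered latent code
err = float(np.linalg.norm(generate(z_hat) - x))
```

With a well-behaved generator the reconstruction error drops close to zero, and `z_hat` can then serve as a representation of `x` for retrieval or classification, as the abstract suggests. For real GANs the same loop is run with automatic differentiation through the pre-trained generator, with its weights frozen.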

