Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space

11/30/2016
by Anh Nguyen, et al.

Generating high-resolution, photo-realistic images has been a long-standing goal in machine learning. Recently, Nguyen et al. (2016) showed one interesting way to synthesize novel images by performing gradient ascent in the latent space of a generator network to maximize the activations of one or multiple neurons in a separate classifier network. In this paper we extend this method by introducing an additional prior on the latent code, improving both sample quality and sample diversity, leading to a state-of-the-art generative model that produces high quality images at higher resolutions (227x227) than previous generative models, and does so for all 1000 ImageNet categories. In addition, we provide a unified probabilistic interpretation of related activation maximization methods and call the general class of models "Plug and Play Generative Networks" (PPGNs). PPGNs are composed of 1) a generator network G that is capable of drawing a wide range of image types and 2) a replaceable "condition" network C that tells the generator what to draw. We demonstrate the generation of images conditioned on a class (when C is an ImageNet or MIT Places classification network) and also conditioned on a caption (when C is an image captioning network). Our method also improves the state of the art of Multifaceted Feature Visualization, which generates the set of synthetic inputs that activate a neuron in order to better understand how deep neural networks operate. Finally, we show that our model performs reasonably well at the task of image inpainting. While image models are used in this paper, the approach is modality-agnostic and can be applied to many types of data.
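The core procedure the abstract describes — iterative gradient ascent on a latent code z so that the generated image maximizes a chosen neuron of a separate condition network, regularized by a prior on z — can be illustrated with a minimal sketch. This is not the paper's actual PPGN implementation; the linear stand-ins for G and C and the simple Gaussian prior are assumptions made purely to keep the example self-contained with analytic gradients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear stand-ins for the paper's networks:
# generator G(z) = W_g @ z maps a latent code to an "image",
# condition network score(x) = w_c @ x gives a class score.
W_g = rng.standard_normal((100, 16))
w_c = rng.standard_normal(100)

z = np.zeros(16)
step, prior_weight = 0.01, 0.1

for _ in range(200):
    # Gradient of the class score w.r.t. z for the linear toy model,
    # plus the gradient of a Gaussian prior term -prior_weight * ||z||^2
    # (a simple stand-in for the learned prior on the latent code).
    grad = W_g.T @ w_c - 2 * prior_weight * z
    z += step * grad  # gradient ascent in latent space

image = W_g @ z       # synthesized "image" after optimization
score = w_c @ image   # class score the ascent has been maximizing
```

The prior term keeps z from drifting to extreme values where the generator produces unrealistic samples; in the paper this role is played by a learned prior rather than the fixed Gaussian penalty used here.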


