Generative adversarial networks (GANs) [radford2015unsupervised; goodfellow2014generative] are a class of generative model able to generate realistic-looking images of faces, digits and street numbers [radford2015unsupervised]. GANs involve training two networks: a generator, G, and a discriminator, D. The generator, G, is trained to generate images from a random vector z drawn from a prior distribution, P(Z). The prior is often chosen to be a normal or uniform distribution.
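For concreteness, sampling from either prior can be sketched as follows. This is an illustrative snippet only: the batch size of 64 and the 100-dimensional z are assumptions, and NumPy stands in for whatever framework the GAN uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_prior(batch_size, dim, prior="uniform"):
    """Draw a batch of latent vectors z from the chosen prior P(Z)."""
    if prior == "uniform":
        return rng.uniform(-1.0, 1.0, size=(batch_size, dim))  # z ~ U(-1, 1)
    return rng.standard_normal(size=(batch_size, dim))          # z ~ N(0, 1)

z_batch = sample_prior(64, 100, prior="uniform")
```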
Radford et al. [radford2015unsupervised] demonstrated that generative adversarial networks (GANs) learn a "rich linear structure", meaning that algebraic operations in z-space often lead to meaningful generations in image space. Since images represented in z-space are often meaningful, direct access to a z for a given image x may be useful for discriminative tasks such as retrieval or classification. Recently, it has also become desirable to access z-space in order to manipulate original images [zhu2016generative]. Further, inverting the generator may provide interesting insights into what the GAN model learns. Thus, there are many reasons to invert the generator.
Mapping from image space, x, to z-space is non-trivial, as it requires inversion of the generator, which is often a many-layered, non-linear model [radford2015unsupervised; goodfellow2014generative; chen2016infogan]. Dumoulin et al. [dumoulin2016adversarially] and Donahue et al. [donahue2016adversarial] proposed learning a third, encoding network alongside the generator and discriminator to map image samples back to z-space. Collectively, they demonstrated results on MNIST, ImageNet, CIFAR-10, SVHN and CelebA. However, reconstructions from these inversions are often poor. Specifically, reconstructions of inverted MNIST digits using the methods of Donahue et al. [donahue2016adversarial] often fail to preserve the style and character class of the original digit. Drawbacks of this approach include the need to train a third network, which increases the number of parameters to be learned and with it the chance of over-fitting. The need to train an extra network also means that inversion cannot be performed on pre-trained networks.
We propose an alternative approach to generator inversion which makes the following improvements:
We infer z-space representations for images which, when passed through the generator, produce samples that are visually similar to the images from which they were inferred. For MNIST digits, our proposed inversion technique ensures that digits generated from inferred z's maintain both the style and the character class of the original image, better than in previous work [donahue2016adversarial].
Our approach can be applied to a pre-trained generator provided that the computational graph for the network is available.
We also show that batches of z samples can be inferred from batches of image samples, which improves the efficiency of the inversion process by allowing multiple images to be inverted in parallel. In the case where a network is trained using batch normalisation, it may also be necessary to invert a batch of samples.
Inversion is achieved by finding a vector z* which, when passed through the generator, produces an image that is very similar to the target image, x.
2 Method: Inverting The Generator
For an image x we want to infer the z-space representation, z*, which when passed through the trained generator G produces an image very similar to x. We refer to the process of inferring z from x as inversion. This can be formulated as a minimisation problem:

z^* = \min_z E(x, G(z))    (1)

where E(x, G(z)) is the reconstruction loss between the target x and the generated image G(z); we use the binary cross-entropy, computed per pixel:

E(x, G(z)) = -\frac{1}{N} \sum_{i=1}^{N} x_i \log G(z)_i + (1 - x_i) \log(1 - G(z)_i)    (2)
Provided that the computational graph for G(z) is known, z* can be calculated via gradient descent, taking the gradient of E w.r.t. z. This is detailed in Algorithm 1.
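As a sketch of this gradient-descent inversion, the following uses a toy one-layer "generator" (a fixed linear map followed by a sigmoid) so that the gradient of the binary cross-entropy with respect to z can be written analytically. A real generator would be a trained multi-layer network and the gradient would come from automatic differentiation; the sizes, learning rate and step count here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy deterministic "generator" G(z) = sigmoid(W z + b), standing in for a
# trained network; only gradients through G are required for inversion.
W = rng.standard_normal((784, 100)) * 0.05
b = np.zeros(784)

def G(z):
    return 1.0 / (1.0 + np.exp(-(W @ z + b)))

def bce(x, g, eps=1e-8):
    """Binary cross-entropy reconstruction loss E(x, G(z)), averaged over pixels."""
    return -np.mean(x * np.log(g + eps) + (1 - x) * np.log(1 - g + eps))

def invert(x, steps=500, lr=200.0):
    """Gradient descent on z to minimise E(x, G(z))."""
    z = rng.standard_normal(100)            # initialise z by sampling the prior
    for _ in range(steps):
        g = G(z)
        grad_z = W.T @ (g - x) / x.size     # analytic dE/dz for sigmoid + BCE
        z = z - lr * grad_z                 # gradient descent update on z
    return z

x_target = G(rng.standard_normal(100))      # an image the generator can produce
z_star = invert(x_target)
```

Because the target here was itself generated by G, the reconstruction G(z_star) can match x_target almost exactly; for arbitrary images the minimisation finds the closest image the generator can produce.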
Provided that the generator is deterministic, each z value maps to a single image, x; a single z cannot map to multiple images. However, it is possible that a single x may map to several z representations, particularly if the generator has collapsed [salimans2016improved]. This suggests that there may be multiple z values that describe a single image. This is very different from a discriminative model, in which multiple images x may often be described by the same representation vector [mahendran2015understanding], particularly when the discriminative model learns representations tolerant to variations.
The approach of Alg. 1 is similar in spirit to that of Mahendran et al. [mahendran2015understanding]; however, instead of inverting a representation to obtain the image that was responsible for it, we invert an image to discover the latent representation that generated it.
2.1 Effects Of Batch Normalisation
GAN training is non-trivial because the optimal solution is a saddle point rather than a minimum [salimans2016improved]. Radford et al. [radford2015unsupervised] suggest that more stable GAN training is achieved when using batch normalisation [ioffe2015batch]. Batch normalisation involves calculating a mean and standard deviation over a batch of outputs from a convolutional layer and adjusting the mean and standard deviation using learned weights. If a single z value is passed through a batch normalisation layer, the output of the layer may be meaningless. To prevent this problem, it would be ideal to use virtual batch normalisation [salimans2016improved], where statistics are calculated over a separate reference batch. However, we want our technique to apply to any pre-trained network, including those where virtual batch normalisation was not employed. To counteract the effects of batch normalisation, we propose inverting a mixed batch of image samples at a time. This not only deals with the problems caused by batch normalisation, but also allows multiple image samples to be inverted in parallel.
2.2 Inverting A Batch Of Samples
Not only does inverting a batch of samples make sense when networks use batch normalisation, it is also a practical way to invert many images at once. We now show that this approach is a legitimate way to update many z values in one go.
Let {z_1, ..., z_B} be a batch of B samples of z. This will map to a batch of image samples {x_1, ..., x_B}. For each pair (z_b, x_b), a reconstruction loss E_b may be calculated. The update for z_b would then be:

z_b \leftarrow z_b - \alpha \frac{\partial E_b}{\partial z_b}    (3)
If reconstruction loss is calculated over the batch, then the batch reconstruction loss is \sum_{b=1}^{B} E_b, and the update would be:

z_b \leftarrow z_b - \alpha \frac{\partial}{\partial z_b} \sum_{i=1}^{B} E_i    (4)
Each reconstruction loss E_b depends only on G(z_b), so E_b depends only on z_b, which means \partial E_i / \partial z_b = 0 for all i \neq b. Note that this may not strictly be true when batch normalisation is applied to outputs of convolutional layers in the generative model, since batch statistics are used to normalise these outputs. However, provided that the batch is sufficiently large, we assume that its statistics are approximately constant parameters of the dataset, rather than being dependent on the specific z values in the batch. This shows that z_b is updated only by its own reconstruction loss E_b, and the other losses do not contribute to its update, making batch updates a valid approach.
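The claim that \partial E_i / \partial z_b = 0 for i \neq b (in the absence of batch normalisation) can be checked numerically. The toy generator and sizes below are illustrative assumptions; the check compares the finite-difference gradient of the summed batch loss with that of the corresponding single-sample loss.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy per-sample generator and reconstruction loss (stand-ins for G and E).
W = rng.standard_normal((6, 5))

def G(z):
    return 1.0 / (1.0 + np.exp(-(W @ z)))

def loss(z, x):
    g = G(z)
    return -np.mean(x * np.log(g) + (1 - x) * np.log(1 - g))

def batch_loss(Z, X):
    # Total reconstruction loss over the batch: sum_b E_b.
    return sum(loss(Z[b], X[b]) for b in range(len(Z)))

def num_grad(f, z, eps=1e-6):
    # Central finite-difference gradient of f at z.
    g = np.zeros_like(z)
    for i in range(z.size):
        zp, zm = z.copy(), z.copy()
        zp[i] += eps
        zm[i] -= eps
        g[i] = (f(zp) - f(zm)) / (2 * eps)
    return g

Z = rng.standard_normal((4, 5))
X = rng.uniform(0.1, 0.9, size=(4, 6))

# Gradient of the total batch loss w.r.t. z_0 vs gradient of E_0 alone.
g_batch = num_grad(lambda z: batch_loss(np.vstack([z[None], Z[1:]]), X), Z[0])
g_single = num_grad(lambda z: loss(z, X[0]), Z[0])
```

The two gradients agree because the other samples' losses enter the batch loss only as constants with respect to z_0.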
2.3 Using Prior Knowledge Of P(Z)
A GAN is trained to generate samples x = G(z), where the distribution over z is a chosen prior distribution, P(Z). P(Z) is often a Gaussian or uniform distribution. If P(Z) is a uniform distribution, U[a, b], then after updating z, it can be clipped to lie between [a, b]. This ensures that z lies in the probable regions of P(Z). If P(Z) is a Gaussian distribution, N(μ, σ), regularisation terms may be added to the cost function, penalising samples whose statistics are not consistent with P(Z).
z is a vector of length d. If each of the d values in z is drawn independently from an identical distribution, and provided that d is sufficiently large, we may use the statistics of the values in z to add regularisation terms to the loss function. For instance, if P(Z) is a distribution with mean μ and standard deviation σ, we get the new loss function:

\mathcal{L}(z) = E(x, G(z)) + \beta_\mu (\hat{\mu} - \mu)^2 + \beta_\sigma (\hat{\sigma} - \sigma)^2    (5)

where \hat{\mu} is the mean of the elements of z, \hat{\sigma} is the standard deviation of the elements of z, and \beta_\mu, \beta_\sigma are weights.
Since d is often quite small (e.g. d = 100 [radford2015unsupervised]), it is unrealistic to expect the statistics of a single z to match those of the prescribed prior. However, since we are able to update a batch of z samples at a time, we can calculate \hat{\mu} and \hat{\sigma} over all the samples in a batch to obtain more meaningful statistics.
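The two constraints can be sketched as follows. The prior bounds, target statistics and weights β_μ = β_σ = 1 are illustrative assumptions, and the statistics are computed over the whole batch as described above.

```python
import numpy as np

def clip_to_prior(Z, lo=-1.0, hi=1.0):
    """Hard constraint for a uniform prior U[lo, hi]: after each gradient
    update, clip the batch of latent samples back into [lo, hi]."""
    return np.clip(Z, lo, hi)

def regularised_loss(recon_loss, Z, mu=0.0, sigma=1.0, beta_mu=1.0, beta_sigma=1.0):
    """Soft constraint for a Gaussian prior N(mu, sigma): penalise batch
    statistics that drift from the prior's mean and standard deviation."""
    mu_hat = Z.mean()       # mean over all elements of the batch
    sigma_hat = Z.std()     # standard deviation over all elements of the batch
    return recon_loss + beta_mu * (mu_hat - mu) ** 2 + beta_sigma * (sigma_hat - sigma) ** 2
```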
3 Relation to Previous Work
This approach of inferring z from x bears similarities to the work of Zhu et al. [zhu2016generative]; we now highlight the differences between the two approaches and the benefits of our approach over theirs. Primarily, we address issues related to batch normalisation by showing that a mixed batch of image samples can be inverted to obtain latent encodings. Potential problems encountered when using batch normalisation are not discussed by Zhu et al. [zhu2016generative].
The generator of a GAN is trained to generate image samples from a z drawn from a prior distribution, P(Z). This means that some z values are more probable than others. It makes sense, then, that the inferred z's should also come from (or at least lie near) P(Z). We introduce hard and soft constraints to be used during the optimisation process, to encourage inferred z's to be likely under the prior distribution P(Z). Two common priors used when training GANs are the uniform and normal distributions; we show that our method copes with both of these priors.
Specifically, Zhu et al. [zhu2016generative] calculate reconstruction loss by comparing the features of x and G(z) extracted from layers of AlexNet, a CNN trained on natural scenes. This approach is likely to fail if the generated samples are not of natural scenes (e.g. MNIST digits). Our approach uses pixel-wise loss, making it generic to the dataset. Further, if our intention is to use inversion to better understand the GAN model, it may be essential not to incorporate information from other pre-trained networks into the inversion process.
4 “Pre-trained” Models
We train four models on two datasets, MNIST and Omniglot [lake2015human]. In order to compare the effects of regularisation or clipping when using a normal or uniform prior distribution respectively, we train networks on each dataset using each prior, totalling four models.
The MNIST dataset consists of 70k samples of handwritten digits, 0 to 9. The dataset is split into 60k samples for training and 10k samples for testing. Both the training and testing datasets contain examples of all digits, 0 to 9.
The generator and discriminator networks for learning MNIST digits are detailed in Table 1. The networks were trained using Adam updates on the 60k MNIST training samples, covering all digit categories.
Table 1: Generator and discriminator architectures for MNIST.

| Generator | Discriminator |
|---|---|
| fully connected 1024 units + batch norm + relu | conv 64,5,5 + down-sample + batch norm + relu |
| fully connected 6272 units + batch norm + relu | conv 1,5,5 + down-sample + batch norm + relu |
| conv 64,5,5 + upsample + batch norm + leaky relu(0.2) | fully connected 1024 units + leaky relu(0.2) |
| conv 128,5,5 + upsample + batch norm + leaky relu(0.2) | fully connected 1 unit + leaky relu(0.2) |
Fig. 1 shows examples of random generations for MNIST networks trained using uniform and normal distributions.
The Omniglot dataset [lake2015human] consists of 1623 characters from 50 different alphabets, where each alphabet has at least 14 different characters. The Omniglot dataset has a background dataset, used for training, and a test dataset. The background set consists of characters from 30 writing systems, while the test dataset consists of characters from the other 20. Note that characters in the training and testing datasets come from different writing systems. The generator and discriminator networks for learning Omniglot characters [lake2015human] are the same as those used in previous work [creswell2016task]. The network is trained only on the background dataset, using random batches and Adam updates. The latent encoding, z, has dimension d.
5 Experiments
These experiments are designed to evaluate the proposed inversion process. A valid inversion process should map an image sample, x, to a z*, such that when z* is passed through the generative part of the GAN, it produces an image, G(z*), that is close to the original image, x.
In our experiments, we selected a random batch of images, x, and applied inversion to the generator network using this batch. We performed inversion on four generators: two trained to generate MNIST digits and two trained to generate Omniglot characters. In each case, the networks were trained to generate samples from z, with P(Z) being either a uniform or a normal distribution.
To invert a batch of image samples, we minimised the cost function described by Eqn. 1. In these experiments we examined the necessity of regularisation or clipping in the minimisation process. If samples can be inverted without regularisation or clipping, then the technique may be considered general to the latent prior, P(Z), used to train the GAN.
Minimising Binary Cross-Entropy: We performed inversion where the cost function consisted only of the binary cross-entropy between the image sample and its reconstruction. For this approach to be general to the noise process used for the latent space, we would hope that image samples can be inverted well by minimising binary cross-entropy alone, without any hard or soft constraints on the inferred z's.
Regularisation and Clipping: GANs are trained to generate images from a prior distribution, P(Z). Therefore it may make sense to place constraints on the z's inferred during inversion. However, the constraints needed depend on the distribution of the noise source. These experiments deal with two distributions commonly used when training GANs: the uniform and Gaussian distributions. For generators trained using a uniform distribution, we compare inversion with and without clipping. For generators trained using a Gaussian distribution, we compare inversion with and without regularisation, as described by Eqn. 5.
5.1 Evaluation Methods
We quantitatively evaluate the quality of image reconstruction by taking the mean absolute pixel error across all reconstructions for each reconstruction method. For qualitative evaluation, we show pairs of x and their reconstructions, G(z*). By visualising the inversions, we can assess to what extent the digit or character identity is preserved. With the MNIST dataset, we can also visually assess whether digit style is preserved.
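The quantitative measure can be written directly; a minimal sketch, with image batches assumed to be arrays of pixel values:

```python
import numpy as np

def mean_abs_pixel_error(X, X_rec):
    """Mean absolute pixel error between a batch of images X and
    their reconstructions X_rec = G(z*)."""
    return np.mean(np.abs(X - X_rec))
```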
Each MNIST digit is drawn in a unique style; a successful inversion of MNIST digits should preserve both the style and the identity of the digit. In Fig. 2, we show a random set of pairs of original images, x, and their reconstructions, G(z*). In general, the inversions preserve both style and identity well. From visual inspection alone, it is not clear whether the regularisation methods improve the inversion process. Table 2 records the mean absolute reconstruction error. The results suggest that the regularisation techniques we employed did not improve the inversion. This is a positive result, as it suggests that inversion is possible without regularisation, meaning that the inversion process can be independent of the noise process used. It also suggests that regions just outside of P(Z) may still produce meaningful samples from the data distribution.
| Uniform prior: without clipping | Uniform prior: with clipping | Normal prior: without regularisation | Normal prior: with regularisation |
|---|---|---|---|
The Omniglot inversions are particularly challenging, as we are trying to find a set of z's for a set of characters, x, from alphabets that were not in the training data. This challenges the inversion process to invert samples from alphabets it has not seen before, using information about the alphabets that it has seen. The original and reconstructed samples are shown in Fig. 3. In general, the reconstructions are sharp and capture fine details such as small circles and edges. There is one severe failure case in Fig. 3(b), where the inversion of the top example has failed. A comparison of reconstruction error with and without regularisation is shown in Table 3. These results suggest that regularisation does not improve inversion, and that good inversion is possible without it.
| Uniform prior: without clipping | Uniform prior: with clipping | Normal prior: without regularisation | Normal prior: with regularisation |
|---|---|---|---|
6 Conclusion
The generator of a GAN learns the mapping G : z → x from z-space to image space. It has been shown that z values which are close in z-space produce images that are visually similar in image space [radford2015unsupervised], and that images along projections in z-space also share visual similarities [radford2015unsupervised]. To exploit the structure of z-space for discriminative tasks, it is necessary to invert this mapping, obtaining a latent encoding z for an image x. Inverting the generator also reveals interesting properties of the learned generative model.
We suggest a process for inverting the generator of any pre-trained GAN, obtaining a latent encoding for image samples, provided that the computational graph for the GAN is available. We presented candidate regularisation methods that can be used depending on the prior distribution over the latent space. However, we found for the MNIST and Omniglot datasets that regularisation is not necessary to perform the inversion, which means that this approach may be applied more generally.
For GANs trained using batch normalisation, where only the gain and shift are learned while the mean and standard deviation are calculated on the fly, it may not be possible to invert single image samples. If this is the case, it is necessary to invert batches of image samples. We show that it is indeed possible to invert batches of image samples. Under reasonable assumptions, batch inversion is valid because the gradient used to update a latent sample depends only on the reconstruction error of that sample. Inverting batches may also make the inversion process more computationally efficient.
Our inversion results for the MNIST and Omniglot datasets provide interesting insight into the latent representation. For example, the MNIST dataset consists of handwritten digits, where each digit is written in a unique style. Our results suggest that both the identity and the style of a digit are preserved by the proposed inversion process, indicating that the latent space encodes both properties. This suggests that latent encodings may be useful for applications beyond digit classification. Results on the Omniglot dataset show that even handwritten characters from alphabets never seen during GAN training can be projected into the latent space with good reconstructions. This may have implications for one-shot learning.
We would like to acknowledge the Engineering and Physical Sciences Research Council for funding through a Doctoral Training studentship.