1 Introduction
A Generative Adversarial Network (GAN) [8] is a generative model that can produce realistic samples from random vectors drawn from a known distribution. A GAN consists of a generator G and a discriminator D, both of which are usually implemented as deep neural networks. Training a GAN involves an adversarial game between the generator and the discriminator. In the context of images, the generator maps low-dimensional vectors from latent space to image space, creating images that are intended to come from the same distribution as the training data; the discriminator tries to classify between images produced by the generator (to which it tries to assign score 0) and real images from the training data (to which it tries to assign score 1). Ideally, the distribution of images produced by the generator becomes indistinguishable from the distribution of real images in the training set, and the discriminator assigns a score of 1/2 to both generated and real images. In practice, this is hard to achieve, and there is usually a gap between the distribution learned by the generator and the real distribution.

A conditional Generative Adversarial Network (cGAN) [17, 6] is an extension of the GAN which allows for generating samples conditioned on external information. The extension takes the form of feeding a conditional vector into both the generator and the discriminator during training. After training, the generator can generate samples dictated by the condition from a random latent vector together with a conditional vector. The ability to control certain attributes of the generated samples is of crucial importance for many applications, such as image inpainting [18], image manipulation [23], style transfer [12], future frame prediction [16], text-to-image synthesis [20], and image-to-image translation in general
[9, 24].

Recovering the latent vector as well as the conditional vector from an image can be useful. It is known that vectors that are close in latent and conditional space generate visually similar images, and algebraic operations in latent space often lead to meaningful corresponding operations in image space [19]. For a given image, access to the latent and conditional vectors enables tasks such as realistic image editing, data augmentation, inference, retrieval, and compression, and offers insight into what the networks see and learn, which can be significant for debugging, diagnosis, and other security-related issues. The original cGAN framework, and the GAN framework in general, do not provide a straightforward way to map an image back to its latent and conditional vectors. We cover some of the previous work on recovering the latent vector of a GAN in Section 2. In this work, we show that it is also possible to recover the conditional vector from a cGAN with a known generator. While the recovery of latent vectors may become unreliable under mode collapse [2, 21], when different latent vectors are mapped to a single image, the recovery of conditional vectors is usually robust, since it is rare for a successfully trained cGAN to map different conditional vectors to the same image.
A very important point to make here is that recovering from an image generated by the generator is not the same as recovering from a real image. Recovering the latent and conditional vectors from a generated image can be considered a reverse operation for which the forward operation exists. When recovering from a real image, however, we treat it as if it were generated by the generator, whereas such a mapping may not exist; the recovery is more like a projection of the real image onto the manifold learned by the generator. Besides recovery from generated images, this more interesting question of whether sensible conditional information can be recovered from real images is also investigated in this work.
2 Related Work
A comprehensive literature review of GANs is out of the scope of this work; we point the readers to a good summary given in [7]. Next we discuss some closely related work on recovering/inverting from the image domain back to the vector domain.
The problem of recovering the input of a deep neural network is nontrivial due to the nonlinearity, the many layers, and the high-dimensional spaces involved. In [15], the authors proposed to invert a convolutional neural network (CNN) to gain insights into the hidden layers of the network. In [4] and [5], both groups proposed to learn an auxiliary network during the training of a GAN in order to map generated images back to their latent vectors. In [23], images are projected back onto the manifold learned by the generator by learning a deep network that minimizes a loss based on further extracted CNN features, which is suitable for natural scenes. Such methods of utilizing an auxiliary network to map images back to latent space have the advantage of fast mapping; however, they require training an extra network alongside the generator and discriminator, and cannot always achieve robust precision.

A gradient-based approach is proposed by [3]. The evaluation is done in the image domain using a reconstruction loss, with no report on the reconstruction of the latent vectors themselves. In fact, we find in our experiments that it can take much longer for two latent vectors to become almost identical than it takes for their generated images to become visually indistinguishable. Recent work by [13] proposed to recover the latent vector using a gradient-based method with “stochastic clipping”, and achieves successful recovery 100% of the time given a certain residual threshold. The idea of “stochastic clipping” is based on the fact that latent vectors are continuous and have close to zero probability of landing on boundary values, which does not always generalize to the conditional vectors in a cGAN framework.
Our work builds on [3] and [13], showing that it is possible to recover the conditional vector in a cGAN. The recovery process does not involve training an auxiliary network coupled with the original cGAN, which makes it more flexible and applicable to already-trained cGANs. Moreover, we examine the effect of such recovery on real images in addition to generated images, which is less addressed in previous works.
3 Recovery Approach
In a non-conditional GAN setting, the generator G takes a latent vector z ∈ R^n from a known distribution (usually uniform or Gaussian) as input and generates a sample x = G(z) ∈ R^m. Here n and m are the dimensions of the latent vector and the image, respectively. To recover z from x, a probe vector z' is randomly initialized. The goal is to find a z' such that the x' generated from it is identical to x; ideally, this z' will be the recovery of z. Following [13], this process can be formulated as the optimization shown in Eq. 1.

z* = argmin_{z'} || G(z') − x ||²₂        (1)
This is optimized using a gradient-based method, with the stochastic clipping method introduced in [13]. The idea is to randomly reassign a value to a dimension of z' whenever that dimension falls outside the range of allowed values (e.g., outside [−1, 1] when z is drawn from a uniform distribution over [−1, 1]) during optimization. A further intuition for doing so is that the probability of a randomly drawn value falling exactly on the boundary is close to zero. In our opinion this is similar to random re-initialization, potentially multiple times, to get out of impossible value ranges.

Under the conditional GAN setting, a conditional vector c ∈ R^k, where k is the dimension of the conditional vector, is fed into the generator together with the latent vector (Fig. 1). Following the same logic, two probe vectors z' and c' are now randomly initialized and optimized iteratively so that G(z', c') approaches x = G(z, c). Notice that the latent vector and the conditional vector need to be optimized simultaneously; updating one without updating the other can lead to an incorrect solution. Eq. 1 can be modified for the conditional setting into Eq. 2 shown below:
(z*, c*) = argmin_{z', c'} || G(z', c') − x ||²₂        (2)
This can be optimized using the same approach as above only if c also takes continuous values like z. However, in most cases the conditional vector takes discrete integer values and is fed into the networks in one-hot encoding [17]. Here we specifically consider the solution for one-hot encoding for two reasons: first, one can always easily convert conditional vectors that serve as (multi-dimensional) discrete labels into one-hot encoding; and second, it helps avoid the time-consuming branch-and-bound approach of a typical mixed integer programming (MIP) problem. To this end, we formulate our optimization problem as Eq. 3:

(z*, c*) = argmin_{z', c'} || G(z', c') − x ||²₂ + λ ( || c' ||₁ + | || c' ||∞ − 1 | )        (3)
We relax the constraint that c' takes only integer values (0 and 1 in one-hot encoding). To still reach the desired one-hot solution, a regularizer is added to the objective function, with λ a constant multiplier. The ℓ1 norm is used to pursue sparsity, which is the case in one-hot encoding. The absolute difference between the ℓ∞ norm of c' and 1 enforces the largest entry of c' to be close to 1. The entire objective is minimized when c' is exactly one-hot encoded. Later we will see that this regularization, while not having a significant impact on recovery from generated images, is important for obtaining reliable results when recovering from real images. Again, z' and c' should be optimized together; optimizing one without the other may lead to an incorrect combination of z* and c*. During optimization, stochastic clipping is applied to z' after each gradient descent step, and projected gradient descent is applied to c': any value less than 0 is mapped to 0, and any value greater than 1 is mapped to 1. In practice, we find it better to initialize c' as a zero vector instead of a random one-hot vector, so that the algorithm is not started with false prior information. The overall process is detailed in Algorithm 1, where ∇z' and ∇c' are the gradients with respect to z' and c'. Notice that the final conditional label is reported as argmax(c*), since we know the true c is one-hot encoded.
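As a concrete illustration of the recovery loop, the following sketch runs joint gradient descent with stochastic clipping on z' and the [0, 1] projection on c', on a toy linear "generator" standing in for the trained cGAN. The matrices A and B, all dimensions, the step size, and the iteration count are illustrative assumptions, not the paper's settings:

```python
import random

random.seed(0)

N_Z, N_C, N_X = 3, 3, 8   # toy latent dim, condition dim, "image" dim

# Toy linear "generator" G(z, c) = A z + B c, a stand-in for the trained cGAN.
A = [[random.uniform(-0.5, 0.5) for _ in range(N_Z)] for _ in range(N_X)]
B = [[random.uniform(-0.5, 0.5) for _ in range(N_C)] for _ in range(N_X)]

def generate(z, c):
    return [sum(A[i][j] * z[j] for j in range(N_Z)) +
            sum(B[i][j] * c[j] for j in range(N_C)) for i in range(N_X)]

def recover(x, lr=0.05, steps=20000):
    """Jointly recover (z', c') from x: gradient descent with stochastic
    clipping on z' and projection of c' onto [0, 1]."""
    z = [random.uniform(-1.0, 1.0) for _ in range(N_Z)]
    c = [0.0] * N_C                     # zero init: no false prior on the label
    for _ in range(steps):
        r = [g - t for g, t in zip(generate(z, c), x)]   # residual G(z', c') - x
        for j in range(N_Z):            # gradient step + stochastic clipping
            z[j] -= lr * 2 * sum(A[i][j] * r[i] for i in range(N_X))
            if abs(z[j]) > 1.0:         # resample instead of clamping to the boundary
                z[j] = random.uniform(-1.0, 1.0)
        for j in range(N_C):            # gradient step + projection onto [0, 1]
            c[j] = min(1.0, max(0.0, c[j] - lr * 2 * sum(B[i][j] * r[i] for i in range(N_X))))
    return z, c

z_true = [random.uniform(-1.0, 1.0) for _ in range(N_Z)]
c_true = [1.0, 0.0, 0.0]                # one-hot condition
x = generate(z_true, c_true)

z_rec, c_rec = recover(x)
loss = sum((g - t) ** 2 for g, t in zip(generate(z_rec, c_rec), x)) / N_X
label = max(range(N_C), key=lambda j: c_rec[j])   # report argmax(c') as the label
```

On this convex toy problem the loop drives the reconstruction loss toward zero and argmax(c') recovers the true label; with a real cGAN the same updates would come from backpropagating through the generator.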
As mentioned in the Introduction, recovering from a generated image is different from recovering from a real image. Since the forward operation exists for a generated image, one expects that the conditional vector, being a dominant factor in the generated image, can be recovered even using Eq. 2 without any constraint on c', at least after the argmax operation. For real images, however, it is highly likely that the generator is unable to generate their identical copies. It is possible that after projecting a real image onto the learned manifold, it falls onto a spot outside of the defined domain of c. In this case, it is important to have the regularizer as in Eq. 3 so that real images are projected onto spots that are semantically explainable in the conditional domain.
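For reference, the one-hot regularizer can be written as a small function. The exact norms are reconstructed here under the assumption that the sparsity term is an ℓ1 norm and the "close to 1" term uses the ℓ∞ norm (the largest entry); treat this as a plausible sketch rather than the definitive formula:

```python
def onehot_regularizer(c, lam=1.0):
    """lam * ( ||c||_1 + | ||c||_inf - 1 | ), assuming c has already been
    projected onto [0, 1]^k so all entries are non-negative."""
    l1 = sum(c)        # l1 norm: encourages sparsity
    linf = max(c)      # l-infinity norm: largest entry should approach 1
    return lam * (l1 + abs(linf - 1.0))
```

For a one-hot vector the value is exactly lam (l1 = 1, linf = 1); among all vectors whose maximum entry equals 1, the one-hot vectors score lowest, and the reconstruction term breaks the remaining ties.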
4 Experiments
4.1 Experiment Setups
Experiments are conducted on two public datasets, MNIST [11] and CelebA [14]. For the MNIST dataset, the cGAN is trained conditioned on the digit classes 0–9, making c a 10-dimensional vector. For the CelebA dataset, we picked two attributes from the ground truth as a proof of concept, namely Female/Male and WithGlasses/WithoutGlasses. The combination of these two attributes is converted to a one-hot encoding of 4 classes (0: Female-WithoutGlasses, 1: Male-WithoutGlasses, 2: Female-WithGlasses, 3: Male-WithGlasses), leading to a 4-dimensional c to train the cGAN. z has 100 dimensions and is drawn from a uniform distribution for both datasets. The MNIST images are zero-padded to a square resolution with a single channel; the CelebA images are center-cropped and resized, with 3 channels. All pixel values are shifted and scaled to [−1, 1].

The cGAN used in the experiments is a conditional version of DCGAN [19, 10], as shown in Fig. 1. The conditional vectors are concatenated with the latent vectors as the input of the generator, and are reshaped to the image resolution (still in one-hot encoding) and concatenated with the generated or real images along the depth dimension as the input of the discriminator. No other skip connections are made for the conditional vectors. The discriminator here is the “vanilla” binary one, rather than the multi-class discriminator usually seen in semi-supervised learning. A fixed batch size is used for each dataset, and for recovery the gradient step size is reduced after a number of iterations.

4.2 Recovery from Generated Images
The recovery process from the initialized probe vectors z' and c' towards the true vectors z and c is visualized through the images generated from them during the optimization. In Fig. 2(a) and 2(b), snapshots after initialization, 10 iterations, 100 iterations, 1,000 iterations, and 10,000 iterations are shown, together with the image generated from the true z and c. The true conditional vectors in the two examples are particular digit classes in one-hot encoding. We can see that at initialization, the image generated from the probe vectors looks completely different from the target image. The initialized images are also of poor visual quality, due to the fact that c' is initialized as a zero vector instead of a valid one-hot vector. As the number of iterations increases, the generated image becomes more and more visually similar to the target. After 10,000 iterations, the reconstruction is visually indistinguishable from the target image.
Reconstruction loss is defined as the mean squared error per pixel of the reconstructed image, computed on the scaled pixel values. A successful recovery of z and c should yield a small reconstruction loss. The recovery error of z is defined as the Euclidean distance between the true z and the probe z'. The recovery accuracy of c is calculated after taking the argmax, i.e., the index of the maximum value in the relaxed one-hot vector is reported as the final recovered conditional label. These quantities over the first iterations of one batch are plotted in Fig. 3 and 4, with (Eq. 3) and without (Eq. 2) the regularization. We can see that the image reconstruction loss and the recovery error of z decrease rapidly in the first few hundred iterations, and continue to decrease slowly afterwards. The recovery accuracy of the conditional vector approaches 100% rather quickly and steadily. On the MNIST dataset, the reconstruction losses with and without the regularization are similar; the recovery error of z is lower when the regularization is applied; and the recovery accuracy of c is marginally higher with regularization. On the CelebA dataset, the curves are almost identical. This shows that for generated images, the conditional vector can be recovered with high accuracy using a gradient-based method, with or without the extra regularization.
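The three evaluation quantities above can be written down directly; a minimal sketch (the helper names are ours, and images are treated as flat lists of scaled pixel values):

```python
import math

def reconstruction_loss(x, x_rec):
    """Mean squared error per pixel between the target and reconstructed image."""
    return sum((a - b) ** 2 for a, b in zip(x, x_rec)) / len(x)

def z_recovery_error(z_true, z_probe):
    """Euclidean distance between the true and the probe latent vector."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(z_true, z_probe)))

def c_recovery_correct(c_true, c_probe):
    """A recovery counts as correct when argmax of the relaxed probe vector
    matches argmax of the true one-hot vector."""
    argmax = lambda v: max(range(len(v)), key=lambda i: v[i])
    return argmax(c_true) == argmax(c_probe)
```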
4.3 Recovery from Real Images
We further apply the recovery operation to real images. It is interesting to study the effect of conditional vector recovery when the images are real and not produced by the generator. The same experiments as in the previous subsection are repeated, this time for real images.
Again, we first examine the recovery process visually through the reconstructed images. In Fig. 5(a) and 5(b), the real images and the process of approaching them with generated images are illustrated. The images transform from the randomly initialized ones to ones that are visually similar to the target images. However, it can still be observed that they are not exactly the same, even visually; this is more obvious on the CelebA dataset. Fig. 5(a) also contains an example of an incorrectly recovered conditional vector: the true c and the recovered c correspond to two different digit labels, and it is interesting to observe how the network manages to produce an image very close to the true digit while conditioned on the wrong label. In fact, the main challenge we encountered is that the reconstruction loss can be low even with an incorrect z' and c'.
The reconstruction loss and the recovery accuracy of c for the first iterations of one batch are plotted in Fig. 6 and 7. Notice that this time there is no true z for a real image from which to compute the recovery error of z. Compared with recovery from generated images, the reconstruction loss is greater and the recovery accuracy of c is lower when recovering from real images. In particular, the recovery of c becomes less stable, frequently toggling back and forth between two possible values for some images. Very importantly, for real images there exists a significant gap between the recovery accuracy of c with and without the regularization, on both the MNIST and CelebA datasets, showing that the regularization improves the recovery accuracy of the conditional vector for real images.
Dataset | Recovered From | Reconstruction Loss | Recovery Accuracy of c
MNIST | Generated images | – | –
MNIST | Real images | – | –
CelebA | Generated images | – | –
CelebA | Real images | – | –
4.4 Converged Results
The optimization is considered converged after a fixed number of iterations (most of the time it takes fewer) based on empirical observation. The results after running the optimization with the proposed method are listed in Table 1, where rows marked as generated are recovered from images produced by the generator, and rows marked as real are recovered from real images. In the Reconstruction Loss column, numbers in brackets represent the initial losses. The table shows that it is easy to recover the conditional vectors from generated images, and the reconstruction loss can be very low. For real images, on the other hand, the recovery is not always successful and the original images cannot always be reconstructed exactly, which means it is often impossible to generate certain real images. This is expected whenever the underlying data distribution modeled by the generator is not perfect.
5 Discussion
We noticed that a better recovered z does not necessarily result in a better reconstruction loss (Fig. 3 and 4). Likewise, a much better recovery of c does not translate into an equal advantage in reconstruction loss (Fig. 6 and 7). One possible explanation is that, for one image, there are multiple (potentially infinitely many) combinations of z and c values from which it can be generated. Another point could be that the reconstruction loss is not the most appropriate evaluation metric. The objective function used in this work is based on a reconstruction loss that evaluates per-pixel differences in the image domain. It would be interesting to see if other losses, for example the mean squared error of discriminative CNN features, can produce better gradients and thus lead to better results.
While the recovery of conditional vectors shows similar performance across the two datasets, the recovery of latent vectors differs a lot: the recovery error of z decreases much more slowly on MNIST than on CelebA. It is possible that z is utilized to a “greater extent” on CelebA because of its much more complex content compared with MNIST (color faces vs. gray-scale digits). We suspect that on MNIST, some z values map to the same image, or some dimensions of z become essentially irrelevant. More investigation of how different dimensions of z and c impact the recovery would be worthwhile. Again, the cGAN used in these experiments is a simple DCGAN structure; with more recent advances in the training of GANs such as [1, 2, 22], the recovery performance is expected to improve.
The ability to access the latent and conditional vectors of a given generated image from a cGAN could potentially be used for tasks such as debugging and diagnosing the network. Even though we cannot compute the recovery error of z for real images, we do obtain a consistent c for the same image. It remains interesting to see whether this could be applied to detect adversarial attacks: an adversarial image could behave differently in terms of z and c when being recovered.
6 Conclusion
In this work, we show that it is possible to recover the latent vector as well as the conditional vector from a conditional generative adversarial network. This ability could enable a wide spectrum of applications, ranging from image manipulation for entertainment purposes to diagnosis of neural networks for security purposes. The method minimizes a regularized reconstruction loss using projected gradient descent together with stochastic clipping, with a regularizer designed for conditional vectors that represent discrete labels. The recovery method is evaluated on two public datasets, for both generated and real images. We see that the conditional vector can be recovered with high accuracy from generated images, and to a lesser extent from real images. The results are promising, and how to close the gap between recovery from generated images and from real images will be our future direction.
References
 [1] Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. In International Conference on Learning Representations (ICLR), 2017.
 [2] Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, and Wenjie Li. Mode regularized generative adversarial networks. In International Conference on Learning Representations (ICLR), 2017.
 [3] Antonia Creswell and Anil Anthony Bharath. Inverting the generator of a generative adversarial network. In Workshop on Adversarial Training, NIPS, 2016.
 [4] Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. In International Conference on Learning Representations (ICLR), 2017.
 [5] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. In International Conference on Learning Representations (ICLR), 2017.
 [6] Jon Gauthier. Conditional generative adversarial nets for convolutional face generation. Class Project for Stanford CS231N, 2014.
 [7] Ian Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160, 2016.
 [8] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), pages 2672–2680, 2014.

 [9] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
 [10] Taehoon Kim. A TensorFlow implementation of “deep convolutional generative adversarial networks”. https://github.com/carpedm20/DCGAN-tensorflow, 2016.
 [11] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
 [12] Chuan Li and Michael Wand. Precomputed real-time texture synthesis with Markovian generative adversarial networks. In European Conference on Computer Vision (ECCV), pages 702–716. Springer, 2016.
 [13] Zachary C Lipton and Subarna Tripathi. Precise recovery of latent vectors from generative adversarial networks. In International Conference on Learning Representations (ICLR) Workshop Track, 2017.
 [14] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015.
 [15] Aravindh Mahendran and Andrea Vedaldi. Understanding deep image representations by inverting them. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (CVPR), pages 5188–5196, 2015.
 [16] Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. In International Conference on Learning Representations (ICLR), 2016.
 [17] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
 [18] Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2536–2544, 2016.
 [19] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In International Conference on Learning Representations (ICLR) Workshop Track, 2016.

 [20] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In International Conference on Machine Learning (ICML), 2016.
 [21] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems (NIPS), pages 2234–2242, 2016.
 [22] David Warde-Farley and Yoshua Bengio. Improving generative adversarial networks with denoising feature matching. In International Conference on Learning Representations (ICLR), 2017.
 [23] Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, and Alexei A Efros. Generative visual manipulation on the natural image manifold. In European Conference on Computer Vision (ECCV), pages 597–613. Springer, 2016.
 [24] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE International Conference on Computer Vision (ICCV), 2017.