Towards Recovery of Conditional Vectors from Conditional Generative Adversarial Networks

12/06/2017 · Sihao Ding, et al. · Volvo

A conditional Generative Adversarial Network allows for generating samples conditioned on certain external information. Being able to recover latent and conditional vectors from a conditional GAN can be potentially valuable in various applications, ranging from image manipulation for entertainment purposes to diagnosis of the neural networks for security purposes. In this work, we show that it is possible to recover both latent and conditional vectors from generated images given the generator of a conditional generative adversarial network. Such a recovery is not trivial due to the often multi-layered non-linearity of deep neural networks. Furthermore, the effect of such recovery applied to real natural images is investigated. We discovered that there exists a gap between the recovery performance on generated and real images, which we believe comes from the difference between the generated data distribution and the real data distribution. Experiments are conducted to evaluate the recovered conditional vectors and the images reconstructed from these recovered vectors quantitatively and qualitatively, showing promising results.


1 Introduction

A Generative Adversarial Network (GAN) [8] is a generative model that can produce realistic samples from random vectors drawn from a known distribution. A GAN consists of a generator $G$ and a discriminator $D$, both of which are usually implemented as deep neural networks. The training of a GAN involves an adversarial game between the generator and the discriminator. In the context of images, the generator maps low-dimensional vectors from the latent space to the image space, creating images that are intended to come from the same distribution as the training data; the discriminator tries to classify between images produced by the generator (trying to assign score 0) and real images from the training data (trying to assign score 1). Ideally, the distribution of the images produced by the generator becomes indistinguishable from the distribution of real images in the training set, and the discriminator assigns 0.5 to both generated and real images. In practice, this is hard to achieve and there is usually a gap between the distribution learned by the generator and the real distribution.

A conditional Generative Adversarial Network, sometimes called a cGAN [17, 6], is an extension of the GAN which allows for generating samples conditioned on certain external information. Such an extension takes the form of feeding the conditional vector into both the generator and the discriminator during the training process. After training, the generator can generate samples dictated by the condition from a random vector together with a conditional vector. The ability to control certain attributes of the generated samples is of crucial importance for many applications, such as image inpainting [18], image manipulation [23], style transfer [12], future frame prediction [16], text-to-image synthesis [20], and image-to-image translation in general [9, 24].

Recovering the latent vector as well as the conditional vector from an image can be useful. It is known that vectors that are close in latent and conditional space generate visually similar images, and algebraic operations in the latent vector space often lead to meaningful corresponding operations in the image space [19]. For a given image, access to the latent and conditional vector allows us to perform tasks such as realistic image editing, data augmentation, inference, retrieval, and compression, and offers insight into what the networks see and learn, which can be significant for debugging, diagnosis, and other security-related issues. The original cGAN framework, and the GAN framework in general, does not provide a straightforward way of mapping back from an image to the latent and conditional vector. We cover some of the previous work on recovering the latent vector of a GAN in Section 2. In this work, we show that it is also possible to recover the conditional vector from a cGAN for a known generator. While the recovery of latent vectors may become unreliable under the effect of mode collapse [2, 21], when different latent vectors are mapped to a single image, the recovery of conditional vectors is usually robust, since it is rare for a successfully trained cGAN to map different conditional vectors to the same image.

A very important point to make here is that recovering from an image generated by the generator and recovering from a real image are not the same. Recovering the latent and conditional vector from a generated image can be considered a reverse operation for which the forward operation exists. However, when recovering from a real image we treat it as if it were generated by the generator, whereas in fact such a mapping may not exist. Thus, it is more like a projection of a real image onto the manifold learned by the generator. Besides recovering from generated images, this more interesting question of whether sensible conditional information can be recovered from real images is investigated in this work.

2 Related Work

A comprehensive literature review of GANs is out of the scope of this work; we point the readers to the good summary given in [7]. Next we discuss some closely related work on recovering/inverting from the image domain to the vector domain.

The problem of recovering the input of a deep neural network is non-trivial due to the non-linearity, the many layers, and the high-dimensional space of a deep neural network. In [15], the authors proposed to invert a convolutional neural network (CNN) to gain insights into the hidden layers of the network. In [4] and [5], both groups proposed to learn an auxiliary network during the training of a GAN in order to map the generated images back to their latent vectors. In [23], images are projected back to the manifold learned by the generator by learning a deep network that minimizes a loss based on further extracted CNN features, which is suitable for natural scenes. Such methods of utilizing an auxiliary network to map images back to latent space have the advantage of fast mapping; however, they require training an extra network during the training of the $G$ and $D$ networks, and cannot always achieve robust precision.

A gradient-based approach is proposed by [3]. The evaluation is done in the image domain using reconstruction loss, with no report on the reconstruction of the latent vectors themselves. In fact, we find later in our experiments that it can take much longer for two latent vectors to become almost identical than it takes for their generated images to become visually indistinguishable. Recent work by [13] proposed to recover the latent vector using a gradient-based method with “stochastic clipping”, and achieves successful recovery 100% of the time given a certain residual threshold. The idea of “stochastic clipping” is based on the fact that latent vectors are continuous and have close to zero probability of landing on the boundary values, which does not always generalize to conditional vectors in a cGAN framework.

Our work builds on [3] and [13], showing that it is possible to recover the conditional vector in a cGAN. The recovery process does not involve simultaneously training an auxiliary network coupled with the original cGAN, which makes it more flexible and possible to apply to already trained cGANs. Moreover, we examine the effect of such recovery on real images besides generated images, which is less addressed in previous work.

3 Recovery Approach

In a non-conditional GAN setting, the generator $G$ takes a latent vector $z \in \mathbb{R}^{d}$ from a known distribution (usually uniform or Gaussian) as input and generates a sample $x = G(z) \in \mathbb{R}^{n}$. Here $d$ and $n$ are the dimensions of the latent vector and the image, respectively. To recover $z$ from $x$, a probe vector $z'$ is randomly initialized. The goal is to find a $z'$ such that the $x'$ generated from it is identical to $x$. Ideally, this $z'$ will be the recovery of $z$. Following [13], this process can be formulated as the optimization shown in Eq. 1.

$$z^{*} = \arg\min_{z'} \; \lVert G(z') - x \rVert_2^2 \qquad (1)$$

This is optimized using a gradient-based method, with the stochastic clipping method introduced in [13]. The idea is to randomly re-assign a value to $z'_i$ if the $i$-th dimension of $z'$ falls outside the range of allowed values (e.g., $[-1, 1]$, assuming $z$ is drawn from a uniform distribution in $[-1, 1]$) during optimization. Another intuition for doing so is that the probability of a randomly drawn value falling exactly on the boundary is close to zero. This, in our opinion, is similar to random re-initialization, potentially multiple times, to get out of impossible value ranges.
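To make the clipping rule concrete, the following is a minimal NumPy sketch of stochastic clipping, assuming the latent prior is uniform on $[-1, 1]$; the function name and interface are our own illustration rather than part of [13].

    import numpy as np

    def stochastic_clip(z, low=-1.0, high=1.0, rng=None):
        # Re-sample, rather than clip, any coordinate of z that left
        # [low, high] after a gradient step; ordinary clipping would pin
        # it to a boundary value that the prior almost never produces.
        rng = rng if rng is not None else np.random.default_rng()
        z = z.copy()
        out_of_range = (z < low) | (z > high)
        z[out_of_range] = rng.uniform(low, high, size=int(out_of_range.sum()))
        return z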

Under the conditional GAN setting, a conditional vector $y \in \mathbb{R}^{c}$, where $c$ is the dimension of the conditional vector, is fed into the generator together with the latent vector $z$ (Fig. 1). Following the same logic, two probe vectors $z'$ and $y'$ are now randomly initialized and optimized iteratively so that $x' = G(z', y')$ approaches $x$. Notice that the latent vector and the conditional vector need to be optimized simultaneously; updating $z'$ without updating $y'$ can lead to an incorrect solution. Eq. 1 can be modified for the conditional setting into Eq. 2 shown below:

$$z^{*}, y^{*} = \arg\min_{z',\, y'} \; \lVert G(z', y') - x \rVert_2^2 \qquad (2)$$

This can be optimized using the same approach as above only if $y$ also takes continuous values like $z$. However, in most cases the conditional vector takes discrete integer values and is fed into the networks in one-hot encoding [17]. Here we specifically consider the solution for one-hot encoding for two reasons: firstly, one can always easily convert conditional vectors that serve as (multi-dimensional) discrete labels into one-hot encoding; secondly, it helps avoid the time-consuming branch-and-bound approach of a typical mixed integer programming (MIP) problem. To this end, we formulate our optimization problem as Eq. 3:

$$z^{*}, y^{*} = \arg\min_{z',\, y'} \; \lVert G(z', y') - x \rVert_2^2 + \lambda \left| \lVert y' \rVert_1 - 1 \right| \qquad (3)$$

We relax the constraint that $y'$ takes only integer values (0 and 1 in one-hot encoding). To still reach the desired one-hot solution, a regularizer $\lambda \left| \lVert y' \rVert_1 - 1 \right|$ is added to the objective function, where $\lambda$ is a constant multiplier. The $\ell_1$ norm is used to pursue sparsity, which is the case in one-hot encoding. The absolute difference between the $\ell_1$ norm of $y'$ and 1 enforces the norm to be as close to 1 as possible. The entire function is minimized when $y'$ is exactly one-hot encoded. Later we will see that this regularization, while not having a significant impact on recovery from generated images, is important for obtaining reliable results when recovering from real images. Again, $z'$ and $y'$ should be optimized together; optimizing one without the other may lead to an incorrect combination of $z'$ and $y'$. During optimization, the “stochastic clipping” is applied to $z'$ after each gradient descent step, and a “projected gradient descent” is applied to $y'$: more specifically, any value less than 0 is mapped to 0, and any value greater than 1 is mapped to 1. In practice, we find it is better to initialize $y'$ as a zero vector instead of a random one-hot vector, so that the algorithm is not initialized with false prior information. The overall process is detailed in Algorithm 1. $\nabla_{z'}$ and $\nabla_{y'}$ are the gradients with respect to $z'$ and $y'$. Notice that the final $y'$ will be reported as $\operatorname{argmax}(y')$, since we know the true $y$ is one-hot encoded.

  function Recover($x$)
    $z' \leftarrow$ sample from $U(-1, 1)^{d}$
    $y' \leftarrow \mathbf{0}$
    while not converged do
      $L \leftarrow \lVert G(z', y') - x \rVert_2^2 + \lambda \left| \lVert y' \rVert_1 - 1 \right|$
      $z' \leftarrow z' - \eta \nabla_{z'} L$
      $y' \leftarrow y' - \eta \nabla_{y'} L$
      $z'_i \leftarrow$ sample from $U(-1, 1)$  if $z'_i < -1$ or $z'_i > 1$  (stochastic clipping)
      $y'_j \leftarrow 0$  if $y'_j < 0$  (projection)
      $y'_j \leftarrow 1$  if $y'_j > 1$  (projection)
    end while
    return $z'$, $\operatorname{argmax}(y')$
  end function
Algorithm 1 Recovering the latent and conditional vector from a conditional GAN
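As a reading aid, here is a minimal PyTorch sketch of Algorithm 1 under stated assumptions: the generator is a callable G(z, y) returning an image batch, z is uniform on [-1, 1], and the step size lr, the multiplier lam, and the iteration budget are illustrative placeholders rather than the paper's settings.

    import torch

    def recover(G, x, dim_z, dim_y, lr=0.01, lam=0.1, iters=10000):
        # Probe vectors: z' from the uniform prior, y' as a zero vector
        # (not a random one-hot) to avoid a false prior on the condition.
        z = torch.empty(1, dim_z).uniform_(-1, 1).requires_grad_(True)
        y = torch.zeros(1, dim_y, requires_grad=True)
        for _ in range(iters):
            # Eq. 3: per-pixel reconstruction loss plus the one-hot regularizer.
            loss = ((G(z, y) - x) ** 2).mean() + lam * (y.abs().sum() - 1).abs()
            grad_z, grad_y = torch.autograd.grad(loss, [z, y])
            with torch.no_grad():
                z -= lr * grad_z
                y -= lr * grad_y
                # Stochastic clipping on z': re-sample out-of-range coordinates.
                bad = (z < -1) | (z > 1)
                z[bad] = torch.empty(int(bad.sum())).uniform_(-1, 1)
                # Projected gradient descent on y': keep values in [0, 1].
                y.clamp_(0.0, 1.0)
        # Report argmax(y') since the true y is known to be one-hot.
        return z.detach(), y.argmax(dim=1)

Updating $z'$ and $y'$ from the gradients of the same loss in every iteration reflects the requirement above that the two probe vectors be optimized simultaneously.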

As mentioned in the Introduction, recovering from a generated image is different from recovering from a real image. Since the forward operation exists for a generated image, one expects that the conditional vector, being a dominant factor in generating the image, can be recovered even using Eq. 2 without any constraint on $y'$, at least after the $\operatorname{argmax}$ operation. However, for real images, it is highly likely that the generator is unable to generate identical copies of them. It is possible that after projecting a real image onto the learned manifold, it falls onto a spot outside of the defined domain of $y$. In this case, it is important to have the regularizer as in Eq. 3 so that real images are projected onto spots that are semantically explainable in the conditional domain.

4 Experiments

4.1 Experiment Setups

Experiments are conducted on two public datasets, MNIST [11] and CelebA [14]. For the MNIST dataset, the cGAN is trained conditioned on the digit classes 0 through 9, making $y$ a 10-dimensional vector. For the CelebA dataset, we picked two attributes from the ground truth as a proof of concept, namely Female/Male and WithGlasses/WithoutGlasses. The combination of these two attributes is converted to a one-hot encoding of 4 classes (0: Female-WithoutGlasses, 1: Male-WithoutGlasses, 2: Female-WithGlasses, 3: Male-WithGlasses), leading to a 4-dimensional $y$ to train the cGAN. $z$ has 100 dimensions and is drawn from a uniform distribution in $[-1, 1]$ for both datasets. For the MNIST dataset the images are zero-padded to a square resolution with a single channel; for the CelebA dataset the images are center-cropped and resized with 3 channels. All pixel values are shifted and scaled to $[-1, 1]$.
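For illustration, a small sketch of how the two CelebA attributes could be combined into the 4-class one-hot $y$ under the class indexing above; the helper name and interface are hypothetical.

    import numpy as np

    def celeba_condition(is_male: bool, has_glasses: bool) -> np.ndarray:
        # 0: Female-WithoutGlasses, 1: Male-WithoutGlasses,
        # 2: Female-WithGlasses,    3: Male-WithGlasses
        label = int(is_male) + 2 * int(has_glasses)
        y = np.zeros(4, dtype=np.float32)
        y[label] = 1.0
        return y

    # e.g., a male face with glasses maps to class 3 -> [0, 0, 0, 1]
    print(celeba_condition(is_male=True, has_glasses=True))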

Figure 1: The cGAN model used in the experiments. The conditional vector is fed into both the Generator and the Discriminator using one-hot encoding. It is reshaped (maintaining one-hot encoding) in order to be concatenated with the noise vector (for the Generator) and the input image (for the Discriminator) along the depth channel.

The cGAN used in the experiments is a conditional version of DCGAN [19, 10], as shown in Fig. 1. The conditional vectors are concatenated with the latent vectors as the input to the generator, and are reshaped into the image resolution (still in one-hot encoding) and concatenated with the generated or real images along the depth dimension as the input to the discriminator. No other skip connections are made for the conditional vectors. The discriminator here is the “vanilla” binary one instead of the multi-class one usually seen in semi-supervised learning. The batch sizes for the MNIST and CelebA datasets are set separately. For the recovery optimization, a fixed $\lambda$ is used, and the step size is reduced after an initial number of iterations.
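The depth-wise concatenation described above and in Fig. 1 could look as follows; a minimal PyTorch sketch, with the tensor shapes and helper name chosen for illustration.

    import torch

    def concat_condition(img, y):
        # Tile the one-hot y of shape (B, C) into constant feature maps of
        # shape (B, C, H, W), then stack them onto the image along depth.
        B, _, H, W = img.shape
        y_maps = y.view(B, -1, 1, 1).expand(-1, -1, H, W)
        return torch.cat([img, y_maps], dim=1)

    # e.g., a batch of 8 single-channel images with a 10-class condition
    img = torch.randn(8, 1, 32, 32)
    y = torch.nn.functional.one_hot(torch.randint(0, 10, (8,)), 10).float()
    print(concat_condition(img, y).shape)  # torch.Size([8, 11, 32, 32])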

4.2 Recovery from Generated Images

The recovery process from the initialized probe vectors $z'$ and $y'$ towards the true vectors $z$ and $y$ is visualized through the images $x'$ generated from them during the iterations of the optimization process. In Fig. 2(a) and 2(b), the results after initialization, 10 iterations, 100 iterations, 1,000 iterations and 10,000 iterations are shown, together with the image generated from the true $z$ and $y$. The true conditional vectors in Fig. 2(a) and 2(b) are one-hot encodings of the respective class labels. We can see that when initialized, $x'$ looks completely different from $x$. The initialized images are also of visually bad quality due to the fact that $y'$ is initialized as a zero vector instead of a valid one-hot encoded vector. As the number of iterations increases, $x'$ becomes more and more visually similar to $x$. After 10,000 iterations, $x'$ is visually indistinguishable from $x$.

(a) MNIST
(b) CelebA
Figure 2: Recovery (from generated images) process visualization for (a) the MNIST dataset and (b) the CelebA dataset. For both figures, the columns from left to right show: $x$ from the true $z$ and $y$, then $x'$ from the probe $z'$ and $y'$ after initialization, 10 iterations, 100 iterations, 1,000 iterations, and 10,000 iterations.

Reconstruction loss is defined as the mean squared error per pixel of the reconstructed image, computed on the scaled pixel values. A successful recovery of $z$ and $y$ should produce a small reconstruction loss. The recovery error of $z$ is defined as the Euclidean distance between the true $z$ and the probe $z'$. The recovery accuracy of $y$ is calculated after taking $\operatorname{argmax}$, i.e., the index of the maximum value in the one-hot encoded vector is reported as the final recovered conditional label. These values over the first iterations of one batch are plotted in Fig. 3 and 4, with regularization (Eq. 3) and without regularization (Eq. 2). We can see that the image reconstruction loss and the recovery error of $z$ decrease rapidly in the first few hundred iterations, and continue to decrease slowly afterwards. The accuracy of the recovered conditional vector approaches 100% rather quickly and steadily. On the MNIST dataset, the reconstruction losses with and without regularization are similar; the recovery error of $z$ is lower when the regularization is applied; the recovery accuracy of $y$ is marginally higher with regularization. On the CelebA dataset, they are almost identical. This shows that for generated images, the conditional vector can be recovered with high accuracy using a gradient-based method with or without extra regularization.
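For clarity, the three evaluation quantities can be written down as follows; a NumPy sketch of our reading of the definitions, with function names chosen for illustration.

    import numpy as np

    def reconstruction_loss(x_rec, x):
        # Mean squared error per pixel between reconstruction and target.
        return np.mean((x_rec - x) ** 2)

    def recovery_error_z(z_true, z_probe):
        # Euclidean distance between the true and the probe latent vector.
        return np.linalg.norm(z_true - z_probe)

    def recovery_accuracy_y(y_true, y_probe):
        # Fraction of a batch where argmax(y') matches the true label;
        # both inputs are (batch, classes) arrays.
        return np.mean(np.argmax(y_true, axis=1) == np.argmax(y_probe, axis=1))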

Figure 3: On the MNIST dataset: (a) reconstruction loss; (b) recovery error of $z$; (c) recovery accuracy of $y$. The red solid line is with regularization, while the blue dashed line is without regularization.
Figure 4: On the CelebA dataset: (a) reconstruction loss; (b) recovery error of $z$; (c) recovery accuracy of $y$. The red solid line is with regularization, while the blue dashed line is without regularization.

4.3 Recovery from Real Images

We further apply the recovery operation to real images. It is interesting to study the effect of conditional vector recovery when the images are real and not produced by the generator. The same experiments as in the previous subsection are repeated, this time for real images.

Again, we first examine the recovery process visually through the reconstructed images. In Fig. 5(a) and 5(b), the real images and the process of approaching these real images with generated images are illustrated. The images transition from the randomly initialized ones to ones that are visually similar to the target images. However, it can still be observed that they are not exactly the same, even visually. This is more obvious on the CelebA dataset. In Fig. 5(a), one of the rows shows an example of an incorrectly recovered conditional vector: the true image is of one digit class, while the recovered $y'$ corresponds to a different one. It is interesting to observe how the network manages to produce an image that is visually very close to the true digit despite being given the incorrect condition. In fact, that the reconstruction loss can be low even with incorrect $z'$ and $y'$ is the main challenge we encountered.

(a) MNIST
(b) CelebA
Figure 5: Recovery (from real images) process visualization for (a) the MNIST dataset and (b) the CelebA dataset. For both figures, the columns from left to right show: the real images, then $x'$ from the probe $z'$ and $y'$ after initialization, 10 iterations, 100 iterations, 1,000 iterations, and 10,000 iterations.

The reconstruction loss and the recovery accuracy of $y$ over the first iterations of one batch are plotted in Fig. 6 and 7. Notice that this time there is no true $z$ for a real image from which to compute the recovery error of $z$. Compared with recovery from generated images, the reconstruction loss is greater and the recovery accuracy of $y$ is lower when recovering from real images. In particular, the recovery of $y$ becomes less stable, frequently toggling back and forth between two possible values for some images. Very importantly, for real images there exists a quite significant gap between the recovery accuracy of $y$ with and without regularization, on both the MNIST and CelebA datasets, showing that the regularization improves the recovery accuracy of the conditional vector for real images.

Figure 6: On the MNIST dataset: (a) reconstruction loss; (b) recovery accuracy of $y$. The red solid line is with regularization, while the blue dashed line is without regularization.
Figure 7: On the CelebA dataset: (a) reconstruction loss; (b) recovery accuracy of $y$. The red solid line is with regularization, while the blue dashed line is without regularization.
Dataset | Recovered From | Reconstruction Loss | Recovery Accuracy of $y$
MNIST | generated | – | –
MNIST | real | – | –
CelebA | generated | – | –
CelebA | real | – | –

Table 1: Results after convergence.

4.4 Converged Results

The optimization is considered converged after a fixed number of iterations (most of the time it takes fewer) based on empirical observation. The results after running the optimization to convergence with the proposed method are listed in Table 1. In the Recovered From column, “generated” denotes images produced by the generator and “real” denotes real images. In the Reconstruction Loss column, numbers in brackets represent the initial losses. The table shows that it is easy to recover the conditional vectors from generated images, and the reconstruction loss can be very low. On the other hand, for real images the recovery is not always successful and the original images cannot always be reconstructed exactly, which means it is often impossible to generate certain real images. This can always happen when the underlying data distribution modeled by the generator is not perfect.

5 Discussion

We noticed that a better recovered $z'$ does not necessarily result in a lower reconstruction loss (Fig. 3(a) and 3(b)). Likewise, a much better recovery of $y$ does not translate into an equal advantage in reconstruction loss (Fig. 6 and 7). One possible explanation is that, for one image, there are multiple (potentially infinite) combinations of values of $z$ and $y$ from which it can be generated. Another point could be that the reconstruction loss is not the most appropriate evaluation metric. The objective function used in this work is based on reconstruction loss, which evaluates per-pixel differences in the image domain. It would be interesting to see whether other losses, for example the mean squared error of discriminative CNN features, can produce better gradients and thus lead to better results.

While the recovery of conditional vectors has similar performance across the two datasets, the recovery of latent vectors differs a lot. The recovery error of $z$ decreases much more slowly on the MNIST dataset than on CelebA. It is possible that $z$ is utilized to a “greater extent” on CelebA because of its much more complex content compared with MNIST (color faces vs. grayscale digits). We suspect that on the MNIST dataset, some $z$ are mapped to the same image, or some dimensions of $z$ become basically irrelevant. More investigation of how different dimensions of $z$ and $y$ impact the recovery could be worthwhile. Again, the cGAN used in these experiments is a simple DCGAN structure; with more recent advances in the training of GANs such as [1, 2, 22], the recovery performance is expected to improve.

The ability to access the latent and conditional vector of a given generated image from a cGAN could potentially be used for tasks such as debugging and diagnosing the network. Even though we cannot calculate the recovery error of $z$ for real images, we do get a consistent $z'$ for the same image. It remains interesting to see whether this could be applied to detect adversarial attacks: an adversarial image could behave differently in terms of $z'$ and $y'$ when being recovered.

6 Conclusion

In this work, we show that it is possible to recover the latent vector as well as the conditional vector from a conditional generative adversarial network. This ability could potentially enable a wide spectrum of applications, ranging from image manipulation for entertainment purposes to diagnosis of neural networks for security purposes. The method minimizes a regularized reconstruction loss using projected gradient descent together with stochastic clipping. The regularizer is designed for conditional vectors that are discrete labels. The recovery method is evaluated on two public datasets for both generated and real images. We see that the conditional vector can be recovered with high accuracy from generated images, and to a lesser extent from real images. The results are promising, and how to close the gap between recovery from generated images and real images will be our future direction.

References

  • [1] Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. In International Conference on Learning Representations (ICLR), 2017.
  • [2] Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, and Wenjie Li. Mode regularized generative adversarial networks. In International Conference on Learning Representations (ICLR), 2017.
  • [3] Antonia Creswell and Anil Anthony Bharath. Inverting the generator of a generative adversarial network. In Workshop on Adversarial Training, NIPS, 2016.
  • [4] Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. In International Conference on Learning Representations (ICLR), 2017.
  • [5] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. In International Conference on Learning Representations (ICLR), 2017.
  • [6] Jon Gauthier. Conditional generative adversarial nets for convolutional face generation. Class Project for Stanford CS231N, 2014.
  • [7] Ian Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160, 2016.
  • [8] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems (NIPS), pages 2672–2680, 2014.
  • [9] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [10] Taehoon Kim. A tensorflow implementation of “deep convolutional generative adversarial networks”. https://github.com/carpedm20/DCGAN-tensorflow, 2016.
  • [11] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • [12] Chuan Li and Michael Wand. Precomputed real-time texture synthesis with markovian generative adversarial networks. In European Conference on Computer Vision (ECCV), pages 702–716. Springer, 2016.
  • [13] Zachary C Lipton and Subarna Tripathi. Precise recovery of latent vectors from generative adversarial networks. In International Conference on Learning Representations (ICLR) Workshop Track, 2017.
  • [14] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015.
  • [15] Aravindh Mahendran and Andrea Vedaldi. Understanding deep image representations by inverting them. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (CVPR), pages 5188–5196, 2015.
  • [16] Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. In International Conference on Learning Representations (ICLR), 2016.
  • [17] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
  • [18] Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2536–2544, 2016.
  • [19] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In International Conference on Learning Representations (ICLR) Workshop Track, 2016.
  • [20] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In International Conference on Machine Learning (ICML), 2016.
  • [21] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems (NIPS), pages 2234–2242, 2016.
  • [22] David Warde-Farley and Yoshua Bengio. Improving generative adversarial networks with denoising feature matching. In International Conference on Learning Representations (ICLR), 2017.
  • [23] Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, and Alexei A Efros. Generative visual manipulation on the natural image manifold. In European Conference on Computer Vision (ECCV), pages 597–613. Springer, 2016.
  • [24] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE International Conference on Computer Vision (ICCV), 2017.