1 Introduction
Generative adversarial networks (GANs) [11] are one of the main families of methods for learning generative models from complicated real-world data. In addition to a generator that synthesizes semantically meaningful data from standard signal distributions, GANs (GAN and its variants) train a discriminator to distinguish real samples in the training dataset from fake samples synthesized by the generator. As its adversary, the generator aims to deceive the discriminator by producing ever more realistic samples. The training procedure continues until the generator wins the adversarial game; that is, until the discriminator can do no better than randomly guessing whether a particular sample is fake or real. GANs have recently been successfully applied to image generation [4, 22, 9, 37], image editing [13, 31, 40, 32], video prediction [29, 8, 30], and many other tasks [38, 28, 17].
Although GANs already produce visually appealing samples in various applications, they are often difficult to train. If the data distribution and the generated distribution do not substantially overlap (as is usual at the beginning of training), the generator gradients can point in more or less random directions or even vanish entirely. GANs also suffer from mode collapse, i.e., the generator assigns all of its probability mass to a small region of the space [3]. In addition, appropriate hyperparameters (e.g., learning rate and updating steps) and network architectures are critical configurations in GANs; unsuitable settings reduce a GAN's performance or even prevent it from producing any reasonable results.

Many recent efforts on GANs have focused on overcoming these training difficulties by developing various adversarial training objectives. Typically, assuming the optimal discriminator for the given generator has been learned, the different objective functions of the generator measure the distance between the data distribution and the generated distribution under different metrics. The original GAN uses the Jensen-Shannon divergence as its metric. A number of other metrics have since been introduced to improve GAN performance, such as least squares [18], absolute deviation [39],
Kullback-Leibler divergence [24, 23], and the Wasserstein distance [2]. However, according to both theoretical analyses and experimental results, minimizing each distance has its own pros and cons. For example, although measuring the Kullback-Leibler divergence largely eliminates the vanishing gradient issue, it easily results in mode collapse [24, 1]. Likewise, the Wasserstein distance greatly improves training stability but can have non-convergent limit cycles near equilibrium [21].

To exploit the advantages and suppress the weaknesses of the different metrics (i.e., GAN objectives), we devise a framework that utilizes several of them to jointly optimize the generator. In doing so, we improve both training stability and generative performance. We build an evolutionary generative adversarial network (E-GAN), which treats the adversarial training procedure as an evolutionary problem. Specifically, a discriminator acts as the environment (i.e., it provides adaptive loss functions), and a population of generators evolves in response to this environment. During each adversarial (or evolutionary) iteration, the discriminator is still trained to recognize real and fake samples. However, in our method, the generators, acting as parents, undergo different mutations to produce offspring that adapt to the environment. Since different adversarial objective functions minimize different distances between the generated distribution and the data distribution, they lead to different mutations. Meanwhile, given the current optimal discriminator, we measure the quality and diversity of the samples generated by the updated offspring. Finally, following the principle of "survival of the fittest", poorly performing offspring are removed and the remaining well-performing offspring (i.e., generators) are preserved and used for further training.

Based on this evolutionary paradigm for optimizing GANs, the proposed E-GAN overcomes the inherent limitations of the individual adversarial training objectives and always preserves the best offspring produced by the different training objectives (i.e., mutations). In this way, we contribute to the progress and success of GANs. Experiments on several datasets demonstrate the advantages of integrating different adversarial training objectives and E-GAN's convincing performance for image generation.
2 Related Work
In this section, we first review previous GAN variants devoted to reducing training instability and improving generative performance. We then briefly summarize evolutionary algorithms applied to deep neural networks.
2.1 Generative Adversarial Networks
Generative adversarial networks (GANs) provide an excellent framework for learning deep generative models, which aim to capture the probability distribution underlying the given data. Compared to other generative models, a GAN is easily trained by alternately updating a generator and a discriminator with the backpropagation algorithm. In many generative tasks, GANs (GAN and its variants) produce better samples than other generative models [10].

However, some problems still exist in the GAN training process. In the original GAN, training the generator is equivalent to minimizing the Jensen-Shannon divergence between the data distribution and the generated distribution, which easily results in the vanishing gradient problem. To alleviate this issue, a non-saturating heuristic objective (i.e., the '$-\log D$ trick') replaced the minimax objective function for the generator [11]. Later, [24] and [26] designed specialized network architectures (DCGAN) and proposed several heuristic tricks (e.g., feature matching, one-sided label smoothing, virtual batch normalization) to improve training stability. Meanwhile, energy-based GAN [39] and least-squares GAN [18] improved training stability by employing different training objectives. Although these methods partly enhance training stability, in practice the network architectures and training procedure still require careful design to maintain the discriminator-generator balance. More recently, Wasserstein GAN (WGAN) [2] and its variant WGAN-GP [12] were proposed to minimize the Wasserstein-1 distance between the generated and data distributions. Since the Wasserstein-1 distance is continuous everywhere and differentiable almost everywhere under mild assumptions [2], these two methods convincingly reduce training instability. However, to measure the Wasserstein-1 distance between the generated distribution and the data distribution, they must enforce a Lipschitz constraint on the discriminator (a.k.a. the critic), which may restrict the critic's capability and cause optimization difficulties [12].

2.2 Evolutionary Algorithms
Over the last twenty years, evolutionary algorithms have achieved considerable success across a wide range of computational tasks including modeling, optimization and design [7, 5]. Inspired by natural evolution, the essence of an evolutionary algorithm is to equate possible solutions to individuals in a population, produce offspring through variations, and select appropriate solutions according to fitness [6].
Recently, evolutionary algorithms have been introduced to solve deep learning problems. To minimize human participation in designing deep algorithms and automatically discover such configurations, there have been many attempts to optimize deep learning hyperparameters and design deep network architectures through an evolutionary search
[35, 20, 25]. Evolutionary algorithms have also demonstrated their capacity to optimize deep neural networks [15, 34]. Moreover, [27] proposed a novel evolution strategy as an alternative to the popular MDP-based reinforcement learning (RL) techniques and achieved strong performance on RL benchmarks. Last but not least, an evolutionary algorithm was proposed to compress deep learning models by automatically eliminating redundant convolution filters [33].

3 Method
In this section, we first review the GAN formulation. We then introduce the proposed E-GAN algorithm; by illustrating E-GAN's mutations and evaluation mechanism, we further discuss the advantages of the proposed framework. Finally, we describe the complete E-GAN training process.
3.1 Generative Adversarial Networks
GAN, first proposed in [11], studies a two-player minimax game between a discriminative network $D$ and a generative network $G$. Taking a noisy sample $z \sim p_z$ (sampled from a uniform or normal distribution) as its input, the generative network $G$ outputs new data $G(z)$, whose distribution $p_g$ is supposed to be close to the data distribution $p_{data}$. Meanwhile, the discriminative network $D$ is employed to distinguish the true data sample $x \sim p_{data}$ from the generated sample $G(z)$. In the original GAN, this adversarial training process was formulated as:

$\min_G \max_D \; \mathbb{E}_{x \sim p_{data}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$.  (1)
The adversarial procedure is illustrated in Fig. 1 (a). Most existing GANs perform a similar adversarial procedure with different adversarial objective functions.
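As a quick numeric sanity check of the objective in Eq. (1), the following toy sketch (not the paper's implementation; discriminator outputs are passed in directly as probabilities) estimates the game value from samples:

```python
import math

def gan_value(d_real, d_fake):
    """Monte Carlo estimate of the GAN objective in Eq. (1):
    E_x[log D(x)] + E_z[log(1 - D(G(z)))].
    `d_real` / `d_fake` are discriminator outputs (probabilities in (0, 1))."""
    term_real = sum(math.log(d) for d in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - d) for d in d_fake) / len(d_fake)
    return term_real + term_fake

# At the global optimum p_g = p_data, the optimal discriminator outputs
# 0.5 everywhere and the game value is -log 4 ~ -1.386.
print(gan_value([0.5] * 4, [0.5] * 4))  # ~ -1.386
```

A stronger discriminator (higher outputs on real data, lower on fakes) raises this value, which is exactly what the inner maximization in Eq. (1) exploits.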
3.2 Evolutionary Algorithm
In contrast to conventional GANs, which alternately update a generator and a discriminator, we devise an evolutionary algorithm that evolves a population of generator(s) in a given environment (i.e., the discriminator $D$). In this population, each individual represents a possible solution in the parameter space of the generative network $G$. During the evolutionary process, we expect the population to gradually adapt to its environment, meaning that the evolved generator(s) can generate ever more realistic samples and eventually learn the real-world data distribution. As shown in Fig. 1 (b), each evolutionary step consists of three sub-stages:

Variation: Given an individual in the population, we utilize variation operators to produce its offspring. Specifically, several copies of each individual (the parent) are created, each of which is modified by a different mutation. Each modified copy is then regarded as one child.

Evaluation: For each child, its performance, or the individual's quality, is evaluated by a fitness function that depends on the current environment (i.e., the discriminator $D$).

Selection: All children are ranked according to their fitness values, and the worst-performing ones are removed (i.e., they are killed). The rest remain alive (i.e., are free to act as parents) and evolve into the next iteration.
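The three sub-stages above can be sketched as one function. This is a minimal illustration on a toy search problem, not the paper's GAN training code; `mutations` and `fitness` are placeholder stand-ins for the mutation operators and the fitness function defined later:

```python
def evolve_step(parents, mutations, fitness, n_parents=1):
    """One evolutionary step: variation -> evaluation -> selection.
    `parents` are candidate solutions, `mutations` is a list of variation
    operators, `fitness` scores a child (higher is better)."""
    # Variation: each parent is copied and modified by every mutation.
    children = [mutate(p) for p in parents for mutate in mutations]
    # Evaluation: score every child in the current environment.
    scored = sorted(children, key=fitness, reverse=True)
    # Selection: keep only the best-performing children as new parents.
    return scored[:n_parents]

# Toy example: real numbers evolving toward the target value 10.
mutations = [lambda x: x + 1, lambda x: x - 1, lambda x: x * 1.5]
fitness = lambda x: -abs(x - 10)
pop = [0.0]
for _ in range(20):
    pop = evolve_step(pop, mutations, fitness)
print(pop[0])  # close to the target 10
```

In E-GAN the parents are generator parameter vectors, the mutation operators are gradient updates under the different adversarial objectives, and the environment (the discriminator) changes between steps.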
After each evolutionary step, the discriminative network $D$ (i.e., the environment) is updated to further distinguish real samples $x$ from fake samples $G(z)$ generated by the evolved generator(s), i.e.,

$\mathcal{L}_D = -\mathbb{E}_{x \sim p_{data}}[\log D(x)] - \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$.  (2)
Thus, the discriminative network $D$ (i.e., the environment) can continually provide adaptive losses that drive the population of generator(s) to evolve and produce better solutions. Next, we illustrate and discuss the proposed variation (or mutation) and evaluation operators in detail.
3.3 Mutations
We employ asexual reproduction with different mutations to produce the next generation's individuals (i.e., children). Specifically, these mutation operators correspond to different training objectives, which attempt to narrow the distance between the generated distribution and the data distribution from different perspectives. In this section, we introduce the mutations used in this work. (More mutation operations were tested, but the mutations described here already delivered convincing performance.) To analyze the corresponding properties of these mutations, we suppose that, for each evolutionary step, the optimal discriminator $D^*$ according to Eq. (2) has already been learned [11].
3.3.1 Minimax mutation
The minimax mutation corresponds to the minimax objective function in the original GAN:
$\mathcal{M}_G^{minimax} = \frac{1}{2}\,\mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$.  (3)
According to the theoretical analysis in [11], given the optimal discriminator $D^*$, the minimax mutation aims to minimize the Jensen-Shannon divergence (JSD) between the data distribution and the generated distribution. Although the minimax game is easy to explain and analyze theoretically, its performance in practice is disappointing, a primary problem being the generator's vanishing gradient: if the supports of the two distributions lie on disjoint manifolds, the JSD is a constant, leading to the vanishing gradient [1]. This problem is also illustrated in Fig. 2. When the discriminator rejects generated samples with high confidence (i.e., $D(G(z)) \to 0$), the gradient tends to vanish. However, if the generated distribution overlaps with the data distribution, meaning that the discriminator cannot completely distinguish real from fake samples, the minimax mutation provides effective gradients and continually narrows the gap between the data distribution and the generated distribution.
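The vanishing-gradient behaviour can be checked numerically. In this toy sketch (an assumption for illustration, not the paper's code) the discriminator output is $D = \sigma(s)$ for a logit $s$, so $\frac{d}{ds}\log(1 - \sigma(s)) = -\sigma(s)$, while for the heuristic loss below, $\frac{d}{ds}(-\log \sigma(s)) = \sigma(s) - 1$:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

# Generator gradients through the discriminator logit s.
# Minimax loss  log(1 - D):  d/ds = -D(s)        -> vanishes as D -> 0
# Heuristic loss -log D:     d/ds = D(s) - 1     -> stays near -1
def minimax_grad(s):
    return -sigmoid(s)

def heuristic_grad(s):
    return sigmoid(s) - 1.0

s = -10.0  # discriminator confidently rejects the fake sample, D ~ 0
print(abs(minimax_grad(s)))   # ~ 4.5e-05, almost no learning signal
print(abs(heuristic_grad(s))) # ~ 1.0, a useful gradient
```

This is exactly the regime shown in Fig. 2: a confident discriminator starves the minimax generator of gradient, while the heuristic objective keeps learning.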
3.3.2 Heuristic mutation
Unlike the minimax mutation, which minimizes the log probability of the discriminator being correct, the heuristic mutation aims to maximize the log probability of the discriminator being mistaken, i.e.,
$\mathcal{M}_G^{heuristic} = -\frac{1}{2}\,\mathbb{E}_{z \sim p_z}[\log D(G(z))]$.  (4)
Compared to the minimax mutation, the heuristic mutation does not saturate when the discriminator rejects the generated samples. Thus, the heuristic mutation avoids the vanishing gradient and provides useful generator updates (Fig. 2). However, according to [1], given the optimal discriminator $D^*$, minimizing the heuristic objective is equal to minimizing $\mathrm{KL}(p_g \| p_{data}) - 2\,\mathrm{JSD}(p_g \| p_{data})$, i.e., an inverted KL minus two JSDs. Intuitively, the negative sign on the JSD term pushes the two distributions away from each other. In practice, this may lead to training instability and fluctuations in generative quality [12].
3.3.3 Leastsquares mutation
The least-squares mutation is inspired by LSGAN [18], where a least-squares objective is utilized to penalize the generator so that it deceives the discriminator. In this work, we formulate the least-squares mutation as:
$\mathcal{M}_G^{least\text{-}square} = \mathbb{E}_{z \sim p_z}[(D(G(z)) - 1)^2]$.  (5)
As shown in Fig. 2, the least-squares mutation is non-saturating when the discriminator recognizes the generated sample (i.e., $D(G(z)) \to 0$). As the discriminator output grows, the least-squares mutation saturates, eventually approaching zero. Therefore, similar to the heuristic mutation, the least-squares mutation can avoid the vanishing gradient when the discriminator has a significant advantage over the generator. Meanwhile, compared to the heuristic mutation, although the least-squares mutation does not assign an extremely high cost to generating fake samples, it also does not assign an extremely low cost to mode dropping ([1] demonstrated that the heuristic objective suffers from mode collapse, since its KL term assigns a high cost to generating fake samples but an extremely low cost to mode dropping), which partly avoids mode collapse [18].
Note that, unlike GAN-Minimax and GAN-Heuristic, LSGAN employs a different loss ('least-squares') from ours (Eq. (2)) to optimize the discriminator. Yet, as shown in the Supplementary Material, the optimal discriminator of LSGAN is equivalent to ours. Therefore, although we employ only one discriminator as the environment to distinguish real and generated samples, it suffices to provide adaptive losses for all of the mutations described above.
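The three mutation objectives above can be compared side by side. This is a minimal numeric sketch under the assumption that the discriminator outputs on a batch of fake samples are given as a list of probabilities `d_fake` (a name introduced here for illustration):

```python
import math

def minimax_mutation(d_fake):
    # (1/2) E_z[log(1 - D(G(z)))]
    return 0.5 * sum(math.log(1.0 - d) for d in d_fake) / len(d_fake)

def heuristic_mutation(d_fake):
    # -(1/2) E_z[log D(G(z))]
    return -0.5 * sum(math.log(d) for d in d_fake) / len(d_fake)

def least_squares_mutation(d_fake):
    # E_z[(D(G(z)) - 1)^2]
    return sum((d - 1.0) ** 2 for d in d_fake) / len(d_fake)

# When the discriminator rejects the fakes (D ~ 0), the minimax loss
# saturates near 0 while the heuristic loss stays large.
rejected = [1e-4] * 8
print(minimax_mutation(rejected))        # close to 0 (saturated)
print(heuristic_mutation(rejected))      # ~ 4.6, strong signal
print(least_squares_mutation(rejected))  # ~ 1.0
```

In E-GAN, each of these losses would drive one gradient update of a copied parent, yielding one child per mutation.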
3.4 Evaluation
In an evolutionary algorithm, evaluation is the operation of measuring the quality of individuals. To determine the evolutionary direction (i.e., the selection of individuals), we devise an evaluation (or fitness) function to measure the performance of the evolved individuals (i.e., children). Typically, we focus on two generator properties: 1) the quality and 2) the diversity of the generated samples. First, we simply feed generator-produced images into the discriminator and take the average value of its output, which we name the quality fitness score:
$\mathcal{F}_q = \mathbb{E}_{z \sim p_z}[D(G(z))]$.  (6)
Note that the discriminator $D$ is continually updated toward optimality during the training process, so it reflects the quality of the generators at each evolutionary (or adversarial) step. If a generator obtains a relatively high quality score, its generated samples can deceive the discriminator, and the generated distribution lies closer to the data distribution.
Besides generative quality, we also pay attention to the diversity of generated samples and attempt to overcome the mode collapse issue in GAN optimization. Recently, [21] proposed a gradient-based regularization term to stabilize GAN optimization and suppress mode collapse. They observed that, when the generator collapses to a small region, the discriminator subsequently labels the collapsed points as fake with an obvious countermeasure (i.e., large gradients).
We employ a similar principle to evaluate generator optimization stability and generative diversity. Formally, the diversity fitness score is defined as:
$\mathcal{F}_d = -\log \big\| \nabla_D \big( -\mathbb{E}_{x \sim p_{data}}[\log D(x)] - \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))] \big) \big\|$.  (7)
The log of the discriminator's gradient norm is used to measure the diversity of the generated samples. If an updated generator obtains a relatively high diversity score, which corresponds to small discriminator gradients, its generated samples tend to be spread out enough that the discriminator has no obvious countermeasure. Thus, the mode collapse issue is suppressed, and the discriminator changes smoothly, which helps improve training stability.
Based on the aforementioned two fitness scores, we can finally give the evaluation (or fitness) function of the proposed evolutionary algorithm:
$\mathcal{F} = \mathcal{F}_q + \gamma\,\mathcal{F}_d$,  (8)
where $\gamma \geq 0$ balances the two measurements: generative quality and diversity. Overall, a relatively high fitness score $\mathcal{F}$ leads to higher training efficiency and better generative performance.
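The evaluation above can be sketched numerically. In this toy example (an illustration only), `d_fake` stands in for the discriminator outputs $D(G(z))$ of a child's samples, and `grad_norm` for the gradient norm inside Eq. (7), which a real implementation would obtain from the network:

```python
import math

def quality_fitness(d_fake):
    """F_q: average discriminator output on generated samples (Eq. (6))."""
    return sum(d_fake) / len(d_fake)

def diversity_fitness(grad_norm):
    """F_d: minus the log gradient norm of the discriminator loss (Eq. (7)).
    A small gradient norm (no obvious countermeasure) gives a high score."""
    return -math.log(grad_norm)

def fitness(d_fake, grad_norm, gamma=0.5):
    """Overall fitness F = F_q + gamma * F_d (Eq. (8))."""
    return quality_fitness(d_fake) + gamma * diversity_fitness(grad_norm)

# A child whose samples fool D and leave it only small gradients scores
# higher than one that D confidently rejects with large gradients.
good = fitness([0.6, 0.7, 0.8], grad_norm=0.1)
bad = fitness([0.1, 0.1, 0.2], grad_norm=5.0)
print(good > bad)  # True
```

Selection then simply keeps the children with the highest fitness, as in the evolutionary step of Section 3.2.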
3.5 E-GAN
Having introduced the proposed evolutionary algorithm with its mutations and evaluation criteria, we summarize the complete E-GAN training process in Algorithm 1. Overall, in E-GAN, the generators are regarded as an evolutionary population and the discriminator acts as the environment. In each evolutionary step, the generators are updated with different objectives (or mutations) to accommodate the current environment. According to the principle of "survival of the fittest", only the well-performing children survive and participate in future adversarial training. Unlike the two-player game with a fixed and static adversarial training objective in conventional GANs, E-GAN integrates the merits of different adversarial objectives and selects the most competitive solution. Thus, during training, the evolutionary algorithm not only largely suppresses the limitations (vanishing gradient, mode collapse, etc.) of the individual adversarial objectives, but also harnesses their advantages to search for a better solution.
4 Experiments
To evaluate the proposed E-GAN, in this section we run and analyze experiments on several generation tasks.
4.1 Implementation Details
We evaluate E-GAN on two synthetic datasets and three image datasets: CIFAR-10 [14], LSUN bedrooms [36], and CelebA [16]. For all of these tasks, the network architectures are based on DCGAN [24] and are only briefly introduced here; more details can be found in the Supplementary Material. We use the default hyperparameter values listed in Algorithm 1 for all experiments. Note that the number of parents is set to 1, which means only one (i.e., the best) child is retained in each evolutionary step. On the one hand, this reduces E-GAN's computational cost, thereby accelerating training. On the other, our experiments show that E-GAN already achieves impressive performance and stability even with only one survivor at each step. All experiments were run on Nvidia GTX 1080 Ti GPUs; training a model with the DCGAN architecture took around 30 hours on a single GPU.
4.2 Synthetic Datasets and Mode Collapse
In the first experiment, we adopt the experimental design proposed in [19], which trains GANs on 2D Gaussian mixture distributions. The mode collapse issue can be measured accurately on these synthetic datasets, since we can directly observe both the data distribution and the generated distribution. As shown in Fig. 3, we employ two challenging distributions to evaluate E-GAN: a mixture of 8 Gaussians arranged in a circle and a mixture of 25 Gaussians arranged in a grid. (We obtain both 2D distributions and the network architectures from the code provided with [12].)
We first compare the proposed evolutionary adversarial training framework with frameworks using an individual adversarial objective (i.e., conventional GANs). We train each method for 50K iterations and report the KDE plots in Fig. 3. The results show that all of the individual adversarial objectives suffer from mode collapse to a greater or lesser degree. However, by combining different objectives within our evolutionary framework, model performance is largely improved, and the model can accurately fit the target distributions. This further demonstrates that, during the evolutionary procedure, the proposed evaluation mechanism can recognize well-performing updates (i.e., offspring) and steer the population in a better evolutionary direction.
4.3 CIFAR10 and Inception Score
When evaluating a GAN model, sample quality and convergence speed are two important criteria. We train different GANs on CIFAR-10 and plot inception scores [26] over the course of training (Fig. 4, left and middle). The same DCGAN-based network architecture is used in all methods.
As shown in Fig. 4 (left), E-GAN reaches a higher inception score in fewer training steps. Meanwhile, E-GAN also shows comparable stability as it approaches convergence. By comparison, conventional GANs expose their respective limitations, such as instability at convergence (GAN-Heuristic), slow convergence (GAN-Least-squares), and failure to make progress (GAN-Minimax). As mentioned above, the different objectives measure the distance between the generated and data distributions under different metrics, each with its own pros and cons. Utilizing the evolutionary framework, E-GAN not only overcomes the limitations of these individual adversarial objectives, but also outperforms other GANs (WGAN and its improved variant WGAN-GP). Furthermore, although E-GAN takes more time per iteration, it achieves a comparable convergence speed in terms of wall-clock time (Fig. 4, middle).
During E-GAN training, we recorded the objective selected at each evolutionary step (Fig. 4, right). At the beginning of training, the heuristic mutation and the least-squares mutation are selected more often than the minimax mutation. This may be because the minimax mutation struggles to provide effective gradients (i.e., suffers from vanishing gradients) when the discriminator can easily recognize generated samples. As the generator approaches convergence (after 20K steps), ever more minimax mutations are employed, while the number of selected heuristic mutations falls. As noted earlier, the negative JSD term of the heuristic mutation may push the generated distribution away from the data distribution and lead to training instability. In E-GAN, however, we have other mutation options beyond the heuristic mutation, which improves stability at convergence.
4.4 LSUN and Architecture Robustness
Architecture robustness is another advantage of E-GAN. To demonstrate the training stability of our method, we train different network architectures on the LSUN bedroom dataset [36] and compare with several existing works. In addition to the baseline DCGAN architecture, we choose three additional architectures corresponding to different training challenges: (1) limiting the recognition capability of the discriminator, i.e., a 2-Conv-1-FC LeakyReLU discriminator; (2) limiting the expression capability of the generator, i.e., no batch normalization and a constant number of filters in the generator; and (3) reducing the capability of the generator and discriminator together, i.e., removing batch normalization from both the generator and the discriminator. For each architecture, we test five different methods: DCGAN, LSGAN, standard WGAN (with weight clipping), WGAN-GP (with gradient penalty), and our E-GAN. For each method, we use the default configurations recommended in the respective studies (these methods are summarized in [12]) and train each model for 200K iterations. As shown in Fig. 6, E-GAN generates reasonable results even where the other methods fail. Furthermore, based on the DCGAN architecture, we train E-GAN to generate bedroom images (Fig. 5); here we remove the batch normalization layers in the generator, and the detailed architecture and more generated images are reported in the Supplementary Material. These generated images demonstrate that E-GAN can be trained to produce diverse, high-quality images from the target data distribution.
4.5 CelebA and Space Continuity
Since humans excel at identifying facial flaws, generating high-quality human face images is challenging. As with the bedrooms, we employ the same architectures to generate RGB human face images (Fig. 7). In addition, given a well-trained generator, we evaluate the performance of the embedding in the latent space of noise vectors $z$. In Fig. 8, we first select pairs of generated faces and record their corresponding latent vectors $z_1$ and $z_2$. The two images in each pair have different attributes, such as gender, expression, hairstyle, and age. We then generate novel samples by linearly interpolating between these pairs (i.e., between their corresponding noise vectors). We find that the generated samples can change seamlessly between these semantically meaningful face attributes. This experiment demonstrates that generator training does not merely memorize training samples but learns a meaningful projection from the latent noise space to face images. It also shows that the generator trained by E-GAN does not suffer from mode collapse and exhibits good continuity in the latent space.

5 Conclusion
In this paper, we present an evolutionary GAN framework (E-GAN) for training deep generative models. To reduce training difficulties and improve generative performance, we devise an evolutionary algorithm that evolves a population of generators to adapt to the dynamic environment (i.e., the discriminator $D$). In contrast to conventional GANs, the evolutionary paradigm allows the proposed E-GAN to overcome the limitations of individual adversarial objectives and preserve the best offspring after each iteration. Experiments show that E-GAN improves the training stability of GAN models and achieves convincing performance on several image generation tasks. Future work will focus on further exploring the relationship between the environment (i.e., the discriminator) and the evolutionary population (i.e., the generators) and on further improving generative performance.
References
 [1] M. Arjovsky and L. Bottou. Towards principled methods for training generative adversarial networks. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.

 [2] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning (ICML), pages 214–223, 2017.
 [3] S. Arora, R. Ge, Y. Liang, T. Ma, and Y. Zhang. Generalization and equilibrium in generative adversarial nets (GANs). In Proceedings of the 34th International Conference on Machine Learning (ICML), pages 224–232, 2017.
 [4] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), pages 2172–2180, 2016.
 [5] K. A. De Jong. Evolutionary computation: a unified approach. MIT press, 2006.
 [6] A. E. Eiben and J. Smith. From evolutionary computation to the evolution of things. Nature, 521(7553):476, 2015.
 [7] A. E. Eiben, J. E. Smith, et al. Introduction to evolutionary computing, volume 53. Springer, 2003.
 [8] C. Finn, I. Goodfellow, and S. Levine. Unsupervised learning for physical interaction through video prediction. In Advances in Neural Information Processing Systems (NIPS), pages 64–72, 2016.
 [9] Z. Gan, L. Chen, W. Wang, Y. Pu, Y. Zhang, H. Liu, C. Li, and L. Carin. Triangle generative adversarial networks. In Advances in Neural Information Processing Systems (NIPS), 2017.
 [10] I. Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160, 2016.
 [11] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), pages 2672–2680, 2014.
 [12] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville. Improved training of wasserstein gans. In Advances in Neural Information Processing Systems (NIPS), 2017.

 [13] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
 [14] A. Krizhevsky. Learning multiple layers of features from tiny images. 2009.

 [15] S. Lander and Y. Shang. EvoAE: a new evolutionary method for training autoencoders for deep learning networks. In Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, volume 2, pages 790–795. IEEE, 2015.
 [16] Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In The IEEE International Conference on Computer Vision (ICCV), pages 3730–3738, 2015.
 [17] J. Lu, A. Kannan, J. Yang, D. Parikh, and D. Batra. Best of both worlds: Transferring knowledge from discriminative learning to a generative visual dialog model. In Advances in Neural Information Processing Systems (NIPS), 2017.
 [18] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. Paul Smolley. Least squares generative adversarial networks. In The IEEE International Conference on Computer Vision (ICCV), 2017.
 [19] L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein. Unrolled generative adversarial networks. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.
 [20] R. Miikkulainen, J. Liang, E. Meyerson, A. Rawal, D. Fink, O. Francon, B. Raju, A. Navruzyan, N. Duffy, and B. Hodjat. Evolving deep neural networks. arXiv preprint arXiv:1703.00548, 2017.
 [21] V. Nagarajan and J. Z. Kolter. Gradient descent gan optimization is locally stable. In Advances in Neural Information Processing Systems (NIPS), 2017.
 [22] A. Nguyen, J. Clune, Y. Bengio, A. Dosovitskiy, and J. Yosinski. Plug & play generative networks: Conditional iterative generation of images in latent space. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
 [23] T. D. Nguyen, T. Le, H. Vu, and D. Phung. Dual discriminator generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), 2017.
 [24] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In Proceedings of the International Conference on Learning Representations (ICLR), 2016.

 [25] E. Real, S. Moore, A. Selle, S. Saxena, Y. L. Suematsu, Q. Le, and A. Kurakin. Large-scale evolution of image classifiers. In Proceedings of the 34th International Conference on Machine Learning (ICML), 2017.
 [26] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems (NIPS), pages 2234–2242, 2016.
 [27] T. Salimans, J. Ho, X. Chen, and I. Sutskever. Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864, 2017.
 [28] A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu. Wavenet: A generative model for raw audio. In 9th ISCA Speech Synthesis Workshop, pages 125–125.
 [29] C. Vondrick, H. Pirsiavash, and A. Torralba. Generating videos with scene dynamics. In Advances In Neural Information Processing Systems (NIPS), pages 613–621, 2016.
 [30] C. Vondrick and A. Torralba. Generating the future with adversarial transformers. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
 [31] C. Wang, C. Wang, C. Xu, and D. Tao. Tag disentangled generative adversarial networks for object image rerendering. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI), pages 2901–2907, 2017.
 [32] W. Wang, Q. Huang, S. You, C. Yang, and U. Neumann. Shape inpainting using 3d generative adversarial network and recurrent convolutional networks. In The IEEE International Conference on Computer Vision (ICCV), 2017.
 [33] Y. Wang, C. Xu, J. Qiu, C. Xu, and D. Tao. Towards evolutional compression. arXiv preprint arXiv:1707.08005, 2017.
 [34] X. Yao. Evolving artificial neural networks. Proceedings of the IEEE, 87(9):1423–1447, 1999.
 [35] S. R. Young, D. C. Rose, T. P. Karnowski, S.-H. Lim, and R. M. Patton. Optimizing deep learning hyperparameters through an evolutionary algorithm. In Proceedings of the Workshop on Machine Learning in High-Performance Computing Environments, page 4. ACM, 2015.
 [36] F. Yu, A. Seff, Y. Zhang, S. Song, T. Funkhouser, and J. Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
 [37] H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. N. Metaxas. StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. In The IEEE International Conference on Computer Vision (ICCV), 2017.
 [38] Y. Zhang, Z. Gan, and L. Carin. Generating text via adversarial training. In NIPS workshop on Adversarial Training, 2016.
 [39] J. Zhao, M. Mathieu, and Y. LeCun. Energybased generative adversarial network. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.
 [40] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In The IEEE International Conference on Computer Vision (ICCV), 2017.