Creative AI Through Evolutionary Computation

01/12/2019 ∙ by Risto Miikkulainen, et al.

In the last decade or so we have seen tremendous progress in Artificial Intelligence (AI). AI is now in the real world, powering applications that have a large practical impact. Most of it is based on modeling, i.e., machine learning of statistical models that make it possible to predict what the right decision might be in future situations. The next step for AI is machine creativity, i.e., tasks where the correct, or even good, solutions are not known but need to be discovered. Methods for machine creativity have existed for decades. I believe we are now in a similar situation to where deep learning was a few years ago: with a million-fold increase in computational power, those methods can now scale up to creativity in real-world tasks. In particular, Evolutionary Computation is in a unique position to take advantage of that power, and become the next deep learning.
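To make the core idea concrete, the following is a minimal sketch of evolutionary search in the spirit the abstract describes: a population of candidate solutions is repeatedly selected, recombined, and mutated toward higher fitness. The toy objective (maximizing 1-bits in a bit string), the selection scheme, and all parameter values are illustrative assumptions, not methods from the paper.

```python
import random

def fitness(genome):
    # Toy objective: count of 1-bits. A real creative task would plug in
    # a domain-specific evaluation here (e.g., a design or strategy score).
    return sum(genome)

def evolve(genome_len=20, pop_size=50, generations=100,
           mutation_rate=0.05, seed=0):
    rng = random.Random(seed)
    # Initialize a random population of bit-string genomes.
    population = [[rng.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Variation: one-point crossover followed by bit-flip mutation.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in child]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```

The appeal of this loop for creativity is that nothing in it assumes the good solutions are known in advance; fitness evaluation is the only problem-specific component, which is also why the approach parallelizes well across large compute.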


