
A Very Brief and Critical Discussion on AutoML

by Bin Liu

This contribution presents a very brief and critical discussion on automated machine learning (AutoML), which is categorized here into two classes, referred to as narrow AutoML and generalized AutoML, respectively. The conclusions yielded from this discussion can be summarized as follows: (1) most existing research on AutoML belongs to the class of narrow AutoML; (2) advances in narrow AutoML are mainly motivated by commercial needs, and any benefit obtained comes at the cost of increased computing burden; (3) the concept of generalized AutoML has a strong tie in spirit with artificial general intelligence (AGI), also called "strong AI", for which major obstacles remain before pivotal progress can be made.





I Introduction

AutoML has recently emerged as a hot research topic in the field of machine learning (ML) and artificial intelligence (AI). As we know, a typical ML pipeline requires substantial human participation in, e.g., data pre-processing, feature engineering, algorithm selection, model selection, and hyperparameter optimization. The purpose of AutoML is to automate the ML pipeline, relieving users of these cumbersome tasks, which are often beyond the abilities of non-experts.
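To make these stages concrete, the following is a minimal, self-contained sketch of the manual decisions a typical pipeline involves. The data, the one-feature ridge model, and all names here are toy illustrations of my own, not a real ML library; the point is simply that the feature candidates, the hyperparameter grid, and the selection criterion are all chosen by a human.

```python
import random

random.seed(1)
# Toy data: y = 2x + noise (illustrative, not a real dataset).
data = [(k / 10, 2 * k / 10 + random.gauss(0, 0.05)) for k in range(20)]
train, valid = data[:15], data[15:]

def fit_ridge(pairs, lam, feat):
    # One-feature ridge regression, closed form:
    # w = sum(f(x)*y) / (sum(f(x)^2) + lam)
    sxy = sum(feat(x) * y for x, y in pairs)
    sxx = sum(feat(x) ** 2 for x, y in pairs)
    return sxy / (sxx + lam)

def mse(pairs, w, feat):
    return sum((y - w * feat(x)) ** 2 for x, y in pairs) / len(pairs)

# Each of the following choices is normally made by a human expert:
features = {"identity": lambda x: x, "square": lambda x: x * x}  # feature engineering
lambdas = [0.0, 0.01, 0.1, 1.0]                                  # hyperparameter grid

best = None
for fname, feat in features.items():      # model / feature selection
    for lam in lambdas:                   # hyperparameter optimization
        w = fit_ridge(train, lam, feat)
        err = mse(valid, w, feat)         # validation-based selection criterion
        if best is None or err < best[0]:
            best = (err, fname, lam, w)

print(best)  # the configuration an AutoML system would try to find automatically
```

Since the toy data are generated by a linear law, the search settles on the identity feature with a weight near 2; an AutoML system aims to make these nested choices without the human in the loop.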

The notion of AutoML was first introduced in an ICML 2014 workshop [1], although the idea appeared earlier, e.g., in [2]. Classical examples of AutoML implementations include the Bayesian optimization (BO) based automation of WEKA (Auto-WEKA) [2] and scikit-learn (auto-sklearn) [3], the genetic programming based automation of ML pipelines in the Python library TPOT [4] and the RECIPE framework [5], and Google's Cloud AutoML, which performs adaptive neural architecture search [6].

Although intuition suggests that the idea of AutoML is appealing, and it has indeed achieved some successes in certain applications, I shall provide several critical remarks on it, hoping to stimulate more careful consideration and discussion of the concept of, and the technologies for, AutoML.

II Discussions

II-A A Classification of AutoML

In the first AutoML workshop, AutoML was described as a research area that targets the progressive automation of ML [1]. The term “progressive” indicates that fully automated ML is likely a long-term goal. Here I categorize all possible AutoML techniques into two classes, referred to as narrow AutoML and generalized AutoML, respectively. The former denotes all intermediate ML techniques developed along the way toward fully automated ML, while the latter denotes fully automated ML itself. In contrast with traditional ML, narrow AutoML can reduce, but cannot fully eliminate, experts’ involvement; if generalized AutoML were ideally implemented, no expert effort would be needed to perform an ML task.

II-B On narrow AutoML

I argue that most existing progress in the area of AutoML is progress in the sub-field of narrow AutoML. Take BO-based automatic algorithm configuration as an example. BO has been successfully used for hyperparameter optimization [7], model selection [8], structure tuning of neural networks [9], and so on. The basic idea of BO is to substitute an outer-loop algorithm, e.g., Gaussian process (GP) regression, for the expert’s effort in configuring hyperparameters or model structures for the inner-loop ML algorithm of interest. The outer-loop GP-based search procedure can thus be regarded as a surrogate of the expert, whose task is to find appropriate hyperparameter values or model structures for the inner-loop algorithm. BO-based AutoML is only a kind of narrow AutoML because, as an outer-loop algorithm, BO itself has hyperparameters, e.g., the kernel type and the acquisition function, that must be configured by an expert or by yet another outer-loop algorithm acting as a surrogate of the expert. For example, in [10], a meta-learning technique is adopted to initialize BO, while both the selection and the parameter configuration of the meta-learning technique are done by a human expert. In [11, 8], a compositional kernel search strategy is developed to automate BO, but it still requires expert knowledge to specify the space of compositional kernels as well as the hyperparameter priors.
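To make the outer-loop/inner-loop structure concrete, below is a minimal pure-Python sketch of BO with a GP surrogate over a single hyperparameter. The toy validation loss, the RBF kernel, its length scale, and the lower-confidence-bound acquisition with its trade-off constant are all illustrative assumptions of mine; note that the last three are exactly the kind of settings that, per the argument above, an expert must still choose.

```python
import math
import random

def validation_loss(x):
    # Toy stand-in for the inner-loop ML algorithm's validation loss,
    # viewed as a black-box function of one hyperparameter x in [0, 1].
    return (x - 0.7) ** 2

def rbf(a, b, length_scale):
    # RBF kernel; the kernel type and length_scale are expert choices.
    return math.exp(-((a - b) ** 2) / (2 * length_scale ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, x_star, length_scale, jitter=1e-6):
    # GP posterior mean and variance at x_star given observations (xs, ys).
    n = len(xs)
    K = [[rbf(xs[i], xs[j], length_scale) + (jitter if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    k_star = [rbf(x, x_star, length_scale) for x in xs]
    alpha = solve(K, ys)
    mean = sum(k_star[i] * alpha[i] for i in range(n))
    v = solve(K, k_star)
    var = max(1.0 - sum(k_star[i] * v[i] for i in range(n)), 0.0)
    return mean, var

def bayes_opt(n_init=3, n_iter=10, length_scale=0.2, beta=2.0):
    # length_scale and beta are the outer loop's OWN hyperparameters,
    # configured here by hand -- the point made in the text.
    random.seed(0)
    xs = [random.random() for _ in range(n_init)]
    ys = [validation_loss(x) for x in xs]
    grid = [i / 99 for i in range(100)]
    for _ in range(n_iter):
        def lcb(x):
            # lower confidence bound: low predicted loss, high uncertainty
            m, v = gp_posterior(xs, ys, x, length_scale)
            return m - beta * math.sqrt(v)
        x_next = min(grid, key=lcb)
        xs.append(x_next)
        ys.append(validation_loss(x_next))
    best = min(range(len(xs)), key=lambda i: ys[i])
    return xs[best], ys[best]
```

Running `bayes_opt()` homes in on the minimizer near 0.7 in a handful of inner-loop evaluations, but only after a human fixed the kernel, length scale, acquisition rule, and budget.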

Although the above analysis is restricted to BO, the same conclusion holds for other types of automatic algorithm configuration techniques, e.g., transfer learning, reinforcement learning, and heuristic methods, as surveyed in [12]. To summarize, for most existing AutoML works, regardless of the number of layers of outer-loop algorithms, the configuration of the outermost layer is ultimately done by human experts, so they all belong to the narrow AutoML class. Employing such narrow AutoML methods, the amount of required expert knowledge can be reduced, but it cannot be eliminated.
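This layered structure can be sketched as follows. All functions and settings here are hypothetical illustrations; the point is that however many tuning layers are stacked, the outermost layer's settings are still hard-coded by a human.

```python
def inner_loss(lr):
    # inner-loop ML algorithm: a toy validation loss as a function of
    # one hyperparameter lr (illustrative, not a real training run)
    return (lr - 0.2) ** 2

def tune_lr(grid):
    # layer 1: an outer loop that configures the inner algorithm
    return min(grid, key=inner_loss)

def tune_grid(resolutions):
    # layer 2: an outer-outer loop that configures layer 1's grid
    candidates = [tune_lr([i / n for i in range(1, n + 1)])
                  for n in resolutions]
    return min(candidates, key=inner_loss)

# Layer 2's own setting is still specified by a human expert -- the
# "outermost layer" whose configuration no inner loop supplies:
RESOLUTIONS = [5, 10, 20]
best_lr = tune_grid(RESOLUTIONS)
print(best_lr)
```

One could of course wrap a third loop around `RESOLUTIONS`, but that loop would again have settings of its own, and so on; some outermost choice always remains with the human.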

Another point to note is that any benefit gained by reducing the expert’s involvement comes at the cost of increased computing burden. Taking BO-based AutoML as an instance again, any automation brought by BO is implemented by expanding the search space of hyperparameters and/or models for the inner-loop ML algorithm. Consider an ideal extreme case in which a qualified expert can determine all hyperparameter values and the model structure very quickly, based entirely on experience; then no computation beyond running the algorithm itself is needed to finish the learning task. If we now substitute this expert with an outer-loop BO algorithm, we must pay the corresponding computational cost of initializing BO and letting it search the space of hyperparameter values and model structures.
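This trade-off can be made concrete by counting training runs. The functions below are toy stand-ins of my own: an expert who already knows the right setting pays for one train/validate cycle, while an automated search over an expanded space pays one cycle per candidate.

```python
evals = {"count": 0}

def train_and_validate(lr):
    # toy stand-in for one full train/validate cycle of the inner algorithm
    evals["count"] += 1
    return (lr - 0.1) ** 2  # validation loss, minimized at lr = 0.1

# Expert path: the right configuration is known from experience -- one run.
expert_loss = train_and_validate(0.1)
expert_cost = evals["count"]

# Automated path: the search space is expanded, and each candidate costs a run.
evals["count"] = 0
grid = [10 ** (-k / 2) for k in range(8)]
auto_loss = min(train_and_validate(lr) for lr in grid)
auto_cost = evals["count"]

print(expert_cost, auto_cost)  # the automation multiplies the computing cost
```

Here the automated search reaches the same loss as the expert, but at eight times the number of training runs; in realistic pipelines the expansion factor is far larger.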

II-C On generalized AutoML

As discussed above, narrow AutoML is characterized by the algorithmic configuration of an inner-loop ML algorithm by an outer-loop algorithm, and thus cannot achieve fully automated ML, since the outermost layer still has to be configured by human experts. In contrast, generalized AutoML means fully automated ML without any expert involvement or expert knowledge, which represents the final goal of AutoML research, as stated in recent AutoML workshops at ICML [1, 13, 14]. I argue that the concept of generalized AutoML has a strong tie in spirit with AGI, also called “strong AI”, for which major obstacles remain before pivotal progress can be made. For details on AGI, see [15, 16, 17, 18].

III Conclusions

In this brief note, I categorized AutoML into two classes, namely narrow AutoML and generalized AutoML, and showed that most existing advances in the area of AutoML fall within the scope of narrow AutoML. I also provided a cautious reminder that employing narrow AutoML still requires some expert involvement or expert knowledge, and that any benefit obtained comes at the cost of increased computing burden. Finally, I pointed out that the concept of generalized AutoML is closely related to AGI, for which major obstacles remain before pivotal progress can be made.


  • [1] F. Hutter, R. Caruana, R. Bardenet, M. Bilenko, I. Guyon, B. Kegl, and H. Larochelle, “AutoML 2014 workshop,” in ICML, 2014.
  • [2] C. Thornton, F. Hutter, H. Hoos, and K. Leyton-Brown, “Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms,” in Proc. of the 19th ACM SIGKDD.   ACM, 2013, pp. 847–855.
  • [3] M. Feurer, A. Klein, K. Eggensperger, J. Springenberg, M. Blum, and F. Hutter, “Efficient and robust automated machine learning,” in NIPS, 2015, pp. 2962–2970.
  • [4]

    R. Olson, R. Urbanowicz, P. Andrews, N. Lavender, and J. Moore, “Automating biomedical data science through tree-based pipeline optimization,” in

    European Conf. on the Applications of Evolutionary Computation

    .   Springer, 2016, pp. 123–137.
  • [5] A. G. de Sá, W. J. G. Pinto, L. O. V. Oliveira, and G. L. Pappa, “Recipe: a grammar-based framework for automatically evolving classification pipelines,” in European Conf. on Genetic Programming.   Springer, 2017, pp. 246–261.
  • [6] J. Li and F. Li, “Cloud AutoML: Making AI accessible to every business,” https://www.blog.google/topics/google-cloud/cloud-automl-making-ai-accessible-everybusiness, 2018.
  • [7] B. Shahriari, K. Swersky, Z. Wang, R. Adams, and N. De Freitas, “Taking the human out of the loop: A review of Bayesian optimization,” Proceedings of the IEEE, vol. 104, no. 1, pp. 148–175, 2016.
  • [8] G. Malkomes, C. Schaff, and R. Garnett, “Bayesian optimization for automated model selection,” in NIPS Workshop on Automatic Machine Learning, 2016, pp. 2900–2908.
  • [9] H. Mendoza, A. Klein, M. Feurer, J. Springenberg, and F. Hutter, “Towards automatically-tuned neural networks,” in NIPS Workshop on Automatic Machine Learning, 2016, pp. 58–65.
  • [10] M. Feurer, J. Springenberg, and F. Hutter, “Initializing Bayesian hyperparameter optimization via meta-learning,” in AAAI, 2015, pp. 1128–1135.
  • [11] J. Gardner, C. Guo, K. Weinberger, R. Garnett, and R. Grosse, “Discovering and exploiting additive structure for Bayesian optimization,” in Artificial Intelligence and Statistics, 2017, pp. 1311–1319.
  • [12] Q. Yao, M. Wang, H. Jair Escalante, I. Guyon, Y. Hu, Y. Li, W. Tu, Q. Yang, and Y. Yu, “Taking human out of learning applications: A survey on automated machine learning,” arXiv preprint arXiv:1810.13306, 2018.
  • [13] R. Garnett, F. Hutter, and J. Vanschoren, “AutoML 2018 workshop,” in ICML, 2018.
  • [14] R. Adams, N. de Freitas, K. Smith-Miles, and M. Sebag, “AutoML 2016 workshop,” in ICML, 2016.
  • [15] B. Goertzel and C. Pennachin, Artificial general intelligence.   Springer, 2007, vol. 2.
  • [16] B. Goertzel and P. Wang, “A foundational architecture for artificial general intelligence,” Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms, vol. 6, p. 36, 2007.
  • [17] P. Wang and B. Goertzel, “Introduction: Aspects of artificial general intelligence,” in Proc. of the 2007 Conf. on Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proc. of the AGI Workshop 2006.   IOS Press, 2007, pp. 1–16.
  • [18] P. Voss, “Essentials of general intelligence: The direct path to artificial general intelligence,” in Artificial general intelligence.   Springer, 2007, pp. 131–157.