Automated Machine Learning (AutoML) aims to automate the process of building machine learning models, for instance, by automating the selection and tuning of preprocessing and learning algorithms in machine learning pipelines. In recent years, many AutoML systems have been developed, such as Auto-WEKA, auto-sklearn, TPOT, and ML-Plan. They vary in the types of pipelines they build (e.g. fixed or variable length), how they optimize them (e.g. using evolutionary or Bayesian optimization), and whether or how they employ meta-learning (e.g. warm-starting) or post-processing (e.g. ensembling).
We demonstrate a new open-source AutoML system, GAMA, which distinguishes itself by its modularity (allowing users to compose AutoML systems from sub-components), extensibility (allowing new components to be added), transparency (tracking and visualizing the search process to better understand what the AutoML system is doing), and support for research, such as integration with the AutoML benchmark. The main differences to our earlier publication () are a redesign that allows for a modular AutoML pipeline and the addition of a graphical user interface. A video demonstration can be found at https://youtu.be/angsGMvEd1w; code and documentation are available at https://github.com/PGijsbers/gama/.
As such, it caters to a wide range of users, from people without a deep machine learning background who want an easy-to-use AutoML tool, to those who want better control and understanding of the AutoML process, and especially researchers who want to perform systematic AutoML research.
Currently, three different search algorithms and two post-processing techniques are available, but we welcome and plan to include more techniques in the future. For novice users, GAMA offers a default configuration shown to perform well in our benchmarks.
2 System Overview
Modular AutoML Pipeline
Rather than prescribing a specific combination of AutoML techniques, GAMA allows users to combine different search and post-processing algorithms into a flexible AutoML ‘pipeline’ that can be tuned to the problem at hand.
After the pipeline search has completed, a post-processing technique will be executed to construct the final model. It is currently possible to either train the single best pipeline or create an ensemble out of pipelines evaluated during search, as described in . In subsequent work, we plan to expand the number of search and post-processing techniques available out-of-the-box.
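The compositional idea behind this modular AutoML pipeline can be illustrated with a minimal, self-contained sketch. All names below are hypothetical stand-ins for illustration, not GAMA's actual API:

```python
# Sketch of a modular AutoML 'pipeline': the search strategy and the
# post-processing step are interchangeable components.

def exhaustive_search(evaluate, candidates, budget):
    """Search component: evaluate candidates in order until the budget runs out."""
    return [(c, evaluate(c)) for c in candidates[:budget]]

def fit_best(evaluated):
    """Post-processing component: return the single best pipeline."""
    return max(evaluated, key=lambda e: e[1])[0]

def automl(evaluate, candidates, search, post_process, budget=20):
    """Compose any search strategy with any post-processing step."""
    return post_process(search(evaluate, candidates, budget))

# Toy 'pipelines' are just configurations scored by a toy objective.
candidates = [{"alpha": a} for a in (0.01, 0.1, 1.0, 10.0)]
score = lambda cfg: -abs(cfg["alpha"] - 0.1)  # optimum at alpha=0.1

best = automl(score, candidates, exhaustive_search, fit_best)
# best == {"alpha": 0.1}
```

Swapping in a different search strategy or an ensembling post-processor requires no change to the surrounding code, which is the property GAMA's modular pipeline provides for real scikit-learn pipelines.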
Listing 1 shows how to configure GAMA with non-default search and post-processing methods and use it as a drop-in replacement for scikit-learn estimators. An up-to-date version of this listing can be found at https://pgijsbers.github.io/gama/master/citing.html.
New AutoML algorithms, or variations on existing ones, can be included and tested with relative ease. For instance, each of the search algorithms described above has been implemented and integrated into GAMA in fewer than 170 lines of code, and all of them can make use of shared functions for logging, parallel pipeline evaluation, and adhering to runtime constraints. This modular design also allows researchers to study other questions, such as how to choose the search algorithm for a given AutoML problem.
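The kind of shared infrastructure a newly integrated search algorithm could reuse can be sketched with a small runtime-constraint helper. The helper and parameter names here are hypothetical illustrations, not GAMA's internal API:

```python
import random
import time

def run_until_deadline(propose, evaluate, seconds):
    """Shared helper: keep proposing and evaluating candidates until the
    time budget is exhausted, logging every (candidate, score) pair."""
    deadline = time.monotonic() + seconds
    log = []
    while time.monotonic() < deadline:
        candidate = propose()
        log.append((candidate, evaluate(candidate)))
    return log

# A 'new' search algorithm only needs to supply a proposal strategy;
# budgeting and logging come from the shared helper.
random.seed(0)
propose = lambda: {"alpha": random.choice([0.01, 0.1, 1.0])}
evaluate = lambda cfg: -abs(cfg["alpha"] - 0.1)

log = run_until_deadline(propose, evaluate, seconds=0.05)
```

Separating the proposal strategy from budgeting and logging is what keeps each search algorithm small: only the strategy itself has to be written anew.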
GAMA comes with a graphical web interface which allows novice users to start and configure GAMA. Moreover, it visualizes the AutoML process to enable researchers to easily monitor and analyse the behavior of specific AutoML configurations.
GAMA logs the creation and evaluation of each pipeline, including meta-data such as creation time and evaluation duration. For pipelines created through evolution, it also records the parent pipelines and how they differ. One can also compare multiple logs at once, creating figures such as Figure 1, which shows the convergence rate of five different GAMA runs over time on the airline dataset (https://www.openml.org/d/1169).
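A convergence curve like the one in Figure 1 is simply the running best score over the logged evaluations. The record layout below is a simplified, hypothetical stand-in for GAMA's actual log format:

```python
def convergence(records):
    """Given (timestamp, score) pairs ordered by time, return the
    running-best score at each evaluation."""
    best, curve = float("-inf"), []
    for t, score in records:
        best = max(best, score)
        curve.append((t, best))
    return curve

# Toy log: evaluation timestamps (seconds) and validation scores.
log = [(1.2, 0.71), (2.9, 0.68), (4.5, 0.75), (7.0, 0.73), (9.8, 0.80)]
# convergence(log) ->
#   [(1.2, 0.71), (2.9, 0.71), (4.5, 0.75), (7.0, 0.75), (9.8, 0.80)]
```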
GAMA is integrated with the open-source AutoML Benchmark introduced in . Figure 2 shows the results of running GAMA with its default settings on some of the largest and most challenging datasets for which every other framework had results in the original work. Although we could not run these experiments on the same (AWS) hardware, we took care to use the same computational constraints. The full and latest results will be made available in the GAMA documentation.
3 Related Work
GAMA compares most closely to auto-sklearn and TPOT, as they also optimize scikit-learn pipelines. Auto-sklearn and GAMA both implement the same ensembling technique. GAMA and TPOT both feature evolutionary search with NSGA2 selection, although GAMA's implementation uses asynchronous evolution, which is often faster. While TPOT and auto-sklearn have a fixed AutoML pipeline, they do allow modifications to their search space. To the best of our knowledge, GAMA is the only AutoML framework that offers a modular and extensible composition of AutoML systems, as well as extensive support for AutoML research.
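The ensembling technique shared by auto-sklearn and GAMA is in the spirit of Caruana et al.'s ensemble selection: greedily add (with replacement) whichever evaluated model most improves the ensemble's validation score. The sketch below is a simplified illustration of that idea, not either framework's exact implementation:

```python
def ensemble_select(predictions, y_true, rounds=10):
    """predictions: one prediction vector per model on a validation set.
    Returns the multiset of selected model indices (repeats allowed)."""

    def accuracy(pred):
        return sum(p == t for p, t in zip(pred, y_true)) / len(y_true)

    def ensemble_pred(members):
        # Majority vote over the selected members; ties go to the smaller label.
        out = []
        for i in range(len(predictions[0])):
            votes = {}
            for m in members:
                votes[predictions[m][i]] = votes.get(predictions[m][i], 0) + 1
            out.append(max(sorted(votes), key=lambda k: votes[k]))
        return out

    selected = []
    for _ in range(rounds):
        best_m = max(
            range(len(predictions)),
            key=lambda m: accuracy(ensemble_pred(selected + [m])),
        )
        selected.append(best_m)
    return selected
```

Because selection happens with replacement, strong models are effectively weighted by how often they are picked, which is what makes this simple greedy procedure work well on libraries of already-evaluated pipelines.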
In this proposal we presented GAMA, an open-source AutoML tool which facilitates AutoML research and skillful use through its modular design and built-in logging and visualization. Novice users can make use of the graphical interface to start GAMA, or simply use the default configuration which is shown to generate models of similar performance to other AutoML frameworks. Researchers can leverage GAMA’s modularity to integrate and test new AutoML search procedures in combination with other readily available building blocks, and then log, visualize, and analyze their behavior, or run extensive benchmarks. In future work, we aim to integrate additional search techniques as well as extend the AutoML pipeline with additional steps, such as warm-starting the pipeline search with meta-data.
This software was developed with support from the Data Driven Discovery of Models (D3M) program run by DARPA and the Air Force Research Laboratory.
-  Bergstra, J., Bengio, Y.: Random search for hyper-parameter optimization. Journal of Machine Learning Research 13(Feb), 281–305 (2012)
-  Caruana, R., Niculescu-Mizil, A., Crew, G., Ksikes, A.: Ensemble selection from libraries of models. In: Proceedings of the twenty-first international conference on Machine learning. p. 18 (2004)
-  Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6(2), 182–197 (2002)
-  Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., Hutter, F.: Efficient and robust automated machine learning. In: Advances in neural information processing systems. pp. 2962–2970 (2015)
-  Gijsbers, P., LeDell, E., Thomas, J., Poirier, S., Bischl, B., Vanschoren, J.: An open source automl benchmark. arXiv preprint arXiv:1907.00909 (2019)
-  Gijsbers, P., Vanschoren, J.: Gama: genetic automated machine learning assistant. Journal of Open Source Software 4(33), 1132 (2019)
-  Li, L., Jamieson, K., Rostamizadeh, A., Gonina, E., Hardt, M., Recht, B., Talwalkar, A.: Massively parallel hyperparameter tuning. arXiv preprint arXiv:1810.05934 (2018)
-  Mohr, F., Wever, M., Hüllermeier, E.: Ml-plan: Automated machine learning via hierarchical planning. Machine Learning 107(8-10), 1495–1515 (2018)
-  Olson, R.S., Urbanowicz, R.J., Andrews, P.C., Lavender, N.A., Kidd, L.C., Moore, J.H.: Applications of Evolutionary Computation: 19th European Conference, EvoApplications 2016, Porto, Portugal, March 30 – April 1, 2016, Proceedings, Part I, chap. Automating Biomedical Data Science Through Tree-Based Pipeline Optimization, pp. 123–137. Springer International Publishing (2016)
-  Thornton, C., Hutter, F., Hoos, H.H., Leyton-Brown, K.: Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. In: Proc. of KDD-2013. pp. 847–855 (2013)