A data scientist facing a challenging new supervised learning task does not generally invent a new algorithm. Instead, they consider what they know about the dataset and which algorithms have worked well for similar datasets in the past. Automated machine learning (AutoML) seeks to automate these tasks to enable widespread use of machine learning by non-experts. A major challenge is to develop fast, efficient algorithms to accelerate applications of machine learning kokiopoulou2019fast . This work develops automated solutions that exploit human expertise to learn which datasets are similar and which algorithms perform best. We use modern natural language processing (NLP) tools to teach AutoML systems how to read text descriptions of datasets, and we develop a structured representation of the solutions to Kaggle challenges so that our system can run winning solutions on new datasets.
A simple idea is to reuse machine learning pipelines that performed well (on the same task) on similar datasets. But what makes two datasets similar? The success of an AutoML system often hinges on this question, and different frameworks answer it differently: for example, AutoSklearn feurer2015efficient computes a set of metafeatures for each dataset, while OBOE yang2019oboe uses the performance of a few fast, informative models to compute latent features. More generally, for any supervised learning task, the list of recommended algorithms generated by any AutoML system can be viewed as a vector describing that task. Oddly, no previous work uses the information a human would check first: a summary description of the dataset, written in free text. These dataset features induce a metric structure on the space of datasets. Under an ideal metric, a model that performs well on one dataset would also perform well on nearby datasets. The methods we develop in this work show how to learn such a metric using the recommendations of any AutoML framework together with the dataset description.
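To make the metric idea concrete, here is a minimal sketch (not the paper's implementation) of nearest-dataset lookup by cosine distance over description embeddings. The dataset names and the three-dimensional toy vectors are purely illustrative; in practice the embeddings would come from a pre-trained sentence encoder.

```python
import math

def cosine_distance(u, v):
    """Cosine distance between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def nearest_dataset(query_emb, train_embs):
    """Return the name of the training dataset whose description
    embedding is closest to the query embedding."""
    return min(train_embs,
               key=lambda name: cosine_distance(query_emb, train_embs[name]))

# Toy "description embeddings" for two hypothetical training datasets.
train_embs = {
    "titanic": [0.9, 0.1, 0.0],
    "house-prices": [0.1, 0.9, 0.2],
}
print(nearest_dataset([0.8, 0.2, 0.1], train_embs))  # → titanic
```

Under an ideal metric, the pipeline known to work on the returned neighbor would also work on the query dataset.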
This work makes several contributions to the literature by marrying techniques from NLP with AutoML. First, we develop NLP text embeddings for datasets by reading the dataset metadata, including the dataset title, description, and keywords. We show that using these embeddings improves the performance of state-of-the-art AutoML frameworks such as OBOE yang2019oboe , AutoSklearn feurer2015efficient , AlphaD3M drori2019alphad3m , and TPOT olson2019tpot . Second, we develop NLP text embeddings for machine learning pipelines by reading the algorithm documentation. We show how to use these embeddings to develop a new training objective for AutoML: for any two training datasets, we compute the distance between the embeddings of the algorithm that performs best on each dataset. We learn a metric on dataset embeddings to match the distance between the embeddings of the corresponding best algorithms. We can use this metric for zero-shot AutoML: given a new dataset, we compute its embedding (from a text description of the dataset), use it to find the closest training dataset, and output the best algorithm known for that training dataset. Using the additional information present in the dataset metadata embeddings and pipeline embeddings improves the performance of existing AutoML systems. The third major contribution of this work is a new metadata dataset for AutoML that we call AutoKaggle. AutoKaggle consists of a collection of Kaggle competitions, tasks, winning pipelines, and an execution engine.
AutoML is an emerging field of machine learning with the potential to transform the practice of data science by automatically choosing a model to best fit the data. The reader interested in a comprehensive review of the field can consult one of the three surveys published in the last twelve months yao2018survey ; he2019automl ; zoller2019survey . A new benchmark amlb2019 provides a quantitative comparison of many top algorithms.
Language has a common unstructured representation of words, sentences, paragraphs, and trees of paragraphs which form stories. The most significant recent advances in NLP learn language models and embeddings from very large corpora of text devlin2018bert ; radford2019language . An unsupervised corpus of text is transformed into a supervised dataset by defining content-target pairs along the entire text: for example, target words that appear in each sentence, or target sentences that appear in each paragraph. A language model is first trained to learn a low-dimensional embedding of words or sentences, followed by a map from the low-dimensional content to the target mikolov2013efficient . This embedding can then map a new, unseen, and small dataset into the same low-dimensional space. This work is the first to propose using such embeddings for automated machine learning. Specifically, we use an embedding for datasets, an embedding for pipelines, and a neural network that captures the non-linear interactions between these embeddings.
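The content-target construction above can be sketched in a few lines. This is a generic skip-gram-style pair generator, not code from this work: each word is paired with its neighbors inside a context window, turning raw text into supervised examples.

```python
def skipgram_pairs(tokens, window=1):
    """Turn an unlabeled token sequence into supervised
    (context, target) pairs, skip-gram style: each word is paired
    with its neighbors within the window."""
    pairs = []
    for i, target in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((tokens[j], target))  # (context word, target word)
    return pairs

print(skipgram_pairs("automl reads dataset descriptions".split()))
```

A language model trained on such pairs learns the low-dimensional word or sentence embeddings described above.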
One major factor in the performance of an AutoML system is the base set of algorithms it can use to compose more complex pipelines. For a fair comparison, in our numerical experiments we compare our proposed methods only to other AutoML systems that build pipelines out of Scikit-learn scikit-learn primitives. Humans who compete in Kaggle competitions, however, do not restrict themselves in this way. Hence, as part of AutoKaggle, we have developed and released translations of every winning Kaggle entry that comply with the AutoKaggle pipeline format. Each of these can be interpreted by the AutoKaggle execution engine. The resulting Scikit-learn translation of a pipeline is sometimes better, but generally slightly worse, than the original human-engineered pipeline.
Table 1: Notation.
- Metadata of a dataset
- Machine learning task (classification or regression)
- AutoML framework: OBOE, AutoSklearn, AlphaD3M, TPOT, or human algorithm
- Solution pipeline produced for a dataset and task
- Performance of a pipeline evaluated on a dataset and task
- Pre-trained language embedding
- Language embedding of dataset metadata
- Distance between dataset metadata embeddings
- Nearest neighbor of a dataset under the embedding distance
- Pipeline of the most similar dataset by embedding
- Direct pipeline transfer using the dataset metadata embedding
- Language embedding of a solution pipeline
- Representation of embeddings for a dataset and task
- Interaction between embeddings
- Neural network input: pair of representations
- Neural network output: distance between human pipeline embeddings
This work uses NLP embeddings to find machine learning pipelines that perform well for a given dataset and task. We separately embed dataset metadata and machine learning pipelines and pass the embeddings through a neural network. This work designs appropriate embedding methods for both dataset metadata and machine learning pipelines and demonstrates that the resulting recommender system works well. Table 1 provides the mathematical notation that defines these methods.
We rely on NLP tools to produce embeddings of dataset metadata and of pipelines. Concretely, in our experiments, we embed dataset metadata by applying the USE embedding cer2018universal to the dataset description (including title, subtitle, description, and keywords), and we embed a pipeline by applying the same embedding to the function call and the header of each estimator used in the pipeline.
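One simple way to pool per-estimator text embeddings into a single pipeline embedding is to average them. The sketch below uses a hash-based stand-in embedder so it is self-contained; in the setting above, a pre-trained sentence encoder would replace `embed_text`, and mean pooling is an assumption of this illustration rather than the paper's stated choice.

```python
import hashlib

DIM = 8

def embed_text(text, dim=DIM):
    """Stand-in text embedder: hash character trigrams into a
    fixed-size, L2-normalized vector."""
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        h = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = sum(x * x for x in vec) ** 0.5 or 1.0
    return [x / norm for x in vec]

def embed_pipeline(estimator_texts):
    """Embed a pipeline as the mean of its per-estimator text
    embeddings (one call string / docstring header per estimator)."""
    embs = [embed_text(t) for t in estimator_texts]
    return [sum(col) / len(embs) for col in zip(*embs)]

p = embed_pipeline([
    "StandardScaler() -- standardize features by removing the mean",
    "GradientBoostingClassifier() -- gradient boosting for classification",
])
print(len(p))  # → 8
```

Any pooling that maps a variable-length list of estimator embeddings to a fixed-size vector would fit the same role.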
To facilitate evaluation of arbitrary pipelines, we have developed an execution engine in Python that can represent pipelines composed of machine learning primitives from the Scikit-learn library. The execution engine takes as input a description of a pipeline (consisting of machine learning primitives and their parameters, structured in a chain describing the order of execution) and computes the performance of that pipeline by running the primitives sequentially on a given dataset and task.
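A minimal sketch of such an engine follows. It is an illustration, not the released AutoKaggle engine: a pipeline is a chain of (primitive, params) pairs following the Scikit-learn fit/transform/predict convention, and the toy `Scale` and `MajorityClass` primitives exist only to make the example runnable.

```python
class ExecutionEngine:
    """Run a chain of primitives: every step but the last must
    implement fit/transform; the last implements fit/predict."""

    def run(self, pipeline, X_train, y_train, X_test, y_test):
        for cls, params in pipeline[:-1]:
            step = cls(**params)
            step.fit(X_train, y_train)
            X_train = step.transform(X_train)
            X_test = step.transform(X_test)
        cls, params = pipeline[-1]
        model = cls(**params)
        model.fit(X_train, y_train)
        preds = model.predict(X_test)
        # Accuracy as the evaluation metric for classification tasks.
        return sum(p == t for p, t in zip(preds, y_test)) / len(y_test)

class Scale:
    """Toy preprocessing primitive: scale each feature by a constant."""
    def __init__(self, factor=1.0):
        self.factor = factor
    def fit(self, X, y):  # stateless; kept for interface parity
        return self
    def transform(self, X):
        return [[v * self.factor for v in row] for row in X]

class MajorityClass:
    """Toy estimator: always predict the most frequent training label."""
    def fit(self, X, y):
        self.label = max(set(y), key=y.count)
        return self
    def predict(self, X):
        return [self.label] * len(X)

engine = ExecutionEngine()
acc = engine.run(
    [(Scale, {"factor": 2.0}), (MajorityClass, {})],
    X_train=[[1.0], [2.0], [3.0]], y_train=[0, 0, 1],
    X_test=[[1.5], [2.5]], y_test=[0, 1],
)
print(acc)  # → 0.5
```

With real Scikit-learn primitives substituted for the toys, the same chaining logic applies unchanged.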
The execution engine allows us to run any pipeline for a given task on any dataset. Our hypothesis is that datasets whose metadata embeddings are similar share successful pipelines. To test this hypothesis, we develop an AutoML approach that we call direct pipeline transfer. Given a new dataset, we find the training dataset whose metadata embedding is nearest to the new dataset's embedding, and evaluate that dataset's pipeline on the new dataset.
Dataset metadata is useful on its own, but even more powerful in combination with other information. We develop richer representations formed by concatenating metadata embeddings with pipeline embeddings for pipelines produced by any AutoML system, as shown in Figure 1. We refer to these dataset representations as AutoML embeddings, and we extend the direct pipeline transfer methodology to use them: given a new dataset, we find the training dataset whose AutoML embedding is nearest, and evaluate the corresponding pipeline on the new dataset.
A disadvantage of this method is that we must specify both the similarity metric and the relative importance of each component of the representation. Instead, we can learn the similarity metric and the interactions between representations automatically with a neural network. The network takes as input a pair of dataset representations and is trained to output the performance of the human-selected pipeline for the second dataset when evaluated on the first. At prediction time, given a new dataset, we compute its representation, use the network to compute its distance to every training dataset, choose the pipeline corresponding to the dataset with the smallest distance, and evaluate that pipeline's performance on the new dataset.
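The supervision for the metric network comes from cross-evaluating human-selected pipelines across dataset pairs. The sketch below shows one way to assemble those training examples; the `evaluate` callback, dataset names, and scores are hypothetical stand-ins for the execution engine and real evaluations.

```python
def pairwise_training_set(representations, human_pipelines, evaluate):
    """Build training pairs for the metric network: for every ordered
    pair of datasets (i, j), the input is the pair of representations
    and the target is the performance of dataset j's human-selected
    pipeline evaluated on dataset i."""
    examples = []
    names = list(representations)
    for i in names:
        for j in names:
            if i == j:
                continue
            target = evaluate(human_pipelines[j], i)
            examples.append(((representations[i], representations[j]), target))
    return examples

# Toy setup: two datasets with fake evaluation scores.
reps = {"a": [0.1, 0.9], "b": [0.8, 0.2]}
pipes = {"a": "pipeline_a", "b": "pipeline_b"}
scores = {("pipeline_b", "a"): 0.7, ("pipeline_a", "b"): 0.6}
examples = pairwise_training_set(reps, pipes, lambda p, d: scores[(p, d)])
print(len(examples))  # → 2
```

A regression network fit on these (representation pair, performance) examples yields the learned distance used at prediction time.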
An important methodology in data science is the common task framework blei2017datascience ; donoho2017fifty , in which a common dataset and task are given to multiple participants and evaluated using the same performance metrics. In this work we curate AutoKaggle, a meta-dataset that contains metadata about datasets, tasks, and solution pipelines. This dataset is useful for analyzing which solution components are used for which datasets and tasks, understanding which tasks and sub-tasks are posed for which datasets, recommending high-performance solutions for new unseen datasets and tasks, and identifying usage trends of machine learning libraries and primitives. The meta-dataset contains structured information about a wide variety of machine learning tasks, together with metadata about the data, task, and solution pipelines. Solution source code is parsed into structured machine learning pipelines including pre-processing operations, feature extractors, feature selectors, estimators, and post-processing operations. The dataset can be viewed as a sparse high-dimensional tensor: rows correspond to (dataset, problem) pairs, and the other dimensions correspond to possible values for preprocessors, feature extractors, feature selectors, estimators, and post-processors. The entries of the tensor are the performance of the corresponding pipeline for the task on the dataset.
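Because the tensor is sparse, one natural in-memory representation is a dictionary keyed by the tensor coordinates. The sketch below illustrates this view; the dataset, problem, and primitive names are illustrative, not entries from AutoKaggle itself.

```python
# Sparse tensor as a dict keyed by
# (dataset, problem, preprocessor, feature_extractor,
#  feature_selector, estimator, postprocessor) → performance.
autokaggle = {}

def record(dataset, problem, pipeline, performance):
    """Store one pipeline's performance at its tensor coordinates."""
    key = (dataset, problem,
           pipeline.get("preprocessor"), pipeline.get("feature_extractor"),
           pipeline.get("feature_selector"), pipeline.get("estimator"),
           pipeline.get("postprocessor"))
    autokaggle[key] = performance

record("titanic", "binary-classification",
       {"preprocessor": "StandardScaler", "estimator": "XGBClassifier"},
       0.83)

# Query: best recorded pipeline for a (dataset, problem) pair.
best = max((k for k in autokaggle
            if k[:2] == ("titanic", "binary-classification")),
           key=autokaggle.get)
print(best[5])  # → XGBClassifier
```

Missing components are stored as `None`, which keeps the representation sparse while preserving the tensor structure for the analyses listed above.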
Table 2 shows our results for a representative set of tabular datasets on classification tasks. For each dataset (row), Table 2 reports the mean evaluation accuracy of different pipelines (columns) running on the same well-defined task: the prediction accuracy of OBOE, AutoSklearn, AlphaD3M, and TPOT; the evaluation of the human-generated pipeline; and the predicted pipeline accuracy using the best dataset metadata embedding (DE) and single pipeline embedding (PE). All AutoML systems were given one minute of computation time for a fair comparison; our zero-shot AutoML using the dataset metadata embedding runs in under one second, and our pipeline embedding runs within the same one-minute budget while improving performance.
| Dataset | OBOE | AutoSklearn | AlphaD3M | TPOT | Human | Ours DE | Ours PE |
To implement the metric neural network that learns to predict the distance between the predicted pipeline embeddings of pairs of datasets, we construct a fully connected network with four layers, a batch size of 16, and 1200 training epochs, trained with the Adam optimizer at a 0.001 learning rate. The input to the network is the representation of the test dataset paired with the representation of every other dataset. We train the network for every test dataset and obtain our evaluation accuracy by running the selected pipeline on the test dataset using our execution engine.
We have introduced a neural architecture that embeds textual descriptions of dataset metadata and machine learning pipelines for AutoML. We use a new dataset, AutoKaggle, consisting of structured representations of winning solutions of Kaggle competitions, together with an execution engine to run machine learning pipelines. We make our data, models, and code publicly available autommlembeddings2019code . In future work we would like to apply our method to additional AutoML systems such as Auto-WEKA and H2O AutoML, and to compare the performance of our embedding-based AutoML using different large language embeddings such as BERT devlin2018bert and GPT-2 radford2019language .
-  David M Blei and Padhraic Smyth. Science and data science. Proceedings of the National Academy of Sciences, 114(33):8689–8692, 2017.
-  Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. Universal sentence encoder. arXiv preprint arXiv:1803.11175, 2018.
-  Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
-  David Donoho. 50 years of data science. Journal of Computational and Graphical Statistics, 26(4):745–766, 2017.
-  Iddo Drori, Yamuna Krishnamurthy, Remi Rampin, Raoni de Paula Lourenco, Kyunghyun Cho, Claudio Silva, and Juliana Freire. Automatic machine learning by pipeline synthesis using model-based reinforcement learning and a grammar. ICML Workshop on Automated Machine Learning, 2019.
-  Iddo Drori, Lu Liu, Yi Nian, Sharath Koorathota, Jie Li, Antonio Khalil Moretti, Juliana Freire, and Madeleine Udell. GitHub repo for AutoML using metadata language embeddings: data, models, and code. https://github.com/idrori/automl-embedding, 2019.
-  Matthias Feurer, Aaron Klein, Katharina Eggensperger, Jost Springenberg, Manuel Blum, and Frank Hutter. Efficient and robust automated machine learning. In Advances in Neural Information Processing Systems, pages 2962–2970, 2015.
-  P. Gijsbers, E. LeDell, S. Poirier, J. Thomas, B. Bischl, and J. Vanschoren. An open source AutoML benchmark. ICML Workshop on Automated Machine Learning, 2019.
-  Xin He, Kaiyong Zhao, and Xiaowen Chu. AutoML: A survey of the state-of-the-art. arXiv preprint arXiv:1908.00709, 2019.
-  Efi Kokiopoulou, Anja Hauth, Luciano Sbaiz, Andrea Gesmundo, Gabor Bartok, and Jesse Berent. Fast task-aware architecture inference. arXiv preprint arXiv:1902.05781, 2019.
-  Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. International Conference on Learning Representations Workshop, 2013.
-  Randal S Olson and Jason H Moore. TPOT: A tree-based pipeline optimization tool for automating machine learning. In Automated Machine Learning, pages 151–160. Springer, 2019.
-  F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
-  Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 2019.
-  Chengrun Yang, Yuji Akimoto, Dae Won Kim, and Madeleine Udell. OBOE: Collaborative filtering for AutoML model selection. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1173–1183, 2019.
-  Quanming Yao, Mengshuo Wang, Hugo Jair Escalante, Isabelle Guyon, Yi-Qi Hu, Yu-Feng Li, Wei-Wei Tu, Qiang Yang, and Yang Yu. Taking human out of learning applications: A survey on automated machine learning. CoRR, abs/1810.13306, 2018.
-  Marc-André Zöller and Marco F. Huber. Survey on automated machine learning. CoRR, abs/1904.12054, 2019.