Meta-Learning a Real-Time Tabular AutoML Method For Small Data

07/05/2022
by Noah Hollmann, et al.

We present TabPFN, an AutoML method that is competitive with the state of the art on small tabular datasets while being over 1,000× faster. Our method is very simple: it is fully contained in the weights of a single neural network, and a single forward pass directly yields predictions for a new dataset. Our AutoML method is meta-learned using the Transformer-based Prior-Data Fitted Network (PFN) architecture and approximates Bayesian inference with a prior based on assumptions of simplicity and causal structure. The prior spans a large space of structural causal models and Bayesian neural networks with a bias for small architectures and thus low complexity. Furthermore, we extend the PFN approach to differentiably calibrate the prior's hyperparameters on real data. By doing so, we separate our abstract prior assumptions from their heuristic calibration on real data. Afterwards, the calibrated hyperparameters are fixed and TabPFN can be applied to any new tabular dataset at the push of a button. Finally, on 30 datasets from the OpenML-CC18 suite we show that our method outperforms boosted trees and performs on par with complex state-of-the-art AutoML systems, with predictions produced in less than a second. We provide all our code and our final trained TabPFN in the supplementary materials.
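The key interface difference from classical AutoML follows from the abstract: there is no per-dataset training loop, because the labelled training set and the unlabelled query rows are consumed together in one forward pass of the meta-learned network. The sketch below only illustrates that call shape; the function name `pfn_style_predict` is hypothetical, and a 1-nearest-neighbour lookup stands in for the Transformer's attention over the context, which it does not approximate.

```python
import numpy as np

def pfn_style_predict(X_train, y_train, X_test):
    """Stand-in for a single forward pass of a meta-learned network.

    A real PFN attends over the (X_train, y_train) context to output a
    posterior predictive for each query row; here a 1-nearest-neighbour
    lookup merely mimics the signature: context and queries go in
    together, class predictions come out, with no fitting step.
    """
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)  # distance to each context row
        preds.append(y_train[np.argmin(dists)])      # label of the closest one
    return np.array(preds)

# Tiny two-class example: two context points, two queries.
X_train = np.array([[0.0, 0.0], [1.0, 1.0]])
y_train = np.array([0, 1])
X_test = np.array([[0.1, 0.2], [0.9, 0.8]])
print(pfn_style_predict(X_train, y_train, X_test))  # -> [0 1]
```

Because the per-dataset cost is a single forward pass rather than a model search, this interface is what makes sub-second predictions on a new dataset plausible.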

