AnyPredict: Foundation Model for Tabular Prediction

05/20/2023
by Zifeng Wang, et al.

Foundation models are pre-trained on massive amounts of data to perform well across many downstream tasks, and they have demonstrated significant success in natural language processing and computer vision. Nonetheless, their use in tabular prediction tasks has been limited, with the main hurdles being (1) the lack of large-scale and diverse tabular datasets with standardized labels and (2) the schema mismatch and predictive-target heterogeneity across domains. This paper proposes a method for building training data at scale for tabular prediction foundation models (AnyPredict) using both in-domain and a wide range of out-domain datasets. The method uses a data engine that leverages large language models (LLMs) to consolidate tabular samples, overcoming the barrier posed by tables with varying schemas, and to align out-domain data with the target task through a “learn, annotate, and audit” pipeline. The expanded training data enables the pre-trained AnyPredict to support every tabular dataset in the domain without fine-tuning, resulting in significant improvements over supervised baselines: it reaches an average ranking of 1.57 on 7 patient outcome prediction datasets and 1.00 on 3 trial outcome prediction datasets. In addition, AnyPredict exhibits impressive zero-shot performance, outperforming supervised XGBoost models by an average of 8.9 across two prediction tasks.
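
The abstract's description of the data engine suggests roughly the following shape. The sketch below is not the authors' released code: the names serialize_row, consolidate_with_llm, learn_annotate_audit, and majority_fit are hypothetical, the LLM call is stubbed out, and the confidence threshold is an assumption. It only illustrates the two ideas stated above, namely serializing rows from tables with different schemas into one shared text format, and expanding the training set with a learn/annotate/audit loop that keeps only confidently pseudo-labeled out-domain samples.

```python
# Minimal sketch (assumed, not the authors' implementation) of schema
# consolidation plus a "learn, annotate, and audit" data-expansion loop.
from typing import Callable, Dict, List, Tuple


def serialize_row(row: Dict[str, object]) -> str:
    """Flatten a row into 'column is value' text so that tables with
    different schemas share one input format."""
    return "; ".join(f"{col} is {val}" for col, val in row.items() if val is not None)


def consolidate_with_llm(text: str) -> str:
    """Hypothetical placeholder for an LLM rewrite that maps schema-specific
    phrasing onto the target task's vocabulary; a real pipeline would call an
    LLM API here."""
    return text


def learn_annotate_audit(
    in_domain: List[Tuple[Dict[str, object], int]],
    out_domain: List[Dict[str, object]],
    fit: Callable[[List[str], List[int]], Callable[[str], Tuple[int, float]]],
    confidence_threshold: float = 0.9,  # assumed audit cutoff
) -> List[Tuple[str, int]]:
    """Learn on labeled in-domain data, annotate out-domain rows with pseudo
    labels, and audit by dropping low-confidence annotations."""
    # Learn: train a predictor on serialized, consolidated in-domain samples.
    texts = [consolidate_with_llm(serialize_row(r)) for r, _ in in_domain]
    labels = [y for _, y in in_domain]
    predictor = fit(texts, labels)

    # Annotate + audit: pseudo-label out-domain rows, keep the confident ones.
    expanded = list(zip(texts, labels))
    for row in out_domain:
        text = consolidate_with_llm(serialize_row(row))
        pseudo_label, confidence = predictor(text)
        if confidence >= confidence_threshold:
            expanded.append((text, pseudo_label))
    return expanded


def majority_fit(texts: List[str], labels: List[int]) -> Callable[[str], Tuple[int, float]]:
    """Trivial stand-in for the 'learn' step: always predicts the majority
    class with a fixed confidence, just to show the interface. A real system
    would train a text-based tabular predictor here."""
    majority = max(set(labels), key=labels.count)
    return lambda text: (majority, 0.95)


if __name__ == "__main__":
    expanded = learn_annotate_audit(
        in_domain=[({"age": 63, "stage": "II"}, 1), ({"age": 40, "stage": "I"}, 0)],
        out_domain=[{"gender": "F", "tumor_grade": 3}],
        fit=majority_fit,
    )
    print(expanded)
```

In this reading, the serialized text removes the dependence on any fixed column set, and the audit step is what lets out-domain tables contribute supervision for the target task without manual relabeling.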
