Selecting Near-Optimal Learners via Incremental Data Allocation

12/31/2015
by Ashish Sabharwal, et al.

We study a novel machine learning (ML) problem setting of sequentially allocating small subsets of training data amongst a large set of classifiers. The goal is to select a classifier that will give near-optimal accuracy when trained on all data, while also minimizing the cost of misallocated samples. This is motivated by large modern datasets and ML toolkits with many combinations of learning algorithms and hyper-parameters. Inspired by the principle of "optimism under uncertainty," we propose an innovative strategy, Data Allocation using Upper Bounds (DAUB), which robustly achieves these objectives across a variety of real-world datasets. We further develop substantial theoretical support for DAUB in an idealized setting where the expected accuracy of a classifier trained on n samples can be known exactly. Under these conditions we establish a rigorous sub-linear bound on the regret of the approach (in terms of misallocated data), as well as a rigorous bound on suboptimality of the selected classifier. Our accuracy estimates using real-world datasets only entail mild violations of the theoretical scenario, suggesting that the practical behavior of DAUB is likely to approach the idealized behavior.
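The core idea of DAUB can be illustrated with a small simulation: each candidate learner's accuracy-versus-samples learning curve is probed on small batches, an optimistic upper bound on its full-data accuracy is extrapolated from the latest first difference, and the next batch is allocated to the learner with the highest bound. The sketch below is illustrative only, under assumed synthetic learning curves (the learner names, curve shapes, and constants are invented for the example, not taken from the paper); it is not the authors' implementation.

```python
import math

# Hypothetical learning curves: expected accuracy as a function of the
# number n of training samples, for three illustrative learners.
def curve(a_max, rate):
    return lambda n: a_max * (1 - math.exp(-rate * n))

learners = {
    "logreg": curve(0.82, 0.004),  # fast to saturate, lower ceiling
    "rf":     curve(0.90, 0.002),  # best full-data accuracy
    "svm":    curve(0.88, 0.001),  # slow learner, never catches up here
}

N_FULL = 2000   # full training-set size
BATCH  = 50     # samples granted per allocation step
alloc  = {name: 2 * BATCH for name in learners}  # warm-start each learner

def upper_bound(f, n):
    """Optimistic projection of full-data accuracy: current accuracy plus
    the latest per-sample improvement extrapolated linearly to N_FULL.
    For a concave learning curve this is a valid upper bound on f(N_FULL)."""
    slope = (f(n) - f(n - BATCH)) / BATCH
    return f(n) + max(slope, 0.0) * (N_FULL - n)

# "Optimism under uncertainty": repeatedly grant the next batch to the
# learner with the highest projected upper bound, until one learner has
# been trained on the full dataset.
while max(alloc.values()) < N_FULL:
    best = max(learners, key=lambda k: upper_bound(learners[k], alloc[k]))
    alloc[best] += BATCH

winner = max(alloc, key=alloc.get)
```

Because the upper bound of a concave curve never falls below its true full-data accuracy, suboptimal learners stop receiving batches once their bounds drop below the best learner's achievable accuracy, which is what limits the misallocated-data regret.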

Related research:

- Behavior of Hyper-Parameters for Selected Machine Learning Algorithms: An Empirical Investigation (11/15/2022)
  Hyper-parameters (HPs) are an important part of machine learning (ML) mo...

- Try Before You Buy: A practical data purchasing algorithm for real-world data marketplaces (12/16/2020)
  Data trading is becoming increasingly popular, as evident by the appeara...

- New Insights into Bootstrapping for Bandits (05/24/2018)
  We investigate the use of bootstrapping in the bandit setting. We first ...

- Efficient Online Learning for Optimizing Value of Information: Theory and Application to Interactive Troubleshooting (03/16/2017)
  We consider the optimal value of information (VoI) problem, where the go...

- Certifiable Robustness for Naive Bayes Classifiers (03/08/2023)
  Data cleaning is crucial but often laborious in most machine learning (M...

- Evaluating Bayes Error Estimators on Real-World Datasets with FeeBee (08/30/2021)
  The Bayes error rate (BER) is a fundamental concept in machine learning ...

- Algorithmic Recourse in the Face of Noisy Human Responses (03/13/2022)
  As machine learning (ML) models are increasingly being deployed in high-...
