On Anytime Learning at Macroscale

by Lucas Caccia et al.

Classical machine learning frameworks assume access to a possibly large dataset in order to train a predictive model. In many practical applications, however, data does not arrive all at once, but in batches over time. This creates a natural trade-off between the accuracy of a model and the time to obtain such a model. A greedy predictor could produce non-trivial predictions by immediately training on batches as soon as they become available, but it may also make sub-optimal use of future data. On the other hand, a tardy predictor could wait a long time to aggregate several batches into a larger dataset, but ultimately deliver much better performance. In this work, we consider such a streaming learning setting, which we dub anytime learning at macroscale (ALMA). It is an instance of anytime learning applied not at the level of a single chunk of data, but at the level of the entire sequence of large batches. We first formalize this learning setting, then introduce metrics to assess how well learners perform on the given task for a given memory and compute budget, and finally test several baseline approaches on standard benchmarks repurposed for anytime learning at macroscale. The general finding is that bigger models always generalize better. In particular, it is important to grow model capacity over time if the initial model is relatively small. Moreover, updating the model at an intermediate rate strikes the best trade-off between accuracy and time to obtain a useful predictor.
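The greedy/tardy trade-off described above can be sketched in a few lines of code. The sketch below is illustrative only and not the paper's implementation: `ToyLearner`, `alma_stream`, and the `update_every` parameter are hypothetical names, with `update_every=1` playing the role of the greedy predictor and a large value approaching the tardy one.

```python
class ToyLearner:
    """Toy learner whose error shrinks with the amount of data it has seen."""

    def __init__(self):
        self.seen = 0

    def fit(self, batches):
        # Train on all aggregated mega-batches at once.
        self.seen += sum(len(b) for b in batches)

    def error(self):
        # Stand-in for generalization error: decreases with data seen.
        return 1.0 / (1.0 + self.seen)


def alma_stream(mega_batches, learner, update_every=2):
    """Consume a stream of mega-batches, updating every `update_every` batches.

    Returns a history of (batches_seen, error) pairs, i.e. the anytime
    performance curve: what error the learner would report if queried
    after each incoming mega-batch.
    """
    buffer = []
    history = []
    for t, batch in enumerate(mega_batches, start=1):
        buffer.append(batch)
        if t % update_every == 0:
            learner.fit(buffer)  # aggregate, then train
            buffer.clear()
        history.append((t, learner.error()))
    return history
```

Under this toy model, the greedy learner (`update_every=1`) reports a useful error earlier in the stream, while the tardy learner catches up only once it finally trains on its aggregated buffer; with the same total data, both end at the same final error.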


