When are ensembles really effective?

05/21/2023
by Ryan Theisen et al.

Ensembling has a long history in statistical data analysis, with many impactful applications. However, in many modern machine learning settings, the benefits of ensembling are less ubiquitous and less obvious. We study, both theoretically and empirically, the fundamental question of when ensembling yields significant performance improvements in classification tasks. Theoretically, we prove new results relating the ensemble improvement rate (a measure of how much ensembling decreases the error rate versus a single model, on a relative scale) to the disagreement-error ratio. We show that ensembling improves performance significantly whenever the disagreement rate is large relative to the average error rate, and that, conversely, one classifier is often enough whenever the disagreement rate is low relative to the average error rate. On the way to proving these results, we derive, under a mild condition called competence, improved upper and lower bounds on the average test error rate of the majority vote classifier. To complement this theory, we study ensembling empirically in a variety of settings, verifying the predictions made by our theory and identifying practical scenarios where ensembling does and does not result in large performance improvements. Perhaps most notably, we demonstrate a distinct difference in behavior between interpolating models (popular in current practice) and non-interpolating models (such as tree-based methods, where ensembling is popular), showing that ensembling helps considerably more in the latter case than in the former.
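
To make the key quantities concrete, below is a minimal Python sketch (not from the paper) that estimates the disagreement rate, the average error rate, their ratio, and the resulting ensemble improvement rate for a majority-vote ensemble on synthetic predictions. The definitions used here are the standard ones from the majority-vote literature; the paper's formal definitions and normalizations may differ in detail.

```python
import numpy as np
from itertools import combinations

def disagreement_rate(preds):
    """Average pairwise disagreement among classifiers.

    preds: (n_models, n_samples) array of predicted class labels.
    """
    pairs = combinations(range(preds.shape[0]), 2)
    return np.mean([np.mean(preds[i] != preds[j]) for i, j in pairs])

def average_error(preds, y):
    """Mean test error of the individual classifiers."""
    return np.mean([np.mean(p != y) for p in preds])

def majority_vote_error(preds, y):
    """Error of the plurality-vote ensemble (ties go to the lowest label)."""
    votes = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)
    return np.mean(votes != y)

def ensemble_improvement_rate(preds, y):
    """Relative error reduction from ensembling:
    (average single-model error - majority-vote error) / average error."""
    avg = average_error(preds, y)
    return (avg - majority_vote_error(preds, y)) / avg

# Toy check: five classifiers that each flip the true binary label
# independently 20% of the time. Independent errors give a high
# disagreement-error ratio, so the theory predicts a large improvement.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=10_000)
preds = np.array([np.where(rng.random(y.size) < 0.2, 1 - y, y) for _ in range(5)])

der = disagreement_rate(preds) / average_error(preds, y)
print(f"disagreement-error ratio: {der:.2f}")  # ~1.6 for independent errors
print(f"ensemble improvement rate: {ensemble_improvement_rate(preds, y):.2f}")  # ~0.7
```

With independent errors, the disagreement-error ratio is well above one and the majority vote cuts the error from roughly 20% to roughly 6%. Highly correlated classifiers would drive both numbers down, matching the regime in which the paper finds that one classifier is often enough.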

