Ensembling over Classifiers: a Bias-Variance Perspective

06/21/2022
by   Neha Gupta, et al.

Ensembles are a straightforward, remarkably effective method for improving the accuracy, calibration, and robustness of models on classification tasks; yet, the reasons that underlie their success remain an active area of research. We build upon the extension to the bias-variance decomposition by Pfau (2013) in order to gain crucial insights into the behavior of ensembles of classifiers. Introducing a dual reparameterization of the bias-variance tradeoff, we first derive generalized laws of total expectation and variance for nonsymmetric losses typical of classification tasks. Comparing conditional and bootstrap bias/variance estimates, we then show that conditional estimates necessarily incur an irreducible error. Next, we show that ensembling in dual space reduces the variance and leaves the bias unchanged, whereas standard ensembling can arbitrarily affect the bias. Empirically, standard ensembling reduces the bias, leading us to hypothesize that ensembles of classifiers may perform well in part because of this unexpected reduction. We conclude with an empirical analysis of recent deep learning methods that ensemble over hyperparameters, revealing that these techniques indeed favor bias reduction. This suggests that, contrary to classical wisdom, targeting bias reduction may be a promising direction for classifier ensembles.
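The distinction between standard (probability-space) ensembling and ensembling in dual space can be illustrated with a small numerical sketch. For the KL loss, the dual coordinates are log-probabilities, so a dual-space ensemble is the normalized geometric mean of the members' predictions (equivalently, averaged logits), while a standard ensemble is their arithmetic mean. The toy setup below (the true distribution `p`, the noise model, and all constants are illustrative assumptions, not the paper's experiments) compares the two:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5   # number of classes (hypothetical)
M = 20  # number of ensemble members (hypothetical)

# "True" conditional label distribution for a single input (toy example)
p = rng.dirichlet(np.ones(K))

# Noisy member predictions: perturb the true log-probabilities,
# then renormalize into valid probability vectors
logits = np.log(p) + 0.8 * rng.standard_normal((M, K))
q = np.exp(logits)
q /= q.sum(axis=1, keepdims=True)

def kl(p, q):
    """KL(p || q): the nonsymmetric loss typical of classification."""
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Standard (primal) ensemble: arithmetic mean of predicted probabilities
q_primal = q.mean(axis=0)

# Dual-space ensemble: normalized geometric mean, i.e. averaged log-probs
q_dual = np.exp(np.log(q).mean(axis=0))
q_dual /= q_dual.sum()

avg_member_loss = np.mean([kl(p, qi) for qi in q])
print(f"mean member KL loss : {avg_member_loss:.4f}")
print(f"primal-ensemble loss: {kl(p, q_primal):.4f}")
print(f"dual-ensemble loss  : {kl(p, q_dual):.4f}")
```

For the KL loss, both ensembles are guaranteed to do at least as well as the average member: KL(p || q) is convex in q (Jensen covers the primal ensemble), and the geometric-mean ensemble only subtracts a nonnegative log-normalizer (AM-GM covers the dual one). What the sketch does not show, and what the paper's decomposition makes precise, is *why*: the dual ensemble's improvement is pure variance reduction at fixed bias, whereas the primal ensemble's improvement can also move the bias.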


research
06/01/2015

Bootstrap Bias Corrections for Ensemble Methods

This paper examines the use of a residual bootstrap for bias correction ...
research
02/08/2022

Understanding the bias-variance tradeoff of Bregman divergences

This paper builds upon the work of Pfau (2013), which generalized the bi...
research
12/14/2018

Conditional bias reduction can be dangerous: a key example from sequential analysis

We present a key example from sequential analysis, which illustrates tha...
research
04/25/2023

Certifying Ensembles: A General Certification Theory with S-Lipschitzness

Improving and guaranteeing the robustness of deep learning models has be...
research
09/24/2011

Bias Plus Variance Decomposition for Survival Analysis Problems

Bias - variance decomposition of the expected error defined for regressi...
research
02/01/2022

An Empirical Study of Modular Bias Mitigators and Ensembles

There are several bias mitigators that can reduce algorithmic bias in ma...
research
06/10/2021

Bias, Consistency, and Alternative Perspectives of the Infinitesimal Jackknife

Though introduced nearly 50 years ago, the infinitesimal jackknife (IJ) ...
