Generalized Negative Correlation Learning for Deep Ensembling

11/05/2020
by Sebastian Buschjäger, et al.

Ensemble algorithms offer state-of-the-art performance in many machine learning applications. A common explanation for their excellent performance is the bias-variance decomposition of the mean squared error, which shows that an algorithm's error can be decomposed into its bias and its variance. The two quantities often oppose each other, and ensembles offer an effective way to manage them: they reduce the variance through a diverse set of base learners while keeping the bias low at the same time. Even though there have been numerous works on decomposing other loss functions, the exact mathematical connection is rarely exploited explicitly for ensembling, but merely used as a guiding principle. In this paper, we formulate a generalized bias-variance decomposition for arbitrary twice-differentiable loss functions and study it in the context of Deep Learning. We use this decomposition to derive a Generalized Negative Correlation Learning (GNCL) algorithm which offers explicit control over the ensemble's diversity and smoothly interpolates between the two extremes of independent training and joint training of the ensemble. We show how GNCL encapsulates many previous works, discuss under which circumstances training an ensemble of Neural Networks might fail, and identify which ensembling method should be favored depending on the choice of the individual networks.
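The interpolation between independent and joint training described in the abstract can be illustrated for the classical squared-error case of Negative Correlation Learning. The sketch below is an assumption for illustration only (the function name `gncl_loss` and the NumPy formulation are not from the paper, which treats arbitrary twice-differentiable losses): the ensemble loss is the average member loss minus a weighted diversity (ambiguity) term. By the ambiguity decomposition, setting the weight to 1 recovers the loss of the averaged ensemble prediction (joint training), while 0 recovers plain independent training of each member.

```python
import numpy as np

def gncl_loss(preds, y, lam):
    """Hypothetical sketch of an NCL-style ensemble loss for squared error.

    preds: (M, N) array of predictions from M ensemble members on N examples.
    y:     (N,) array of regression targets.
    lam:   diversity weight; 0 = independent training, 1 = joint training.
    """
    fbar = preds.mean(axis=0)                       # ensemble mean prediction
    member_loss = ((preds - y) ** 2).mean(axis=0)   # average individual error
    diversity = ((preds - fbar) ** 2).mean(axis=0)  # ambiguity / spread term
    return (member_loss - lam * diversity).mean()
```

For squared error, the identity (1/M) Σ_i (f_i - y)² = (f̄ - y)² + (1/M) Σ_i (f_i - f̄)² makes the two endpoints exact; intermediate values of `lam` trade off member accuracy against ensemble diversity.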

Related research

Vulnerability Under Adversarial Machine Learning: Bias or Variance? (08/01/2020)
Prior studies have unveiled the vulnerability of the deep neural network...

Neural Network Ensembles: Theory, Training, and the Importance of Explicit Diversity (09/29/2021)
Ensemble learning is a process by which multiple base learners are strat...

Generalized Ambiguity Decomposition for Understanding Ensemble Diversity (12/28/2013)
Diversity or complementarity of experts in ensemble pattern recognition ...

Reducing Overestimation Bias by Increasing Representation Dissimilarity in Ensemble Based Deep Q-Learning (06/24/2020)
The first deep RL algorithm, DQN, was limited by the overestimation bias...

A Unified Theory of Diversity in Ensemble Learning (01/10/2023)
We present a theory of ensemble diversity, explaining the nature and eff...

Bias-Variance Decompositions for Margin Losses (04/26/2022)
We introduce a novel bias-variance decomposition for a range of strictly...

Global convergence of Negative Correlation Extreme Learning Machine (09/30/2020)
Ensemble approaches introduced in the Extreme Learning Machine (ELM) lit...
