Model Agnostic Combination for Ensemble Learning

06/16/2020
by Ohad Silbert, et al.

Ensembling models is well known to improve on single-model performance. We present a novel ensembling technique, coined MAC, designed to find the optimal function for combining models while remaining invariant to the number of sub-models involved in the combination. Being agnostic to the number of sub-models enables sub-models to be added to or replaced in the combination even after deployment, unlike many current ensembling methods, such as stacking, boosting, mixture of experts, and super learners, which lock in the set of models during training and therefore require retraining whenever a new model is introduced into the ensemble. We show that on the Kaggle RSNA Intracranial Hemorrhage Detection challenge, MAC outperforms classical averaging methods, is competitive with boosting via XGBoost for a fixed number of sub-models, and outperforms it when sub-models are added to the combination without retraining.
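The abstract's defining property, a combiner whose parameters do not depend on how many sub-models feed into it, can be illustrated with a symmetric shared-transform pooling sketch. Everything below (the tanh feature map, mean pooling, the logistic squash, and all names) is an illustrative assumption in the spirit of the description, not the paper's actual MAC architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared per-model transform parameters (learned in practice; random here).
W1 = rng.normal(size=(1, 4))
W2 = rng.normal(size=(4, 1))

def phi(p):
    # The SAME transform is applied to every sub-model's prediction,
    # so the parameter count is independent of the ensemble size.
    return np.tanh(np.atleast_2d(p) @ W1)

def combine(preds):
    """Symmetric combination of a variable-length list of predictions.

    Mean pooling over shared per-model features makes the combiner
    invariant to the number (and order) of sub-models, so sub-models
    can be added or swapped without retraining the combiner.
    """
    pooled = np.mean([phi(p) for p in preds], axis=0)  # (1, 4)
    score = pooled @ W2                                # (1, 1)
    return 1.0 / (1.0 + np.exp(-score.item()))         # squash to (0, 1)

p3 = combine([0.9, 0.8, 0.7])
p4 = combine([0.9, 0.8, 0.7, 0.85])  # one more sub-model, same combiner
```

A fixed-input stacker, by contrast, allocates one weight per sub-model, so its learned weights are tied to a specific ensemble size and must be retrained when that size changes; the pooled form above sidesteps this.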

