Optimizing Ensemble Weights and Hyperparameters of Machine Learning Models for Regression Problems

08/14/2019
by Mohsen Shahhosseini, et al.

Aggregating multiple learners through an ensemble of models aims to make better predictions by capturing the underlying distribution of the data more accurately. Different ensembling methods, such as bagging, boosting, and stacking/blending, have been studied and adopted extensively in research and practice. While bagging and boosting intend to reduce variance and bias, respectively, blending approaches target both by finding the combination of base learners that achieves the best trade-off between the two. In blending, ensembles are created as weighted averages of multiple base learners. In this study, a systematic approach, Cross-validated Optimal Weighted Ensemble (COWE), is proposed that uses cross-validation to find the optimal weights for these ensembles in regression problems. Furthermore, tuning the hyperparameters of each base learner inside the ensemble weight optimization process is known to produce better-performing ensembles. To this end, a nested algorithm based on bi-level optimization, Cross-validated Optimal Weighted Ensemble with Internally Tuned Hyperparameters (COWE-ITH), is proposed that tunes hyperparameters while finding the optimal weights to combine the base learners. The algorithm is shown to generalize to real data sets through analyses of ten publicly available data sets. The prediction accuracies of COWE-ITH and COWE are compared to those of the base learners and state-of-the-art ensemble methods. The results show that COWE-ITH outperforms the benchmarks as well as the base learners on 9 of the 10 data sets.
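To make the COWE idea concrete, the following is a minimal Python sketch of what the abstract describes: out-of-fold predictions from each base learner are combined with simplex-constrained weights chosen to minimize cross-validated MSE. It is an illustration only, not the authors' released code; the function name cowe_weights, the choice of base learners, and the use of SciPy's SLSQP solver are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

def cowe_weights(models, X, y, n_splits=5, random_state=0):
    """Sketch of the COWE idea (hypothetical helper, not the paper's code):
    find convex ensemble weights that minimize cross-validated MSE."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=random_state)
    # Out-of-fold predictions, one column per base learner.
    oof = np.zeros((len(y), len(models)))
    for train_idx, val_idx in kf.split(X):
        for j, model in enumerate(models):
            model.fit(X[train_idx], y[train_idx])
            oof[val_idx, j] = model.predict(X[val_idx])

    def cv_mse(w):
        # Cross-validated error of the weighted-average ensemble.
        return mean_squared_error(y, oof @ w)

    # Weights constrained to the simplex: non-negative and summing to one.
    n = len(models)
    w0 = np.full(n, 1.0 / n)
    res = minimize(cv_mse, w0,
                   bounds=[(0.0, 1.0)] * n,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
    return res.x

# Usage: combine two base learners on synthetic regression data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
weights = cowe_weights([Ridge(alpha=1.0),
                        RandomForestRegressor(n_estimators=100, random_state=0)],
                       X, y)
print(weights)
```

The nested COWE-ITH variant would wrap a weight search like this inside an outer hyperparameter search over each base learner (the bi-level structure), which is sketched here only in the comments; consult the linked repository for the authors' implementation.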


Code Repositories

GEM-ITH-Ensemble

https://arxiv.org/abs/1908.05287