
Multi-Model Ensemble Optimization

Methodology and optimization algorithms for sparse regression are extended to multi-model regression ensembles. In particular, we adapt optimization algorithms for l0-penalized problems to learn ensembles of sparse and diverse models. To generate an initial solution for our algorithm, we generalize forward stepwise regression to multi-model regression ensembles. The sparse and diverse models are learned jointly from the data and constitute alternative explanations for the relationship between the predictors and the response variable. Beyond their advantage in interpretability, the ensembles are shown to outperform state-of-the-art competitors in prediction tasks on both simulated and gene expression data. We study the effect of the number of models and show that the ensembles achieve excellent prediction accuracy by exploiting the accuracy-diversity tradeoff. The optimization algorithms are implemented in publicly available R/C++ software packages.
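To make the idea concrete, the following is a minimal, illustrative sketch of the general approach the abstract describes: forward stepwise selection generalized to several models at once, with a penalty that discourages models from reusing each other's features, and ensemble predictions formed by averaging. All function names, the penalty form, and the parameter choices here are assumptions for illustration; this is not the paper's actual l0-penalized algorithm or its R/C++ implementation.

```python
import numpy as np

def fit_sparse_ensemble(X, y, n_models=3, k_per_model=2, diversity=0.5):
    """Toy multi-model forward stepwise regression (illustrative only).

    Each model greedily adds features one at a time; a candidate feature's
    score is its residual sum of squares plus a penalty proportional to how
    many other models already use that feature, encouraging diversity.
    """
    n, p = X.shape
    supports = [set() for _ in range(n_models)]
    usage = np.zeros(p)  # how many models currently use each feature
    for m in range(n_models):
        for _ in range(k_per_model):
            best_j, best_score = None, np.inf
            for j in range(p):
                if j in supports[m]:
                    continue
                cols = sorted(supports[m] | {j})
                beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
                sse = np.sum((y - X[:, cols] @ beta) ** 2)
                score = sse + diversity * usage[j]  # penalize shared features
                if score < best_score:
                    best_j, best_score = j, score
            supports[m].add(best_j)
            usage[best_j] += 1
    # Refit each sparse model on its selected support.
    models = []
    for m in range(n_models):
        cols = sorted(supports[m])
        beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        models.append((cols, beta))
    return models

def predict_ensemble(models, X):
    """Ensemble prediction: average of the individual model predictions."""
    preds = [X[:, cols] @ beta for cols, beta in models]
    return np.mean(preds, axis=0)
```

The diversity penalty is the key departure from ordinary stepwise regression: with `diversity=0` each model would greedily pick the same strong predictors, and averaging would add nothing; the penalty spreads the models across the feature space so the ensemble can exploit the accuracy-diversity tradeoff discussed above.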


