MDL-motivated compression of GLM ensembles increases interpretability and retains predictive power

11/21/2016
by Boris Hayete, et al.

Over the years, ensemble methods have become a staple of machine learning. Similarly, generalized linear models (GLMs) have become very popular for a wide variety of statistical inference tasks. The former have been shown to enhance out-of-sample predictive power, while the latter are easy to interpret. Recently, ensembles of GLMs have been proposed, but this approach sacrifices the interpretability that individual GLMs possess. We show that minimum description length (MDL)-motivated compression of the inferred ensembles can recover interpretability with little, if any, loss of predictive performance, and we illustrate this on a number of standard classification data sets.
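To make the idea concrete, here is a minimal illustrative sketch, not the paper's algorithm: a bagged ensemble of logistic-regression GLMs is collapsed into a single GLM by averaging coefficients (a crude stand-in for the MDL-motivated compression), yielding one interpretable coefficient vector in place of many. The data, model sizes, and the averaging step are all assumptions made for illustration.

```python
# Illustrative sketch (assumed setup, not the paper's method): compress a
# bagged ensemble of logistic-regression GLMs into one GLM by averaging
# coefficients, so a single coefficient vector can be inspected.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data (a stand-in for the paper's
# benchmark data sets).
n, p = 400, 5
X = rng.normal(size=(n, p))
true_beta = np.array([1.5, -2.0, 0.0, 0.75, 0.0])
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ true_beta)))).astype(float)

def fit_logistic(X, y, steps=500, lr=0.1):
    """Plain gradient-descent logistic regression (with intercept)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p_hat = 1 / (1 + np.exp(-(Xb @ beta)))
        beta += lr * Xb.T @ (y - p_hat) / len(y)
    return beta

# Bagged ensemble: one GLM per bootstrap resample.
ensemble = [fit_logistic(X[idx], y[idx])
            for idx in (rng.integers(0, n, n) for _ in range(25))]

# "Compression" here is simple coefficient averaging -- valid only
# because all members share the same link function and features; the
# paper's MDL-motivated procedure is more principled.
beta_avg = np.mean(ensemble, axis=0)

def predict(beta, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return (1 / (1 + np.exp(-(Xb @ beta))) > 0.5).astype(float)

acc_single = np.mean(predict(beta_avg, X) == y)
print(f"compressed-model training accuracy: {acc_single:.2f}")
```

The compressed model exposes one coefficient per feature, so the sign and magnitude of each effect can be read off directly, which is exactly the interpretability that a raw ensemble of GLMs obscures.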

