Entropic Variable Boosting for Explainability and Interpretability in Machine Learning

10/18/2018
by François Bachoc, et al.

In this paper, we present a new explainability formalism designed to clarify the impact of each input variable on the predictions given by black-box decision rules. Our method consists of evaluating the decision rules on test samples generated in such a way that each variable is stressed incrementally while preserving the original distribution of the machine learning problem. We then propose a new, computationally efficient algorithm to stress the variables, which only reweights the reference observations and predictions. This makes our methodology scalable to large datasets. Results obtained on standard machine learning datasets are presented and discussed.
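
The abstract does not spell out the reweighting scheme, but the "entropic" stress together with "only reweights the reference observations" suggests an exponential tilting of the empirical distribution: weights proportional to exp(λ·x) are chosen so that the stressed variable's weighted mean hits a target value, while the sample and predictions are computed only once. The following is a minimal sketch of that idea, not the paper's exact construction; the function name `stress_weights`, the `brentq` bracketing interval, and the synthetic data are all illustrative assumptions.

```python
# Sketch of entropic stressing via exponential tilting (illustrative only).
import numpy as np
from scipy.optimize import brentq

def stress_weights(x, target_mean):
    """Weights w_i proportional to exp(lambda * x_i), with lambda chosen so
    the weighted mean of x equals target_mean (an entropic projection)."""
    def gap(lam):
        w = np.exp(lam * (x - x.mean()))   # centre x for numerical stability
        w /= w.sum()
        return w @ x - target_mean         # weighted mean minus the target
    lam = brentq(gap, -50.0, 50.0)         # root-find the tilting parameter
    w = np.exp(lam * (x - x.mean()))
    return w / w.sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))             # reference sample
preds = X @ np.array([1.0, -2.0, 0.5])     # stand-in for black-box predictions

# Stress variable 0 incrementally: sweep its mean and track how the
# reweighted average prediction responds.
for t in np.linspace(-1.0, 1.0, 5):
    w = stress_weights(X[:, 0], target_mean=t)
    print(f"stressed mean {t:+.2f} -> weighted avg prediction {w @ preds:+.3f}")
```

Note that only the weights change across stress levels; the observations and the black-box predictions are evaluated once, which is consistent with the scalability claim in the abstract.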

research · 02/12/2018
Hybrid Decision Making: When Interpretable Models Collaborate With Black-Box Models
Interpretable machine learning models have received increasing interest ...

research · 02/11/2018
Global Model Interpretation via Recursive Partitioning
In this work, we propose a simple but effective method to interpret blac...

research · 04/11/2022
Bayes Point Rule Set Learning
Interpretability is having an increasingly important role in the design ...

research · 12/03/2020
Interpretability and Explainability: A Machine Learning Zoo Mini-tour
In this review, we examine the problem of designing interpretable and ex...

research · 03/10/2020
Metafeatures-based Rule-Extraction for Classifiers on Behavioral and Textual Data
Machine learning using behavioral and text data can result in highly acc...

research · 05/13/2021
Explainable Machine Learning for Fraud Detection
The application of machine learning to support the processing of large d...

research · 09/17/2022
Unveil the unseen: Exploit information hidden in noise
Noise and uncertainty are usually the enemy of machine learning, noise i...
