Explanations of model predictions with live and breakDown packages

04/05/2018
by Mateusz Staniak et al.

Complex models are commonly used in predictive modeling. In this paper we present R packages that can be used to explain predictions from complex black-box models and to attribute parts of these predictions to input features. We introduce two new approaches and the corresponding packages for such attribution, namely live and breakDown. We also compare their results with existing implementations of state-of-the-art solutions: lime, which implements Local Interpretable Model-agnostic Explanations, and ShapleyR, which implements Shapley values.
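To illustrate the kind of attribution workflow the abstract describes, the sketch below decomposes a single model prediction into per-feature contributions with the breakDown package. It is a minimal, hedged example: the model, dataset, and the use of broken() with a new_observation argument are assumptions about the package interface for illustration, not code taken from the paper.

library(breakDown)

# Fit a simple linear model on a built-in dataset (illustrative only)
model <- lm(mpg ~ wt + hp + qsec, data = mtcars)

# Attribute the prediction for one observation to its input features
explanation <- broken(model, new_observation = mtcars[1, ])
print(explanation)

# Plot the per-feature contributions to this single prediction
plot(explanation)

The same kind of local, per-observation explanation is what the live, lime, and ShapleyR packages produce, each with its own attribution method.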
