How Much Can We See? A Note on Quantifying Explainability of Machine Learning Models

10/29/2019
by Gero Szepannek, et al.

Partial dependence plots (PDP) are among the most popular approaches to understanding the feature effects of modern black-box machine learning models. These plots are easy to understand but can only visualize low-order dependencies. The paper addresses the question 'How much can we see?': a framework is developed to quantify the explainability of arbitrary machine learning models, i.e. the degree to which the visualization given by a PDP is able to explain the model's predictions. The result allows one to judge whether an attempt to explain a black-box model is sufficient or not.
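
Concretely, one way to make such a measure operational is to build an additive surrogate from the one-dimensional PDPs and check how much of the variance of the black-box predictions it recovers. Below is a minimal Python sketch under that assumption; the R²-style ratio, the Friedman benchmark data, and the gradient boosting model are illustrative choices, not necessarily the paper's exact definition.

```python
# Sketch: quantify how much a set of one-dimensional PDPs explains a
# black-box model. All modeling choices below are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_friedman1(n_samples=500, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

pred = model.predict(X)          # full black-box predictions
mean_pred = pred.mean()

def pdp_1d(model, X, j, grid):
    """Empirical partial dependence of feature j evaluated on `grid`."""
    Xc = X.copy()
    pd_vals = np.empty(len(grid))
    for k, v in enumerate(grid):
        Xc[:, j] = v                        # force feature j to the grid value
        pd_vals[k] = model.predict(Xc).mean()
    return pd_vals

# Additive surrogate built from the 1D PDPs:
#   f_hat(x) = mean_pred + sum_j (PDP_j(x_j) - mean_pred)
approx = np.full(len(X), mean_pred)
for j in range(X.shape[1]):
    grid = np.quantile(X[:, j], np.linspace(0, 1, 20))  # modest grid per feature
    pd_vals = pdp_1d(model, X, j, grid)
    approx += np.interp(X[:, j], grid, pd_vals) - mean_pred

# R^2-style explainability: share of prediction variance captured by the PDPs
explainability = 1 - np.mean((pred - approx) ** 2) / np.var(pred)
print(f"PDP explainability: {explainability:.3f}")
```

Under this reading, a value near 1 would indicate that the one-dimensional PDPs capture the model almost entirely, while a low value signals interactions or higher-order effects that such plots cannot show.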

Related research

02/12/2020 - Explainable Deep Modeling of Tabular Data using TableGraphNet
06/24/2021 - Promises and Pitfalls of Black-Box Concept Learning Models
07/17/2018 - RuleMatrix: Visualizing and Understanding Classifiers with Rules
07/07/2019 - Case-Based Reasoning for Assisting Domain Experts in Processing Fraud Alerts of Black-Box Machine Learning Models
11/06/2020 - Explaining Differences in Classes of Discrete Sequences
06/09/2022 - Xplique: A Deep Learning Explainability Toolbox
