Quantifying Interpretability of Arbitrary Machine Learning Models Through Functional Decomposition

04/08/2019
by Christoph Molnar, et al.

To obtain interpretable machine learning models, either an interpretable model is constructed from the outset (e.g., a shallow decision tree, a rule list, or a sparse generalized linear model), or post-hoc interpretation methods (e.g., partial dependence or ALE plots) are applied to an arbitrary model. Both approaches have disadvantages: the former can restrict the hypothesis space too severely, leading to potentially suboptimal solutions, while the latter can produce overly verbose or misleading results if the underlying model is too complex, especially with respect to feature interactions. We propose to make the trade-off between predictive power and interpretability explicit by quantifying the complexity, and thus the interpretability, of machine learning models. Based on functional decomposition, we propose measures of the number of features used, the interaction strength, and the main effect complexity. We show that post-hoc interpretation of models that minimize these three measures becomes more reliable and compact. Furthermore, we demonstrate the application of such measures in a multi-objective optimization approach that considers predictive power and interpretability at the same time.
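As a minimal sketch (not the authors' reference implementation), the following Python function illustrates one plausible way to estimate the first measure, the number of features a fitted model actually uses: a feature counts as "used" if shuffling its values changes the model's predictions on a data sample. The function name, the permutation heuristic, and the tolerance are illustrative assumptions, not the paper's exact definitions.

import numpy as np

def count_features_used(model, X, n_samples=500, tol=1e-8, seed=0):
    """Heuristic: a feature counts as 'used' if shuffling its column
    changes the model's predictions on a random sample of X."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X)
    idx = rng.choice(len(X), size=min(n_samples, len(X)), replace=False)
    X_sample = X[idx]
    baseline = model.predict(X_sample)
    used = 0
    for j in range(X_sample.shape[1]):
        X_perturbed = X_sample.copy()
        # Shuffle column j, so any change in predictions must come from that feature.
        X_perturbed[:, j] = rng.permutation(X_perturbed[:, j])
        if np.max(np.abs(model.predict(X_perturbed) - baseline)) > tol:
            used += 1
    return used

Applied to, say, a sparse linear model fitted with scikit-learn, this count should roughly reduce to the number of non-zero coefficients, matching the intuition that features the model ignores should not add to its complexity.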


Related research

06/17/2021 · Interpretable Machine Learning Classifiers for Brain Tumour Survival Prediction
Prediction of survival in patients diagnosed with a brain tumour is chal...

06/10/2016 · The Mythos of Model Interpretability
Supervised machine learning models boast remarkable predictive capabilit...

07/10/2023 · Interpreting and generalizing deep learning in physics-based problems with functional linear models
Although deep learning has achieved remarkable success in various scient...

03/01/2021 · Interpretable Artificial Intelligence through the Lens of Feature Interaction
Interpretation of deep learning models is a very challenging problem bec...

06/18/2021 · It's FLAN time! Summing feature-wise latent representations for interpretability
Interpretability has become a necessary feature for machine learning mod...

11/16/2021 · SMACE: A New Method for the Interpretability of Composite Decision Systems
Interpretability is a pressing issue for decision systems. Many post hoc...

07/08/2019 · Optimal Explanations of Linear Models
When predictive models are used to support complex and important decisio...
