
Pitfalls to Avoid when Interpreting Machine Learning Models

by Christoph Molnar, et al.

Modern requirements for machine learning (ML) models include both high predictive performance and model interpretability. A growing number of techniques provide model interpretations, but these can lead to wrong conclusions if applied incorrectly. We illustrate pitfalls of ML model interpretation, such as bad model generalization, dependent features, feature interactions, and unjustified causal interpretations. Our paper addresses ML practitioners by raising awareness of these pitfalls and pointing out solutions for correct model interpretation, as well as ML researchers by discussing open issues for further research.
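To make the "dependent features" pitfall concrete, here is a minimal, hypothetical sketch (not from the paper) of permutation feature importance on a toy model. Feature 1 is a near-copy of feature 0, but the fitted model happens to use only feature 0, so permutation importance assigns feature 1 no importance at all, even though it carries almost the same information. All names and the toy data are illustrative assumptions.

```python
import random

# Hypothetical toy data: feature 1 is a noisy near-copy of feature 0
# (strongly dependent features); the target depends only on feature 0.
random.seed(0)
X = [[x, x + random.gauss(0, 0.01)] for x in [i / 50 for i in range(100)]]
y = [row[0] for row in X]

def model(row):
    # The "fitted" model relies solely on feature 0 and ignores feature 1.
    return row[0]

def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

def permutation_importance(X, y, feature, n_repeats=10):
    """Average increase in MSE when one feature column is shuffled."""
    base = mse([model(r) for r in X], y)
    increases = []
    for _ in range(n_repeats):
        col = [r[feature] for r in X]
        random.shuffle(col)
        X_perm = [r[:feature] + [c] + r[feature + 1:] for r, c in zip(X, col)]
        increases.append(mse([model(r) for r in X_perm], y) - base)
    return sum(increases) / n_repeats

imp0 = permutation_importance(X, y, 0)
imp1 = permutation_importance(X, y, 1)
# imp1 is zero: permutation importance measures the model's reliance on a
# feature, not the information the feature contains. With dependent
# features, concluding "feature 1 is irrelevant" would be a misinterpretation.
```

The same effect appears with real learners: when two features are highly correlated, a model may spread its reliance arbitrarily between them, so per-feature importances should not be read as statements about the data-generating process.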
