
Pitfalls to Avoid when Interpreting Machine Learning Models

07/08/2020
by Christoph Molnar, et al.

Modern requirements for machine learning (ML) models include both high predictive performance and model interpretability. A growing number of techniques provide model interpretations, but they can lead to wrong conclusions if applied incorrectly. We illustrate pitfalls of ML model interpretation such as poor model generalization, dependent features, feature interactions, and unjustified causal interpretations. Our paper addresses ML practitioners by raising awareness of these pitfalls and pointing out solutions for correct model interpretation, as well as ML researchers by discussing open issues for further research.
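To make the dependent-features pitfall concrete, here is a minimal sketch (not from the paper; it assumes NumPy and scikit-learn, with synthetic data and illustrative feature names x1, x2, x3). When two features are strongly correlated, permutation feature importance evaluates the model on unrealistic input combinations and splits credit between the correlated pair:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# x1 and x2 are nearly identical; the target depends only on x1.
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)  # redundant copy of x1
x3 = rng.normal(size=n)                   # irrelevant noise feature
X = np.column_stack([x1, x2, x3])
y = x1 + rng.normal(scale=0.1, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Permuting x1 (or x2) alone breaks the x1 ~ x2 correlation, so the model
# is evaluated off the data manifold, and the importance of the correlated
# pair is split between the two features rather than assigned to x1.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, imp in zip(["x1", "x2", "x3"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

On a typical run, x1's reported importance is deflated relative to a model trained without the redundant copy x2. Remedies discussed in this line of work include permuting groups of dependent features together or using conditional importance measures.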

Related Research

07/21/2020
SUBPLEX: Towards a Better Understanding of Black Box Model Explanations at the Subpopulation Level
Understanding the interpretation of machine learning (ML) models has bee...

09/12/2021
Automatic Componentwise Boosting: An Interpretable AutoML System
In practice, machine learning (ML) workflows require various different s...

02/26/2018
Interpreting Complex Regression Models
Interpretation of machine-learning-induced models is critical for feat...

05/27/2021
Intellige: A User-Facing Model Explainer for Narrative Explanations
Predictive machine learning models often lack interpretability, resultin...

01/14/2019
Interpretable machine learning: definitions, methods, and applications
Machine-learning models have demonstrated great success in learning comp...

08/08/2022
EFI: A Toolbox for Feature Importance Fusion and Interpretation in Python
This paper presents an open-source Python toolbox called Ensemble Featur...