Interpretability is Harder in the Multiclass Setting: Axiomatic Interpretability for Multiclass Additive Models

10/22/2018
by Xuezhou Zhang et al.

Generalized additive models (GAMs) are favored in many regression and binary classification problems because they fit complex, nonlinear functions while remaining interpretable. In the first part of this paper, we generalize a state-of-the-art GAM learning algorithm based on boosted trees to the multiclass setting, and show that this multiclass algorithm outperforms existing GAM fitting algorithms and sometimes matches the performance of full-complexity models. In the second part, we turn our attention to the interpretability of GAMs in the multiclass setting. Surprisingly, the natural interpretability of GAMs breaks down when there are more than two classes. Drawing inspiration from binary GAMs, we identify two axioms that any additive model must satisfy in order not to be visually misleading. We then develop a post-processing technique (API) that provably transforms pretrained additive models to satisfy the interpretability axioms without sacrificing accuracy. The technique works not just on models trained with our algorithm, but on any multiclass additive model. We demonstrate API on a 12-class infant-mortality dataset.
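The interpretability breakdown the abstract alludes to can be illustrated numerically. In a multiclass additive model with a softmax link, the logit for class k is a sum of per-feature shape functions, logit_k(x) = Σ_j f_kj(x_j). Adding the same arbitrary function g(x_j) to every class's shape function for feature j shifts all logits equally, so predicted probabilities are unchanged, yet every plotted shape function changes; the plots a user inspects are therefore not uniquely determined by the model's behavior. A minimal sketch of this invariance (the class count, shape functions, and the shift g below are illustrative choices, not taken from the paper):

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Illustrative 3-class additive model on two features:
# logit_k(x) = f_k1(x1) + f_k2(x2)
shape_fns = {
    0: (lambda x: np.sin(x),    lambda x: 0.5 * x),
    1: (lambda x: x**2 - 1.0,   lambda x: -x),
    2: (lambda x: np.cos(x),    lambda x: 0.1 * x**3),
}

def predict(x1, x2, shift=lambda x: 0.0):
    # `shift` is added to *every* class's shape function for feature 1
    logits = np.array([f1(x1) + shift(x1) + f2(x2)
                       for f1, f2 in shape_fns.values()])
    return softmax(logits)

x1, x2 = 0.7, -1.2
p_original = predict(x1, x2)
p_shifted = predict(x1, x2, shift=lambda x: 10.0 * x - 3.0)

# The shift drastically alters each plotted shape function for feature 1,
# yet the predicted class probabilities are identical:
assert np.allclose(p_original, p_shifted)
```

This many-to-one parameterization is why axioms are needed at all: among the infinitely many shape-function sets that produce identical predictions, a post-processing step can select one whose plots faithfully reflect each feature's effect.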


