Making Bayesian Predictive Models Interpretable: A Decision Theoretic Approach

10/21/2019
by Homayun Afrabandpey, et al.

A salient approach to interpretable machine learning is to restrict modeling to simple, and hence understandable, models. In the Bayesian framework, this can be pursued by restricting the model structure and prior to favor interpretable models. Fundamentally, however, interpretability is about users' preferences, not about the data generation mechanism: it is therefore more natural to formulate interpretability as a utility function. In this work, we propose an interpretability utility that explicates the trade-off between explanation fidelity and interpretability in the Bayesian framework. The method consists of two steps. First, a reference model, possibly a black-box Bayesian predictive model that sacrifices no accuracy, is constructed and fitted to the training data. Second, a proxy model from an interpretable model family that best mimics the predictive behaviour of the reference model is found by optimizing the interpretability utility function. The approach is model agnostic: neither the interpretable model nor the reference model is restricted to a particular model class, and the optimization problem can be solved with standard tools in the chosen model family. Through experiments on real-world data sets, using decision trees as interpretable models and Bayesian additive regression trees as reference models, we show that, for the same level of interpretability, our approach generates more accurate models than the earlier alternative of restricting the prior. We also propose a systematic way to measure the stability of interpretable models constructed by different interpretability approaches, and show that our approach generates more stable models.
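The two-step workflow described in the abstract can be sketched in code. This is a minimal illustration of the fit-then-mimic idea only, with assumptions not taken from the paper: a random forest stands in for the Bayesian reference model, a decision tree's `max_leaf_nodes` serves as the interpretability knob, and plain mean squared error to the reference predictions stands in for the paper's decision-theoretic utility.

```python
# Sketch of the reference/proxy workflow (illustrative assumptions only:
# the paper uses a Bayesian reference model such as BART and a
# decision-theoretic interpretability utility, not this exact setup).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

# Step 1: fit a flexible reference model to the training data.
reference = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
y_ref = reference.predict(X)

# Step 2: fit proxy trees of increasing size to the *reference
# predictions*, trading fidelity against interpretability (tree size).
for n_leaves in (4, 8, 16, 32):
    proxy = DecisionTreeRegressor(max_leaf_nodes=n_leaves, random_state=0).fit(X, y_ref)
    fidelity = np.mean((proxy.predict(X) - y_ref) ** 2)  # lower = more faithful
    print(n_leaves, round(float(fidelity), 4))
```

Larger trees track the reference model more closely but are harder to read; the paper's utility formalizes exactly this trade-off rather than scanning tree sizes by hand.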


Related research:

- Regularizing Black-box Models for Improved Interpretability (02/18/2019)
- Human-in-the-Loop Interpretability Prior (05/29/2018)
- On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach (11/02/2022)
- Bayesian order identification of ARMA models with projection predictive inference (08/31/2022)
- Model Learning with Personalized Interpretability Estimation (ML-PIE) (04/13/2021)
- Game-Theoretic Interpretability for Temporal Modeling (06/30/2018)
- GapTV: Accurate and Interpretable Low-Dimensional Regression and Classification (02/23/2017)
