Partially Interpretable Estimators (PIE): Black-Box-Refined Interpretable Machine Learning

05/06/2021
by   Tong Wang, et al.

We propose Partially Interpretable Estimators (PIE), which attribute a prediction to individual features via an interpretable model, while a (possibly) small part of the PIE prediction is attributed to interactions of features via a black-box model, with the goal of boosting predictive performance while maintaining interpretability. As such, the interpretable model captures the main contributions of individual features, and the black-box model complements the interpretable part by capturing the "nuances" of feature interactions as a refinement. We design an iterative training algorithm to jointly train the two types of models. Experimental results show that PIE is highly competitive with black-box models while outperforming interpretable baselines. In addition, the understandability of PIE is comparable to that of simple linear models, as validated via a human evaluation.
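The additive decomposition and alternating training described in the abstract can be sketched as follows. This is a minimal illustration under assumed choices, not the authors' exact algorithm: the interpretable part is ordinary least squares, the stand-in black box is a shallow decision tree (via scikit-learn), and each component is refit on the residual left by the other.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_pie(X, y, n_iter=5, max_depth=3):
    """Alternating fit of a PIE-style estimator (illustrative sketch).

    Prediction decomposes as g(x) + h(x): g is a linear (interpretable)
    model over individual features, h is a black-box refinement capturing
    feature interactions. Each pass refits g on y - h(X), then h on y - g(X).
    """
    n = len(y)
    Xb = np.hstack([X, np.ones((n, 1))])  # add intercept column
    h_pred = np.zeros(n)
    for _ in range(n_iter):
        # Interpretable part: least squares on the black-box residual.
        w, *_ = np.linalg.lstsq(Xb, y - h_pred, rcond=None)
        g_pred = Xb @ w
        # Black-box part: shallow tree (assumed stand-in) on the linear residual.
        tree = DecisionTreeRegressor(max_depth=max_depth, random_state=0)
        tree.fit(X, y - g_pred)
        h_pred = tree.predict(X)
    return w, tree

def predict_pie(w, tree, X):
    """Sum the interpretable and black-box contributions."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ w + tree.predict(X)
```

On data with a multiplicative interaction term, the tree component absorbs the part of the signal the linear model cannot represent, so the combined fit improves on the linear fit alone while the coefficients `w` remain directly readable.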

