Explainability as statistical inference

A wide variety of model explanation approaches have been proposed in recent years, each guided by a different rationale or heuristic. In this paper, we take a new route and cast interpretability as a statistical inference problem. We propose a general deep probabilistic model designed to produce interpretable predictions. Its parameters can be learned via maximum likelihood, and the method can be adapted to any predictor network architecture and any type of prediction problem. Our method is an instance of amortized interpretability, in which a neural network acts as a selector to allow for fast interpretation at inference time. Several popular interpretability methods are shown to be particular cases of regularised maximum likelihood under our general model. We also propose new datasets with ground-truth selections, which allow for quantitative evaluation of feature importance maps. Using these datasets, we show experimentally that using multiple imputation provides more reasonable interpretations.
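To make the amortized-selection idea concrete, here is a minimal numpy sketch. The `selector` and `predictor` below are hypothetical stand-ins (not the paper's actual networks): the selector outputs per-feature selection probabilities, unselected features are replaced by draws from the training data (multiple imputation from the marginals), and the prediction is averaged over imputations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the label depends only on feature 0 (illustrative assumption).
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(float)

def predictor(x):
    # Stand-in for a trained predictor network: uses feature 0 only.
    return float(x[..., 0] > 0)

def selector(x):
    # Stand-in for an amortized selector network: one forward pass
    # returns per-feature selection probabilities for input x.
    return np.array([0.9, 0.1, 0.1, 0.1, 0.1])

def predict_with_selection(x, X_train, n_imputations=50):
    """Mask unselected features via multiple imputation from the
    training marginals and average the predictor over imputations."""
    probs = selector(x)
    preds = []
    for _ in range(n_imputations):
        mask = rng.random(probs.shape) < probs          # sample a selection
        fill = X_train[rng.integers(len(X_train))]      # marginal imputation
        x_imp = np.where(mask, x, fill)                 # keep selected features
        preds.append(predictor(x_imp))
    return float(np.mean(preds))

p = predict_with_selection(X[0], X)
```

Averaging over several imputations, rather than filling masked features with a fixed constant such as zero, keeps the masked inputs on the data distribution, which is the behaviour the abstract's experiments compare.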
