Information-theoretic Evolution of Model Agnostic Global Explanations

05/14/2021
by Sukriti Verma, et al.

Explaining the behavior of black-box machine learning models through human-interpretable rules is an important research area. Recent work has focused on explaining model behavior both locally, i.e., for specific predictions, and globally, across the fields of vision, natural language, reinforcement learning, and data science. We present a novel model-agnostic approach that derives rules to globally explain the behavior of classification models trained on numerical and/or categorical data. Our approach first uses existing local model explanation methods to extract conditions important for explaining model behavior on specific instances, and then applies an evolutionary algorithm that optimizes an information-theoretic fitness function to construct rules that explain global model behavior. We show that our approach outperforms existing approaches on a variety of datasets. Further, we introduce a parameter to evaluate the quality of an interpretation under distributional shift, i.e., how well the interpretation predicts model behavior on previously unseen data distributions, and we show that existing approaches for interpreting models globally lack this distributional robustness. Finally, we show how the quality of the interpretation under distributional shift can be improved by adding out-of-distribution samples to the dataset used to learn the interpretation, thereby increasing robustness. All datasets used in our paper are open and publicly available. Our approach has been deployed in a leading digital marketing suite of products.
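The abstract describes the pipeline only at a high level: mine conditions from local explanations, then evolve global rules under an information-theoretic fitness. The sketch below is a minimal illustration of that overall shape, not the paper's actual method. It assumes conjunctive rules over pre-mined conditions, an information-gain fitness measured against the black-box model's own predictions, and simple union crossover with add-a-condition mutation; the function names (`evolve_rules`, `information_gain`) and all parameter choices are hypothetical.

```python
import random
import numpy as np

def entropy(p):
    # Binary entropy in bits; 0 when p is 0 or 1.
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def information_gain(rule, X, y_model, target_class):
    # Assumed fitness: entropy reduction, over the model's own predictions,
    # obtained by splitting the data on whether the conjunctive rule fires.
    fires = np.array([all(cond(x) for cond in rule) for x in X])
    base = entropy(np.mean(y_model == target_class))
    split = 0.0
    for mask in (fires, ~fires):
        if mask.any():
            split += mask.mean() * entropy(np.mean(y_model[mask] == target_class))
    return base - split

def evolve_rules(conditions, X, y_model, target_class,
                 pop_size=50, generations=100, mutation_rate=0.2):
    # Evolve conjunctive rules (lists of conditions) describing when the
    # black-box model predicts target_class. `conditions` is assumed to have
    # been mined beforehand from local explanations of individual predictions.
    fitness = lambda r: information_gain(r, X, y_model, target_class)
    population = [[random.choice(conditions)] for _ in range(pop_size)]
    for _ in range(generations):
        survivors = sorted(population, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = list(dict.fromkeys(a + b))           # crossover: union of parents' conditions
            if random.random() < mutation_rate:
                child.append(random.choice(conditions))  # mutation: add a mined condition
            children.append(child)
        population = survivors + children
    return sorted(population, key=fitness, reverse=True)

if __name__ == "__main__":
    # Toy usage: the "black box" predicts class 1 whenever feature 0 > 0.5.
    rng = np.random.default_rng(0)
    X = rng.random((500, 3))
    y_model = (X[:, 0] > 0.5).astype(int)
    # Candidate conditions standing in for ones mined from local explanations.
    conditions = [lambda x, t=t: x[0] > t for t in (0.3, 0.5, 0.7)] + \
                 [lambda x, t=t: x[1] > t for t in (0.3, 0.5, 0.7)]
    best = evolve_rules(conditions, X, y_model, target_class=1)[0]
    print("best rule size:", len(best),
          "fitness:", round(information_gain(best, X, y_model, 1), 3))
```

Scoring fitness against the model's predictions rather than the ground-truth labels is what makes a rule set an explanation of the model rather than a competing classifier; the same evaluation can be repeated on out-of-distribution samples to probe the distributional robustness the abstract discusses.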
