Interpretability via Model Extraction

06/29/2017
by Osbert Bastani et al.

The ability to interpret machine learning models has become increasingly important now that machine learning is used to inform consequential decisions. We propose an approach called model extraction for interpreting complex, black-box models. Our approach approximates the complex model with a much more interpretable model; as long as the approximation is accurate, statistical properties of the complex model are reflected in the interpretable model. We show how model extraction can be used to understand and debug random forests and neural networks trained on several datasets from the UCI Machine Learning Repository, as well as control policies learned for several classical reinforcement learning problems.
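To make the idea concrete, here is a minimal sketch of the general surrogate-model recipe the abstract describes: train a black-box model, fit a small interpretable model to the black box's own predictions, and measure how faithfully the surrogate reproduces it. The dataset, model choices, and hyperparameters below are illustrative assumptions, not the paper's specific extraction algorithm.

# Minimal sketch of model extraction with scikit-learn (illustrative, not the authors' exact method).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Train the complex, black-box model.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# 2. Fit an interpretable surrogate to the black box's predictions (not the true labels).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# 3. Measure approximation quality (fidelity): how often the surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"fidelity on held-out data: {fidelity:.3f}")

# 4. Read off the interpretable approximation as a set of rules.
print(export_text(surrogate))

If the fidelity is high, inspecting the extracted tree gives a faithful, human-readable account of the black box's behavior; if it is low, the surrogate should not be trusted as an explanation.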


Related research:
- Proceedings of NIPS 2017 Symposium on Interpretable Machine Learning (11/27/2017)
- Interpretable Models of Human Interaction in Immersive Simulation Settings (09/24/2019)
- Deducing neighborhoods of classes from a fitted model (09/11/2020)
- MonoNet: Towards Interpretable Models by Learning Monotonic Features (09/30/2019)
- Statistical Exploration of Relationships Between Routine and Agnostic Features Towards Interpretable Risk Characterization (01/28/2020)
- SnapToGrid: From Statistical to Interpretable Models for Biomedical Information Extraction (06/30/2016)
- Interpretable Reinforcement Learning with Ensemble Methods (09/19/2018)
