Assessing the Local Interpretability of Machine Learning Models

02/09/2019
by Sorelle A. Friedler et al.

The increasing adoption of machine learning tools has led to calls for accountability via model interpretability. But what does it mean for a machine learning model to be interpretable by humans, and how can this be assessed? We focus on two definitions of interpretability that have been introduced in the machine learning literature: simulatability (a user's ability to run a model on a given input) and "what if" local explainability (a user's ability to correctly indicate the outcome of a model under local changes to the input). Through a user study with 1000 participants, we test whether humans perform well on tasks that mimic the definitions of simulatability and "what if" local explainability on models that are typically considered locally interpretable. We find evidence consistent with the common intuition that decision trees and logistic regression models are interpretable and are more interpretable than neural networks. We propose a metric, the runtime operation count on the simulatability task, to indicate the relative interpretability of models, and show that, as the number of operations increases, users' accuracy on the local interpretability tasks decreases.
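As a rough illustration of what a "runtime operation count" might look like, the sketch below counts the hand-simulation steps for a logistic regression model versus a small decision tree. The function names, the tree encoding, and the counting rules (one step per multiply, add, or comparison) are assumptions made for this example, not the paper's actual protocol; the abstract does not specify how operations are tallied.

```python
import numpy as np

def logistic_regression_op_count(weights, x):
    """Steps a person would perform to simulate logistic regression by
    hand: one multiply and one add per feature, plus a final
    threshold/sigmoid step. (Illustrative accounting only.)"""
    n = len(weights)
    multiplications = n   # w_i * x_i for each feature
    additions = n         # summing the products (including the bias)
    final_steps = 1       # threshold the resulting score
    return multiplications + additions + final_steps

def decision_tree_op_count(tree, x):
    """Comparisons needed to route an input down a decision tree:
    one per internal node on the root-to-leaf path. A node is a dict
    with keys 'feature', 'threshold', 'left', 'right', or 'label'."""
    ops = 0
    node = tree
    while 'label' not in node:
        ops += 1  # one comparison at this internal node
        if x[node['feature']] <= node['threshold']:
            node = node['left']
        else:
            node = node['right']
    return ops

# Under this accounting, a depth-2 tree costs at most 2 operations per
# input, while a 10-feature logistic regression costs 21.
tree = {'feature': 0, 'threshold': 0.5,
        'left': {'label': 0},
        'right': {'feature': 1, 'threshold': 1.0,
                  'left': {'label': 0}, 'right': {'label': 1}}}
x = np.array([0.7, 1.4])
print(decision_tree_op_count(tree, x))                         # 2
print(logistic_regression_op_count(np.ones(10), np.ones(10)))  # 21
```

Counting schemes like this make the paper's headline result concrete: models that require fewer hand-simulation steps (shallow trees) would be predicted to yield higher user accuracy than models requiring many steps (wide linear models, neural networks).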

Related research

12/03/2020 · Interpretability and Explainability: A Machine Learning Zoo Mini-tour
In this review, we examine the problem of designing interpretable and ex...

11/02/2021 · Designing Inherently Interpretable Machine Learning Models
Interpretable machine learning (IML) becomes increasingly important in h...

11/02/2022 · On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach
Interpretable and explainable machine learning has seen a recent surge o...

07/15/2020 · On quantitative aspects of model interpretability
Despite the growing body of work in interpretable machine learning, it r...

05/07/2023 · PiML Toolbox for Interpretable Machine Learning Model Development and Validation
PiML (read π-ML, /ˈpai.ˈem.ˈel/) is an integrated and open-access Python...

10/05/2021 · Foundations of Symbolic Languages for Model Interpretability
Several queries and scores have recently been proposed to explain indivi...

05/16/2022 · Pest presence prediction using interpretable machine learning
Helicoverpa Armigera, or cotton bollworm, is a serious insect pest of co...
