Enforcing Interpretability and its Statistical Impacts: Trade-offs between Accuracy and Interpretability

To date, there has been no formal study of the statistical cost of interpretability in machine learning. As such, the discourse around potential trade-offs is often informal and misconceptions abound. In this work, we aim to initiate a formal study of these trade-offs. A seemingly insurmountable roadblock is the lack of any agreed-upon definition of interpretability. Instead, we propose a shift in perspective. Rather than attempt to define interpretability, we propose to model the act of enforcing interpretability. As a starting point, we focus on the setting of empirical risk minimization for binary classification, and view interpretability as a constraint placed on learning. That is, we assume we are given a subset of hypotheses that are deemed to be interpretable, possibly depending on the data distribution and other aspects of the context. We then model the act of enforcing interpretability as that of performing empirical risk minimization over the set of interpretable hypotheses. This model allows us to reason about the statistical implications of enforcing interpretability, using known results in statistical learning theory. Focusing on accuracy, we perform a case analysis, explaining why one may or may not observe a trade-off between accuracy and interpretability when the restriction to interpretable classifiers does or does not come at the cost of some excess statistical risk. We close with some worked examples and some open problems, which we hope will spur further theoretical development around the trade-offs involved in interpretability.
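The core modeling idea above can be made concrete with a minimal sketch. The hypothesis class, the "interpretable" subset, and the data below are all illustrative choices, not from the paper: the full class is one-dimensional threshold classifiers with arbitrary thresholds, and the interpretable subset restricts thresholds to whole numbers, standing in for any context-dependent interpretability constraint. Enforcing interpretability then just means running empirical risk minimization over the smaller set, and the gap between the two minimized risks is the excess risk the abstract discusses.

```python
# Illustrative sketch (toy example, not the paper's construction):
# ERM over a full hypothesis class vs. over an "interpretable" subset.
# Hypotheses are thresholds t defining h_t(x) = 1 if x >= t else 0.

def empirical_risk(threshold, data):
    """Fraction of labeled points (x, y) misclassified by h_threshold."""
    return sum((1 if x >= threshold else 0) != y for x, y in data) / len(data)

def erm(hypotheses, data):
    """Empirical risk minimization: the hypothesis with smallest risk."""
    return min(hypotheses, key=lambda t: empirical_risk(t, data))

# Toy sample: positives sit above roughly 2.5.
data = [(0.2, 0), (1.1, 0), (2.3, 0), (2.6, 1), (3.4, 1), (4.8, 1)]

full_class = [0.5, 1.5, 2.45, 2.5, 3.5]    # unconstrained thresholds
interpretable = [0.0, 1.0, 2.0, 3.0, 4.0]  # whole-number thresholds only

risk_full = empirical_risk(erm(full_class, data), data)
risk_int = empirical_risk(erm(interpretable, data), data)
print(risk_full, risk_int)  # the constrained class incurs excess risk here
```

On this sample the unconstrained ERM attains zero empirical risk, while the best whole-number threshold still misclassifies one point, so a trade-off appears; had the true decision boundary sat at a whole number, the two risks would coincide and no trade-off would be observed, mirroring the case analysis in the abstract.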


