A Formal Framework to Characterize Interpretability of Procedures

07/12/2017
by Amit Dhurandhar, et al.

We provide a novel notion of what it means to be interpretable, looking past the usual association with human understanding. Our key insight is that interpretability is not an absolute concept: we define it relative to a target model, which may or may not be a human. We define a framework for comparing interpretable procedures by linking them to important practical aspects such as accuracy and robustness. We characterize many current state-of-the-art interpretable methods within our framework, demonstrating its general applicability.
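The relative notion above can be illustrated with a small sketch: a procedure counts as interpretable with respect to a target model if the information it conveys improves that target model's performance. The function name and the ratio-based criterion below are illustrative assumptions for this page, not the paper's exact definitions.

```python
def interpretability_ratio(target_error_before: float,
                           target_error_after: float) -> float:
    """Ratio of the target model's error after receiving a procedure's
    information to its error before. Values below 1 mean the procedure
    helped the target model (illustrative assumption, not the paper's
    formal definition)."""
    if target_error_before <= 0:
        raise ValueError("baseline error must be positive")
    return target_error_after / target_error_before

# Example: an explanation that lowers the target model's error
# from 0.20 to 0.15 gives a ratio of 0.75, i.e. the procedure is
# interpretable relative to this particular target model.
ratio = interpretability_ratio(0.20, 0.15)
is_interpretable = ratio < 1.0
```

Note that the same procedure could yield a ratio above 1 for a different target model, which is the point of defining interpretability relative to a target rather than absolutely.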


Related research

- 06/09/2017: TIP: Typifying the Interpretability of Procedures
- 05/29/2021: The Definitions of Interpretability and Learning of Interpretable Models
- 07/08/2019: The Price of Interpretability
- 04/07/2020: Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?
- 01/24/2021: Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs
- 10/26/2020: Enforcing Interpretability and its Statistical Impacts: Trade-offs between Accuracy and Interpretability
- 05/30/2021: Human Interpretable AI: Enhancing Tsetlin Machine Stochasticity with Drop Clause
