The Promise and Peril of Human Evaluation for Model Interpretability

11/20/2017
by Bernease Herman, et al.

Transparency, user trust, and human comprehension are popular ethical motivations for interpretable machine learning. In support of these goals, researchers evaluate model explanation performance using humans and real-world applications. This alone presents a challenge in many areas of artificial intelligence. In this position paper, we propose a distinction between descriptive and persuasive explanations. We discuss reasoning suggesting that functional interpretability may be correlated with cognitive function and user preferences. If this is indeed the case, evaluation and optimization using functional metrics could perpetuate implicit cognitive bias in explanations that threatens transparency. Finally, we propose two potential research directions to disambiguate cognitive function and explanation models, retaining control over the tradeoff between accuracy and interpretability.


