HIVE: Evaluating the Human Interpretability of Visual Explanations

12/06/2021
by Sunnie S. Y. Kim, et al.

As machine learning is increasingly applied to high-impact, high-risk domains, there have been a number of new methods aimed at making AI models more human interpretable. Despite the recent growth of interpretability work, there is a lack of systematic evaluation of proposed techniques. In this work, we propose a novel human evaluation framework HIVE (Human Interpretability of Visual Explanations) for diverse interpretability methods in computer vision; to the best of our knowledge, this is the first work of its kind. We argue that human studies should be the gold standard in properly evaluating how interpretable a method is to human users. While human studies are often avoided due to challenges associated with cost, study design, and cross-method comparison, we describe how our framework mitigates these issues and conduct IRB-approved studies of four methods that represent the diversity of interpretability work: GradCAM, BagNet, ProtoPNet, and ProtoTree. Our results suggest that explanations engender human trust regardless of whether they are correct, yet are not distinct enough for users to distinguish between correct and incorrect predictions. Lastly, we open-source our framework to enable future studies and to encourage more human-centered approaches to interpretability.
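As a rough illustration of one of the four evaluated explanation types, the sketch below computes a Grad-CAM heatmap for a pretrained torchvision ResNet-50. The model, target layer, and function names here are illustrative assumptions; this is not part of the HIVE framework or the paper's released code.

    # Illustrative sketch (assumptions: ResNet-50, last block of layer4 as target layer).
    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

    activations, gradients = {}, {}

    def forward_hook(module, inputs, output):
        # Cache the spatial feature maps of the hooked layer.
        activations["value"] = output.detach()

    def backward_hook(module, grad_input, grad_output):
        # Cache the gradient of the target score w.r.t. those feature maps.
        gradients["value"] = grad_output[0].detach()

    target_layer = model.layer4[-1]
    target_layer.register_forward_hook(forward_hook)
    target_layer.register_full_backward_hook(backward_hook)

    def grad_cam(image, class_idx=None):
        """Return an (H, W) heatmap in [0, 1] for one preprocessed image of shape (1, 3, H, W)."""
        logits = model(image)
        if class_idx is None:
            class_idx = logits.argmax(dim=1).item()
        model.zero_grad()
        logits[0, class_idx].backward()

        # Global-average-pool the gradients to get per-channel importance weights,
        # then take a ReLU of the weighted sum of the feature maps.
        weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
        cam = F.relu((weights * activations["value"]).sum(dim=1))     # (1, h, w)
        cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                            mode="bilinear", align_corners=False)[0, 0]
        cam -= cam.min()
        return cam / (cam.max() + 1e-8)

The resulting heatmap is typically overlaid on the input image so that study participants can judge which regions the model relied on; prototype-based methods such as ProtoPNet and ProtoTree instead present matched image patches.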


