Shared Interest: Large-Scale Visual Analysis of Model Behavior by Measuring Human-AI Alignment

07/20/2021
by Angie Boggust, et al.

Saliency methods (techniques that identify the importance of input features to a model's output) are a common first step in understanding neural network behavior. However, interpreting saliency requires tedious manual inspection to identify and aggregate patterns in model behavior, resulting in ad hoc or cherry-picked analysis. To address these concerns, we present Shared Interest: a set of metrics for comparing saliency with human-annotated ground truths. By providing quantitative descriptors, Shared Interest allows ranking, sorting, and aggregation of inputs, thereby facilitating large-scale systematic analysis of model behavior. We use Shared Interest to identify eight recurring patterns in model behavior, including focusing on a sufficient subset of ground-truth features or being distracted by contextual features. Working with representative real-world users, we show how Shared Interest can be used to rapidly develop or lose trust in a model's reliability, uncover issues missed in manual analyses, and enable interactive probing of model behavior.
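To make the idea concrete, the comparison of a saliency map against a human annotation can be sketched as set-overlap metrics between a binarized saliency mask and a ground-truth mask. The function and metric names below are illustrative assumptions based on the abstract's description, not the paper's exact API:

```python
import numpy as np

def shared_interest_scores(saliency_mask, ground_truth_mask):
    """Illustrative Shared Interest-style overlap metrics between a
    binarized saliency mask S and a human-annotated ground-truth mask G.
    (Names and exact definitions are assumptions for this sketch.)"""
    s = np.asarray(saliency_mask, dtype=bool)
    g = np.asarray(ground_truth_mask, dtype=bool)
    inter = np.logical_and(s, g).sum()
    union = np.logical_or(s, g).sum()
    return {
        # overall agreement between saliency and annotation
        "iou": inter / union if union else 0.0,
        # fraction of the ground-truth region the model attends to
        "ground_truth_coverage": inter / g.sum() if g.sum() else 0.0,
        # fraction of the salient region that lies inside the annotation
        "saliency_coverage": inter / s.sum() if s.sum() else 0.0,
    }

# Toy 4x4 example: saliency covers a sufficient subset of the annotation.
s = np.zeros((4, 4), dtype=bool); s[1:3, 1:3] = True   # 4 salient pixels
g = np.zeros((4, 4), dtype=bool); g[1:4, 1:4] = True   # 9 annotated pixels
scores = shared_interest_scores(s, g)
```

Because these descriptors are scalar per input, a dataset can be sorted or aggregated by them, which is what enables the large-scale pattern analysis the abstract describes (e.g., inputs with high saliency coverage but low ground-truth coverage correspond to "sufficient subset" behavior).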


