Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs

02/17/2021
by Harini Suresh, et al.

Interpretability methods aim to help users build trust in and understand the capabilities of machine learning models. However, existing approaches often rely on abstract, complex visualizations that map poorly to the task at hand or require non-trivial ML expertise to interpret. Here, we present two interface modules that facilitate a more intuitive assessment of model reliability. To help users better characterize and reason about a model's uncertainty, we visualize raw and aggregate information about a given input's nearest neighbors in the training dataset. Using an interactive editor, users can manipulate this input in semantically meaningful ways, determine the effect on the output, and compare the result against their prior expectations. We evaluate our interface in a case study on electrocardiogram (ECG) beat classification with nine physicians. Compared to a baseline feature-importance interface, physicians using our modules are better able to align the model's uncertainty with clinically relevant factors and to build intuition about the model's capabilities and limitations.
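
To make the two modules concrete, here is a minimal sketch, not the authors' implementation. It assumes a classifier whose penultimate-layer embeddings are available for the training set; all names (train_embeddings, model_proba, edit_fn) are illustrative assumptions. The first part retrieves an input's nearest training neighbors and aggregates their labels as an intuitive uncertainty signal; the second applies a user edit to the input and compares the model's output before and after.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-ins for a learned embedding space over the training ECG beats
    # and their labels (0 = normal, 1 = abnormal). In the interface above,
    # these would come from the classifier's penultimate layer.
    train_embeddings = rng.normal(size=(1000, 16))
    train_labels = rng.integers(0, 2, size=1000)

    def nearest_neighbors(query_emb, k=5):
        # Raw view: the k training examples closest to the input.
        dists = np.linalg.norm(train_embeddings - query_emb, axis=1)
        idx = np.argsort(dists)[:k]
        return idx, train_labels[idx], dists[idx]

    def neighbor_agreement(neighbor_labels, predicted_label):
        # Aggregate view: fraction of neighbors sharing the model's
        # prediction. Low agreement suggests the input lies in an
        # ambiguous region of the training data.
        return float((neighbor_labels == predicted_label).mean())

    def model_proba(x):
        # Placeholder classifier; any callable returning class
        # probabilities works here.
        p1 = 1.0 / (1.0 + np.exp(-0.1 * x.sum()))
        return np.array([1.0 - p1, p1])

    def compare_after_edit(x, edit_fn):
        # Interactive-editor idea: apply a semantically meaningful edit
        # (e.g., flattening one wave segment of a beat) and report how
        # the model's output shifts.
        return model_proba(x), model_proba(edit_fn(x))

    query = rng.normal(size=16)
    _, labels, _ = nearest_neighbors(query, k=5)
    print("neighbor labels:", labels)
    print("agreement with predicted class 1:", neighbor_agreement(labels, 1))

    before, after = compare_after_edit(
        query, lambda x: np.where(np.arange(x.size) < 4, 0.0, x))
    print("prediction before edit:", before, "after edit:", after)

In the paper's setting, a user would compare the before/after outputs against their clinical expectation for the edit; a mismatch is a cue that the model may be unreliable for inputs of that kind.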


Related research

05/19/2023  Where does a computer vision model make mistakes? Using interactive visualizations to find where and how CV models can improve
06/06/2019  Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild
03/05/2020  ViCE: Visual Counterfactual Explanations for Machine Learning Models
09/16/2019  Prediction Uncertainty Estimation for Hate Speech Classification
06/30/2022  Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values
11/18/2019  Justification-Based Reliability in Machine Learning
04/24/2022  An empirical study of the effect of background data size on the stability of SHapley Additive exPlanations (SHAP) for deep learning models
