
Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs

by Harini Suresh, et al.

Interpretability methods aim to help users build trust in and understand the capabilities of machine learning models. However, existing approaches often rely on abstract, complex visualizations that poorly map to the task at hand or require non-trivial ML expertise to interpret. Here, we present two interface modules to facilitate a more intuitive assessment of model reliability. To help users better characterize and reason about a model's uncertainty, we visualize raw and aggregate information about a given input's nearest neighbors in the training dataset. Using an interactive editor, users can manipulate this input in semantically-meaningful ways, determine the effect on the output, and compare against their prior expectations. We evaluate our interface using an electrocardiogram beat classification case study. Compared to a baseline feature importance interface, we find that 9 physicians are better able to align the model's uncertainty with clinically relevant factors and build intuition about its capabilities and limitations.
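The neighbor-based uncertainty module described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the helper name, the use of Euclidean distance, and the assumption that inputs have already been mapped to feature vectors are all choices made here for clarity. The label distribution among an input's nearest training neighbors serves as the intuitive uncertainty signal (a mixed distribution suggests the model is on less familiar ground).

```python
import numpy as np

def nearest_neighbor_summary(query, train_embeddings, train_labels, k=5):
    """Return the indices of the k nearest training examples to `query`
    and the label counts among them.

    Hypothetical helper: `query` and `train_embeddings` are assumed to be
    feature vectors (e.g., from the model's penultimate layer), and
    Euclidean distance stands in for whatever similarity the interface uses.
    """
    # Distance from the query to every training example
    dists = np.linalg.norm(train_embeddings - query, axis=1)
    # Indices of the k closest training examples
    idx = np.argsort(dists)[:k]
    # Aggregate view: how the neighbors' labels are distributed
    labels, counts = np.unique(train_labels[idx], return_counts=True)
    return idx, dict(zip(labels.tolist(), counts.tolist()))

# Toy example: five 2-D embeddings with binary labels
emb = np.array([[0.0, 0.0], [0.1, 0.0], [0.9, 1.0], [1.0, 1.0], [0.2, 0.1]])
lab = np.array([0, 0, 1, 1, 0])
idx, label_dist = nearest_neighbor_summary(np.array([0.05, 0.05]), emb, lab, k=3)
```

A unanimous `label_dist` (all neighbors share one class) would be surfaced to the user as a low-uncertainty case, while an even split would flag the prediction for closer scrutiny.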


