
More Than Accuracy: Towards Trustworthy Machine Learning Interfaces for Object Recognition

by   Hendrik Heuer, et al.

This paper investigates the user experience of visualizations of a machine learning (ML) system that recognizes objects in images. This is important because even good systems can fail in unexpected ways, as misclassifications on photo-sharing websites have shown. In our study, we exposed users with a background in ML to three visualizations of three systems with different levels of accuracy. In interviews, we explored how the visualizations helped users assess the accuracy of systems in use, and how the visualization and the accuracy of the system affected trust and reliance. We found that participants focus not only on accuracy when assessing ML systems: they also take the perceived plausibility and severity of misclassifications into account, and they prefer seeing the probability of predictions. Semantically plausible errors are judged as less severe than implausible ones, which suggests that system accuracy could be communicated through the types of errors a system makes.



