
More Than Accuracy: Towards Trustworthy Machine Learning Interfaces for Object Recognition

08/05/2020
by Hendrik Heuer, et al.

This paper investigates the user experience of visualizations of a machine learning (ML) system that recognizes objects in images. This is important since even good systems can fail in unexpected ways, as misclassifications on photo-sharing websites have shown. In our study, we exposed users with a background in ML to three visualizations of three systems with different levels of accuracy. In interviews, we explored how the visualizations helped users assess the accuracy of the systems in use and how the visualization and the accuracy of a system affected trust and reliance. We found that participants focus not only on accuracy when assessing ML systems: they also take the perceived plausibility and severity of misclassifications into account and prefer seeing the probability of predictions. Semantically plausible errors are judged as less severe than implausible ones, which suggests that system accuracy could be communicated through the types of errors it makes.
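
One of these findings is that users prefer seeing prediction probabilities rather than a single hard label. As a point of reference only, the following is a minimal sketch, not the system studied in the paper, of how an object-recognition model's top-5 class probabilities could be surfaced to users; it assumes a pretrained torchvision classifier, and the image path is a placeholder.

# Minimal sketch (not the authors' system): show the top-5 prediction
# probabilities of a pretrained image classifier so users can judge how
# confident the system is instead of seeing only a single label.
# Assumes torchvision >= 0.13; "example.jpg" is a placeholder path.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()  # resize, crop, and normalize as the weights expect
image = Image.open("example.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = torch.softmax(model(batch)[0], dim=0)

top_probs, top_ids = probs.topk(5)
labels = weights.meta["categories"]  # class names bundled with the weights
for p, i in zip(top_probs, top_ids):
    print(f"{labels[i]}: {p.item():.1%}")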

