A Case Study on Designing Evaluations of ML Explanations with Simulated User Studies

02/15/2023
by Ada Martin, et al.

When conducting user studies to ascertain the usefulness of model explanations in aiding human decision-making, it is important to use real-world use cases, data, and users. However, this process can be resource-intensive, allowing only a limited number of explanation methods to be evaluated. Simulated user evaluations (SimEvals), which use machine learning models as a proxy for human users, have been proposed as an intermediate step to select promising explanation methods. In this work, we conduct the first SimEvals on a real-world use case to evaluate whether explanations can better support ML-assisted decision-making in e-commerce fraud detection. We study whether SimEvals can corroborate findings from a user study conducted in this fraud detection context. In particular, we find that SimEvals suggest that all considered explainers are equally performant, and none beat a baseline without explanations – this matches the conclusions of the original user study. Such correspondences between our results and the original user study provide initial evidence in favor of using SimEvals before running user studies. We also explore the use of SimEvals as a cheap proxy to explore an alternative user study set-up. We hope that this work motivates further study of when and how SimEvals should be used to aid in the design of real-world evaluations.
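To make the SimEval recipe concrete, here is a minimal sketch of the general idea: train a proxy "user" model on the information a study participant would observe (the task model's prediction, optionally augmented with an explanation) and compare held-out accuracy across conditions. Everything below is an illustrative assumption, not the paper's actual pipeline: the synthetic stand-in for a fraud dataset, the toy feature-importance explainer, and all variable names are hypothetical.

```python
# Minimal SimEval sketch (hypothetical setup; data and explainer are
# illustrative assumptions, not the paper's fraud-detection pipeline).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for an e-commerce fraud dataset.
n, d = 5000, 10
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Task model whose predictions (and explanations) the simulated user sees.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
task_model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

def toy_explanation(model, X):
    # Toy explainer: per-instance features scaled by global importances.
    return X * model.feature_importances_

# Each condition defines what information the simulated user observes.
conditions = {
    "no_explanation": lambda m, X: m.predict_proba(X)[:, [1]],
    "with_explanation": lambda m, X: np.hstack(
        [m.predict_proba(X)[:, [1]], toy_explanation(m, X)]
    ),
}

# SimEval: train a proxy "user" model per condition on the observed
# information, then compare held-out accuracy on the use-case label.
for name, observe in conditions.items():
    obs_tr = np.hstack([X_tr, observe(task_model, X_tr)])
    obs_te = np.hstack([X_te, observe(task_model, X_te)])
    sim_user = RandomForestClassifier(random_state=0).fit(obs_tr, y_tr)
    print(f"{name}: simulated-user accuracy = {sim_user.score(obs_te, y_te):.3f}")
```

Under this kind of setup, a clear accuracy gap between conditions would suggest the explanation carries decision-relevant signal for the use case; roughly equal accuracy across conditions mirrors the finding reported above, where no explainer beat the no-explanation baseline.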


