Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience

01/24/2020
by Bhavya Ghai, et al.

Active Learning (AL) is a human-in-the-loop Machine Learning paradigm favored for its ability to learn with fewer labeled instances, but the model's states and progress remain opaque to the annotators. Meanwhile, many recognize the benefits of model transparency for people interacting with ML models, as reflected by the surge of explainable AI (XAI) as a research field. However, explaining an evolving model introduces many open questions regarding its impact on annotation quality and the annotator's experience. In this paper, we propose a novel paradigm of explainable active learning (XAL): explaining the learning algorithm's prediction for the instance it wants to learn from and soliciting feedback from the annotator. We conduct an empirical study comparing the model learning outcome, human feedback content, and annotator experience under XAL with those under traditional AL and coactive learning (providing the model's prediction without the explanation). Our study reveals benefits of providing local explanations in AL (supporting trust calibration and enabling additional forms of human feedback) as well as potential drawbacks (anchoring effects and frustration at transparent model limitations). We conclude by suggesting directions for developing explanations that better support annotator experience in AL and interactive ML settings.
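The XAL loop described above can be sketched in code. This is a minimal illustration, not the paper's implementation: it assumes uncertainty sampling as the query strategy, a logistic regression model, and feature contributions (coefficient times feature value) as the local explanation; the annotator's feedback is simulated with the ground-truth label, whereas in XAL the annotator would also see and react to the explanation.

```python
# Hypothetical sketch of an explainable active learning (XAL) loop.
# Assumptions (not from the paper): uncertainty sampling, a linear model,
# and per-feature contributions as the local explanation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic binary-classification data with two informative features.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Seed labeled set with five examples of each class; the rest is the pool.
pos = np.where(y == 1)[0]
neg = np.where(y == 0)[0]
labeled = [int(i) for i in list(pos[:5]) + list(neg[:5])]
unlabeled = [i for i in range(200) if i not in labeled]

model = LogisticRegression().fit(X[labeled], y[labeled])

for _ in range(20):
    # 1. Query selection: the instance the model is least certain about.
    proba = model.predict_proba(X[unlabeled])[:, 1]
    qi = unlabeled[int(np.argmin(np.abs(proba - 0.5)))]

    # 2. Local explanation shown alongside the prediction -- the core
    #    idea of XAL. For a linear model, coefficient * feature value
    #    gives each feature's contribution to the decision.
    prediction = int(model.predict(X[qi].reshape(1, -1))[0])
    contributions = model.coef_[0] * X[qi]

    # 3. Annotator feedback: simulated here by the ground-truth label.
    label = y[qi]

    # 4. Model update: retrain on the enlarged labeled set.
    labeled.append(qi)
    unlabeled.remove(qi)
    model.fit(X[labeled], y[labeled])

accuracy = model.score(X, y)
```

After 20 queries the model has seen 30 labels in total; because the synthetic data is linearly separable, the resulting accuracy is high. The point of the sketch is step 2: unlike traditional AL, each query is accompanied by the model's prediction and a local explanation of it.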


Related research

01/23/2019 · Explaining Models: An Empirical Study of How Explanations Impact Fairness Judgment
Ensuring fairness of machine learning systems is a human-in-the-loop pro...

06/21/2021 · A Turing Test for Transparency
A central goal of explainable artificial intelligence (XAI) is to improv...

09/01/2023 · Explainable Active Learning for Preference Elicitation
Gaining insights into the preferences of new users and subsequently pers...

09/06/2020 · Active Learning++: Incorporating Annotator's Rationale using Local Model Explanation
We propose a new active learning (AL) framework, Active Learning++, whic...

07/20/2020 · Toward Machine-Guided, Human-Initiated Explanatory Interactive Learning
Recent work has demonstrated the promise of combining local explanations...

03/01/2023 · Implementing Active Learning in Cybersecurity: Detecting Anomalies in Redacted Emails
Research on email anomaly detection has typically relied on specially pr...

06/13/2021 · Active Learning for Network Traffic Classification: A Technical Study
Network Traffic Classification (NTC) has become an important feature in ...
