Interpretable Active Learning

07/31/2017
by Richard L. Phillips, et al.

Active learning has long been a topic of study in machine learning. However, as increasingly complex and opaque models have become standard practice, the process of active learning, too, has become more opaque. There has been little investigation into interpreting what specific trends and patterns an active learning strategy may be exploring. This work expands on the Local Interpretable Model-agnostic Explanations framework (LIME) to provide explanations for active learning recommendations. We demonstrate how LIME can be used to generate locally faithful explanations for an active learning strategy, and how these explanations can be used to understand how different models and datasets explore a problem space over time. In order to quantify the per-subgroup differences in how an active learning strategy queries spatial regions, we introduce a notion of uncertainty bias (based on disparate impact) to measure the discrepancy in the confidence for a model's predictions between one subgroup and another. Using the uncertainty bias measure, we show that our query explanations accurately reflect the subgroup focus of the active learning queries, allowing for an interpretable explanation of what is being learned as points with similar sources of uncertainty have their uncertainty bias resolved. We demonstrate that this technique can be applied to track uncertainty bias over user-defined clusters or automatically generated clusters based on the source of uncertainty.
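The abstract's two ingredients can be sketched concretely: an uncertainty-sampling query explained with a LIME-style local surrogate, and an uncertainty bias measured as a disparate-impact-style ratio between subgroups. The sketch below is illustrative only, not the authors' implementation: the subgroup split, the uncertainty threshold, and the perturbation kernel are all assumptions made for the example, and the paper's exact definitions may differ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Labelled seed set plus an unlabelled pool for an opaque model.
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X[:100], y[:100])
pool = X[100:]

# Uncertainty sampling: predicted probabilities near 0.5 mark the
# model's uncertainty region (0.2 is an illustrative threshold).
proba = model.predict_proba(pool)[:, 1]
uncertain = np.abs(proba - 0.5) < 0.2

# Uncertainty bias, modelled on disparate impact: the ratio of the
# rates at which two subgroups fall inside the uncertainty region.
# The split on the sign of feature 0 is a stand-in for a real
# user-defined subgroup.
group_a = pool[:, 0] > 0
rate_a = uncertain[group_a].mean()
rate_b = uncertain[~group_a].mean()
uncertainty_bias = rate_a / rate_b if rate_b > 0 else np.inf

# LIME-style local surrogate for the next query: perturb the most
# uncertain point, weight perturbations by proximity to it, and fit
# a weighted linear model to the black-box probabilities. The
# surrogate's coefficients indicate which features drive the model's
# uncertainty locally, i.e. what the query is "about".
query = pool[np.argmin(np.abs(proba - 0.5))]
rng = np.random.default_rng(0)
Z = query + rng.normal(scale=0.5, size=(500, pool.shape[1]))
w = np.exp(-np.sum((Z - query) ** 2, axis=1) / 2.0)
surrogate = Ridge(alpha=1.0).fit(
    Z, model.predict_proba(Z)[:, 1], sample_weight=w
)
explanation = sorted(enumerate(surrogate.coef_), key=lambda t: -abs(t[1]))
```

Tracking `uncertainty_bias` across active-learning rounds, and grouping queries by which features dominate their `explanation`, mirrors the paper's idea of watching per-subgroup uncertainty get resolved over time.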


