Interactive Label Cleaning with Example-based Explanations

06/07/2021
by   Stefano Teso, et al.

We tackle sequential learning under label noise in applications where a human supervisor can be queried to relabel suspicious examples. Existing approaches are flawed in that they only relabel incoming examples that look "suspicious" to the model. As a consequence, mislabeled examples that elude (or don't undergo) this cleaning step end up tainting the training data and the model, with no further chance of being cleaned. We propose Cincer, a novel approach that cleans both new and past data by identifying pairs of mutually incompatible examples. Whenever it detects a suspicious example, Cincer identifies a counter-example in the training set that – according to the model – is maximally incompatible with the suspicious example, and asks the annotator to relabel either or both examples, resolving the possible inconsistency. The counter-examples are chosen to be maximally incompatible, so as to serve as explanations of the model's suspicion, and highly influential, so as to convey as much information as possible if relabeled. Cincer achieves this by leveraging an efficient and robust approximation of influence functions based on the Fisher information matrix (FIM). Our extensive empirical evaluation shows that clarifying the reasons behind the model's suspicions by cleaning the counter-examples helps acquire substantially better data and models, especially when paired with our FIM approximation.
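To make the counter-example selection concrete, here is a minimal, illustrative sketch (not the authors' implementation) of a FIM-based influence score. It assumes a simple logistic-regression model and approximates the influence of each training example z' on a suspicious example z by the Fisher kernel g(z')ᵀ F⁻¹ g(z), where g is the per-example gradient of the loss and F is the empirical Fisher information matrix; the highest-scoring training example plays the role of the counter-example. All function names and the damping term are illustrative choices, not part of the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, x, y):
    """Per-example gradient of the logistic loss at weights w."""
    return (sigmoid(x @ w) - y) * x

def fisher_influence(w, X, Y, x_susp, y_susp, damping=1e-2):
    """Score each training example by a FIM-approximated influence
    on the suspicious example: g(z')^T F^{-1} g(z).  The damping term
    keeps the empirical Fisher matrix invertible."""
    grads = np.stack([grad(w, x, y) for x, y in zip(X, Y)])
    fim = grads.T @ grads / len(X) + damping * np.eye(X.shape[1])
    g_susp = grad(w, x_susp, y_susp)
    return grads @ np.linalg.solve(fim, g_susp)

# Tiny synthetic setup: linearly separable labels, model fit by
# plain gradient descent on the logistic loss.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
w_true = np.array([1.0, -2.0, 0.5])
Y = (X @ w_true > 0).astype(float)
w = np.zeros(3)
for _ in range(200):
    w -= 0.1 * np.mean([grad(w, x, y) for x, y in zip(X, Y)], axis=0)

# Treat example 0 with a flipped label as the "suspicious" example;
# the training example with the largest score is the counter-example.
scores = fisher_influence(w, X, Y, X[0], 1.0 - Y[0])
counter_idx = int(np.argmax(scores))
```

In the paper's setting this score serves double duty: a highly influential counter-example both explains why the model is suspicious and, if it turns out to be mislabeled, yields a large correction to the model when relabeled.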

