CNT (Conditioning on Noisy Targets): A New Algorithm for Leveraging Top-Down Feedback

We propose a novel regularizer for supervised learning called Conditioning on Noisy Targets (CNT). The approach conditions the model on a noisy version of the target(s) (e.g., actions in imitation learning or labels in classification) at a random noise level, ranging from small to large. At inference time, since the target is unknown, we run the network with pure noise in place of the noisy target. CNT provides hints through the noisy label (with less noise, the true target is easier to infer). This gives two main benefits: 1) the top-down feedback allows the model to focus on simpler and more digestible sub-problems, and 2) rather than learning to solve the task from scratch, the model first learns to master easy examples (with less noise) while progressively moving toward harder examples (with more noise).
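To make the procedure concrete, here is a minimal sketch of CNT for classification in PyTorch. The abstract does not specify an architecture, noise distribution, or how the noisy target is injected, so the choices below (one-hot labels corrupted by Gaussian noise with a uniformly sampled scale, concatenated to the input features, and the `CNTClassifier` / `cnt_training_step` / `cnt_predict` names) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNTClassifier(nn.Module):
    """Hypothetical conditioning scheme: the noisy one-hot target is
    concatenated to the input features before the first layer."""
    def __init__(self, in_dim, num_classes, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim + num_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x, noisy_target):
        return self.net(torch.cat([x, noisy_target], dim=-1))

def cnt_training_step(model, x, y, num_classes, optimizer):
    # Sample a per-example noise level (assumed uniform in [0, 1];
    # the paper only says "from small to large noise").
    sigma = torch.rand(x.size(0), 1)
    onehot = F.one_hot(y, num_classes).float()
    # Corrupt the target: low sigma gives a strong hint, high sigma almost none.
    noisy_target = onehot + sigma * torch.randn_like(onehot)
    logits = model(x, noisy_target)
    loss = F.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def cnt_predict(model, x, num_classes):
    # At inference the target is unknown, so condition on pure noise.
    noise = torch.randn(x.size(0), num_classes)
    return model(x, noise).argmax(dim=-1)
```

Under these assumptions, the random noise scale interpolates between "label nearly given" (an easy sub-problem) and "no usable hint" (the full task), which is what produces the curriculum-like effect described above.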
