Transfer and Marginalize: Explaining Away Label Noise with Privileged Information

02/18/2022
by Mark Collier, et al.

Supervised learning datasets often contain privileged information: features that are available at training time but not at test time, e.g., the ID of the annotator who provided the label. We argue that privileged information is useful for explaining away label noise, thereby reducing the harmful impact of noisy labels. We develop a simple and efficient method for supervised neural networks: it transfers, via weight sharing, the knowledge learned with privileged information, and approximately marginalizes over privileged information at test time. Our method, TRAM (TRansfer and Marginalize), has minimal training-time overhead and the same test-time cost as not using privileged information. TRAM performs strongly on the CIFAR-10H, ImageNet, and Civil Comments benchmarks.
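To make the transfer-and-marginalize idea in the abstract concrete, the sketch below shows one plausible way to wire up a shared backbone with a privileged-information (PI) head used only during training and a no-PI head used at test time. This is a minimal sketch only, assuming a PyTorch-style setup; the names (TRAMSketch, pi_head, no_pi_head, tram_loss), dimensions, and loss weighting are hypothetical illustrations, not the authors' reference implementation.

```python
# Minimal sketch of transfer-and-marginalize with PI (hypothetical names,
# PyTorch assumed; not the authors' released code).
import torch
import torch.nn as nn


class TRAMSketch(nn.Module):
    def __init__(self, num_classes, num_annotators, feat_dim=128, pi_dim=16):
        super().__init__()
        # Shared backbone: knowledge learned with PI transfers to the
        # no-PI head through these shared weights.
        self.backbone = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU()
        )
        # Privileged information (e.g. annotator ID) is embedded and
        # concatenated with backbone features for the PI head.
        self.pi_embed = nn.Embedding(num_annotators, pi_dim)
        self.pi_head = nn.Linear(feat_dim + pi_dim, num_classes)
        # The no-PI head sees only backbone features and is the head used
        # at test time (approximate marginalization over PI).
        self.no_pi_head = nn.Linear(feat_dim, num_classes)

    def forward(self, x, annotator_id=None):
        h = self.backbone(x)
        if annotator_id is None:
            # Test time: PI unavailable, so only the no-PI head is used,
            # at the same cost as a model trained without PI.
            return self.no_pi_head(h)
        # Training time: both heads are trained; gradients from the PI head
        # flow into the shared backbone (the "transfer" step).
        logits_pi = self.pi_head(
            torch.cat([h, self.pi_embed(annotator_id)], dim=-1)
        )
        # The paper may detach h for this head; kept simple in this sketch.
        logits_no_pi = self.no_pi_head(h)
        return logits_pi, logits_no_pi


def tram_loss(model, x, y, annotator_id):
    # One plausible training loss: sum of cross-entropies from both heads.
    # The exact weighting and gradient-flow choices in the paper may differ.
    logits_pi, logits_no_pi = model(x, annotator_id)
    ce = nn.functional.cross_entropy
    return ce(logits_pi, y) + ce(logits_no_pi, y)
```

At test time the model would be called without an annotator ID, so only the no-PI head runs, matching the claim that the test-time cost equals that of a model trained without privileged information.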

Related research

- When does Privileged Information Explain Away Label Noise? (03/03/2023): Leveraging privileged information (PI), or features available during tra...
- Learning Less Generalizable Patterns with an Asymmetrically Trained Double Classifier for Better Test-Time Adaptation (10/17/2022): Deep neural networks often fail to generalize outside of their training ...
- TTAPS: Test-Time Adaption by Aligning Prototypes using Self-Supervision (05/18/2022): Nowadays, deep neural networks outperform humans in many tasks. However,...
- AffRankNet+: Ranking Affect Using Privileged Information (08/12/2021): Many of the affect modelling tasks present an asymmetric distribution of...
- Picking Winning Tickets Before Training by Preserving Gradient Flow (02/18/2020): Overparameterization has been shown to benefit both the optimization and...
- Changing Model Behavior at Test-Time Using Reinforcement Learning (02/24/2017): Machine learning models are often used at test-time subject to constrain...
- Utilizing Excess Resources in Training Neural Networks (07/12/2022): In this work, we suggest Kernel Filtering Linear Overparameterization (K...
