Learning to Rank from Samples of Variable Quality

06/21/2018
by Mostafa Dehghani, et al.

Training deep neural networks requires many training samples, but in practice, training labels are expensive to obtain and may be of varying quality, as some may come from trusted expert labelers while others might come from heuristics or other sources of weak supervision such as crowd-sourcing. This creates a fundamental quality-versus-quantity trade-off in the learning process. Do we learn from the small amount of high-quality data or the potentially large amount of weakly-labeled data? We argue that if the learner could somehow know and take the label quality into account when learning the data representation, we could get the best of both worlds. To this end, we introduce "fidelity-weighted learning" (FWL), a semi-supervised student-teacher approach for training deep neural networks using weakly-labeled data. FWL modulates the parameter updates to a student network (trained on the task we care about) on a per-sample basis according to the posterior confidence in its label quality, as estimated by a teacher (who has access to the high-quality labels). Both student and teacher are learned from the data. We evaluate FWL on document ranking, where we outperform state-of-the-art alternative semi-supervised methods.
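The core idea of the abstract, scaling each weakly-labeled sample's influence on the student by the teacher's confidence in its label, can be illustrated with a minimal sketch. The code below is a hypothetical simplification (a linear student and an exponential mapping from teacher variance to confidence), not the paper's actual architecture: the function names and the `beta` parameter are assumptions made for illustration.

```python
import numpy as np

def fidelity_weights(teacher_variance, beta=1.0):
    """Map the teacher's per-sample predictive variance to a confidence
    weight in (0, 1]: low variance (high confidence) -> weight near 1,
    high variance (the teacher distrusts the weak label) -> weight near 0."""
    return np.exp(-beta * np.asarray(teacher_variance))

def fwl_update(w, X, y_weak, teacher_variance, lr=0.1):
    """One gradient step for a linear least-squares student on weak labels,
    with each sample's contribution scaled by the teacher's confidence."""
    c = fidelity_weights(teacher_variance)       # per-sample fidelity weight
    residual = X @ w - y_weak                    # student's prediction error
    grad = X.T @ (c * residual) / len(y_weak)    # fidelity-weighted gradient
    return w - lr * grad

# A sample the teacher distrusts (high variance) barely moves the student,
# while a trusted sample drives a normal-sized update.
w = np.zeros(2)
X = np.array([[1.0, 0.0], [0.0, 1.0]])
y_weak = np.array([1.0, 1.0])
teacher_variance = np.array([0.0, 10.0])  # sample 0 trusted, sample 1 not
w_new = fwl_update(w, X, y_weak, teacher_variance)
```

In this toy setup, the first coordinate of `w_new` (updated from the trusted sample) changes far more than the second, which is the per-sample modulation the abstract describes.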


Related research

11/08/2017: Fidelity-Weighted Learning
Training deep neural networks requires many training samples, but in pra...

11/01/2017: Avoiding Your Teacher's Mistakes: Training Neural Networks with Controlled Weak Supervision
Training deep neural networks requires massive amounts of training data,...

09/25/2019: Teacher-Student Learning Paradigm for Tri-training: An Efficient Method for Unlabeled Data Exploitation
Given that labeled data is expensive to obtain in real-world scenarios, ...

10/13/2022: Weighted Distillation with Unlabeled Examples
Distillation with unlabeled examples is a popular and powerful method fo...

11/30/2017: Learning to Learn from Weak Supervision by Full Supervision
In this paper, we propose a method for training neural networks when we ...

02/06/2021: Jointly Improving Language Understanding and Generation with Quality-Weighted Weak Supervision of Automatic Labeling
Neural natural language generation (NLG) and understanding (NLU) models ...

07/13/2022: Wakeword Detection under Distribution Shifts
We propose a novel approach for semi-supervised learning (SSL) designed ...
