Tug the Student to Learn Right: Progressive Gradient Correcting by Meta-learner on Corrupted Labels

02/20/2019
by   Jun Shu, et al.

While deep networks have a strong capability to fit complex input patterns, they can also easily overfit biased training data with corrupted labels. Sample reweighting is a common strategy for alleviating this robust-learning issue: corrupted samples are assigned zero or small weights to suppress their negative influence on learning. Current reweighting algorithms, however, require elaborate tuning of additional hyper-parameters or the careful design of a complex meta-learner for learning to assign sample weights. To address these issues, we propose a new meta-learning method with few tuned hyper-parameters and a simple meta-learner structure (a one-hidden-layer MLP network). Guided by a small amount of unbiased meta-data, the parameters of the proposed meta-learner gradually evolve to finely tug the classifier's gradient toward the right direction. This learning manner mirrors a real teaching process: a good teacher should respect the student's own learning manner and progressively correct the student's learning bias based on his or her current learning status. Experimental results substantiate the robustness of the new algorithm on corrupted-label cases, as well as its stability and efficiency in learning.
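The abstract does not give implementation details, so the following is only an illustrative sketch of the general meta-learned sample-reweighting idea it describes: a one-hidden-layer MLP maps each sample's loss to a weight, and its parameters are updated so that a pseudo-updated classifier does well on a small clean meta-set. The toy regression task, all function names (`inner_step`, `meta_loss`, `weights`), the initialization, and the learning rates are assumptions; the meta-gradient here is approximated by finite differences rather than by backpropagating through the pseudo-update as a real implementation would.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = 2x, with ~30% of labels corrupted (sign-flipped).
n = 60
x = rng.uniform(0.5, 1.5, n)
y = 2.0 * x
corrupt = rng.random(n) < 0.3
y[corrupt] *= -1.0                      # corrupted labels pull the fit toward -2

# Small clean "meta" set guiding the meta-learner.
xm = rng.uniform(0.5, 1.5, 10)
ym = 2.0 * xm

def weights(theta, losses):
    """One-hidden-layer MLP V_theta: per-sample loss -> weight in (0, 1)."""
    v1, b1, v2, b2 = theta
    h = np.tanh(v1 * losses + b1)       # hidden layer (width 1 for brevity)
    return 1.0 / (1.0 + np.exp(-(v2 * h + b2)))

def inner_step(w, theta, lr=0.1):
    """One weighted gradient step on the classifier parameter w."""
    losses = (w * x - y) ** 2
    s = weights(theta, losses)
    grad = np.mean(s * 2.0 * (w * x - y) * x)
    return w - lr * grad

def meta_loss(theta, w):
    """Clean-set loss after a pseudo-update of w under weights from theta."""
    w_new = inner_step(w, theta)
    return np.mean((w_new * xm - ym) ** 2)

w = 0.0
theta = np.array([0.3, 0.0, 0.0, 0.0])  # initial weights are uniform (0.5)
meta_lr = 0.3
for _ in range(800):
    # Meta update: finite-difference gradient of the meta loss w.r.t. theta.
    eps, g = 1e-4, np.zeros_like(theta)
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        g[i] = (meta_loss(tp, w) - meta_loss(tm, w)) / (2.0 * eps)
    theta -= meta_lr * g
    # Classifier update with the freshly corrected weights.
    w = inner_step(w, theta)

s = weights(theta, (w * x - y) ** 2)
print(w, s[~corrupt].mean(), s[corrupt].mean())
```

With all weights fixed at 0.5 the least-squares fit settles near 0.8 on this data; the meta-learner recovers a decreasing loss-to-weight mapping (corrupted samples incur larger losses once the fit improves), which pushes the estimate back toward the true slope of 2.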


