Tug the Student to Learn Right: Progressive Gradient Correcting by Meta-learner on Corrupted Labels

02/20/2019
by   Jun Shu, et al.

While deep networks have strong capability to fit complex input patterns, they can easily overfit to biased training data with corrupted labels. A sample reweighting strategy is commonly used to alleviate this robust-learning issue, imposing zero or smaller weights on corrupted samples to suppress their negative influence on learning. Current reweighting algorithms, however, require elaborate tuning of additional hyper-parameters or careful design of a complex meta-learner for learning to assign weights to samples. To address these issues, we propose a new meta-learning method with few tuned hyper-parameters and a meta-learner with a simple structure (a one-hidden-layer MLP). Guided by a small amount of unbiased meta-data, the parameters of the proposed meta-learner can be gradually evolved to finely tug the classifier's gradient toward the right direction. This learning manner mirrors a real teaching process: a good teacher should respect the student's own learning manner and progressively correct the student's learning bias based on their current learning status. Experimental results substantiate the robustness of the new algorithm on corrupted-label cases, as well as its stability and efficiency in learning.
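The core idea above can be sketched in code. The following is a minimal NumPy illustration, not the authors' implementation: a one-hidden-layer MLP (here called `WeightNet`, an illustrative name) maps each sample's loss value to a weight in (0, 1), and those weights rescale the training loss so that high-loss, likely-corrupted samples contribute less. The choice of per-sample loss as the meta-learner's input, the hidden width, and the sigmoid output are assumptions of this sketch; the meta-update on unbiased meta-data is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)


class WeightNet:
    """Sketch of a one-hidden-layer MLP meta-learner:
    per-sample loss -> sample weight in (0, 1).
    All sizes and initializations are illustrative assumptions."""

    def __init__(self, hidden_dim=16):
        self.W1 = rng.normal(0.0, 0.1, (1, hidden_dim))
        self.b1 = np.zeros(hidden_dim)
        self.W2 = rng.normal(0.0, 0.1, (hidden_dim, 1))
        self.b2 = np.zeros(1)

    def __call__(self, losses):
        # losses: shape (n,); returns weights: shape (n,)
        h = np.maximum(losses[:, None] @ self.W1 + self.b1, 0.0)  # ReLU hidden layer
        z = h @ self.W2 + self.b2
        return 1.0 / (1.0 + np.exp(-z[:, 0]))  # sigmoid keeps weights in (0, 1)


weight_net = WeightNet()

# Hypothetical per-sample losses; a large loss often signals a corrupted label.
per_sample_losses = np.array([0.1, 2.5, 0.3, 5.0])
weights = weight_net(per_sample_losses)

# Reweighted training objective: corrupted samples are down-weighted,
# so their gradients tug the classifier less in the wrong direction.
weighted_loss = np.sum(weights * per_sample_losses) / np.sum(weights)
```

In the full method, the classifier takes a gradient step on this weighted loss, the meta-learner's parameters are then updated so that the resulting classifier performs well on the small unbiased meta-data, and the two updates alternate throughout training.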
