Improve Noise Tolerance of Robust Loss via Noise-Awareness

by Kehui Ding, et al.

Robust loss minimization is an important strategy for handling the problem of learning with noisy labels. Current robust losses, however, inevitably involve hyperparameters that must be tuned for each noisy-label dataset, either manually or heuristically through cross-validation, which makes them hard to apply generally in practice. Moreover, existing robust loss methods usually assume that all training samples share common, instance-independent hyperparameters. This limits their ability to distinguish the individual noise properties of different samples, so they can hardly adapt to different noise structures. To address these issues, we propose to equip robust losses with instance-dependent hyperparameters to improve their noise tolerance, with theoretical guarantees. To set such instance-dependent hyperparameters, we propose a meta-learning method that adaptively learns a hyperparameter prediction function, called the Noise-Aware-Robust-Loss-Adjuster (NARL-Adjuster). Specifically, through mutual amelioration between the hyperparameter prediction function and the classifier parameters, both can be simultaneously refined and coordinated to attain solutions with good generalization capability. We integrate four kinds of SOTA robust losses with our algorithm, and experiments substantiate the general applicability and effectiveness of the proposed method in both noise tolerance and generalization performance. Meanwhile, its explicit parameterized structure makes the meta-learned prediction function readily transferable and plug-and-play on unseen noisy-label datasets. Specifically, we transfer our meta-learned NARL-Adjuster to unseen tasks, including several real noisy datasets, and achieve better performance than conventional hyperparameter tuning strategies.
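To make the idea of a robust loss with an instance-dependent hyperparameter concrete, here is a minimal NumPy sketch. It uses the generalized cross-entropy (GCE) loss, whose hyperparameter q interpolates between cross-entropy (q → 0) and MAE (q = 1), together with a hypothetical tiny predictor that maps per-sample features to a per-sample q. The predictor's inputs and architecture are an illustrative assumption, not the actual NARL-Adjuster design from the paper:

```python
import numpy as np

def gce_loss(probs, labels, q):
    """Generalized cross-entropy: (1 - p_y^q) / q.

    q -> 0 recovers cross-entropy; q = 1 gives MAE-like behavior.
    `q` may be a scalar or a per-sample array (instance-dependent).
    """
    p_y = probs[np.arange(len(labels)), labels]
    return (1.0 - p_y ** q) / q

def predict_q(features, w, b):
    """Hypothetical hyperparameter predictor (stand-in for NARL-Adjuster).

    A single linear layer plus sigmoid, so each sample gets its own
    q in (0, 1); the meta-learning loop that trains (w, b) is omitted.
    """
    z = features @ w + b
    return 1.0 / (1.0 + np.exp(-z))

# Toy usage: two samples, each receiving its own q.
probs = np.array([[0.7, 0.3],
                  [0.2, 0.8]])
labels = np.array([0, 1])
features = np.array([[0.5], [-0.5]])  # e.g. per-sample loss statistics
q = predict_q(features, np.array([1.0]), 0.0)
losses = gce_loss(probs, labels, q)
```

In the full method, (w, b) would be updated on a meta level (e.g. against a small clean validation set) while the classifier is trained with the resulting per-sample losses, so the two are coordinated rather than tuned by grid search.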

