Learning with Noisy Labels via Sparse Regularization

07/31/2021
by Xiong Zhou, et al.

Learning with noisy labels is an important and challenging task for training accurate deep neural networks. Commonly used loss functions such as Cross Entropy (CE) suffer from severe overfitting to noisy labels. Robust loss functions satisfying the symmetric condition were tailored to remedy this problem, but they in turn suffer from underfitting. In this paper, we theoretically prove that any loss can be made robust to noisy labels by restricting the network output to the set of permutations of a fixed vector. When the fixed vector is one-hot, this amounts to constraining the output to be one-hot, which, however, yields zero gradients almost everywhere and thus makes gradient-based optimization difficult. We therefore introduce a sparse regularization strategy that approximates the one-hot constraint: it combines an output-sharpening operation, which enforces a sharp (low-entropy) output distribution, with an ℓ_p-norm (p ≤ 1) regularizer that promotes sparse network outputs. This simple approach guarantees the robustness of arbitrary loss functions without hindering their fitting ability. Experimental results demonstrate that our method significantly improves the performance of commonly used loss functions in the presence of noisy labels and class imbalance, and outperforms state-of-the-art methods. The code is available at https://github.com/hitcszx/lnl_sr.
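
The abstract's recipe, sharpening the output distribution with a low softmax temperature and adding an ℓ_p-norm (p ≤ 1) penalty on the output, can be sketched as a wrapper around any base loss. The snippet below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation (see the linked repository for that); the class name and the hyperparameters tau, p, and lam are illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseRegularizedLoss(nn.Module):
    """Any base loss + output sharpening + l_p (p <= 1) sparsity penalty.

    Illustrative sketch only; tau, p, and lam are assumed names.
    Reference implementation: https://github.com/hitcszx/lnl_sr
    """

    def __init__(self, base_loss=None, tau=0.5, p=0.1, lam=1.0):
        super().__init__()
        self.base_loss = base_loss if base_loss is not None else nn.CrossEntropyLoss()
        self.tau = tau  # temperature < 1 sharpens the output distribution
        self.p = p      # exponent of the l_p penalty, p <= 1
        self.lam = lam  # weight of the sparsity term

    def forward(self, logits, targets):
        # Sharpening: dividing logits by a small temperature pushes the
        # softmax output toward a one-hot vector.
        probs = F.softmax(logits / self.tau, dim=1)
        # l_p penalty: for 0 < p <= 1, sum_k probs_k^p over the simplex is
        # minimized (value 1) exactly at one-hot outputs and grows toward
        # K^(1 - p) at the uniform distribution, so it promotes sparsity.
        sparsity = probs.clamp_min(1e-8).pow(self.p).sum(dim=1).mean()
        # Base loss on the sharpened logits plus the weighted sparsity term.
        return self.base_loss(logits / self.tau, targets) + self.lam * sparsity


# Usage: criterion = SparseRegularizedLoss(); loss = criterion(model(x), y)
```

In this sketch the one-hot constraint is never imposed directly; the temperature and the ℓ_p term jointly push the output toward the one-hot set while keeping the loss differentiable.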

Related research

- Asymmetric Loss Functions for Learning with Noisy Labels (06/06/2021): Robust loss functions are essential for training deep neural networks wi...
- Logit Clipping for Robust Learning against Label Noise (12/08/2022): In the presence of noisy labels, designing robust loss functions is crit...
- Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels (05/20/2018): Deep neural networks (DNNs) have achieved tremendous success in a variet...
- Symmetric Cross Entropy for Robust Learning with Noisy Labels (08/16/2019): Training accurate deep neural networks (DNNs) in the presence of noisy l...
- Noise-Robust Bidirectional Learning with Dynamic Sample Reweighting (09/03/2022): Deep neural networks trained with standard cross-entropy loss are more p...
- Robust Deep Graph Based Learning for Binary Classification (12/06/2019): Convolutional neural network (CNN)-based feature learning has become sta...
- The Devil is in the Margin: Margin-based Label Smoothing for Network Calibration (11/30/2021): In spite of the dominant performances of deep neural networks, recent wo...
