
Improving MAE against CCE under Label Noise

03/28/2019
by Xinshao Wang, et al.

Label noise is inherent in many deep learning tasks when the training set becomes large. A typical approach to tackling noisy labels is to use robust loss functions. Categorical cross entropy (CCE) is a successful loss function in many applications, but it is notorious for easily fitting samples with corrupted labels. In contrast, mean absolute error (MAE) is theoretically noise-tolerant, yet it generally performs much worse than CCE in practice. This work makes three main points. First, to explain why MAE generally performs much worse than CCE, we introduce a new, fundamental understanding of both losses by exposing their intrinsic sample weighting schemes from the perspective of each sample's gradient magnitude with respect to the logit vector. We find that MAE's differentiation degree over training examples is too small, so informative examples cannot contribute enough against non-informative ones during training; as a result, MAE generally underfits the training data when the noise rate is high. Second, based on this finding, we propose an improved MAE (IMAE), which inherits MAE's noise-robustness while making the differentiation degree over training data points controllable, thereby addressing MAE's underfitting problem. Third, we empirically evaluate IMAE against CCE and MAE with extensive experiments on image classification under synthetic corrupted labels and video retrieval under real noisy labels.
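The weighting view above can be made concrete with a short numerical sketch. For a softmax output p and labeled class y, the L1 gradient magnitude with respect to the logit vector works out to 2(1 - p_y) for CCE and 4 * p_y * (1 - p_y) for MAE, so CCE weights confidently misfit (likely mislabeled) samples most heavily, while MAE down-weights both extremes and varies little in between. The exponential rescaling shown for IMAE, with a temperature-style hyperparameter T, is an illustrative assumption in the spirit of the paper, not its exact formula.

import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gradient_magnitudes(z, y):
    """L1 norm of dLoss/dLogits for one example with label y.

    CCE: dL/dz = p - onehot(y),           so ||.||_1 = 2 * (1 - p_y).
    MAE: L = ||p - onehot(y)||_1, giving
         dL/dz = 2 * p_y * (p - onehot(y)), so ||.||_1 = 4 * p_y * (1 - p_y).
    """
    p_y = softmax(z)[y]
    return 2.0 * (1.0 - p_y), 4.0 * p_y * (1.0 - p_y)

def imae_weight(z, y, T=8.0):
    """Hypothetical IMAE-style weight: an exponential transform of MAE's
    intrinsic weight p_y * (1 - p_y). T controls how strongly training
    examples are differentiated (a sketch, not the paper's exact formula)."""
    p_y = softmax(z)[y]
    return np.exp(T * p_y * (1.0 - p_y))

# Three examples, all labeled class 0, at different fit levels:
logits = [np.array([4.0, 0.0, 0.0]),   # confident & correct:  p_y ~ 0.97
          np.array([1.0, 0.5, 0.5]),   # uncertain:            p_y ~ 0.45
          np.array([-4.0, 2.0, 2.0])]  # confidently misfit:   p_y ~ 0.001

for z in logits:
    cce, mae = gradient_magnitudes(z, y=0)
    print(f"p_y={softmax(z)[0]:.3f}  |grad CCE|={cce:.3f}  "
          f"|grad MAE|={mae:.3f}  IMAE weight={imae_weight(z, 0):.3f}")

For these three examples, CCE's magnitude grows monotonically as p_y falls (about 0.07, 1.10, 2.00), so the confidently misfit, likely mislabeled example dominates the gradient. MAE's magnitude peaks near p_y = 0.5 but spans a narrow range, which is exactly the small differentiation degree the abstract describes. The exponential transform keeps both extremes at low weight relative to uncertain, informative examples, with T controlling how wide that spread is.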


Related Research

12/27/2017
Robust Loss Functions under Label Noise for Deep Neural Networks
In many applications of classifier learning, training data suffers from ...

06/08/2023
Reevaluating Loss Functions: Enhancing Robustness to Label Noise in Deep Learning Models
Large annotated datasets inevitably contain incorrect labels, which pose...

05/27/2019
Emphasis Regularisation by Gradient Rescaling for Training Deep Neural Networks with Noisy Labels
It is fundamental and challenging to train robust and accurate Deep Neur...

04/19/2021
Do We Really Need Gold Samples for Sample Weighting Under Label Noise?
Learning with label noise has gained significant traction recently due ...

11/22/2019
Instance Cross Entropy for Deep Metric Learning
Loss functions play a crucial role in deep metric learning, thus a variet...

06/17/2021
Towards Understanding Deep Learning from Noisy Labels with Small-Loss Criterion
Deep neural networks need large amounts of labeled data to achieve good ...

04/23/2020
Doubly-stochastic mining for heterogeneous retrieval
Modern retrieval problems are characterised by training sets with potent...

Code Repositories

Improving-Mean-Absolute-Error-against-CCE

Mean Absolute Error Does Not Treat Examples Equally and Gradient Magnitude’s Variance Matters



DerivativeManipulation

In the context of Deep Learning: What is the right way to conduct example weighting? How do you understand loss functions and so-called theorems on them?

