Reevaluating Loss Functions: Enhancing Robustness to Label Noise in Deep Learning Models

06/08/2023
by Max Staats, et al.

Large annotated datasets inevitably contain incorrect labels, which poses a major challenge for training deep neural networks, since they easily fit these labels. Good generalization can only be achieved with a model that is robust to the noise rather than fitting it. A simple yet effective way to obtain such robustness is to use a noise-robust loss function. However, many such loss functions have been proposed, they often come with hyperparameters, and they may learn more slowly than the widely used but noise-sensitive Cross Entropy loss. Through heuristic considerations and extensive numerical experiments, we study in which situations the proposed loss functions are applicable and give suggestions on how to choose an appropriate loss. Additionally, we propose a novel technique to enhance learning with bounded loss functions: the inclusion of an output bias, i.e., a slight increase in the neuron pre-activation corresponding to the correct label. Surprisingly, we find that this not only significantly improves the learning of bounded losses, but also allows the Mean Absolute Error (MAE) loss to outperform the Cross Entropy loss on the CIFAR-100 dataset, even in the absence of additional label noise. This suggests that training with a bounded loss function can be advantageous even when label noise is minimal. To further strengthen our analysis of the learning behavior of different loss functions, we additionally design and test a novel loss function, which we denote Bounded Cross Entropy.
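The output-bias technique is only described at a high level in the abstract. As one plausible reading, the sketch below adds a small constant to the logit of the true class before computing a bounded MAE loss over softmax outputs; the function name, the default bias value, and this exact MAE formulation are illustrative assumptions, not the authors' precise recipe.

```python
import torch
import torch.nn.functional as F

def mae_loss_with_output_bias(logits: torch.Tensor,
                              targets: torch.Tensor,
                              output_bias: float = 2.0) -> torch.Tensor:
    """Bounded MAE loss with an output bias on the true-class logit.

    `output_bias` and this exact formulation are assumptions made for
    illustration; the paper only states that the pre-activation of the
    correct label is slightly increased.
    """
    num_classes = logits.size(1)
    one_hot = F.one_hot(targets, num_classes=num_classes).float()
    # Slightly raise the pre-activation (logit) of the correct class.
    probs = F.softmax(logits + output_bias * one_hot, dim=1)
    # MAE between predicted and one-hot distributions; bounded in [0, 2].
    return (probs - one_hot).abs().sum(dim=1).mean()

# Hypothetical usage on a CIFAR-100-sized output:
logits = torch.randn(8, 100, requires_grad=True)
targets = torch.randint(0, 100, (8,))
loss = mae_loss_with_output_bias(logits, targets)
loss.backward()
```

Since the shift is applied only inside the loss, inference is unchanged. One plausible motivation for such a shift is that bounded losses like MAE yield vanishing gradients for confidently wrong predictions, and raising the true-class pre-activation mitigates this.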


