Generalized Jensen-Shannon Divergence Loss for Learning with Noisy Labels

05/10/2021
by Erik Englesson, et al.

We propose two novel loss functions based on the Jensen-Shannon divergence for learning under label noise. Following the framework of Ghosh et al. (2017), we argue for their theoretical robustness to noisy labels. Furthermore, we reveal several other desirable properties by drawing informative connections to various loss functions, e.g., cross entropy, mean absolute error, generalized cross entropy, symmetric cross entropy, label smoothing, and, most importantly, consistency regularization. We conduct extensive and systematic experiments with both synthetic (CIFAR) and real-world (WebVision) label noise and demonstrate significant, consistent improvements over other loss functions. We also present several informative side experiments that highlight the different theoretical properties.
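As a companion to the abstract, here is a minimal NumPy sketch of the generalized Jensen-Shannon divergence among M distributions, assuming the standard definition GJS(p_1, ..., p_M; w) = H(sum_i w_i p_i) - sum_i w_i H(p_i), where H is Shannon entropy. This is an illustrative reconstruction, not the authors' implementation; function names are our own.

```python
import numpy as np

def entropy(p):
    # Shannon entropy along the last axis; clip avoids log(0).
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def generalized_js(dists, weights):
    """Generalized Jensen-Shannon divergence.

    dists:   array of shape (M, K), M probability distributions over K classes.
    weights: array of shape (M,), mixture weights summing to 1.
    Returns H(mixture) - weighted average of the individual entropies,
    which is zero iff all distributions are identical.
    """
    dists = np.asarray(dists, dtype=float)
    weights = np.asarray(weights, dtype=float)
    mixture = np.einsum('i,ik->k', weights, dists)
    return entropy(mixture) - np.sum(weights * entropy(dists))
```

For two maximally different one-hot distributions with equal weights, the divergence equals the entropy of the uniform mixture, ln 2; for identical distributions it is zero. A loss for noisy labels can be built by applying this divergence between the (one-hot) label distribution and one or more model predictions.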


Related research:

- 06/08/2023: Reevaluating Loss Functions: Enhancing Robustness to Label Noise in Deep Learning Models. Large annotated datasets inevitably contain incorrect labels, which pose...
- 05/20/2018: Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels. Deep neural networks (DNNs) have achieved tremendous success in a variet...
- 06/14/2021: Unified Interpretation of Softmax Cross-Entropy and Negative Sampling: With Case Study for Knowledge Graph Embedding. In knowledge graph embedding, the theoretical relationship between the s...
- 10/13/2021: Boosting the Certified Robustness of L-infinity Distance Nets. Recently, Zhang et al. (2021) developed a new neural network architectur...
- 03/08/2023: Unimodal Distributions for Ordinal Regression. In many real-world prediction tasks, class labels contain information ab...
- 07/07/2021: On Codomain Separability and Label Inference from (Noisy) Loss Functions. Machine learning classifiers rely on loss functions for performance eval...
- 03/22/2022: A Quantitative Comparison between Shannon and Tsallis Havrda Charvat Entropies Applied to Cancer Outcome Prediction. In this paper, we propose to quantitatively compare loss functions based...
