
Taming the Cross Entropy Loss

by Manuel Martinez, et al.

We present the Tamed Cross Entropy (TCE) loss function, a robust derivative of the standard Cross Entropy (CE) loss used in deep learning for classification tasks. Unlike other robust losses, however, the TCE loss is designed to exhibit the same training properties as the CE loss in noiseless scenarios. The TCE loss therefore requires no modification of the training regime compared to the CE loss and, consequently, can be applied in all applications where the CE loss is currently used. We evaluate the TCE loss using the ResNet architecture on four image datasets that we artificially contaminated with various levels of label noise. The TCE loss outperforms the CE loss in every tested scenario.
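The abstract does not reproduce the TCE formula itself, but the two ingredients it refers to, the standard CE loss and artificial label-noise contamination, can be sketched as background (a minimal NumPy illustration, not the authors' implementation; the symmetric-noise model and the `noise_rate` parameter are assumptions for the sketch):

```python
import numpy as np

def cross_entropy(probs, labels):
    # Standard CE loss: mean negative log-likelihood of the true class.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

def contaminate(labels, num_classes, noise_rate, rng):
    # Flip a fraction `noise_rate` of labels to a uniformly random class
    # (symmetric label noise; one common way to contaminate a dataset).
    labels = labels.copy()
    flip = rng.random(len(labels)) < noise_rate
    labels[flip] = rng.integers(0, num_classes, flip.sum())
    return labels

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)
noisy = contaminate(labels, num_classes=10, noise_rate=0.3, rng=rng)
```

Under such contamination, a fraction of the training targets is simply wrong, which is exactly the regime in which a robust loss is expected to help while still matching CE when `noise_rate` is zero.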


Neural Collapse with Cross-Entropy Loss

We consider the variational problem of cross-entropy loss with n feature...

Smooth Loss Functions for Deep Top-k Classification

The top-k error is a common measure of performance in machine learning a...

Loss Functions for Classification using Structured Entropy

Cross-entropy loss is the standard metric used to train classification m...

OWAdapt: An adaptive loss function for deep learning using OWA operators

In this paper, we propose a fuzzy adaptive loss function for enhancing d...

Role of Orthogonality Constraints in Improving Properties of Deep Networks for Image Classification

Standard deep learning models that employ the categorical cross-entropy ...

Deep Learning on Small Datasets without Pre-Training using Cosine Loss

Two things seem to be indisputable in the contemporary deep learning dis...