L_DMI: An Information-theoretic Noise-robust Loss Function

09/08/2019
by Yilun Xu, et al.

Accurately annotating a large-scale dataset is notoriously expensive in both time and money. Acquiring a dataset with low-quality annotations can be much cheaper, but using such a dataset without special treatment often badly damages the performance of trained models. Various methods have been proposed for learning with noisy labels; however, they handle only limited kinds of noise patterns, require auxiliary information (e.g., the noise transition matrix), or lack theoretical justification. In this paper, we propose a novel information-theoretic loss function, L_DMI, for training deep neural networks that are robust to label noise. The core of L_DMI is a generalized version of mutual information, termed Determinant-based Mutual Information (DMI), which is not only information-monotone but also relatively invariant. To the best of our knowledge, L_DMI is the first loss function that is provably insensitive to both noise patterns and noise amounts, and it can be applied straightforwardly to any existing classification neural network without auxiliary information. Beyond the theoretical justification, we also show empirically that L_DMI outperforms all other counterparts on classification tasks on the Fashion-MNIST, CIFAR-10, and Dogs vs. Cats datasets under a variety of synthesized noise patterns and noise amounts, as well as on the real-world dataset Clothing1M. Code is available at https://github.com/Newbeeer/L_DMI
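Concretely, DMI of the classifier output and the labels is the absolute determinant of their joint-distribution matrix, and L_DMI is its negative logarithm. The PyTorch sketch below is a minimal reading of that definition, not the authors' exact implementation (see the linked repository for that); the function name dmi_loss, the num_classes parameter, the assumption of 0-indexed integer labels, and the 1e-3 epsilon are all illustrative choices.

import torch
import torch.nn.functional as F

def dmi_loss(logits: torch.Tensor, target: torch.Tensor, num_classes: int) -> torch.Tensor:
    """L_DMI = -log |det(Q)|, where Q is the batch estimate of the joint
    distribution matrix between one-hot labels and predicted class probabilities."""
    probs = F.softmax(logits, dim=1)                   # (N, C) predicted class distribution
    y_onehot = F.one_hot(target, num_classes).float()  # (N, C) one-hot labels
    q = y_onehot.t() @ probs / target.size(0)          # (C, C) empirical joint matrix
    # A small epsilon keeps the log finite when det(Q) is near zero.
    return -torch.log(torch.abs(torch.det(q)) + 1e-3)

Note that |det(Q)| is close to zero for an untrained network, so the released code reportedly warms up with cross-entropy before switching to this loss; the batch size should also be large relative to the number of classes for the C-by-C estimate to be stable.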

Related research

09/21/2022 · Mutual Information Learned Classifiers: an Information-theoretic Viewpoint of Training Deep Learning Classification Systems
Deep learning systems have been reported to achieve state-of-the-art per...

02/19/2020 · Improving Generalization by Controlling Label-Noise Information in Neural Network Weights
In the presence of noisy or incorrect labels, neural networks have the u...

05/31/2019 · Max-MIG: an Information Theoretic Approach for Joint Learning from Crowds
Eliciting labels from crowds is a potential way to obtain large labeled ...

05/20/2018 · Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels
Deep neural networks (DNNs) have achieved tremendous success in a variet...

12/01/2022 · Mutual Information-based Generalized Category Discovery
We introduce an information-maximization approach for the Generalized Ca...

04/19/2020 · A Committee of Convolutional Neural Networks for Image Classification in the Concurrent Presence of Feature and Label Noise
Image classification has become a ubiquitous task. Models trained on goo...

09/13/2023 · Video Infringement Detection via Feature Disentanglement and Mutual Information Maximization
The self-media era provides us tremendous high quality videos. Unfortuna...
