Learning Deep Networks from Noisy Labels with Dropout Regularization

05/09/2017
by   Ishan Jindal, et al.

Large datasets often have unreliable labels, such as those obtained from Amazon's Mechanical Turk or social media platforms, and classifiers trained on mislabeled datasets often exhibit poor performance. We present a simple, effective technique for accounting for label noise when training deep neural networks. We augment a standard deep network with a softmax layer that models the label noise statistics, and then train the deep network and noise model jointly via end-to-end stochastic gradient descent on the (possibly mislabeled) dataset. Because the augmented model is overdetermined, we apply dropout regularization to the weights of the noise model during training to encourage the learning of a non-trivial noise model. Numerical experiments on noisy versions of the CIFAR-10 and MNIST datasets show that the proposed dropout technique outperforms state-of-the-art methods.
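To make the described architecture concrete, here is a minimal PyTorch sketch of the idea: the base network's softmax output is multiplied by a learned label-noise transition matrix whose weights are passed through dropout during training. The class name NoisyLabelModel, the scaled-identity initialization, the row-softmax parameterization of the transition matrix, and the p_drop value are all assumptions of this sketch, not details confirmed by the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLabelModel(nn.Module):
    """Base classifier plus a learned label-noise layer (sketch).

    The noise layer is a square matrix, row-softmaxed into a transition
    matrix P(observed label j | true label i). Dropout is applied to the
    noise weights themselves during training, per the abstract.
    """

    def __init__(self, base_net: nn.Module, num_classes: int, p_drop: float = 0.5):
        super().__init__()
        self.base_net = base_net  # any network emitting num_classes logits
        # Initialize as a scaled identity so the row-softmax starts near
        # the identity, i.e. a "labels are clean" assumption (sketch choice).
        self.noise_weight = nn.Parameter(torch.eye(num_classes) * 4.0)
        self.p_drop = p_drop

    def forward(self, x):
        # Predicted distribution over the *true* labels.
        clean_probs = F.softmax(self.base_net(x), dim=1)
        # Dropout on the noise model's weights (not its activations),
        # which discourages collapsing to a trivial noise model.
        w = F.dropout(self.noise_weight, p=self.p_drop, training=self.training)
        transition = F.softmax(w, dim=1)        # each row sums to 1
        noisy_probs = clean_probs @ transition  # distribution over observed labels
        return noisy_probs, clean_probs
```

In a setup like this, training would apply a negative log-likelihood loss to noisy_probs against the observed (possibly corrupted) labels, e.g. F.nll_loss(torch.log(noisy_probs + 1e-8), targets), while at test time the noise layer is discarded and predictions come from clean_probs.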

Related research:

02/19/2020 · Improving Generalization by Controlling Label-Noise Information in Neural Network Weights
In the presence of noisy or incorrect labels, neural networks have the u...

12/20/2014 · Training Deep Neural Networks on Noisy Labels with Bootstrapping
Current state-of-the-art deep learning systems for visual object recogni...

03/18/2019 · An Effective Label Noise Model for DNN Text Classification
Because large, human-annotated datasets suffer from labeling errors, it ...

03/22/2021 · On the Robustness of Monte Carlo Dropout Trained with Noisy Labels
The memorization effect of deep learning hinders its performance to effe...

04/28/2021 · Boosting Co-teaching with Compression Regularization for Label Noise
In this paper, we study the problem of learning image classification mod...

06/14/2020 · Proximal Mapping for Deep Regularization
Underpinning the success of deep learning is effective regularizations t...

12/04/2017 · Data Dropout in Arbitrary Basis for Deep Network Regularization
An important problem in training deep networks with high capacity is to ...
