Leveraging inductive bias of neural networks for learning without explicit human annotations

10/20/2019
by   Fatih Furkan Yilmaz, et al.

Classification problems today are typically solved in three steps: first, collect examples along with candidate labels; second, obtain clean labels from human workers; and third, train a large, overparameterized deep neural network on the cleanly labeled examples. The second, labeling step is often the most expensive, since it requires manually reviewing every example. In this paper we skip the labeling step entirely: we propose to train the deep neural network directly on the noisy candidate labels and to early stop the training to avoid overfitting. This procedure exploits an intriguing property of large overparameterized neural networks: although they are capable of perfectly fitting the noisy data, gradient descent fits the clean labels much faster than the noisy ones, so early stopping resembles training on the clean labels alone. Our results show that early stopping the training of standard deep networks such as ResNet-18 on part of the Tiny Images dataset, which involves no human-labeled data and in which only about half of the labels are correct, yields significantly higher test performance than training on the clean CIFAR-10 training set, a labeled version of the Tiny Images dataset, for the same classification problem. In addition, our results show that the noise generated through the label-collection process is not nearly as adversarial for learning as the noise generated by randomly flipping labels, which is the noise model most prevalent in works demonstrating the noise robustness of neural networks.
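The early-stopping idea above can be illustrated with a minimal toy sketch: a logistic-regression "network" trained by gradient descent on synthetic 2-D data in which 40% of the training labels have been flipped. Note that this is an assumption-laden stand-in, not the paper's procedure: the paper trains ResNet-18 on image data, and the small clean validation set used here as a stopping criterion is purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian blobs (hypothetical stand-in for images).
n = 500
X = np.vstack([rng.normal(-2.0, 1.0, size=(n, 2)),
               rng.normal(+2.0, 1.0, size=(n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Corrupt ~40% of the training labels, mimicking noisy candidate labels.
flip = rng.random(2 * n) < 0.4
y_noisy = np.where(flip, 1.0 - y, y)

# Small clean validation set, used here only for the stopping criterion
# (an illustrative assumption; the paper does not rely on clean labels).
Xv = np.vstack([rng.normal(-2.0, 1.0, size=(100, 2)),
                rng.normal(+2.0, 1.0, size=(100, 2))])
yv = np.concatenate([np.zeros(100), np.ones(100)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.zeros(2), 0.0
lr, patience = 0.1, 5
best_acc, best_params, bad_epochs = 0.0, (w, b), 0

for epoch in range(200):
    # One full-batch gradient step on the *noisy* labels.
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y_noisy)) / len(y_noisy)
    b -= lr * np.mean(p - y_noisy)

    # Early stopping: keep the parameters with the best validation accuracy
    # and stop once accuracy has not improved for `patience` epochs.
    val_acc = np.mean((sigmoid(Xv @ w + b) > 0.5) == yv)
    if val_acc > best_acc:
        best_acc, best_params, bad_epochs = val_acc, (w.copy(), b), 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break

w, b = best_params
print(f"early-stopped validation accuracy: {best_acc:.2f}")
```

Despite nearly half the training labels being wrong, the early-stopped model recovers a good decision boundary on this toy problem, because gradient descent first fits the (majority) clean signal before memorizing the flipped labels.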


Related research

03/27/2019
Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks
Modern neural networks are typically trained in an over-parameterized re...

06/05/2023
On Emergence of Clean-Priority Learning in Early Stopped Neural Networks
When random label noise is added to a training dataset, the prediction e...

10/02/2022
The Dynamic of Consensus in Deep Networks and the Identification of Noisy Labels
Deep neural networks have incredible capacity and expressibility, and ca...

12/04/2019
Epoch-wise label attacks for robustness against label noise
The current accessibility to large medical datasets for training convolu...

08/26/2023
Late Stopping: Avoiding Confidently Learning from Mislabeled Examples
Sample selection is a prevalent method in learning with noisy labels, wh...

03/21/2023
Fighting over-fitting with quantization for learning deep neural networks on noisy labels
The rising performance of deep neural networks is often empirically attr...

06/30/2020
Early-Learning Regularization Prevents Memorization of Noisy Labels
We propose a novel framework to perform classification via deep learning...
