Deep neural networks have become an essential tool for classification tasks krizhevsky2012imagenet ; he2016deep ; girshick2014rich . These models tend to be trained on large curated datasets such as CIFAR-10 krizhevsky2009learning
or ImageNet deng2009imagenet , where the vast majority of labels have been manually verified. Unfortunately, in many applications such datasets are not available, due to the cost or difficulty of manual labeling (e.g. guan2018said ; pechenizkiy2006class ; liu20a ; ait2010high ). However, datasets with lower quality annotations, obtained for instance from online queries blum2003noise or crowdsourcing yan2014learning ; yu2018learning , may be available. Such annotations inevitably contain numerous mistakes or label noise. It is therefore of great importance to develop methodology that is robust to the presence of noisy annotations.
When trained on noisy labels, deep neural networks have been observed to first fit the training data with clean labels during an early learning phase, before eventually memorizing the examples with false labels arpit2017closer ; zhang2016understanding . In this work we study this phenomenon and introduce a novel framework that exploits it to achieve robustness to noisy labels. Our main contributions are the following:
In Section 3 we establish that early learning and memorization are fundamental phenomena in high dimensions, proving that they occur even for simple linear generative models.
In Section 4
we propose a technique that utilizes the early-learning phenomenon to counteract the influence of the noisy labels on the gradient of the cross entropy loss. This is achieved through a regularization term that incorporates target probabilities estimated from the model outputs using several semi-supervised learning techniques.
In Section 6 we show that the proposed methodology achieves results comparable to the state of the art on several standard benchmarks and real-world datasets. We also perform a systematic ablation study to evaluate the different alternatives to compute the target probabilities, and the effect of incorporating mixup data augmentation zhang2017mixup .
[Figure 1: model behavior during training on noisy labels, shown separately for examples with clean labels and examples with wrong labels.]
2 Related Work
In this section we describe existing techniques to train deep-learning classification models using data with noisy annotations. We focus our discussion on methods that do not assume the availability of small subsets of training data with clean labels (as opposed, for example, to Hendrycks2018UsingTD ; Ren2018LearningTR ; veit2017learning ). We also assume that the correct classes are known (as opposed to Wang2018IterativeLW ).
Robust-loss methods propose cost functions specifically designed to be robust in the presence of noisy labels. These include Mean Absolute Error (MAE) ghosh2017robust , Generalized Cross Entropy zhang2018generalized , which can be interpreted as a generalization of MAE, Symmetric Cross Entropy Wang2019SymmetricCE , which adds a reverse cross-entropy term to the usual cross-entropy loss, and Xu2019L_DMIAN , which is based on information-theoretic considerations. Loss-correction
methods explicitly correct the loss function to take into account the noise distribution, represented by a transition matrix of mislabeling probabilities patrini2017making ; Goldberger2017TrainingDN ; xia2019anchor .
Robust-loss and loss-correction techniques do not exploit the early-learning phenomenon mentioned in the introduction. This phenomenon was described in arpit2017closer (see also zhang2016understanding ), and analyzed theoretically in li2019gradient . Our theoretical approach differs from theirs in two respects. First, Ref. li2019gradient focuses on a least-squares regression task, whereas we focus on the noisy-label problem in classification. Second, and more importantly, we prove that early learning and memorization occur even in a linear model.
Early learning can be exploited through sample selection, where the model output during the early-learning stage is used to predict which examples are mislabeled and which have been labeled correctly. The prediction is based on the observation that mislabeled examples tend to have higher loss values. Co-teaching Han2018CoteachingRT ; Yu2019HowDD performs sample selection by using two networks, each trained on a subset of examples that have a small training loss for the other network (see Jiang2018MentorNetLD ; malach2017decoupling for related approaches). A limitation of this approach is that the examples that are selected tend to be easier, in the sense that the model output during early learning approaches the true label. As a result, the gradient of the cross-entropy with respect to these examples is small, which slows down learning chang2017active . In addition, the subset of selected examples may not be rich enough to generalize effectively to held-out data song2019selfie .
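The small-loss selection criterion underlying these methods can be sketched in a few lines (a minimal illustration, not the implementation of any of the cited methods; the `keep_ratio` parameter is a stand-in for an estimate of the clean fraction):

```python
import numpy as np

def small_loss_selection(losses, keep_ratio):
    """Select the indices of the examples with the smallest loss, which are
    presumed clean during the early-learning stage (Co-teaching-style).
    keep_ratio would typically be derived from an estimated noise level."""
    k = int(keep_ratio * len(losses))
    return np.sort(np.argsort(losses)[:k])

# Toy illustration: mislabeled examples tend to incur larger losses.
losses = np.array([0.1, 2.3, 0.2, 1.9, 0.15, 0.05])
print(small_loss_selection(losses, 0.5))  # indices of the three smallest losses
```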
An alternative to sample selection is label correction. During the early-learning stage the model predictions are accurate on a subset of the mislabeled examples (see the top row of Figure 1). This suggests correcting the corresponding labels. This can be achieved by computing new labels equal to the probabilities estimated by the model (known as soft labels
) or to one-hot vectors representing the model predictions (hard labels) Tanaka2018JointOF ; yi2019probabilistic . Another option is to set the new labels to equal a convex combination of the noisy labels and the soft or hard labels Reed2015TrainingDN . Label correction is usually combined with some form of iterative sample selection Arazo2019unsup ; Ma2018DimensionalityDrivenLW ; song2019selfie ; li2020dividemix or with additional regularization terms Tanaka2018JointOF . SELFIE song2019selfie uses label replacement to correct a subset of the labels selected by considering past model outputs. Ref. Ma2018DimensionalityDrivenLW computes a different convex combination with hard labels for each example based on a measure of model dimensionality. Ref. Arazo2019unsup fits a two-component mixture model to carry out sample selection, and then corrects labels via convex combination as in Reed2015TrainingDN . They also apply mixup data augmentation zhang2017mixup to enhance performance. In a similar spirit, DivideMix li2020dividemix uses two networks to perform sample selection via a two-component mixture model, and applies the semi-supervised learning technique MixMatch berthelot2019mixmatch .
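The convex-combination flavor of label correction can be sketched as follows (a minimal illustration; the parameter name `alpha`, its value, and the example inputs are ours, not taken from the cited works):

```python
import numpy as np

def corrected_targets(noisy_onehot, probs, alpha=0.8, hard=False):
    """Convex combination of the noisy labels with the model's soft predictions
    (soft labels) or with its one-hot predictions (hard labels)."""
    if hard:
        preds = np.eye(probs.shape[1])[probs.argmax(axis=1)]
    else:
        preds = probs
    return alpha * noisy_onehot + (1.0 - alpha) * preds

y_noisy = np.array([[1.0, 0.0, 0.0]])       # observed (possibly wrong) label
p_model = np.array([[0.1, 0.8, 0.1]])       # model believes class 1
print(corrected_targets(y_noisy, p_model))  # probability mass shifted toward class 1
```

Because both inputs are probability vectors, the corrected target remains a valid distribution.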
Our proposed approach is somewhat related in spirit to label correction. We compute a probability estimate that is analogous to the soft labels mentioned above, and then exploit it to avoid memorization. However, it is also fundamentally different: instead of modifying the labels, we propose a novel regularization term explicitly designed to correct the gradient of the cross-entropy cost function. This yields strong empirical performance, without needing to incorporate sample selection.
3 Early learning as a general phenomenon of high-dimensional classification
As the top row of Figure 1 makes clear, deep neural networks trained with noisy labels make progress during the early learning stage before memorization occurs. In this section, we show that far from being a peculiar feature of deep neural networks, this phenomenon is intrinsic to high-dimensional classification tasks, even in the simplest setting. Our theoretical analysis is also the inspiration for the early-learning regularization procedure we propose in Section 4.
We exhibit a simple linear model with noisy labels which evinces the same behavior as described above: the early learning
stage, when the classifier learns to correctly predict the true labels, even on noisy examples, and the memorization stage, when the classifier begins to make incorrect predictions because it memorizes the wrong labels. This is illustrated in Figure A.1, which demonstrates empirically that the linear model has the same qualitative behavior as the deep-learning model in Figure 1.
We show that this behavior arises because, early in training, the gradients corresponding to the correctly labeled examples dominate the dynamics—leading to early progress towards the true optimum—but that the gradients corresponding to wrong labels soon become dominant—at which point the classifier simply learns to fit the noisy labels.
We consider data drawn from a mixture of two Gaussians in $\mathbb{R}^p$. The (clean) dataset consists of $n$ i.i.d. copies of $(x, y)$. The label $y \in \{e_1, e_2\}$ is a one-hot vector representing the cluster assignment, and
$$x \mid y \sim \begin{cases} \mathcal{N}(+v, \sigma^2 I_p) & \text{if } y = e_1, \\ \mathcal{N}(-v, \sigma^2 I_p) & \text{if } y = e_2, \end{cases}$$
where $v$ is an arbitrary unit vector in $\mathbb{R}^p$ and $\sigma$ is a small constant. The optimal separator between the two classes is a hyperplane through the origin perpendicular to $v$.
We only observe a dataset $\{(x_i, \tilde y_i)\}_{i=1}^n$ with noisy labels:
$$\tilde y_i = \begin{cases} y_i & \text{with probability } 1 - \Delta, \\ r_i & \text{with probability } \Delta, \end{cases} \qquad (1)$$
where the $r_i$ are i.i.d. random one-hot vectors which take values $e_1$ and $e_2$ with equal probability.
We train a linear classifier $x \mapsto \mathcal{S}(\Theta x)$, $\Theta \in \mathbb{R}^{2 \times p}$, by gradient descent on the cross entropy
$$\mathcal{L}(\Theta) = -\frac{1}{n}\sum_{i=1}^{n} \langle \tilde y_i, \log \mathcal{S}(\Theta x_i) \rangle,$$
where $\mathcal{S}$ is the softmax function. In order to separate the true classes well (and not overfit to the noisy labels), the rows of $\Theta$ should be correlated with the separator $v$.
The gradient of this loss with respect to the model parameters $\Theta_k$ corresponding to class $k$ reads
$$\nabla_{\Theta_k} \mathcal{L}(\Theta) = \frac{1}{n}\sum_{i=1}^{n} \big( \mathcal{S}(\Theta x_i)_k - \tilde y_{i,k} \big)\, x_i.$$
Each term in the gradient therefore corresponds to a weighted sum of the examples $x_i$, where the weighting depends on the agreement between the model output $\mathcal{S}(\Theta x_i)$ and the noisy label $\tilde y_i$.
Our main theoretical result shows that this linear model possesses the properties described above. During the early-learning stage, the algorithm makes progress and the accuracy on wrongly labeled examples increases. However, during this initial stage, the relative importance of the wrongly labeled examples continues to grow; once the effect of the wrongly labeled examples begins to dominate, memorization occurs.
Theorem 1 (Informal).
If $\sigma$ is sufficiently small and $\Delta < 1/2$, then there exists a constant $T$ such that with probability $1 - o(1)$ as $n, p \to \infty$:
Early learning succeeds: Denote by $\{\Theta^t\}$ the iterates of gradient descent. For $0 \leq t \leq T$, $\Theta^t$ is well correlated with the correct separator $v$, and at $t = T$ the classifier has higher accuracy on the wrongly labeled examples than at initialization.
Gradients from correct examples vanish: Beyond iteration $T$, the magnitudes of the coefficients corresponding to examples with clean labels decrease, while the magnitudes of the coefficients for examples with wrong labels increase.
Memorization occurs: As $t \to \infty$, the classifier memorizes all noisy labels.
Due to space constraints, we defer the formal statement of Theorem 1 and its proof to the supplementary material.
The proof of Theorem 1 is based on two observations. First, while $\Theta^t$ is still not well correlated with $v$, the coefficients $\mathcal{S}(\Theta^t x_i)_k - \tilde y_{i,k}$ are similar for all $i$, so that the gradient points approximately in the average direction of the examples. Since the majority of data points are correctly labeled, this means the gradient is still well correlated with the correct direction during the early-learning stage. Second, once $\Theta^t$ becomes correlated with $v$, the gradient begins to point in directions orthogonal to the correct direction $v$; when the dimension $p$ is sufficiently large, there are enough of these orthogonal directions to allow the classifier to completely memorize the noisy labels.
This analysis suggests that in order to learn on the correct labels and avoid memorization it is necessary to (1) ensure that the contribution to the gradient from examples with clean labels remains large, and (2) neutralize the influence of the examples with wrong labels on the gradient. In Section 4 we propose a method designed to achieve this via regularization.
4.1 Gradient analysis of softmax classification from noisy labels
Performing gradient descent modifies the parameters iteratively to push the model outputs closer to the observed labels. If $c$ is the true class, so that $\tilde y_{i,c} = 1$, the contribution of the $i$th example to the gradient step for the parameters of class $c$ is aligned with $x_i$, and gradient descent moves $\Theta_c$ in the direction of $x_i$. However, if the label is noisy and $\tilde y_{i,c} = 0$, then gradient descent moves $\Theta_c$ in the opposite direction, which eventually leads to memorization, as established by Theorem 1.
We now show that for nonlinear models based on neural networks, the effect of label noise is analogous. We consider a classification problem with $C$ classes, where the training set consists of $n$ examples $\{(x_i, \tilde y_i)\}_{i=1}^n$; $x_i \in \mathbb{R}^d$ is the $i$th input and $\tilde y_i \in \{0,1\}^C$ is a one-hot label vector indicating the corresponding class. The classification model maps each input $x_i$ to a $C$-dimensional encoding $\mathcal{N}_\theta(x_i)$ using a deep neural network, and then feeds the encoding into a softmax function $\mathcal{S}$ to produce an estimate of the conditional probability of each class given $x_i$,
$$p_i := \mathcal{S}\big(\mathcal{N}_\theta(x_i)\big),$$
where $\theta$ denotes the parameters of the neural network. The gradient of the cross-entropy loss
$$\mathcal{L}_{\mathrm{CE}}(\theta) = -\frac{1}{n}\sum_{i=1}^{n} \langle \tilde y_i, \log p_i \rangle$$
with respect to $\theta$ equals
$$\nabla \mathcal{L}_{\mathrm{CE}}(\theta) = \frac{1}{n}\sum_{i=1}^{n} \nabla \mathcal{N}_\theta(x_i)^{\mathsf T} \big( p_i - \tilde y_i \big),$$
where $\nabla \mathcal{N}_\theta(x_i)$ is the Jacobian matrix of the neural-network encoding for the $i$th input with respect to $\theta$. Here we see that label noise has the same effect as in the simple linear model. If $c$ is the true class, but $\tilde y_{i,c} = 0$ due to the noise, then the contribution of the $i$th example to the $c$th entry of $p_i - \tilde y_i$ is reversed. The entry corresponding to the impostor class $c' \neq c$ is also reversed, because $\tilde y_{i,c'} = 1$. As a result, performing stochastic gradient descent eventually results in memorization, as in the linear model (see Figures 1 and A.1). Crucially, the influence of the label noise on the gradient of the cross-entropy loss is restricted to the term $p_i - \tilde y_i$ (see Figure B.1). In Section 4.2 we describe how to counteract this influence by exploiting the early-learning phenomenon.
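The sign reversal can be made concrete with a small numeric example, using the standard fact that the gradient of the cross entropy with respect to the softmax logits is the difference between the predicted probabilities and the one-hot label:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 0.5, -1.0])           # model favors class 0 (the true class)
p = softmax(logits)

e_true, e_noisy = np.eye(3)[0], np.eye(3)[2]  # clean label vs. an impostor label
g_clean = p - e_true    # gradient of CE w.r.t. the logits under the clean label
g_noisy = p - e_noisy   # same example under a wrong label

print(g_clean, g_noisy)
# Under the clean label the true-class entry is negative (its logit is pushed up);
# under the wrong label it flips sign, and so does the impostor-class entry.
```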
4.2 Early-learning regularization
In this section we present a novel framework for learning from noisy labels called early-learning regularization (ELR). We assume that we have available a target¹ vector of probabilities $t_i$ for each example $i$, which is computed using past outputs of the model. Section 4.3 describes several techniques to compute the targets. Here we explain how to use them to avoid memorization. (¹The term target is inspired by semi-supervised learning, where target probabilities are used to learn on unlabeled examples yarowsky1995unsupervised ; mcclosky2006effective ; laine2016temporal .)
Due to the early-learning phenomenon, we assume that at the beginning of the optimization process the targets do not overfit the noisy labels. ELR exploits this using a regularization term that seeks to maximize the inner product between the model output and the targets,
$$\mathcal{L}_{\mathrm{ELR}}(\theta) = \mathcal{L}_{\mathrm{CE}}(\theta) + \frac{\lambda}{n}\sum_{i=1}^{n} \log\big( 1 - \langle p_i, t_i \rangle \big). \qquad (6)$$
The logarithm in the regularization term counteracts the exponential function implicit in the softmax used to compute the probabilities $p_i$. A possible alternative to this approach would be to penalize the Kullback-Leibler divergence between the model outputs and the targets. However, this does not exploit the early-learning phenomenon effectively, because it leads to overfitting the targets, as demonstrated in Section C.
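A minimal numpy sketch of this objective, cross entropy plus a term $\lambda$ times the mean of $\log(1 - \langle p, t\rangle)$ (the regularization coefficient is a tuned hyperparameter; the value 3 below is purely an illustrative placeholder):

```python
import numpy as np

def softmax(Z):
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

def elr_loss(logits, noisy_onehot, targets, lam=3.0):
    """Cross entropy plus the early-learning regularizer lam * mean(log(1 - <p, t>)).
    Minimizing the regularizer pushes the inner product <p, t> toward 1."""
    p = softmax(logits)
    ce = -np.mean(np.sum(noisy_onehot * np.log(p + 1e-12), axis=1))
    reg = np.mean(np.log(1.0 - np.sum(p * targets, axis=1) + 1e-12))
    return ce + lam * reg
```

As a sanity check, aligning the model output with a target concentrated on one class drives the regularization term down, so the total loss decreases as $\langle p, t\rangle$ grows.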
The key to understanding why ELR is effective lies in its gradient, derived in the following lemma, which is proved in Section D.
Lemma 2 (Gradient of the ELR loss).
The gradient of the loss defined in Eq. (6) is equal to
$$\nabla \mathcal{L}_{\mathrm{ELR}}(\theta) = \frac{1}{n}\sum_{i=1}^{n} \nabla \mathcal{N}_\theta(x_i)^{\mathsf T} \big( p_i - \tilde y_i + \lambda\, g_i \big),$$
where the entries of $g_i$ are given by
$$g_{i,k} = \frac{p_{i,k} \sum_{c \neq k} p_{i,c}\, (t_{i,c} - t_{i,k})}{1 - \langle p_i, t_i \rangle}, \qquad 1 \leq k \leq C.$$
[Figure 2: effect of ELR on the gradients of examples with clean labels and examples with wrong labels.] After the early-learning stage, the cross-entropy (CE) term vanishes on examples with clean labels. However, the regularization term compensates for this, forcing the model to continue learning mainly on the examples with clean labels. On the right, we show the CE and the regularization term (dark and light red respectively) separately for the examples with wrong labels. The regularization cancels out the CE term, preventing memorization. In all plots the curves represent the mean value, and the shaded regions are within one standard deviation of the mean.
In words, the sign of $g_{i,k}$ is determined by a weighted combination of the differences between $t_{i,k}$ and the rest of the entries of the target $t_i$.
If $c$ is the true class, then the $c$th entry of the target $t_i$ tends to be dominant during early learning. In that case, the $c$th entry of $g_i$ is negative. This is useful both for examples with clean labels and for those with wrong labels. For examples with clean labels, the cross-entropy term $p_i - \tilde y_i$ tends to vanish after the early-learning stage because $p_{i,c}$ is very close to $\tilde y_{i,c} = 1$, allowing examples with wrong labels to dominate the gradient. Adding $\lambda g_i$ counteracts this effect by ensuring that the magnitudes of the coefficients on examples with clean labels remain large. The center image of Figure 2 shows this effect. For examples with wrong labels, the cross-entropy term is positive because $p_{i,c} > 0 = \tilde y_{i,c}$. Adding the negative term $\lambda g_{i,c}$ therefore dampens the coefficients on these mislabeled examples, thereby diminishing their effect on the gradient (see the right image in Figure 2). Thus, ELR fulfils the two desired properties outlined at the end of Section 3: boosting the gradient of examples with clean labels, and neutralizing the gradient of the examples with false labels.
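Under Lemma 2, the regularizer's contribution to the gradient with respect to the logits of an example is $g$ with $g_k = p_k(\langle p, t\rangle - t_k) / (1 - \langle p, t\rangle)$; this closed form can be checked against finite differences (a self-contained numpy sketch that omits the Jacobian factor contributed by the network):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reg(z, t):                       # per-example regularizer log(1 - <p, t>)
    return np.log(1.0 - softmax(z) @ t)

rng = np.random.default_rng(1)
z = rng.standard_normal(5)           # logits of one example
t = softmax(rng.standard_normal(5))  # a valid target distribution

p = softmax(z)
g = p * (p @ t - t) / (1.0 - p @ t)  # closed-form gradient of reg w.r.t. the logits

# compare against central finite differences
eps = 1e-6
num = np.array([(reg(z + eps * np.eye(5)[k], t) - reg(z - eps * np.eye(5)[k], t)) / (2 * eps)
                for k in range(5)])
print(np.max(np.abs(g - num)))       # agreement up to discretization error
```

Note also that the entry of $g$ at the dominant target class is negative, consistent with the discussion above.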
4.3 Target estimation
ELR requires a target probability $t_i$ for each example $i$ in the training set. The target can be set equal to the model output, but using a running average is more effective. In semi-supervised learning, this technique is known as temporal ensembling laine2016temporal . Let $t_i^{(l)}$ and $p_i^{(l)}$ denote the target and model output respectively for example $i$ at iteration $l$ of training. We set
$$t_i^{(l)} = \beta\, t_i^{(l-1)} + (1 - \beta)\, p_i^{(l)},$$
where $0 \leq \beta < 1$ is the momentum. The basic version of our proposed method alternates between computing the targets and minimizing the cost function (6) via stochastic gradient descent.
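A minimal sketch of the temporal-ensembling update (the momentum value below is illustrative):

```python
import numpy as np

def update_targets(targets, probs, beta=0.7):
    """Temporal ensembling: exponential moving average of the model outputs.
    beta is the momentum; targets remain valid probability vectors because the
    update is a convex combination of two probability vectors."""
    return beta * targets + (1.0 - beta) * probs

# The running average forgets an uninformative initialization geometrically:
t = np.full(3, 1.0 / 3.0)           # target initialized to the uniform distribution
p = np.array([0.9, 0.05, 0.05])     # a (constant) model output for this example
for _ in range(50):
    t = update_targets(t, p)
print(t)                            # t has converged toward p
```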
Target estimation can be further improved in two ways. First, by using the output of a model obtained through a running average of the model weights during training. In semi-supervised learning, this weight averaging approach has been proposed to mitigate confirmation bias tarvainen2017mean . Second, by using two separate neural networks, where the target of each network is computed from the output of the other network. The approach is inspired by Co-teaching and related methods Han2018CoteachingRT ; Yu2019HowDD ; li2020dividemix . The ablation results in Section 6 show that weight averaging, two networks, and mixup data augmentation zhang2017mixup all separately improve performance. We call the combination of all these elements ELR+. A detailed description of ELR and ELR+ is provided in Section E of the supplementary material.
Table 1: Test accuracy (%, mean ± std) on CIFAR-10 and CIFAR-100 with symmetric and asymmetric label noise.

| Dataset (Architecture) | Method | Sym. 20% | Sym. 40% | Sym. 60% | Sym. 80% | Asym. 10% | Asym. 20% | Asym. 30% | Asym. 40% |
|---|---|---|---|---|---|---|---|---|---|
| CIFAR-10 (ResNet34) | Cross entropy | 86.98 ± 0.12 | 81.88 ± 0.29 | 74.14 ± 0.56 | 53.82 ± 1.04 | 90.69 ± 0.17 | 88.59 ± 0.34 | 86.14 ± 0.40 | 80.11 ± 1.44 |
| | Bootstrap Reed2015TrainingDN | 86.23 ± 0.23 | 82.23 ± 0.37 | 75.12 ± 0.56 | 54.12 ± 1.32 | 90.32 ± 0.21 | 88.26 ± 0.24 | 86.57 ± 0.35 | 81.21 ± 1.47 |
| | Forward patrini2017making | 87.99 ± 0.36 | 83.25 ± 0.38 | 74.96 ± 0.65 | 54.64 ± 0.44 | 90.52 ± 0.26 | 89.09 ± 0.47 | 86.79 ± 0.36 | 83.55 ± 0.58 |
| | GCE zhang2018generalized | 89.83 ± 0.20 | 87.13 ± 0.22 | 82.54 ± 0.23 | 64.07 ± 1.38 | 90.91 ± 0.22 | 89.33 ± 0.17 | 85.45 ± 0.74 | 76.74 ± 0.61 |
| | SL Wang2019SymmetricCE | 89.83 ± 0.32 | 87.13 ± 0.26 | 82.81 ± 0.61 | 68.12 ± 0.81 | 91.72 ± 0.31 | 90.44 ± 0.27 | 88.48 ± 0.46 | 82.51 ± 0.45 |
| | ELR | 91.16 ± 0.08 | 89.15 ± 0.17 | 86.12 ± 0.49 | 73.86 ± 0.61 | 93.27 ± 0.11 | 93.52 ± 0.23 | 91.89 ± 0.22 | 91.12 ± 0.47 |
| | ELR* | 92.12 ± 0.35 | 91.43 ± 0.21 | 88.87 ± 0.24 | 80.69 ± 0.57 | 94.57 ± 0.23 | 93.28 ± 0.19 | 92.70 ± 0.41 | 91.35 ± 0.38 |
| CIFAR-100 (ResNet34) | Cross entropy | 58.72 ± 0.26 | 48.20 ± 0.65 | 37.41 ± 0.94 | 18.10 ± 0.82 | 66.54 ± 0.42 | 59.20 ± 0.18 | 51.40 ± 0.16 | 42.74 ± 0.61 |
| | Bootstrap Reed2015TrainingDN | 58.27 ± 0.21 | 47.66 ± 0.55 | 34.68 ± 1.10 | 21.64 ± 0.97 | 67.27 ± 0.78 | 62.14 ± 0.32 | 52.87 ± 0.19 | 45.12 ± 0.57 |
| | Forward patrini2017making | 39.19 ± 2.61 | 31.05 ± 1.44 | 19.12 ± 1.95 | 8.99 ± 0.58 | 45.96 ± 1.21 | 42.46 ± 2.16 | 38.13 ± 2.97 | 34.44 ± 1.93 |
| | GCE zhang2018generalized | 66.81 ± 0.42 | 61.77 ± 0.24 | 53.16 ± 0.78 | 29.16 ± 0.74 | 68.36 ± 0.42 | 66.59 ± 0.22 | 61.45 ± 0.26 | 47.22 ± 1.15 |
| | SL Wang2019SymmetricCE | 70.38 ± 0.13 | 62.27 ± 0.22 | 54.82 ± 0.57 | 25.91 ± 0.44 | 73.12 ± 0.22 | 72.56 ± 0.22 | 72.12 ± 0.24 | 69.32 ± 0.87 |
| | ELR | 74.21 ± 0.22 | 68.28 ± 0.31 | 59.28 ± 0.67 | 29.78 ± 0.56 | 74.20 ± 0.31 | 74.03 ± 0.31 | 73.71 ± 0.22 | 73.26 ± 0.64 |
| | ELR* | 74.68 ± 0.31 | 68.43 ± 0.42 | 60.05 ± 0.78 | 30.27 ± 0.86 | 74.52 ± 0.32 | 74.20 ± 0.25 | 74.02 ± 0.33 | 73.73 ± 0.34 |

*Results with cosine annealing learning rate.
We evaluate the proposed methodology on two standard benchmarks with simulated label noise, CIFAR-10 and CIFAR-100 krizhevsky2009learning , and two real-world datasets, Clothing1M xiao2015learning and WebVision li2017webvision . For CIFAR-10 and CIFAR-100 we simulate label noise by randomly flipping a certain fraction of the labels in the training set following a symmetric (uniform) distribution (as in Eq. (1)), as well as a more realistic asymmetric, class-dependent distribution, following the scheme proposed in patrini2017making . Clothing1M consists of 1 million training images collected from online shopping websites, with labels generated from surrounding text; its noise level is estimated in song2019prestopping . For ease of comparison to previous works Jiang2018MentorNetLD ; Chen2019UnderstandingAU , we consider the mini WebVision dataset, which contains the top 50 classes from the Google image subset of WebVision, resulting in approximately 66 thousand images. The noise level of WebVision is estimated in li2017webvision . Table F.1 in the supplementary material reports additional details about the datasets, and our training, validation and test splits.
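The symmetric noise simulation can be sketched as follows (this follows one common convention, in which the random replacement label may coincide with the true one; some implementations instead exclude the true class, which changes the effective fraction of wrong labels):

```python
import numpy as np

def add_symmetric_noise(labels, num_classes, fraction, rng):
    """Relabel a given fraction of examples with classes drawn uniformly at
    random, so roughly fraction * (1 - 1/num_classes) labels end up wrong."""
    noisy = labels.copy()
    idx = rng.choice(len(labels), size=int(fraction * len(labels)), replace=False)
    noisy[idx] = rng.integers(0, num_classes, size=len(idx))
    return noisy

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=50_000)    # CIFAR-10-sized toy label vector
noisy = add_symmetric_noise(labels, 10, 0.4, rng)
print((noisy != labels).mean())              # close to 0.4 * 0.9 = 0.36
```

Asymmetric noise would instead map each class to a fixed, visually similar class, following the scheme of patrini2017making .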
In our experiments, we prioritize making our results comparable to the existing literature. When possible we use the same preprocessing and architectures as previous methods. The details are described in Section F of the supplementary material. We focus on two variants of the proposed approach: ELR with temporal ensembling, which we call ELR, and ELR with temporal ensembling, weight averaging, two networks, and mixup data augmentation, which we call ELR+ (see Section E
). The choice of hyperparameters is performed on separate validation sets. Section G shows that the sensitivity to different hyperparameters is quite low. Finally, we also perform an ablation study on CIFAR-10 for two levels of symmetric noise (40% and 80%) in order to evaluate the contribution of the different elements in ELR+. Code to reproduce the experiments is publicly available online at https://github.com/shengliu66/ELR.
Table 2: Test accuracy (%) compared to methods that also use sample selection and data augmentation.

| Dataset | Noise | Cross entropy | Co-teaching+ Yu2019HowDD | Mixup zhang2017mixup | PENCIL yi2019probabilistic | MD-DYR-SH Arazo2019unsup | DivideMix li2020dividemix | ELR+ |
|---|---|---|---|---|---|---|---|---|
| CIFAR-10 | Sym. 20% | 86.8 | 89.5 | 95.6 | 92.4 | 94.0 | 96.1 | 94.6 |
| CIFAR-100 | Sym. 20% | 62.0 | 65.6 | 67.8 | 69.4 | 73.9 | 77.3 | 77.5 |
Table 1 evaluates the performance of ELR on CIFAR-10 and CIFAR-100 with different levels of symmetric and asymmetric label noise. We compare to the best performing methods that only modify the training loss. All techniques use the same architecture (ResNet34), batch size, and training procedure. ELR consistently outperforms the rest by a significant margin. To illustrate the influence of the training procedure, we include results with a different learning-rate scheduler (cosine annealing Loshchilov2017SGDRSG ), which further improves the results.
In Table 2, we compare ELR+ to state-of-the-art methods, which also apply sample selection and data augmentation, on CIFAR-10 and CIFAR-100. All methods use the same architecture (PreAct ResNet-18). The results from other methods may not be completely comparable to ours because they correspond to the best test performance during training, whereas we use a separate validation set. Nevertheless, ELR+ outperforms all other methods except DivideMix.
Table 3: Test accuracy (%) on Clothing1M.

| CE | Forward patrini2017making | GCE zhang2018generalized | SL Wang2019SymmetricCE | Joint-Optim Tanaka2018JointOF | DivideMix li2020dividemix | ELR | ELR+ |
|---|---|---|---|---|---|---|---|
Table 3 compares ELR and ELR+ to state-of-the-art methods on the Clothing1M dataset. ELR+ achieves state-of-the-art performance, slightly superior to DivideMix.
Table 4: Accuracy (%) on the WebVision and ILSVRC12 validation sets after training on mini WebVision.

| D2L Ma2018DimensionalityDrivenLW | MentorNet Jiang2018MentorNetLD | Co-teaching Han2018CoteachingRT | Iterative-CV Wang2018IterativeLW | DivideMix li2020dividemix | ELR | ELR+ |
|---|---|---|---|---|---|---|
Table 4 compares ELR and ELR+ to state-of-the-art methods trained on the mini WebVision dataset and evaluated on both the WebVision and ImageNet ILSVRC12 validation sets. ELR+ achieves state-of-the-art performance on WebVision, slightly superior to DivideMix. ELR also performs strongly, despite its simplicity. On ILSVRC12, DivideMix produces superior results (particularly in terms of top-1 accuracy).
Table 5: Ablation study of ELR+ on CIFAR-10 with 40% and 80% symmetric noise (test accuracy %, mean ± std; WA = weight averaging).

| Networks | mixup | 40% noise, WA ✓ | 40% noise, WA ✗ | 80% noise, WA ✓ | 80% noise, WA ✗ |
|---|---|---|---|---|---|
| 1 Network | ✓ | 93.04 ± 0.12 | 91.05 ± 0.13 | 87.23 ± 0.30 | 81.43 ± 0.52 |
| 1 Network | ✗ | 92.09 ± 0.08 | 90.83 ± 0.07 | 76.50 ± 0.65 | 72.54 ± 0.35 |
| 2 Networks | ✓ | 93.68 ± 0.51 | 93.51 ± 0.47 | 88.62 ± 0.26 | 84.75 ± 0.26 |
| 2 Networks | ✗ | 92.95 ± 0.05 | 91.86 ± 0.14 | 80.13 ± 0.51 | 73.49 ± 0.47 |
Table 5 shows the results of an ablation study evaluating the influence of the different elements of ELR+ for the CIFAR-10 dataset with medium (40%) and high (80%) levels of symmetric noise. Each element seems to provide an independent performance boost. At the medium noise level the improvement is modest, but at the high noise level it is very significant. This is in line with recent works showing the effectiveness of semi-supervised learning techniques in such settings Arazo2019unsup ; li2020dividemix .
7 Discussion and Future Work
In this work we provide a theoretical characterization of the early-learning and memorization phenomena for a linear generative model, and build upon the resulting insights to propose a novel framework for learning from data with noisy annotations. Our proposed methodology yields strong results on standard benchmarks and real-world datasets for several different network architectures. However, there remain multiple open problems for future research. On the theoretical front, it would be interesting to bridge the gap between linear and nonlinear models (see li2019gradient for some work in this direction), and also to investigate the dynamics of the proposed regularization scheme. On the methodological front, we hope that our work will trigger interest in the design of new forms of regularization that provide robustness to label noise.
8 Broader Impact
This work has the potential to advance the development of machine-learning methods that can be deployed in contexts where it is costly to gather accurate annotations. This is an important issue in applications such as medicine, where machine learning has great potential societal impact.
This research was supported by NSF NRT-HDR Award 1922658. JNW gratefully acknowledges the support of the Institute for Advanced Study, where a portion of this research was conducted.
-  Yacine Aït-Sahalia, Jianqing Fan, and Dacheng Xiu. High-frequency covariance estimates with noisy and asynchronous financial data. Journal of the American Statistical Association, 105(492):1504–1517, 2010.
-  Eric Arazo, Diego Ortego, Paul Albert, Noel E O’Connor, and Kevin McGuinness. Unsupervised label noise modeling and loss correction. In International Conference on Machine Learning (ICML), June 2019.
-  Devansh Arpit, Stanisław Jastrzębski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, and Simon Lacoste-Julien. A closer look at memorization in deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 233–242. JMLR. org, 2017.
-  David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. In Advances in Neural Information Processing Systems, pages 5050–5060, 2019.
-  Avrim Blum, Adam Kalai, and Hal Wasserman. Noise-tolerant learning, the parity problem, and the statistical query model. Journal of the ACM (JACM), 50(4):506–519, 2003.
-  Haw-Shiuan Chang, Erik Learned-Miller, and Andrew McCallum. Active bias: Training more accurate neural networks by emphasizing high variance samples. In Advances in Neural Information Processing Systems, pages 1002–1012, 2017.
-  Pengfei Chen, Benben Liao, Guangyong Chen, and Shengyu Zhang. Understanding and utilizing deep neural networks trained with noisy labels. In ICML, 2019.
-  Thomas M. Cover. Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Trans. Electronic Computers, 14(3):326–334, 1965.
-  Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
-  Aritra Ghosh, Himanshu Kumar, and PS Sastry. Robust loss functions under label noise for deep neural networks. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
-  Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 580–587, 2014.
-  Jacob Goldberger and Ehud Ben-Reuven. Training deep neural-networks using a noise adaptation layer. In ICLR, 2017.
-  Melody Y Guan, Varun Gulshan, Andrew M Dai, and Geoffrey E Hinton. Who said what: Modeling individual labelers improves classification. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
-  Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Wai-Hung Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In NeurIPS, 2018.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
-  Dan Hendrycks, Mantas Mazeika, Duncan Wilson, and Kevin Gimpel. Using trusted data to train deep networks on labels corrupted by severe noise. In NeurIPS, 2018.
-  Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In ICML, 2018.
-  Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
-  Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
-  Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. In ICLR, 2017.
-  Michel Ledoux and Michel Talagrand. Probability in Banach Spaces: isoperimetry and processes. Springer Science & Business Media, 2013.
-  Junnan Li, Richard Socher, and Steven C.H. Hoi. Dividemix: Learning with noisy labels as semi-supervised learning. In International Conference on Learning Representations, 2020.
-  Mingchen Li, Mahdi Soltanolkotabi, and Samet Oymak. Gradient descent with early stopping is provably robust to label noise for overparameterized neural networks. arXiv preprint arXiv:1903.11680, 2019.
-  Wen Li, Limin Wang, Wei Li, Eirikur Agustsson, and Luc Van Gool. Webvision database: Visual learning and understanding from web data. arXiv preprint arXiv:1708.02862, 2017.
-  Sheng Liu, Chhavi Yadav, Carlos Fernandez-Granda, and Narges Razavian. On the design of convolutional neural networks for automatic detection of Alzheimer’s disease. In Proceedings of the Machine Learning for Health NeurIPS Workshop, volume 116 of Proceedings of Machine Learning Research (PMLR), pages 184–201. PMLR, 13 Dec 2020.
-  Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. In ICLR, 2017.
-  Xingjun Ma, Yisen Wang, Michael E. Houle, Shuo Zhou, Sarah M. Erfani, Shu-Tao Xia, Sudanthi N. R. Wijewickrema, and James Bailey. Dimensionality-driven learning with noisy labels. In ICML, 2018.
-  Eran Malach and Shai Shalev-Shwartz. Decoupling "when to update" from "how to update". In Advances in Neural Information Processing Systems, pages 960–970, 2017.
-  David McClosky, Eugene Charniak, and Mark Johnson. Effective self-training for parsing. In Proceedings of the main conference on human language technology conference of the North American Chapter of the Association of Computational Linguistics, pages 152–159. Association for Computational Linguistics, 2006.
-  Shahar Mendelson. Learning without concentration. In Conference on Learning Theory, pages 25–39, 2014.
-  Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu. Making deep neural networks robust to label noise: A loss correction approach. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), pages 2233–2241, 2017.
-  Mykola Pechenizkiy, Alexey Tsymbal, Seppo Puuronen, and Oleksandr Pechenizkiy. Class noise and supervised learning in medical domains: The effect of feature extraction. In 19th IEEE Symposium on Computer-Based Medical Systems (CBMS'06), pages 708–713. IEEE, 2006.
-  Scott E. Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich. Training deep neural networks on noisy labels with bootstrapping. CoRR, abs/1412.6596, 2015.
-  Mengye Ren, Wenyuan Zeng, Bin Yang, and Raquel Urtasun. Learning to reweight examples for robust deep learning. In ICML, 2018.
-  Hwanjun Song, Minseok Kim, and Jae-Gil Lee. SELFIE: Refurbishing unclean samples for robust deep learning. In ICML, 2019.
-  Hwanjun Song, Minseok Kim, Dongmin Park, and Jae-Gil Lee. Prestopping: How does early stopping help generalization against label noise? arXiv preprint arXiv:1911.08059, 2019.
-  Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro. The implicit bias of gradient descent on separable data. The Journal of Machine Learning Research, 19(1):2822–2878, 2018.
-  Daiki Tanaka, Daiki Ikami, Toshihiko Yamasaki, and Kiyoharu Aizawa. Joint optimization framework for learning with noisy labels. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5552–5560, 2018.
-  Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in neural information processing systems, pages 1195–1204, 2017.
-  Andreas Veit, Neil Alldrin, Gal Chechik, Ivan Krasin, Abhinav Gupta, and Serge J Belongie. Learning from noisy large-scale datasets with minimal supervision. In CVPR, pages 6575–6583, 2017.
-  Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. In Compressed Sensing, pages 210–268. Cambridge Univ. Press, Cambridge, 2012.
-  Yisen Wang, Weiyang Liu, Xingjun Ma, James Bailey, Hongyuan Zha, Le Song, and Shu-Tao Xia. Iterative learning with open-set noisy labels. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8688–8696, 2018.
-  Yisen Wang, Xingjun Ma, Zaiyi Chen, Yuan Luo, Jinfeng Yi, and James Bailey. Symmetric cross entropy for robust learning with noisy labels. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 322–330, 2019.
-  Xiaobo Xia, Tongliang Liu, Nannan Wang, Bo Han, Chen Gong, Gang Niu, and Masashi Sugiyama. Are anchor points really indispensable in label-noise learning? In Advances in Neural Information Processing Systems, pages 6835–6846, 2019.
-  Tong Xiao, Tian Xia, Yi Yang, Chang Huang, and Xiaogang Wang. Learning from massive noisy labeled data for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2691–2699, 2015.
-  Yilun Xu, Peng Cao, Yuqing Kong, and Yizhou Wang. L_DMI: A novel information-theoretic loss function for training deep nets robust to label noise. In NeurIPS, 2019.
-  Yan Yan, Rómer Rosales, Glenn Fung, Ramanathan Subramanian, and Jennifer Dy. Learning from multiple annotators with varying expertise. Machine learning, 95(3):291–327, 2014.
-  David Yarowsky. Unsupervised word sense disambiguation rivaling supervised methods. In 33rd annual meeting of the association for computational linguistics, pages 189–196, 1995.
-  Kun Yi and Jianxin Wu. Probabilistic end-to-end noise correction for learning with noisy labels. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7017–7025, 2019.
-  Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor Wai-Hung Tsang, and Masashi Sugiyama. How does disagreement help generalization against label corruption? In ICML, 2019.
-  Xiyu Yu, Tongliang Liu, Mingming Gong, and Dacheng Tao. Learning with biased complementary labels. In Proceedings of the European Conference on Computer Vision (ECCV), pages 68–83, 2018.
-  Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In ICLR, 2017.
-  Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In ICLR, 2018.
-  Zhilu Zhang and Mert R Sabuncu. Generalized cross entropy loss for training deep neural networks with noisy labels. In NeurIPS, 2018.
Appendix A Theoretical analysis of early learning and memorization in a linear model
In this section, we formalize and substantiate the claims of Theorem 1.
Theorem 1 has three parts, which we address in the following sections. First, in Section A.2, we show that the classifier makes progress during the early-learning phase: over the first $c/\eta$ iterations, the gradient is well correlated with the cluster-mean direction $v$, and the accuracy on mislabeled examples increases. However, as noted in the main text, this early progress halts because the gradient terms corresponding to correctly labeled examples begin to disappear. We prove this rigorously in Section A.3, which shows that the overall magnitude of the gradient terms corresponding to correctly labeled examples shrinks over the first iterations. Finally, in Section A.4, we prove the claimed asymptotic behavior: as $t \to \infty$, gradient descent perfectly memorizes the noisy labels.
A.1 Notation and setup
We consider a softmax regression model parameterized by two weight vectors $\theta_1$ and $\theta_2$, which are the rows of the parameter matrix $\Theta$. In the linear case this is equivalent to a logistic regression model, because the cross-entropy loss on two classes depends only on the vector $\theta_1 - \theta_2$. If we reparametrize the labels as $y_i \in \{-1, +1\}$ and set $\theta := \theta_1 - \theta_2$, we can then write the loss as
$$\mathcal{L}(\theta) = \frac{1}{n}\sum_{i=1}^{n}\log\big(1 + e^{-y_i\langle\theta,\, x_i\rangle}\big)\,.$$
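As a quick sanity check on this reduction (a minimal numerical sketch with arbitrary dimensions, not part of the analysis), the following verifies that the two-class softmax cross entropy depends on $(\theta_1, \theta_2)$ only through $\theta = \theta_1 - \theta_2$, and that its gradient takes the weighted form $-\frac{1}{n}\sum_i g_i y_i x_i$ with $g_i = (1 + e^{y_i\langle\theta, x_i\rangle})^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 10                           # arbitrary illustration sizes
X = rng.normal(size=(n, p))             # rows are the inputs x_i
y = rng.choice([-1.0, 1.0], size=n)     # labels reparametrized to {-1, +1}
theta1, theta2 = rng.normal(size=p), rng.normal(size=p)
theta = theta1 - theta2                 # the loss depends only on this difference

def softmax_ce(theta1, theta2):
    # two-class softmax cross entropy; class "+1" uses theta1, class "-1" uses theta2
    logits = np.stack([X @ theta1, X @ theta2], axis=1)
    logits = logits - logits.max(axis=1, keepdims=True)   # for numerical stability
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(n), (y < 0).astype(int)].mean()

def logistic_loss(theta):
    return np.mean(np.log1p(np.exp(-y * (X @ theta))))

# the two losses coincide, so only theta1 - theta2 matters
assert np.allclose(softmax_ce(theta1, theta2), logistic_loss(theta))

# gradient: -(1/n) sum_i g_i y_i x_i with g_i = sigmoid(-y_i <theta, x_i>)
g = 0.5 * (1.0 - np.tanh(y * (X @ theta) / 2.0))   # numerically stable sigmoid
grad = -(g * y) @ X / n

# compare against a centered finite-difference approximation
eps = 1e-6
fd = np.array([(logistic_loss(theta + eps * e) - logistic_loss(theta - eps * e)) / (2 * eps)
               for e in np.eye(p)])
assert np.allclose(grad, fd, atol=1e-5)
```

The tanh identity $s(-u) = \tfrac12(1 - \tanh(u/2))$ is used only to avoid overflow; it is exactly the sigmoid appearing in the gradient below.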
We write $\sigma_i \in \{-1, +1\}$ for the true cluster assignments: $\sigma_i = +1$ if $x_i$ comes from the cluster with mean $v$, and $\sigma_i = -1$ otherwise. Note that, with this convention, we can always write $x_i = \sigma_i v + z_i$, where $z_i$ is a standard Gaussian random vector independent of all other random variables.
In terms of $y_i$ and $x_i$, the gradient (2) reads
$$\nabla\mathcal{L}(\theta) = -\frac{1}{n}\sum_{i=1}^{n} g_i\, y_i\, x_i\,, \qquad g_i := \frac{1}{1 + e^{y_i\langle\theta,\, x_i\rangle}}\,.$$
As noted in the main text, the coefficient $g_i \in (0, 1)$ is the key quantity governing the properties of the gradient.
Let us write $\mathcal{C} := \{i : y_i = \sigma_i\}$ for the set of indices for which the labels are correct, and $\mathcal{W} := \{i : y_i = -\sigma_i\}$ for the set of indices for which the labels are wrong.
We assume that $\theta$ is initialized randomly on the sphere of radius one, and then optimized to minimize $\mathcal{L}$ via gradient descent with fixed step size $\eta$. We denote the iterates by $\theta^0, \theta^1, \theta^2, \dots$
We consider the asymptotic regime where $\Delta$ and $\delta$ are constants and $n, p \to \infty$, with $p/n \to \delta$. For convenience, we assume that $\|v\| = 1$, though it is straightforward to extend the analysis below to any $\|v\|$ bounded away from $0$. We will use the phrase “with high probability” to denote an event which happens with probability $1 - o(1)$ as $n, p \to \infty$, and we use $o_P(1)$ to denote a random quantity which converges to $0$ in probability. We use the symbol $c$ to refer to an unspecified positive constant whose value may change from line to line. We use subscripts to indicate when this constant depends on other parameters of the problem.
A.2 Early learning succeeds
We first show that, for the first $c/\eta$ iterations, the negative gradient has constant correlation with $v$. (Note that, by contrast, a random vector in $\mathbb{R}^p$ typically has negligible correlation, of order $1/\sqrt{p}$, with $v$.)
Proposition 3. Assume $\eta$ is sufficiently small. With high probability, for all $0 \le t \le c/\eta$, we have
$$\big\langle -\nabla\mathcal{L}(\theta^{t}),\, v\big\rangle \ge c\,\big\|\nabla\mathcal{L}(\theta^{t})\big\|\,.$$
Proof. We will prove the claim by induction. We write
$$-\nabla\mathcal{L}(\theta) = \frac{1}{n}\sum_{i=1}^{n} g_i\, y_i\, x_i = \Big(\frac{1}{n}\sum_{i=1}^{n} g_i\, y_i\, \sigma_i\Big)\, v + \frac{1}{n}\sum_{i=1}^{n} g_i\, y_i\, z_i\,.$$
Since $y_i \sigma_i = +1$ for $i \in \mathcal{C}$ and $y_i \sigma_i = -1$ for $i \in \mathcal{W}$, the law of large numbers implies
$$\frac{1}{n}\sum_{i=1}^{n} g_i\, y_i\, \sigma_i = (1 - \Delta)\,\mathbb{E}\big[g_i \mid i \in \mathcal{C}\big] - \Delta\,\mathbb{E}\big[g_i \mid i \in \mathcal{W}\big] + o_P(1)\,.$$
Moreover, by Lemma 9, there exists a positive constant $c$ such that with high probability
$$(1 - \Delta)\,\mathbb{E}\big[g_i \mid i \in \mathcal{C}\big] - \Delta\,\mathbb{E}\big[g_i \mid i \in \mathcal{W}\big] \ge c\,.$$
Thus, applying Lemma 8 to control the noise term $\frac{1}{n}\sum_{i=1}^{n} g_i y_i z_i$ yields that with high probability
$$\big\langle -\nabla\mathcal{L}(\theta^{t}),\, v\big\rangle \ge c\,\big\|\nabla\mathcal{L}(\theta^{t})\big\| - o_P(1)\,.$$
When $t = 0$, the first term is bounded below by a positive constant by Lemma 7. Since we have assumed that $\Delta < 1/2$, choosing $\eta$ sufficiently small yields that this quantity is bounded below by $c$, as desired.
We proceed with the induction. If we assume the claim holds up to time $t$, then the definition of gradient descent implies
$$\theta^{t+1} - \theta^{0} = \eta\sum_{s=0}^{t} u^{s}\,, \qquad u^{s} := -\nabla\mathcal{L}(\theta^{s})\,,$$
where each $u^{s}$ satisfies $\langle u^{s}, v\rangle \ge c\,\|u^{s}\|$. Since the set of vectors satisfying this requirement forms a convex cone, we obtain that
$$\big\langle \theta^{t+1} - \theta^{0},\, v\big\rangle \ge c\,\big\|\theta^{t+1} - \theta^{0}\big\|\,.$$
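The base case of this induction — that at a random initialization the negative gradient is already non-trivially correlated with $v$ — can also be checked numerically. The following is an illustrative sketch with arbitrary parameters (not the constants of the analysis); for contrast it also reports the correlation of a purely random direction with $v$, which is of order $1/\sqrt{p}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, noise = 1000, 1000, 0.2          # proportional regime, p/n = 1

v = np.zeros(p); v[0] = 1.0            # cluster-mean direction with ||v|| = 1
sigma = rng.choice([-1.0, 1.0], n)     # true cluster assignments
X = sigma[:, None] * v + rng.normal(size=(n, p))   # x_i = sigma_i v + z_i
flip = rng.random(n) < noise
y = np.where(flip, -sigma, sigma)      # noisy labels

theta0 = rng.normal(size=p)
theta0 /= np.linalg.norm(theta0)       # random initialization on the unit sphere

g = 0.5 * (1.0 - np.tanh(y * (X @ theta0) / 2.0))  # g_i = sigmoid(-y_i <theta0, x_i>)
neg_grad = (g * y) @ X / n                         # -grad L(theta0)
corr = neg_grad @ v / np.linalg.norm(neg_grad)     # correlation with v

u = rng.normal(size=p)
rand_corr = abs(u @ v) / np.linalg.norm(u)         # random direction: ~ 1/sqrt(p)
print(corr, rand_corr)
```

Even though the initialization carries essentially no information about $v$, the averaging over $n$ examples makes the clean-label terms add up coherently along $v$, so the gradient's correlation with $v$ is of constant order rather than $1/\sqrt{p}$.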
Given $\theta$, we denote by
$$\mathrm{acc}_{\mathcal{W}}(\theta) := \frac{1}{|\mathcal{W}|}\sum_{i \in \mathcal{W}} \mathbf{1}\big\{\mathrm{sign}(\langle\theta, x_i\rangle) = \sigma_i\big\}$$
the accuracy of $\theta$ on mislabeled examples. We now show that the classifier’s accuracy on the mislabeled examples improves over the first $c/\eta$ rounds. In fact, we show that $\mathrm{acc}_{\mathcal{W}}(\theta^{t})$ is bounded away from $1/2$ with high probability, whereas $\mathrm{acc}_{\mathcal{W}}(\theta^{0}) = \frac12 + o_P(1)$.
Proposition 4. For any $\epsilon > 0$, there exists a sufficiently small $\eta$ such that
$$\mathrm{acc}_{\mathcal{W}}(\theta^{t}) \ge \Phi(c) - \epsilon \qquad \text{for all } t \asymp 1/\eta$$
with high probability, where $\Phi$ is the standard Gaussian CDF and $c$ is a positive constant.
Proof. Let us write $x_i = \sigma_i v + z_i$, where $z_i$ is a standard Gaussian vector. If we fix $\theta$, then $\mathrm{sign}(\langle\theta, x_i\rangle) = \sigma_i$ if and only if $\langle\theta, v\rangle + \sigma_i\langle\theta, z_i\rangle > 0$. In particular this yields
$$\mathbb{P}\big(\mathrm{sign}(\langle\theta, x_i\rangle) = \sigma_i\big) = \Phi\Big(\frac{\langle\theta, v\rangle}{\|\theta\|}\Big)\,.$$
By the law of large numbers, we have that, conditioned on $\theta^{0}$,
$$\mathrm{acc}_{\mathcal{W}}(\theta^{0}) = \Phi\Big(\frac{\langle\theta^{0}, v\rangle}{\|\theta^{0}\|}\Big) + o_P(1)\,,$$
and applying Lemma 7 yields $\mathrm{acc}_{\mathcal{W}}(\theta^{0}) = \frac12 + o_P(1)$.
In the other direction, we employ a uniform concentration argument. The proof of Proposition 3 establishes that $\langle\theta^{s+1} - \theta^{s}, v\rangle \ge c\,\|\theta^{s+1} - \theta^{s}\|$ for all $s \le t$ with high probability. Since $\theta^{t} = \theta^{0} + \sum_{s < t}(\theta^{s+1} - \theta^{s})$ and $\|\theta^{0}\| = 1$, Lemma 8 implies that $\langle\theta^{t}, v\rangle \ge c\,\eta t - o_P(1)$. Since $\eta t$ is bounded below by assumption, we obtain that $\langle\theta^{t}, v\rangle \ge c\,\|\theta^{t}\|$ with high probability.
Note that $\mathcal{W}$ is a random subset of $[n]$. For now, let us condition on this random variable. If we write $\Phi$ for the Gaussian CDF, then by the same reasoning as above, for any fixed $\theta$,
$$\mathbb{P}\big(\mathrm{sign}(\langle\theta, x_i\rangle) = \sigma_i\big) = \Phi\Big(\frac{\langle\theta, v\rangle}{\|\theta\|}\Big)\,, \qquad i \in \mathcal{W}\,.$$
Therefore, if $\langle\theta, v\rangle \ge c\,\|\theta\|$, then for any $\tau > 0$, we have
$$\mathbb{E}\,\psi_\tau\Big(\frac{\sigma_i\langle\theta, x_i\rangle}{\|\theta\|}\Big) \ge \Phi(c - \tau)\,, \qquad \psi_\tau(u) := \min\{1, \max\{0, u/\tau\}\}\,.$$
By construction, $\psi_\tau$ is $(1/\tau)$-Lipschitz and satisfies
$$\mathbf{1}\{u \ge \tau\} \le \psi_\tau(u) \le \mathbf{1}\{u > 0\}$$
for all $u \in \mathbb{R}$. In particular, we have
$$\mathrm{acc}_{\mathcal{W}}(\theta) \ge \frac{1}{|\mathcal{W}|}\sum_{i \in \mathcal{W}} \psi_\tau\Big(\frac{\sigma_i\langle\theta, x_i\rangle}{\|\theta\|}\Big)\,.$$
Denote the set of $\theta$ satisfying $\langle\theta, v\rangle \ge c\,\|\theta\|$ by $K$. Combining the last display with (14) yields
$$\inf_{\theta \in K}\,\mathrm{acc}_{\mathcal{W}}(\theta) \ge \Phi(c - \tau) - \sup_{\theta \in K}\,\Big|\frac{1}{|\mathcal{W}|}\sum_{i \in \mathcal{W}}\Big(\psi_\tau\Big(\frac{\sigma_i\langle\theta, x_i\rangle}{\|\theta\|}\Big) - \mathbb{E}\,\psi_\tau\Big(\frac{\sigma_i\langle\theta, x_i\rangle}{\|\theta\|}\Big)\Big)\Big|\,.$$
To control the last term, we employ symmetrization and contraction (see [21, Chapter 4]) to obtain
$$\mathbb{E}\sup_{\theta \in K}\,\Big|\frac{1}{|\mathcal{W}|}\sum_{i \in \mathcal{W}}\Big(\psi_\tau\Big(\frac{\sigma_i\langle\theta, x_i\rangle}{\|\theta\|}\Big) - \mathbb{E}\,\psi_\tau\Big(\frac{\sigma_i\langle\theta, x_i\rangle}{\|\theta\|}\Big)\Big)\Big| \le \frac{2}{\tau}\,\mathbb{E}\sup_{\theta \in K}\,\Big|\frac{1}{|\mathcal{W}|}\sum_{i \in \mathcal{W}} \epsilon_i\,\frac{\sigma_i\langle\theta, x_i\rangle}{\|\theta\|}\Big|\,,$$
where $\epsilon_1, \dots, \epsilon_n$ are independent Rademacher random variables. The final quantity is easily seen to be at most $\frac{2}{\tau}\,\mathbb{E}\,\big\|\frac{1}{|\mathcal{W}|}\sum_{i \in \mathcal{W}} \epsilon_i\, x_i\big\|$. Therefore, in expectation, we have
$$\inf_{\theta \in K}\,\mathrm{acc}_{\mathcal{W}}(\theta) \ge \Phi(c - \tau) - \frac{2}{\tau}\,\mathbb{E}\,\Big\|\frac{1}{|\mathcal{W}|}\sum_{i \in \mathcal{W}} \epsilon_i\, x_i\Big\|\,,$$
and a standard application of Azuma’s inequality implies that this bound also holds with high probability. Since $|\mathcal{W}| = \Delta n\,(1 + o_P(1))$ and $\theta^{t} \in K$ with high probability, there exists a positive constant $c_{\tau, \Delta}$ such that, with high probability,
$$\mathrm{acc}_{\mathcal{W}}(\theta^{t}) \ge \Phi(c - \tau) - c_{\tau, \Delta}\,.$$
By choosing $\tau$ sufficiently small and $n$ sufficiently large, this quantity can be made arbitrarily close to $\Phi(c)$.
Putting it all together, we have shown that $\mathrm{acc}_{\mathcal{W}}(\theta^{t}) \ge \Phi(c) - \epsilon$ with high probability, and that $\mathrm{acc}_{\mathcal{W}}(\theta^{0}) = \frac12 + o_P(1)$, which proves the desired claim. ∎
A.3 Vanishing gradients
We now show that, over the first iterations, the coefficients $g_i$ associated with the correctly labeled examples decrease, while the coefficients on mislabeled examples increase. For simplicity, we write $g_i^{t} := g_i(\theta^{t})$.
Proposition 5. There exists a positive constant $c$ such that, with high probability,
$$\frac{1}{|\mathcal{C}|}\sum_{i \in \mathcal{C}} g_i^{t} \le \frac12 - c \qquad \text{and} \qquad \frac{1}{|\mathcal{W}|}\sum_{i \in \mathcal{W}} g_i^{t} \ge \frac12 + c \qquad \text{for all } t \asymp 1/\eta\,.$$
That is, during the first stage, the coefficients on correct examples decrease while the coefficients on wrongly labeled examples increase.
Proof. Let us first consider
$$\frac{1}{|\mathcal{C}|}\sum_{i \in \mathcal{C}} g_i^{t}\,.$$
For fixed initialization $\theta^{0}$, the law of large numbers implies that this quantity is near
$$\mathbb{E}\,s\big(\langle\theta^{t}, v\rangle + \|\theta^{t}\|\,W\big)\,, \qquad W \sim \mathcal{N}(0, 1)\,,$$
where $s(u) := (1 + e^{u})^{-1}$, so that $g_i = s(y_i\langle\theta, x_i\rangle)$. Let us write $x = \sigma v + z$, where $z$ is a standard Gaussian vector. Then the fact that $s$ is Lipschitz and decreasing implies
$$\mathbb{E}\,s\big(\langle\theta^{t}, v\rangle + \|\theta^{t}\|\,W\big) \le \mathbb{E}\,s\big(\|\theta^{t}\|\,W\big) - c\,,$$
where we have used that $\langle\theta^{t}, v\rangle \ge c\,\eta t$ with high probability. By Lemma 7, $\|\theta^{t}\| \le 1 + \eta t$. Hence
$$\frac{1}{|\mathcal{C}|}\sum_{i \in \mathcal{C}} g_i^{t} \le \frac12 - c + o_P(1)\,,$$
where the equality $\mathbb{E}\,s\big(\|\theta^{t}\|\,W\big) = \frac12$ uses the fact that $s(u) + s(-u) = 1$ for all $u$. For $\eta$ sufficiently small, this quantity is strictly less than $\frac12$, so by choosing $\eta$ small enough we obtain the existence of a positive constant $c$ such that $\frac{1}{|\mathcal{C}|}\sum_{i \in \mathcal{C}} g_i^{t} \le \frac12 - c$. This proves the first claim.
The second claim is established by an analogous argument: for fixed initialization $\theta^{0}$, we have
$$\frac{1}{|\mathcal{W}|}\sum_{i \in \mathcal{W}} g_i^{t} = \mathbb{E}\,s\big(-\langle\theta^{t}, v\rangle + \|\theta^{t}\|\,W\big) + o_P(1)\,,$$
so as above we can conclude that
$$\frac{1}{|\mathcal{W}|}\sum_{i \in \mathcal{W}} g_i^{t} \ge \frac12 + c\,.$$
We likewise have, by another application of Lemma 9,
where we again have used that