LabelNoiseCorrection
Official implementation of the ICML 2019 paper "Unsupervised Label Noise Modeling and Loss Correction"
Despite being robust to small amounts of label noise, convolutional neural networks trained with stochastic gradient methods have been shown to easily fit random labels. When there is a mixture of correct and mislabelled targets, networks tend to fit the former before the latter. This suggests using a suitable two-component mixture model as an unsupervised generative model of sample loss values during training to allow online estimation of the probability that a sample is mislabelled. Specifically, we propose a beta mixture to estimate this probability and correct the loss by relying on the network prediction (the so-called bootstrapping loss). We further adapt mixup augmentation to drive our approach a step further. Experiments on CIFAR-10/100 and TinyImageNet demonstrate a robustness to label noise that substantially outperforms the recent state-of-the-art. Source code is available at https://git.io/fjsvE
Convolutional Neural Networks (CNNs) have recently become the standard approach for many computer vision tasks (DeTone et al., 2016; Ono et al., 2018; Beluch et al., 2018; Redmon et al., 2016; Zhao et al., 2017; Krishna et al., 2017). Their widespread use is attributable to their capability to model complex patterns (Ren et al., 2018) when vast amounts of labeled data are available. Obtaining such volumes of data, however, is not trivial and usually involves an error-prone automatic or manual labeling process (Wang et al., 2018a; Zlateski et al., 2018). These errors lead to noisy samples: samples annotated with incorrect or noisy labels. As a result, dealing with label noise is a common adverse scenario that requires attention to ensure useful visual representations can be learnt (Jiang et al., 2018b; Wang et al., 2018a; Wu et al., 2018; Jiang et al., 2018a; Zlateski et al., 2018). Automatically obtained noisy labels have previously been demonstrated useful for learning visual representations (Pathak et al., 2017; Gidaris et al., 2018); however, a recent study of the generalization capabilities of deep networks (Zhang et al., 2017) demonstrates that noisy labels are easily fit by CNNs, harming generalization. Similar overfitting arises with other biases that networks encounter during training, e.g., class imbalance (Alvi et al., 2018). However, before fitting label noise, CNNs fit the correctly labeled samples (clean samples), even under high levels of corruption (Figure 1, left).
Existing literature on training with noisy labels focuses primarily on loss correction approaches (Reed et al., 2015; Hendrycks et al., 2018; Jiang et al., 2018b). A well-known approach is the bootstrapping loss (Reed et al., 2015), which introduces a perceptual consistency term in the learning objective that assigns a weight to the current network prediction to compensate for the erroneous guiding of noisy samples. Other approaches modify class probabilities (Patrini et al., 2017; Hendrycks et al., 2018) by estimating the noise associated with each class, thus computing a loss that guides the training process towards the correct classes. Still other approaches use curriculum learning to formulate a robust learning procedure (Jiang et al., 2018b; Ren et al., 2018). Curriculum learning (Bengio et al., 2009) is based on the idea that ordering training examples in a meaningful (e.g. easy to hard) sequence might improve convergence and generalization. In the noisy label scenario, easy (hard) concepts are associated with clean (noisy) samples by re-weighting the loss for noisy samples so that they contribute less. Discarding noisy samples, however, potentially removes useful information about the data distribution. (Wang et al., 2018b) overcome this problem by introducing a similarity learning strategy that pulls representations of noisy samples away from clean ones. Finally, mixup data augmentation (Zhang et al., 2018) has recently demonstrated outstanding robustness against label noise without explicitly modeling it.
In light of these recent advances, this paper proposes a robust training procedure that avoids fitting noisy labels even under high levels of corruption (Figure 1, right), while using noisy samples to learn visual representations that achieve high classification accuracy. Contrary to most successful recent approaches that assume the existence of a known set of clean data (Ren et al., 2018; Hendrycks et al., 2018), we propose an unsupervised model of label noise based exclusively on the loss of each sample. We argue that clean and noisy samples can be modeled by fitting a two-component (clean-noisy) beta mixture model (BMM) to the loss values. The posterior probabilities under this model are then used to implement a dynamically weighted bootstrapping loss that deals robustly with noisy samples without discarding them. We provide experimental work demonstrating the strengths of our approach, which lead us to substantially outperform the related work. Our main contributions are as follows:
A simple yet effective unsupervised label noise model based on each sample's loss.
A loss correction approach that exploits the unsupervised label noise model to correct each sample loss, thus preventing overfitting to label noise.
Pushing the state-of-the-art one step forward by combining our approach with mixup data augmentation (Zhang et al., 2018).
Guiding mixup data augmentation to achieve convergence even under extreme label noise.
Recent efforts to deal with label noise address two scenarios (Wang et al., 2018b): closed-set and open-set label noise. In the closed-set scenario, the set of possible labels is known and fixed, and all samples, including noisy ones, have their true label in this set. In the open-set scenario, the true label of a noisy sample may lie outside the known label set; i.e. the sample may be out-of-distribution (Liang et al., 2018). The remainder of this section briefly reviews related work in the closed-set scenario considered in (Zhang et al., 2017), upon which we base our approach.
Several types of noise can be studied in the closed-set scenario, namely uniform or non-uniform random label noise. The former is also known as symmetric label noise and implies ground-truth labels flipped to a different class with uniform random probability. Non-uniform or class-conditional label noise, on the other hand, has different flipping probabilities for each class (Hendrycks et al., 2018). Previous research (Patrini et al., 2017) suggests that uniform label noise is more challenging than non-uniform.
A simple approach to dealing with label noise is to remove the corrupted data. This is not only challenging because difficult samples may be confused with noisy ones (Wang et al., 2018b), but it also implies not exploiting the noisy samples for representation learning. It has, however, recently been demonstrated (Ding et al., 2018) that it is useful to discard the labels of samples with a high probability of being incorrect and still exploit those samples in a semi-supervised setup.
Other approaches seek to relabel the noisy samples by modeling their noise through directed graphical models (Xiao et al., 2015), Conditional Random Fields (Vahdat, 2017), or CNNs (Veit et al., 2017). Unfortunately, to predict the true label, these approaches rely on the assumption that a small set of clean samples is always available, which limits their applicability. Tanaka et al. (Tanaka et al., 2018) have, however, recently demonstrated that it is possible to do unsupervised sample relabeling using the network predictions to predict hard or soft labels.
Loss correction approaches (Reed et al., 2015; Jiang et al., 2018b; Patrini et al., 2017; Zhang et al., 2018) modify either the loss directly, or the probabilities used to compute it, to compensate for the incorrect guidance provided by the noisy samples. (Reed et al., 2015) extend the loss with a perceptual term that introduces a certain reliance on the model prediction. Their approach is, however, limited in that the noisy label always affects the objective. (Patrini et al., 2017) propose a backward method that weights the loss of each sample using the inverse of a noise transition matrix T, which specifies the probability of one label being flipped to another. (Patrini et al., 2017) also present a forward method that, instead of operating directly on the loss, corrects the predicted probabilities by multiplying them by this matrix. (Hendrycks et al., 2018) correct the predicted probabilities using a corruption matrix computed from a model trained on a clean set of samples and its predictions on the corrupted data. Other approaches focus on re-weighting the contribution of noisy samples to the loss. (Jiang et al., 2018b) propose an alternating minimization framework in which a mentor network learns a curriculum (i.e. a weight for each sample) to guide a student network that learns under label noise conditions. Similarly, (Guo et al., 2018) present a curriculum learning approach based on an unsupervised estimation of data complexity through its distribution in a feature space, which benefits from training with both clean and noisy samples. (Ren et al., 2018) weight each sample in the loss based on a comparison of its gradient directions in training against those on validation (i.e. on a clean set). Note that, as with relabeling approaches, the assumption of clean data availability limits the applicability of many of these methods.
Conversely, approaches like (Wang et al., 2018b) do not rely on clean data: they perform unsupervised noise label detection to help re-weight the loss, while not discarding noisy samples, which are exploited in a similarity learning framework to pull their representations away from the true samples of each class.
In contrast to the aforementioned literature, we propose to deal with noisy labels using exclusively the training loss of each sample without consulting any clean set. Specifically, we fit a two-component beta mixture model to the training loss of each sample to model clean and noisy samples. We use this unsupervised model to implement a loss correction approach that benefits both from bootstrapping (Reed et al., 2015) and mixup data augmentation (Zhang et al., 2018) to deal with the closed-set label noise scenario.
Image classification can be formulated as the problem of learning a model h_θ from a set of training examples D = {(x_i, y_i)}_{i=1}^{N}, with y_i ∈ {0, 1}^C being the one-hot encoding of the ground-truth label corresponding to x_i. In our case, h_θ is a CNN and θ represents the model parameters (weights and biases). As we are considering classification under label noise, the label y_i can be noisy (i.e. (x_i, y_i) is a noisy sample). The parameters θ are fit by optimizing a loss function, e.g. categorical cross-entropy:

\ell(\theta) = -\sum_{i=1}^{N} y_i^T \log\left(h_\theta(x_i)\right),    (1)

where h_θ(x_i) are the softmax probabilities produced by the model and log(·) is applied elementwise. The remainder of this section describes our noisy sample modeling technique and how to extend the loss in Eq. (1) based on this model to handle label noise. For notational simplicity, we write h_i = h_θ(x_i) and ℓ_i = −y_i^T log(h_i) in the remainder of the paper.
We aim to identify the noisy samples in the dataset so that we can implement a loss correction approach (see Subsections 3.2 and 3.3). Our essential observation is simple: random labels take longer to learn than clean labels, meaning that noisy samples have higher loss during the early epochs of training (see Figure 1), allowing clean and noisy samples to be distinguished from the loss distribution alone (see Figure 2). Modern CNNs trained with stochastic gradient methods typically do not fit the noisy examples until substantial progress has been made in fitting the clean ones. Therefore, one can infer from the loss value whether a sample is more likely to be clean or noisy. We propose to use a mixture distribution model for this purpose.

Mixture models are a widely used unsupervised modeling technique (Stauffer & Grimson, 1999; Permuter et al., 2006; Ma & Leijon, 2011), with the Gaussian Mixture Model (GMM) (Permuter et al., 2006) being the most popular. The probability density function (pdf) of a mixture model of K components on the loss ℓ is defined as:

p(\ell) = \sum_{k=1}^{K} \lambda_k \, p(\ell \mid k),    (2)

where λ_k are the mixing coefficients for the convex combination of the individual pdfs p(ℓ | k). In our case, we can fit a two-component GMM (i.e. K = 2, one component each for clean and noisy samples) to model their loss distribution (Figure 2). Unfortunately, the Gaussian is a poor approximation to the clean-set distribution, which exhibits high skew toward zero. The more flexible beta distribution (Ma & Leijon, 2011) allows modelling both symmetric and skewed distributions over [0, 1]; the beta mixture model (BMM) better approximates the loss distribution for mixtures of clean and noisy samples (Figure 2). Empirically, we also found the BMM improves ROC-AUC for clean-noisy label classification over the GMM by around 5 points for 80% label noise in CIFAR-10 when using the training objective in Section 3.3 (see Appendix A). The beta distribution over a (max-)normalized loss ℓ ∈ [0, 1] is defined to have pdf:

p(\ell \mid \alpha, \beta) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\,\Gamma(\beta)} \, \ell^{\alpha - 1} (1 - \ell)^{\beta - 1},    (3)

where α, β > 0 and Γ(·) is the Gamma function, and the mixture pdf is given by substituting Eq. (3) into Eq. (2).
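As a reference, the mixture density of Eqs. (2)-(3) can be sketched in a few lines of Python; the function names and the list-based parameterization below are ours, not taken from the released code.

```python
import math

def beta_pdf(x, a, b):
    """Beta density of Eq. (3): Gamma(a+b)/(Gamma(a)Gamma(b)) * x^(a-1) * (1-x)^(b-1)."""
    coef = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return coef * x ** (a - 1) * (1 - x) ** (b - 1)

def bmm_pdf(x, lambdas, alphas, betas):
    """Mixture density of Eq. (2): sum_k lambda_k * p(x | alpha_k, beta_k)."""
    return sum(l * beta_pdf(x, a, b) for l, a, b in zip(lambdas, alphas, betas))
```

With alpha = beta = 1 each component reduces to the uniform density on [0, 1], which is a convenient sanity check.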
We use an Expectation Maximization (EM) procedure to fit the BMM to the observations. Specifically, we introduce latent variables γ_k(ℓ), defined as the posterior probability of the point ℓ having been generated by mixture component k. In the E-step we fix the parameters λ_k, α_k, β_k and update the latent variables using Bayes' rule:

\gamma_k(\ell) = \frac{\lambda_k \, p(\ell \mid \alpha_k, \beta_k)}{\sum_{j=1}^{K} \lambda_j \, p(\ell \mid \alpha_j, \beta_j)}.    (4)

Given fixed γ_k(ℓ), the M-step estimates the distribution parameters α_k, β_k using a weighted version of the method of moments:

\alpha_k = \bar{\ell}_k \left( \frac{\bar{\ell}_k (1 - \bar{\ell}_k)}{s_k^2} - 1 \right), \qquad \beta_k = \frac{\alpha_k (1 - \bar{\ell}_k)}{\bar{\ell}_k},    (5)

with ℓ̄_k being a weighted average of the losses ℓ_i corresponding to the training samples x_i:

\bar{\ell}_k = \frac{\sum_{i=1}^{N} \gamma_k(\ell_i) \, \ell_i}{\sum_{i=1}^{N} \gamma_k(\ell_i)},    (6)

and s_k² being a weighted variance estimate:

s_k^2 = \frac{\sum_{i=1}^{N} \gamma_k(\ell_i) \, (\ell_i - \bar{\ell}_k)^2}{\sum_{i=1}^{N} \gamma_k(\ell_i)}.    (7)

The updated mixing coefficients are then calculated in the usual way:

\lambda_k = \frac{1}{N} \sum_{i=1}^{N} \gamma_k(\ell_i).    (8)

The above E- and M-steps are iterated until convergence or until a maximum number of iterations (10 in our experiments) is reached. Note that the algorithm becomes numerically unstable for observations very near zero or one. Our implementation simply sidesteps this issue by bounding the observations to [ε, 1 − ε] instead of [0, 1], for a small ε.
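The EM procedure above is compact enough to sketch directly. The following is a minimal NumPy version under our own initialization choices (component 0 skewed toward low losses, component 1 toward high losses); it illustrates Eqs. (4)-(8) and is not the authors' released implementation.

```python
import math
import numpy as np

def beta_pdf(x, a, b):
    """Beta density evaluated elementwise on an array x (scalar a, b > 0)."""
    logc = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return np.exp(logc + (a - 1.0) * np.log(x) + (b - 1.0) * np.log(1.0 - x))

def fit_bmm(losses, n_iters=10, eps=1e-4):
    """Fit a two-component beta mixture to per-sample losses with EM.

    E-step: responsibilities via Bayes' rule (Eq. 4).
    M-step: weighted method of moments for alpha_k, beta_k (Eqs. 5-7)
            and the usual mixing-coefficient update (Eq. 8).
    Returns gamma (N x 2); column 0 ~ clean, column 1 ~ noisy.
    """
    x = np.clip(losses / losses.max(), eps, 1.0 - eps)  # normalize, bound away from {0, 1}
    alphas = np.array([2.0, 4.0])   # component 0 initialized toward low losses
    betas = np.array([4.0, 2.0])    # component 1 toward high losses
    lambdas = np.array([0.5, 0.5])
    for _ in range(n_iters):
        # E-step (Eq. 4): per-sample responsibilities
        weighted = np.stack([lambdas[k] * beta_pdf(x, alphas[k], betas[k])
                             for k in (0, 1)], axis=1)
        gamma = weighted / weighted.sum(axis=1, keepdims=True)
        # M-step (Eqs. 5-8): weighted mean/variance -> method of moments
        for k in (0, 1):
            w = gamma[:, k]
            mean = np.sum(w * x) / np.sum(w)
            var = np.sum(w * (x - mean) ** 2) / np.sum(w)
            common = mean * (1.0 - mean) / var - 1.0
            alphas[k] = mean * common
            betas[k] = (1.0 - mean) * common
        lambdas = gamma.mean(axis=0)
    return gamma, (alphas, betas, lambdas)
```

On synthetic losses drawn from two well-separated beta distributions, the responsibilities recover the clean/noisy split after a handful of iterations.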
Finally, we obtain the probability of a sample being clean or noisy through the posterior probability:

p(k \mid \ell_i) = \frac{\lambda_k \, p(\ell_i \mid \alpha_k, \beta_k)}{p(\ell_i)},    (9)

where k = 0 (k = 1) denotes the clean (noisy) component.
Note that the loss used to estimate the mixture distribution is always the standard cross-entropy loss (Figure 1), computed for all samples after every epoch. This is not necessarily the loss used for training, which may contain a corrective component to deal with label noise.
Carefully selecting a loss function to guide the learning process is of particular importance under label noise. The standard categorical cross-entropy loss (Eq. (1)) is ill-suited to the task, as it encourages fitting label noise (Zhang et al., 2017). The static hard bootstrapping loss proposed in (Reed et al., 2015) provides a mechanism to deal with label noise by adding a perceptual term to the standard cross-entropy loss that helps to correct the training objective:

\ell_B = -\sum_{i=1}^{N} \left( (1 - w_i) \, y_i + w_i \, z_i \right)^T \log(h_i),    (10)

where z_i is the one-hot vector of the class predicted by the model for sample x_i, and w_i weights the model prediction in the loss function. (Reed et al., 2015) use w_i = 0.2 for all samples. We refer to this approach as static hard bootstrapping. (Reed et al., 2015) also proposed a static soft bootstrapping loss (w_i = 0.05) that uses the predicted softmax probabilities h_i instead of the class prediction z_i. Unfortunately, using a fixed weight for all samples does not prevent fitting the noisy ones (Table 1 in Subsection 4.2) and, more importantly, applying a small fixed weight to the prediction (or the probabilities) limits the correction of a hypothetical noisy label y_i.
We propose dynamic hard and soft bootstrapping losses by using our noise model to individually weight each sample; i.e., w_i is dynamically set to p(k = noisy | ℓ_i), and the BMM is re-estimated after each training epoch from the cross-entropy loss ℓ_i of every sample. Therefore, clean samples rely on their ground-truth label (1 − w_i is large), while noisy ones let their loss be dominated by their class prediction or their predicted probabilities (w_i is large), for the hard and soft alternatives respectively. Note that in mature stages of training the CNN model should provide a good estimate of the true class for noisy samples. Subsection 4.2 compares static and dynamic bootstrapping, showing that dynamic bootstrapping gives superior results.
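Both bootstrapping variants of Eq. (10) can be sketched as below; the only difference between the static and dynamic versions is whether `w` holds a constant (e.g. 0.2 everywhere) or the per-sample BMM posterior p(noisy | loss). Function names are ours, and we batch-average rather than sum.

```python
import numpy as np

def hard_bootstrap_loss(probs, y_onehot, w):
    """Hard bootstrapping loss (Eq. 10): per-sample targets (1 - w_i) y_i + w_i z_i,
    where z_i is the one-hot arg-max prediction of the model."""
    z = np.eye(probs.shape[1])[probs.argmax(axis=1)]      # hard predictions z_i
    target = (1.0 - w)[:, None] * y_onehot + w[:, None] * z
    return -np.sum(target * np.log(probs), axis=1).mean()

def soft_bootstrap_loss(probs, y_onehot, w):
    """Soft variant: the predicted probabilities h_i replace the hard z_i."""
    target = (1.0 - w)[:, None] * y_onehot + w[:, None] * probs
    return -np.sum(target * np.log(probs), axis=1).mean()
```

With w = 0 both reduce to plain cross-entropy; with w = 1 the hard variant trusts the arg-max prediction entirely.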
Recently, (Zhang et al., 2018) proposed a data augmentation technique named mixup that exhibits strong robustness to label noise. This technique trains on convex combinations of sample pairs (x_p and x_q) and of their corresponding labels (y_p and y_q), implemented by mixing the per-label losses:

x = \delta x_p + (1 - \delta) x_q,    (11)

\ell = \delta \ell_p + (1 - \delta) \ell_q,    (12)

where δ is randomly sampled from a beta distribution Be(α, α), with α set to high values when learning with label noise so that δ tends to be close to 0.5. This combination regularizes the network to favor simple linear behavior between training samples, which reduces oscillations in regions far from them. Regarding label noise, mixup provides a mechanism to combine clean and noisy samples, computing a more representative loss to guide the training process. Even when combining two noisy samples, the computed loss can still be useful, as one of the noisy samples may (by chance) carry the true label of the other. As for preventing overfitting to noisy samples, the fact that samples and their labels are mixed favors learning the structure of the data while hindering learning of the unstructured noise.
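mixup within a batch can be sketched as follows. Mixing each batch with a shuffled copy of itself is a common implementation choice, not mandated by Eqs. (11)-(12); note that mixing one-hot labels is equivalent to mixing the two cross-entropy losses of Eq. (12).

```python
import numpy as np

def mixup_batch(x, y, alpha=32.0, rng=None):
    """mixup (Eqs. 11-12): convex combination of the batch with a shuffled
    copy of itself. delta ~ Beta(alpha, alpha); a high alpha concentrates
    delta around 0.5, the regime the paper uses under label noise."""
    if rng is None:
        rng = np.random.default_rng()
    delta = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = delta * x + (1.0 - delta) * x[perm]
    y_mix = delta * y + (1.0 - delta) * y[perm]
    return x_mix, y_mix, delta
```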
Mixup achieves robustness to label noise through appropriate combinations of training examples. Under high levels of noise, however, mixing pairs in which both samples have incorrect labels becomes prevalent, which reduces the effectiveness of the method. We propose to fuse mixup and our dynamic bootstrapping to implement a robust per-sample loss correction approach:

\ell^* = -\delta \left( (1 - w_p) \, y_p + w_p \, z_p \right)^T \log(h) - (1 - \delta) \left( (1 - w_q) \, y_q + w_q \, z_q \right)^T \log(h),    (13)

where h denotes the softmax probabilities of the model for the mixed input x. This loss defines the hard alternative, while the soft one can easily be defined by replacing z_p and z_q by h_p and h_q. These hard and soft losses exploit mixup's advantages while correcting the labels through dynamic bootstrapping, i.e. the weights w_p and w_q that control the confidence in the ground-truth labels and the network predictions are inferred from our unsupervised noise model: w_p = p(k = noisy | ℓ_p) and w_q = p(k = noisy | ℓ_q). We compute h_p, h_q, z_p, and z_q with an extra forward pass, as it is not straightforward to obtain the predictions for samples x_p and x_q from the mixed probabilities h.
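The hard variant of Eq. (13) might then look as follows (batch-averaged), assuming the per-sample posteriors `w_p`, `w_q` and the hard predictions `z_p`, `z_q` from the extra forward pass are already available; the function name is ours.

```python
import numpy as np

def m_dyr_h_loss(probs, y_p, y_q, z_p, z_q, w_p, w_q, delta):
    """Hard dynamic-bootstrapping mixup loss (Eq. 13).
    probs: softmax for the mixed inputs; z_p, z_q: one-hot hard predictions
    from an extra forward pass on the unmixed x_p, x_q; w = p(noisy | loss)."""
    t_p = (1.0 - w_p)[:, None] * y_p + w_p[:, None] * z_p   # bootstrapped target, sample p
    t_q = (1.0 - w_q)[:, None] * y_q + w_q[:, None] * z_q   # bootstrapped target, sample q
    logp = np.log(probs)
    per_sample = -(delta * np.sum(t_p * logp, axis=1)
                   + (1.0 - delta) * np.sum(t_q * logp, axis=1))
    return per_sample.mean()
```

With w = 0 and delta = 1 this reduces to plain cross-entropy against y_p, which is a useful sanity check.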
Ideally, the proposed loss would lead to a better model by trusting progressively better predictions during training. For high levels of label noise, however, the network predictions are unreliable, and dynamic bootstrapping may not converge when combined with the complex signal that mixup provides. This is reasonable: under high levels of noise, most samples are guided by the network's prediction in the bootstrapping loss, encouraging the network to predict the same class for all of them to minimize the loss. To overcome this issue we apply the regularization term used in (Tanaka et al., 2018), which prevents the assignment of all samples to a single class:

R = \sum_{c=1}^{C} p_c \log\left( \frac{p_c}{\bar{h}_c} \right),    (14)

where p_c denotes the prior probability of class c and \bar{h}_c is the mean softmax probability of the model for class c across all samples in the dataset. We assume a uniform distribution for the prior probabilities (i.e. p_c = 1/C) and approximate \bar{h}_c using mini-batches, as done in (Tanaka et al., 2018). We add the term η R to the loss in Eq. (13), with η being the regularization coefficient (set to one in all experiments). Subsection 4.3 presents the results of this approach, and Subsection 4.5 demonstrates its superior performance in comparison to the state-of-the-art.

We thoroughly validate our approach on two well-known image classification datasets: CIFAR-10 and CIFAR-100. The former contains 10 classes, the latter 100. Both have 50K color images for training and 10K for validation at resolution 32×32. We use a PreAct ResNet-18 (He et al., 2016) and train it using SGD with a batch size of 128. We use two different schemes for the learning rate policy and number of epochs depending on whether mixup is used (see Appendix B for further details). We further experiment on the TinyImageNet (a subset of ImageNet (Deng et al., 2009)) and Clothing1M (Xiao et al., 2015) datasets to test the generality of our approach beyond CIFAR data (Subsection 4.6). TinyImageNet contains 200 classes with 100K training, 10K validation, and 10K test images at resolution 64×64, while Clothing1M contains 14 classes with 1M real-world noisy training samples and clean subsets for training (47K), validation (14K), and test (10K). We follow the label noise addition criterion of (Zhang et al., 2017, 2018; Tanaka et al., 2018), which consists of randomly selecting labels for a percentage of the training data using all possible labels (i.e. the true label could be randomly maintained). Note that there is another popular label noise criterion (Jiang et al., 2018b; Wang et al., 2018b) in which the true label cannot be selected when performing random labeling. We also run our proposed approach under these conditions in Subsection 4.5 for comparison.
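Returning to the regularization term R of Eq. (14): with a uniform prior it reduces to a KL divergence between 1/C and the batch-mean softmax. A minimal sketch (the helper name is ours):

```python
import numpy as np

def class_balance_reg(probs):
    """Regularization of Eq. (14) with a uniform prior p_c = 1/C:
    sum_c p_c * log(p_c / h_bar_c), where h_bar_c is the batch-mean
    softmax probability for class c (mini-batch approximation)."""
    h_bar = probs.mean(axis=0)
    p = 1.0 / probs.shape[1]
    return np.sum(p * np.log(p / h_bar))
```

The term is zero when the batch-mean prediction is uniform and grows as predictions collapse toward a single class, which is exactly the degenerate solution it penalizes.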
| Alg./Noise level (%) | | 0 | 20 | 50 | 80 |
|---|---|---|---|---|---|
| CE | Best | 93.8 | 89.7 | 84.8 | 67.8 |
| | Last | 93.7 | 81.8 | 55.9 | 25.3 |
| ST-S | Best | 93.9 | 89.7 | 84.8 | 67.8 |
| | Last | 93.9 | 81.7 | 55.9 | 24.8 |
| ST-H | Best | 93.8 | 89.7 | 84.8 | 68.0 |
| | Last | 93.8 | 81.4 | 56.4 | 25.7 |
| DY-S | Best | 93.6 | 89.7 | 84.8 | 67.8 |
| | Last | 93.4 | 83.3 | 57.0 | 27.8 |
| DY-H | Best | 93.3 | 89.7 | 84.8 | 71.7 |
| | Last | 92.9 | 83.4 | 65.0 | 64.2 |
CIFAR-10:

| Alg./Noise level (%) | | 0 | 20 | 50 | 80 |
|---|---|---|---|---|---|
| CE | Best | 94.7 | 86.8 | 79.8 | 63.3 |
| | Last | 94.6 | 82.9 | 58.4 | 26.3 |
| M (Zhang et al., 2018) | Best | 95.3 | 95.6 | 87.1 | 71.6 |
| | Last | 95.2 | 92.3 | 77.6 | 46.7 |
| M-DYR-S | Best | 93.3 | 93.5 | 89.7 | 77.3 |
| | Last | 93.0 | 93.1 | 89.3 | 74.1 |
| M-DYR-H | Best | 93.6 | 94.0 | 92.0 | 86.8 |
| | Last | 93.4 | 93.8 | 91.9 | 86.6 |

CIFAR-100:

| Alg./Noise level (%) | | 0 | 20 | 50 | 80 |
|---|---|---|---|---|---|
| CE | Best | 76.1 | 62.0 | 46.6 | 19.9 |
| | Last | 75.9 | 62.0 | 37.7 | 8.9 |
| M (Zhang et al., 2018) | Best | 74.8 | 67.8 | 57.3 | 30.8 |
| | Last | 74.4 | 66.0 | 46.6 | 17.6 |
| M-DYR-S | Best | 71.9 | 67.9 | 61.7 | 38.8 |
| | Last | 67.4 | 67.5 | 58.9 | 34.0 |
| M-DYR-H | Best | 70.3 | 68.7 | 61.7 | 48.2 |
| | Last | 66.2 | 68.5 | 58.8 | 47.6 |
Table 1 presents the results for static (ST) and dynamic (DY) bootstrapping on CIFAR-10. Although ST achieves best-epoch performance comparable to DY (except for 80% noise, where DY is much better), after the final epoch (last) DY outperforms ST. The improvements are particularly remarkable for 80% label noise (from 25.7% for ST-H to 64.2% for DY-H). Comparing the soft and hard alternatives, hard bootstrapping gives superior performance, which is consistent with the findings of the original paper (Reed et al., 2015). The overall results demonstrate that applying per-sample weights (DY) benefits training by allowing noisy labels to be fully corrected.
The proposed dynamic hard bootstrapping exhibits better performance than the state-of-the-art static version (Reed et al., 2015). It does not, however, surpass mixup data augmentation, which exhibits excellent robustness to label noise (M in Table 2). The fusion approach of Eq. (13) (M-DYR-H) and its soft alternative (M-DYR-S), which combine the per-sample weighting of dynamic bootstrapping with mixup's robustness to fitting noisy labels, achieve a remarkable improvement in accuracy under high noise levels. Table 2 reports outstanding accuracy for 80% label noise, a case where we improve upon mixup (Zhang et al., 2018) in best (last) accuracy from 71.6 (46.7) on CIFAR-10 and 30.8 (17.6) on CIFAR-100 to 86.8 (86.6) and 48.2 (47.6), respectively, using the hard alternative (M-DYR-H). It is important to highlight that we achieve quite similar best and last performance for all levels of label noise on the CIFAR datasets, indicating that the proposed method is robust to varying noise levels. Figure 3 shows uniform manifold approximation and projection (UMAP) embeddings (McInnes et al., 2018) of the 512 features in the penultimate fully-connected layer of PreAct ResNet-18 trained using our method, and compares them with those found using cross-entropy and mixup. The separation among classes appears visually more distinct using the proposed objective.
CIFAR-10:

| Alg./Noise level (%) | | 70 | 80 | 85 | 90 |
|---|---|---|---|---|---|
| M-DYR-H | Best | 89.6 | 86.8 | 71.6 | 40.8 |
| | Last | 89.6 | 86.6 | 71.4 | 9.9 |
| MD-DYR-H | Best | 86.6 | 83.2 | 79.4 | 56.7 |
| | Last | 85.2 | 80.5 | 77.3 | 50.0 |
| MD-DYR-SH | Best | 84.6 | 82.4 | 79.1 | 69.1 |
| | Last | 80.8 | 77.8 | 73.9 | 68.7 |

CIFAR-100:

| Alg./Noise level (%) | | 70 | 80 | 85 | 90 |
|---|---|---|---|---|---|
| M-DYR-H | Best | 54.4 | 48.2 | 29.9 | 12.5 |
| | Last | 52.5 | 47.6 | 29.4 | 8.6 |
| MD-DYR-H | Best | 54.4 | 47.7 | 19.8 | 13.5 |
| | Last | 50.8 | 41.7 | 8.3 | 3.9 |
| MD-DYR-SH | Best | 53.1 | 41.6 | 28.8 | 24.3 |
| | Last | 47.7 | 35.4 | 24.4 | 20.5 |
Table 3 explores convergence under extreme label noise conditions, showing that the proposed approach M-DYR-H fails to converge in CIFAR-10 with 90% label noise. Here we propose minor modifications to achieve convergence.
When clean and noisy samples are combined by mixup, they are given approximately the same importance (δ ≈ 0.5, since α is set high). While noisy samples benefit from mixing with clean ones, clean samples are contaminated by noisy ones, whose training objective is incorrectly modified. We propose a dynamic mixup strategy in the input that uses a different weight for each sample to reduce the contribution of noisy samples when they are mixed with clean ones:

x = \frac{\delta_p x_p + \delta_q x_q}{\delta_p + \delta_q},    (15)

where δ_p = 1 − w_p and δ_q = 1 − w_q, i.e. we use the noise probability from our BMM to guide mixup in the input. Note that for clean-clean and noisy-noisy pairs the behavior remains similar to mixup, with both samples contributing weights close to 0.5. This configuration simplifies the input to the network when mixing in a sample whose label is potentially useless, while retaining the strengths of mixup for clean-clean and noisy-noisy combinations. It is used together with the original mixup strategy on the loss (Eq. (13)) to benefit from the regularization that an additional label provides. Table 3 presents the results of this approach (MD-DYR-H), which exhibits more stable convergence for 90% label noise on both datasets.
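Under our reading of Eq. (15) (per-sample weights δ = 1 − w, renormalized to sum to one), dynamic input mixup can be sketched as:

```python
import numpy as np

def dynamic_mixup(x_p, x_q, w_p, w_q, eps=1e-8):
    """Dynamic input mixup: per-sample weights delta = 1 - p(noisy | loss),
    renormalized to sum to one, so a clean sample dominates a noisy partner
    while clean-clean and noisy-noisy pairs fall back to ~0.5/0.5 mixing."""
    d_p = 1.0 - w_p
    d_q = 1.0 - w_q
    s = d_p + d_q + eps        # eps guards the degenerate noisy-noisy corner case
    return (d_p / s)[:, None] * x_p + (d_q / s)[:, None] * x_q
```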
Table 2 showed that hard bootstrapping works better than the soft alternative. Unfortunately, hard bootstrapping under high levels of label noise causes large variations in the loss that lead to drops in performance. To ameliorate these instabilities, we propose a decreasing softmax technique (Vermorel & Mohri, 2005) that progressively moves from soft to hard dynamic bootstrapping. This is implemented by modifying the temperature T of the softmax:

h_c(x) = \frac{e^{s_c(x) / T}}{\sum_{j=1}^{C} e^{s_j(x) / T}},    (16)

where s_c(x) denotes the score obtained in the last layer of the CNN for class c of sample x. By default, T = 1 gives the soft alternative of Eq. (13). To move from soft to hard bootstrapping, we linearly reduce the temperature applied to the predictions of x_p and x_q until reaching a final temperature at a certain epoch (epoch 200 in our experiments). We experimented with linear, logarithmic, tanh, and step-down temperature decays with similar results. This decreasing-softmax variant, MD-DYR-SH, obtains much improved accuracy for 90% label noise (69.1 on CIFAR-10 and 24.3 on CIFAR-100), while slightly decreasing accuracy compared to M-DYR-H and MD-DYR-H at lower noise levels. Note that we significantly outperform the best state-of-the-art we are aware of for 90% label noise, which is 58.3% and 58.0% best and last validation accuracy (reported in (Tanaka et al., 2018) with a PreAct ResNet-32 on CIFAR-10). The training process is slightly modified to introduce dynamic mixup (epoch 106) before bootstrapping (epoch 111) for MD-DYR-H and MD-DYR-SH.
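A sketch of the temperature softmax of Eq. (16) together with a linear decay schedule; the start epoch and final temperature below are illustrative placeholders, not the paper's exact values.

```python
import numpy as np

def softmax_T(scores, T):
    """Temperature softmax (Eq. 16): h_c = exp(s_c / T) / sum_j exp(s_j / T).
    T = 1 gives the soft targets; T -> 0 approaches hard arg-max targets."""
    z = scores / T
    z = z - z.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def linear_T(epoch, start_epoch, end_epoch, T_end):
    """Linearly anneal the temperature from 1 toward T_end between two epochs
    (schedule endpoints here are illustrative hyperparameters)."""
    if epoch <= start_epoch:
        return 1.0
    if epoch >= end_epoch:
        return T_end
    frac = (epoch - start_epoch) / (end_epoch - start_epoch)
    return 1.0 + frac * (T_end - 1.0)
```

Lowering T sharpens the distribution toward the arg-max class, which is exactly the soft-to-hard transition the decreasing softmax exploits.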
CIFAR-10:

| Alg./Noise level (%) | | 0 | 20 | 50 | 80 | 90 |
|---|---|---|---|---|---|---|
| (Reed et al., 2015)* | Best | 94.7 | 86.8 | 79.8 | 63.3 | 42.9 |
| | Last | 94.6 | 82.9 | 58.4 | 26.8 | 17.0 |
| (Patrini et al., 2017)* | Best | 94.7 | 86.8 | 79.8 | 63.3 | 42.9 |
| | Last | 94.6 | 83.1 | 59.4 | 26.2 | 18.8 |
| (Zhang et al., 2018)* | Best | 95.3 | 95.6 | 87.1 | 71.6 | 52.2 |
| | Last | 95.2 | 92.3 | 77.6 | 46.7 | 43.9 |
| M-DYR-H | Best | 93.6 | 94.0 | 92.0 | 86.8 | 40.8 |
| | Last | 93.4 | 93.8 | 91.9 | 86.6 | 9.9 |
| MD-DYR-SH | Best | 93.6 | 93.8 | 90.6 | 82.4 | 69.1 |
| | Last | 92.7 | 93.6 | 90.3 | 77.8 | 68.7 |

CIFAR-100:

| Alg./Noise level (%) | | 0 | 20 | 50 | 80 | 90 |
|---|---|---|---|---|---|---|
| (Reed et al., 2015)* | Best | 76.1 | 62.1 | 46.6 | 19.9 | 10.2 |
| | Last | 75.9 | 62.0 | 37.9 | 8.9 | 3.8 |
| (Patrini et al., 2017)* | Best | 75.4 | 61.5 | 46.6 | 19.9 | 10.2 |
| | Last | 75.2 | 61.4 | 37.3 | 9.0 | 3.4 |
| (Zhang et al., 2018)* | Best | 74.8 | 67.8 | 57.3 | 30.8 | 14.6 |
| | Last | 74.4 | 66.0 | 46.6 | 17.6 | 8.1 |
| M-DYR-H | Best | 70.3 | 68.7 | 61.7 | 48.2 | 12.5 |
| | Last | 66.2 | 68.5 | 58.8 | 47.6 | 8.6 |
| MD-DYR-SH | Best | 73.3 | 73.9 | 66.1 | 41.6 | 24.3 |
| | Last | 71.3 | 73.4 | 65.4 | 35.4 | 20.5 |
CIFAR-10:

| Algorithm | Architecture | 20% | 40% | 60% | 80% |
|---|---|---|---|---|---|
| (Jiang et al., 2018b) | WRN-101 | 92.0 | 89.0 | - | 49.0 |
| (Ma et al., 2018) | GCNN-12 | 85.1 | 83.4 | 72.8 | - |
| (Ren et al., 2018) | WRN-28 | - | 86.9 | - | - |
| (Wang et al., 2018b) | GCNN-7 | 81.4 | 78.2 | - | - |
| M-DYR-H | PRN-18 | 94.0 | 92.8 | 90.3 | 46.3 |
| MD-DYR-SH | PRN-18 | 93.8 | 92.3 | 86.1 | 74.1 |

CIFAR-100:

| Algorithm | Architecture | 20% | 40% | 60% | 80% |
|---|---|---|---|---|---|
| (Jiang et al., 2018b) | WRN-101 | 73.0 | 68.0 | - | 35.0 |
| (Ma et al., 2018) | RN-44 | 62.2 | 52.0 | 42.3 | - |
| (Ren et al., 2018) | WRN-28 | - | 61.3 | - | - |
| M-DYR-H | PRN-18 | 70.0 | 64.4 | 58.1 | 45.5 |
| MD-DYR-SH | PRN-18 | 73.7 | 70.1 | 59.5 | 39.5 |
Table 4 compares against related works for different levels of label noise using a common architecture and the 300-epoch training scheme (see Subsection 4.1). We introduce bootstrapping at epoch 105 both for (Reed et al., 2015) and for the proposed methods, estimate the noise transition matrix of (Patrini et al., 2017) at epoch 75 (as done in (Hendrycks et al., 2018)), and use the configuration reported in (Zhang et al., 2018) for mixup. We outperform the related work in the presence of label noise, obtaining remarkable improvements for high levels of noise (80% and 90%), where the compared approaches neither learn as well from the noisy samples (see best accuracy) nor prevent fitting the noisy labels (see last accuracy).
As noted in Subsection 4.1, when introducing label noise the true label can be excluded from the candidates. In this case label noise is defined as the percentage of incorrect labels instead of random ones (i.e. the criterion followed in previous experiments), a criterion adopted by several other authors (Jiang et al., 2018b; Ma et al., 2018; Ren et al., 2018; Wang et al., 2018b). We also run our proposed approach under this setup to allow quantitative comparison (Table 5). The proposed method outperforms all related work in CIFAR-10 and CIFAR-100 with MD-DYR-SH, while the results for M-DYR-H are slightly below those of (Jiang et al., 2018b) for low label noise levels in CIFAR-100. Nevertheless, these results should be interpreted with care due to the different architectures employed and the use of sets of clean data during training in (Jiang et al., 2018b) and (Ren et al., 2018).
| Alg./Noise level (%) | | 20 | 50 | 80 |
|---|---|---|---|---|
| (Zhang et al., 2018)* | Best | 53.2 | 41.7 | 18.9 |
| | Last | 49.4 | 31.1 | 8.7 |
| M-DYR-H | Best | 51.8 | 44.4 | 18.3 |
| | Last | 51.6 | 43.6 | 17.7 |
| MD-DYR-SH | Best | 60.0 | 50.4 | 24.4 |
| | Last | 59.8 | 50.0 | 19.6 |
Table 6 shows the results of the proposed approaches M-DYR-H and MD-DYR-SH compared to mixup (Zhang et al., 2018) on TinyImageNet to demonstrate that our approach is useful far from CIFAR data. The proposed approach clearly outperforms (Zhang et al., 2018) for different levels of label noise, obtaining consistent results with the CIFAR experiments. Note that we use the same network, hyperparameters, and learning rate policy as with CIFAR. Furthermore, we tested our approach in real-world label noise by evaluating our method on Clothing1M (Xiao et al., 2015), which contains non-uniform label noise with label flips concentrated in classes sharing similar visual patterns with the true class. We followed a similar network and procedure as (Tanaka et al., 2018) with ImageNet pre-trained weights and ResNet-50, obtaining over 71% test accuracy, which falls short of the state-of-the-art (72.23% (Tanaka et al., 2018)). We found that finetuning a pre-trained network for one epoch, as done in (Tanaka et al., 2018), easily fits label noise limiting our unsupervised label noise model. We believe this occurs due to the structured noise and the small learning rate. Training with cross-entropy alone gives test accuracy over 69%, suggesting that the configurations used might be suboptimal.
This paper presented a novel approach on training under label noise with CNNs that does not require any set of clean data. We proposed to fit a beta mixture model to the cross-entropy loss of each sample and model label noise in an unsupervised way. This model is used to implement a dynamic bootstrapping loss that relies either on the network prediction or the ground-truth (and potentially noisy) labels depending on the mixture model. We combined this dynamic bootstrapping with mixup data augmentation to implement an incredibly robust loss correction approach. We conducted extensive experiments on CIFAR-10 and CIFAR-100 to show the strengths and weaknesses of our approach demonstrating outstanding performance. We further proposed to use our beta mixture model to guide the combination of mixup data augmentation to assure reliable convergence under extreme noise levels. The approach generalizes well to TinyImageNet but shows some limitations under non-uniform noise in Clothing1M that we will explore in future research.
This work was supported by Science Foundation Ireland (SFI) under grant numbers SFI/15/SIRG/3283 and SFI/12/RC/2289.
The Power of Ensembles for Active Learning in Image Classification.
InIEEE Conference on Computer Vision and Pattern Recognition (CVPR)
, 2018.CurriculumNet: Weakly Supervised Learning from Large-Scale Web Images.
In European Conference on Computer Vision (ECCV), 2018.Learning to Reweight Examples for Robust Deep Learning.
In International Conference on Machine Learning (ICML), 2018.The Devil of Face Recognition is in the Noise.
In European Conference on Computer Vision (ECCV), 2018a.This section extends the discussion of the proposed unsupervised BMM in the main paper providing detail on several more aspects.
We seek robust representation learning in the presence of label noise, which may occur when images are automatically labeled. Performance will likely drop in carefully annotated datasets with near 0% noise because the loss distribution is not a two-component mixture. In this situation the BMM classifies almost all samples as clean, but some estimation errors may occur, which lead to a reliance on the sometimes incorrect network prediction instead of the true clean label. Nevertheless, for 20% noise, we outperform the compared state-of-the-art at the end of the training, demonstrating improved robustness for low noise levels.
The BMM parameters are re-estimated after every epoch once the loss correction begins (i.e. there is an initial warm-up as noted in Subsection 4.1 with no loss correction) by computing the cross-entropy loss from a forward pass with the original (potentially noisy) labels. We also tested our approach M-DYR-H (CIFAR-10, 80% of label noise) changing the estimation period to 5 and 0.5 epochs, observing no decrease in accuracy. While the original configuration presented in Figure 4(a) reaches 86.8 (86.6) for best (last), every 5 epochs leads to (86.9) 86.8 and every 0.5 to 88.0 (87.5).
Figure 4(b) shows the clean/noisy classification capabilities of the BMM in terms of Area Under the Curve (AUC) evolution during training, demonstrating that performance and robustness are consistent across noise levels. In particular, the experiment on CIFAR-10 with M-DYR-H exceeds 0.98 AUC for 20, 50 and 80% label noise. AUC increases during training and increases faster for lower noise levels, showing increasingly better clean/noisy discrimination related to consistent BMM predictions over time.
(a) |
(b) |
(c) |
BMM prediction accuracy is essential for high image classification accuracy, as demonstrated by the tendency for both image classification and BMM accuracy to increase together in Figure 4(a) and (b), especially for higher noise levels. Figure 4(c) further verifies this relationship by comparing the BMM with a GMM (Gaussian Mixture Model) on CIFAR-10 with M-DYR-H and 80% label noise. The GMM gives both less accurate clean/noisy discrimination and worse image classification results (clean/noisy AUC drops from 0.98 to 0.94, while image classification accuracy drops from 86.6 to 83.5).
Incorporating the BMM results in a loss that goes beyond mere regularization. This can be verified by removing the BMM and assigning fixed weights in the bootstrapping loss (0.8 to GT and 0.2 to network prediction, keeping mixup for robustness). This leads to a drop from 86.6 for M-DYR-H to 74.6 in the last epoch (80% of label noise on CIFAR10).
We stress that experiments across all datasets share the same hyperparameter configuration and lead to consistent improvements over the state-of-the-art, demonstrating that the general approach does not require carefully tuned hyperparams. Indeed, we are likely reporting suboptimal results that could be improved with a label noise free validation set, though availability of this set is not assumed in this paper.
Starting training with high learning rates is important: training more epochs leads to better performance, as mixup together with a high learning rate helps prevent fitting label noise. This warm-up learns the structured data (mainly associated to clean samples) and helps separate the losses between clean/noisy samples for a better BMM fit.
All experiments used the following setup and hyperparameter configuration:
Images are normalized and augmented by random horizontal flipping. We use 32×32 random crops after zero padding with 4 pixels on each side.
A PreAct ResNet-18 is trained from scratch using PyTorch 0.4.1. Default PyTorch initialization is used on all layers.
SGD with momentum (0.9), weight decay of , and batch size 128.
Training for 120 epochs in total. We reduce the initial learning rate (0.1) by a factor of 10 after 30, 80, and 110 epochs. Warm-up for 30 epochs, i.e. bootstrapping (when used) starts in epoch 31. This configuration is used in all experiments in Table 1.
Training for 300 epochs in total. We reduce the initial learning rate (0.1) by a factor of 10 after 100 and 250 epochs. Warm-up for 105 epochs, i.e. bootstrapping starts in epoch 106 when used (note: the warmup period can be much longer when using mixup because it mitigates fitting label noise. Mixup . This configuration is used for all experiments excluding those in Table 1.
Regarding BMM parameter estimation: parameters are fit automatically using 10 EM iterations as noted in the paper. We also ran M-DYR-H (80% of label noise, CIFAR-10) using 5 and 20 EM iterations, obtaining 87.4 (87.2) and 86.9 (86.3) for best (last) epoch, suggesting that the method is relatively robust to this hyperparameter.
Comments
There are no comments yet.