
Dynamic Loss For Robust Learning

11/22/2022
by Shenwang Jiang, et al.

Label noise and class imbalance commonly coexist in real-world data. Previous works for robust learning, however, usually address only one of these data biases and underperform when facing both. To close this gap, this work presents a novel meta-learning based dynamic loss that automatically adjusts the objective function along with the training process to robustly learn a classifier from long-tailed noisy data. Concretely, our dynamic loss comprises a label corrector and a margin generator, which respectively correct noisy labels and generate additive per-class classification margins by perceiving the underlying data distribution as well as the learning state of the classifier. Equipped with a new hierarchical sampling strategy that enriches a small amount of unbiased metadata with diverse and hard samples, the two components of the dynamic loss are optimized jointly through meta-learning and cultivate the classifier to adapt well to clean and balanced test data. Extensive experiments show that our method achieves state-of-the-art accuracy on multiple real-world and synthetic datasets with various types of data biases, including CIFAR-10/100, Animal-10N, ImageNet-LT, and WebVision. Code will soon be publicly available.



Related Works

Long-Tailed Learning. Existing methods for long-tailed learning generally fall into three categories: data re-sampling chawla2002smote; buda2018systematic, adjusting the classification boundary tang2020long, and re-weighting shu2019meta; lin2017focal; huang2016learning. Re-sampling rebalances the class distribution by over-sampling the tail classes, which is effective but prone to overfitting on the tail classes. Methods of the second category enlarge the classification boundary of the tail classes while narrowing that of the head classes, by modifying the classification threshold menon2020long or by adjusting the weights of the output layer through normalization kang2019decoupling. The re-weighting strategy assigns larger loss weights to the tail classes. Conventional approaches of this category huang2016learning; huang2019deep directly impose weights on each training sample, which is sensitive to outliers and causes unstable training ren2020balanced. Some recent works tan2020equalization achieve re-weighting by modifying the predicted scores in the Softmax function, which yields more stable training and promising performance. This work inherits the merit of the re-weighting strategy by adapting it to more challenging long-tailed scenarios with label noise.

Learning under Label Noise. Methods for learning under label noise can be categorized into two main types: sample re-weighting and relabeling. The re-weighting strategy treats samples with larger loss values as noise and reduces their influence by giving them lower weights kumar2010self; huang2019o2u. MentorNet jiang2018mentornet learns data-driven curriculums for deep CNNs trained on corrupted labels. Meta-Weight-Net shu2019meta learns an explicit weighting function directly from a small clean data set. The relabeling strategy leverages noisy samples by refining their labels. Bootstrapping reed2014training integrates the assigned labels and the model's predictions by interpolation. Some works divide clean and noisy samples based on priors learned from a manually generated noisy set chen2021sample, and then take advantage of the noisy samples berthelot2019mixmatch.

Long-tailed Learning under Label Noise. Recently, some works have emerged to cope with long-tailed learning under label noise. HAR cao2020heteroskedastic regularizes different regions of the input space differently through a data-dependent regularization technique. CurveNet jiang2021delving learns to assign proper weights to different samples according to their loss curves. ROLT wei2021robust combines DivideMix and LDAM to correct noisy labels and improve tail-class performance. Different from these methods, this work makes the first attempt to correct noisy labels and adjust per-class classification margins simultaneously, in a learnable manner that adapts to the training data.

Figure 2:

Overall learning paradigm with our dynamic loss. Each training epoch begins with dividing the entire data into a small unbiased meta set and a biased train set. At each iteration, we first update the label corrector and the margin generator jointly through meta-learning on mini-batch meta and train data; then the classifier is updated on mini-batch train data by minimizing the dynamic loss with corrected per-sample labels and generated per-class margins.

Methods

This section describes the details of our dynamic loss and its optimization through meta-learning.

Overview

Given a noisy and imbalanced train set $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$ with image $x_i$ and its assigned one-hot class label $y_i$, our goal is to learn a classifier $f(\cdot;\theta)$ with learnable parameters $\theta$ that maps an input image to class confidence scores. Despite the label noise and class imbalance in the train set, the classifier is required to accurately recognize all classes, so a balanced and clean test set is employed for evaluation.

The parameters $\theta$ are optimized by minimizing the classification loss on the train set:

$$\theta^{*} = \arg\min_{\theta} \frac{1}{N} \sum_{i=1}^{N} \ell\big(f(x_i;\theta),\, y_i\big), \tag{1}$$

where $\ell$ is the cross-entropy loss. Since the train set suffers from both label noise and class imbalance, optimizing with such a naive cross-entropy loss has two major drawbacks: i) the assigned labels of noisy samples do not match their ground-truths, which results in high loss values and forces the model to memorize noisy labels; ii) tail classes have much lower occurrence probabilities but share the same classification margins as head classes and are thus prone to poor generalization.

To tackle the above two problems, we present a novel dynamic loss that simultaneously corrects noisy labels and adjusts the classification margins of different classes in an adaptive and learnable manner:

$$\mathcal{L}_{\mathrm{dyn}} = \frac{1}{N} \sum_{i=1}^{N} \ell\big(f(x_i;\theta) + \mathbf{m},\, \tilde{y}_i\big), \tag{2}$$

where $\tilde{y}_i$ and $\mathbf{m}$ denote the reassigned label for $x_i$ and the vector of additive per-class classification margins, respectively.

Concretely, as illustrated in Figure 2, the dynamic loss is equipped with a learnable label corrector parameterized by $\phi_1$ and a margin generator parameterized by $\phi_2$, which are respectively in charge of correcting per-sample labels and generating per-class classification margins. We continuously optimize them jointly along with the classifier through meta-learning. Next, we describe the two components as well as their optimization in detail.

Label Corrector

The label corrector identifies noisy samples and corrects their wrongly assigned labels in a class-wise manner. To identify noisy samples, it divides all samples into groups by class, sorts the samples in each group separately according to their loss values, evenly divides the sorted samples in each group into $B$ bins, and employs a lightweight class-wise meta net to learn whether each bin is dominated by noisy or clean samples. Consequently, the loss bin index $b_i$ for sample $x_i$ can serve as a reliable indicator for identifying label noise.

For label correction, as long as the classifier has not severely over-fitted to the biased data, it learns mainly from the dominant clean samples and transfers the learned knowledge to the noisy ones. Hence, the classifier's predictions on noisy samples are close to their ground-truths and can be used to correct the wrongly assigned labels.

Based on the above, our label corrector reassigns a ground-truth label for sample $x_i$ as a weighted sum of its assigned label $y_i$ and the classifier's prediction $p_i$, based on the loss bin index $b_i$:

$$\tilde{y}_i = w_i\, y_i + (1 - w_i)\, p_i, \qquad w_i = h\big(b_i, c_i;\, \phi_1\big), \tag{3}$$

where $h$ is a class-dependent weighting function that maps the bin index $b_i$ of sample $x_i$ within its class $c_i$ to a balance weight $w_i \in [0, 1]$. It is learned by a small meta net, implemented as a one-hot encoder followed by a two-layer perceptron (MLP) with a Sigmoid activation. If sample $x_i$ is noisy with a high loss value, $b_i$ is large and the computed $w_i$ is close to $0$, hence the label corrector replaces the wrongly assigned label with the classifier's prediction, and vice versa.
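
To make this concrete, below is a minimal PyTorch-style sketch of the class-wise binning and correction. The module and function names (`LabelCorrector`, `bin_by_loss`, `correct_labels`) and the shared-MLP design are our own illustration under the description above, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelCorrector(nn.Module):
    """Maps (class one-hot, loss-bin one-hot) to a balance weight w in [0, 1]."""
    def __init__(self, num_classes, num_bins, hidden=64):
        super().__init__()
        self.num_classes = num_classes
        self.num_bins = num_bins
        self.mlp = nn.Sequential(
            nn.Linear(num_classes + num_bins, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),  # balance weight w in [0, 1]
        )

    def forward(self, labels, bins):
        # one-hot encode the class and the loss-bin index, then apply the MLP
        c = F.one_hot(labels, self.num_classes).float()
        b = F.one_hot(bins, self.num_bins).float()
        return self.mlp(torch.cat([c, b], dim=1))  # shape (batch, 1)

def bin_by_loss(losses, labels, num_bins):
    """Within each class, rank samples by loss and assign equal-size bin indices."""
    bins = torch.zeros_like(labels)
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        order = losses[idx].argsort()              # ascending loss within class c
        ranks = torch.empty_like(order)
        ranks[order] = torch.arange(len(idx), device=losses.device)
        bins[idx] = ranks * num_bins // max(len(idx), 1)
    return bins.clamp_max(num_bins - 1)

def correct_labels(onehot_labels, pred_probs, w):
    """Eq. (3): weighted sum of the assigned label and the classifier prediction."""
    return w * onehot_labels + (1.0 - w) * pred_probs
```

In training, the weight `w` would be produced by the meta net and enter Eq. (2) through the corrected soft labels.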

Margin Generator

For the design of the margin generator, we begin by revisiting the Label-Distribution-Aware Margin loss (LDAM) from the perspective of the generalization error bound. Due to fewer training samples, tail classes have larger generalization error bounds than head classes. Since the generalization error bound usually correlates negatively with the magnitude of the classification margin, increasing the classification margins of the tail classes will tighten their generalization error bounds.
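
For reference, the trade-off that LDAM cao2019learning derives can be stated compactly (this restates their result, not a derivation in this paper): the test error of class $j$ is bounded roughly as

$$\mathrm{err}_j \;\lesssim\; \frac{1}{\gamma_j}\sqrt{\frac{1}{n_j}},$$

where $\gamma_j$ is the margin and $n_j$ the sample number of class $j$. Minimizing the sum of these bounds under a fixed total margin budget yields the class-aware optimum $\gamma_j^{*} \propto n_j^{-1/4}$, i.e., rarer classes should receive larger margins.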

In light of the above, Balanced Meta-Softmax ren2020balanced, an unbiased extension of the standard Softmax, adjusts the classification margin of class $j$ based on its sample number $n_j$, adding the margin $\log n_j$ to the confidence score $z_j$ predicted by the classifier:

$$\ell_{\mathrm{BS}}(x_i, y_i) = -\log \frac{e^{z_{y_i} + \log n_{y_i}}}{\sum_{j} e^{z_j + \log n_j}}. \tag{4}$$

However, for long-tailed data with noisy labels, $n_j$ can no longer reflect the real sample number of class $j$ due to the label noise. In addition, manually pre-defining the margin based solely on the sample number largely ignores the distinct classification difficulties of different classes.

We hence present a learnable margin generator $\mathcal{G}(\cdot;\, \phi_2)$, implemented as a two-layer MLP, that dynamically adjusts the margin of each class by optimizing a learnable margin vector $\mathbf{u}$, initialized as an all-ones vector, during classifier training:

$$\mathbf{v} = \mathcal{G}(\mathbf{u};\, \phi_2). \tag{5}$$

By integrating the margin vector into the standard Softmax loss, i.e., setting the additive margin to $\mathbf{m} = \log \mathbf{v}$, we have:

$$\ell_{\mathrm{dyn}}(x_i, \tilde{y}_i) = -\sum_{j} \tilde{y}_{ij} \log \frac{e^{z_j + \log v_j}}{\sum_{k} e^{z_k + \log v_k}}. \tag{6}$$

Since the classification margin is $\log v_j$ in our formulation, the learned $v_j$ for class $j$ should be positively correlated with its sample number.

Hence the margin generator is capable of automatically adjusting per-class margins by adapting to the true class distribution underlying the long-tailed noisy data and to the classification difficulty of each class, in a learnable manner that requires no manual intervention or prior information.
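
Below is a minimal sketch of the margin generator and the margin-adjusted Softmax of Eq. (6) in PyTorch, assuming logits `z` of shape (batch, C). The `Softplus` used to keep the generated vector positive is our assumption, since $\log v_j$ must be defined; all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MarginGenerator(nn.Module):
    """Two-layer MLP refining a learnable per-class vector (initially all ones)."""
    def __init__(self, num_classes, hidden=64):
        super().__init__()
        self.u = nn.Parameter(torch.ones(num_classes))   # learnable input vector
        self.mlp = nn.Sequential(
            nn.Linear(num_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
            nn.Softplus(),   # keep v positive so log(v) in Eq. (6) is defined
        )

    def forward(self):
        return self.mlp(self.u)                          # per-class vector v

def margin_softmax_loss(z, soft_targets, v):
    """Eq. (6): add log-margins to the logits, then soft-label cross-entropy."""
    log_p = F.log_softmax(z + torch.log(v), dim=1)
    return -(soft_targets * log_p).sum(dim=1).mean()
```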

Hierarchical Sampling Strategy

We integrate the label corrector and the margin generator into a unified meta net with parameters $\phi$, the key component of our dynamic loss. We apply meta-learning to optimize $\phi$ and guide the learning of the classifier so that it adapts well to balanced and clean test data.

Performing meta-learning requires building a meta set $\mathcal{M}$ comprised of a small amount of balanced and clean data. Intuitively, samples with lower classification loss computed by the classifier tend to have correctly assigned labels. Hence we could build the meta set simply by selecting the samples with the lowest loss values from each class in $\mathcal{D}$. However, since easier samples usually have lower loss values throughout training, such a sampling strategy tends to select the same easy samples at every epoch, making the model prone to overfitting to easy samples.

Hereby, we design a hierarchical sampling strategy that builds $\mathcal{M}$ through a two-step process: i) construct a primary set by randomly sampling $s$ samples from each class in $\mathcal{D}$; ii) select the $k$ lowest-loss samples from each class in the primary set to form the final meta set. The remaining samples in $\mathcal{D}$ make up the counterpart train set $\mathcal{D}_t$, as illustrated in Fig. 2.

The benefits of introducing the additional primary set are twofold: i) the samples in the primary set are randomly drawn at each epoch, which guarantees that the resulting meta set is distinct across epochs; ii) the primary set has fewer samples than $\mathcal{D}$, which raises the probability of selecting hard yet clean samples near the decision boundary into the meta set. As a result, our hierarchical sampling strategy ensures both the dynamism and the diversity of the meta set, preventing the model from overfitting to biased data.
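
A compact sketch of the two-step selection follows, assuming per-sample losses and integer labels as tensors; the function name and the per-class sizes `s` and `k` are illustrative.

```python
import torch

def hierarchical_sample(losses, labels, s, k):
    """Meta-set construction: per class, draw a random primary set of size s,
    then keep its k lowest-loss (likely clean) samples. Returns index tensors."""
    meta_idx = []
    for c in labels.unique():
        cls_idx = (labels == c).nonzero(as_tuple=True)[0]
        primary = cls_idx[torch.randperm(len(cls_idx))[:s]]   # step i)
        low = losses[primary].argsort()[:k]                   # step ii)
        meta_idx.append(primary[low])
    meta_idx = torch.cat(meta_idx)
    mask = torch.ones(len(labels), dtype=torch.bool)
    mask[meta_idx] = False
    return meta_idx, mask.nonzero(as_tuple=True)[0]           # meta, train
```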

Optimization

Let $\phi^{(t)}$ and $\theta^{(t)}$ respectively denote the parameters of the meta net and the classifier at iteration $t$. We randomly sample two mini-batches $\mathcal{B}_m$ and $\mathcal{B}_t$ from $\mathcal{M}$ and $\mathcal{D}_t$, respectively, and update $\phi$ and $\theta$ alternately as follows.

Update $\phi$. The meta net is trained to guide the learning of the classifier on $\mathcal{D}_t$ by correcting per-sample labels and adjusting per-class margins, such that the classifier can adapt well to the balanced and clean $\mathcal{M}$. We first virtually update $\theta$ on the dynamic loss with $\mathcal{B}_t$:

$$\hat{\theta}^{(t)}(\phi) = \theta^{(t)} - \alpha\, \nabla_{\theta}\, \frac{1}{n} \sum_{(x_i, y_i) \in \mathcal{B}_t} \mathcal{L}_{\mathrm{dyn}}\big(x_i, y_i;\, \theta^{(t)}, \phi\big), \tag{7}$$

where $\alpha$ is the learning rate for the classifier and $n$ is the batch size. Then we update $\phi$ by minimizing the loss of the virtually updated classifier on $\mathcal{B}_m$:

$$\phi^{(t+1)} = \phi^{(t)} - \beta\, \nabla_{\phi}\, \frac{1}{n} \sum_{(x_i, y_i) \in \mathcal{B}_m} \ell\big(f(x_i;\, \hat{\theta}^{(t)}(\phi)),\, y_i\big), \tag{8}$$

where $\beta$ is the learning rate for the meta net.

Update $\theta$. We update $\theta$ by minimizing the loss of the classifier on $\mathcal{B}_t$, with labels corrected and margins adjusted by the updated $\phi^{(t+1)}$:

$$\theta^{(t+1)} = \theta^{(t)} - \alpha\, \nabla_{\theta}\, \frac{1}{n} \sum_{(x_i, y_i) \in \mathcal{B}_t} \mathcal{L}_{\mathrm{dyn}}\big(x_i, y_i;\, \theta^{(t)}, \phi^{(t+1)}\big). \tag{9}$$

The above two steps are repeated over iterations so that $\phi$ and $\theta$ are optimized alternately until convergence.
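
A minimal sketch of this bi-level update in PyTorch follows, using `torch.func.functional_call` (PyTorch 2.x) to evaluate the classifier under the virtually updated parameters of Eq. (7). Here `dynamic_loss` stands in for Eqs. (2)-(6), and all names are illustrative rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def meta_update(classifier, meta_net, meta_opt, train_batch, meta_batch,
                dynamic_loss, alpha):
    """One joint step: Eq. (7) virtual SGD step, then Eq. (8) meta-net update."""
    x_t, y_t = train_batch
    x_m, y_m = meta_batch

    # Eq. (7): virtual update of the classifier, keeping the graph so that
    # gradients can flow back into the meta net's parameters.
    names, params = zip(*classifier.named_parameters())
    loss_t = dynamic_loss(classifier(x_t), y_t, meta_net)
    grads = torch.autograd.grad(loss_t, params, create_graph=True)
    virtual = {n: p - alpha * g for n, p, g in zip(names, params, grads)}

    # Eq. (8): evaluate the virtual classifier on the clean meta batch and
    # step the meta net's optimizer on the resulting loss.
    logits_m = functional_call(classifier, virtual, (x_m,))
    loss_m = F.cross_entropy(logits_m, y_m)
    meta_opt.zero_grad()
    loss_m.backward()
    meta_opt.step()

def classifier_update(classifier, cls_opt, meta_net, train_batch, dynamic_loss):
    """Eq. (9): a standard step on the dynamic loss with the updated meta net;
    cls_opt holds only the classifier's parameters."""
    x_t, y_t = train_batch
    loss = dynamic_loss(classifier(x_t), y_t, meta_net)
    cls_opt.zero_grad()
    loss.backward()
    cls_opt.step()
```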

Methods CIFAR-10-N-LT CIFAR-100-N-LT WebVision ImageNet
Last Best Last Best Top-1 Top-5 Top-1 Top-5
Cross Entropy 52.73 60.43 25.41 26.67 71.08 88.48 65.08 87.48
DivideMix 58.19 60.74 41.15 41.75 77.32 91.64 75.20 90.84
Balanced-Softmax 69.66 76.37 41.30 41.82 63.48 84.56 61.80 85.32
DivideMix+Balanced-Softmax 71.60 72.75 46.11 46.92 78.76 92.52 76.60 92.76
CurveNet 69.36 71.36 29.83 32.72 - - - -
HAR 76.52 77.17 42.77 43.98 75.50 90.70 70.30 90.00
ROLT+ 75.31 - 38.94 - 77.64 92.44 74.64 92.48
ELR+ 69.31 70.34 34.96 36.20 77.78 91.68 70.29 89.76
FaMUS 39.12 46.98 30.72 30.81 79.40 92.80 77.00 92.76
Dynamic Loss (Ours) 79.73 80.55 48.56 48.98 80.12 93.64 74.76 93.08
Table 1: Accuracy (%) on CIFAR-N-LT, WebVision, and ImageNet.

Experiments

We test our method on both synthetic and real-world long-tailed noisy data, and then present ablations to validate our design choices. More quantitative and qualitative results and training details are provided in the supplementary materials.

Methods in Comparison. We compare with three types of methods that are respectively designed to address: i) both label noise and class imbalance, such as HAR, ROLT, FaMUS xu2021faster, and CurveNet; ii) label noise, such as Co-teaching han2018co, SELFIE song2019selfie, DivideMix, ELR+ liu2020early, NCT chen2022compressing, PLC prog_noise_iclr2021, GJS englesson2021generalized, and CMW-Net shu2022cmw; and iii) class imbalance, such as Focal Loss, CB Focal cui2019class, LDAM, and Balanced Softmax.

Experiments on Long-Tailed Noisy Data

Results on CIFAR-N-LT. We evaluate our method on CIFAR-N-LT, i.e., CIFAR-10 and CIFAR-100 with simulated label noise and class imbalance. We first simulate the long-tailed dataset following the exponential profile cao2019learning, in which the sample number decays exponentially across classes according to the imbalance ratio. We then inject label noise into the long-tailed dataset to form the training set. In particular, the label of each sample is independently changed to class $j$ with probability $\gamma\, n_j / N$, where $\gamma$ is the noise rate, $N$ is the total number of training samples, and $n_j$ denotes the frequency of class $j$. Following ROLT, we consider imbalance ratios of $\{10, 100\}$ and noise rates of $\{0.1, 0.2, 0.3, 0.4, 0.5\}$. We adopt ResNet-32 he2016deep as the classifier.
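
As a concrete illustration, the sketch below injects such frequency-proportional label noise into a long-tailed label tensor; the flip probability $\gamma\, n_j / N$ follows our reading of the setup above, so treat the exact parameterization as an assumption.

```python
import torch

def inject_freq_proportional_noise(labels, noise_rate, num_classes):
    """Flip each label to class j with probability noise_rate * n_j / N, so
    frequent (head) classes attract more of the injected noise."""
    counts = torch.bincount(labels, minlength=num_classes).float()
    class_freq = counts / counts.sum()                    # n_j / N
    flip = torch.rand(len(labels)) < noise_rate           # flip with prob gamma
    targets = torch.multinomial(class_freq, len(labels), replacement=True)
    return torch.where(flip, targets, labels)
```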

Table 1 reports the average accuracy on CIFAR-10 and CIFAR-100 over various imbalance ratios and noise rates. Our method retains high accuracy under a wide range of degrees for both biases, while previous methods degrade rapidly. In particular, our dynamic loss improves the average last accuracy by 3.21% and 5.79% over HAR on CIFAR-10-N-LT and CIFAR-100-N-LT, respectively. Moreover, it significantly outperforms the baseline that simply fuses the strategies of DivideMix and Balanced-Softmax.

It is worth mentioning that previous methods rely on carefully tuned hyper-parameters based on the unobservable noise rate li2020dividemix or perform two-stage training to obtain prior information about the class distribution cao2020heteroskedastic. In comparison, our dynamic loss uses a fixed set of hyper-parameters and requires only one round of end-to-end training without any manual intervention in all experiments.

Results on WebVision. We also evaluate our dynamic loss on WebVision, a real-world long-tailed noisy dataset. We adopt Inception-ResNet V2 szegedy2017inception as the classifier following prior work li2020dividemix. Table 1 shows the results on the WebVision and ImageNet deng2009imagenet validation sets. Our method significantly outperforms the competing methods even though most of them are equipped with additional model co-training and ensembling strategies. Notably, compared to HAR and ROLT+, which are dedicated to long-tailed noisy data, our method boosts the top-1 accuracy on WebVision by at least 2.48%, which clearly demonstrates its superiority.

Figure 3: Visualizing the learned label weights and the noise percentage of each class on CIFAR-10-N (Asym. 0.4).
Methods CIFAR-10-N CIFAR-100-N Animal-10N
Cross Entropy 79.92 46.37 80.60
SELFIE - 50.23 81.80
PLC 79.79 47.36 83.40
NCT 87.95 56.88 84.10
Co-teaching 85.45 58.34 80.20
CMW-Net 89.41 64.29 84.70
DivideMix 94.11 73.77 84.00
GJS 90.94 70.52 84.17
Dynamic Loss 94.62 74.24 86.54
Table 2: Accuracy (%) on CIFAR-N and Animal-10N.

Experiments on Noisy Data

Results on CIFAR-N. We test the performance of our dynamic loss on purely noisy data, i.e., CIFAR-10-N and CIFAR-100-N with symmetric noise rates of $\{0.2, 0.4, 0.6\}$ and asymmetric noise rates of $\{0.2, 0.4\}$. We adopt PreAct ResNet-18 (PARes18) he2016identity as the classifier following DivideMix. Table 2 shows that our method achieves the best average accuracy compared to previous methods specially designed for learning on noisy data. In contrast to DivideMix, which requires manually tuning the hyper-parameters for different noise types and rates, our dynamic loss adapts well to various noisy scenarios in a fully self-adaptive manner without manual intervention.

Results on Animal-10N. We also test on the real-world noisy dataset Animal-10N. For a fair comparison with prior work song2019selfie, we adopt a randomly initialized VGG19-BN simonyan2014very as the classifier. Table 2 shows our dynamic loss achieves state-of-the-art performance among all priors, which clearly proves its superiority in dealing with real-world noisy data.

Experiments on Long-Tailed Data

Results on CIFAR-LT. We also test the performance of our dynamic loss on purely long-tailed data, i.e., clean CIFAR datasets with imbalance ratios of $\{10, 20, 50, 100\}$. Table 3 shows our method achieves the best performance compared to previous methods specially designed for learning on long-tailed data. Especially, compared with LDAM, which adjusts classification margins based solely on the sample number, our method significantly boosts the accuracy by 4.03% and 5.75% on CIFAR-10-LT and CIFAR-100-LT, respectively. This evidences that our dynamic loss is capable of perceiving the classification difficulty of different classes and adjusting their classification margins adaptively.

Results on ImageNet-LT. Table 3 reports the results on ImageNet-LT, a large-scale long-tailed dataset with an imbalance ratio of 256. Our dynamic loss achieves the best performance, i.e., 38.39% in accuracy, demonstrating its strong generalization ability.

Methods CIFAR-10-LT CIFAR-100-LT ImageNet-LT
Cross Entropy 78.45 47.26 34.01
Focal Loss 79.13 47.62 32.64
CB Focal 81.42 48.88 -
LDAM-DRW 83.21 51.36 36.03
FaMUS 84.61 52.73 -
Balanced-Softmax 87.15 57.08 38.21
Dynamic Loss 87.24 57.11 38.39
Table 3: Accuracy (%) on CIFAR-LT and ImageNet-LT.

Qualitative Analysis.

Behavior of label corrector. We analyze the behavior of the label corrector on balanced CIFAR-10-N with asymmetric noise, which is designed to mimic the structure of real-world label noise by assigning distinct noise rates to different classes. Figure 3 depicts the weight learned by the label corrector and the percentage of noisy labels against increasing loss bin index for each class. For the classes that contain noisy labels, clean samples mainly appear in the top-ranked (low-loss) bins while noisy samples appear in the bottom-ranked (high-loss) bins. This validates our motivation that the loss bin index can serve as a reliable input indicator for the label corrector to distinguish noisy from clean samples. Accordingly, the generated weight stays at $1$ over the low-loss bins and suddenly drops to $0$ around the bins where noisy samples begin to dominate, showing that the label corrector retains the assigned label for clean samples and switches to the predicted label, which is more likely to be the ground-truth, for noisy samples. For the classes without noisy labels, the weight remains $1$ across all bins. Consequently, our label corrector always outputs the correct labels for both noisy and clean samples of different classes.

Behavior of margin generator. We analyze the behavior of the margin generator on clean CIFAR-10-LT. We visualize its generated margins for different classes in the left subfigure of Figure 4. Generally, as the class index increases, the sample number decreases and the learned margin also decreases, as expected. This suggests that the margin generator automatically figures out the sample numbers of different classes and adjusts their margins accordingly. Interestingly, we see irregularly larger margins on two classes. This can be explained by the right subfigure, in which we visualize the feature distribution of the meta set using t-SNE van2008visualizing: the feature distributions of these two classes correspond to the two rightmost clusters, indicating they are easier to distinguish from the other classes. This evidences that the margin generator takes into account not only the sample number but also the classification difficulty of each class, generating comprehensively adaptive margins during classifier training.

Behavior of hierarchical sampling. We investigate how our hierarchical sampling strategy improves meta set construction. Figure 5 illustrates the feature distributions of the meta sets built by our hierarchical sampling (left) and by naive sampling (right). Compared with naive sampling, the features of samples selected by hierarchical sampling are more spread out within each cluster. This evidences that building a random primary set before the low-loss selection enables more diverse meta data covering both easy and hard samples, and thus alleviates biased learning on easy samples.

Figure 4: Visualization of the learned per-class margins and sample numbers (left), and the feature distribution of the meta set (right) on CIFAR-10-LT.
Figure 5: Visualization of the feature distributions of the meta sets constructed by hierarchical sampling (left) and by naive sampling (right) on CIFAR-10-N-LT.
B H V Last Best
52.73 60.43
71.51 76.84
71.08 77.11
72.64 74.96
78.53 79.58
79.50 80.35
79.77 80.55
Table 4: Ablations. B: Balanced-Softmax, H: hierarchical sampling, and V: vector meta net.
Methods CIFAR-10 CIFAR-100
Last Best Last Best
HAR 71.09 71.63 39.35 39.86
Dynamic Loss 79.99 82.20 41.15 41.73
Table 5: Mean accuracy (%) of PARes18 on CIFAR-N-LT.

Ablation Studies.

Effect of label corrector. We build a model variant in which the label corrector is excluded to evaluate its effectiveness. Table 4 shows its average accuracy decreases notably compared with the complete dynamic loss, which evidences the effectiveness of the label corrector.

Effect of margin generator. As shown in Table 4, to verify the effectiveness of the margin generator, we first employ the label corrector alone, which achieves an average last accuracy of 71.08%. Adding Balanced-Softmax to deal with class imbalance lifts accuracy by only 1.56%. Finally, equipping our margin generator boosts the accuracy to 79.77%. This evidences that a dynamic margin is necessary for long-tailed noisy data.

Effect of hierarchical sampling. As shown in Table 4, replacing hierarchical sampling with naive random sampling results in a noticeable drop in average last accuracy, which indicates that the meta set constructed by hierarchical sampling has a distribution more similar to the test set.

Effect of class-specific label corrector. To validate the class-specific design of our label corrector, we build a class-agnostic variant and evaluate it on CIFAR-10-N with 40% asymmetric noise, which carries different noise rates across classes. Our class-specific label corrector significantly outperforms its class-agnostic counterpart by a large margin of 4.0% in accuracy (94.51% vs. 90.56%), which clearly supports the class-specific design.

Effect of meta net architecture. To validate the architecture of the meta net, we simplify the label corrector and the margin generator to a bin-length and a class-length learnable vector, respectively. Table 4 shows these modifications lead to a noticeable performance drop (0.27%). The explanation, supported by our experimental observations, is that the MLPs learn proper label weights and per-class margins very quickly, whereas the learnable vectors suffer from slow convergence.

Test on a different classifier. To validate the generality of our method, we further evaluate it using PARes18. We set the imbalance ratios to $\{10, 50, 100\}$ and the noise rates to $\{0.2, 0.4\}$, and present the mean accuracy in Table 5. Our method also outperforms HAR on both CIFAR-10-N-LT and CIFAR-100-N-LT, which evidences that our dynamic loss is applicable to various classifiers.

Conclusions

This work presents a new dynamic loss for robust learning from long-tailed data with noisy labels. The dynamic loss comprises a learnable label corrector and a margin generator, which jointly correct noisy labels and adjust classification margins to guide the learning of a classifier. The meta net and the classifier are co-optimized through meta-learning, aided by a new hierarchical sampling strategy that provides unbiased yet diverse meta data. Extensive evaluations on both synthetic and real-world data show that our dynamic loss is effective and highly adaptable and robust to various types of data biases.

References

Appendix A Algorithm Pseudocode

Algorithm 1 depicts the detailed learning process. We begin with a warm-up stage that pre-trains the classifier on the entire training set so that it possesses preliminary classification capability. We then enter a robust learning stage that optimizes the meta net and the classifier in an alternating manner. Concretely, at the beginning of each epoch, we construct a small meta set from the training set, which is balanced and almost clean, by selecting samples with low classification loss values computed by the latest classifier. The remaining samples form a large counterpart train set. Details of the optimization are presented in the manuscript. Source code is available at https://anonymous.4open.science/r/dynamic_loss-7BED (since the hyperlink is disabled, please copy and paste the URL; copying directly from the PDF may introduce mistakes, so please check it).

Input: training set $\mathcal{D}$, learning rates $\alpha$ and $\beta$, number of warm-up iterations $T_w$, total number of iterations $T$.
// Warm-up Stage: run SGD for $T_w$ iterations.
for $t = 1, \ldots, T_w$ do
          Sample a mini-batch from $\mathcal{D}$ and update the classifier $\theta$ with the cross-entropy loss.
// Robust Learning Stage: run SGD for $T - T_w$ iterations.
for $t = T_w + 1, \ldots, T$ do
          if new epoch then
                   Compute the loss rank of all samples.
                   Split $\mathcal{D}$ into a meta set $\mathcal{M}$ and a train set $\mathcal{D}_t$ by hierarchical sampling.
          Optimize the meta net $\phi$ via meta-learning on mini-batches from $\mathcal{M}$ and $\mathcal{D}_t$ as in Equations 7 and 8 of the manuscript.
          Update the classifier parameters $\theta$ with our dynamic loss as in Equation 9 of the manuscript.
Algorithm 1: Robust learning with dynamic loss
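
Read as Python, the epoch-level structure of Algorithm 1 might look like the skeleton below; it reuses the hypothetical `hierarchical_sample`, `meta_update`, and `classifier_update` sketches from the main text, and the dataset helpers (`loader`, `per_sample_losses`, `paired_loader`) are placeholders rather than real APIs.

```python
import torch.nn.functional as F

def train(classifier, meta_net, cls_opt, meta_opt, dataset, dynamic_loss,
          warmup_epochs, epochs, alpha, s, k):
    for epoch in range(epochs):
        if epoch < warmup_epochs:
            # Warm-up stage: plain cross-entropy on the whole training set.
            for x, y in dataset.loader():
                classifier_update(classifier, cls_opt, meta_net, (x, y),
                                  lambda z, t, _m: F.cross_entropy(z, t))
            continue
        # New epoch: rank samples by loss, then split via hierarchical sampling.
        losses, labels = dataset.per_sample_losses(classifier)
        meta_idx, train_idx = hierarchical_sample(losses, labels, s, k)
        for (x_t, y_t), (x_m, y_m) in dataset.paired_loader(train_idx, meta_idx):
            meta_update(classifier, meta_net, meta_opt, (x_t, y_t), (x_m, y_m),
                        dynamic_loss, alpha)   # Eqs. (7)-(8)
            classifier_update(classifier, cls_opt, meta_net, (x_t, y_t),
                              dynamic_loss)    # Eq. (9)
```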

Appendix B Additional Visualizations

Figure 6: The accuracy of corrected labels over training epochs on CIFAR-10-N (left) and CIFAR-100-N (right) with noise rates ranging from 0.2 to 0.6.
Figure 7: The learned weight of class 1 at training epochs (a) 10, (b) 45, (c) 50, and (d) 300, on CIFAR-10-N with noise rate 0.4.

CIFAR-N. Figure 6 depicts the accuracy of corrected labels, computed as the proportion of samples whose labels match the ground-truth after correction. The label accuracy gradually increases as training proceeds and the classifier becomes more trustworthy, eventually reaching a high level on CIFAR-10-N with noise rates 0.2 and 0.4. Notably, the accuracy is also greatly improved under a noise rate of 0.6. The high accuracy of corrected labels validates our design from two perspectives: i) it supports our assumption that the classifier fits the dominant clean samples first and can transfer the learned knowledge to noisy samples to predict their ground-truth labels; ii) the label corrector can accurately recognize noisy samples and correct their labels with the predicted ones.

Figure 7 shows how the learned weight varies over training epochs. At the beginning, the label corrector tends to use the given labels to train the classifier; by around epoch 45, it gradually shifts to trusting the predicted labels for samples with large loss ranks. Moreover, the label corrector accurately estimates the noise rate at about 35% (for a nominal noise rate of 40%, about 35% of the samples are actually noisy). Together with Figure 6, we find that the epoch at which the label corrector starts to trust the classifier is delayed as the noise rate increases: the label corrector lets the classifier train for more epochs to generate more reliable predictions when the train set is noisier. This evidences that the label corrector dynamically relabels noisy samples according to the state of the classifier and the train set.

Figure 8: The learned class-aware margins on CIFAR-10-LT (left) and CIFAR-100-LT (right) with imbalance factors ranging from 10 to 100.
Figure 9: Visualization of the learned classification margins and the feature distributions of the meta set at different training epochs.

CIFAR-LT. Figure 8 visualizes the margins generated by the margin generator under different imbalance ratios. Despite the varying imbalance ratios, the generated margin consistently decays as the sample size shrinks (corresponding to increasing class index). Moreover, the variation of the learned margins across classes tends to enlarge as the imbalance ratio increases. Both the quantitative (in the manuscript) and qualitative analyses evidence that the margin generator respects and adapts to various class distributions by learning to assign proper margins automatically.

Figure 9 shows the classification margins and the feature distributions of the meta set at different training epochs. The margin generator keeps adjusting the classification margins according to the feature distributions of the meta set throughout training. Taking one class as an example, the margin generator assigns it a small classification margin at epoch 50, since it is hard to recognize at that point; by epoch 250 it has become easy to recognize, and the margin generator assigns it a larger classification margin. This evidences that the margin generator dynamically adjusts the classification margins according to classification difficulty.

Figure 10: Visualization of per-class learned label weights on WebVision.
Figure 11: Visualization of learned per-class margins on WebVision.

WebVision. Considering the large number of categories in WebVision, we select 10 categories at intervals of 5 to visualize their learned label weights. As shown in Figure 10, the learned label weights vary across classes, suggesting the noise rates of different classes differ, which is consistent with real-world datasets. Moreover, the generated margins over different classes accord with the complex variation of sample sizes, as shown in Figure 11. This demonstrates that our method adapts well to complicated real-world biased data.

Figure 12: Visualization of per-class learned label weights on Animal-10N.

Animal-10N. We also analyze the behavior of the label corrector on the real-world noisy dataset Animal-10N, as shown in Figure 12. Most of the learned label weights remain at $1$ and drop to $0$ near the highest-loss bins (red dotted line), suggesting the noise rate estimated by the label corrector is about 8%, which is consistent with the well-recognized noise-rate estimate for Animal-10N song2019selfie.

Datasets Cross Entropy AutoAugment DivideMix Balanced-Softmax HAR Ours
CIFAR-10 92.91 94.84 91.59 94.95 92.10 94.07
CIFAR-100 69.18 73.96 63.72 73.90 70.70 73.24
Table 6: Test accuracy (%) of ResNet32 on CIFAR datasets.

ImageNet-LT.

Figure 13: Visualization of learned class margins on ImageNet-LT.

We also analyze the behavior of the margin generator on the long-tailed dataset ImageNet-LT, as shown in Figure 13. Considering the large number of categories in ImageNet-LT, we select 333 categories at intervals of 3 to visualize their learned margins. As shown in Figure 13, the learned margins consistently vary with the per-class sample numbers, suggesting the margin generator can generate proper margins for different classes.

Appendix C Test on unbiased dataset

Methods specially designed to cope with data bias may cause performance degradation on unbiased data. We therefore evaluate different robust learning methods on unbiased data. Table 6 shows that some priors, such as DivideMix and HAR, suffer a noticeable accuracy drop, while our method still performs well on both CIFAR-10 and CIFAR-100. This shows our dynamic loss can adaptively set up proper learning objectives depending on the state of the training data.

Appendix D Training Time Analysis

Since training time is one of the main concerns with meta-learning, we also evaluate the total training time of our method following DivideMix. Thanks to the techniques for accelerating meta-learning from FaMUS xu2021faster and CurveNet jiang2021delving, we train a model in about 7.2 hours on an NVIDIA GTX 1080 Ti, which is only slightly slower than DivideMix on an NVIDIA V100 GPU (5.2 hours). This evidences the efficiency of our method.

Appendix E Additional Dataset Details

CIFAR. Both CIFAR-10 and CIFAR-100 consist of 60,000 RGB images (50,000 for training and 10,000 for testing), equally distributed over 10 and 100 categories, respectively.

CIFAR-N. CIFAR-N is a simulated noisy dataset based on CIFAR. Commonly simulated label noise types include symmetric and asymmetric noise. Symmetric noise is generated by randomly changing labels to any possible label with a fixed probability $\gamma$ (the noise rate). Asymmetric noise is manually designed to mimic real-world label noise, where labels are only changed to those of similar classes, such as deer → horse and dog → cat.

CIFAR-LT. CIFAR-LT is a simulated long-tailed dataset based on CIFAR, which reduces the number of training samples per class according to an exponential function $n_j = n_{\max} \times \mu^{j}$, where $j$ is the class index, $n_j$ is the number of samples of class $j$, $n_{\max}$ is the maximum number of samples over all classes, and $\mu \in (0, 1)$ is set by the imbalance ratio.
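
For illustration, a small sketch of this exponential profile follows; deriving the decay base $\mu$ from the imbalance ratio is our assumption about the parameterization.

```python
def long_tailed_counts(n_max, num_classes, imbalance_ratio):
    """Per-class sample counts n_j = n_max * mu^j, with mu chosen so that the
    rarest class has n_max / imbalance_ratio samples."""
    mu = (1.0 / imbalance_ratio) ** (1.0 / (num_classes - 1))
    return [int(n_max * mu ** j) for j in range(num_classes)]

# Example: CIFAR-10-LT with imbalance ratio 100 -> counts from 5000 down to 50.
print(long_tailed_counts(5000, 10, 100.0))
```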

WebVision. WebVision li2017webvision is a large-scale real-world dataset with both label noise and class imbalance. It contains 2.4 million images, a sizeable fraction of which are mislabeled chen2021two. Following MentorNet jiang2018mentornet, we create the mini WebVision dataset by selecting the images of the top 50 classes, which exhibits a clear class imbalance. We test the model on the validation sets of WebVision and ImageNet deng2009imagenet.

Animal-10N. The Animal-10N dataset contains 5 pairs of confusable animals with a total of 55,000 images (50,000 for training and 5,000 for testing), equally distributed over all categories. The images are crawled from the web using the given labels as search keywords, which inevitably introduces a lot of noise; the noise rate is estimated at around 8%.

ImageNet-LT. ImageNet-LT contains 115.8K images distributed over 1,000 categories according to a Pareto distribution, where the number of images per class ranges from 5 to 1,280.

Appendix F Additional Training Details

Datasets CIFAR-10 CIFAR-100 Webvision Animal-10N ImageNet-LT
Classifier Optimizer SGD
Momentum 0.9
Weight Decay 5.00e-04 5.00e-04 1.00e-04 5.00e-04 1.00e-04
Learning rate 0.1 0.1 0.02 0.02 0.1
Learning scheduler Cosine Annealing
MetaNet Optimizer Adam
Weight Decay 0
Learning rate 3.00e-03
Learning scheduler Fixed
Others M0 0.5 0.5 0.5 0.5 -
M1 0.25 0.25 0.25 0.25 -
Epoch 300 300 150 100 90
Warmup Epoch 5 5 1 5 0
Batch Size 512 512 64 128 128
Rank Number 100 50 100 100 -
Table 7: Additional Training Details

All training details are presented in Table 7. For the meta net, we adopt the same settings in all experiments for generality: the Adam optimizer kingma2015adam with a fixed learning rate of 3e-3 and no weight decay.

CIFAR. For CIFAR, we follow Balanced-Softmax and train all classifiers for 300 epochs with batch size 512, using the same SGD optimizer with a momentum of 0.9 and a weight decay of 5e-4. The learning rate is initialized to 0.1 and controlled by a cosine annealing scheduler loshchilov2016sgdr. Following Balanced-Softmax, RandomCrop, RandomFlip, and AutoAugment are adopted for data augmentation. For a fair comparison, we apply the same data augmentation as AutoAugment cubuk2019autoaugment to all these methods except CurveNet jiang2021delving due to incompatibility.

WebVision. We train the Inception-ResNet V2 for 150 epochs using the SGD optimizer with momentum 0.9 and weight decay 1e-4. The warm-up stage lasts 1 epoch and the batch size is 64. The learning rate is initialized to 0.02 and controlled by a cosine annealing scheduler.

Animal-10N. We train the VGG19-BN for 100 epochs using the SGD optimizer with momentum 0.9 and weight decay 5e-4. The warm-up stage lasts 5 epochs and the batch size is 128. The learning rate is initialized to 0.02 and controlled by a cosine annealing scheduler.

ImageNet-LT. Following previous works liu2019large, we train the ResNet-10 for 90 epochs using the SGD optimizer with momentum 0.9 and weight decay 1e-4. There is no warm-up stage and the batch size is 128. The learning rate is initialized to 0.1 and controlled by a cosine annealing scheduler.

Appendix G Detailed Results

Here we provide the detailed accuracy for all settings. Tables 8 and 9 correspond to the CIFAR-10-N-LT and CIFAR-100-N-LT columns of Table 1 in the manuscript. As shown there, the performance of our last model is generally very close to that of our best model despite varying bias settings. In contrast, the last models of DivideMix and Balanced-Softmax degrade significantly compared with their corresponding best models, especially on CIFAR-10 with severe imbalance and noise (e.g., imbalance ratio 100 with noise rate 0.5). This indicates our method is much more resistant to overfitting on biased data than the other methods.

Tables 10, 11, 12 and 13 correspond to Tables 2, 3, 4 and 5 in the manuscript, respectively.

Imbalance Ratio 10 100 Average
Noise Rate 0.1 0.2 0.3 0.4 0.5 0.1 0.2 0.3 0.4 0.5
Cross Entropy Last 76.63 68.89 61.21 54.01 44.09 59.99 50.56 44.81 37.13 30.02 52.73
Best 77.14 74.35 72.22 68.28 59.65 63.45 53.45 49.49 45.53 40.76 60.43
DivideMix Last 88.11 86.95 66.42 73.78 73.85 63.08 61.14 24.31 24.13 20.09 58.19
Best 88.50 87.13 67.67 74.26 74.15 63.08 61.61 33.79 31.06 26.10 60.74
Balanced-Softmax Last 87.52 84.00 79.49 73.53 65.27 76.14 69.37 62.03 52.79 46.44 69.66
Best 87.62 84.50 82.84 79.09 75.90 78.63 73.29 70.89 67.57 63.40 76.37
DivideMix Last 88.23 86.96 78.02 79.17 76.89 76.63 75.72 46.82 53.25 54.33 71.60
+Balanced-Softmax Best 88.93 86.06 79.19 79.81 78.20 79.36 76.55 48.60 54.99 55.84 72.75
CurveNet Last 84.10 81.70 78.47 78.73 75.65 65.77 66.21 62.37 48.71 51.85 69.36
Best 84.87 84.62 79.98 81.33 78.37 67.55 68.72 63.71 51.63 52.84 71.36
HAR Last 86.46 84.27 81.78 79.55 78.07 78.60 75.05 72.08 65.48 63.90 76.52
Best 87.03 84.47 81.94 79.87 78.25 79.02 76.14 72.74 67.22 65.00 77.17
FaMUS Last 65.94 43.73 56.10 52.75 69.79 23.42 26.20 22.50 18.46 12.27 39.12
Best 68.82 52.28 59.10 53.48 73.01 34.09 37.55 32.23 30.26 28.98 46.98
ELR+ Last 85.21 85.71 84.13 81.16 78.91 64.76 61.41 56.42 51.31 44.05 69.31
Best 87.04 86.14 84.38 83.30 80.23 65.73 62.12 56.42 52.26 45.73 70.34
Ours Last 89.23 88.39 86.58 84.43 83.34 77.80 76.31 74.10 69.64 67.45 79.73
Best 89.44 88.46 86.72 84.73 83.71 78.96 76.64 76.17 70.37 70.26 80.55
Table 8: Detailed Test accuracy (%) on CIFAR-10-N-LT with varying imbalance and noise.
Imbalance Ratio 10 100 Average
Noise Rate 0.1 0.2 0.3 0.4 0.5 0.1 0.2 0.3 0.4 0.5
Cross Entropy Last 43.48 37.25 31.34 25.53 19.45 29.92 21.85 19.32 13.71 12.21 25.41
Best 43.95 37.64 32.18 29.44 23.87 30.52 22.13 19.58 14.59 12.81 26.67
DivideMix Last 54.17 51.92 50.44 45.02 43.43 36.31 35.68 34.10 33.19 27.22 41.15
Best 54.94 53.35 50.93 45.36 43.44 36.99 36.24 34.87 33.64 27.74 41.75
Balanced-Softmax Last 58.38 54.59 50.49 44.83 40.45 43.17 38.67 33.27 27.08 22.10 41.30
Best 58.62 54.73 50.66 45.63 40.56 43.50 38.67 33.62 28.05 24.19 41.82
DivideMix Last 56.37 54.80 54.83 51.29 48.89 43.26 42.42 40.46 37.83 30.95 46.11
+Balanced-Softmax Best 56.85 56.06 55.64 52.30 50.01 43.67 42.79 40.99 38.50 32.38 46.92
CurveNet Last 50.41 47.14 43.18 41.23 34.85 22.10 20.44 17.80 11.87 9.24 29.83
Best 52.73 51.93 47.56 44.08 39.74 25.26 21.35 18.72 13.60 12.20 32.72
HAR Last 58.88 55.43 52.57 46.01 43.96 42.67 39.39 34.43 29.43 24.94 42.77
Best 59.32 55.80 53.44 46.75 44.61 44.45 40.98 36.09 31.17 27.15 43.98
FaMUS Last 46.07 51.59 46.07 46.93 43.83 29.33 30.22 28.53 27.83 24.57 30.72
Best 47.03 52.05 46.41 47.88 44.30 29.66 30.31 28.50 27.24 24.85 30.81
ELR+ Last 52.48 51.30 46.24 39.98 34.91 33.01 28.10 24.92 22.11 16.54 34.96
Best 53.91 51.90 47.88 42.61 37.35 33.81 28.94 26.10 22.11 17.39 36.20
Ours Last 59.24 57.57 56.85 52.07 50.74 47.23 45.74 42.72 39.58 33.87 48.56
Best 59.52 57.85 57.32 52.66 51.26 47.55 45.82 43.54 39.96 34.30 48.98
Table 9: Detailed Test accuracy (%) on CIFAR-100-N-LT with varying imbalance and noise.
Datasets CIFAR-10 CIFAR-100
Noise Rate 20 40 60 20(Asym.) 40(Asym.) Average 20 40 60 Average
CE 86.98 77.52 73.63 83.60 77.85 79.92 60.38 46.92 31.82 46.37
SELFIE 86.39 82.23 74.81 - - - 55.71 51.14 43.85 50.23
PLC 86.40 71.72 65.22 90.23 85.40 79.79 59.66 49.24 33.18 47.36
NCT 95.00 87.00 73.22 91.51 93.00 87.95 67.65 57.97 45.01 56.88
Coteaching 93.83 91.74 57.65 93.23 90.78 85.45 70.81 62.65 41.55 58.34
CMW-Net 91.09 86.91 83.33 93.02 92.70 89.41 70.11 65.84 56.93 64.29
DivideMix 95.63 93.78 94.23 94.18 92.73 94.11 77.20 73.37 70.75 73.77
GJS 94.20 92.80 89.72 91.92 86.07 90.94 73.31 71.33 66.92 70.52
Ours 95.90 94.69 92.28 95.74 94.51 94.62 78.26 75.28 69.18 74.24
Table 10: Detailed accuracy (%) of PARes18 on CIFAR-N with different settings.
Datasets CIFAR-10-LT CIFAR-100-LT
Imbalance Ratio 10 20 50 100 Average 10 20 50 100 Average
CE LOSS 86.39 82.23 74.81 70.36 78.45 55.71 51.14 43.85 38.32 47.26
Focal Loss 86.66 82.76 76.71 70.38 79.13 55.78 51.95 44.32 38.41 47.62
CB Focal 87.49 84.36 79.27 74.57 81.42 57.99 52.59 45.32 39.6 48.88
LDAM-DRW 87.68 85.51 81.64 78.02 83.21 44.7 52.93 48.22 59.59 51.36
FaMUS 87.9 86.24 83.32 80.96 84.61 59 55.95 49.93 46.03 52.73
Balanced-Softmax 91.01 88.85 86.44 82.31 87.15 64.00 59.48 54.36 50.47 57.08
Ours 91.24 88.30 86.46 82.95 87.24 63.99 59.79 54.51 50.14 57.11
Table 11: Detailed accuracy (%) of ResNet on CIFAR-LT with different settings.
B H V Imbalance Ratio 10 100 Average
Noise Rate 0.1 0.2 0.3 0.4 0.5 0.1 0.2 0.3 0.4 0.5
Last 76.63 68.89 61.21 54.01 44.09 59.99 50.56 44.81 37.13 30.02 52.73
Best 77.14 74.35 72.22 68.28 59.65 63.45 53.45 49.49 45.53 40.76 60.43
Last 88.15 86.16 82.86 80.84 65.36 74.52 67.71 63.71 54.28 51.47 71.51
Best 88.37 86.37 83.39 81.88 75.70 77.08 72.80 71.67 67.67 63.46 76.84
Last 88.39 86.00 83.46 81.04 74.71 73.33 50.99 63.23 56.79 52.82 71.08
Best 88.51 86.15 83.90 82.01 77.68 76.49 71.28 71.03 68.04 66.05 77.11
Last 85.72 81.79 82.27 81.62 69.36 78.93 72.11 71.36 51.16 52.11 72.64
Best 87.61 82.44 83.05 82.28 69.81 79.64 73.27 71.85 59.65 59.98 74.96
Last 87.46 85.86 83.85 83.68 77.58 79.43 76.49 73.55 70.72 66.65 78.53
Best 87.79 86.24 84.77 83.91 78.08 80.44 77.91 75.13 72.56 68.92 79.58
Last 88.65 86.88 85.72 85.32 81.68 78.42 76.38 74.59 68.42 68.98 79.50
Best 89.06 87.18 86.00 85.41 82.06 79.35 77.36 75.46 71.51 70.13 80.35
Last 88.32 87.08 85.74 85.42 83.39 78.25 74.89 75.48 69.92 69.17 79.77
Best 89.44 88.46 86.72 84.73 83.71 78.96 76.64 76.17 70.37 70.26 80.55
Table 12: Detailed accuracy (%) of ablations on CIFAR-10-N-LT with different settings.
Datasets CIFAR-10-N-LT CIFAR-100-N-LT
Imbalance Ratio 10 50 100 Average 10 50 100 Average
Noise Rate 0.2 0.4 0.2 0.4 0.2 0.4 0.2 0.4 0.2 0.4 0.2 0.4
HAR Last 84.49 68.22 81.64 63.15 70.95 58.09 71.09 53.02 44.68 43.18 28.15 38.94 28.15 39.35
Best 84.70 68.88 82.56 63.84 71.12 58.68 71.63 53.45 45.44 43.74 28.59 39.37 28.59 39.86
Ours Last 90.06 87.20 82.41 76.08 74.41 69.80 79.99 54.53 45.01 44.19 32.26 40.50 30.43 41.15
Best 90.23 87.57 83.74 79.21 78.63 73.79 82.20 55.04 46.64 44.63 32.75 40.58 30.76 41.73
Table 13: Detailed accuracy (%) of PARes18 on CIFAR-N-LT with different settings.