The recent success of deep neural networks (DNNs) across various computer vision problems [18, 38, 17, 37] has emerged from access to large-scale, annotated datasets collected from our visual world [40, 30, 1]. Despite the several well-organized datasets used in research, e.g., CIFAR and ILSVRC, real-world datasets usually suffer from an expensive data acquisition process and labeling cost. This commonly leads a dataset to have a “long-tailed” label distribution [34, 43]. Such class-imbalanced datasets make the standard training of DNNs harder to generalize [44, 39, 9], particularly if one requires a class-balanced performance metric for practical reasons.
A natural approach to bypass this class-imbalance problem is to re-balance the training objective artificially with respect to the class-wise sample sizes. Two such methods are representative: (a) re-weighting the given loss function by a factor inversely proportional to the class-wise sample frequency [21, 25], and (b) re-sampling the given dataset so that the expected sampling distribution during training is balanced, either by “over-sampling” the minority classes [23, 8] or “under-sampling” the majority classes.
However, naïvely re-balancing the objective usually results in harsh over-fitting to minority classes, since it cannot, in essence, handle the lack of minority information. Several attempts have been made to alleviate this issue: Cui et al. proposed the concept of the “effective number” of samples as alternative weights in the re-weighting method. Cao et al. found that both re-weighting and re-sampling can be much more effective when applied at a later stage of training, in the case of neural networks. In the context of re-sampling, SMOTE is a widely-used variant of the over-sampling method that mitigates over-fitting via data augmentation, and several variants of SMOTE have been suggested accordingly [14, 15, 36]. A major drawback of these SMOTE-based methods is that they usually perform poorly when there are only a few samples in the minority classes, i.e., under the regime of “extreme” imbalance, because they synthesize a new minority sample only from the existing samples of the same class.
Another line of research attempts to prevent over-fitting with new regularization schemes in which minority classes are penalized more, where margin-based approaches generally suit well as a form of data-dependent regularizer [46, 9, 24, 4]. There have also been works that view the class-imbalance problem in the framework of active learning [10, 2] or meta-learning [44, 39, 41, 32].
In this paper, we revisit the over-sampling framework and propose a new way of generating minority samples, coined Major-to-minor Translation (M2m). In contrast to other over-sampling methods, e.g., SMOTE, which applies data augmentation to minority samples to mitigate the over-fitting issue, we attempt to generate minority samples in a completely different way. The proposed M2m does not use the existing minority samples for over-sampling. Instead, it uses the majority (non-minority) samples and translates them to the target minority class using another classifier independently trained on the given imbalanced dataset. Our key finding is that this method turns out to be very effective for learning more generalizable features in imbalanced learning: it does not overly use the minority samples, while simultaneously leveraging the richer information of the majority samples.
Our minority over-sampling method consists of three components to improve the sampling quality. First, we propose an optimization objective for generating synthetic samples: a majority sample can be translated into a synthetic minority sample by optimizing it, without affecting the performance on the majority class (even though the sample is labeled as the minority class). Second, we design a sample rejection criterion based on the observation that generation from a more-major class is preferable. Third, based on the proposed rejection criterion, we suggest an optimal distribution for sampling a majority seed to be translated in our generation process.
We evaluate our method on various imbalanced classification problems, covering synthetically imbalanced datasets from CIFAR-10/100 and ImageNet, and real-world imbalanced datasets including the CelebA, SUN397, Twitter and Reuters datasets. Despite its simplicity, our method significantly improves the balanced test accuracy compared to previous re-sampling or re-weighting methods across all the tested datasets. Our results even surpass those of LDAM, a current state-of-the-art margin-based method. Moreover, we found our method to be particularly effective under “extreme” imbalance: on Reuters, the dataset with the most severe imbalance, we could relatively improve the balanced accuracy upon both standard training and LDAM.
2 M2m: Major-to-minor translation
We consider a classification problem with K classes from a dataset D = {(x_i, y_i)}, where x and y denote an input and the corresponding class label, respectively. Let f be a classifier designed to output K logits, which we want to train against the class-imbalanced dataset D. We let N denote the total sample size of D, where N_k is that of class k. Without loss of generality, we assume N_1 ≥ N_2 ≥ ⋯ ≥ N_K. In class-imbalanced classification, the class-conditional data distributions p(x | y) are assumed to be invariant across training and test time, but they have different prior distributions, say p_train(y) and p_test(y), respectively: p_train(y) is highly imbalanced while p_test(y) is usually assumed to be the uniform distribution. The primary goal of class-imbalanced learning is to train f from D so that it generalizes well under p_test(y) compared to standard training, e.g., empirical risk minimization (ERM) with an appropriate loss function ℒ:

f_ERM := argmin_f E_{(x, y) ∼ D}[ℒ(f; x, y)].     (1)
Our method is primarily based on the over-sampling technique, a traditional and principled way to balance the class-imbalanced training objective by sampling minority classes more frequently. In other words, we assume a “virtually balanced” training dataset D_bal made from D, such that each class k has N_1 − N_k more samples, and the classifier f is trained on D_bal instead of D.
A key challenge in over-sampling is to prevent over-fitting on minority classes, as the modified objective is essentially much biased toward a few samples of minority classes. In contrast to most prior works that focus on performing data augmentation directly on minority samples to mitigate this issue [5, 32, 36], we attempt to augment minority samples in a completely different way: our method does not use the minority samples for the augmentation, but the majority samples.
2.1 Overview of M2m
Consider a scenario of training a neural network f on a class-imbalanced dataset D. The proposed Major-to-minor Translation (M2m) attempts to construct a new balanced dataset D_bal for training f, by adding synthetic minority samples that are translated from other samples of (relatively) majority classes. There could be multiple ways to perform this “Major-to-minor” translation. In particular, recent progress on cross-domain generation via generative adversarial networks [47, 6, 35] has made this approach more attractive, provided that the considerable computational cost of additional training is acceptable. In this paper, on the other hand, we explore a much simpler and more efficient approach: we translate a majority sample by optimizing it to maximize the target minority confidence of another baseline classifier g. Here, we assume that g is a neural network pre-trained on D so that it performs well (at least) on the imbalanced training dataset, e.g., via standard ERM training. This implies that g may be over-fitted to minority classes and does not necessarily generalize well on the balanced test dataset. We found that this mild assumption on g is fairly sufficient to capture the information in the small minority classes, and could generate surprisingly useful synthetic minority samples by utilizing the diversity of majority samples. On the other hand, f is the target network that we aim to train to perform well on the balanced testing criterion.
During the training of f, M2m utilizes the given classifier g to generate new minority samples, and the generated samples are added to D to construct D_bal on the fly. To obtain a single synthetic minority sample x* of class k, our method solves an optimization problem starting from another training sample x_0 of a (relatively) major class k_0:

x* := argmin_x ℒ(g; x, k) + λ · f_{k_0}(x),     (2)

where ℒ denotes the cross-entropy loss, f_{k_0}(x) is the logit of f for class k_0, and λ > 0 is a hyperparameter. In other words, our method “translates” a majority seed x_0 into x*, so that g confidently classifies it as the minority class k. The generated sample x* is then labeled as class k and fed into f for training, so that f performs better on D_bal and its prediction matches that of g. We do not force f in (2) to classify x* as class k as well, but we do restrict f to have lower confidence on the original class k_0 by imposing the regularization term λ · f_{k_0}(x). Here, this regularization on the logit reduces the risk of x* being labeled as k while still containing significant features of k_0 in the viewpoint of f. Intuitively, one can regard the overall process as teaching f to learn novel minority features which g considers significant, i.e., via an extension of the decision boundary using the knowledge of g. Figure 2 illustrates the basic idea of our method.
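As a concrete illustration, the translation objective (2) can be sketched with toy linear softmax classifiers standing in for the deep networks f and g; the weight matrices, step size, iteration count, and λ value below are illustrative assumptions, not the paper's actual models or hyperparameters:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def translate(x0, k0, k, W_g, W_f, lam=0.01, eta=0.1, steps=100):
    """Sketch of the M2m generation step: minimize
    CE(g(x), k) + lam * f_{k0}(x) over x by gradient descent from the
    majority seed x0, for toy linear classifiers g(x) = W_g @ x and
    f(x) = W_f @ x (deep networks in the paper)."""
    x = x0.astype(float).copy()
    onehot_k = np.zeros(W_g.shape[0])
    onehot_k[k] = 1.0
    for _ in range(steps):
        # d/dx of the cross-entropy of g(x) toward the target class k
        grad_ce = W_g.T @ (softmax(W_g @ x) - onehot_k)
        # d/dx of the penalty lam * f_{k0}(x) on the original-class logit of f
        grad_reg = lam * W_f[k0]
        x -= eta * (grad_ce + grad_reg)
    return x
```

The synthesized sample would then be labeled as class k when training f; with deep networks, the same loop amounts to a few gradient-descent steps on the input.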
2.2 Underlying intuition on M2m
One may understand our method better by considering the case when g is an “oracle” (possibly the Bayes-optimal) classifier, e.g., (roughly) a human. Here, solving (2) essentially requires a transition of the original input x_0, classified as k_0 with high confidence, to another class k with respect to g: this would “erase” the features related to class k_0 and “add” those of class k. Hence, in this case, our process corresponds to collecting more in-distribution minority data, which may be argued to be the best way one could resolve the class-imbalance problem.
An intriguing point here, however, is that neural network models are very far from this ideal behavior, even when they achieve super-human performance. Instead, when f and g are neural networks, (2) often finds x* that is very close to x_0, i.e., similar to the phenomenon of adversarial examples [42, 12]. Nevertheless, we found that our method still effectively improves the generalization of minority classes even in such cases. This observation is, in some sense, aligned with a recent claim that adversarial perturbation is not a “bug” of neural networks, but a “generalizable” feature.
In this paper, we hypothesize that this counter-intuitive effectiveness of our method comes mainly from two aspects: (a) the sample diversity of the majority dataset is utilized to prevent over-fitting on the minority classes, and (b) another classifier g is enough to capture the information in the small minority dataset. In this respect, adversarial examples from a majority to a minority class can be regarded as a natural way to leverage the diverse features of majority examples to improve the generalization of the minority classes. It is also notable that our over-sampling method does not completely replace the existing dataset. Instead, our method only augments the minority classes, and our finding is that this augmentation turns out to be far more effective than naïvely duplicating minority examples as done by standard over-sampling. We further discuss a more detailed analysis to verify these claims by performing an extensive ablation study in Section 3.4.
2.3 Detailed components of M2m
Sample rejection criterion.
An important factor that affects the quality of the synthetic minority samples in our method is the quality of g, especially for the original class k_0: a better g would more effectively “erase” the important features of k_0 during the translation, thereby making the resulting minority samples more reliable. In practice, however, g is not that perfect: the synthetic samples may still contain some discriminative features of the original class k_0, in which case they may even harm the performance of f. This risk of “unreliable” generation becomes harsher when N_{k_0} is small, as we assume that g is also trained on the given imbalanced dataset D.
To alleviate this risk, we consider a simple criterion for rejecting each of the synthetic samples randomly with a probability depending on k_0 and k:

P(Reject | k_0, k) := β^{N_{k_0} − N_k},     (3)

where β ∈ [0, 1) is a hyperparameter which controls the reliability of g: the smaller β, the more reliable g is regarded to be. In particular, the larger the sample-size gap N_{k_0} − N_k between the seed class and the target class, the more likely the synthetic sample is accepted. This exponential modeling of the rejection probability is motivated by the effective number of samples, a heuristic recently proposed to model the observation that the impact of adding a single data point exponentially decreases for larger datasets. When a synthetic sample is rejected, we simply replace it by an existing minority sample of class k from the original dataset D to obtain the balanced dataset D_bal.
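A minimal sketch of this criterion (the helper names, class counts, and β value below are illustrative assumptions):

```python
import numpy as np

def reject_prob(n_k0: int, n_k: int, beta: float) -> float:
    """Rejection probability beta ** (N_k0 - N_k) for a synthetic sample
    translated from seed class k0 (N_k0 samples) to target class k (N_k)."""
    return beta ** (n_k0 - n_k)

def is_accepted(n_k0, n_k, beta, rng):
    """Accept the synthetic sample with probability 1 - reject_prob(...)."""
    return rng.random() >= reject_prob(n_k0, n_k, beta)
```

Note that a larger gap N_{k_0} − N_k drives the rejection probability toward zero, so translations from the most frequent classes are almost always kept.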
Optimal seed sampling.
Another design choice in our method is how to choose a (majority) seed sample x_0 with class k_0 for each generation in (2). Based on the rejection criterion proposed in (3), we design a sampling distribution Q(k_0 | k) for selecting the class k_0 of the initial point x_0 given the target class k, by considering two aspects: (a) Q maximizes the acceptance probability under our rejection criterion, and (b) Q chooses classes as diverse as possible, i.e., the entropy H(Q) is maximized. Namely, we are interested in the following optimization:

max_Q  E_{k_0 ∼ Q}[log P(Accept | k_0, k)] + H(Q).     (4)

It is elementary to check that Q(k_0 | k) ∝ P(Accept | k_0, k) is the solution of the above optimization. Hence, from the rejection probability (3), we choose:

Q(k_0 | k) ∝ 1 − β^{N_{k_0} − N_k}.     (5)
Once k_0 is selected, a seed x_0 is sampled uniformly at random among the N_{k_0} training samples of class k_0. The overall procedure of M2m is summarized in Algorithm 1.
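The seed-class distribution of (5) can be sketched as follows (the function name and example class counts are illustrative; β is the hyperparameter of (3)):

```python
import numpy as np

def seed_class_dist(counts, k, beta):
    """Q(k0 | k) proportional to the acceptance probability
    1 - beta ** (N_k0 - N_k), normalized over candidate seed classes."""
    counts = np.asarray(counts, dtype=float)
    accept = 1.0 - beta ** (counts - counts[k])
    accept[k] = 0.0                       # never seed from the target class itself
    accept = np.clip(accept, 0.0, None)   # classes rarer than k get zero mass
    return accept / accept.sum()
```

Classes much larger than the target class thus receive most of the sampling mass, while still spreading probability across several majority classes.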
Practical implementation via re-sampling.
In practice, when training a neural network f, e.g., via stochastic gradient descent (SGD) with mini-batch sampling, M2m is implemented using batch-wise re-sampling. More precisely, in order to simulate the generation of N_1 − N_k samples for every class k, we perform the generation with probability (N_1 − N_{y_i}) / N_1 for each index i in a given class-balanced mini-batch B (obtaining such a class-balanced mini-batch can be done via standard re-sampling). For a single generation at index i, we first sample a seed class k_0 following (5), repeating until a valid seed class is found, and select a seed x_0 of class k_0 randomly inside B. Then, we solve the optimization (2) from x_0 toward class k via gradient descent for a fixed number of iterations T with a step size η. We accept the resulting sample x* only if its loss ℒ(g; x*, k) is less than a threshold γ for stability. Finally, if accepted, we replace x_i in B by x*.
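The batch-wise simulation can be sketched as below: each sample of class k in a class-balanced mini-batch triggers a generation step with probability (N_1 − N_k)/N_1, so that on average the N_1 − N_k missing samples of class k are synthesized (the function name and the example counts are illustrative):

```python
import numpy as np

def generation_probs(counts):
    """Per-sample probability (N_1 - N_k) / N_1 of replacing a sample of
    class k in a class-balanced mini-batch by a synthetic one."""
    counts = np.asarray(counts, dtype=float)
    n1 = counts.max()  # size of the largest class, N_1
    return (n1 - counts) / n1
```

The largest class is never replaced (probability 0), while the rarest classes are replaced by synthetic samples almost every time they appear.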
3 Experiments

We evaluate our method on various class-imbalanced classification tasks: synthetically-imbalanced variants of CIFAR-10/100, ImageNet-LT (results on ImageNet-LT can be found in the supplementary material), CelebA, SUN397, Twitter, and Reuters datasets (code is available at https://github.com/alinlab/M2m). Figure 3 illustrates the class-wise sample distributions of the datasets considered in our experiments. More details on the tested datasets are given in the supplementary material. To evaluate the classification performance of the models on the balanced test distribution, we mainly report two popular metrics: the balanced accuracy (bACC) [21, 44] and the geometric mean score (GM) [27, 3], which are defined as the arithmetic and geometric mean, respectively, over the class-wise sensitivity (i.e., recall). We remark that bACC is essentially equivalent to the standard accuracy metric for balanced datasets. All the values and error bars in this section are the mean and standard deviation across three random trials, respectively. Overall, our results clearly demonstrate that minority synthesis via translating from the majority consistently improves the efficiency of over-sampling, in terms of significant improvement in the generalization of minority classes compared to other re-sampling baselines, across all the tested datasets. We also perform an ablation study to verify the effectiveness of our main ideas.
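The two reported metrics can be sketched directly from their definitions, as the arithmetic and geometric means of the per-class recalls (the function names are illustrative):

```python
import numpy as np

def class_recalls(y_true, y_pred, num_classes):
    """Per-class sensitivity (recall): fraction of each class predicted correctly."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return np.array([np.mean(y_pred[y_true == c] == c) for c in range(num_classes)])

def balanced_accuracy(y_true, y_pred, num_classes):
    """bACC: arithmetic mean of the class-wise recalls."""
    return float(class_recalls(y_true, y_pred, num_classes).mean())

def geometric_mean_score(y_true, y_pred, num_classes):
    """GM: geometric mean of the class-wise recalls."""
    r = class_recalls(y_true, y_pred, num_classes)
    return float(np.prod(r) ** (1.0 / num_classes))
```

Unlike plain accuracy, both metrics weight every class equally, so a classifier that ignores the minority classes is penalized heavily (GM even collapses to zero if any class has zero recall).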
3.1 Experimental setup
We consider a wide range of baseline methods, as listed in what follows: (a) empirical risk minimization (ERM): training on the cross-entropy loss without any re-balancing; (b) re-sampling (RS): balancing the objective via a different sampling probability for each sample; (c) SMOTE: a variant of re-sampling with data augmentation; (d) re-weighting (RW): balancing the objective via different weights on the sample-wise loss; (e) class-balanced re-weighting (CB-RW): a variant of re-weighting that uses the inverse of the effective number of samples for each class, defined as (1 − β^{N_k})/(1 − β); (f) deferred re-sampling (DRS) and (g) deferred re-weighting (DRW): re-sampling and re-weighting deferred until a later stage of training, respectively; (h) focal loss (Focal): the objective is up-weighted for relatively hard examples to focus more on the minority; (i) label-distribution-aware margin (LDAM): the classifier is trained to impose a larger margin on minority classes. Roughly, the considered baselines can be classified into three categories: (i) “re-sampling” based methods - (b, c, f), (ii) “re-weighting” based methods - (d, e, g), and (iii) different loss functions - (a, h, i).
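For instance, the CB-RW baseline weights each class by the inverse of its effective number (1 − β^{N_k})/(1 − β); a minimal sketch (the normalization convention and example counts are illustrative assumptions):

```python
import numpy as np

def class_balanced_weights(counts, beta=0.9999):
    """Per-class loss weights proportional to the inverse of the
    'effective number' of samples (1 - beta**N_k) / (1 - beta)."""
    counts = np.asarray(counts, dtype=float)
    effective_num = (1.0 - beta ** counts) / (1.0 - beta)
    w = 1.0 / effective_num
    return w * len(counts) / w.sum()  # normalize to sum to the number of classes
```

As β → 1, the effective number approaches the raw count N_k (plain inverse-frequency re-weighting); smaller β discounts large classes more aggressively.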
We train every model via stochastic gradient descent (SGD) with momentum. The initial learning rate and the “step decay” schedule across datasets are specified in the supplementary material. Although it did not affect our method much, we also adopt the “linear warm-up” learning rate strategy in the first 5 epochs, as the performance of some baseline methods, e.g., re-weighting, highly depends on the use of this strategy. For CIFAR-10/100 and CelebA, we train ResNet-32 for 200 epochs with mini-batch size 128 and weight decay. For SUN397, the pre-activation ResNet-18 model is used instead (we remark this model is larger than the ResNet-32 used for the CIFAR and CelebA datasets, as it has roughly 4× more channels). We ensure that all the input images are normalized over the training dataset and have a size of 32×32, either by cropping or re-sizing, to be compatible with the given architectures. For the Twitter and Reuters datasets, we train 2-layer fully-connected networks for 15 epochs with mini-batch size 64 and weight decay.
Details on M2m.
When our method is applied, we use another classifier g of the same architecture as f that is pre-trained on the given (imbalanced) dataset via standard ERM training. Also, in a similar manner to DRS, we apply deferred scheduling to our method, i.e., we start to apply our method after standard ERM training for a fixed number of epochs. The actual scheduling across datasets is specified in the supplementary material. We choose the hyperparameters λ, β, and γ of our method from a fixed set of candidates based on the validation set. Unless otherwise stated, we fix the number of iterations T and the step size η when performing a single generation step.
3.2 Long-tailed CIFAR datasets
We consider a “synthetically long-tailed” variant of the CIFAR datasets (CIFAR-LT-10/100) in order to evaluate our method at various levels of imbalance, where the original datasets are class-balanced. To simulate the long-tailed distribution that frequently appears in imbalanced datasets, we control the imbalance ratio ρ := N_1/N_K and artificially reduce the training sample size of each class except the first, so that: (a) N_1/N_K equals ρ, and (b) N_k in between N_1 and N_K follows an exponential decay across k. We keep the test dataset unchanged during this process, i.e., it is still perfectly balanced, so measuring accuracy on this dataset is equivalent to measuring the balanced accuracy. We consider two imbalance ratios each for CIFAR-LT-10 and 100. See Figures 3(a) and 3(b) for a detailed illustration of the sample distribution.
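The exponential decay of class sizes can be sketched as N_k = N_1 · ρ^{−(k−1)/(K−1)} for k = 1, …, K (the function name and the example sizes are illustrative):

```python
import numpy as np

def long_tailed_sizes(n_max, num_classes, rho):
    """Class sizes decaying exponentially from N_1 = n_max down to N_1 / rho."""
    k = np.arange(num_classes)  # k = 0, ..., K-1 (0-indexed)
    return np.round(n_max * rho ** (-k / (num_classes - 1))).astype(int)
```

The first class keeps its full size, the last is reduced by the imbalance ratio ρ, and intermediate classes interpolate exponentially.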
Table 1 summarizes the main results. Overall, the results show that our method consistently improves bACC by a large margin across all the tested baselines. These results even surpass the “LDAM+DRW” baseline, which is the state-of-the-art to the best of our knowledge. Moreover, we point out that, in most cases, our method could further improve bACC when applied on top of the LDAM training scheme (see “LDAM+M2m”): this indicates that the performance gain from our method is fairly orthogonal to that of LDAM, i.e., the margin-based approach, which suggests a new promising direction for improving generalization when a neural network model suffers from a small-data problem.
3.3 Real-world imbalanced datasets
We further verify the effectiveness of M2m on four well-known, naturally imbalanced datasets: CelebA, SUN397, Twitter, and Reuters. More detailed information on each of these datasets is presented in Figure 3 and the supplementary material.
CelebA is originally a multi-labeled dataset; we port it to a 5-way classification task by keeping only the samples with five non-overlapping labels on hair colors. We also subsample the full dataset while maintaining the imbalance ratio, in an attempt to make the task more difficult. We denote the resulting dataset by CelebA-5.
Although the Twitter and Reuters datasets are from natural language processing, we also evaluate our method on them to test its effectiveness under much more extreme imbalance. Here, we remark that the imbalance ratios of these two datasets are about 150 and 710, respectively, which are much higher than those of the other image datasets tested. In the case of Reuters, we exclude the classes having less than 5 samples in the test set for more reliable evaluation, resulting in a dataset of 36 classes.
Table 2 shows the results. Again, M2m performs best among the baseline methods, demonstrating the effectiveness of our method under natural imbalance, as well as the wider applicability of our algorithm beyond image classification. Remarkably, the significant gains on the Reuters dataset compared to the others suggest that our method can be even more effective under a regime of “extremely” imbalanced datasets, as Reuters has a much larger imbalance ratio than the others.
3.4 Ablation study
We conduct an extensive ablation study to present a detailed analysis of the proposed method. All the experiments in this section are performed with ResNet-32 models trained on CIFAR-LT-10. We additionally report the balanced test accuracy over majority and minority classes, namely “Major” and “Minor” respectively, to further identify the relative impact on these two groups separately. We divide the whole set of classes into “majority” and “minority” classes, so that the majority classes consist of the top-k frequent classes, where k is the minimum number of classes whose cumulative sample count exceeds a fixed fraction of the training set, and the minority classes are the remaining ones. We provide more discussion in the supplementary material.
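The majority/minority split described above can be sketched as follows; the 50% coverage threshold used as a default here is an illustrative assumption, not necessarily the exact fraction used in the paper:

```python
import numpy as np

def split_major_minor(counts, coverage=0.5):
    """Return (majority, minority) class indices: the majority group is the
    minimal set of most-frequent classes whose cumulative share of training
    samples reaches the `coverage` threshold."""
    counts = np.asarray(counts, dtype=float)
    order = np.argsort(-counts)  # classes sorted by descending frequency
    cum = np.cumsum(counts[order]) / counts.sum()
    n_major = int(np.searchsorted(cum, coverage)) + 1
    return order[:n_major].tolist(), order[n_major:].tolist()
```

This yields a deterministic split from the training-set class counts alone, so the per-group accuracies are comparable across all the ablated methods.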
| # Seeds | bACC (%) | GM (%) |
|---------|----------|--------|
| 10 | 74.9±0.29 (−4.34%) | 73.7±0.33 (−5.27%) |
| 50 | 76.2±0.30 (−2.68%) | 75.3±0.29 (−3.21%) |
| 100 | 76.5±0.34 (−2.30%) | 75.6±0.41 (−2.83%) |
| 200 | 76.7±0.51 (−2.04%) | 75.9±0.59 (−2.44%) |
| 500 | 77.4±0.38 (−1.15%) | 76.8±0.31 (−1.29%) |
| Full | 78.3±0.16 (−0.00%) | 77.8±0.16 (−0.00%) |
Diversity of seed samples.
In Section 2.1, we hypothesize that the effectiveness of our method mainly comes from utilizing the rich diversity of the majority samples to prevent over-fitting to the minority classes. To verify this, we consider an ablation in which the candidate “seed samples” are limited: more concretely, we restrict the seed-sample pool per class to a fixed subset of the training set, constructed before training f. In Table 3, the accuracy on minority classes progressively increases as the seed-sample pools become more diverse. This clear trend indicates that M2m makes use of the diversity of majority classes to prevent over-fitting to the minority classes.
The effect of the regularizer λ · f_{k_0}(x).
In the optimization objective (2) of the generation step in M2m, we impose a regularization term λ · f_{k_0}(x) to improve the quality of the synthetic samples: they might confuse f if they still contain important features of the original class k_0 from the viewpoint of f. To verify the effect of this term, we consider an ablation in which λ is set to 0 and compare the performance to the original method. As reported in Table 4, we found a certain level of degradation in the balanced test accuracy under this ablation, which shows the effectiveness of the proposed regularization.
Over-sampling from scratch.
As specified in Section 3.1, we use the “deferred” scheduling for our method by default, i.e., we start to apply our method after standard ERM training for a fixed number of epochs. We also consider a simple ablation where this strategy is not used, namely “M2m-RS”. The results in Table 4 show that M2m-RS still outperforms all other baselines (reported in Table 1) except those that use deferred scheduling, i.e., DRS and DRW, which further verifies the effectiveness of our method.
Labeling as the target class.
Our primary assumption on the pre-trained classifier g does not require g itself to generalize well on the minority classes (see Section 2.1). This implies that solving (2) with g may not end up with a synthetic sample that contains generalizable features of the target minority class. To examine how much the generated samples are correlated with the target classes, we consider another ablation upon M2m-RS (here, we opt out any potential effect of using DRS, for a clearer evaluation): instead of labeling the generated sample as the target class, the ablated method “M2m-RS-Rand” labels it with a “random” class chosen from all the possible classes (except for the target and original classes). The results shown in Table 4 indicate that M2m-RS-Rand generalizes much worse than its counterpart M2m-RS on the minority classes, which confirms that the correctly-labeled synthetic samples indeed improve the generalization of the minority classes.
Comparison of t-SNE embeddings.
To further validate the effectiveness of our method, we visualize and compare the penultimate features learned by various training methods (including ours) using t-SNE. Each embedding is computed from a randomly-chosen subset of training samples in CIFAR-LT-10, consisting of 50 samples per class. Figure 4 illustrates the results, showing that the embedding from our training method (M2m) has much more separable features than the other methods: one can successfully distinguish each cluster under the M2m embedding (even for the minority classes), while the others exhibit obscure regions.
Comparison of cumulative false positives.
In Figure 5, we plot how the number of false positive (FP) samples, i.e., the sum of off-diagonal entries in the confusion matrix, increases when summed over classes, from the most frequent class to the least frequent one. Here, FP_k indicates the number of samples misclassified into class k in the test set. We compute each plot on the balanced test sets of CIFAR-LT-10/100, so a well-trained classifier would show a plot close to linear: it indicates the classifier errs more evenly over the classes. Overall, one can see that the curve produced by our method stays consistently below the others and is much closer to linear. This implies that our method makes fewer false positives and, even better, that they are more uniformly distributed over the classes. This is a desirable property in the context of imbalanced learning.
The use of adversarial examples.
As mentioned in Section 2.2, the generation in M2m often ends up with a synthetic minority sample x* that is very close to the original seed x_0 (before translation), much like an adversarial example. This indeed happens when f and g are neural networks, as assumed here (i.e., ResNet-32), as illustrated in Figure 6. To understand more about how such adversarial perturbations affect our method, we consider a simple ablation, which we call “M2m-Clean”: recall that our method synthesizes a minority sample x* from a seed majority sample x_0. This ablation uses the “clean” x_0 itself instead of x* for over-sampling. Under an identical training setup, we observe a significant reduction in the balanced accuracy of M2m-Clean compared to the original M2m (see Table 4). This observation reveals that the adversarial perturbations, despite being small noise, are crucial to making our method work.
We propose a new over-sampling method for imbalanced classification, called Major-to-minor Translation (M2m). We found that the diversity of majority samples can greatly help class-imbalanced training, even with a simple translation method using a pre-trained classifier. This suggests a promising way to overcome the long-standing class-imbalance problem, and exploring more powerful methods for performing this Major-to-minor translation, e.g., CycleGAN, would be interesting future research. The problems we explored in this paper also lead us to the essential question of whether an adversarial perturbation can be a good feature. Our findings suggest that it can, at least for the purpose of imbalanced learning, where the minority classes suffer from over-fitting due to insufficient data. We believe our method could open up a new direction of research in both imbalanced learning and adversarial examples.
This work was supported by Samsung Electronics and Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)).
-  Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, and Sudheendra Vijayanarasimhan. YouTube-8M: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675, 2016.
-  Josh Attenberg and Seyda Ertekin. Class imbalance and active learning. Imbalanced Learning: Foundations, Algorithms, and Applications, 2013.
-  Paula Branco, Luís Torgo, and Rita P Ribeiro. A survey of predictive modeling on imbalanced domains. ACM Computing Surveys (CSUR), 49(2):31, 2016.
-  Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning imbalanced datasets with label-distribution-aware margin loss. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
-  Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. SMOTE: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 2002.
-  Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 8789–8797, 2018.
-  Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie. Class-balanced loss based on effective number of samples. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
-  Yin Cui, Yang Song, Chen Sun, Andrew Howard, and Serge Belongie. Large scale fine-grained categorization and domain-specific transfer learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
-  Qi Dong, Shaogang Gong, and Xiatian Zhu. Imbalanced deep learning by minority class incremental rectification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
-  Seyda Ertekin, Jian Huang, Leon Bottou, and Lee Giles. Learning on the border: active learning in imbalanced data classification. In Proceedings of the ACM International Conference on Information and Knowledge Management (CIKM), 2007.
-  Kevin Gimpel, Nathan Schneider, Brendan O’Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A Smith. Part-of-speech tagging for twitter: Annotation, features, and experiments. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2011.
-  Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR), 2015.
-  Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
-  Hui Han, Wen-Yuan Wang, and Bing-Huan Mao. Borderline-SMOTE: a new over-sampling method in imbalanced data sets learning. In International Conference on Intelligent Computing (ICIC), pages 878–887. Springer, 2005.
-  Haibo He, Yang Bai, Edwardo A Garcia, and Shutao Li. Adasyn: Adaptive synthetic sampling approach for imbalanced learning. In Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN), pages 1322–1328. IEEE, 2008.
-  Haibo He and Edwardo A Garcia. Learning from imbalanced data. IEEE Transactions on Knowledge and Data Engineering (TKDE), 2008.
-  Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In Proceedings of the IEEE international Conference on Computer Vision (ICCV), pages 2961–2969, 2017.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In Proceedings of the European Conference on Computer Vision (ECCV), 2016.
-  Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In International Conference on Learning Representations (ICLR), 2016.
-  Chen Huang, Yining Li, Chen Change Loy, and Xiaoou Tang. Learning deep representation for imbalanced classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
-  Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
-  Nathalie Japkowicz. The class imbalance problem: Significance and strategies. In Proceedings of the International Conference on Artificial Intelligence (ICAI), 2000.
-  Salman Khan, Munawar Hayat, Syed Waqas Zamir, Jianbing Shen, and Ling Shao. Striking the right balance with uncertainty. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
-  Salman H Khan, Munawar Hayat, Mohammed Bennamoun, Ferdous A Sohel, and Roberto Togneri. Cost-sensitive learning of deep feature representations from imbalanced data. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2017.
-  Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, Department of Computer Science, University of Toronto, 2009.
-  Miroslav Kubat and Stan Matwin. Addressing the curse of imbalanced training sets: One-sided selection. In Proceedings of the International Conference on Machine Learning (ICML), pages 179–186, 1997.
-  David D Lewis, Yiming Yang, Tony G Rose, and Fan Li. Rcv1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 2004.
-  Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
-  Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In Proceedings of the European Conference on Computer Vision (ECCV), pages 740–755. Springer, 2014.
-  Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015.
-  Ziwei Liu, Zhongqi Miao, Xiaohang Zhan, Jiayun Wang, Boqing Gong, and Stella X Yu. Large-scale long-tailed recognition in an open world. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
-  Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 2008.
-  Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten. Exploring the limits of weakly supervised pretraining. In Proceedings of the European Conference on Computer Vision (ECCV), 2018.
-  Sangwoo Mo, Minsu Cho, and Jinwoo Shin. InstaGAN: Instance-aware image-to-image translation. In International Conference on Learning Representations (ICLR), 2019.
-  Sankha Subhra Mullick, Shounak Datta, and Swagatam Das. Generative adversarial minority oversampling. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), October 2019.
-  Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. PointNet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 652–660, 2017.
-  Joseph Redmon and Ali Farhadi. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
-  Mengye Ren, Wenyuan Zeng, Bin Yang, and Raquel Urtasun. Learning to reweight examples for robust deep learning. In Proceedings of the International Conference on Machine Learning (ICML), 2018.
-  Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.
-  Jun Shu, Qi Xie, Lixuan Yi, Qian Zhao, Sanping Zhou, Zongben Xu, and Deyu Meng. Meta-weight-net: Learning an explicit mapping for sample weighting. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
-  Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR), 2014.
-  Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, and Serge Belongie. The inaturalist species classification and detection dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
-  Yu-Xiong Wang, Deva Ramanan, and Martial Hebert. Learning to model the tail. In Advances in Neural Information Processing Systems (NeurIPS), 2017.
-  Jianxiong Xiao, James Hays, Krista A Ehinger, Aude Oliva, and Antonio Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010.
-  Xiao Zhang, Zhiyuan Fang, Yandong Wen, Zhifeng Li, and Yu Qiao. Range loss for deep face recognition with long-tailed training data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
-  Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 2223–2232, 2017.
Appendix A Details on the datasets
CIFAR-10/100 datasets consist of 60,000 RGB images, 50,000 for training and 10,000 for testing. Each image corresponds to one of 10 and 100 classes, respectively. In our experiments, we construct "synthetically long-tailed" variants of CIFAR-10/100, namely CIFAR-LT-10/100. We hold out 10% of the test set to construct a validation set, and use the remainder for testing. We use ResNet-32 with a mini-batch size of 128 and apply weight decay. We train the network for 200 epochs with an initial learning rate of 0.1, following the learning rate schedule of prior work for fair comparison: the learning rate is decayed by a constant factor twice in the later stage of training. When the deferred scheduling is used, e.g., in DRS, DRW, and our method, it is applied after 160 epochs of standard training.
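The "synthetically long-tailed" splits above are commonly built by subsampling each class with exponentially decaying sample counts, from a chosen maximum down to a chosen imbalance ratio. A minimal sketch of this recipe follows; the function and parameter names are ours, not from the paper.

```python
import numpy as np

def longtail_counts(n_max, n_classes, imbalance_ratio):
    """Per-class sample counts decaying exponentially from n_max down to
    roughly n_max / imbalance_ratio, a common recipe for long-tailed CIFAR."""
    mu = (1.0 / imbalance_ratio) ** (1.0 / (n_classes - 1))
    return [round(n_max * mu ** i) for i in range(n_classes)]

def make_longtail_subset(labels, counts, seed=0):
    """Return indices so that class c keeps exactly counts[c] samples."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    keep = []
    for c, n in enumerate(counts):
        idx = np.where(labels == c)[0]
        keep.extend(rng.choice(idx, size=n, replace=False))
    return np.array(keep)
```

For example, `longtail_counts(5000, 10, 100)` yields counts decaying from 5,000 to 50 across the 10 CIFAR-10 classes.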
CelebFaces Attributes (CelebA) dataset is a multi-labeled face-attributes dataset, originally composed of 202,599 RGB face images with 40 binary attribute annotations per image. We convert CelebA into a 5-way classification task by keeping only the samples carrying exactly one of five non-overlapping hair-color labels: "blonde", "black", "bald", "brown", and "gray", in a similar manner to prior work. We denote the resulting dataset by CelebA-5. We hold out 50 and 100 samples per class for validation and testing, respectively. We use ResNet-32 with a mini-batch size of 128 and apply weight decay. We train the network for 90 epochs with an initial learning rate of 0.1, decayed by a factor of 0.1 at epochs 30 and 60. When the deferred scheduling is used, it is applied after 60 epochs of standard training.
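The filtering step above can be sketched as follows. The attribute names are the standard CelebA annotation keys (an assumption on the data format; the row layout and function name are ours).

```python
# The five hair-color attributes kept for CelebA-5 (standard CelebA keys).
HAIR = ["Blond_Hair", "Black_Hair", "Bald", "Brown_Hair", "Gray_Hair"]

def to_celeba5(attr_rows):
    """Keep samples carrying exactly one of the five hair-color attributes,
    and relabel them with the index (0..4) of that attribute.

    attr_rows: iterable of (image_id, {attribute_name: 0/1}) pairs."""
    dataset = []
    for image_id, attrs in attr_rows:
        active = [i for i, a in enumerate(HAIR) if attrs.get(a, 0) == 1]
        if len(active) == 1:  # enforce non-overlapping labels
            dataset.append((image_id, active[0]))
    return dataset
```

Samples with zero or multiple hair-color attributes are dropped, which keeps the five classes mutually exclusive.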
Scene UNderstanding (SUN) is a dataset for scene categorization, originally consisting of 108,754 RGB images labeled with 397 classes. For the inputs, center patches are first extracted and resized to 32×32. We hold out 10 and 40 samples per class for validation and testing, respectively, as the dataset does not provide a separate test split. We use pre-activation ResNet-18 with an increased number of channels, a mini-batch size of 128, and weight decay. We train the network for 90 epochs with an initial learning rate of 0.1, decayed by a factor of 0.1 at epochs 30 and 60. When the deferred scheduling is used, it is applied after 60 epochs of standard training.
Twitter is a dataset for a part-of-speech (POS) tagging task on social media text with 25 classes. Each sample is a pair of a token and a tag, e.g., "(books, common noun)" and "(#acl, hashtag)", where each token is embedded into a 50-dimensional vector via a pre-defined word embedding. We discarded two classes with zero test samples and obtained 14,614 training samples over 23 classes. We use a 2-layer fully-connected network with a hidden layer of size 256 and a ReLU nonlinearity. We set a mini-batch size of 64 and apply weight decay. We train the network for 15 epochs with an initial learning rate of 0.1, decayed by a factor of 0.1 at epoch 10. When the deferred scheduling is used, it is applied after 10 epochs of standard training.
Reuters is a dataset for a text categorization task that predicts the subject of a given text. As inputs, 1,000-dimensional bag-of-words vectors processed from news story documents are given. It is originally composed of 52 classes, but we discarded the classes with fewer than 5 test samples for reliable evaluation, obtaining a subset of 36 classes with 6,436 training samples. We hold out 10% of the training samples to construct a validation set. We use a 2-layer fully-connected network with a hidden layer of size 256 and a ReLU nonlinearity. We set a mini-batch size of 64 and apply weight decay. We train the network for 15 epochs with an initial learning rate of 0.1, decayed by a factor of 0.1 at epoch 10. When the deferred scheduling is used, it is applied after 10 epochs of standard training.
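The 2-layer network shared by the Twitter and Reuters tasks is small enough to sketch directly. The forward pass below is a minimal numpy rendition, assuming the Reuters input/output sizes; weight names and the toy initialization are ours.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a 2-layer fully-connected network with a
    256-unit ReLU hidden layer, as used for the text tasks."""
    h = np.maximum(0.0, x @ W1 + b1)  # hidden layer + ReLU
    return h @ W2 + b2                # class logits

rng = np.random.default_rng(0)
d_in, d_hid, n_cls = 1000, 256, 36    # Reuters: bag-of-words -> 36 classes
W1 = rng.normal(0.0, 0.01, (d_in, d_hid)); b1 = np.zeros(d_hid)
W2 = rng.normal(0.0, 0.01, (d_hid, n_cls)); b2 = np.zeros(n_cls)
logits = mlp_forward(rng.normal(size=(64, d_in)), W1, b1, W2, b2)  # one mini-batch
```

For the Twitter task, the same structure applies with 50-dimensional embedding inputs and 23 output classes.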
Appendix B More results from ablation study
Generation from another classifier.
As mentioned, our method introduces another, independently pre-trained classifier to generate synthetic minority samples, rather than using the classifier currently being trained. This is because using the training classifier itself in the optimization objective (2) would produce synthetic samples to which the training classifier is already confident in the target minority class, making the overall training process redundant. To further validate this design, we consider an ablation called "M2m-Self": instead of the pre-trained classifier, "M2m-Self" uses the training classifier itself to generate minority samples. As reported in Table 6, M2m-Self shows only a marginal improvement over DRS, which is much inferior to the original M2m.
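To make the generation step concrete, the sketch below translates a majority sample toward a target minority class by descending the cross-entropy loss of a separate classifier with respect to the input. This is a toy linear-softmax stand-in for the paper's objective (2), not the actual deep-network implementation; all names are ours.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def translate_to_minority(x0, W_g, k, steps=50, lr=0.5):
    """Toy sketch of M2m-style generation: starting from a majority sample
    x0, perturb it so a *separate* classifier g (here linear, logits = W_g @ x)
    becomes confident in the minority class k. For a linear-softmax model,
    the input gradient of the cross-entropy loss is W_g^T (p - onehot_k)."""
    x = x0.copy()
    onehot = np.zeros(W_g.shape[0]); onehot[k] = 1.0
    for _ in range(steps):
        p = softmax(W_g @ x)
        x -= lr * (W_g.T @ (p - onehot))  # descend the CE loss toward class k
    return x
```

Using the training classifier in place of `W_g` would leave `x0` nearly unchanged whenever the training classifier is already confident, which is exactly the redundancy the M2m-Self ablation exposes.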
Using multiple classifiers for generation.
Since our method is not restricted to using only one pre-trained classifier in the optimization (2), multiple classifiers can be used to improve the quality of generation. To verify the additional gain from multiple classifiers, we consider an ablation called "M2m-Ensemble", which uses an ensemble of classifiers for generation instead of a single classifier. Here, we use the same ResNet-32 architecture for every classifier, and use a higher threshold due to the smoothed predictions of the ensemble. The results in Table 6 show that M2m-Ensemble performs slightly better than M2m, indicating that our method can benefit from stronger classifiers.
We also propose a sample rejection criterion to alleviate the risk of unreliable generation, possibly due to weak generalization of the pre-trained classifier. To verify its effect, we consider an ablation, namely "M2m-No-Reject", which does not use this rejection policy during training; in other words, all generated samples are used to train the classifier. The results in Table 6 show that M2m-No-Reject performs significantly worse than M2m, confirming the gain from the proposed rejection criterion.
The effect of the threshold.
As specified in Algorithm 1 in the main paper, we set a threshold to filter out synthetic samples for which the generation objective is not sufficiently minimized, mainly due to the limited optimization budget. To evaluate the practical effectiveness of this threshold, we consider an ablation in which the thresholding is not used. As reported in Table 6, we indeed observe a performance degradation without the threshold, revealing that the confidence level of the generated samples affects the final quality of the generation.
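The thresholding step amounts to a simple confidence filter over the generated batch. A minimal sketch, with hypothetical names (`probs` being the pre-trained classifier's softmax outputs on each synthetic sample):

```python
def filter_generated(samples, probs, targets, threshold=0.9):
    """Keep only synthetic samples whose confidence on the target minority
    class, under the pre-trained classifier, reaches the threshold.
    samples, probs, targets are aligned lists; threshold is a hypothetical
    default, not a value from the paper."""
    return [
        s
        for s, p, t in zip(samples, probs, targets)
        if p[t] >= threshold  # generation objective sufficiently minimized
    ]
```

Setting `threshold=0.0` recovers the ablation above in which no filtering is applied.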
Appendix C Results on ImageNet-LT
We additionally evaluate our method on the ImageNet-LT dataset, a subset of the ImageNet dataset with a synthetic imbalance following a Pareto distribution. It is composed of 115,846 training samples over 1,000 categories, ranging from 1,280 images in the largest class to 5 images in the smallest. A more detailed distribution is presented in Figure 7. We use randomly-resized cropping and horizontal flipping for data augmentation, and all images are resized to 128×128. We randomly hold out 20 samples per class from the original ImageNet training set to form a validation set, and the original (roughly balanced) ImageNet validation set is used for testing. We use ResNet-50 with a mini-batch size of 256 and apply weight decay. We train the network for 200 epochs with an initial learning rate of 0.1, decayed by a factor of 0.1 at epochs 160 and 180. When the deferred scheduling is used, e.g., in DRS, DRW, and our method, it is applied after 160 epochs of standard training. We evaluate our method against the baselines that performed best in the experiments in the main paper: (a) ERM-DRS and (b) LDAM-DRW. We report the balanced accuracy (bACC) and the geometric mean score (GM). As reported in Table 6, our method, M2m, significantly outperforms the baselines. With the ERM loss, M2m shows 3.43% and 4.75% relative gains in bACC and GM over DRS, respectively. Furthermore, with the margin-based loss function LDAM, the improvement is even larger.
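The two metrics reported here are both functions of the per-class recalls: bACC is their arithmetic mean and GM their geometric mean. A short reference implementation (function name ours):

```python
import numpy as np

def bacc_gm(y_true, y_pred, n_classes):
    """Balanced accuracy (arithmetic mean of per-class recalls) and
    geometric mean score (geometric mean of per-class recalls)."""
    recalls = []
    for c in range(n_classes):
        mask = (y_true == c)
        recalls.append(float((y_pred[mask] == c).mean()) if mask.any() else 0.0)
    recalls = np.array(recalls)
    bacc = float(recalls.mean())
    gm = float(np.prod(recalls) ** (1.0 / n_classes))
    return bacc, gm
```

Unlike plain accuracy, both metrics weight every class equally, so a classifier that ignores the minority classes scores poorly; GM is the harsher of the two, dropping to zero if any single class is never predicted correctly.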