1 Introduction
Most real-world data comes with a long-tailed nature: a few high-frequency classes (or head classes) contribute most of the observations, while a large number of low-frequency classes (or tail classes) are under-represented in data. Taking an instance segmentation dataset, LVIS (Gupta et al., 2019), for example, the number of instances in the banana class can be thousands of times larger than that of the bait class. In practice, the number of samples per class generally decreases exponentially from head to tail classes. Under the power law, the tails can be undesirably heavy. A model that minimizes empirical risk on long-tailed training datasets often underperforms on a class-balanced test dataset. As datasets scale up, the long-tailed nature poses critical difficulties to many vision tasks, e.g., visual recognition and instance segmentation.
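As a concrete illustration of such an exponential decay, the sketch below (not from the paper's code; the decay rule and names are our own assumptions, mirroring common long-tailed dataset constructions) generates per-class sample counts and computes the head-to-tail ratio:

```python
import numpy as np

# Illustrative sketch: per-class sample counts that decay exponentially from
# head to tail, as in CIFAR-LT-style dataset construction. `n_max` is the
# head-class size and `imbalance_factor` the desired head/tail ratio.
def longtailed_counts(num_classes, n_max, imbalance_factor):
    idx = np.arange(num_classes)
    # n_j = n_max * (1 / imbalance_factor) ** (j / (num_classes - 1))
    counts = n_max * (1.0 / imbalance_factor) ** (idx / (num_classes - 1))
    return np.floor(counts).astype(int)

counts = longtailed_counts(num_classes=10, n_max=5000, imbalance_factor=100)
ratio = counts.max() / counts.min()  # recovers the imbalance factor
```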
An intuitive solution to long-tailed tasks is to re-balance the data distribution. Most state-of-the-art (SOTA) methods use class-balanced sampling or loss re-weighting to "simulate" a balanced training set (Byrd and Lipton, 2018; Wang et al., 2017). However, they may under-represent the head classes or suffer from gradient issues during optimization. Cao et al. (2019) introduced the Label-Distribution-Aware Margin Loss (LDAM) from the perspective of the generalization error bound: given fewer training samples, a tail class should have a higher generalization error bound during optimization. Nevertheless, LDAM is derived from the hinge loss under a binary classification setup and is not suitable for multi-label classification.
We propose Balanced Meta-Softmax (BALMS) for long-tailed visual recognition. We first show that the Softmax function is intrinsically biased under the long-tailed scenario. We derive a Balanced Softmax function from the probabilistic perspective that explicitly models the test-time label distribution shift. Theoretically, we find that optimizing for the Balanced Softmax cross-entropy loss is equivalent to minimizing the generalization error bound. Balanced Softmax generally improves long-tailed classification performance on datasets with moderate imbalance ratios, e.g., CIFAR-10-LT (Krizhevsky, 2009) with a maximum imbalance factor of 200. However, for datasets with an extremely large imbalance factor, e.g., LVIS (Gupta et al., 2019) with an imbalance factor of 26,148, the optimization process becomes difficult. Complementary to the loss function, we introduce the Meta Sampler, which learns to resample for high validation accuracy by meta-learning. The combination of Balanced Softmax and Meta Sampler can efficiently address long-tailed classification tasks with high imbalance factors.

We evaluate BALMS on both long-tailed image classification and instance segmentation on five commonly used datasets: CIFAR-10-LT (Krizhevsky, 2009), CIFAR-100-LT (Krizhevsky, 2009), ImageNet-LT (Liu et al., 2019), Places-LT (Zhou et al., 2017) and LVIS (Gupta et al., 2019). On all datasets, BALMS outperforms state-of-the-art methods. In particular, BALMS outperforms all SOTA methods on LVIS, which has an extremely high imbalance factor, by a large margin.

We summarize our contributions as follows: 1) we theoretically analyze the incapability of the Softmax function in long-tailed tasks; 2) we introduce the Balanced Softmax function, which explicitly considers the generalization error bound during optimization; 3) we present the Meta Sampler, a meta-learning-based resampling strategy for long-tailed learning.
2 Related Works
Data Re-Balancing. Pioneering works focus on re-balancing during training. Specifically, re-sampling strategies (Byrd and Lipton, 2018; Kubat and Matwin, 1997; Chawla et al., 2002; Han et al., 2005; He and Garcia, 2009; Shen et al., 2016; Buda et al., 2018; Barandela et al., 2009) try to restore the true distributions from the imbalanced training data. Re-weighting, i.e., cost-sensitive learning (Wang et al., 2017; Huang et al., 2016, 2019; Khan et al., 2015; Mikolov et al., 2013), assigns a cost weight to the loss of each class. However, it is argued that over-sampling inherently overfits the tail classes and under-sampling under-represents the head classes' rich variations. Meanwhile, re-weighting tends to cause unstable training, especially when the class imbalance is severe, because abnormally large gradients arise when the weights are very large.
Loss Function Engineering. Tan et al. (2020) pointed out that randomly dropping some scores of tail classes in the Softmax function can effectively help by balancing the positive and negative gradients flowing through the score outputs. Cao et al. (2019) showed that the generalization error bound can be minimized by increasing the margins of tail classes. Hayat et al. (2019) modified the loss function based on Bayesian uncertainty. Li et al. (2019) proposed two novel loss functions to balance the gradient flow. Nevertheless, as we show in this paper, the Softmax function, which is commonly adopted in these methods, is biased under long-tailed scenarios. Our proposed method with Balanced Softmax addresses this issue and significantly improves over standard loss-based methods.
Meta-Learning. Many approaches (Jamal et al., 2020; Ren et al., 2018; Shu et al., 2019) have been proposed to tackle the long-tailed issue with meta-learning. Many of them focus on optimizing the weight per sample as a learnable parameter, which appears as a hyper-parameter in the re-weighting approach. This group of methods requires a clean and unbiased dataset as a meta set, i.e., a development set, which is usually a fixed subset of the training images, and uses bi-level optimization to estimate the weight parameters.
Decoupled Training. A few recent works (Kang et al., 2020; Zhou et al., 2019) point out that decoupled training, a simple yet effective scheme, can significantly improve generalization on long-tailed datasets; the classifier is the only under-performing component when training on imbalanced datasets. However, in our experiments, we found this technique inadequate for datasets with extremely high imbalance factors, e.g., LVIS (Gupta et al., 2019). Interestingly, we observed that decoupled training is complementary to our proposed BALMS, and combining them yields additional improvements.
3 Balanced Meta-Softmax
The major challenge for long-tailed visual recognition is the mismatch between the training data distribution and the test data distribution. Let $\mathcal{D} = \{(x_i, y_i)\}$ be the balanced test set, where $x_i$ denotes a data point and $y_i$ denotes its label. Let $k$ be the number of classes and $n_j$ be the number of training samples in class $j$, where $n = \sum_{j=1}^{k} n_j$ is the total number of training samples. Similarly, we denote the long-tailed training set as $\hat{\mathcal{D}} = \{(\hat{x}_i, \hat{y}_i)\}$. Normally, the class sizes $n_j$ differ across classes. Specifically, for a tail class $j$, $n_j \ll \max_i n_i$, which makes generalization under long-tailed scenarios extremely challenging.
We introduce Balanced Meta-Softmax (BALMS) for long-tailed visual recognition. It has two components: 1) a Balanced Softmax function that accommodates the label distribution shift between training and testing; 2) a Meta Sampler that learns to resample the training set by meta-learning. We denote the feature extractor as $f_\theta$ and the linear classifier's weight as $W$.
3.1 Balanced Softmax
Label Distribution Shift.
We begin by revisiting multi-class Softmax regression, where we are generally interested in estimating the conditional probability $p(y \mid x)$, which can be modeled as a multinomial distribution $\phi$:

(1) $\quad p(y \mid x; \theta) = \prod_{j=1}^{k} \phi_j^{\mathbb{1}(y=j)}, \qquad \phi_j = \frac{e^{\eta_j}}{\sum_{i=1}^{k} e^{\eta_i}}$

where $\mathbb{1}(\cdot)$ is the indicator function and the Softmax function maps the model's class-$j$ output $\eta_j$ to the conditional probability $\phi_j$.
From the Bayesian inference perspective, $\phi_j$ can also be interpreted as:

(2) $\quad \phi_j = p(y = j \mid x) = \frac{p(x \mid y = j)\, p(y = j)}{p(x)}$

where the prior $p(y = j)$ is of particular interest under the class-imbalanced setting. Assuming that all instances in the training and test datasets are generated from the same process $p(x \mid y)$, there can still be a discrepancy between training and testing given different label distributions $p(y)$ and evidence $p(x)$. With a slight abuse of notation, we redefine $\phi$ to be the conditional distribution on the balanced test set and define $\hat{\phi}$ to be the conditional probability on the imbalanced training set. As a result, the standard Softmax provides a biased estimation of $\phi$.
Balanced Softmax. To eliminate the discrepancy between the posterior distributions of training and testing, we introduce the Balanced Softmax. We use the same model output $\eta$ to parameterize two conditional probabilities: $\phi$ for testing and $\hat{\phi}$ for training.
Theorem 1.
Assume $\phi$ to be the desired conditional probability on the balanced test set, with the form $\phi_j = p(y = j \mid x)$, and $\hat{\phi}$ to be the desired conditional probability on the imbalanced training set, with the form $\hat{\phi}_j = \hat{p}(y = j \mid x)$. If $\phi$ is expressed by the standard Softmax function of the model output $\eta$, then $\hat{\phi}$ can be expressed as

(3) $\quad \hat{\phi}_j = \frac{n_j e^{\eta_j}}{\sum_{i=1}^{k} n_i e^{\eta_i}}$
We use the exponential family parameterization to prove Theorem 1; the proof can be found in the supplementary materials. Theorem 1 essentially shows that applying the following Balanced Softmax function naturally accommodates the label distribution shift between the training and test sets. We define the Balanced Softmax function through its loss

(4) $\quad \hat{l}(\theta) = -\log(\hat{\phi}_y) = -\log\left(\frac{n_y e^{\eta_y}}{\sum_{i=1}^{k} n_i e^{\eta_i}}\right)$
We further investigate the improvement brought by the Balanced Softmax in the following sections.
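As a minimal sketch, a Balanced Softmax cross-entropy of this form can be implemented by shifting each logit by the log of its class size before a standard log-softmax, since $n_j e^{\eta_j} = e^{\eta_j + \log n_j}$. The NumPy function below is illustrative, not the authors' implementation; `logits`, `labels`, and `class_counts` are assumed names:

```python
import numpy as np

# Illustrative Balanced Softmax cross-entropy: add log(n_j) to each logit,
# then take a standard (numerically stabilized) log-softmax and the NLL of
# the true class. `class_counts` holds the per-class training sample counts.
def balanced_softmax_ce(logits, labels, class_counts):
    adjusted = logits + np.log(class_counts)            # eta_j + log n_j
    adjusted = adjusted - adjusted.max(axis=1, keepdims=True)  # stability
    log_probs = adjusted - np.log(np.exp(adjusted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

With uniform class counts this reduces to the standard Softmax cross-entropy; at test time, the standard Softmax over the raw logits would be used, since the test distribution is balanced.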
Many vision tasks, e.g., instance segmentation, use multiple binary logistic regressions instead of a multi-class Softmax regression. By virtue of Bayes' rule, a similar strategy can be applied to multiple binary logistic regressions. The detailed derivation is left to the supplementary materials.
Generalization Error Bound.
The generalization error bound gives the upper bound of a model's test error, given its training error. With dramatically fewer training samples, the tail classes have much higher generalization bounds than the head classes, which makes high classification accuracy on the tail classes unlikely. In this section, we show that optimizing Eqn. 4 is equivalent to minimizing the generalization upper bound.
Margin theory provides a bound based on the margins (Kakade et al., 2009). Margin bounds usually correlate negatively with the magnitude of the margin, i.e., a larger margin leads to a lower generalization error. Consequently, given a constraint on the sum of margins over all classes, there is a trade-off between minority and majority classes (Cao et al., 2019).
Locating such an optimal margin for multi-class classification is non-trivial. The bound investigated in (Cao et al., 2019) was established for binary classification with the hinge loss. Here, we develop a margin bound for multi-class Softmax regression. Given the previously defined $\phi$ and $\hat{\phi}$, we derive the desired form of $\hat{\phi}$ by minimizing the margin bound. Margin bounds commonly bound the 0-1 error:
(5) 
However, directly using the 0-1 error as the loss function is not ideal for optimization. Instead, the negative log-likelihood (NLL) is generally considered more suitable. With a continuous relaxation of Eqn. 5, we have
(6) 
where $t$ is any threshold and $l(\theta)$ is the standard negative log-likelihood with Softmax. This new error is still a counter, but it describes how likely the test loss is to exceed the given threshold. Naturally, we define our margin for class $j$ to be
(7) 
where $\mathcal{S}_j$ is the set of all class-$j$ samples. If we force a large margin $\gamma_j$ during training, i.e., force the training loss to be much lower than $t$, then the error will be reduced. Theorem 2 in (Kakade et al., 2009) can then be directly generalized as
Theorem 2.
Let $t$ be any threshold; then for all $\gamma_j > 0$, with probability at least $1 - \delta$, we have
(8) 
where the left-hand side is the error on the balanced test set, $\lesssim$ is used to hide constant terms, and $C$ is some measure of complexity. With a constraint $\sum_{j=1}^{k} \gamma_j = \beta$ on the total margin, the Cauchy–Schwarz inequality gives the optimal $\gamma_j^* = \beta\, n_j^{-1/4} / \sum_{i=1}^{k} n_i^{-1/4}$.
The optimal $\gamma_j^*$ suggests that we need a larger margin $\gamma_j$ for classes with fewer samples. In other words, to achieve the optimal generalization ability, we need to focus on minimizing the training loss of the tail classes. Therefore, for each class $j$, the desired training loss is
(9) 
Corollary 2.1.
The desired loss $l_j^*(\theta)$ can be approximated by $\hat{l}(\theta)$ when:

(10) $\quad \hat{l}(\theta) = -\log \frac{e^{\eta_y + \frac{1}{4}\log n_y}}{\sum_{i=1}^{k} e^{\eta_i + \frac{1}{4}\log n_i}}$

We provide a sketch of the proof of the corollary in the supplementary materials. Notice that, compared to Eqn. 4, there is an additional constant $1/4$ before $\log n_j$. We empirically find that setting this constant to $1$ leads to the optimal results, which may suggest that Eqn. 41 is not necessarily tight. To this point, the label distribution shift and the generalization bound of multi-class Softmax regression lead us to the same loss form: Eqn. 4.
3.2 Meta Sampler
Although the Balanced Softmax theoretically optimizes the generalization error bound, for larger datasets with extremely imbalanced data distributions, the optimization remains challenging.
A class-balanced sampler (CBS) is used to tweak the mini-batch sampling process and fine-tune the classifier in the decoupled training setup (Kang et al., 2020; Zhou et al., 2019). It potentially helps to simplify the optimization landscape by choosing class-balanced samples. However, in our experiments, we found that naively combining CBS with Balanced Softmax worsens the performance.
We first analyze the cause of the performance drop theoretically. When the linear classifier's weight $w_j$ for class $j$ has converged, i.e., its loss gradient vanishes, we have:
(11) 
where $B$ is the batch size and $k$ is the number of classes; CBS ensures that each class contributes $B/k$ samples per batch. When the classification loss converges to 0, the conditional probability of the correct class is expected to be close to 1. For any positive sample and any negative sample of class $j$, the predicted probabilities then approach their one-hot targets, and Eqn. 11 can be rewritten as
(12) 
where $\hat{\mathcal{D}}$ is the training set. The formal derivation of Eqn. 12 is in the supplementary materials.
Compared to the inverse loss weight, i.e., a weight proportional to $1/n_j$ for class $j$, combining Balanced Softmax with CBS leads to the over-balance problem, i.e., an effective weight proportional to $1/n_j^2$ for class $j$, which deviates from the optimal distribution.
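The over-balance effect can be illustrated numerically. Under this simplification (an assumption for illustration, not the paper's derivation), inverse-frequency re-weighting gives class $j$ a normalized share proportional to $1/n_j$, while CBS combined with Balanced Softmax behaves like $1/n_j^2$, pushing the tail's share far past its inverse-frequency target:

```python
import numpy as np

# Illustrative comparison of normalized per-class effective weights:
# inverse-frequency re-weighting (~1/n_j) versus class-balanced sampling
# combined with Balanced Softmax (~1/n_j^2), i.e., the over-balance case.
counts = np.array([1000.0, 100.0, 10.0])          # head, medium, tail sizes
inverse = (1 / counts) / (1 / counts).sum()       # target balanced weights
over = (1 / counts**2) / (1 / counts**2).sum()    # over-balanced weights

# The tail class's share under over-balance exceeds its inverse-frequency
# share, while the head class is suppressed even further.
```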
Meta Sampler.
To simplify the gradient descent process with CBS, we introduce a learnable version of it based on meta-learning, named the Meta Sampler. We first define the empirical loss obtained by sampling from a dataset $\mathcal{D}$ as $\mathcal{L}_{\mathcal{D}}(\theta)$ for the standard Softmax, and as $\hat{\mathcal{L}}_{\mathcal{D}}(\theta)$ for the Balanced Softmax, where the Balanced Softmax loss $\hat{l}$ is defined previously in Eqn. 4.
To estimate the optimal sample rates for different classes, we adopt a bi-level meta-learning strategy: we update the parameters $\pi$ of the sample distribution in the inner loop and update the classifier parameters $\theta$ in the outer loop,
(13) 
where $\pi_j$ is the sample rate for class $j$, $\mathcal{D}_\pi$ is the training set with class sample distribution $\pi$, and $\mathcal{D}_{meta}$ is a meta set we introduce to supervise the outer loop optimization. We create the meta set by class-balanced sampling from the training set $\hat{\mathcal{D}}$; empirically, we found this sufficient for the inner loop optimization. The intuition behind this bi-level optimization strategy is that we want to learn the best sample distribution parameter $\pi$ such that the network, parameterized by $\theta$, achieves the best performance on the meta set when trained on samples from $\mathcal{D}_\pi$.
We first compute the per-instance sample rate $\rho_i$, where $y_i$ denotes the label class of instance $i$ and the rates are normalized over the total number of training samples, and sample a training batch from the parameterized multinomial distribution $\rho$. Then we optimize the model in a meta-learning setup:
1) Sample a mini-batch given the distribution $\rho$ and perform one-step gradient descent to obtain a surrogate model parameterized by $\tilde{\theta}$.
2) Compute the loss of the surrogate model on the meta set $\mathcal{D}_{meta}$ and optimize the sample distribution parameter $\pi$ by gradient descent, using the cross-entropy loss with the standard Softmax.
3) Update the model parameters $\theta$ with the Balanced Softmax loss.
However, sampling from a discrete distribution is not differentiable by nature. To allow end-to-end training of the sampling process, when forming the mini-batch, we apply the Gumbel-Softmax reparameterization trick (Jang et al., 2017). A detailed explanation can be found in the supplementary materials.
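For reference, a minimal NumPy sketch of the Gumbel-Softmax relaxation (Jang et al., 2017) is shown below; the temperature `tau` and the example class probabilities are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gumbel-Softmax trick: a soft, differentiable sample from a categorical
# distribution with log-probabilities `log_pi`. Gumbel noise is obtained as
# -log(-log(U)) for U ~ Uniform(0, 1); `tau` controls how close the soft
# sample is to a one-hot vector (smaller tau -> closer to discrete).
def gumbel_softmax_sample(log_pi, tau=0.5):
    u = rng.uniform(size=log_pi.shape)
    gumbel = -np.log(-np.log(u))
    y = (log_pi + gumbel) / tau
    y = y - y.max()                      # numerical stability
    return np.exp(y) / np.exp(y).sum()

sample = gumbel_softmax_sample(np.log(np.array([0.7, 0.2, 0.1])))
```

Because the output is a smooth function of `log_pi`, gradients can flow back to the sampling-distribution parameters, which is what the Meta Sampler's end-to-end training requires.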
If we apply the Meta Sampler with the standard cross-entropy loss with Softmax, convergence is relatively slow when the class-imbalance factor is high. In addition, it may overfit the tail classes and underfit the head classes. In BALMS, the Balanced Softmax alleviates this issue because it naturally balances the distribution, as shown in Eqn. 12; thus, the Meta Sampler can output a relatively more balanced sample distribution. We provide additional discussion in the experiments.
4 Experiments
4.1 Experimental Setup
Datasets. We perform experiments on long-tailed image classification datasets, including CIFAR-10-LT (Krizhevsky, 2009), CIFAR-100-LT (Krizhevsky, 2009), ImageNet-LT (Liu et al., 2019) and Places-LT (Zhou et al., 2017), and one long-tailed instance segmentation dataset, LVIS (Gupta et al., 2019). We define the imbalance factor of a dataset as the number of training instances in the largest class divided by that of the smallest. Details of the datasets are in Table 1.
Dataset  #Classes  Imbalance Factor 

CIFAR-10-LT (Krizhevsky, 2009)  10  10–200 
CIFAR-100-LT (Krizhevsky, 2009)  100  10–200 
ImageNet-LT (Liu et al., 2019)  1,000  256 
Places-LT (Zhou et al., 2017)  365  996 
LVIS (Gupta et al., 2019)  1,230  26,148 
Evaluation Setup. For classification tasks, after training on the long-tailed dataset, we evaluate the models on the corresponding balanced test/validation dataset and report top-1 accuracy. We also report accuracy on three splits of the set of classes: Many-shot (more than 100 images), Medium-shot (20–100 images), and Few-shot (less than 20 images). Since results on small datasets, i.e., CIFAR-10/100-LT, tend to show large variances, we report the mean and standard error over 3 repeated experiments. We show details of long-tailed dataset generation in the supplementary materials. For LVIS, we use the official training and testing splits. Average Precision (AP) in COCO style (Lin et al., 2014) is reported for both bounding boxes and instance masks. Our implementation details can be found in the supplementary materials.

[Figure 1 panels: Imbalance Factor = 10, Imbalance Factor = 100, Imbalance Factor = 200.]
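The three evaluation splits can be sketched as a simple grouping of classes by training-image count (thresholds taken from the text; the function and variable names are our own):

```python
import numpy as np

# Group class indices into the Many-/Medium-/Few-shot evaluation splits
# by their number of training images: >100, 20-100, and <20 respectively.
def shot_split(class_counts):
    counts = np.asarray(class_counts)
    return {
        "many": np.where(counts > 100)[0],
        "medium": np.where((counts >= 20) & (counts <= 100))[0],
        "few": np.where(counts < 20)[0],
    }

splits = shot_split([500, 100, 20, 5])
```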
4.2 LongTailed Image Classification
We present the results for long-tailed image classification in Table 2 and Table 3. On all datasets, BALMS achieves SOTA performance compared with all end-to-end training and decoupled training methods. In particular, we notice that BALMS demonstrates a clear advantage in two cases: 1) when the imbalance factor is high, e.g., on CIFAR-10-LT with an imbalance factor of 200, BALMS is higher than the SOTA method, LWS (Kang et al., 2020), by 3.4%; and 2) when the dataset is large: BALMS achieves performance comparable to cRT on ImageNet-LT, which is a relatively small dataset, but significantly outperforms cRT on the larger Places-LT.
In addition, we study the robustness of the proposed Balanced Softmax compared to the standard Softmax and a SOTA loss function for long-tailed problems, EQL (Tan et al., 2020). We visualize the marginal likelihood trained with the different losses under different imbalance factors in Fig. 1. Balanced Softmax clearly gives a smoother and more balanced likelihood across imbalance factors.
Dataset  CIFAR-10-LT  CIFAR-100-LT  

Imbalance Factor  200  100  10  200  100  10 
End-to-end training  
Softmax  71.2 ± 0.3  77.4 ± 0.8  90.0 ± 0.2  41.0 ± 0.3  45.3 ± 0.3  61.9 ± 0.1 
CBW  72.5 ± 0.2  78.6 ± 0.6  90.1 ± 0.2  36.7 ± 0.2  42.3 ± 0.8  61.4 ± 0.3 
CBS  68.3 ± 0.3  77.8 ± 2.2  90.2 ± 0.2  37.8 ± 0.1  42.6 ± 0.4  61.2 ± 0.3 
Focal Loss (Lin et al., 2017)  71.8 ± 2.1  77.1 ± 0.2  90.3 ± 0.2  40.2 ± 0.5  43.8 ± 0.1  60.0 ± 0.6 
Class Balance Loss (Cui et al., 2019)  72.6 ± 1.8  78.2 ± 1.1  89.9 ± 0.3  39.9 ± 0.1  44.6 ± 0.4  59.8 ± 1.1 
LDAM (Cao et al., 2019)  71.2 ± 0.3  77.2 ± 0.2  90.2 ± 0.3  41.0 ± 0.3  45.4 ± 0.1  62.0 ± 0.3 
Equalization Loss (Tan et al., 2020)  72.8 ± 0.2  76.7 ± 0.1  89.9 ± 0.3  43.3 ± 0.1  47.3 ± 0.1  59.7 ± 0.3 
Decoupled training  
cRT (Kang et al., 2020)  76.6 ± 0.2  82.0 ± 0.2  91.0 ± 0.0  44.5 ± 0.1  50.0 ± 0.2  63.3 ± 0.1 
LWS (Kang et al., 2020)  78.1 ± 0.0  83.7 ± 0.0  91.1 ± 0.0  45.3 ± 0.1  50.5 ± 0.1  63.4 ± 0.1 
BALMS  81.5 ± 0.0  84.9 ± 0.1  91.3 ± 0.1  45.5 ± 0.1  50.8 ± 0.0  63.0 ± 0.1 
Dataset  ImageNet-LT  Places-LT  

Accuracy  Many  Medium  Few  Overall  Many  Medium  Few  Overall 
End-to-end training  
Lifted Loss (Song et al., 2016)  35.8  30.4  17.9  30.8  41.1  35.4  24.0  35.2 
Focal Loss (Lin et al., 2017)  36.4  29.9  16.0  30.5  41.1  34.8  22.4  34.6 
Range Loss (Zhang et al., 2017)  35.8  30.3  17.6  30.7  41.1  35.4  23.2  35.1 
OLTR (Liu et al., 2019)  43.2  35.1  18.5  35.6  44.7  37.0  25.3  35.9 
Equalization Loss (Tan et al., 2020)  –  –  –  36.4  –  –  –  – 
Decoupled training  
cRT (Kang et al., 2020)  –  –  –  41.8  42.0  37.6  24.9  36.7 
LWS (Kang et al., 2020)  –  –  –  41.4  40.6  39.1  28.6  37.6 
BALMS  50.3  39.5  25.3  41.8  41.2  39.8  31.6  38.7 
4.3 LongTailed Instance Segmentation
The LVIS dataset is one of the most challenging datasets in the vision community. As shown in Table 1, it has a much higher imbalance factor than the rest (26,148 vs. less than 1,000) and contains many very few-shot classes. Compared to the image classification datasets, which are relatively small and have lower imbalance factors, LVIS gives a more reliable evaluation of long-tailed learning methods.
Since one image might contain multiple instances from several categories, we use a Meta Reweighter instead of the Meta Sampler here. As shown in Table 4, BALMS achieves the best results among all approaches and outperforms the others by a large margin, especially on rare classes, where BALMS achieves an average precision of 19.6 while the best of the rest is 14.6. The results suggest that, with the Balanced Softmax function and the learnable Meta Reweighter, BALMS gives more balanced gradients and can tackle extremely imbalanced long-tailed tasks.
In particular, LVIS is composed of images of complex daily scenes with naturally long-tailed categories. To this end, we believe BALMS is applicable to real-world long-tailed visual recognition challenges.
Method  AP  APf  APc  APr  APbb 

Softmax  23.7  27.3  24.0  13.6  24.0 
Sigmoid  23.6  27.3  24.0  12.7  24.0 
Focal Loss (Lin et al., 2017)  23.4  27.5  23.5  12.8  23.8 
Class Balance Loss (Cui et al., 2019)  23.3  27.3  23.8  11.4  23.9 
LDAM (Cao et al., 2019)  24.1  26.3  25.3  14.6  24.5 
LWS (Kang et al., 2020)  23.8  26.8  24.4  14.4  24.1 
Equalization Loss (Tan et al., 2020)  25.2  26.6  27.3  14.6  25.7 
BALMS  27.0  27.5  28.9  19.6  27.6 
Dataset  CIFAR-10-LT  CIFAR-100-LT  

Imbalance Factor  200  100  10  200  100  10 
End-to-end training  
(1) Softmax  71.2 ± 0.3  77.4 ± 0.8  90.0 ± 0.2  41.0 ± 0.3  45.3 ± 0.3  61.9 ± 0.1 
(2) Balanced Softmax 1/4  71.6 ± 0.7  78.4 ± 0.9  90.5 ± 0.1  41.9 ± 0.2  46.4 ± 0.7  62.6 ± 0.3 
(3) Balanced Softmax  79.0 ± 0.8  83.1 ± 0.4  90.9 ± 0.4  45.9 ± 0.3  50.3 ± 0.3  63.1 ± 0.2 
Decoupled training  
(4) Balanced Softmax 1/4 + DT  72.2 ± 0.1  79.1 ± 0.2  90.2 ± 0.0  42.3 ± 0.0  46.1 ± 0.1  62.5 ± 0.1 
(5) Balanced Softmax 1/4 + DT + MS  76.2 ± 0.4  81.4 ± 0.1  91.0 ± 0.1  44.1 ± 0.2  49.2 ± 0.1  62.8 ± 0.2 
(6) Balanced Softmax + DT  78.6 ± 0.1  83.7 ± 0.1  91.2 ± 0.0  45.1 ± 0.0  50.4 ± 0.0  63.4 ± 0.0 
(7) Balanced Softmax + CBS + DT  80.6 ± 0.1  84.8 ± 0.0  91.2 ± 0.1  42.0 ± 0.0  47.4 ± 0.2  62.3 ± 0.0 
(8) DT + MS  73.6 ± 0.2  79.9 ± 0.4  90.9 ± 0.1  44.2 ± 0.1  49.2 ± 0.1  63.0 ± 0.0 
(9) Balanced Softmax + DT + MR  79.2 ± 0.0  84.1 ± 0.0  91.2 ± 0.1  45.3 ± 0.3  50.8 ± 0.0  63.5 ± 0.1 
(10) BALMS  81.5 ± 0.0  84.9 ± 0.1  91.3 ± 0.1  45.5 ± 0.1  50.8 ± 0.0  63.0 ± 0.1 
4.4 Component Analysis
We conduct an extensive component analysis on the CIFAR-10/100-LT datasets to further understand the effect of each proposed component of BALMS. The results are presented in Table 5.
Balanced Softmax. Comparing (1), (2) with (3), and (5), (8) with (10), we observe that Balanced Softmax gives a clear improvement to the overall performance, under both the end-to-end and decoupled training setups. It successfully accommodates the distribution shift between training and testing. In particular, we observe that Balanced Softmax 1/4, which we derive in Eqn. 10, cannot yield ideal results compared to our proposed Balanced Softmax in Eqn. 4.
Meta Sampler. From (6), (7), (9) and (10), we observe that the Meta Sampler generally improves performance compared with no Meta Sampler and with variants of the Meta Sampler. We notice that the performance gain is larger at higher imbalance factors, which is consistent with our observations on LVIS. In (9) and (10), the Meta Sampler generally outperforms the Meta Reweighter, suggesting that the discrete sampling process yields a more efficient optimization process. Comparing (7) and (10), we can see that the Meta Sampler addresses the over-balance issue discussed in Section 3.2.
Decoupled Training. Comparing (2) with (4) and (3) with (6), the decoupled training scheme and Balanced Softmax are two orthogonal components, and we can benefit from both at the same time.
5 Conclusion
We have introduced BALMS for long-tailed visual recognition tasks. BALMS tackles the distribution shift between training and testing by combining meta-learning with generalization error bound theory: it optimizes a Balanced Softmax function that theoretically minimizes the generalization error bound, and it improves optimization on large long-tailed datasets by learning an effective Meta Sampler. BALMS generally outperforms SOTA methods on 4 image classification datasets and 1 instance segmentation dataset by a large margin, especially when the imbalance factor is high.
However, the Meta Sampler is computationally expensive in practice, and optimization on large datasets is slow. In addition, the Balanced Softmax function only approximately guarantees a generalization error bound. Future work may extend the current framework to a wider range of tasks, e.g., machine translation, and correspondingly design tighter bounds and computationally efficient meta-learning algorithms.
Broader Impact
Due to the Zipfian distribution of categories in real life, algorithms and models with exceptional performance on research benchmarks may not remain powerful in the real world. BALMS, as a lightweight method, adds only minimal computational cost during training and is compatible with most existing works on visual recognition. As a result, BALMS could help bridge the gap between research benchmarks and industrial applications of visual recognition.
However, there can be some potential negative effects. As BALMS empowers deep classifiers with stronger recognition capability on long-tailed distributions, such classification algorithms can be extended to more real-life scenarios. We should be cautious about misuse of the proposed method: depending on the scenario, it might negatively affect privacy, e.g., in person re-identification, detection, etc.
References
Barandela et al. (2009). Restricted decontamination for the imbalanced training sample problem. In Iberoamerican Congress on Pattern Recognition.
Buda et al. (2018). A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks 106, pp. 249–259.
Byrd and Lipton (2018). What is the effect of importance weighting in deep learning? arXiv preprint arXiv:1812.03372.
Cao et al. (2019). Learning imbalanced datasets with label-distribution-aware margin loss. In Advances in Neural Information Processing Systems 32, pp. 1567–1578.
Chawla et al. (2002). SMOTE: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research 16, pp. 321–357.
Cui et al. (2019). Class-balanced loss based on effective number of samples. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9260–9269.
Deng et al. (2009). ImageNet: a large-scale hierarchical image database. In CVPR.
Grefenstette et al. (2019). Generalized inner loop meta-learning. arXiv preprint arXiv:1910.01727.
Gupta et al. (2019). LVIS: a dataset for large vocabulary instance segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Han et al. (2005). Borderline-SMOTE: a new over-sampling method in imbalanced data sets learning. In International Conference on Intelligent Computing.
Hayat et al. (2019). Gaussian affinity for max-margin class imbalanced learning. In IEEE International Conference on Computer Vision (ICCV).
He and Garcia (2009). Learning from imbalanced data. IEEE Transactions on Knowledge and Data Engineering 21 (9), pp. 1263–1284.
Huang et al. (2016). Learning deep representation for imbalanced classification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5375–5384.
Huang et al. (2019). Deep imbalanced learning for face recognition and attribute prediction. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Jamal et al. (2020). Rethinking class-balanced methods for long-tailed visual recognition from a domain adaptation perspective. arXiv preprint arXiv:2003.10780.
Jang et al. (2017). Categorical reparametrization with Gumbel-Softmax. In International Conference on Learning Representations (ICLR).
Kakade et al. (2009). On the complexity of linear prediction: risk bounds, margin bounds, and regularization. In Advances in Neural Information Processing Systems, pp. 793–800.
Kang et al. (2020). Decoupling representation and classifier for long-tailed recognition. In International Conference on Learning Representations (ICLR).
Khan et al. (2015). Cost-sensitive learning of deep feature representations from imbalanced data. IEEE Transactions on Neural Networks and Learning Systems.
Kingma and Ba (2015). Adam: a method for stochastic optimization. In International Conference on Learning Representations (ICLR).
Krizhevsky (2009). Learning multiple layers of features from tiny images. Technical report.
Kubat and Matwin (1997). Addressing the curse of imbalanced training sets: one-sided selection. In Proceedings of the Fourteenth International Conference on Machine Learning, pp. 179–186.
Li et al. (2019). Gradient harmonized single-stage detector. In AAAI Conference on Artificial Intelligence.
Lin et al. (2014). Microsoft COCO: common objects in context. arXiv preprint arXiv:1405.0312.
Lin et al. (2017). Focal loss for dense object detection. In IEEE International Conference on Computer Vision (ICCV), pp. 2999–3007.
Liu et al. (2019). Large-scale long-tailed recognition in an open world. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2532–2541.
Loshchilov and Hutter (2017). SGDR: stochastic gradient descent with warm restarts. In International Conference on Learning Representations (ICLR).
Mikolov et al. (2013). Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pp. 3111–3119.
Paszke et al. (2019). PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pp. 8024–8035.
Ren et al. (2018). Learning to reweight examples for robust deep learning. In International Conference on Machine Learning (ICML).
Shen et al. (2016). Relay backpropagation for effective learning of deep convolutional neural networks. In European Conference on Computer Vision (ECCV), pp. 467–482.
Shu et al. (2019). Meta-Weight-Net: learning an explicit mapping for sample weighting. In Advances in Neural Information Processing Systems, pp. 1917–1928.
Song et al. (2016). Deep metric learning via lifted structured feature embedding. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Tan et al. (2020). Equalization loss for long-tailed object recognition. arXiv preprint arXiv:2003.05176.
Wang et al. (2017). Learning to model the tail. In Advances in Neural Information Processing Systems, pp. 7029–7039.
Zhang et al. (2017). Range loss for deep face recognition with long-tailed training data. In IEEE International Conference on Computer Vision (ICCV), pp. 5419–5428.
Zhou et al. (2017). Places: a 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Zhou et al. (2019). BBN: bilateral-branch network with cumulative learning for long-tailed visual recognition. arXiv preprint arXiv:1912.02413.
Appendix A: Proofs and Derivations
A.1 Proof to Theorem 1
Observe that the conditional probability of a categorical distribution can be parameterized as an exponential family. This gives us the standard Softmax function as an inverse parameter mapping

(14) $\quad \phi_j = \frac{e^{\eta_j}}{\sum_{i=1}^{k} e^{\eta_i}}$

and also a canonical link function:

(15) $\quad \eta_j = \log \frac{\phi_j}{\phi_k}$
A.2 Derivation for the Multiple Binary Logistic Regression variant
Definition. Multiple binary logistic regression uses $k$ binary logistic regressions to perform multi-class classification. As in Softmax regression, the predicted label is the class with the maximum model output,

(24) $\quad \arg\max_j \eta_j$

The only difference is that $\phi_j$ is expressed by a logistic function of $\eta_j$

(25) $\quad \phi_j = \frac{1}{1 + e^{-\eta_j}}$

and the loss function sums up the binary classification losses over all classes

(26) $\quad l(\theta) = \sum_{j=1}^{k} l_j(\theta)$

where

(27) $\quad l_j(\theta) = -\mathbb{1}(y = j)\log \phi_j - \left(1 - \mathbb{1}(y = j)\right)\log(1 - \phi_j)$
Setup. By virtue of Bayes' rule, $\phi_j$ and $\hat{\phi}_j$ can be decomposed as
(28) 
and, for $1 - \phi_j$ and $1 - \hat{\phi}_j$,
(29) 
Derivation. Again, we introduce the exponential family parameterization and obtain the following link function for $\phi_j$

(30) $\quad \eta_j = \log \frac{\phi_j}{1 - \phi_j}$
Bringing the decompositions in Eqn. 28 and Eqn. 29 into the link function above,
(31) 
(32) 
Simplify the above equation
(33) 
Substituting this into the equation above,
(34) 
Then
(35) 
Finally, we have
(36) 
A.3 Proof to Theorem 2
Setup. First, we define the following auxiliary quantity,
(37) 
where the remaining terms are as previously defined in the submission. This quantity does not have a specific semantic meaning; it is defined only to keep the notation consistent with Kakade et al. [2009].
Let the 0-1 loss on an example from class $j$ be
(38) 
and the 0-1 margin loss on an example from class $j$ be
(39) 
Let $\widehat{err}$ denote the empirical variant of the margin loss.
Proof. For any $\delta > 0$, with probability at least $1 - \delta$, for all $\gamma_j > 0$, Theorem 2 in Kakade et al. [2009] directly gives us
(40) 
where $\lesssim$ hides constant terms and $\hat{\mathfrak{R}}(\mathcal{F})$ denotes the empirical Rademacher complexity of the function family $\mathcal{F}$. By applying Cao et al. [2019]'s analysis of the empirical Rademacher complexity and a union bound over all classes, we obtain the generalization error bound for the loss on a balanced test set
(41) 
where
(42) 
is a low-order term. To minimize the generalization error bound in Eqn. 41, we essentially need to minimize

(43) $\quad \sum_{j=1}^{k} \frac{1}{\gamma_j \sqrt{n_j}}$

By constraining the sum of margins as $\sum_{j=1}^{k} \gamma_j = \beta$, we can directly apply the Cauchy–Schwarz inequality to solve for the optimal

(44) $\quad \gamma_j^* = \frac{\beta\, n_j^{-1/4}}{\sum_{i=1}^{k} n_i^{-1/4}}$
A.4 Proof to Corollary 2.1
Preliminary. Notice that can not be achieved, because and implies
(45) 
The equation above contradicts with the definition that sum of should be exactly equal to 1. To solve the contradiction, we introduce a term , such that
(46) 
To justify the new term , we recall the definition of error
(47) 
If we tweak the threshold with the term
(48) 
(49) 
As is not a function of , the value of will not be affected by the tweak. Thus, instead of looking for that minimizes the generalization bound for , we are in fact looking for that minimizes generalization bound for
Proof. In this section, we show that the loss in the corollary is an approximation of the desired per-class loss.
(50)  
(51)  
(52)  
(53)  
(54)  
(55)  
(56)  
(57)  
(58) 
where, by the mean value theorem, the intermediate value lies between the two endpoints and is close to a constant when the model converges. Although the approximation holds only under some constraints, we show that it approximately minimizes the generalization bound derived in the last section.
A.5 Derivation for Eqn. 12
Gradient for positive samples:
(60)  
(61)  