1 Introduction
Imbalanced datasets are often encountered in real-world problems such as computer vision
[Buda et al.2018, Huang et al.2016], healthcare informatics [Purushotham et al.2018, Li et al.2018] and natural language processing
[Li et al.2011, Liu et al.2006]. In those scenarios, certain classes have abundant training data while other classes have only a limited amount of data. The imbalance in training data makes learning classification models a challenging task. Conventional machine learning models treat each training sample equally and are trained by minimizing the overall classification error. Consequently, learned models are biased toward correct classification of the majority class over the minority class, resulting in significant performance degradation on the minority class. This phenomenon is not desirable, particularly in domains such as prediction in intensive care units where correct classification of minority samples is critical. Hence, effective learning from imbalanced data is of great importance in machine learning applications.
To tackle the class imbalance problem, different effective strategies have been developed [He and Garcia2008]. At the data level, sampling techniques are used as a preprocessing step. Training data are resampled to generate a balanced dataset, either by oversampling the minority class or undersampling the majority class (or a combination of both). Classification models are then trained on the balanced data [Fernandez et al.2018]. However, sampling methods modify the original data distribution, which introduces risks into the subsequent model training: oversampling risks overfitting, while undersampling loses information from the majority class. To alleviate the information loss, undersampling can be further combined with ensemble methods, where each base classifier is trained on undersampled balanced data.
At the algorithm level, instead of assigning equal weight to all samples, cost-sensitive learning assigns heavier costs to the misclassification of minority samples than majority ones. Classification models are then trained to minimize the total cost. In general, the assignment of cost parameters heavily relies on the specific problem considered and there are no general rules for it. Common strategies include dynamic cost generation within boosting ensemble methods [Galar et al.2012] and class-wise reweighting according to class frequency [Aurelio et al.2019, Castro and Braga2013]. In particular, as mentioned in [Castro and Braga2013], the class-reweighting cost-sensitive methods are theoretically appealing, as class prior information can potentially be incorporated through the weighting mechanism, consequently improving learning on the minority class.
Recently, solving the imbalance problem using (deep) neural networks has attracted much attention. Those works exploit the deep neural network's (DNN) merit of effective feature learning to enhance predictive performance. DNNs are deployed as classifiers in combination with sampling techniques, cost-sensitive learning or new loss functions [Wang et al.2016, Chung et al.2015]. In computer vision, where high-level structured information can be learned by convolutional neural networks, properties of the learned feature representations are further exploited to improve DNN performance [Huang et al.2016, Dong et al.2018, Khan et al.2018].
In this paper, motivated by [Castro and Braga2013], we propose a novel cost-sensitive neural network, the Class-wise Reweighted Cross-Entropy Network (CRCEN), to address the imbalanced binary classification problem. In CRCEN, a neural network (in our case, an MLP) transforms the raw input features into high-level feature representations, based on which final predictions are made using the sigmoid function. Different from the conventional cross entropy loss for binary classification, where each training sample is equally weighted, CRCEN imposes different weights on different classes. The reweighting mechanism promotes CRCEN's learning on the minority class, enhancing the overall predictive performance. The main contributions of our proposed model are:

We propose a novel loss function of class-wise reweighted cross entropy based on neural networks to address the imbalanced classification problem.

We provide a theoretical derivation of the relation among a sample's predicted probability (once the neural network is trained), the class weights in the loss function and the class imbalance ratio. This relation is generalizable and holds for deep neural networks.

We analyze the generalized relation to gain insights on model performance in imbalanced learning.

We conduct extensive experiments to demonstrate the effectiveness of our method.
The rest of the paper is organized as follows. In Section 2, we review related work on the imbalanced classification problem. Section 3 provides details of the proposed method and an analysis of the weighting mechanism. In Section 4, our method is evaluated on several benchmark datasets. Finally, Section 5 concludes the paper with a discussion.
2 Related Work
Various approaches to the class imbalance problem have been developed. Here, we focus on two widely-used approaches: sampling methods and cost-sensitive learning. This section also reviews applications of neural networks, as our proposed method combines cost-sensitive learning and an MLP. For details of other methods, we refer readers to [He and Garcia2008, He and Ma2013, Fernandez et al.2018].
Sampling methods
Sampling techniques reduce data imbalance by either oversampling the minority class or undersampling the majority class. [Chawla et al.2002] proposes the synthetic minority oversampling technique (SMOTE), which oversamples by linearly interpolating pairs of close minority samples. ADASYN [He et al.2008] oversamples according to the local in-class density of each minority sample. To overcome the information loss of discarding samples, undersampling is often combined with ensemble methods. [Khoshgoftaar et al.2007] explores random forests with balanced undersampled datasets in the bagging stage. [Liu et al.2009] develops EasyEnsemble and BalanceCascade, which dynamically remove easy samples from base classification models.
Cost-sensitive learning
As an alternative to sampling methods, cost-sensitive (CS) learning tackles the imbalance problem by imposing heavier costs on the misclassification of minority samples. [Tang et al.2009] develops a cost-sensitive SVM with repetitive undersampling to improve the detection of informative samples. Boosting is another popular strategy, due to its internal reweighting in the learning process: boosting dynamically adjusts sample weights for the next iteration according to previous classification errors [Freund and Schapire1997]. In imbalanced learning, models tend to misclassify minority samples at the early stages of boosting. With sample reweighting, minority samples receive more attention at later stages and hence performance on the minority class can be improved [Sun et al.2007]. Other variants of boosting are further combined with sampling techniques to improve performance [Galar et al.2013, Seiffert et al.2010].
Neural networks in imbalanced learning
Neural networks are powerful machine learning models owing to their high-level feature representation learning. [Dumpala et al.2018] pairs training samples and uses an MLP to predict their labels simultaneously. [Castro and Braga2013] develops cost-sensitive MLPs whose loss functions are the squared error (L2 loss) and cross entropy respectively, both with each class's log probability reweighted by the inverse class frequency. [Wang et al.2016] designs new loss functions that encourage equal classification errors for the majority and minority classes in image classification. Additionally, with the effective feature extraction of convolutional neural networks, [Huang et al.2016, Khan et al.2018, Dong et al.2018] solve the imbalance problem in computer vision by exploiting the structured information in high-level features.
Our proposed CRCEN for binary imbalanced classification is a neural-network-based CS learning method motivated by the theoretical appeal of CS learning, as mentioned in [Castro and Braga2013]. The loss function in CRCEN is a class-wise reweighted cross entropy (CE), and the choice of weight is related to the model's predictive behavior.
The previous work closest to ours is [Castro and Braga2013], which proposes the cost-sensitive MLP (CSMLP) with a class-wise weighted squared error loss (i.e. the classification problem is treated as regression, with labels coded as fixed target values for the majority and minority classes). Under that specific setting, they derive the same weighting strategy of inverse class frequency to improve learning on the minority class. However, the dependence of their weight derivation on the label coding is unclear; it is also difficult to understand the effect if a different weight setting is used. Different from their regression treatment, CRCEN takes the natural probabilistic approach (CE loss) to the imbalanced classification problem, which does not rely on a specific label coding. Under some moderate conditions, a relation among the MLP's predicted probabilities, the weight choice and the class imbalance ratio is theoretically derived. That relation is further qualitatively analyzed to gain insights into the MLP's predictive performance as well as to understand the effect of class weights from a probabilistic perspective. We have noticed that [Aurelio et al.2019] recently uses a weighted CE loss for imbalanced learning, incorporating prior class information heuristically. This is a special case of CRCEN; furthermore, we provide theoretical support for this particular weight choice under the CRCEN framework.
3 Proposed Method
Notation
Let $D = \{(x_i, y_i)\}_{i=1}^{N}$ be the imbalanced set of training samples, where $x_i \in \mathbb{R}^d$ is the $d$-dimensional feature vector and $y_i \in \{0, 1\}$ is the class label. We use $y = 1$ to represent the minority class and $y = 0$ the majority. Further, $D_0$ and $D_1$ represent the sample sets of the majority and minority class respectively, with $|D_0| = N_0$ and $|D_1| = N_1$. The imbalance ratio is defined as $\mathrm{IR} = N_0 / N_1$.
3.1 The CRCEN Loss Function
For the binary classification problem, minimizing cross entropy (CE) (equivalent to maximum likelihood) is an attractive approach due to its modeling of uncertainty, where each training sample is weighted equally. In imbalanced learning, direct use of CE often leads to poor predictive performance. The learning algorithm favors correct classification of samples in the majority class and tends to misclassify the minority samples.
To overcome this limitation and improve correct classification on the minority class, we propose CRCEN, which optimizes the following class-wise reweighted version of the CE loss:

$$L(\theta) = -w \sum_{x_i \in D_1} \log p(x_i; \theta) - \sum_{x_j \in D_0} \log\big(1 - p(x_j; \theta)\big), \quad (1)$$

or the regularized version $L(\theta) + \lambda \Omega(\theta)$ (such as the $\ell_2$ norm of $\theta$), where $p(x; \theta)$ is the probability of $x$ belonging to the minority class, modeled by a neural network with parameter vector $\theta$; $w$ is the weight parameter; and $\lambda$ is the tuning parameter of the regularization term $\Omega(\theta)$. In this paper, the neural network is chosen to be an MLP.
Note that when $w = 1$, CRCEN reduces to the conventional CE loss. In that case, when optimizing with gradient descent, the gradient is dominated by the error signal of the majority class and the decision boundary is pushed toward the minority class; as a consequence, the trained model is likely to have high classification error on the minority class. Hence, when $w$ is greater than $1$, $w$ can be viewed intuitively as strengthening the error signal of the minority class in the gradient. Alternatively, for integer $w$, optimizing Equation (1) is equivalent to optimizing the conventional CE loss on training data rebalanced by simply duplicating each minority sample $w$ times.
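The loss in Equation (1) and the duplication view are easy to check numerically. The following is a minimal NumPy sketch (the function name `crcen_loss` and the toy values are ours, not from the paper), using the unnormalized sum form of Equation (1) so that the duplication equivalence is exact:

```python
import numpy as np

def crcen_loss(p, y, w):
    """Class-wise reweighted cross entropy (Equation 1, unnormalized sum).

    p : predicted probability of the minority class (y = 1) for each sample
    y : labels, 1 = minority, 0 = majority
    w : weight on the minority-class log-likelihood
    """
    p = np.clip(p, 1e-12, 1 - 1e-12)           # numerical stability
    minority = -w * np.sum(np.log(p[y == 1]))  # weighted minority term
    majority = -np.sum(np.log(1 - p[y == 0]))  # majority term
    return minority + majority

# Toy predictions: 2 minority samples, 4 majority samples.
y = np.array([1, 1, 0, 0, 0, 0])
p = np.array([0.6, 0.3, 0.2, 0.1, 0.4, 0.05])

# With w = 1 this is the conventional (unnormalized) CE loss.
ce = crcen_loss(p, y, w=1)

# With w = 3, the loss equals plain CE on data in which each
# minority sample is duplicated 3 times.
y_dup = np.concatenate([np.repeat(y[y == 1], 3), y[y == 0]])
p_dup = np.concatenate([np.repeat(p[y == 1], 3), p[y == 0]])
assert np.isclose(crcen_loss(p, y, w=3), crcen_loss(p_dup, y_dup, w=1))
```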
3.2 A Key Equation for the Weight $w$
Reweighting minority samples has been applied effectively in practice. A common strategy is to upweight minority samples by the imbalance ratio IR. In this section, we investigate the theoretical aspect of this weighting mechanism, with neural networks as the predictive model. As we shall see, the weight $w$ connects the imbalance ratio with the expected predicted probabilities of minority and majority samples. In the following subsections, we suppress $\theta$ and write $p(x)$ for $p(x; \theta)$ for notational brevity.
Assume the output layer of the MLP consists of only one neuron; then the predicted probability is

$$p(x) = \sigma(z) = \frac{1}{1 + e^{-z}}, \qquad z = \beta^{\top} h(x) + b, \quad (2)$$

where $z$ is the input to the output neuron, $h(x)$ is the feature embedding of $x$ from the last hidden layer, $\beta$ is the parameter subvector of $\theta$ for the output neuron and $b$ is the corresponding bias term.
After the MLP is trained, the loss function is minimized at an optimal solution $\theta^*$. Here, we focus on the unregularized loss $L(\theta)$ for simplicity. By optimization theory, we have the necessary condition at $\theta^*$:

$$\nabla_{\theta} L(\theta^*) = 0. \quad (3)$$

For simplicity, we consider one component $\theta_k$ of $\theta$; at $\theta^*$,

$$\frac{\partial L}{\partial \theta_k}\bigg|_{\theta^*} = 0, \quad (4)$$

which, using $\partial p / \partial z = p(1 - p)$, expands to

$$w \sum_{x_i \in D_1} (1 - p_i)\, \frac{\partial z_i}{\partial \theta_k} = \sum_{x_j \in D_0} p_j\, \frac{\partial z_j}{\partial \theta_k}, \quad (5)$$

where $p_i = p(x_i)$ is the predicted probability of sample $x_i$ belonging to the minority class.
Since Equation (5) holds for any component of the parameter $\theta$, we specifically consider the case of the bias term $b$ in Equation (2), for which $\partial z_i / \partial b = 1$. From Equation (5), we obtain the key equation of CRCEN:

$$w \sum_{x_i \in D_1} (1 - p_i) = \sum_{x_j \in D_0} p_j, \quad (6)$$

which reveals the relation among the weight $w$, the training samples' predicted probabilities and the class sizes of the minority and majority class, after the neural network is trained.
In neural network training, $\ell_2$ regularization is often applied to prevent overfitting. If the bias terms are not regularized in the $\ell_2$ term (which is usually the case), Equation (6) still holds. Let $\theta = (\theta_w, \theta_b)$, where $\theta_w$ is the parameter vector for the hidden layers and $\theta_b$ the vector of all bias terms, and let the regularization term be $\lambda \|\theta_w\|_2^2$. Then for any component $\theta_{b,k}$ of $\theta_b$, $\partial(\lambda \|\theta_w\|_2^2) / \partial \theta_{b,k} = 0$. Hence Equation (5), and consequently Equation (6), holds for the bias term $b$.
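Equation (6) can be checked numerically by fitting a weighted logistic regression (an MLP with no hidden layer and an unregularized bias) with gradient descent and evaluating both sides at the solution. A small self-contained sketch under assumed synthetic Gaussian data (all names, sizes and constants are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Imbalanced synthetic training data: 200 majority vs 20 minority samples.
n0, n1, w = 200, 20, 5.0
X = np.vstack([rng.normal(0.0, 1.0, (n0, 2)),    # majority, y = 0
               rng.normal(1.5, 1.0, (n1, 2))])   # minority, y = 1
y = np.concatenate([np.zeros(n0), np.ones(n1)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent on the class-wise reweighted CE loss (Equation 1).
beta, b = np.zeros(2), 0.0
sw = np.where(y == 1, w, 1.0)                    # per-sample class weights
for _ in range(20000):
    p = sigmoid(X @ beta + b)
    g = sw * (p - y)                             # dL/dz for each sample
    beta -= 0.1 * (X.T @ g) / (n0 + n1)
    b -= 0.1 * g.sum() / (n0 + n1)

# Key equation (6): w * sum_{minority}(1 - p) == sum_{majority} p.
p = sigmoid(X @ beta + b)
lhs = w * np.sum(1 - p[y == 1])
rhs = np.sum(p[y == 0])
assert abs(lhs - rhs) / rhs < 1e-2               # holds at the optimum
```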
3.3 Theoretical Analysis on Equation (6)
The relation given by Equation (6) depends on the training dataset $D$. For theoretical analysis, we make the moderate assumption that, in both the training and testing data, the majority and minority samples are generated from the same class-conditional distributions $P_0$ and $P_1$ respectively (i.e. no distribution shift between training and testing). Dividing both sides of Equation (6) by the class sizes, it can be generalized as

$$\frac{\mathbb{E}_{x \sim P_1}[1 - p(x)]}{\mathbb{E}_{x \sim P_0}[p(x)]} = \frac{N_0}{w N_1} = \frac{\mathrm{IR}}{w}, \quad (7)$$

where $P_1$ and $P_0$ are the distributions of the minority and majority class and $\mathbb{E}$ is the expectation operator. Hence, $\mathbb{E}_{x \sim P_1}[1 - p(x)]$ is the expected probability with which the trained neural network predicts a minority sample as belonging to the majority class; $\mathbb{E}_{x \sim P_0}[p(x)]$ is the expected probability of a majority sample being predicted as a minority sample. Here, we use the trained model's output $p$ to emphasize the dependence of (7) on the trained neural network. Note that Equation (7) is a general relation regardless of data imbalance.
The predictive performance of the classifier involves a decision-making process given $p(x)$. The conventional approach is to set a probability threshold $p_t$: $x$ is predicted as minority if $p(x) \ge p_t$ and as majority otherwise. Here, we take $p_t = 0.5$ for the following analysis of model performance. By assuming that the training and testing data follow the same distribution, Equation (7) is generalizable from training to testing.
3.3.1 When $w = 1$
CRCEN reduces to the conventional cross entropy loss. When the imbalance ratio is high in the training data, say $\mathrm{IR} = 10$, Equation (7) becomes $\mathbb{E}_{P_1}[1 - p(x)] = 10\, \mathbb{E}_{P_0}[p(x)]$. Since $1 - p(x) \in (0, 1)$, we must have $\mathbb{E}_{P_0}[p(x)] \le 0.1$.
If $0.5$ is the decision-making threshold, this implies that the trained neural network can correctly predict a majority sample, confidently with probability (at least) $0.9$ on average.
For prediction on the minority class, model performance is more complex. We illustrate the idea with two cases for $\mathbb{E}_{P_0}[p(x)]$; again, take $\mathrm{IR} = 10$, so that $\mathbb{E}_{P_1}[1 - p(x)] = 10\,\mathbb{E}_{P_0}[p(x)]$:

if $\mathbb{E}_{P_0}[p(x)] = 0.08$, then we have $\mathbb{E}_{P_1}[1 - p(x)] = 0.8$. That implies the predicted probability of a minority sample being minority is $0.2$ on average. Hence, the classifier must misclassify most minority samples ($p(x) < 0.5$), resulting in very poor predictive accuracy for the minority class.

if, say, $\mathbb{E}_{P_0}[p(x)] = 0.02$, then $\mathbb{E}_{P_1}[1 - p(x)] = 0.2$, i.e. the predicted probability of a minority sample being minority is $0.8$ on average. Hence, the classifier can achieve good performance for minority samples. But the extent of "goodness" depends on the trained network and the spread of $p(x)$ under $P_1$, i.e. the variance $\mathrm{Var}_{x \sim P_1}[p(x)]$. If the variance is small (relative to the average distance from the decision boundary in the latent feature space), the classifier can still achieve very high accuracy; if large, performance degrades.
Geometrically, the imbalance of the training data pushes the decision boundary toward the minority class in the latent feature space learned by the neural network.
The analysis above theoretically explains why the classifier always performs well on the majority class, as well as how performance on the minority class is connected with model training and the data distribution in imbalanced learning.
3.3.2 When $w = \mathrm{IR}$
This strategy is equivalent to the empirical weighting by inverse class frequency deployed in [Aurelio et al.2019, Castro and Braga2013]. With this choice of $w$, Equation (7) gives $\mathbb{E}_{P_1}[1 - p(x)] = \mathbb{E}_{P_0}[p(x)]$. That is, the expected probability of predicting a minority sample as minority is equal to the expected probability of predicting a majority sample as majority. Taking $p_t = 0.5$, good predictive performance for both the minority and majority class is guaranteed. However, the extent of "goodness" depends on the variances of $p(x)$ under $P_1$ and $P_0$.
3.3.3 For a General Choice of $w$
$w$ is the parameter controlling the ratio between the probabilities $\mathbb{E}_{P_1}[1 - p(x)]$ and $\mathbb{E}_{P_0}[p(x)]$. For example, when $w = 2\,\mathrm{IR}$, $\mathbb{E}_{P_1}[1 - p(x)] = 0.5\, \mathbb{E}_{P_0}[p(x)]$; the loss function in CRCEN then deploys a weight setting equivalent to $w = 2 N_0 / N_1$. Since $\mathbb{E}_{P_0}[p(x)] < 1$, we have $\mathbb{E}_{P_1}[1 - p(x)] < 0.5$, so good predictive performance for the minority class is guaranteed (when $p_t = 0.5$). Assume $\mathbb{E}_{P_0}[p(x)] \le 0.5$ (the true value of $\mathbb{E}_{P_0}[p(x)]$ is generally unknown as it depends on the true data distribution); we then obtain $\mathbb{E}_{P_1}[1 - p(x)] \le 0.25$, which implies the prediction accuracy for the minority class can possibly be boosted at the cost of a small accuracy degradation for the majority class, while still maintaining good performance on it. For the exact relation, we plan to investigate this problem in future studies.
w          | 1            | IR           | 2·IR
RHS        | 10           | 1            | 0.5
LHS (Sim1) | 10.05 (1.13) | 1.00 (0.09)  | 0.50 (0.04)
LHS (Sim2) | 10.12 (0.67) | 1.01 (0.05)  | 0.50 (0.03)

Table 1: Simulation results (with standard deviations in parentheses) for Equation (7) over 100 runs. RHS represents the theoretical value $\mathrm{IR}/w$ on the right-hand side of (7); LHS the simulated value of the left-hand side.
3.4 Simulations for Correctness of Equation (7)
In order to check the correctness of Equation (7), we conduct simulations under two settings. The imbalance ratio is 10 in the training data; the training and testing data follow the same data distribution.

Sim1: the majority and minority classes are generated from two fixed class-conditional distributions $P_0$ and $P_1$. Logistic regression is fitted (as a special case of MLP).

Sim2: the two classes are generated from a second, more complex pair of class-conditional distributions. A one-hidden-layer MLP with sigmoid activation is fitted.

In each setting, $w \in \{1, \mathrm{IR}, 2\,\mathrm{IR}\}$ is the weight used in CRCEN, which is then trained on the training data. The predicted probabilities on the testing data are calculated and used in the LHS of Equation (7) to approximate $\mathbb{E}_{P_1}[1 - p(x)]$ and $\mathbb{E}_{P_0}[p(x)]$. Table 1 shows the simulation results under the three weight settings. The simulated values match the theoretical values accurately, demonstrating the correctness of Equation (7).
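A Sim1-style check can be reproduced in a few lines. The sketch below is our own illustrative reconstruction, not the authors' simulation code: the Gaussian class-conditionals, sample sizes, optimizer settings and tolerance are all assumptions. It fits a weighted logistic regression on imbalanced training data and compares the test-set estimate of the LHS of Equation (7) with the theoretical value $\mathrm{IR}/w$:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample(n0, n1):
    """Draw majority (y=0) and minority (y=1) samples from two Gaussians."""
    X = np.vstack([rng.normal(0.0, 1.0, (n0, 2)),
                   rng.normal(2.0, 1.0, (n1, 2))])
    y = np.concatenate([np.zeros(n0), np.ones(n1)])
    return X, y

def fit_crcen(X, y, w, iters=20000, lr=0.1):
    """Weighted logistic regression (a no-hidden-layer MLP) via gradient descent."""
    beta, b = np.zeros(X.shape[1]), 0.0
    sw = np.where(y == 1, w, 1.0)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ beta + b)))
        g = sw * (p - y)
        beta -= lr * (X.T @ g) / len(y)
        b -= lr * g.sum() / len(y)
    return beta, b

IR = 10
Xtr, ytr = sample(3000, 300)        # training data, IR = 10
Xte, yte = sample(20000, 2000)      # testing data, same distributions

for w in (1.0, float(IR)):
    beta, b = fit_crcen(Xtr, ytr, w)
    p = 1.0 / (1.0 + np.exp(-(Xte @ beta + b)))
    ratio = np.mean(1 - p[yte == 1]) / np.mean(p[yte == 0])  # LHS of (7)
    assert abs(ratio - IR / w) / (IR / w) < 0.35             # roughly matches RHS
```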
Dataset  | # Samples | # Features | IR
Abalone  | 4177      | 10         | 9.7
Coil     | 9822      | 85         | 16
Satimage | 6435      | 36         | 9.3
Scene    | 2407      | 294        | 13
Solar    | 1389      | 32         | 19
UScrime  | 1994      | 100        | 12

Table 2: Summary of the datasets used in the experiments.
       | Abalone      | Coil         | Satimage     | Scene        | Solar        | UScrime
       | F1     Gm    | F1     Gm    | F1     Gm    | F1     Gm    | F1     Gm    | F1     Gm
MLP    | 0      0     | 0.107  0.273 | 0.554  0.668 | 0.211  0.341 | 0      0     | 0.475  0.597
CSMLP  | 0.392  0.798 | 0.200  0.658 | 0.611  0.890 | 0.259  0.662 | 0.240  0.655 | 0.472  0.707
CRCEN  | 0.400  0.808 | 0.205  0.678 | 0.626  0.894 | 0.298  0.701 | 0.264  0.690 | 0.525  0.726
ADASYN | 0.389  0.804 | 0.166  0.411 | 0.583  0.898 | 0.253  0.512 | 0.240  0.643 | 0.448  0.652
SMOTE  | 0.400  0.808 | 0.161  0.406 | 0.624  0.891 | 0.234  0.487 | 0.232  0.647 | 0.466  0.675
RUSB   | 0.385  0.792 | 0.163  0.624 | 0.518  0.874 | 0.163  0.561 | 0.174  0.705 | 0.407  0.831
EE     | 0.377  0.789 | 0.200  0.670 | 0.537  0.869 | 0.256  0.707 | 0.169  0.670 | 0.434  0.858
BRF    | 0.393  0.794 | 0.196  0.674 | 0.579  0.890 | 0.256  0.709 | 0.180  0.699 | 0.459  0.865

Table 3: Predictive performance (F1 and G-mean) on the testing data for all methods and datasets.
4 Experiments
In this section, we evaluate CRCEN on real-world datasets with various degrees of imbalance that are widely used in imbalanced learning. All datasets tested in the experiments are taken from the "imbalanced-learn" Python package [Lemaitre et al.2017]. Table 2 shows the details of the datasets used in the experiments.
For performance comparison, we test several baseline models for imbalanced learning, including the sampling methods SMOTE [Chawla et al.2002] and ADASYN [He et al.2008], the ensemble methods RUSBoost (RUSB) [Seiffert et al.2010], balanced random forest (BRF) [Khoshgoftaar et al.2007] and EasyEnsemble (EE) [Liu et al.2009], and the neural-network-based cost-sensitive method CSMLP [Castro and Braga2013]. As sampling methods are a data preprocessing step, we use an MLP as the classifier after the training data are resampled. For the ensemble methods, default base classifiers are used. In addition, an MLP classifier trained on the original data with the conventional cross entropy loss is tested. Most methods are implemented with the imbalanced-learn and scikit-learn packages. For our proposed CRCEN, we use an MLP as the classifier and implement it in PyTorch. All MLPs are fixed at three layers, with the number of hidden neurons selected for each dataset.
In the experiments, each dataset is divided into training and testing sets by a stratified 0.75/0.25 split to ensure the imbalance ratio is maintained. All models are trained on the training data and model performance is evaluated on the testing data. We repeat the train/test split 4 times. We select model hyperparameters (including the regularization parameter and the number of hidden neurons) by 4-fold cross validation with a grid search on the training data in the first run of the experiment and then fix them in the subsequent runs. This procedure tests model robustness to variations in the train/test splits.
As the overall accuracy is known to be misleading for imbalanced datasets, we use the F-measure (F1) and G-mean (Gm) as evaluation metrics, defined as

$$\mathrm{F1} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}, \qquad \mathrm{Gm} = \sqrt{\mathrm{Recall} \times \mathrm{Specificity}},$$

where Precision = TP/(TP+FP), Recall = TP/(TP+FN) and Specificity = TN/(FP+TN); TP represents true positives, FP false positives, FN false negatives and TN true negatives, with the minority class taken as positive.
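For concreteness, a small helper (the function name and counts are ours, for illustration) that computes both metrics from confusion-matrix counts:

```python
import math

def f1_gmean(tp, fp, fn, tn):
    """F-measure and G-mean from confusion-matrix counts (minority = positive)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # also known as sensitivity
    specificity = tn / (fp + tn)
    f1 = 2 * precision * recall / (precision + recall)
    gm = math.sqrt(recall * specificity)
    return f1, gm

# Hypothetical confusion matrix for a 10:90 imbalanced test set.
f1, gm = f1_gmean(tp=8, fp=4, fn=2, tn=86)
```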
4.1 Predictive Results
Predictive performance on the testing data is reported in Table 3. As we can see from the table, CRCEN has an overall better performance than the other methods. Compared with the MLP trained directly on the original training data, all methods achieve great improvements in G-mean. Since the plain MLP tends to misclassify minority samples into the majority class, it suffers a large error and a low recall score on the minority class; consequently, its G-mean is low. The improvements demonstrate that all of these techniques can effectively strengthen the classifier's detection of the minority class and improve overall performance. CRCEN and CSMLP are both cost-sensitive methods based on MLPs with the same weighting mechanism. In the experiments, CRCEN has slightly better but comparable performance to CSMLP. As explained in previous sections, both methods promote learning of minority samples with theoretical guarantees, by learning a balanced boundary between the two classes; however, CRCEN's probabilistic approach is more appropriate and effective for classification problems [Lee et al.2015]. RUSBoost (RUSB) is an ensemble cost-sensitive method in which costs are dynamically assigned to misclassified samples. From the table, RUSB's performance is not as good as CRCEN's or CSMLP's. By checking its predictions (results not shown here), RUSB achieves the highest classification accuracy on the minority class, at the cost of a significant amount of misclassification in the majority class. Because samples near the boundary are easily misclassified, RUSB assigns large costs to those boundary samples. Consequently, the decision boundary is pushed toward the majority class, decreasing precision and specificity and hence degrading overall performance (F1 and G-mean).
4.2 Effect of $w$
In many real-world applications, correct classification of minority samples (true positives) has higher priority than misclassification of majority samples (false positives), making imbalanced learning extremely useful. In terms of evaluation metrics, a predictive model with high recall is preferred. For cost-sensitive learning, imposing more cost on minority samples generally improves recall. However, since there is a tradeoff between recall and precision (or specificity), a higher cost decreases precision (or specificity). Under CRCEN, the weight parameter $w$ controls this tradeoff. In this section, we investigate this relation.
We vary $w$ over a range of values to see to what extent an increase in weight leads to a performance gain in classifying the minority class, and what magnitude of performance loss it causes on the majority class. Note that when $w = 1$, CRCEN is equivalent to the conventional cross entropy loss. For each value of $w$, we train an MLP for classification. Once the models are trained, predictions are made on the testing data. To quantify the tradeoff between recall and precision, we define the expense as the ratio of the change in FPs to the change in FNs between two consecutive values of $w$ (as $w$ increases, the number of FNs decreases and the number of FPs increases). For example, if $(\mathrm{FN}_1, \mathrm{FP}_1)$ and $(\mathrm{FN}_2, \mathrm{FP}_2)$ are the counts for two consecutive values of $w$ on the testing data, the expense is $(\mathrm{FP}_2 - \mathrm{FP}_1)/(\mathrm{FN}_1 - \mathrm{FN}_2)$. This ratio can be viewed as the budget one can afford for detecting one additional true positive (i.e. removing one false negative). The decision-making threshold is $0.5$.
Figure 1 shows the expense plots for the six datasets. From the figure, we can observe a trend that the expense increases as $w$ increases: by imposing heavier costs, we improve the detection of true positives, but at an increasing cost of false positives. When $w$ becomes large, heavier costs no longer improve the classifier's performance on the minority class in the Scene, Solar, Satimage and UScrime datasets; with a limited amount of training data, this in turn increases the risk of overfitting, resulting in performance degradation. For the Satimage and UScrime datasets, the expense is relatively small (less than 2), compared with Abalone, Scene, Coil and Solar. In Table 3, the plain MLP already performs well on those two datasets. With a low expense and a high class imbalance ratio, cost-sensitive learning can further improve the overall performance, as confirmed in Table 3. From the same perspective, CRCEN is also effective for the more complex datasets Abalone, Scene, Coil and Solar.
5 Conclusion and Discussion
In this paper, we proposed a novel neural-network-based model, CRCEN, for the imbalanced learning problem. The objective function in CRCEN is a class-wise reweighted version of the cross entropy loss. Despite this simple form, under some mild conditions we derive a nontrivial probabilistic relation that helps us understand the model's predictive behavior. When the weights are set to the inverse class frequency as a heuristic, the derived relation explains the effectiveness of this approach with theoretical guarantees. Extensive experiments demonstrate the effectiveness of CRCEN. In future studies, we plan to investigate the relation for a general choice of $w$ to understand how model performance is affected accordingly.
References
 [Aurelio et al.2019] Yuri Sousa Aurelio, Gustavo Matheus de Almeida, Cristiano Leite de Castro, and Antonio Padua Braga. Learning from imbalanced data sets with weighted crossentropy function. Neural Processing Letters, pages 1–13, 2019.
 [Buda et al.2018] Mateusz Buda, Atsuto Maki, and Maciej A Mazurowski. A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks, 106:249–259, 2018.

 [Castro and Braga2013] Cristiano L Castro and Antônio P Braga. Novel cost-sensitive approach to improve the multilayer perceptron performance on imbalanced data. IEEE Transactions on Neural Networks and Learning Systems, 24(6):888–899, 2013.
 [Chawla et al.2002] Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. SMOTE: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16:321–357, 2002.
 [Chung et al.2015] Yu-An Chung, Hsuan-Tien Lin, and Shao-Wen Yang. Cost-aware pre-training for multiclass cost-sensitive deep learning. arXiv preprint arXiv:1511.09337, 2015.

 [Dong et al.2018] Qi Dong, Shaogang Gong, and Xiatian Zhu. Imbalanced deep learning by minority class incremental rectification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
 [Dumpala et al.2018] Sri Harsha Dumpala, Rupayan Chakraborty, and Sunil Kumar Kopparapu. A novel data representation for effective learning in class imbalanced scenarios. In IJCAI, pages 2100–2106, 2018.
 [Fernandez et al.2018] Alberto Fernandez, Salvador Garcia, Mikel Galar, Ronaldo Prati, Bartosz Krawczyk, and Francisco Herrera. Learning from imbalanced data sets. Springer, 2018.
 [Freund and Schapire1997] Yoav Freund and Robert E Schapire. A decisiontheoretic generalization of online learning and an application to boosting. Journal of computer and system sciences, 55(1):119–139, 1997.
 [Galar et al.2012] Mikel Galar, Alberto Fernandez, Edurne Barrenechea, Humberto Bustince, and Francisco Herrera. A review on ensembles for the class imbalance problem: bagging, boosting, and hybridbased approaches. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(4):463–484, 2012.
 [Galar et al.2013] Mikel Galar, Alberto Fernández, Edurne Barrenechea, and Francisco Herrera. Eusboost: Enhancing ensembles for highly imbalanced datasets by evolutionary undersampling. Pattern Recognition, 46(12):3460–3471, 2013.
 [He and Garcia2008] Haibo He and Edwardo A Garcia. Learning from imbalanced data. IEEE Transactions on Knowledge & Data Engineering, (9):1263–1284, 2008.
 [He and Ma2013] Haibo He and Yunqian Ma. Imbalanced learning: foundations, algorithms, and applications. John Wiley & Sons, 2013.
 [He et al.2008] Haibo He, Yang Bai, Edwardo A Garcia, and Shutao Li. Adasyn: Adaptive synthetic sampling approach for imbalanced learning. In 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), pages 1322–1328. IEEE, 2008.
 [Huang et al.2016] Chen Huang, Yining Li, Chen Change Loy, and Xiaoou Tang. Learning deep representation for imbalanced classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5375–5384, 2016.

 [Khan et al.2018] Salman H Khan, Munawar Hayat, Mohammed Bennamoun, Ferdous A Sohel, and Roberto Togneri. Cost-sensitive learning of deep feature representations from imbalanced data. IEEE Transactions on Neural Networks and Learning Systems, 29(8):3573–3587, 2018.
 [Khoshgoftaar et al.2007] Taghi M Khoshgoftaar, Moiz Golawala, and Jason Van Hulse. An empirical study of learning from imbalanced data using random forest. In 19th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2007), volume 2, pages 310–317. IEEE, 2007.
 [Lee et al.2015] ChenYu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeplysupervised nets. In Artificial Intelligence and Statistics, pages 562–570, 2015.
 [Lemaitre et al.2017] Guillaume Lemaitre, Fernando Nogueira, and Christos K Aridas. Imbalancedlearn: A python toolbox to tackle the curse of imbalanced datasets in machine learning. The Journal of Machine Learning Research, 18(1):559–563, 2017.
 [Li et al.2011] Shoushan Li, Zhongqing Wang, Guodong Zhou, and Sophia Yat Mei Lee. Semisupervised learning for imbalanced sentiment classification. In TwentySecond International Joint Conference on Artificial Intelligence, 2011.
 [Li et al.2018] Xiangrui Li, Dongxiao Zhu, and Ming Dong. Multinomial classification with classconditional overlapping sparse feature groups. Pattern Recognition Letters, 101:37–43, 2018.
 [Liu et al.2006] Yang Liu, Nitesh V Chawla, Mary P Harper, Elizabeth Shriberg, and Andreas Stolcke. A study in machine learning from imbalanced data for sentence boundary detection in speech. Computer Speech & Language, 20(4):468–494, 2006.
 [Liu et al.2009] XuYing Liu, Jianxin Wu, and ZhiHua Zhou. Exploratory undersampling for classimbalance learning. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 39(2):539–550, 2009.
 [Purushotham et al.2018] Sanjay Purushotham, Chuizheng Meng, Zhengping Che, and Yan Liu. Benchmarking deep learning models on large healthcare datasets. Journal of biomedical informatics, 83:112–134, 2018.
 [Seiffert et al.2010] Chris Seiffert, Taghi M Khoshgoftaar, Jason Van Hulse, and Amri Napolitano. Rusboost: A hybrid approach to alleviating class imbalance. IEEE Transactions on Systems, Man, and CyberneticsPart A: Systems and Humans, 40(1):185–197, 2010.
 [Sun et al.2007] Yanmin Sun, Mohamed S Kamel, Andrew KC Wong, and Yang Wang. Costsensitive boosting for classification of imbalanced data. Pattern Recognition, 40(12):3358–3378, 2007.
 [Tang et al.2009] Yuchun Tang, YanQing Zhang, Nitesh V Chawla, and Sven Krasser. Svms modeling for highly imbalanced classification. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 39(1):281–288, 2009.
 [Wang et al.2016] Shoujin Wang, Wei Liu, Jia Wu, Longbing Cao, Qinxue Meng, and Paul J Kennedy. Training deep neural networks on imbalanced data sets. In 2016 International Joint Conference on Neural Networks (IJCNN), pages 4368–4374. IEEE, 2016.