Influence of Resampling on Accuracy of Imbalanced Classification

07/12/2017
by   Evgeny Burnaev, et al.

In many real-world binary classification tasks (e.g., detection of certain objects from images), the available dataset is imbalanced, i.e., it has far fewer representatives of one class (the minor class) than of the other. Accurate prediction of the minor class is usually crucial, but it is hard to achieve because there is little information about that class. One way to deal with this problem is to resample the dataset beforehand, i.e., to add new elements to it or remove existing ones. Resampling can be done in various ways, which raises the problem of choosing the most appropriate one. In this paper we experimentally investigate the impact of resampling on classification accuracy, compare resampling methods, and highlight key points and difficulties of resampling.



1 Introduction

In this paper, we focus on binary classification tasks with imbalanced datasets, i.e., two-class datasets having far fewer representatives of one class (the minor class) than of the other (the major class). Many real-world data analysis problems have inherent peculiarities which lead to unavoidable imbalances in the available datasets. Examples of such problems include detection of whether a patient is “cancerous” or “healthy” from a mammography image [1], oil spill detection from satellite images [2], network intrusion detection [3], detection of fraudulent transactions on credit cards [4], diagnosis of rare diseases [5], prediction and localization of failures in technical systems [6, 7], etc. Indeed, target events (frauds, failures, diseases, etc.) in these problems are rare, so there is much less information about them than about non-events (normal transactions, correct behavior, etc.). Hence an attempt to formulate any of these problems as a binary classification task leads to class imbalance in the available dataset. Note that this effect is unavoidable since it is caused by the nature of the problem.

Moreover, the minor class is often the one of prime interest [8]. E.g., in the examples above it corresponds to the target events whose accurate prediction is crucial for applications. However, standard classification models (e.g., logistic regression, SVM, decision trees, nearest neighbors) treat all classes as equally important and thus tend to be biased towards the major class in imbalanced problems [9, 10, 11]. This may lead to poor prediction of the minor class even while the average quality of prediction is good. For example, consider a process in which target events occur in only a tiny fraction of all cases. A classification model that always gives a ‘no-event’ answer is wrong only in that tiny fraction of cases, which looks like good quality on average, yet such a model is completely useless for minor-class prediction. Thus imbalanced classification problems require special treatment.
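This accuracy paradox is easy to reproduce numerically. The following sketch uses hypothetical counts (the paper's exact figures are not given here) to show a constant "no event" model scoring high accuracy while never detecting the minor class:

```python
# Illustration: a trivial "always no-event" classifier on an imbalanced sample.
# Hypothetical counts: 10 target events among 1000 observations.
y_true = [1] * 10 + [0] * 990          # 10 minor-class events, 990 non-events
y_pred = [0] * len(y_true)             # the model always answers "no event"

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
minor_recall = sum(p == 1 for t, p in zip(y_true, y_pred) if t == 1) / 10

print(accuracy)      # 0.99 -- looks excellent on average
print(minor_recall)  # 0.0  -- but the minor class is never detected
```
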

There are several ways [8] to increase the importance of the minor class.

  • Adapt a probability threshold for classifiers which yield probabilities of belonging to classes.

  • Modify a loss function, e.g., assign more weight to misclassification of minor class elements compared to misclassification of the major class.

  • Resample a dataset in order to soften or remove class imbalance. Resampling may include:

    • oversampling, i.e., addition of synthesized elements to the minor class;

    • undersampling, i.e., deletion of particular elements from the major class;

    • combined resampling, i.e., both oversampling and undersampling.

Below we focus on the latter approach. It is convenient and widely used, since it allows one to tackle imbalanced tasks with standard classification techniques. On the other hand, it raises the problem of selecting a resampling method and a resampling amount (i.e., how many observations to add or drop).

In this paper, we consider three widely used resampling methods (see section 3) and the two simplest strategies of resampling amount selection (see section 4.2). We experimentally explore their influence on the quality of classification on a collection of imbalanced datasets. The exploration shows that resampling is capable of improving the quality for most datasets; however, the resampling method and amount have to be selected properly, see section 5 for a more detailed discussion.

2 Notations and Problem Statement

In this section, we introduce the notation used further. Consider a dataset D = {(x_i, y_i)}, i = 1, …, N, of N elements with binary labels, where x_i ∈ R^d and y_i ∈ {0, 1}. Denote D_0 = {(x, y) ∈ D : y = 0} and D_1 = {(x, y) ∈ D : y = 1}. Let label 0 correspond to the major class and label 1 to the minor class, so that |D_0| > |D_1|. To measure the degree of class imbalance of a dataset, we introduce the imbalance ratio IR(D) = |D_0| / |D_1|. Note that IR(D) > 1 and the higher it is, the stronger the imbalance of D is.

The final goal is to learn a classifier using the imbalanced training sample D. This is done in two steps. First, the dataset is resampled using a resampling method r: some observations are dropped from D or some new synthetic observations are added to it. The result of resampling is a dataset r(D) with IR(r(D)) ≤ IR(D). Next, some standard classification model is learned on r(D), which gives a classifier a as a result.

The performance of a classifier is determined by a predefined quality metric μ, which takes as input a classifier a and a testing dataset and yields a higher value for better classification. In order to determine the performance of a classification model and a resampling method on the whole dataset D with respect to μ, we use a standard procedure based on K-fold cross-validation [12], which yields the cross-validated value of μ.
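The evaluation procedure can be sketched as follows. All names here are hypothetical stand-ins: `resample` for a concrete resampling method, `fit` for a classification model, `metric` for the quality metric. Note that resampling is applied only to the training folds, never to the held-out fold:

```python
import random

def cross_validated_quality(data, resample, fit, metric, k=5, seed=0):
    """K-fold CV: resample only the training folds, score on untouched test folds."""
    rng = random.Random(seed)
    data = data[:]
    rng.shuffle(data)
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        test = folds[i]
        train = [obs for j, fold in enumerate(folds) if j != i for obs in fold]
        clf = fit(resample(train))      # resampling touches the training part only
        scores.append(metric(clf, test))
    return sum(scores) / k
```

Resampling the whole dataset before splitting would leak synthetic copies of training points into the test folds and inflate the estimated quality, which is why the resampling call sits inside the loop.
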

3 Overview of Resampling Methods

Every resampling method r considered in this article works according to the following scheme.

  1. Takes input:

    • dataset as described in section 2;

    • resampling multiplier m > 1, which determines the resulting imbalance ratio as IR(r(D)) = IR(D) / m and thereby controls the amount of resampling;

    • additional parameters, which are specific for every particular method.

  2. Modifies the given dataset by adding synthesized objects to the minor class (oversampling), or by dropping objects from the major class (undersampling), or both. Details depend on the method used.

  3. Outputs the resampled dataset r(D) with the same features and imbalance ratio IR(r(D)) = IR(D) / m.

In this paper, we consider the most widely used resampling methods: Random Oversampling, Random Undersampling and Synthetic Minority Oversampling Technique (SMOTE).

3.1 Random Oversampling

Random oversampling [8] (ROS, also known as bootstrap oversampling) takes no additional input parameters. It adds (m − 1)·|D_1| new objects to the minor class, each drawn from the uniform distribution on D_1 (i.e., existing minor-class objects are duplicated at random).
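A minimal sketch of ROS, assuming labels 0 (major) / 1 (minor) as in section 2 and a multiplier m that scales the minor class by a factor of m (the function name is hypothetical):

```python
import random

def random_oversample(data, m, seed=0):
    """Duplicate minor-class (label 1) objects, drawn uniformly with replacement,
    until the minor class is m times its original size."""
    rng = random.Random(seed)
    minor = [obs for obs in data if obs[1] == 1]
    n_new = int((m - 1) * len(minor))
    return data + [rng.choice(minor) for _ in range(n_new)]
```
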

3.2 Random Undersampling

Random Undersampling [8] (RUS) is an undersampling method; it takes no additional parameters. It chooses a random subset of D_0 with (1 − 1/m)·|D_0| elements and drops it from the dataset. All subsets of this size have equal probabilities of being chosen.
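A matching sketch of RUS under the same assumptions (labels 0 = major, 1 = minor; the function name is hypothetical):

```python
import random

def random_undersample(data, m, seed=0):
    """Drop a uniformly random subset of the major class (label 0) so that
    its size shrinks by the factor m."""
    rng = random.Random(seed)
    major = [obs for obs in data if obs[1] == 0]
    minor = [obs for obs in data if obs[1] == 1]
    n_keep = int(len(major) / m)
    return rng.sample(major, n_keep) + minor
```
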

3.3 SMOTE

Synthetic Minority Oversampling Technique (SMOTE) [13] is an oversampling method; it takes one additional integer parameter k (the number of neighbors). It adds (m − 1)·|D_1| new synthesized objects to the minor class, which are constructed in the following way.

  1. Initialize the set of synthetic objects as empty: S = ∅.

  2. Repeat the following steps (m − 1)·|D_1| times:

    (a) Select one random element x from the minor class D_1.

    (b) Find the k minor-class elements which are the nearest neighbors of x and randomly select one of them (call it x̂).

    (c) Select a random point s on the segment connecting x and x̂.

    (d) Assign the minor-class label to the newly generated element and store it: S = S ∪ {(s, 1)}.

  3. Add the generated objects to the dataset: r(D) = D ∪ S.
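The steps above can be sketched as follows. This is a minimal illustration for numeric feature vectors under the same label convention (1 = minor); the function name is hypothetical and the neighbor search is deliberately naive:

```python
import random

def smote(data, m, k=5, seed=0):
    """Generate (m - 1) * |minor| synthetic minor-class points on segments
    between minor-class elements and their k nearest minor-class neighbors."""
    rng = random.Random(seed)
    minor = [x for x, y in data if y == 1]
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    synthetic = []
    for _ in range(int((m - 1) * len(minor))):
        x = rng.choice(minor)
        # k nearest minor-class neighbors of x (excluding x itself)
        neighbors = sorted((p for p in minor if p is not x),
                           key=lambda p: dist(p, x))[:k]
        x_hat = rng.choice(neighbors)
        t = rng.random()                     # random point on the segment [x, x_hat]
        s = tuple(u + t * (v - u) for u, v in zip(x, x_hat))
        synthetic.append((s, 1))
    return data + synthetic
```
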

3.4 Other Resampling Methods

There are several other resampling methods which are less widely used: Tomek Link deletion [14], One-Sided Selection [14], Evolutionary Undersampling [15], borderline-SMOTE [16], Neighborhood Cleaning Rule [17]. There exist also procedures combining resampling and classification in boosting: SMOTEBoost [18], RUSBoost [19], EUSBoost [20]. These methods are not examined in this paper because we aimed to explore only the most widespread resampling methods.

4 Methodology of Comparison

4.1 Data, Classifiers, Quality Evaluation

We used two pools of datasets for the experimental comparison: an artificial pool and a real pool [21, 22]. Artificial datasets were drawn from Gaussian mixture distributions: each of the two classes is represented as a Gaussian mixture with a small number of components, and the number of features, the dataset size and the imbalance ratio vary across datasets. Real-world datasets come from different areas: biology, medicine, engineering, sociology. All their features are numeric or binary; the number of features, the dataset size and the imbalance ratio vary across these datasets as well.

We ran the learning process described in section 2; for each dataset we varied the classification model, the resampling method and the resampling multiplier. ROS (bootstrap), RUS and SMOTE with k = 5 (as taken in [13]) were considered as resampling methods, and the resampling multiplier was varied over a grid of values. As classification models we used Decision Trees, k-Nearest Neighbors and Logistic Regression with regularization. Optimal parameters of the classification models were selected by cross-validation.

The area under the precision-recall curve was used as the quality metric. To evaluate the quality of resampling and classification, we performed K-fold cross-validation and calculated the average of the metric over the folds. The results of the experiments are described by these averaged values for each dataset, resampling method, resampling multiplier and classification model.

4.2 Resampling Multiplier Selection

We considered two strategies of resampling multiplier selection:

  • equalizing strategy, EqS: select the multiplier providing balanced classes (IR = 1) in the resulting dataset;

  • CV-search, CVS: select the optimal multiplier (i.e., the one providing the maximum cross-validated quality).

The equalizing strategy seems reasonable, as it removes the class imbalance we initially set out to tackle; it is quick and widely used. CV-search may provide better quality, but it is more time-consuming.
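Both strategies are easy to express in code. In the hypothetical sketch below, `quality(m)` stands for the cross-validated score obtained with multiplier m, and labels follow the 0 = major / 1 = minor convention:

```python
def equalizing_multiplier(data):
    """EqS: the multiplier that balances the classes, i.e. the current
    imbalance ratio |major| / |minor| (labels: 0 = major, 1 = minor)."""
    n_major = sum(1 for _, y in data if y == 0)
    n_minor = sum(1 for _, y in data if y == 1)
    return n_major / n_minor

def cv_search_multiplier(quality, grid):
    """CVS: pick the multiplier from `grid` that maximizes the
    cross-validated quality score; m = 1 means no resampling."""
    return max(grid, key=quality)
```

EqS is a single arithmetic expression, while CVS requires one full cross-validation run per grid point, which is where its extra cost comes from.
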

4.3 Dolan-More Curves

To compare resampling methods, we use Dolan-More curves [23], which are built in the following way. Let R be the set of considered resampling methods, T be the set of tasks (datasets), and q_{r,t} be the quality of the method r ∈ R on the dataset t ∈ T. For each method r we introduce ρ_r(τ), the fraction of datasets on which the method r is worse than the best one by not more than a factor of τ:

ρ_r(τ) = |{t ∈ T : max_{r′ ∈ R} q_{r′,t} ≤ τ · q_{r,t}}| / |T|.

For example, ρ_r(1) is the fraction of datasets where the method r is the best.

A graph of ρ_r(τ) as a function of τ is called the Dolan-More curve of the method r. This definition implies that the higher the curve, the better the method. Note that the Dolan-More curve of a particular method depends on the other methods considered in the comparison.
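The profile can be computed directly from a table of quality values. In the hypothetical sketch below, `q[r][t]` is the quality of method r on dataset t, and higher values are better:

```python
def dolan_more(q, r, tau):
    """Fraction of datasets on which method r is within a factor tau of the
    best method: best quality on t <= tau * q[r][t] (higher q is better)."""
    datasets = next(iter(q.values())).keys()
    hits = sum(1 for t in datasets
               if max(q[method][t] for method in q) <= tau * q[r][t])
    return hits / len(datasets)
```

Evaluating `dolan_more` over a grid of tau values and plotting the result for each method yields the curves of Figure 1.
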

5 Results

Figure 1 provides Dolan-More curves for the quality of classification with no resampling, ROS, SMOTE (k = 5) and RUS. Here we consider both multiplier selection strategies (see section 4.2). In order to compare resampling methods, the curves are plotted separately for each classification model, and also separately for real and artificial data.

Figure 1: Dolan-More curves for the quality metric
Figure 2: Value of the quality metric vs. the resulting imbalance ratio for the dataset “Delft pump 1x3” from [22]

First of all, these curves show that the influence of resampling on quality strongly depends on the resampling multiplier. All resampling methods with CV-search of the multiplier improve the quality on most datasets, especially for Decision trees and Logistic regression. In contrast, the equalizing strategy of multiplier selection (EqS) yields much lower quality, and is even worse than no resampling for k-Nearest neighbors and Logistic regression. Such low performance of EqS is not surprising, because the dependence of quality on the multiplier may be complicated: see for example figure 2, which shows this dependence for one real dataset.

Secondly, the performance of a resampling method depends on the classifier used, and there is no method that always outperforms the others. For example, RUS with the CVS strategy is the best when used with k-Nearest neighbors on real datasets, but with Decision trees on the same datasets it performs worse than the other methods with CVS.

Thirdly, the impact of resampling on quality depends on the data it is applied to. RUS EqS used with Decision trees demonstrates this distinctly: it is worse than no resampling on the real datasets but outperforms it on the artificial data. Since the artificial datasets are quite similar (see section 4.1), this means that they share characteristics that result in higher quality for Decision trees with RUS EqS; the real datasets are more diverse, and most of them have other characteristics that result in lower quality.

Finally, classification without resampling is the best choice in some cases; e.g., for Logistic regression this happens on a noticeable fraction of both real and artificial datasets. Therefore, not all imbalanced datasets have to be resampled to achieve better classification quality.

The overall conclusion is the following. Resampling improves classification of imbalanced datasets in most cases if the method and the multiplier are selected properly; otherwise, resampling may have a negative effect on the quality of classification. Thus, if one decides to resample an imbalanced dataset, one has to select a method and a multiplier carefully in order to get an actual quality improvement. Moreover, there is no universally good choice of how to resample: the best resampling method and multiplier for one dataset can be worse than no resampling for another, and equalizing the classes by resampling does not always improve the quality. In some cases, the best choice is not to resample at all. So, to improve the quality of classification, one has to determine the optimal resampling method (also considering no resampling) and multiplier in every particular imbalanced task.

Acknowledgement: The research was conducted at IITP RAS and supported solely by the Russian Science Foundation grant (project 14-50-00150).

References

  • [1]

    K. Woods, et al., “Comparative Evaluation of Pattern Recognition Techniques for Detection of Microcalcifications in Mammography”, Int’l J. Pattern Recognition and Artificial Intelligence, 7(6), p. 1417-1436 (1993)

  • [2]

    M. Kubat, R. C. Holte, S. Matwin. “Machine learning for the detection of oil spills in satellite radar images”, Machine Learning, 30(2-3), p. 195-215 (1998)

  • [3] C. Kruegel, D. Mutz, W. Robertson, F. Valeur. “Bayesian event classification for intrusion detection”, Proceedings of Computer Security Applications Conference, p. 14-23 (2003)
  • [4] P. K. Chan, S. J. Stolfo. “Toward scalable learning with non-uniform class and cost distributions: A case study in credit card fraud detection”, Proc. of Int. Conf. on Knowledge Discovery and Data Mining (1998)
  • [5] N.N. Rahman, D.N. Davis. “Addressing the Class Imbalance Problems in Medical Datasets”, International Journal of Machine Learning and Computing, 3(2), p. 224-228 (2013)
  • [6] M. Tremblay, R. Pater, F. Zavoda, D. Valiquette, G. Simard. “Accurate Fault-Location Technique based on Distributed Power-Quality Measurements”, 19th International Conference on Electricity Distribution (2007)
  • [7] E. Burnaev et al. “Rare Event Prediction Techniques in Application to Predictive Maintenance of Aircraft”, Proceedings of ITaS conference, p. 32-37 (2014)
  • [8] H. He, E.A. Garcia. “Learning from imbalanced data”, IEEE Transactions on Knowledge and Data Engineering, 21 (9), p. 1263-1284 (2009)
  • [9]

    S. Ertekin, J. Huang, C. Lee Giles. “Adaptive Resampling with Active Learning”, Technical Report, Pennsylvania State University (2009)

  • [10] G. King, L. Zeng. “Logistic regression in rare events data”, Political Analysis, vol. 9, p. 137-163 (2001)
  • [11] D. Cieslak, N. Chawla. “Learning decision trees for unbalanced data”, Lecture Notes in Computer Science, p.241-256 (2008)
  • [12] T. Hastie, R. Tibshirani, J. Friedman. “The Elements of Statistical Learning”, Springer (2009)
  • [13] N.V. Chawla, K.W. Bowyer, L.O. Hall, W.P. Kegelmeyer. “SMOTE: Synthetic Minority Over-Sampling Technique”, Journal of Artificial Intelligence Research, vol. 16, p. 321-357 (2002)
  • [14] M. Kubat, S. Matwin. “Addressing the Curse of Imbalanced Training Sets: One-Sided Selection”, Proceedings of the 14th International Conference on Machine Learning, p. 179-186 (1997)
  • [15]

    S. Garcia, F. Herrera. “Evolutionary undersampling for classification with imbalanced datasets: proposals and taxonomy”, Evolutionary Computation 17, p. 275-306 (2009)

  • [16] H. Han, W. Y. Wang, B. H. Mao. “Borderline-SMOTE: A New Over-Sampling Method in Imbalanced Data Sets Learning”, Proceedings of International Conference on Intelligent Computing, p. 878-887 (2005)
  • [17] J. Laurikkala. “Improving Identification of Difficult Small Classes by Balancing Class Distribution”, Artificial Intelligence in Medicine Lecture Notes in Computer Science, vol. 2101, p. 63-66 (2001)
  • [18] N. V. Chawla, A. Lazarevic, L. O. Hall, and K. Bowyer. “SMOTEBoost: Improving prediction of the minority class in boosting”, Proc. Principles Knowl. Discov. Databases, p. 107-119 (2003)
  • [19] C. Seiffert, T. Khoshgoftaar, J. Van Hulse, A. Napolitano. “RUSBoost: a hybrid approach to alleviating class imbalance”, IEEE Transactions on Systems, Man and Cybernetics, part A, p. 185-197 (2010)
  • [20] M. Galar, A. Fernandez, E. Barrenechea, F. Herrera. “EUSBoost: Enhancing ensembles for highly imbalanced data-sets by evolutionary undersampling”, Pattern Recognition, Vol. 46, Issue 12, p. 3460-3471 (2013)
  • [21] http://sci2s.ugr.es/keel/imbalanced.php
  • [22] http://homepage.tudelft.nl/n9d04/occ/index.html
  • [23] E. Dolan, J. Moré. “Benchmarking Optimization Software With Performance Profiles”, Mathematical Programming, 91(2), p. 201-213 (2002)