Overly Optimistic Prediction Results on Imbalanced Data: Flaws and Benefits of Applying Over-sampling

01/15/2020 · by Gilles Vandewiele, et al. · Ghent University

Information extracted from electrohysterography recordings could prove to be an interesting additional source of information for estimating the risk of preterm birth. Recently, a large number of studies have reported near-perfect results in distinguishing between recordings of patients that will deliver term or preterm using a public resource, called the Term/Preterm Electrohysterogram database. However, we argue that these results are overly optimistic due to a methodological flaw. In this work, we focus on one specific type of methodological flaw: applying over-sampling before partitioning the data into mutually exclusive training and testing sets. We show how this biases the results using two artificial datasets, and reproduce the results of studies in which this flaw was identified. Moreover, we evaluate the actual impact of over-sampling on predictive performance, when applied correctly (after data partitioning), using the same methodologies as the related studies, to provide a realistic view of these methodologies' generalization capabilities. We make our research reproducible by providing all the code under an open license.


1 Introduction

Giving birth before 37 weeks of pregnancy, referred to as preterm birth, has a significant negative impact on the expected outcome of the neonate. According to the World Health Organization (WHO), preterm birth is one of the leading causes of death among young children, and its prevalence ranges from 5% to 18% globally [34]. As preterm labor is not yet fully understood, gynecologists find it difficult to assess whether a patient presenting with symptoms suggestive of preterm labor will actually deliver preterm. In order to support these experts in their assessment, several studies have investigated the added value of a predictive model [35, 49, 11, 20, 48]. These models are based on a large number of variables extracted from clinical sources, such as the electronic health record, including the gestational age, biomarker results, cervical length, and clinical history.

One interesting alternative that could serve as an additional source of valuable information for preterm birth risk prediction is electrohysterography (EHG). EHG is a technique that measures the electrical potentials of the uterine muscle by attaching patches to the abdomen of the woman. The technique can be seen as a new and better alternative for measuring uterine activity: as opposed to the intrauterine pressure catheter (IUPC), it is non-invasive and carries no infection, perforation, or hemorrhage risk [13]. Moreover, EHG possesses a higher signal-to-noise ratio than tocography, especially in certain groups, such as obese women [14, 10].

This paper analyzes a large set of studies providing results using a public resource, further detailed in Section 2, called the ‘Term/Preterm ElectroHysteroGram DataBase’ (TPEHGDB) [16]. While the problem of predicting preterm birth is far from solved in reality, many of these studies report near-perfect prediction results. After carefully considering the presented machine learning methodologies, we argue that specific subtle, yet critical, choices in the data pre-processing setup may lead to information leakage from the held-out evaluation set into the training set. In particular, we often observed the incorrect application of over-sampling techniques to circumvent data imbalance issues [23], which often arise in medical applications, as the number of healthy (negative) cases often greatly exceeds the number of cases with a certain disorder or disease (positive). As a consequence, the evaluation results no longer represent an evaluation on actually unseen data.

We do not intend to point the finger at any individual study or methodology. However, in order to provide clear insights into the potential pitfalls in data pre-processing and model evaluation, we group the referenced works according to the different potential issues, to the best of our ability (see Section 2). After a short discussion of the use of over-sampling techniques (Section 3), the remainder of the paper (Section 4) is devoted to reporting our own results on the TPEHGDB dataset for the various models, engineered features, and over-sampling techniques available in the literature. Moreover, we investigate the actual impact of applying over-sampling, when performed correctly, and the added value of tuning the over-sampling algorithm and its corresponding hyper-parameters.

The contributions of this paper are summarized as follows:

  • We provide an exhaustive overview of studies using the TPEHGDB dataset of EHG recordings which tackle the challenge of preterm birth risk estimation. We elaborate on three types of potential issues in the evaluation methodology of studies reporting near-perfect prediction results. We then focus on one specific issue, i.e. applying over-sampling before partitioning the data into a mutually exclusive training and testing set.

  • We explore the individual quality of each of the features discussed in these studies by measuring their capability to distinguish between term and preterm recordings.

  • We experimentally point out the consequences of incorrectly applying over-sampling (i.e., before data partitioning).

  • We discuss the actual impact of existing over-sampling techniques to counteract data imbalance, when applied correctly (after data partitioning), using the same methodologies as related studies in order to provide a realistic view of the performance of their proposed methodologies. Moreover, we study whether tuning the over-sampling algorithm and its corresponding hyper-parameters can result in an increase in the predictive performance.

This work extends Vandewiele et al. [47], where we first pointed out the potential issues in the studies reporting near-perfect results on the TPEHGDB dataset. In particular, for the current paper, we implemented all features discussed in the studies using the TPEHGDB in order to reproduce reported results, and we conducted a large experiment to assess the actual impact of over-sampling when applied prior to vs. after data partitioning. We provide the source code for the feature extraction and experiments to serve as a basis for future research endeavours: https://github.com/GillesVandewiele/EHG-oversampling/.

2 Studies estimating preterm birth risk using the TPEHGDB

In 2008, a public dataset called TPEHGDB (Term/Preterm ElectroHysteroGram DataBase), containing 300 records corresponding to 300 different patients and pregnancies, was released on PhysioNet [16, 22]. Each record consists of three raw bipolar signals that express the difference in electric potentials measured by four electrodes placed on the abdomen. In addition, each record is accompanied by clinical variables, such as the gestational age at recording time, the age and weight of the mother, and whether or not an abortion had occurred in the patient’s medical history. The recordings can be categorized as being captured at an earlier stage in pregnancy (gestational age < 26 weeks) or at a later stage (≥ 26 weeks). Recordings were captured at a frequency of 20 Hz for about 30 minutes. In Figure 1, the number of weeks till birth is shown as a function of the gestational age at the time of recording and displayed according to term or preterm delivery. A clear imbalance is present in the dataset, with significantly more term deliveries (262 in total, green area) than preterm ones (38, red area).

Figure 1: The number of weeks till birth expressed as a function of the gestational age in weeks at the time of recording. All data points within the red area correspond to preterm deliveries, while the ones within the green area correspond to term deliveries.

We now review machine learning studies using the TPEHGDB dataset. Screening for the term ‘machine learning’, we selected a subset of 82 studies from the 160 citations to the original paper present in Google Scholar in January 2020. These studies were then manually double-checked on whether they were machine learning studies that presented clear prediction results on the TPEHGDB dataset. In total, 24 studies using machine learning on the TPEHGDB were identified. While many of these studies report near-perfect results, these should be interpreted very cautiously, as a flaw may be present in their methodology, leading to biased and overly optimistic reported metrics.

These flaws fall into three categories. First, studies often apply cross-validation on a subset of data subsampled from the original dataset. Performing this kind of preprocessing, in a machine learning context, without any justification raises doubts, as it drastically increases the variance of the obtained results and sidesteps the problem of imbalanced data, and thus does not reflect reality in terms of potential applications [4, 37, 36, 45, 15, 7, 42, 46]. Second, a few studies extract segments from the EHG signals and use these for classification. When doing this, it is very important to ensure that segments extracted from the same original signal are not divided over both the training and testing set [12, 44]. Finally, many studies apply over-sampling before partitioning the data into two mutually exclusive sets in order to make the distribution of classes more uniform [17, 40, 25, 26, 2, 18, 1, 27, 24, 30, 39]. In this work, we focus on reproducing the results from this final category of flaws.

At the time of writing (January 2020), out of the 160 citations to the original paper introducing the TPEHGDB dataset, we have found three machine learning studies that were accessible, tackled preterm birth risk estimation, and, to the best of our knowledge, had a sound evaluation methodology [41, 29, 43]. In the study of Sadi-Ahmed et al. [43], all records taken before 26 weeks of gestation were filtered from the dataset, resulting in a dataset of 138 recordings taken after the 26th week of gestation. All of these signals were processed in order to detect contractions through Auto-Regressive Moving Averages (ARMA). From the detected contractions, features were extracted such as the total number of contractions, the average duration, and the average time between contractions. The study achieved an accuracy score of 0.89 in distinguishing term from preterm pregnancies. However, the clinical value of such a model is hard to assess, since an accuracy of 0.86 can already be achieved by always predicting term birth on this filtered dataset. Janjarasjitt et al. [29] proposed a feature type based on a wavelet decomposition of the signals. The feature was evaluated by tuning a threshold on it in a leave-one-out cross-validation scheme, achieving a sensitivity of 0.6842 and a specificity of 0.7133. While these scores are promising, they might be rather optimistic because the evaluation happened in a leave-one-out scheme. Moreover, the performance of the sample entropy feature, provided along with the original data, closely matches, and sometimes even outperforms, that of the proposed feature. Nevertheless, the wavelet-based feature may be an interesting and complementary addition to the feature set. In the work of Ryu et al. [41], a similar study was performed in which a feature based on Multivariate Empirical Mode Decomposition (MEMD) was proposed.
They evaluated the added value of their feature by subsampling a balanced dataset of 38 term and 38 preterm records from the original dataset, 100 times. They found that the AUC improved from 0.5698 to 0.6049 after adding their feature to the dataset. While this subsampling strategy again avoids the problem of imbalanced data present in the original dataset, it does show an improvement in AUC and thus indicates that adding the MEMD-based feature could be beneficial for predictive performance. Moreover, due to the many repetitions of the experiment, the sample mean better reflects the true mean.

3 Imbalanced learning and the impact of over-sampling prior to data partitioning

As illustrated in Figure 1, the TPEHGDB dataset is highly imbalanced. An imbalance in the number of samples can make the majority class overly represented in the loss function, leading to poor generalization on the minority class. In order to combat this, many authors apply imbalanced learning [23] techniques to improve the classification performance on the TPEHGDB. One of the most popular imbalanced learning approaches is over-sampling, which generates artificial training samples for the minority class by making relatively mild assumptions about the local distribution of the data. The popularity of over-sampling can be illustrated by the numerous variants and successful applications summarized in a recent review [19], and can be attributed to the fact that, being a preprocessing technique, it is agnostic to the model being used. Although over-sampling techniques are popular and easy to use, there are many pitfalls to avoid when applying them. Special care must be taken when over-sampling is used in combination with cross-validation to avoid information leakage. As the sample generation is based on the entire dataset, the generated artificial samples contain information about many (in some cases all) original samples. Thus, carrying out over-sampling before splitting into training and testing sets can leak information from the original testing samples into the artificially generated training samples, leading to overly optimistic validation scores. It is therefore of key importance to carry out over-sampling only after selecting the training and testing sets.
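The leakage mechanism can be sketched with a minimal, hand-rolled SMOTE-style interpolator; this is a simplification of the actual algorithm, and the helper `smote` and the toy Gaussian data are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def smote(X_min, n_new, k=5, rng=rng):
    """Naive SMOTE sketch: each new point is an interpolation between a
    random minority sample and one of its k nearest minority neighbours."""
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nn = np.argsort(d)[1:k + 1]      # k nearest neighbours, excluding itself
        j = rng.choice(nn)
        lam = rng.random()
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

# Imbalanced toy data: 90 negatives, 10 positives in 2-D.
X_neg = rng.normal(0.0, 1.0, size=(90, 2))
X_pos = rng.normal(2.0, 1.0, size=(10, 2))

# Flawed: over-sample BEFORE partitioning. Synthetic positives are
# interpolated from all 10 originals, including those that will later land
# in the test set, so training and test data become correlated.
X_pos_all = np.vstack([X_pos, smote(X_pos, 80)])

# Correct: partition first, then over-sample the training positives only.
X_pos_train, X_pos_test = X_pos[:7], X_pos[7:]
X_pos_train = np.vstack([X_pos_train, smote(X_pos_train, 83)])
```

Because each synthetic point lies on a segment between two original minority samples, any split performed after generation necessarily distributes near-duplicates of the same originals across both sets.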

Further, we emphasize that the correct number of artificial instances to be added depends on the distribution of the data and the subsequent data processing pipeline, and should thus be tuned. For example, the performance of local classifiers (like k-Nearest Neighbors) clearly depends on the density of samples near the decision boundary: if the densities of positive and negative samples differ, the classifier will be biased towards the class with more samples per unit volume [23]. However, the local density of positive and negative samples near the decision boundary is independent of the total number of samples. Even if a dataset is severely imbalanced, kNN can operate well if the densities of points near the decision boundary are approximately equal. Conversely, balancing a dataset without checking the local densities near the decision boundary can lead to highly imbalanced local densities there, thus deteriorating the performance of the classifier.


We highlight the impact of applying over-sampling prior to the data partitioning on an artificially generated dataset. We generated a binary classification problem with 100 samples. Twenty samples were marked positive (red circles), and the others negative (blue squares). The generated dataset is depicted on the left of Figure 2 (step 0). We now compare the effect of over-sampling data prior to partitioning with the effect of over-sampling after partitioning. In the former approach, we over-sample the data prior to partitioning, which introduces data leakage by generating training samples that are highly correlated with original data points that will end up in the testing set (step 1). Moreover, some of the generated artificial samples will be distributed to the testing set as well (step 2). This causes highly optimistic results that merely reflect the model’s capability to memorize samples seen during training, rather than its predictive performance if it were applied in a real-world setting on unseen data. In the latter, we first partition our data into two mutually exclusive sets (step 1). Then, we create artificial samples (red, unfilled circles) that are highly correlated to the training samples of the minority class (step 2) in order to have a similar number of samples for both classes in our training set.

Figure 2: Comparing the impact of applying over-sampling prior to data partitioning to applying over-sampling after data partitioning on an artificial two-dimensional classification problem.

To further exemplify the bias introduced by over-sampling before partitioning the data, we artificially generate 10,000 five-dimensional points, with each variable sampled from a uniform distribution. We then create an imbalanced binary classification problem by randomly labeling 90% of the data as negative and the remaining 10% as positive. As the labels are assigned randomly, we expect the predictive performance of a classifier to be no better than random guessing. Nevertheless, by applying the Synthetic Minority Over-sampling Technique (SMOTE) before data partitioning, a strongly inflated Area Under the receiver operating characteristic Curve (AUC) can be achieved on a ‘held-out’ testing set. This is in contrast to applying no SMOTE, or applying SMOTE after data partitioning, both of which yield an AUC close to 0.5, i.e., random guessing.

4 Results

In this section, we re-evaluate the related studies mentioned above by assessing the predictive power of the features proposed in those works, and by reproducing the results of their methodology. Afterwards, we study the true impact of over-sampling on the models’ predictive performance.

4.1 EHG Signal Preprocessing

We used the EHG signals provided by the original authors [16, 22], which were filtered using a Butterworth digital filter with cutoff frequencies of 0.08 and 4.0 Hz. The first and last 150 seconds (corresponding to 3,000 measurements each) of every recording, which could contain noise caused by the installation or removal of the recording setup, were removed. Two recordings were discarded because they were shorter than 30 minutes. In total, this results in a dataset of 298 recordings, each containing 3 signals of 30,000 measurements.

4.2 Predictive power of features

For reproducing the results from the aforementioned studies, we re-implemented all necessary features, provided they were described in sufficient detail to allow for reconstruction. To evaluate each feature's individual capability to distinguish term from preterm signals, we measure the AUC over 10,000 bootstrap iterations and report the mean and corresponding standard deviation. This is done by varying a threshold over the full range of a feature for each resampling of the entire dataset. We report this for each of the three channels, each corresponding to a bipolar signal, separately, using (i) all samples, (ii) the samples with early recording times (< 26 weeks), and (iii) the samples with late recording times (≥ 26 weeks). As there are a few thousand features per signal, we report only the 10 top-performing features. As informative features can result in either high or low AUC scores (i.e., when the feature values of the positive samples are lower than those of the negative samples), we define top-performing as having the maximal absolute difference between the measured AUC, using all samples, and 0.5, which resembles random guessing. The AUC scores, their standard deviations, and the corresponding channel of these top features can be found in Table 1. For a more detailed explanation of the reported features, and how to extract them, we refer readers to the corresponding related work. A large number of features are extracted from a spectral representation, obtained by applying Empirical Mode Decomposition (EMD) and/or Wavelet Packet Decomposition (WPD). In those cases, we mention how many times the EMD has been applied and give the level (number of A) and the type of the final coefficient (A or D) of the WPD.

From these results, we see that performing EMD and/or WPD often results in more informative features, as many of the top features come from these categories. Moreover, we see that the EHG signal from channel 3 is the most informative one. Nevertheless, the informativeness of the individual features is rather limited; the best result, obtained by extracting the Higuchi Fractal Dimension after performing WPD on the output of the EMD, still has a modest AUC with a high standard deviation.

Feature emd wpd From Ch. All Early Late
Sample Entropy 2 aaa [1] 3
Standard Deviation 2 aaaa [1] 3
Teager-Kaiser Energy 2 aaaa [1] 3
Interquartile Range 9 aaaad [1] 1
Higuchi Fractal Dim. 3 ad [1] 3
Sampen (m=4, s=5) [2] 3
1 Yule-Walker coef 2 a [24] 3
Median Frequency [27] 3
Wavelet Log Var Diff aaad [28] 3
FWL Peak Power 7 [42] 1
Table 1: The ability of the reproduced features to distinguish between term and preterm samples, expressed as the average AUC and its corresponding standard deviation, measured using 10,000 bootstrap iterations. When EMD has been applied, we mention the number of iterations. When WPD has been applied, we provide the level (number of A) and the type of the final coefficient (A or D).
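The single-feature bootstrap evaluation can be sketched as follows, using the fact that `roc_auc_score` applied to raw feature values is equivalent to sweeping a threshold over the feature. The weakly informative synthetic feature is illustrative only, and we use 1,000 iterations here for speed (versus 10,000 in the experiments):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def bootstrap_auc(feature, labels, n_iter=1_000, rng=rng):
    """Mean/std of the single-feature AUC over bootstrap resamples of the
    whole dataset; ranking by a raw feature value is equivalent to
    sweeping a threshold over it, which is what roc_auc_score measures."""
    n, aucs = len(labels), []
    for _ in range(n_iter):
        idx = rng.integers(n, size=n)
        if labels[idx].min() == labels[idx].max():
            continue          # degenerate resample without both classes
        aucs.append(roc_auc_score(labels[idx], feature[idx]))
    return float(np.mean(aucs)), float(np.std(aucs))

# Class balance mimicking TPEHGDB (38 preterm vs. 262 term), with a
# weakly informative synthetic feature (positives shifted slightly).
labels = np.array([1] * 38 + [0] * 262)
feature = rng.normal(loc=0.3 * labels, scale=1.0)
mean_auc, std_auc = bootstrap_auc(feature, labels)

# 'Top-performing' in Table 1 means maximal |AUC - 0.5|, since an AUC
# well below 0.5 is just as informative as one well above it.
score = abs(mean_auc - 0.5)
```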

4.3 The impact of over-sampling

In this section, we re-implement the methodologies of the studies mentioned in Section 2 that report near-perfect results and use an over-sampling algorithm within their pipeline, in order to conduct two experiments. On the one hand, we apply over-sampling before data partitioning to show that only then are we able to come close to the performance mentioned in the corresponding studies. On the other hand, we investigate the added value of tuning the over-sampling algorithm and its hyper-parameters when over-sampling is applied after data partitioning. We further clarify that in this experiment we merely want to reproduce the results of the published methodologies and do not intend to outperform previous solutions, as no feature subset selection or advanced classification techniques have been applied.

In Table 2, we report, for each of these studies, the classification technique, the over-sampling algorithm used, the reported evaluation metric, and the evaluation metric we reproduce when applying over-sampling before data partitioning. A feature set as similar as possible to that of the original study was used. In Table 3, we use these methodologies to compare the AUC in the following cases: (i) when no over-sampling is applied; (ii) when the same over-sampling algorithm as in the corresponding study is used (with default hyper-parameters) and correctly applied after data partitioning; (iii) when the hyper-parameters of that over-sampling algorithm are tuned; and (iv) when both the algorithm itself and its hyper-parameters are tuned. To tune the over-sampling algorithm itself (i.e., to pick the optimal one), 19 different algorithms from the smote-variants library [33] are used. These algorithms were shown to achieve the best predictive performance in an experiment on a large number of datasets with varying properties [32]. All results are generated using 10-fold stratified cross-validation.

Table 2 shows that we are able to closely approximate the reported predictive performances of the related works. Unfortunately, this could only be achieved by performing over-sampling before data partitioning. In Section 3, we elaborated on how this causes leakage and results in overly optimistic performance estimates. More realistic results for these methodologies can be found in the ‘Default’ column of Table 3. As can be seen, the discrepancy between the two is immense. It should also be noted that only AUC scores are reported in Table 3, to allow for comparison between studies. While this makes the comparison with studies that only report accuracy more difficult, we argue that accuracy is not an ideal metric to assess predictive performance: despite being very comprehensible, it can give a skewed view in the context of imbalanced data. As an example, a naive model always predicting ‘term’ on this dataset would already achieve a rather high accuracy.
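The skew of accuracy on this dataset is easy to verify from the class counts alone (262 term vs. 38 preterm deliveries):

```python
from sklearn.metrics import accuracy_score, roc_auc_score

# TPEHGDB class counts: 262 term (label 0) and 38 preterm (label 1).
y_true = [0] * 262 + [1] * 38
y_pred = [0] * 300                 # naive model: always predict 'term'

acc = accuracy_score(y_true, y_pred)        # 262/300 ≈ 0.873
auc = roc_auc_score(y_true, [0.0] * 300)    # 0.5: no ranking power at all
```

The same constant model that scores 87% accuracy has an AUC of exactly 0.5, which is why AUC is the fairer metric here.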

The results in Table 3 show the positive effect of correctly applying over-sampling (i.e., after data partitioning), as the AUC increases by roughly 3 to 10% compared to not using over-sampling. Moreover, we can clearly see the significant positive impact of tuning the hyper-parameters of the over-sampling algorithms, especially the number of generated minority samples: we notice increases of up to roughly 6% in the AUC scores between the ‘Default’ and ‘Tuned’ columns. This is in contrast with the reproduced studies, where the data was merely balanced. Additional improvements can be achieved by also tuning the over-sampling algorithm itself, which is similar to the model selection phase when creating a machine learning pipeline, as the differences between the ‘Tuned’ and ‘Best’ columns show. From these results, we can conclude that using SMOTE, or one of its variants, often results in good performance, but that other over-sampling algorithms, such as NEATER or CBSO, are worth investigating as well. Finally, we would like to highlight that the improvements in AUC achieved by the over-sampling techniques are aligned with the findings of Kovács [32], who evaluated the same over-sampling techniques on 104 imbalanced datasets comparable to TPEHGDB in size: the average improvements in terms of AUC fall in the range of 4%-10%. Therefore, improving the AUC by 40%-50% simply by using over-sampling techniques, as reported in some previous works, is highly unlikely.

Study Classifier Over-sampler Metric Report. Reprod.
[17] SVM SMOTE AUC
[40] AB SMOTE AUC
[26] RF SMOTE AUC
[2] SVM ADASYN AUC
[18] NN SMOTE AUC
[27] QDA ADASYN AUC
[39] RF ADASYN AUC (Early)
[25] RF min/max Accuracy
[1] SVM ADASYN Accuracy
[24] SVM ADASYN Accuracy
[30] SVM ADASYN Accuracy
Table 2: Comparing the AUC results (column ‘Report.’) reported in published works (reference in column ‘Study’) with our own implementation that uses a similar feature set, the same classification technique (‘Classifier’), and the same over-sampling algorithm (‘Over-sampler’), applied (incorrectly) before data partitioning (column ‘Reprod.’). SVM = Support Vector Machine, QDA = Quadratic Discriminant Analysis, RF = Random Forest, AB = AdaBoost, NN = Neural Network.

Study Classif. Over-samp. None Default Tuned Best
[17] SVM SMOTE (NEATER [3])
[40] AB SMOTE (Cluster SMOTE [9])
[25] RF min/max (CBSO [5])
[26] RF SMOTE (SMOTE [8])
[2] SVM ADASYN (NEATER [3])
[18] NN SMOTE (LVQ SMOTE [38])
[1] SVM ADASYN (NEATER [3])
[27] QDA ADASYN (SMOTE Tomek [6])
[24] SVM ADASYN (CBSO [5])
[30] SVM ADASYN (Selected SMOTE [31])
[39] RF ADASYN (Polyn. SMOTE [21])
Table 3: The achieved AUC scores, using the methodologies of related works, but with over-sampling applied after data partitioning. We compare the AUC of (i) when no over-sampling is used (‘None’), (ii) using the same over-sampling technique as the corresponding work with default hyper-parameters (‘Default’), (iii) using the same over-sampling technique with tuned hyper-parameters (‘Tuned’), and (iv) using the best-performing over-sampling algorithm that is optimally tuned (‘Best’). SVM = Support Vector Machine, QDA = Quadratic Discriminant Analysis, RF = Random Forest, AB = AdaBoost, NN = Neural Network.

5 Conclusion and Future Work

In light of a significant body of recent literature reporting near-perfect results on the TPEHGDB dataset, we showed how subtle details in the methodology can introduce label leakage, which results in overly optimistic results. One of these potential pitfalls is over-sampling for data-augmentation purposes performed prior to partitioning the data into training and evaluation sets. In order to investigate the actual impact of over-sampling, we re-implemented the features proposed in recent work and extracted them from the TPEHGDB dataset. We reproduced the results of 11 studies that reported near-perfect results and used an over-sampling algorithm in their pipeline, and demonstrated that we can only approximate their reported results when over-sampling is applied before data partitioning. Next, we assessed what the actual impact of over-sampling would be in their methodologies when applied after data partitioning. Moreover, we investigated the added value of tuning the over-sampling algorithm and its hyper-parameters. Our results indicate that further research is required before preterm birth risk estimation based on EHG signals can become useful in clinical practice.

To support further research in this area, and to stimulate the correct use of over-sampling techniques in general, we make all code used for this study publicly available. We envision several promising research directions. First, the collection of a larger dataset, with more uniform recording times and a substantial fraction of preterm cases, will be required to build clinically useful models. Second, predicting entire curves of the probability that a patient is still pregnant at a given point in time, through the use of survival analysis instead of the single-point predictions of previous studies, may increase the usefulness of a predictive model. Finally, deep learning could be an interesting future research direction, as it is able to automatically learn representations of the EHG signals, as opposed to manual feature engineering.

Conflict of interest statement

The authors declare no competing interests.

Acknowledgements

Gilles Vandewiele (1S31417N) and Isabelle Dehaene (1700520N) are funded by a scholarship of FWO. This study has been performed in the context of the ‘Predictive health care using text analysis on unstructured data project’, funded by imec, and the PRETURN (PREdiction Tool for prematUre laboR and Neonatal outcome) clinical trial (EC/2018/0609) of Ghent University Hospital. All funding bodies played no role in the creation of this study.

Code and data availability

The code is available on GitHub under an open license: https://github.com/GillesVandewiele/EHG-oversampling/. The TPEHGDB dataset is available from PhysioNet: https://physionet.org/content/tpehgdb/1.0.1/.

References

  • [1] U. R. Acharya, V. K. Sudarshan, S. Q. Rong, Z. Tan, C. M. Lim, J. E. Koh, S. Nayak, and S. V. Bhandary (2017) Automated detection of premature delivery using empirical mode and wavelet packet decomposition techniques with uterine electromyogram signals. Computers in biology and medicine 85, pp. 33–42. Cited by: §2, Table 1, Table 2, Table 3.
  • [2] M. U. Ahmed, T. Chanwimalueang, S. Thayyil, and D. P. Mandic (2016) A multivariate multiscale fuzzy entropy algorithm with application to uterine emg complexity analysis. Entropy 19 (1), pp. 2. Cited by: §2, Table 1, Table 2, Table 3.
  • [3] B. A. Almogahed and I. A. Kakadiaris (2014-08) NEATER: filtering of over-sampled data using non-cooperative game theory. In 2014 22nd International Conference on Pattern Recognition, pp. 1371–1376. External Links: Document, ISSN 1051-4651 Cited by: Table 3.
  • [4] S. M. Baghamoradi, M. Naji, and H. Aryadoost (2011) Evaluation of cepstral analysis of ehg signals to prediction of preterm labor. In Biomedical Engineering (ICBME), 2011 18th Iranian Conference of, pp. 81–83. Cited by: §2.
  • [5] S. Barua, Md. M. Islam, and K. Murase (2011) A novel synthetic minority oversampling technique for imbalanced data set learning. In Neural Information Processing, B. Lu, L. Zhang, and J. Kwok (Eds.), Berlin, Heidelberg, pp. 735–744. External Links: ISBN 978-3-642-24958-7 Cited by: Table 3.
  • [6] G. E. A. P. A. Batista, R. C. Prati, and M. C. Monard (2004-06) A study of the behavior of several methods for balancing machine learning training data. SIGKDD Explor. Newsl. 6 (1), pp. 20–29. External Links: ISSN 1931-0145, Link, Document Cited by: Table 3.
  • [7] M. Beiranvand, M. Shahbakhti, M. Eslamizadeh, M. Bavi, and S. Mohammadifar (2017) Investigating wavelet energy vector for pre-term labor detection using ehg signals. In Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), 2017, pp. 269–274. Cited by: §2.
  • [8] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer (2002) SMOTE: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research 16, pp. 321–357. Cited by: Table 3.
  • [9] D. A. Cieslak, N. V. Chawla, and A. Striegel (2006-05) Combating imbalance in network intrusion datasets. In 2006 IEEE International Conference on Granular Computing, Vol. , pp. 732–737. External Links: Document, ISSN Cited by: Table 3.
  • [10] G. A. Davies, C. Maxwell, L. McLeod, R. Gagnon, M. Basso, H. Bos, M. Delisle, D. Farine, L. Hudon, S. Menticoglou, et al. (2010) Obesity in pregnancy. Journal of Obstetrics and Gynaecology Canada 32 (2), pp. 165–173. Cited by: §1.
  • [11] D. A. De Silva, S. Lisonkova, P. von Dadelszen, A. R. Synnes, and L. A. Magee (2017) Timing of delivery in a high-risk obstetric population: a clinical prediction model. BMC pregnancy and childbirth 17 (1), pp. 202. Cited by: §1.
  • [12] D. Despotović, A. Zec, K. Mladenović, N. Radin, and T. L. Turukalo (2018) A machine learning approach for an early prediction of preterm delivery. In 2018 IEEE 16th International Symposium on Intelligent Systems and Informatics (SISY), pp. 000265–000270. Cited by: §2.
  • [13] T. Y. Euliano, M. T. Nguyen, S. Darmanjian, S. P. McGorray, N. Euliano, A. Onkala, and A. R. Gregg (2013) Monitoring uterine activity during labor: a comparison of 3 methods. American journal of obstetrics and gynecology 208 (1), pp. 66–e1. Cited by: §1.
  • [14] T. Y. Euliano, M. T. Nguyen, D. Marossero, and R. K. Edwards (2007) Monitoring contractions in obese parturients: electrohysterography compared with traditional monitoring. Obstetrics & Gynecology 109 (5), pp. 1136–1140. Cited by: §1.
  • [15] D. T. Far, M. Beiranvand, and M. Shahbakhti (2015) Prediction of preterm labor from ehg signals using statistical and non-linear features. In Biomedical Engineering International Conference (BMEiCON), 2015 8th, pp. 1–5. Cited by: §2.
  • [16] G. Fele-Žorž, G. Kavšek, Ž. Novak-Antolič, and F. Jager (2008) A comparison of various linear and non-linear signal processing techniques to separate uterine emg records of term and pre-term delivery groups. Medical & biological engineering & computing 46 (9), pp. 911–922. Cited by: §1, §2, §4.1.
  • [17] P. Fergus, P. Cheung, A. Hussain, D. Al-Jumeily, C. Dobbins, and S. Iram (2013) Prediction of preterm deliveries from ehg signals using machine learning. PloS one 8 (10), pp. e77154. Cited by: §2, Table 2, Table 3.
  • [18] P. Fergus, I. Idowu, A. Hussain, and C. Dobbins (2016) Advanced artificial neural network classification for detecting preterm births using ehg records. Neurocomputing 188, pp. 42–49. Cited by: §2, Table 2, Table 3.
  • [19] A. Fernandez, S. Garcia, F. Herrera, and N. V. Chawla (2018) SMOTE for learning from imbalanced data: progress and challenges, marking the 15-year anniversary. Journal of Artificial Intelligence Research 61, pp. 863–905. Cited by: §3.
  • [20] A. García-Blanco, V. Diago, V. S. De La Cruz, D. Hervás, C. Cháfer-Pericás, and M. Vento (2017) Can stress biomarkers predict preterm birth in women with threatened preterm labor?. Psychoneuroendocrinology 83, pp. 19–24. Cited by: §1.
  • [21] S. Gazzah and N. E. B. Amara (2008-09) New oversampling approaches based on polynomial fitting for imbalanced data sets. In 2008 The Eighth IAPR International Workshop on Document Analysis Systems, pp. 677–684. External Links: Document Cited by: Table 3.
  • [22] A. L. Goldberger, L. A. N. Amaral, L. Glass, J. M. Hausdorff, P. Ch. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C. Peng, and H. E. Stanley (2000-06) PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation 101 (23), pp. e215–e220. External Links: Document, ISSN 0009-7322, Link Cited by: §2, §4.1.
  • [23] H. He and E. A. Garcia (2009) Learning from imbalanced data. IEEE Trans Knowl and Data Eng 21 (9), pp. 1263–1284. Cited by: §1, §3, §3.
  • [24] S. Hoseinzadeh and M. C. Amirani (2018) Use of electro hysterogram (ehg) signal to diagnose preterm birth. In Electrical Engineering (ICEE), Iranian Conference on, pp. 1477–1481. Cited by: §2, Table 1, Table 2, Table 3.
  • [25] A. J. Hussain, P. Fergus, H. Al-Askar, D. Al-Jumeily, and F. Jager (2015) Dynamic neural network architecture inspired by the immune algorithm to predict preterm deliveries in pregnant women. Neurocomputing 151, pp. 963–974. Cited by: §2, Table 2, Table 3.
  • [26] I. O. Idowu, P. Fergus, A. Hussain, C. Dobbins, M. Khalaf, R. V. C. Eslava, and R. Keight (2015) Artificial intelligence for detecting preterm uterine activity in gynecology and obstetric care. In Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, pp. 215–220. Cited by: §2, Table 2, Table 3.
  • [27] F. Jager, S. Libensek, and K. Gersak (2018) Characterization and automatic classification of preterm and term uterine records. bioRxiv, pp. 349266. Cited by: §2, Table 1, Table 2, Table 3.
  • [28] S. Janjarasjitt (2017) Evaluation of performance on preterm birth classification using single wavelet-based features of ehg signals. In Biomedical Engineering International Conference (BMEiCON), 2017 10th, pp. 1–4. Cited by: Table 1.
  • [29] S. Janjarasjitt (2017) Examination of single wavelet-based features of ehg signals for preterm birth classification.. IAENG International Journal of Computer Science 44 (2). Cited by: §2.
  • [30] M. U. Khan, S. Aziz, S. Ibraheem, A. Butt, and H. Shahid (2019) Characterization of term and preterm deliveries using electrohysterograms signatures. In 2019 IEEE 10th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), pp. 0899–0905. Cited by: §2, Table 2, Table 3.
  • [31] F. Koto (2014) SMOTE-out, smote-cosine, and selected-smote: an enhancement strategy to handle imbalance in data level. 2014 International Conference on Advanced Computer Science and Information System, pp. 280–284. Cited by: Table 3.
  • [32] G. Kovács (2019) An empirical comparison and evaluation of minority oversampling techniques on a large number of imbalanced datasets. Applied Soft Computing 83, pp. 105662. External Links: Document Cited by: §4.3, §4.3.
  • [33] G. Kovács (2019) Smote-variants: a python implementation of 85 minority oversampling techniques. Neurocomputing 366, pp. 352–354. Cited by: §4.3.
  • [34] L. Liu, S. Oza, D. Hogan, Y. Chu, J. Perin, J. Zhu, J. E. Lawn, S. Cousens, C. Mathers, and R. E. Black (2016) Global, regional, and national causes of under-5 mortality in 2000–15: an updated systematic analysis with implications for the sustainable development goals. The Lancet 388 (10063), pp. 3027–3035. Cited by: §1.
  • [35] L. J. Meertens, P. van Montfort, H. C. Scheepers, S. M. van Kuijk, R. Aardenburg, J. Langenveld, I. M. van Dooren, I. M. Zwaan, M. E. Spaanderman, and L. J. Smits (2018) Prediction models for the risk of spontaneous preterm birth based on maternal characteristics: a systematic review and independent external validation. Acta obstetricia et gynecologica Scandinavica. Cited by: §1.
  • [36] S. Naeem, A. Ali, and M. Eldosoky (2013) Comparison between using linear and non-linear features to classify uterine electromyography signals of term and preterm deliveries. In Radio Science Conference (NRSC), 2013 30th National, pp. 492–502. Cited by: §2.
  • [37] S. M. Naeem, A. F. Seddik, and M. A. Eldosoky (2014) New technique based on uterine electromyography nonlinearity for preterm delivery detection. Journal of Engineering and Technology Research 6 (7), pp. 107–114. Cited by: §2.
  • [38] M. Nakamura, Y. Kajiwara, A. Otsuka, and H. Kimura (2013) LVQ-smote – learning vector quantization based synthetic minority over–sampling technique for biomedical data. In BioData Mining, Cited by: Table 3.
  • [39] J. Peng, D. Hao, L. Yang, M. Du, X. Song, H. Jiang, Y. Zhang, and D. Zheng (2019) Evaluation of electrohysterogram measured from different gestational weeks for recognizing preterm delivery: a preliminary study using random forest. Biocybernetics and Biomedical Engineering. Cited by: §2, Table 2, Table 3.
  • [40] P. Ren, S. Yao, J. Li, P. A. Valdes-Sosa, and K. M. Kendrick (2015) Improved prediction of preterm delivery using empirical mode decomposition analysis of uterine electromyography signals. PloS one 10 (7), pp. e0132116. Cited by: §2, Table 2, Table 3.
  • [41] J. Ryu and C. Park (2015) Time-frequency analysis of electrohysterogram for classification of term and preterm birth. IEIE Transactions on Smart Processing & Computing 4 (2), pp. 103–109. Cited by: §2.
  • [42] N. Sadi-Ahmed, B. Kacha, H. Taleb, and M. Kedir-Talha (2017) Relevant features selection for automatic prediction of preterm deliveries from pregnancy electrohysterograhic (ehg) records. Journal of medical systems 41 (12), pp. 204. Cited by: §2, Table 1.
  • [43] N. Sadi-Ahmed and M. Kedir-Talha (2015) Contraction extraction from term and preterm electrohyterographic signals. In Electrical Engineering (ICEE), 2015 4th International Conference on, pp. 1–4. Cited by: §2.
  • [44] M. Shahrdad and M. C. Amirani (2018) Detection of preterm labor by partitioning and clustering the ehg signal. Biomedical Signal Processing and Control 45, pp. 109–116. Cited by: §2.
  • [45] S. Sim, H. Ryou, H. Kim, J. Han, and K. Park (2014) Evaluation of electrohysterogram feature extraction to classify the preterm and term delivery groups. In The 15th International Conference on Biomedical Engineering, pp. 675–678. Cited by: §2.
  • [46] K. Subramaniam, N. V. Iqbal, et al. (2018) Classification of fractal features of uterine emg signal for the prediction of preterm birth. Biomedical and Pharmacology Journal 11 (1), pp. 369–374. Cited by: §2.
  • [47] G. Vandewiele, I. Dehaene, O. Janssens, F. Ongenae, F. De Backere, F. De Turck, K. Roelens, S. Van Hoecke, and T. Demeester (2019) A critical look at studies applying over-sampling on the TPEHGDB dataset. In Conference on Artificial Intelligence in Medicine in Europe, pp. 355–364. Cited by: §1.
  • [48] G. Vandewiele, I. Dehaene, O. Janssens, F. Ongenae, F. De Backere, F. De Turck, K. Roelens, S. Van Hoecke, and T. Demeester (2019) Time-to-birth prediction models and the influence of expert opinions. In Conference on Artificial Intelligence in Medicine in Europe, pp. 286–291. Cited by: §1.
  • [49] H. Watson, J. Carter, P. Seed, R. Tribe, and A. Shennan (2017) QUiPP app: a safe alternative to a treat-all strategy for threatened preterm labor. Ultrasound in Obstetrics & Gynecology 50 (3), pp. 342–346. Cited by: §1.