
Tradeoffs in Streaming Binary Classification under Limited Inspection Resources

Institutions are increasingly relying on machine learning models to identify and alert on abnormal events, such as fraud, cyber attacks and system failures. These alerts often need to be manually investigated by specialists. Given the operational cost of manual inspections, the suspicious events are selected by alerting systems with carefully designed thresholds. In this paper, we consider an imbalanced binary classification problem, where events arrive sequentially and only a limited number of suspicious events can be inspected. We model the event arrivals as a non-homogeneous Poisson process, and compare various suspicious event selection methods including those based on static and adaptive thresholds. For each method, we analytically characterize the tradeoff between the minority-class detection rate and the inspection capacity as a function of the data class imbalance and the classifier confidence score densities. We implement the selection methods on a real public fraud detection dataset and compare the empirical results with analytical bounds. Finally, we investigate how class imbalance and the choice of classifier impact the tradeoff.





1. Introduction

Automated information processing and decision-making systems used in finance, security, quality control and medical applications rely on machine learning models to monitor sequentially arriving events for malicious activities or abnormalities. Identifying such events in a timely manner can be crucial in preventing unfavorable outcomes, such as monetary loss due to fraud in retail banking or data breaches due to cyber attacks. Missing an abnormal event can have adverse consequences; however, since such events are sporadic, and investigating them entails operational costs and can lead to processing delays, these systems are restricted in the number of risky events they select for manual inspection.

Many machine learning classification algorithms predict a score (often a probability) for each data sample, representing the algorithm's confidence about its class membership. In a binary classification problem, class labels are derived by converting the predicted scores to binary labels using a (decision) threshold. Adjusting the threshold, especially in settings with severe class imbalance or when the misclassification of one class outweighs that of the other, can profoundly impact classifier performance (e.g., the True Positive (TP)-False Positive (FP) tradeoff) (Provost, 2000). The threshold is generally tuned using a grid search across a range of thresholds, or it is computed from the receiver operating characteristic (ROC) curve, or from the precision-recall curve in highly skewed datasets (He and Garcia, 2009).
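As a concrete illustration of threshold tuning, the sketch below grid-searches a decision threshold that maximizes F1 on held-out scores. The synthetic, class-imbalanced score data and the function name `pick_threshold` are hypothetical, not taken from the paper's dataset.

```python
import numpy as np

def pick_threshold(scores, labels, grid_size=101):
    """Grid-search a decision threshold that maximizes F1 on validation data.

    `scores` are classifier confidence scores in [0, 1]; `labels` are 0/1.
    """
    best_t, best_f1 = 0.5, -1.0
    for t in np.linspace(0.0, 1.0, grid_size):
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        denom = 2 * tp + fp + fn
        f1 = 2 * tp / denom if denom > 0 else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1

# toy example: an imbalanced stream whose score densities overlap imperfectly
rng = np.random.default_rng(0)
labels = (rng.random(10_000) < 0.05).astype(int)            # 5% minority class
scores = np.clip(rng.normal(0.2 + 0.5 * labels, 0.15), 0, 1)
t_star, f1 = pick_threshold(scores, labels)
```

In practice the same grid search can be run over any metric (balanced accuracy, cost-weighted error) by swapping the scoring expression.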

Given the imbalanced nature of data in this domain, which makes it difficult to learn classifiers that efficiently discriminate between the minority and majority classes, and the limited resources available for inspecting time-sensitive risky events, we are interested in understanding the relationship between the rate of detection from the minority class (i.e., the fraction of samples from the minority class selected for inspection) and the inspection budget. Specifically, we focus on applications that involve real-time processing and decision-making, where an abnormal event can only be inspected at the time of arrival, and we investigate how different selection policies based on classifier predictions operate in terms of the limited inspection budget rather than the decision threshold.

Point processes, such as the Poisson process, have been widely used for modeling event arrivals at random times in various applications, such as arrivals in call centers (Kim and Whitt, 2014a), system failures (Rausand and Hoyland, 2003), network traffic models (Chandrasekaran, 2009), and financial modeling (Giesecke, 2004; Kou, 2002). We note that Poisson processes are not suitable in settings with scheduled arrivals (e.g., doctor appointments), intentionally separated events (e.g., plane landings), or events that arrive in groups (e.g., at a restaurant, where group members are not independent from one another). One can perform statistical tests on the data to confirm whether the arrivals can be modeled as a non-homogeneous Poisson process, as described in (Kim and Whitt, 2014b). In the setting considered in this work, events (e.g., transactions) arrive and are processed independently from one another, and therefore their arrival can be modeled according to a non-homogeneous Poisson process. The analysis can easily be extended to the more general renewal process (Daley and Vere-Jones, 2007).
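One common way to check the NHPP assumption, in the spirit of the tests discussed by Kim and Whitt (2014b), is time rescaling: under a NHPP with cumulative rate Λ, the normalized values Λ(t_i)/Λ(T) of the arrival times behave like sorted Uniform(0,1) draws, which can be checked with a Kolmogorov-Smirnov statistic. A minimal sketch, with a synthetic homogeneous rate standing in for an estimated one:

```python
import numpy as np

def ks_uniformity_stat(arrival_times, cum_rate, horizon):
    """Time-rescaling check of a NHPP hypothesis: under a NHPP with cumulative
    rate function `cum_rate`, the values cum_rate(t_i) / cum_rate(horizon)
    behave like sorted Uniform(0, 1) draws. Returns the one-sample
    Kolmogorov-Smirnov statistic against Uniform(0, 1)."""
    u = np.sort(cum_rate(np.asarray(arrival_times)) / cum_rate(horizon))
    n = len(u)
    grid = np.arange(1, n + 1) / n
    return max(np.max(grid - u), np.max(u - (grid - 1.0 / n)))

# homogeneous example: rate 100 on [0, 1], so Lambda(t) = 100 t
rng = np.random.default_rng(5)
n = rng.poisson(100)
times = np.sort(rng.random(n))      # conditional-uniformity property of a PP
stat = ks_uniformity_stat(times, lambda t: 100.0 * t, horizon=1.0)
# at the 5% level, one would reject the NHPP hypothesis if stat > 1.36 / sqrt(n)
```

The 1.36/sqrt(n) cutoff is the standard asymptotic 5% critical value for the one-sample KS test.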


We consider an imbalanced binary classification problem where the goal is to select a limited number of sequentially arriving data samples that are most likely to belong to the minority class. We present the problem in the context of fraud detection in financial transactions, but our results apply to the general imbalanced binary classification problem. Our contributions are as follows:


  • We break the problem into two tasks: learning a classifier from data, and using the classifier predictions to sample sequential arrivals for inspection. We focus on the second task, and study the tradeoff between the minority class detection rate and the inspection budget, for a given learned classifier. This tradeoff can be used to determine how many samples need to be inspected to achieve a certain detection rate from the minority class.

  • We assume that events arrive sequentially according to a non-homogeneous Poisson process (NHPP), and focus on four selection (decision) strategies: sampling based on static and dynamic thresholds, random sampling, and sampling in batches. For each method, we analytically characterize the minority-class detection rate-inspection capacity tradeoff.

  • For the case of sampling with respect to a static threshold, we determine the optimal threshold value that maximizes the minority-class detection rate for a given inspection capacity.

  • We use a publicly available fraud detection dataset to learn a classifier and estimate the time-varying arrival rate function of the NHPP, and compare the empirical results from each sampling technique with our analytical bounds. We show that using dynamic thresholds operates very closely to the upper performance limit resulting from sampling in batches, especially for very small inspection capacities.

  • For this dataset, we investigate how class imbalance and the predictive power of the classifier affect the tradeoff, and compare the empirical results against an upper bound on the end-to-end problem when considering both tasks of learning and operational decisions jointly.

2. Related Work

Most work on detecting samples from the minority class focuses on learning optimal classifiers with respect to a given performance metric, such as the F1-score or the area under the ROC curve, and then satisfies the inspection constraints by adjusting the decision threshold. The work in (Koyejo et al., 2014) studies the optimal fixed threshold selection problem for binary classification with respect to various performance metrics. In (Shen and Kurshan, 2020), the authors consider a dynamic environment and model the threshold tuning as a sequential decision-making problem. They use reinforcement learning to adaptively adjust the thresholds by maximizing a reward in terms of the net monetary value of missed and detected frauds when restricted to a fixed inspection capacity. The work in (Li and Vorobeychik, 2015) studies the adversarial binary classification problem with operational constraints (e.g., inspection budget), where an intelligent adversary attempts to evade the decision policies. By modeling the problem as a Stackelberg game, they determine the optimal randomized operational policy that abides by the constraints. In other related work with dynamic environments, (Houssou et al., 2019) considers a fraud detection setting where rare fraudulent events arrive from a Poisson process with a parametric arrival function estimated from the data, and the goal is to predict the arrival of a new fraudulent event.

More recently, (Dervovic et al., 2021) adopted the sequential assignment algorithm of (Albright, 1974) in the fraud detection setting such that the overall value of detected fraudulent transactions is maximized. In this paper, we use this algorithm to find adaptive thresholds for transactions arriving at random times according to a Poisson process. Given the sequential nature of arrivals in information processing and decision-making applications, this algorithm allows us to directly account for the limited inspection capacity when deciding to inspect transactions based on the output of a machine learning model.

3. Problem Formulation

Consider a transaction (e.g., payment, credit card purchase) fraud detection setting, where transactions arrive sequentially at random times over a finite time horizon [0, T] according to a Non-Homogeneous Poisson Process (NHPP) with a continuous arrival rate (intensity) function λ(t), where t denotes the time of arrival. To each transaction i we associate a triplet (X_i, Y_i, T_i), where X_i is a random variable representing the observed features of transaction i, Y_i ∈ {0, 1} indicates whether the transaction is fraudulent, and T_i denotes its random arrival time. We assume that transactions are independent from one another and that there is significant class imbalance such that P(Y_i = 1) ≪ P(Y_i = 0), i.e., there are considerably fewer fraudulent transactions than non-fraudulent ones.

There is a binary classifier h that assigns a random score S = h(X) ∈ [0, 1] to a transaction with feature vector X, which represents the classifier's confidence that the transaction is fraudulent. We have an inspector (decision-maker) that can investigate a transaction and determine whether it is fraudulent without error; however, the inspector is only able to investigate a limited number of transactions during [0, T]. Therefore, the inspector needs to decide which transactions should be selected for inspection in order to detect as many fraudulent transactions as possible given its limited inspection resources. Note that since transactions i and j are independent, for any classifier h, the corresponding scores S_i and S_j are also independent.

Transaction Arrival Process:

We assume transactions arrive according to a NHPP N(t) with rate function λ(t), and cumulative rate function Λ(t) = ∫_0^t λ(s) ds. The number of transactions in an interval (t_1, t_2] is a random variable with Poisson distribution parametrized by Λ(t_2) − Λ(t_1), and the expected number of arrivals in [0, T] is Λ(T). (A NHPP is denoted by N(t), corresponding to the number of arrivals in [0, t]; we use N(T) to denote the arrivals in [0, T].) The rate function and cumulative rate function can be estimated from several observed realizations of the process over [0, T], using non-parametric estimators as proposed in (Lewis and Shedler, 1976; Arkin and Leemis, 2000), or through parametric methods as in (Lee et al., 1991; Kuhl and Wilson, 2000). In our experiments in Sec. 5, we use the heuristic estimator proposed in (Lewis and Shedler, 1976).
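Given a rate function, realizations of a NHPP can be simulated by Lewis-Shedler thinning: draw candidate arrivals from a homogeneous process whose rate dominates λ(t), and keep each candidate at time t with probability λ(t) divided by the dominating rate. A minimal sketch with a hypothetical sinusoidal rate:

```python
import numpy as np

def simulate_nhpp(rate_fn, rate_max, horizon, rng):
    """Simulate arrival times of a NHPP on [0, horizon] by Lewis-Shedler
    thinning. `rate_max` must dominate rate_fn(t) on the whole interval."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)    # candidate from rate_max process
        if t > horizon:
            return np.array(times)
        if rng.random() < rate_fn(t) / rate_max:
            times.append(t)                      # accept with prob rate(t)/rate_max

# example: a rate that peaks mid-interval (hypothetical, not the paper's estimate)
rng = np.random.default_rng(1)
rate = lambda t: 50 + 40 * np.sin(np.pi * t)     # arrivals per unit time on [0, 1]
arrivals = simulate_nhpp(rate, rate_max=90.0, horizon=1.0, rng=rng)
```

Simulated realizations like this are what the paper's "simulated data" experiments rely on, given estimated rate and score distributions.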

3.1. Objective

The inspector has limited resources, and can only select and investigate a fraction of the incoming transactions in [0, T], which we refer to as the inspection capacity. We assume that for a given capacity c ∈ (0, 1], it selects a fraction of the expected number of arrivals equal to cΛ(T) transactions. Our goal is to evaluate how well a given sampling method is able to choose fraudulent transactions based on the scores of a binary classifier h when there is limited inspection capacity. Specifically, we define the fraud detection rate as a function of c, denoted by R(c), to be the expected fraction of true frauds selected for inspection, given as

(1)   R(c) = E[ Σ_{i ∈ S_c} 1{Y_i = 1} ] / E[ Σ_{i=1}^{N(T)} 1{Y_i = 1} ],

where S_c is the set of sequentially arriving transaction indices selected for inspection using classifier h, which depends on the arrival process N(t). Note that R(c) in (1) is closely related to the true positive rate (TPR) of the classifier, with the slight difference that it is defined for the setting with streaming data constrained by operational resources, and with respect to the capacity c.

3.2. Inspection Sampling Methods

We briefly describe the decision-making methods considered for selecting transactions for inspection.

Static Thresholds:

The inspector determines a fixed threshold δ, and will only inspect an arriving transaction if its score satisfies S ≥ δ and if it has not exhausted its inspection capacity. If a transaction is not selected for inspection at the time of arrival, it will not be inspected at a later time. The inspector determines the threshold while accounting for the arrival of transactions and with respect to the classifier performance. Note that if the threshold is set too high, the inspector may not select enough transactions for inspection, and if it is set too low, the inspector may select more non-fraudulent transactions initially, using up its inspection capacity too early, and will therefore not be able to inspect transactions with high scores arriving later.
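A minimal sketch of this policy on a stream of scores; the toy numbers are chosen to illustrate the too-low-threshold pitfall, where the budget is spent before the highest-scoring arrivals appear:

```python
import numpy as np

def static_threshold_policy(scores, threshold, budget):
    """Scan scores in arrival order; inspect a transaction iff its score
    reaches `threshold` and inspection budget remains. Returns the indices
    of inspected transactions."""
    inspected = []
    for i, s in enumerate(scores):
        if len(inspected) >= budget:
            break
        if s >= threshold:
            inspected.append(i)
    return inspected

# toy stream: a low threshold exhausts the budget before the best scores arrive
scores = np.array([0.9, 0.2, 0.8, 0.95, 0.1, 0.99])
assert static_threshold_policy(scores, threshold=0.5, budget=2) == [0, 2]
assert static_threshold_policy(scores, threshold=0.9, budget=2) == [0, 3]
```

With threshold 0.5 the two inspections are spent on scores 0.9 and 0.8, missing the later 0.95 and 0.99; the higher threshold keeps one inspection for a stronger candidate.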

Dynamic Thresholds:

In this case, the inspector determines a time-dependent threshold δ(t), and inspects a transaction arriving at time t if its associated score satisfies S ≥ δ(t). As with the static threshold, this time-varying threshold is determined such that the inspector selects the transactions that are most likely to be fraudulent given the classifier performance and the arrival process.

Random Sampling:

The inspector disregards the confidence score assigned by the classifier, and selects transactions uniformly at random. This method is equivalent to a worst-case scenario in which a no-skill classifier assigns the same score to all transactions.

Batch Processing:

Assume that the inspector can process and investigate transactions in batches, so there is no need to select transactions instantaneously at the time of arrival. Then, the inspector selects the cΛ(T) transactions with the highest scores at the end of the horizon T. Batch processing is not a practical method for the setup considered here, as there is a strict requirement for timely decision-making. It does, however, provide an upper performance limit for any realistic method using a classifier for real-time decision-making, and is therefore included in our analysis.
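The two baseline selectors can be sketched in a few lines; `budget` plays the role of the number of allowed inspections, and the score array is a toy example:

```python
import numpy as np

def batch_select(scores, budget):
    """Oracle baseline: after observing the whole interval, inspect the
    `budget` transactions with the highest scores (no timeliness constraint)."""
    return np.argsort(scores)[::-1][:budget]

def random_select(scores, budget, rng):
    """No-skill baseline: inspect a uniformly random subset of transactions."""
    return rng.choice(len(scores), size=budget, replace=False)

scores = np.array([0.1, 0.7, 0.3, 0.9, 0.5])
idx = random_select(scores, budget=2, rng=np.random.default_rng(0))
assert set(batch_select(scores, budget=2)) == {3, 1}
```

Batch selection needs the full score vector, which is exactly why it is only an upper bound for streaming policies that must commit at arrival time.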

The following section describes each sampling method in detail and presents its corresponding fraud detection rate.

4. Fraud Detection Rate-Inspection Capacity Tradeoff

In this section, we compute the expected fraction of frauds that are selected, and therefore detected, with each of the methods described in Sec. 3.2. We denote the probability density function (PDF) of the classifier score assigned to a transaction with feature vector X and label Y by f_Y. Let f_0 and f_1 denote the PDFs of the scores assigned to a non-fraudulent and a fraudulent transaction, respectively (we do not explicitly show the random variable as a subscript of f_0 and f_1 hereafter). Accordingly, F_0 and F_1 denote the cumulative distribution functions (CDFs) of classifier scores assigned to non-fraudulent and fraudulent transactions.

Based on Proposition 1 given in Appendix A, the arrival process N(t) with rate λ(t) can be split into two independent sub-processes as follows:

  • Process N_0(t) with rate (1 − π)λ(t), where π = P(Y = 1), represents the arrival of non-fraudulent transactions, and the random number of arrivals in [0, T] is N_0(T). We denote the scores corresponding to transactions from N_0(t) (ordered in time) by S^0_1, S^0_2, …, with arrival times T^0_1, T^0_2, ….

  • Process N_1(t) with rate πλ(t) represents the arrival of fraudulent transactions, and the random number of arrivals in [0, T] is N_1(T). We denote the scores corresponding to transactions arriving from N_1(t) (ordered in time) by S^1_1, S^1_2, …, with arrival times T^1_1, T^1_2, ….

4.1. Static Thresholds

The inspector selects transactions as they arrive if the classifier score exceeds a predetermined threshold δ, and if doing so does not violate the capacity constraint.

Theorem 1.

The fraud detection rate with respect to a static threshold δ, denoted by R_ST(δ, c), admits a closed-form expression in terms of the score CDFs F_0 and F_1, the capacity c, and the arrival process; the full expression is given in Appendix B.
The proof is given in Appendix B. ∎

Theorem 2 provides the threshold that maximizes the detection rate.

Theorem 2.

Given an inspection capacity c, the optimal static threshold value that maximizes the detection rate is the δ* for which the expected fraction of scores exceeding δ* equals the capacity, i.e.,

(1 − π)(1 − F_0(δ*)) + π(1 − F_1(δ*)) = c,

where π = P(Y = 1) denotes the fraction of fraudulent transactions.
The proof is given in Appendix C. ∎

Note that δ* is the threshold that satisfies the inspection capacity such that there will only be cΛ(T) transactions (on average) with scores exceeding δ* in [0, T], and it is independent of the rate function λ(t). For non-streaming settings without inspection restrictions, statistics such as Youden's J statistic or the Brier score are used to determine the optimal threshold.
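Empirically, the Theorem 2 threshold can be estimated as the (1 − c)-quantile of the pooled validation scores, so that a fraction c of scores exceeds it. In this sketch the Beta-distributed scores are a hypothetical stand-in for a real classifier's score density:

```python
import numpy as np

def optimal_static_threshold(val_scores, capacity):
    """Empirical analogue of the Theorem 2 threshold: pick delta* so that a
    fraction `capacity` of the pooled scores (frauds and non-frauds together)
    exceeds it, i.e. the (1 - capacity)-quantile of the score distribution."""
    return np.quantile(val_scores, 1.0 - capacity)

rng = np.random.default_rng(2)
val_scores = rng.beta(2, 8, size=100_000)        # hypothetical score density
delta = optimal_static_threshold(val_scores, capacity=0.05)
frac_above = np.mean(val_scores > delta)         # close to 0.05 by construction
```

Because the quantile is taken over the marginal score distribution, no knowledge of the per-class densities is needed at deployment time, matching the theorem's independence from λ(t).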

4.2. Dynamic Thresholds

In the case of adaptive thresholds, transactions are sequentially selected for inspection according to a time-dependent threshold, computed with respect to the arrival process. In this work, we adopt the strategy proposed in (Albright, 1974), originally designed for assigning jobs, each with an associated random value and arriving at random times, to a limited number of operatives with non-identical productivity. An optimal sequential assignment algorithm is proposed in (Albright, 1974) that maximizes the total expected reward, defined as the expected operative productivity. This algorithm was recently applied to a fraud detection problem in (Dervovic et al., 2021), where each transaction is a job arriving according to a non-homogeneous Poisson process, all operatives have identical productivity, and the value of a job is defined as a function of the transaction monetary amount and the classifier confidence score. Here, we define the job value as the classifier confidence score, but the results can be easily extended to the setting in (Dervovic et al., 2021).

The algorithm operates as follows. Let y_k(t) denote a time-dependent threshold, referred to as a critical curve, applicable when the inspector can select k more transactions. If a transaction with score s arrives at time t and the inspector has k inspections left in its budget, it selects the transaction if and only if s ≥ y_k(t). The optimal critical curves are derived from a set of differential equations given in Theorem 3.

Theorem 3.

(Albright, 1974, Theorem 2) For a total of m inspections, the optimal critical curves y_1(t), …, y_m(t), i.e., those that select the transactions with the highest expected sum of scores, satisfy a system of coupled ordinary differential equations driven by the arrival rate λ(t) and the score distribution, with boundary conditions fixed at the end of the horizon.

In our setting, the adaptive thresholds maximize the expected sum of scores, which in turn selects the events with the highest scores, i.e., the most suspicious ones.
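To make the dynamic-thresholds strategy concrete, the sketch below numerically integrates one standard form of the critical-curve equations for sequential assignment with identical operatives and Poisson arrivals, dy_k/dt = -λ(t) ∫ from y_k(t) to y_{k-1}(t) of (1 - F(u)) du, with y_k(T) = 0 and y_0 treated as +∞. This specific form is our assumption for illustration and should be checked against (Albright, 1974, Theorem 2) before use:

```python
import numpy as np

def critical_curves(rate_fn, score_cdf, k_max, horizon, n_steps=1000):
    """Integrate (backwards from the horizon, explicit Euler) an assumed form
    of the critical-curve ODEs:
        dy_k/dt = -rate(t) * integral_{y_k(t)}^{y_{k-1}(t)} (1 - F(u)) du,
    with y_k(horizon) = 0 and y_0 = +inf (scores live in [0, 1], so the k = 1
    upper limit is 1). Returns y[k, j] = threshold with k inspections left."""
    ts = np.linspace(0.0, horizon, n_steps + 1)
    dt = ts[1] - ts[0]
    us = np.linspace(0.0, 1.0, 501)
    tail = 1.0 - score_cdf(us)                   # 1 - F(u) on a fixed grid

    def tail_integral(lo, hi):
        mask = (us >= lo) & (us <= hi)
        if mask.sum() < 2:
            return 0.0
        t_m, u_m = tail[mask], us[mask]
        return float(np.sum(0.5 * (t_m[1:] + t_m[:-1]) * np.diff(u_m)))

    y = np.zeros((k_max + 1, n_steps + 1))       # row 0 unused (y_0 = +inf)
    for k in range(1, k_max + 1):
        for j in range(n_steps - 1, -1, -1):     # integrate backwards in time
            upper = 1.0 if k == 1 else y[k - 1, j + 1]
            slope = -rate_fn(ts[j + 1]) * tail_integral(y[k, j + 1], upper)
            y[k, j] = min(y[k, j + 1] - slope * dt, 1.0)
    return ts, y

# constant rate 20 on [0, 1] and uniform scores (both hypothetical)
ts, y = critical_curves(lambda t: 20.0, lambda u: u, k_max=3, horizon=1.0)
```

The curves fall to zero at the horizon (inspect anything rather than waste budget) and are ordered y_1(t) > y_2(t) > y_3(t): the more budget remains, the more permissive the threshold.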

Theorem 4.

The fraud detection rate when selecting transactions as described in Sec. 4.2, denoted by R_DT(c), is given by an expression in terms of the thresholds derived from Theorem 3 and the score distributions F_0 and F_1; the full expression is derived in Appendix D.


The proof is given in Appendix D. ∎

4.3. Random Sampling

The inspector selects a random subset of transactions, irrespective of the classifier scores, and as stated in the following theorem, the detection rate is a linear function of the inspection capacity.

Theorem 5.

The fraud detection rate using random sampling given an inspection capacity of c, denoted by R_RS(c), is

R_RS(c) = c.
The proof is given in Appendix E. ∎

4.4. Batch Processing

In this case, the inspector selects the cΛ(T) transactions that have the highest scores among all transactions arriving in [0, T]. Therefore, a fraudulent transaction is selected, irrespective of its arrival time, if it is among the cΛ(T) transactions with the largest scores.

Theorem 6.

The fraud detection rate with batch processing, denoted by R_BP(c), is given by an expression involving the distribution of the order statistics of the scores; the required order-statistic PDF is derived in Lemma 2 in Appendix F.


The proof is given in Appendix F. ∎

For general rate functions λ(t) and score densities f_0 and f_1, the detection rates in Theorems 4 and 6 may not exist in closed form. We approximate them through Monte Carlo experiments in Sec. 5.1.
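A Monte Carlo approximation of, e.g., the batch-processing detection rate can be sketched as follows; the Beta score densities and all parameter values here are hypothetical stand-ins, not the paper's estimates:

```python
import numpy as np

def mc_detection_rate_batch(n_runs, lam, pi, capacity, rng):
    """Monte Carlo estimate of the batch-processing detection rate: simulate
    Poisson(lam) arrivals per interval with fraud probability pi, draw scores
    from hypothetical per-class Beta densities, inspect the `capacity * lam`
    highest scores, and average the fraction of frauds caught."""
    budget = max(1, int(round(capacity * lam)))
    caught, total = 0, 0
    for _ in range(n_runs):
        n = rng.poisson(lam)
        y = (rng.random(n) < pi).astype(int)
        s = np.where(y == 1, rng.beta(5, 2, n), rng.beta(2, 5, n))
        top = np.argsort(s)[::-1][:budget]       # batch selection of top scores
        caught += int(y[top].sum())
        total += int(y.sum())
    return caught / max(total, 1)

rng = np.random.default_rng(3)
rate = mc_detection_rate_batch(n_runs=200, lam=1000, pi=0.02, capacity=0.05, rng=rng)
```

The same loop, with the selection step swapped for a thresholded streaming rule, estimates the Theorem 4 tradeoff as well.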


Figure 1. IEEE-fraud- dataset: (a) ECDF of classifier scores assigned to non-fraudulent (left) and fraudulent (right) transactions. (b) Estimated time-varying arrival rate (left) and dynamic thresholds derived from Theorem 3 (right).

5. Experiments

In this section, we use a public dataset to compute the detection rate-inspection capacity tradeoff for each sampling method. We compare the analytical bounds on the tradeoff, derived in Sec. 4, with the average tradeoff obtained empirically using each sampling method. We use the publicly available IEEE-CIS Fraud Detection (IEEE-fraud) dataset, provided by Vesta Corporation, containing over 1 million real-world e-commerce transactions, comprising more than 400 feature variables, time stamps (in seconds) and fraud labels. The dataset contains 183 days of transactions, with a small fraction of samples labeled fraudulent. In order to demonstrate how the class imbalance impacts the detection rate, we modify the imbalance by up-sampling (SMOTE (Chawla et al., 2002)) and down-sampling (uniformly at random) the minority class. We refer to the dataset with a given fraction of frauds as IEEE-fraud-.

Arrival Rate, Classifier Score Densities and Dynamic Thresholds:

In our experiments, we consider each interval to be one day (86,400 seconds), and use a random three-way split of the days as follows: We use the first half of the data to train three classifiers with different predictive powers: gradient boosted decision trees (GBDT), random forests (RF) and logistic regression (LR). The training results on all three datasets, using the AUC of the ROC curve as the evaluation metric, are reported in Table 1. We use the second part of the data to estimate the empirical cumulative distribution functions (ECDFs) of the classifier scores for each class, shown in Fig. 1(a) for the IEEE-fraud- dataset. As expected, a more powerful classifier assigns higher scores to fraudulent transactions with much higher probability. We use the method in (Lewis and Shedler, 1976) to estimate the rate function λ(t), shown in Fig. 1(b), which is used to compute the time-dependent thresholds in the Dynamic Thresholds method. Finally, we use the last part of the data for the empirical sampling experiments discussed in the following. A more detailed description of the experiment setup is provided in Appendix H. Additionally, in order to investigate how estimation errors or model assumptions (e.g., independence of classifier scores and arrival times) affect the empirical results, we simulate data based on our estimated rate function and score distributions, which we refer to as simulated data.
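The per-class score ECDFs used above can be formed directly from validation scores; a small sketch with hypothetical Beta-distributed scores standing in for real classifier outputs:

```python
import numpy as np

def ecdf(samples):
    """Return the empirical CDF of `samples` as a callable step function."""
    x = np.sort(np.asarray(samples))
    n = len(x)
    return lambda q: np.searchsorted(x, q, side="right") / n

# hypothetical validation scores for each class
rng = np.random.default_rng(4)
f0 = ecdf(rng.beta(2, 6, 5000))    # scores of non-fraudulent transactions
f1 = ecdf(rng.beta(6, 2, 5000))    # scores of fraudulent transactions
```

A stronger classifier pushes f1 down relative to f0 at every score, which is the property the subsequent tradeoff computations exploit.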

Table 1. Classifiers learned on the IEEE-fraud datasets, reporting AUC on the validation and test data.


Figure 2. Fraud detection rate-inspection capacity tradeoff for different sampling methods: (a) Batch Processing, (b) Dynamic Thresholds, (c) Static Thresholds, and (d) Random Sampling. The empirical results are within a small margin from the analytical bounds.

5.1. Analytical Results

Fig. 2(a)-(d) displays the results on the IEEE-fraud- dataset with the GBDT classifier for batch processing (BP), dynamic thresholds (DT), static thresholds (ST), and random sampling (RS), which corresponds to a no-skill classifier, for capacities c equivalent to inspecting a small fraction of the expected arrivals each day. The dashed line delineates a (not necessarily achievable) upper bound on the entire tradeoff when the learning of the classifier is also taken into account, derived in Appendix G. For each method, the expected tradeoff derived analytically is very close to the experimental results on the test data and the simulated data. In particular, the curves match almost perfectly for batch processing, given that it is independent of the arrival process. With dynamic thresholds, the difference between the analytical and empirical curves is much smaller than with static thresholds, since a mismatch between the estimated arrival rate and the actual arrivals of fraudulent transactions affects the analytical bounds more in the case of fixed thresholds. Finally, as stated in Theorem 5, for random sampling, the detection rate equals the inspection capacity c. The experiments on a real dataset show that the NHPP formulation of arrivals can be used for practical applications when inspecting streaming data.

Figure 3. IEEE-fraud- dataset. For the learned GBDT classifier, the dynamic thresholds method operates very closely to batch processing, and with the optimal fixed threshold defined in Theorem 2, the static thresholds method is within a small margin. Both methods approach the upper bound for very small inspection capacities.

5.2. Empirical Results

Fig. 3 compares the tradeoff for different sampling methods on the IEEE-fraud- dataset with the GBDT classifier. As expected, given the sequential nature of arrivals, using adaptive thresholds to sample suspicious transactions outperforms using a static threshold. In fact, the dynamic thresholds method operates very closely (within a small margin) to batch processing, which is itself not a practical method for timely decision-making and serves as an upper performance limit for all threshold-based strategies that sample transactions in real-time using a classifier. With the optimal threshold derived in Theorem 2, the static threshold curve is within a small margin of the batch processing curve, and performs competitively with the dynamic thresholds method, especially for small c.

We investigate how other aspects of the imbalanced binary classification problem with limited resources, such as the minority-majority imbalance and the initial phase of learning a predictive classifier, impact the tradeoff. While we illustrate the results for dynamic thresholds, the tradeoffs for the other methods show similar trends and are provided in Appendix H.

Class Imbalance:

Fig. 4(a) shows the tradeoff on the three datasets when using the GBDT classifier to predict scores. Each curve is also compared with an upper bound (shown in dashed lines) when the end-to-end system is considered (see Appendix G). As observed, with adaptive thresholds, the tradeoff matches the upper bound for very small inspection capacities. As the capacity increases, the detection rate diverges more from the upper bound in more imbalanced datasets, since it is much harder to learn powerful classifiers that discriminate between the minority and majority class samples.

Learning Phase:

Fig. 4(b) shows the tradeoff on the IEEE-fraud- dataset for three classifiers with different predictive power. As expected, for the classifier with a higher AUC, which is better able to distinguish a fraudulent sample from a non-fraudulent one, the tradeoff curve is strictly superior across all inspection capacities. This is especially pronounced for very small capacities, evident from the steep slope of the tradeoff corresponding to GBDT, which is tangent to the upper bound. A classifier with a larger AUC will, with higher probability, predict a larger score for a sample from the minority class than for one from the majority class. Therefore, the most suspicious samples can be prioritized for selection across the entire interval without using up the capacity too early.


Figure 4. Tradeoff with dynamic thresholds. Impact of class imbalance (a): For very small capacities, the results are very close to the upper bound, and as the capacity increases, class imbalance impacts the detection rate more severely. Learning phase (b): a classifier with inferior predictive power (low AUC) selects non-fraudulent transactions early-on, and operates further from the upper bound.

6. Conclusions

In this paper, we study the tradeoffs in real-time identification of suspicious events when there are operational capacity restrictions. By separating the learning phase from the operational decision phase, we characterize the minority-class detection rate directly as a function of the inspection resources and the learned classifier predictions. We formulate the streaming arrival of events as a non-homogeneous Poisson process, and analytically derive this tradeoff for static and adaptive threshold-based decision-making strategies. Our experiments on a public fraud detection dataset show that such a formulation can be used in practical applications with limited resources for inspecting streaming data, and that using adaptive thresholds operates very close to the upper performance limit resulting from batch processing, especially for very small inspection resources. Future work includes studying the end-to-end tradeoff while also considering the learning phase, and extensions to settings where the misclassification costs of minority-class samples are non-identical.


This paper was prepared for informational purposes by the Artificial Intelligence Research group of JPMorgan Chase & Co. and its affiliates (“JP Morgan”), and is not a product of the Research Department of JP Morgan. JP Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful.


  • Albright [1974] S Christian Albright. Optimal sequential assignments with random arrival times. Management Science, 21(1):60–67, 1974.
  • Arkin and Leemis [2000] Bradford L Arkin and Lawrence M Leemis. Nonparametric estimation of the cumulative intensity function for a nonhomogeneous poisson process from overlapping realizations. Management Science, 46(7):989–998, 2000.
  • Casella and Berger [2002] George Casella and Roger L Berger. Statistical inference, volume 2. Duxbury Pacific Grove, CA, 2002.
  • Chandrasekaran [2009] Balakrishnan Chandrasekaran. Survey of network traffic models. Waschington University in St. Louis CSE, 567, 2009.
  • Chawla et al. [2002] Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research, 16:321–357, 2002.
  • Daley and Vere-Jones [2007] Daryl J Daley and David Vere-Jones. An introduction to the theory of point processes: volume II: general theory and structure. Springer Science & Business Media, 2007.
  • Deotte [2019] Chris Deotte. XGB Fraud with Magic - [0.9600], 2019.
  • Dervovic et al. [2021] Danial Dervovic, Parisa Hassanzadeh, Samuel Assefa, and Prashant Reddy. Non-parametric stochastic sequential assignment with random arrival times. In Proceedings of the International Joint Conferences on Artifical Intelligence (IJCAI), 2021.
  • Giesecke [2004] Kay Giesecke. Credit risk modeling and valuation: An introduction. Available at SSRN 479323, 2004.
  • He and Garcia [2009] Haibo He and Edwardo A Garcia. Learning from imbalanced data. IEEE Transactions on knowledge and data engineering, 21(9):1263–1284, 2009.
  • Houssou et al. [2019] Régis Houssou, Jérôme Bovay, and Stephan Robert. Adaptive financial fraud detection in imbalanced data with time-varying Poisson processes. arXiv preprint arXiv:1912.04308, 2019.
  • Kim and Whitt [2014a] Song-Hee Kim and Ward Whitt. Are call center and hospital arrivals well modeled by nonhomogeneous Poisson processes? Manufacturing & Service Operations Management, 16(3):464–480, 2014.
  • Kim and Whitt [2014b] Song-Hee Kim and Ward Whitt. Choosing arrival process models for service systems: Tests of a nonhomogeneous Poisson process. Naval Research Logistics (NRL), 61(1):66–90, 2014.
  • Kou [2002] Steven G Kou. A jump-diffusion model for option pricing. Management Science, 48(8):1086–1101, 2002.
  • Koyejo et al. [2014] Oluwasanmi O Koyejo, Nagarajan Natarajan, Pradeep K Ravikumar, and Inderjit S Dhillon. Consistent binary classification with generalized performance metrics. Advances in Neural Information Processing Systems, 27:2744–2752, 2014.
  • Kuhl and Wilson [2000] Michael E Kuhl and James R Wilson. Least squares estimation of nonhomogeneous Poisson processes. Journal of Statistical Computation and Simulation, 67(1):699–712, 2000.
  • Lee et al. [1991] Sanghoon Lee, James R Wilson, and Melba M Crawford. Modeling and simulation of a nonhomogeneous Poisson process having cyclic behavior. Communications in Statistics-Simulation and Computation, 20(2-3):777–809, 1991.
  • Lewis and Shedler [1976] Peter AW Lewis and Gerald S. Shedler. Statistical analysis of non-stationary series of events in a data base system. IBM Journal of Research and Development, 20(5):465–482, 1976.
  • Li and Vorobeychik [2015] Bo Li and Yevgeniy Vorobeychik. Scalable optimization of randomized operational decisions in adversarial classification settings. In Artificial Intelligence and Statistics, pages 599–607, 2015.
  • Provost [2000] Foster Provost. Machine learning from imbalanced data sets 101. In Proceedings of the AAAI’2000 workshop on imbalanced data sets, volume 68, pages 1–3. AAAI Press, 2000.
  • Rausand and Hoyland [2003] Marvin Rausand and Arnljot Hoyland. System reliability theory: models, statistical methods, and applications, volume 396. John Wiley & Sons, 2003.
  • Ross [2014] Sheldon M Ross. Introduction to probability models. Academic Press, 2014.
  • Shen and Kurshan [2020] Hongda Shen and Eren Kurshan. Deep q-network-based adaptive alert threshold selection policy for payment fraud systems in retail banking. In Proceedings of the ACM International Conference on AI in Finance (ICAIF ’20), New York, NY, 2020.

Appendix A Non-Homogeneous Poisson Process

This section provides known properties of NHPPs that are referred to in the main paper or used in the technical proofs. For additional properties and detailed proofs, see [Daley and Vere-Jones, 2007; Ross, 2014].

Proposition 1 (Independent splitting (thinning) of a NHPP).

The independent splitting of a NHPP with intensity function λ(t) into n split processes with splitting functions p_1(t), …, p_n(t) produces n independent NHPPs with intensities p_1(t)λ(t), …, p_n(t)λ(t).

Proposition 2 (Superposition of independent NHPPs).

The superposition of independent NHPPs is itself a NHPP with an intensity equal to the sum of the component intensities.

Proposition 3.

Let N_1(t) and N_2(t) be two independent NHPPs with respective intensity functions λ_1(t) and λ_2(t), and let N(t) = N_1(t) + N_2(t). Then,

  • N(t) is a NHPP with intensity function λ(t) = λ_1(t) + λ_2(t).

  • Let X_i indicate the event that the i-th arrival of N(t), occurring at time t_i, belongs to N_1(t).

    Random variables X_i are Bernoulli(p_i), with p_i = λ_1(t_i) / (λ_1(t_i) + λ_2(t_i)).
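As an informal check of this Bernoulli-splitting property, the following sketch uses assumed constant intensities (a special case of NHPP intensities, chosen only for illustration) and verifies that the fraction of superposed events originating from the first process concentrates around λ_1 / (λ_1 + λ_2):

```python
import random

random.seed(0)

# Assumed constant intensities (a special case of NHPP intensities).
lam1, lam2 = 2.0, 6.0
T = 5000.0  # observation horizon

def arrival_count(rate, horizon):
    """Number of arrivals of a homogeneous Poisson process on [0, horizon]."""
    t, count = 0.0, 0
    while True:
        t += random.expovariate(rate)
        if t > horizon:
            return count
        count += 1

n1 = arrival_count(lam1, T)  # events from the first process
n2 = arrival_count(lam2, T)  # events from the second process

# Each event of the superposed process originates from the first process
# with probability lam1 / (lam1 + lam2) = 0.25.
frac = n1 / (n1 + n2)
```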

Proposition 4.

Arrival times corresponding to a NHPP with intensity λ(t) are dependent; however, when conditioned on previous arrivals we have the following. If an event arrives at time s, then, independent of all arrivals prior to s, the random wait time to the next event, denoted by W, is distributed as

P(W > w) = exp( −∫_s^{s+w} λ(u) du ), for w ≥ 0.
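A minimal sketch of sampling this conditional wait time via thinning (in the style of Lewis and Shedler), under an assumed intensity λ(t) = 1 + 0.5 sin t with bound λ_max = 1.5; the empirical mean wait is compared against E[W] = ∫₀^∞ P(W > w) dw computed by quadrature:

```python
import math
import random

random.seed(1)

# Assumed time-varying intensity (for illustration only).
lam = lambda t: 1.0 + 0.5 * math.sin(t)
lam_max = 1.5  # upper bound on lam(t), required by thinning

def wait_from(s):
    """Sample the wait time from s to the next event via thinning."""
    t = s
    while True:
        t += random.expovariate(lam_max)        # candidate arrival
        if random.random() < lam(t) / lam_max:  # accept w.p. lam(t)/lam_max
            return t - s

waits = [wait_from(0.0) for _ in range(50000)]
emp_mean = sum(waits) / len(waits)

# From P(W > w) = exp(-(w + 0.5*(1 - cos w))) for s = 0,
# E[W] = integral of the survival function, via a simple Riemann sum.
dw = 1e-3
theory_mean = dw * sum(
    math.exp(-(i * dw + 0.5 * (1.0 - math.cos(i * dw))))
    for i in range(int(20.0 / dw))
)
```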


Appendix B Proof of Theorem 1

As described in Sec. 4, the arrival process can be split into two independent processes. Given that the assigned classifier scores are independent of one another, per Proposition 1, we can further split each process into two independent sub-processes corresponding to transactions with scores higher and lower than the threshold, respectively, such that


  • Process with intensity function corresponds to non-fraudulent transactions with scores exceeding the threshold.

  • Process with intensity function corresponds to fraudulent transactions with scores exceeding the threshold.

Let denote the event that a transaction arriving at time with a score higher than the threshold belongs to the fraudulent sub-process. Given Proposition 3, this random variable is Bernoulli with parameter


Given a classifier with perfect predictive power and a properly set threshold, the number of inspections is no more than the overall number of transactions that exceed the threshold in the interval, which belong to the superposed exceedance process. For too low a threshold value, transactions with scores higher than the threshold arriving after the first ones will not be inspected due to the capacity restriction. As a result of Proposition 3, since the sum of Bernoulli random variables has a Binomial distribution, the number of fraudulent transactions among the first arrivals exceeding the threshold is Binomially distributed.

The total number of fraudulent arrivals in the interval, and therefore the detection rate, is given by


where (10) follows since for , and (11) results from the fact that is the expected value of . Eq. (13) results from replacing from (9).

Appendix C Proof of Theorem 2

Given that in the setting of this paper we are interested in applications with a large number of expected arrivals in the interval, for simplicity, let us assume that , and therefore . Then, from Theorem 1 it follows that for a given threshold, the detection rate can be rewritten as follows


When , then is decreasing in since the CDF is increasing in . Therefore, it is maximized for the smallest value for which the condition is satisfied, i.e., . When , then

which is less than the value achieved when . Therefore, the maximum is achieved with .

Note that eq. (15) has an intuitive interpretation. Suppose , then the capacity exceeds the expected number of transactions with , of which are fraudulent on average. Otherwise, when , only a fraction of the transactions with are captured on average, of which are fraudulent.
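This interpretation can be illustrated numerically. Everything below is an assumption made for the sketch, not a quantity from the paper: Gaussian class-conditional score densities, an arrival volume, a fraud prevalence, and a capacity. The maximizing threshold then sits roughly where the expected number of exceedances matches the capacity:

```python
import math

# Assumed arrival volume, fraud prevalence, and inspection capacity.
Lambda, pi_f, C = 10000.0, 0.01, 50.0

Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
F0 = lambda tau: Phi(tau)        # assumed non-fraud score CDF, N(0, 1)
F1 = lambda tau: Phi(tau - 2.0)  # assumed fraud score CDF, N(2, 1)

def detection_rate(tau):
    # Expected number of transactions (and of frauds) with scores above tau.
    exceed = Lambda * ((1 - pi_f) * (1 - F0(tau)) + pi_f * (1 - F1(tau)))
    fraud_exceed = Lambda * pi_f * (1 - F1(tau))
    # If capacity falls short, only a C/exceed fraction is inspected.
    return (fraud_exceed / (Lambda * pi_f)) * min(1.0, C / exceed)

taus = [i / 100.0 for i in range(-300, 501)]
best_tau = max(taus, key=detection_rate)
```

Sweeping the threshold shows the tradeoff: too low a threshold dilutes the capacity over many non-fraudulent exceedances, too high a threshold leaves capacity unused.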

Appendix D Proof of Theorem 5

Per Proposition 1, we denote the arrival processes of non-fraudulent and fraudulent transactions with scores exceeding the critical curve by and , respectively, with intensities and . Based on Proposition 3, a transaction arriving at with score is fraudulent, i.e., belongs to , with probability


Let , , be the (random) waiting time starting from until the first transaction is selected for inspection when we have inspections left, i.e., the additional time until a transaction score exceeds critical curve . If the inspection happened at time with respect to critical curve , then the next inspection happens at with respect to critical curve . Let us define the index . Based on Proposition 4, is distributed according to (8) with intensity function .

Let us denote the waiting times between inspections by . Then, the expected fraction of fraudulent transactions that are selected for inspection is

where (a) follows since for .

Appendix E Proof of Theorem 5

For a given inspection capacity , each of the (non-fraudulent or fraudulent) transactions is equally likely to be selected for inspection with probability . Then, the expected number of fraudulent transactions arriving according to process that are selected is

where (a) follows since the sampling is independent of whether a transaction is fraudulent, and (b) follows from Jensen's inequality since the function is convex for , and therefore .
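The Jensen step can be checked by simulation: since x ↦ 1/x is convex on the positive reals, E[C / N] ≥ C / E[N]. The Poisson arrival count and the capacity below are made-up values for illustration:

```python
import math
import random

random.seed(2)

def poisson(mean):
    """Poisson sampler via Knuth's method; adequate for moderate means."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1

# Assumed capacity and arrival-count distribution (illustration only).
C, mean_N = 10.0, 100.0
samples = [max(poisson(mean_N), 1) for _ in range(20000)]  # keep N >= 1

lhs = sum(C / n for n in samples) / len(samples)  # E[C / N]
rhs = C / (sum(samples) / len(samples))           # C / E[N]
```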

Appendix F Proof of Theorem 6

We derive the detection rate using the following definition.

Definition 1 (Order statistic).

Let X_1, …, X_n be continuous random variables with a common PDF f. The ordered realizations of the random variables, X_(1) ≤ X_(2) ≤ ⋯ ≤ X_(n), sorted in increasing order, are also random variables. X_(j) is referred to as the j-th order statistic.

As per Definition 1, we denote the sorted scores assigned to non-fraudulent and fraudulent transactions in increasing order, respectively, by , and .

Among the fraudulent transactions, consider the one with the largest score, which is equivalent to the order statistic of a sample of size with PDF . Let us define the index . Similarly, among the non-fraudulent transactions, consider the one with the largest score, which is equivalent to the order statistic of a sample of size with PDF . Let us define the index . Then, for any , the fraudulent transaction with the largest score is selected for inspection, only if


Therefore, the fraction of fraudulent transactions that are inspected is given by


where (18) follows since for , and , is a random variable with CDF


and the PDF of the order statistic is given by Lemma 2, provided in the following.

Lemma 2.

[Casella and Berger, 2002, Theorem 5.4.4] Let X_(1), …, X_(n) be the order statistics of a random sample of size n from a continuous distribution with CDF F(x) and PDF f(x). Then, the PDF of X_(j) is

f_{X_(j)}(x) = ( n! / ((j − 1)! (n − j)!) ) f(x) [F(x)]^{j−1} [1 − F(x)]^{n−j}.
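As a quick sanity check of the lemma for the special case of the maximum (j = n) of a uniform sample, the PDF reduces to n x^(n−1) on [0, 1], giving E[X_(n)] = n / (n + 1); a simulation recovers this mean:

```python
import random

random.seed(3)

# Maximum of n i.i.d. Uniform(0, 1) variables; by the lemma its PDF is
# n * x**(n - 1), hence its mean is n / (n + 1).
n, trials = 5, 100000
emp_mean = sum(
    max(random.random() for _ in range(n)) for _ in range(trials)
) / trials
theory = n / (n + 1)
```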


Appendix G End-to-End Upper Bound

In this paper, we consider approaches that identify suspicious events in two steps: (1) learning a predictive classifier that discovers frauds based on the transaction features , and (2) threshold-based sampling based on the classifier scores. We have mainly focused on the second step, and compared sampling methods for a given classifier. By considering the end-to-end system that takes both steps into account simultaneously, we can define the fraud detection rate as a function of the inspection capacity, denoted by , as follows:


The following theorem provides an upper bound on , which is also an upper bound on defined in eq. (1) of the main paper. Note that this bound is not necessarily achievable.

Theorem 1 (End-to-End Upper Bound).

For a given inspection capacity , the end-to-end tradeoff satisfies


The ideal classifier would be able to, given an inspection budget of transactions, perfectly select (the first) fraudulent transactions as they arrive. With enough capacity, i.e., when , the inspector will detect all frauds without error. Therefore,

which follows from Jensen’s inequality since function is convex for , and therefore .

Appendix H Experiment Setup Details

Data Preprocessing:

The IEEE-fraud dataset contains more than one million online transactions, and each transaction contains more than features. While the original dataset contains a train set and a test set, only the train set contains fraud labels; we therefore use only the train set, which has K samples, for our analysis. Each transaction has a timestamp feature (in seconds) provided with respect to a reference value. We assume that each time interval corresponds to a full day of 86,400 seconds, resulting in daily episodes for our experiments. We follow the analysis provided in [Deotte, 2019] for preprocessing and removing redundant features through correlation analysis, and we further engineer aggregate features that increase the local validation AUC.
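As a hypothetical sketch of this episode construction (the row layout and helper below are illustrative, not the paper's actual preprocessing code), transactions can be bucketed into daily episodes by integer division of the seconds-valued timestamp:

```python
# 86,400 seconds per day; `rows` and its (timestamp, label) layout are
# assumptions made for this sketch.
SECONDS_PER_DAY = 86400

def to_episodes(rows):
    """Group (timestamp_seconds, is_fraud) pairs into daily episodes."""
    episodes = {}
    for ts, label in rows:
        episodes.setdefault(ts // SECONDS_PER_DAY, []).append(label)
    return episodes

rows = [(100, 0), (90000, 1), (90500, 0), (200000, 0)]
eps = to_episodes(rows)  # day 0: [0], day 1: [1, 0], day 2: [0]
```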

As described in Sec. 5 of the paper, the original dataset contains around frauds. We create two additional datasets by modifying the number of fraudulent transactions, up-sampling ([Chawla et al., 2002]) and down-sampling (uniformly at random) the minority class. The average number of daily transactions and the average number of daily frauds are provided in Table 2.

Class Imbalance | Average Number of Daily Transactions | Average Number of Daily Frauds
Table 2. Properties of IEEE-fraud dataset.

Training Setup: We split the dataset into three portions by randomly selecting the days, such that there are days for training the classifier, days for estimating , and , and days for the empirical experiments. The parameters used for training each classifier based on the area under the ROC curve (AUC) are as follows:

  • Gradient Boosted Decision Trees (GBDT): We use the XGBoost library to train our classifier, using of the data for cross-validation, with the following parameters.

    {   'n_estimators': 2000,
        'max_depth': 12,
        'learning_rate': 0.02,
        'subsample': 0.8,
        'colsample_bytree': 0.4,
        'missing': -1,
        'eval_metric': 'auc' }
  • Random Forests (RF): We use the scikit-learn library to train our classifier, using grid search cross-validation over the following hyperparameters in iterations, and use the selected set of parameters for our final training.

    {   'bootstrap': [True],
        'max_depth': [5, 12],
        'max_features': [2, 3],
        'min_samples_leaf': [3, 4, 5],
        'min_samples_split': [8, 10, 12],
        'n_estimators': [100, 200, 300, 1000]}
  • Logistic regression (LR): We use the scikit-learn library to train our classifier, using grid search cross-validation in iterations over

    {  ’C’: np.power(10.0, np.arange(-3, 3)) }


Figure 5. Tradeoff with dynamic thresholds for batch processing (a, b), and static thresholds (c, d).

Additional Experiment Results: In this section, we provide results similar to Fig. 4 for the Batch Processing (Fig. 5 (a), (b)) and Static Thresholds (Fig. 5 (c), (d)) methods. Both methods exhibit trends similar to the method with dynamic thresholds. Specifically, regarding the impact of class imbalance (left), for very small capacities the results are very close to the upper bound, and as the capacity increases, class imbalance impacts the detection rate more severely. Moreover, the classifier learning phase impacts the tradeoff such that a classifier with inferior predictive power (low AUC) selects non-fraudulent transactions early on, and operates further from the upper bound.