1. Introduction
Automated information processing and decision-making systems in finance, security, quality control, and medical applications rely on machine learning models to monitor sequentially arriving events for malicious activity or abnormalities. Identifying such events in a timely manner can be crucial in preventing unfavorable outcomes, such as monetary loss due to fraud in retail banking or data breaches due to cyber attacks. While missing an abnormal event can have adverse consequences, such events are sporadic, and investigating them entails operational costs and can lead to processing delays; these systems are therefore restricted in the number of risky events they select for manual inspection.
Many machine learning classification algorithms predict a score (often a probability) for each data sample representing the algorithm's confidence about its class membership. In a binary classification problem, the class labels are derived by converting the predicted scores to binary labels using a (decision) threshold. Adjusting the threshold, especially in settings with severe class imbalance or when the misclassification of one class outweighs the misclassification of the other class, can profoundly impact the classifier performance (e.g., the True Positive (TP)/False Positive (FP) tradeoff) (Provost, 2000). The threshold is generally tuned using a grid search across a range of thresholds, or it is computed from the receiver operating characteristic (ROC) curve or, in highly skewed datasets, the precision-recall curve
(He and Garcia, 2009). Given the imbalanced nature of data in this domain, which makes it difficult to learn classifiers that efficiently discriminate between the minority and majority class, and the limited resources available for inspecting time-sensitive risky events, we are interested in understanding the relationship between the rate of detection from the minority class (i.e., the fraction of samples from the minority class selected for inspection) and the inspection budget. Specifically, we focus on applications that involve real-time processing and decision-making, where an abnormal event can only be inspected at the time of arrival, and we investigate how different selection policies based on classifier predictions operate in terms of the limited inspection budget rather than the decision threshold.
Point processes, such as the Poisson process, have been widely used for modeling event arrivals at random times in various applications, such as arrivals in call centers (Kim and Whitt, 2014a), system failures (Rausand and Hoyland, 2003), network traffic models (Chandrasekaran, 2009), and financial modeling (Giesecke, 2004; Kou, 2002). We note that Poisson processes are not suitable in settings with scheduled arrivals (e.g., doctor appointments), intentionally separated events (e.g., plane landings), or events that arrive in groups (e.g., at a restaurant, where group members are not independent of one another). One can perform statistical tests on the data to confirm whether the arrivals can be modeled as a non-homogeneous Poisson process, as described in (Kim and Whitt, 2014b). In the setting considered in this work, events (e.g., transactions) arrive and are processed independently from one another, and therefore their arrival can be modeled as a non-homogeneous Poisson process. The analysis can easily be extended to the more general renewal process (Daley and Vere-Jones, 2007).
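As a concrete illustration, arrivals from a non-homogeneous Poisson process can be simulated by thinning a homogeneous process (the classic Lewis-Shedler approach). Below is a minimal sketch; the rate function and its upper bound are illustrative assumptions, not estimates from any dataset.

```python
import numpy as np

def simulate_nhpp(rate_fn, rate_max, horizon, rng):
    """Simulate NHPP arrival times on [0, horizon] by thinning:
    draw candidates from a homogeneous process of rate `rate_max`,
    keep each with probability rate_fn(t) / rate_max."""
    t, arrivals = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)      # candidate inter-arrival
        if t > horizon:
            break
        if rng.random() * rate_max < rate_fn(t):  # accept w.p. rate_fn(t)/rate_max
            arrivals.append(t)
    return np.array(arrivals)

rng = np.random.default_rng(0)
rate = lambda t: 50.0 + 30.0 * np.sin(np.pi * t)  # hypothetical intraday rate
arrivals = simulate_nhpp(rate, 80.0, 1.0, rng)
```

The bound `rate_max` must dominate the rate function everywhere on the horizon; the expected number of accepted arrivals equals the integral of the rate function (about 69 in this toy example).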
Contributions
We consider an imbalanced binary classification problem where the goal is to select a limited number of sequentially arriving data samples that are most likely to belong to the minority class. We present the problem in the context of fraud detection in financial transactions, but our results apply to the general imbalanced binary classification problem. Our contributions are as follows:


We break the problem into two tasks: learning a classifier from data, and using the classifier predictions to sample sequential arrivals for inspection. We focus on the second task, and study the tradeoff between the minority class detection rate and the inspection budget, for a given learned classifier. This tradeoff can be used to determine how many samples need to be inspected to achieve a certain detection rate from the minority class.

We assume that events arrive according to a non-homogeneous Poisson process (NHPP) with a periodic (e.g., daily) rate function, and focus on four selection (decision) strategies: sampling based on static and dynamic thresholds, random sampling, and sampling in batches. For each method, we analytically characterize the tradeoff between the minority-class detection rate and the inspection capacity.

For the case of sampling with respect to a static threshold, we determine the optimal threshold value that maximizes the minority-class detection rate for a given inspection capacity.

We use a publicly available fraud detection dataset to learn a classifier and estimate the time-varying arrival rate function of the NHPP, and compare the empirical results from each sampling technique with our analytical bounds. We show that using dynamic thresholds operates very closely to the upper performance limit resulting from sampling in batches, especially for very small inspection capacities.

For this dataset, we investigate how class imbalance and the predictive power of the classifier affect the tradeoff, and compare the empirical results against an upper bound for the end-to-end problem that considers the learning and operational decision tasks jointly.
2. Related Work
Most work on detecting samples from the minority class focuses on learning optimal classifiers with respect to a given performance metric, such as the F1-score or the area under the ROC curve, and then satisfies the inspection constraints by adjusting the decision threshold. The work in (Koyejo et al., 2014) studies the optimal fixed threshold selection problem for binary classification with respect to various performance metrics. In (Shen and Kurshan, 2020), the authors consider a dynamic environment and model threshold tuning as a sequential decision-making problem. They use reinforcement learning to adaptively adjust the thresholds by maximizing a reward in terms of the net monetary value of missed and detected frauds when restricted to a fixed inspection capacity. The work in (Li and Vorobeychik, 2015) studies the adversarial binary classification problem with operational constraints (e.g., an inspection budget), where an intelligent adversary attempts to evade the decision policies. By modeling the problem as a Stackelberg game, they determine the optimal randomized operational policy that abides by the constraints. In other related work with dynamic environments, (Houssou et al., 2019) considers a fraud detection setting where rare fraudulent events arrive from a Poisson process with a parametric arrival function estimated from the data, and the goal is to predict the arrival of the next fraudulent event. More recently, (Dervovic et al., 2021) adopted the sequential assignment algorithm of (Albright, 1974) in the fraud detection setting such that the overall value of detected fraudulent transactions is maximized. In this paper, we use this algorithm to find adaptive thresholds for transactions arriving at random times according to a Poisson process. Given the sequential nature of arrivals in information processing and decision-making applications, this algorithm allows us to directly account for the limited inspection capacity when deciding whether to inspect transactions based on the output of a machine learning model.
3. Problem Formulation
Consider a transaction (e.g., payment, credit card purchase) fraud detection setting, where transactions arrive sequentially at random times over a finite time horizon [0, T] according to a Non-Homogeneous Poisson Process (NHPP) with a continuous arrival rate (intensity) function λ(t), where t denotes the time of arrival. To each transaction i we associate a triplet (X_i, Y_i, T_i), where X_i is a random variable representing the observed features of transaction i, Y_i ∈ {0, 1} indicates if the transaction is fraudulent, and T_i denotes its random arrival time. We assume that transactions are independent from one another and that there is significant class imbalance, i.e., there are considerably fewer fraudulent transactions compared to non-fraudulent ones. There is a binary classifier h that assigns a random value S_i = h(X_i) to a transaction with feature vector X_i, which represents the classifier's confidence that the transaction is fraudulent. We have an inspector (decision-maker) that can investigate a transaction and determine whether it is fraudulent without error; however, the inspector is only able to investigate a limited number of transactions during [0, T]. Therefore, the inspector needs to decide which transactions to select for inspection in order to detect as many fraudulent transactions as possible given its limited inspection resources. Note that since transactions i and j are independent, for any classifier h the corresponding scores S_i and S_j are also independent.

Transaction Arrival Process:
We assume transactions arrive according to a NHPP with rate function λ(t) and cumulative rate function Λ(t) = ∫₀ᵗ λ(u) du. The number of transactions in the interval [0, t] is a random variable with a Poisson distribution parametrized by Λ(t), and the expected number of arrivals in [0, T] is Λ(T). (A NHPP is denoted by the counting process of arrivals in [0, t]; we write N for the arrivals in [0, T].) The rate function and cumulative rate function can be estimated from several observed realizations of the process over [0, T], using nonparametric estimators as proposed in (Lewis and Shedler, 1976; Arkin and Leemis, 2000), or through parametric methods as in (Lee et al., 1991; Kuhl and Wilson, 2000). In our experiments in Sec. 5, we use the heuristic estimator proposed in (Lewis and Shedler, 1976).

3.1. Objective
The inspector has limited resources, and can only select and investigate a fraction of the incoming transactions in [0, T], which we refer to as the inspection capacity. We assume that for a given capacity c, it selects a fraction of the expected number of arrivals, i.e., cΛ(T) transactions. Our goal is to evaluate how well a given sampling method chooses fraudulent transactions based on the scores of a binary classifier when there is limited inspection capacity. Specifically, we define the fraud detection rate as a function of c, denoted by D(c), to be the expected fraction of true frauds selected for inspection, given as
(1) 
where the set of sequentially arriving transaction indices selected for inspection depends on the classifier and on the arrival process. Note that D(c) in (1) is closely related to the true positive rate (TPR) of the classifier, with the slight difference that it is defined for the setting of streaming data constrained by operational resources, and is parametrized by the capacity c.
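To make the definition concrete, the empirical counterpart of the detection rate can be computed directly from the ground-truth labels and the set of inspected indices. The helper below is a hypothetical sketch, not the paper's implementation.

```python
import numpy as np

def detection_rate(selected_idx, labels):
    """Fraction of true frauds (labels == 1) whose indices appear
    in the inspected set `selected_idx` (empirical analogue of (1))."""
    labels = np.asarray(labels)
    frauds = np.flatnonzero(labels == 1)
    if frauds.size == 0:
        return 0.0
    return float(np.isin(frauds, selected_idx).mean())

# 3 frauds at indices 1, 3, 4; the inspections caught two of them
rate = detection_rate([1, 2, 4], [0, 1, 0, 1, 1, 0])
```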
3.2. Inspection Sampling Methods
We briefly describe the decision-making methods used to select transactions for inspection.
Static Thresholds:
The inspector determines a fixed threshold τ, and will only inspect an arriving transaction if its score satisfies s ≥ τ and the inspection capacity has not been exhausted. If a transaction is not selected for inspection at the time of arrival, it will not be inspected at a later time. The inspector determines the threshold while accounting for the arrival of transactions and the classifier performance. Note that if the threshold is set too high, the inspector may not select enough transactions for inspection; if it is set too low, the inspector may initially select more non-fraudulent transactions, using up its inspection capacity too early, and will therefore not be able to inspect transactions with high scores arriving later.
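A static-threshold policy over a stream takes only a few lines; `scores` arrive in time order and `budget` is the absolute number of allowed inspections (both names are illustrative):

```python
def static_threshold_policy(scores, threshold, budget):
    """Inspect an arriving transaction iff its score meets the fixed
    threshold and the inspection budget is not yet exhausted."""
    selected = []
    for i, s in enumerate(scores):
        if len(selected) >= budget:
            break                 # capacity used up; later arrivals skipped
        if s >= threshold:
            selected.append(i)
    return selected

# a low threshold burns the budget early and misses the late 0.95
picked = static_threshold_policy([0.6, 0.2, 0.7, 0.95], 0.5, 2)
```

With threshold 0.5 and budget 2, the policy selects indices 0 and 2 and can no longer inspect the high-scoring arrival at index 3, illustrating the risk of setting the threshold too low.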
Dynamic Thresholds:
In this case, the inspector determines a time-dependent threshold τ(t), and inspects a transaction arriving at time t if its associated score satisfies s ≥ τ(t). As with the static threshold, this time-varying threshold is determined such that the inspector selects the transactions that are most likely to be fraudulent given the classifier performance and the arrival process.
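The dynamic policy differs from the static one only in that the threshold depends on the arrival time (and, in the sequential-assignment variant of Sec. 4.2, on the remaining budget). A sketch with an illustrative threshold function:

```python
def dynamic_threshold_policy(arrival_times, scores, threshold_fn, budget):
    """Inspect a transaction arriving at time t with score s iff
    s >= threshold_fn(t, k), where k is the remaining budget."""
    selected, remaining = [], budget
    for i, (t, s) in enumerate(zip(arrival_times, scores)):
        if remaining == 0:
            break
        if s >= threshold_fn(t, remaining):
            selected.append(i)
            remaining -= 1
    return selected

# illustrative threshold that relaxes as the horizon T = 1 approaches
relax = lambda t, k: 1.0 - t
picked = dynamic_threshold_policy([0.1, 0.5, 0.9], [0.5, 0.6, 0.3], relax, 2)
```

Here the early arrival with score 0.5 is rejected against a strict threshold, while later, cheaper-to-accept arrivals are selected; a well-chosen threshold curve spreads the budget over the whole horizon.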
Random Sampling:
The inspector disregards the confidence score assigned by the classifier, and selects transactions uniformly at random. This method is equivalent to a worst-case scenario in which a no-skill classifier assigns the same score to all transactions.
Batch Processing:
Assume that the inspector can process and investigate transactions in batches, so there is no need to select transactions instantaneously at the time of arrival. Then, the inspector selects the set of transactions with the highest scores over the entire horizon [0, T]. Batch processing is not a practical method for the setup considered here, as there is a strict requirement for timely decision-making. However, it provides an upper performance limit for any realistic method that uses a classifier for real-time decision-making, and is therefore included in our analysis.
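Batch processing reduces to offline top-k selection, which makes the upper bound easy to compute on held-out data (numpy assumed; names illustrative):

```python
import numpy as np

def batch_select(scores, budget):
    """Offline upper bound: return the indices of the `budget`
    highest-scoring transactions, ignoring arrival times."""
    return np.argsort(np.asarray(scores))[::-1][:budget]

picked = batch_select([0.1, 0.9, 0.4, 0.7], 2)
```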
The following section describes each sampling method in detail and presents its corresponding fraud detection rate.
4. Fraud Detection Rate vs. Inspection Capacity Tradeoff
In this section, we compute the expected fraction of frauds that are selected, and therefore detected, with each of the methods described in Sec. 3.2. Let f0 and f1 denote the probability density function (PDF) of the score assigned to a non-fraudulent and a fraudulent transaction, respectively. (We do not explicitly show the random variable as a subscript of the densities hereafter.) Accordingly, F0 and F1 denote the cumulative distribution functions (CDFs) of classifier scores assigned to non-fraudulent and fraudulent transactions.
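In the experiments (Sec. 5) these CDFs are replaced by empirical estimates from held-out scores; a minimal ECDF sketch:

```python
import numpy as np

def ecdf(samples):
    """Return the empirical CDF: t -> fraction of samples <= t."""
    xs = np.sort(np.asarray(samples, dtype=float))
    return lambda t: float(np.searchsorted(xs, t, side="right")) / xs.size

F1_hat = ecdf([0.2, 0.4, 0.6, 0.8])   # e.g., scores of known frauds
```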
Based on Proposition 1 given in Appendix A, the arrival process with rate λ(t) can be split into two independent subprocesses as follows:

Process with rate representing the arrival of non-fraudulent transactions, and the random number of arrivals in . We denote the scores corresponding to transactions from (ordered in time), by with arrival times .

Process with rate representing the arrival of fraudulent transactions, and the random number of arrivals in . We denote the scores corresponding to transactions from (ordered in time), by with arrival times .
4.1. Static Thresholds
The inspector selects transactions as they arrive if the classifier score exceeds a predetermined threshold τ, provided the capacity constraint is not violated.
Theorem 1.
The fraud detection rate with respect to a static threshold τ, denoted by D_ST(τ), is
(2) 
Proof.
The proof is given in Appendix B. ∎
Theorem 2 provides the threshold that maximizes the detection rate.
Theorem 2.
Given an inspection capacity c, the optimal static threshold value τ* that maximizes the detection rate satisfies
(3) 
Proof.
The proof is given in Appendix C. ∎
Note that τ* is the threshold that matches the inspection capacity, such that on average only cΛ(T) transactions have scores exceeding τ* in [0, T], and it is independent of the rate function λ(t). For non-streaming settings without inspection restrictions, metrics such as Youden's J statistic or the Brier score are used to determine the optimal threshold.
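Operationally, Theorem 2 says the optimal static threshold is the score value exceeded by roughly a capacity-sized share of arrivals. Given representative historical scores, it can be estimated as a quantile of the overall score ECDF; a sketch under that assumption:

```python
import numpy as np

def optimal_static_threshold(historical_scores, capacity):
    """Estimate the threshold exceeded by a `capacity` fraction of
    scores, i.e. the (1 - capacity)-quantile of the score ECDF."""
    return float(np.quantile(historical_scores, 1.0 - capacity))

scores = np.arange(1, 101) / 100.0        # toy score sample
tau = optimal_static_threshold(scores, 0.10)
share_above = float(np.mean(scores > tau))
```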
4.2. Dynamic Thresholds
In the case of adaptive thresholds, transactions are sequentially selected for inspection according to a time-dependent threshold computed with respect to the arrival process. In this work, we adopt the strategy proposed in (Albright, 1974), originally designed for assigning jobs, each with an associated random value and arriving at random times, to a limited number of operatives with non-identical productivity. An optimal sequential assignment algorithm is proposed in (Albright, 1974) that maximizes the total expected reward, defined as the expected operative productivity. This algorithm was recently applied to a fraud detection problem in (Dervovic et al., 2021), where each transaction is a job arriving according to a non-homogeneous Poisson process, all operatives have identical productivity, and the value of a job is defined as a function of the transaction monetary amount and the classifier confidence score. Here, we define the job value as the classifier confidence score, but the results can easily be extended to the setting in (Dervovic et al., 2021).
The algorithm operates as follows: let y_k(t) denote a time-dependent threshold, referred to as a critical curve, for when the inspector can select k more transactions. If a transaction with score s arrives at time t and the inspector has k inspections left in its budget, it selects the transaction if and only if s ≥ y_k(t). The optimal critical curves are derived from a set of differential equations given in Theorem 3.
Theorem 3.
(Albright, 1974, Theorem 2) For a total number of K inspections, the optimal critical curves y_1(t) ≥ y_2(t) ≥ … ≥ y_K(t) that select the transactions with the highest expected sum of scores satisfy the following system of differential equations

dy_k(t)/dt = -λ(t) [ E(S - y_k(t))^+ - E(S - y_{k-1}(t))^+ ],    y_k(T) = 0,

where y_0(t) ≡ ∞ and (x)^+ = max(x, 0).
In our setting, the adaptive thresholds maximize the expected sum of scores, which in turn selects the events with the highest scores, i.e., the most suspicious ones.
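One way to compute the critical curves numerically is via the value functions of the assignment problem: V_k(t) is the expected total score from k remaining inspections on [t, T], with V_k(T) = 0 and y_k(t) = V_k(t) - V_{k-1}(t). The backward-Euler sketch below follows this formulation; the constant toy rate and uniform score sample are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def critical_curves(rate_fn, score_samples, horizon, n_slots, n_steps=2000):
    """Integrate the value functions V_k(t) backward from V_k(T) = 0.
    A score s at time t with k inspections left is accepted iff
    s >= y_k(t) = V_k(t) - V_{k-1}(t)."""
    dt = horizon / n_steps
    x = np.asarray(score_samples, dtype=float)
    V = np.zeros(n_slots + 1)                # V[0] is identically 0
    y = np.zeros((n_steps + 1, n_slots))     # y[i, k-1] approximates y_k(i * dt)
    for i in range(n_steps - 1, -1, -1):
        t = (i + 1) * dt
        Vnew = V.copy()
        for k in range(1, n_slots + 1):
            # marginal gain of one arrival: E[(S - y_k)^+]
            gain = np.mean(np.maximum(x - (V[k] - V[k - 1]), 0.0))
            Vnew[k] = V[k] + dt * rate_fn(t) * gain
        V = Vnew
        y[i] = V[1:] - V[:-1]
    return y

# toy example: constant rate 10 on [0, 1], scores uniform on [0, 1]
y = critical_curves(lambda t: 10.0, np.linspace(0.0, 1.0, 201), 1.0, 3)
```

The curves exhibit the expected qualitative behavior: they start high, decrease toward zero at the horizon, and are ordered so that the threshold is stricter when fewer inspections remain.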
Theorem 4.
Proof.
The proof is given in Appendix D. ∎
4.3. Random Sampling
The inspector selects a random subset of transactions, irrespective of the classifier scores; as stated in the following theorem, the detection rate is then a linear function of the inspection capacity.
Theorem 5.
The fraud detection rate using random sampling given an inspection capacity c, denoted by D_RS(c), is
(2) 	D_RS(c) = c.
Proof.
The proof is given in Appendix E. ∎
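Theorem 5's linear tradeoff is easy to sanity-check by simulation; here each arrival is inspected independently with probability c (the class prior and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n, c = 500_000, 0.2
is_fraud = rng.random(n) < 0.01        # ~1% minority class
inspected = rng.random(n) < c          # random sampling at capacity c
rate = (inspected & is_fraud).sum() / is_fraud.sum()
```

The empirical detection rate concentrates around c regardless of the class imbalance, since inspection and fraud status are independent.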
4.4. Batch Processing
In this case, the inspector selects the cΛ(T) transactions that have the highest scores among all transactions in [0, T]. Therefore, a fraudulent transaction is selected, irrespective of its arrival time, if it is among the cΛ(T) transactions with the largest scores.
Theorem 6.
Proof.
The proof is given in Appendix F. ∎
5. Experiments
In this section, we use a public dataset to compute the detection rate-inspection capacity tradeoff for each sampling method. We compare the analytical bounds on the tradeoff, derived in Sec. 4, with the observed average tradeoff obtained empirically using each sampling method. We use the IEEE-CIS Fraud Detection (IEEE-fraud) dataset (available at https://www.kaggle.com/c/ieeefrauddetection/data), provided by Vesta Corporation, containing over 1 million real-world e-commerce transactions with more than 400 feature variables, time stamps (in seconds) and fraud labels. The dataset contains 183 days of transactions, with a small fraction of samples labeled fraudulent. In order to demonstrate how the class imbalance impacts the detection rate, we modify the imbalance by up-sampling (SMOTE (Chawla et al., 2002)) and down-sampling (uniformly at random) the minority class to increase or decrease its share of the transactions, respectively. We refer to the dataset with a given percentage of frauds as IEEE-fraud.
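The imbalance adjustment can be emulated with plain random resampling of the minority class; the paper uses SMOTE for up-sampling, so the helper below is a simplified stand-in with illustrative names:

```python
import numpy as np

def resample_minority(X, y, target_fraction, rng):
    """Up- or down-sample the minority class (y == 1) so that it makes
    up `target_fraction` of the returned data. Plain random resampling;
    SMOTE would synthesize new minority points instead."""
    X, y = np.asarray(X), np.asarray(y)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    n_min = int(round(target_fraction * majority.size / (1.0 - target_fraction)))
    picked = rng.choice(minority, size=n_min, replace=n_min > minority.size)
    idx = np.concatenate([majority, picked])
    rng.shuffle(idx)
    return X[idx], y[idx]

X = np.zeros((1000, 2))
y = np.array([1] * 10 + [0] * 990)           # 1% minority class
X2, y2 = resample_minority(X, y, 0.05, np.random.default_rng(3))
```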
Arrival Rate, Classifier Score Densities and Dynamic Thresholds:
In our experiments, we consider each interval [0, T] to be one day, measured in seconds, and use a random split of the days as follows:
We use the first half of the data to train three classifiers with different predictive powers: gradient boosted decision trees (GBDT), random forests (RF) and logistic regression (LR). The training results on all three datasets, using the AUC of the ROC as the evaluation metric, are reported in Table 1. We use the second part of the data to estimate the Empirical Cumulative Distribution Function (ECDF) of the classifier scores, F0 and F1, shown in Fig. 1(a) for the IEEE-fraud dataset. As expected, a more powerful classifier assigns higher scores to fraudulent transactions with much higher probability. We use the method in (Lewis and Shedler, 1976) to estimate the rate function λ(t), shown in Fig. 1(b), which is used to compute the time-dependent thresholds of the Dynamic Thresholds method. Finally, we use the last part of the data for our empirical sampling experiments discussed in the following. A more detailed description of the experiment setup is provided in Appendix H. Additionally, in order to investigate how estimation errors or model assumptions (e.g., independence of classifier scores and times of arrival) affect the empirical results, we simulate data based on our estimated λ(t), F0 and F1, which we refer to as simulated data.

Table 1. Training results (AUC of the ROC) on the validation and test data.

Dataset       Parameters   Classifier   AUC (Valid)   AUC (Test)
IEEE-fraud                 GBDT
                           RF
                           LR
IEEE-fraud                 GBDT
                           RF
                           LR
IEEE-fraud                 GBDT
                           RF
                           LR
5.1. Analytical Results
Fig. 2 (a)-(d) displays the results on the IEEE-fraud dataset with the GBDT classifier for batch processing (BP), dynamic thresholds (DT), static thresholds (ST), and random sampling (RS), which corresponds to a no-skill classifier, for capacities equivalent to inspecting a given fraction of the expected arrivals each day. The dashed line delineates a (not necessarily achievable) upper bound on the entire tradeoff when the learning of the classifier is also taken into account, derived in Appendix G. For each method, the expected tradeoff derived analytically is very close to the experimental results on the test data and the simulated data. Specifically, the curves match almost perfectly for batch processing, given that it is independent of the arrival process. With dynamic thresholds, the difference between the analytical and empirical curves is much smaller than with static thresholds, since the mismatch between the estimated arrival rate and the actual arrivals of fraudulent transactions affects the analytical bounds more in the case of fixed thresholds. Finally, as stated in Theorem 5, for random sampling the detection rate equals the inspection capacity c. The experiments on a real dataset show that the NHPP formulation of arrivals could be used in practical applications for inspecting streaming data.
5.2. Empirical Results
Fig. 3 compares the tradeoff for different sampling methods on the IEEE-fraud dataset with the GBDT classifier. As expected, given the sequential nature of arrivals, using adaptive thresholds to sample suspicious transactions outperforms using a static threshold. In fact, the dynamic thresholds method operates very closely to batch processing, which is itself not a practical method for timely decision-making and serves as an upper performance limit for all threshold-based strategies that sample transactions in real time using a classifier. With the optimal threshold derived in Theorem 2, the static threshold curve stays within a small margin of the batch processing curve, and performs competitively with the dynamic thresholds method, especially for small c.
We investigate how other aspects of the imbalanced binary classification problem with limited resources, such as the minority-majority imbalance and the initial phase of learning a predictive classifier, impact the tradeoff. While we illustrate the results for dynamic thresholds, the tradeoffs for the other methods show similar trends, and are provided in Appendix H.
Class Imbalance:
Fig. 4(a) shows the tradeoff on the three datasets when using the GBDT classifier for predicting scores. Each curve is also compared with an upper bound (shown in dashed lines) for the end-to-end system (see Appendix G). As observed, with adaptive thresholds the tradeoff matches the upper bound for very small inspection capacities. As c increases, the detection rate diverges more from the upper bound in more imbalanced datasets, since it is much harder to learn powerful classifiers that discriminate between the minority and majority class samples.
Learning Phase:
Fig. 4(b) shows the tradeoff on the IEEE-fraud dataset for three classifiers with different predictive power. As expected, for the classifier with a higher AUC, which is better able to distinguish a fraudulent sample from a non-fraudulent one, the tradeoff curve is strictly superior across all inspection capacities. This is especially pronounced for very small c, evident from the steep slope of the tradeoff corresponding to GBDT, which is tangent to the upper bound. A classifier with a larger AUC will, with higher probability, predict a larger score for a sample from the minority class than for one from the majority class. Therefore, the most suspicious samples can be prioritized for selection across the entire interval without using up the capacity too early.
6. Conclusions
In this paper, we study the tradeoffs for real-time identification of suspicious events when there are operational capacity restrictions. By separating the learning phase from the operational decision phase, we characterize the minority-class detection rate directly as a function of the inspection resources and the learned classifier predictions. We formulate the streaming arrival of events as a non-homogeneous Poisson process, and analytically derive this tradeoff for static and adaptive threshold-based decision-making strategies. Our experiments on a public fraud detection dataset show that such a formulation could be used for practical applications with limited resources for inspecting streaming data, and that using adaptive thresholds operates very closely to the upper performance limit resulting from batch processing, especially for very small inspection resources. Future work includes studying the end-to-end tradeoff while also considering the learning phase, and extensions to settings where the misclassification costs of minority-class samples are non-identical.
Disclaimer
This paper was prepared for informational purposes by the Artificial Intelligence Research group of JPMorgan Chase & Co. and its affiliates (“JP Morgan”), and is not a product of the Research Department of JP Morgan. JP Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful.
References
 Albright [1974] S Christian Albright. Optimal sequential assignments with random arrival times. Management Science, 21(1):60–67, 1974.
 Arkin and Leemis [2000] Bradford L Arkin and Lawrence M Leemis. Nonparametric estimation of the cumulative intensity function for a nonhomogeneous Poisson process from overlapping realizations. Management Science, 46(7):989–998, 2000.
 Casella and Berger [2002] George Casella and Roger L Berger. Statistical inference, volume 2. Duxbury Pacific Grove, CA, 2002.
 Chandrasekaran [2009] Balakrishnan Chandrasekaran. Survey of network traffic models. Washington University in St. Louis CSE, 567, 2009.
 Chawla et al. [2002] Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. SMOTE: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16:321–357, 2002.
 Daley and Vere-Jones [2007] Daryl J Daley and David Vere-Jones. An introduction to the theory of point processes: volume II: general theory and structure. Springer Science & Business Media, 2007.
 Deotte [2019] Chris Deotte. XGB Fraud with Magic  [0.9600], 2019. https://www.kaggle.com/cdeotte/xgbfraudwithmagic09600/notebook.
 Dervovic et al. [2021] Danial Dervovic, Parisa Hassanzadeh, Samuel Assefa, and Prashant Reddy. Non-parametric stochastic sequential assignment with random arrival times. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2021.
 Giesecke [2004] Kay Giesecke. Credit risk modeling and valuation: An introduction. Available at SSRN 479323, 2004.
 He and Garcia [2009] Haibo He and Edwardo A Garcia. Learning from imbalanced data. IEEE Transactions on knowledge and data engineering, 21(9):1263–1284, 2009.
 Houssou et al. [2019] Régis Houssou, Jérôme Bovay, and Stephan Robert. Adaptive financial fraud detection in imbalanced data with time-varying Poisson processes. arXiv preprint arXiv:1912.04308, 2019.
 Kim and Whitt [2014a] Song-Hee Kim and Ward Whitt. Are call center and hospital arrivals well modeled by nonhomogeneous Poisson processes? Manufacturing & Service Operations Management, 16(3):464–480, 2014.
 Kim and Whitt [2014b] Song-Hee Kim and Ward Whitt. Choosing arrival process models for service systems: Tests of a nonhomogeneous Poisson process. Naval Research Logistics (NRL), 61(1):66–90, 2014.
 Kou [2002] Steven G Kou. A jumpdiffusion model for option pricing. Management science, 48(8):1086–1101, 2002.
 Koyejo et al. [2014] Oluwasanmi O Koyejo, Nagarajan Natarajan, Pradeep K Ravikumar, and Inderjit S Dhillon. Consistent binary classification with generalized performance metrics. Advances in Neural Information Processing Systems, 27:2744–2752, 2014.
 Kuhl and Wilson [2000] Michael E Kuhl and James R Wilson. Least squares estimation of nonhomogeneous Poisson processes. Journal of Statistical Computation and Simulation, 67(1):699–712, 2000.
 Lee et al. [1991] Sanghoon Lee, James R Wilson, and Melba M Crawford. Modeling and simulation of a nonhomogeneous Poisson process having cyclic behavior. Communications in Statistics - Simulation and Computation, 20(2-3):777–809, 1991.
 Lewis and Shedler [1976] Peter AW Lewis and Gerald S. Shedler. Statistical analysis of nonstationary series of events in a data base system. IBM Journal of Research and Development, 20(5):465–482, 1976.
 Li and Vorobeychik [2015] Bo Li and Yevgeniy Vorobeychik. Scalable optimization of randomized operational decisions in adversarial classification settings. In Artificial Intelligence and Statistics, pages 599–607, 2015.
 Provost [2000] Foster Provost. Machine learning from imbalanced data sets 101. In Proceedings of the AAAI’2000 workshop on imbalanced data sets, volume 68, pages 1–3. AAAI Press, 2000.
 Rausand and Hoyland [2003] Marvin Rausand and Arnljot Hoyland. System reliability theory: models, statistical methods, and applications, volume 396. John Wiley & Sons, 2003.
 Ross [2014] Sheldon M Ross. Introduction to probability models. Academic press, 2014.
 Shen and Kurshan [2020] Hongda Shen and Eren Kurshan. Deep Q-network-based adaptive alert threshold selection policy for payment fraud systems in retail banking. In Proceedings of the ACM International Conference on AI in Finance (ICAIF ’20), New York, NY, 2020.
Appendix A Non-Homogeneous Poisson Process
This section provides known properties of NHPPs that are referred to in the main paper or used in the technical proofs. For more properties and detailed proofs, see [Daley and Vere-Jones, 2007; Ross, 2014].
Proposition 1 (Independent splitting (thinning) of a NHPP).
The independent splitting of a NHPP with intensity λ(t) into subprocesses with splitting functions p_i(t) produces independent NHPPs with intensities p_i(t)λ(t).
Proposition 2 (Superposition of independent NHPPs).
The superposition of independent NHPPs is itself a NHPP with an intensity equal to the sum of the component intensities.
Proposition 3.
Let N1 and N2 be two independent NHPPs with respective intensity functions λ1(t) and λ2(t), and let λ(t) = λ1(t) + λ2(t). Then,

the superposition N = N1 + N2 is a NHPP with intensity function λ(t);

given an arrival of N at time t, the indicator that the event originated from N1 is a Bernoulli(p(t)) random variable with p(t) = λ1(t)/λ(t), independently of all other arrivals.
Proposition 4.
Arrival times corresponding to a NHPP with intensity λ(t) are dependent; however, when conditioned on previous arrivals we have the following. If an event arrives at time s, then, independent of all arrivals prior to s, the random wait time to the next event, denoted by W_s, is distributed as
(8) 	P(W_s > t) = exp(-(Λ(s + t) - Λ(s))).
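The conditional wait-time distribution above gives a direct way to sample the next arrival after time s by inverting the survival function; a bisection sketch, assuming access to the cumulative rate function:

```python
import numpy as np

def next_arrival(cum_rate, s, horizon, rng, tol=1e-9):
    """Sample the next NHPP event time after s using
    P(W > t) = exp(-(Lambda(s + t) - Lambda(s)));
    returns None if no event occurs before `horizon`."""
    target = rng.exponential(1.0)                 # -log(U)
    if cum_rate(horizon) - cum_rate(s) < target:
        return None                               # survives past the horizon
    lo, hi = 0.0, horizon - s
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cum_rate(s + mid) - cum_rate(s) < target:
            lo = mid
        else:
            hi = mid
    return s + 0.5 * (lo + hi)

rng = np.random.default_rng(1)
Lam = lambda t: 5.0 * t                           # homogeneous special case
draws = [next_arrival(Lam, 0.0, 100.0, rng) for _ in range(4000)]
```

In the homogeneous special case the sampled waits are exponential with mean 1/5, which provides a quick correctness check.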
Appendix B Proof of Theorem 1
As described in Sec. 4, the arrival process can be split into two independent processes corresponding to non-fraudulent and fraudulent transactions. Given that the assigned classifier scores are independent from one another, per Proposition 1, we further split each of these processes into two independent subprocesses corresponding to transactions with scores higher and lower than τ, respectively, such that

[leftmargin=*]

Process with intensity function corresponds to nonfraudulent transactions with scores exceeding .

Process with intensity function corresponds to fraudulent transactions with scores exceeding .
Let denote the event that transaction arriving at time with a score higher than , belongs to process . Given Proposition 3, random variable is Bernoulli with parameter
(9) 
Given a classifier with perfect predictive power, with a properly set threshold, the number of inspections is no more than the overall number of transactions that exceed the threshold in interval , which belong to process with expected value . For too low of a threshold value, transactions with scores higher than arriving after the first ones will not be inspected due to the capacity restriction. To this end, let . As a result of Proposition 3, since the sum of
Bernoulli random variables has Binomial distribution, the probability that exactly
transactions among the first arrivals exceeding belong to process , and are therefore fraudulent, is . Random variable is .The total number of fraudulent arrivals in is , and therefore, , is given by
Appendix C Proof of Theorem 2
Given that in the setting of this paper we are interested in applications with a large number of expected arrivals in $[0, T]$, for simplicity, let us assume that the number of transactions exceeding the threshold concentrates around its expectation $\Lambda^{>\theta}$ (the expected number of transactions with scores above $\theta$ in $[0, T]$), and therefore the number of inspections is $\min(C, \Lambda^{>\theta})$. Then, from Theorem 2 it follows that for a given threshold $\theta$, the detection rate can be rewritten as follows:
$$D(\theta) = \big(1 - F_1(\theta)\big)\,\min\!\Big(1, \frac{C}{\Lambda^{>\theta}}\Big), \qquad (15)$$
where $F_1$ denotes the CDF of scores assigned to fraudulent transactions. When $C \ge \Lambda^{>\theta}$, $D(\theta) = 1 - F_1(\theta)$ is decreasing in $\theta$ since the CDF $F_1$ is increasing in $\theta$. Therefore, it is maximized for the smallest value of $\theta$ for which the condition is satisfied, i.e., $\theta^\star$ such that $\Lambda^{>\theta^\star} = C$. When $C < \Lambda^{>\theta}$, then
$$D(\theta) = \big(1 - F_1(\theta)\big)\,\frac{C}{\Lambda^{>\theta}},$$
which is less than the value achieved when $\theta = \theta^\star$. Therefore, the maximum is achieved with $\theta = \theta^\star$.
Note that eq. (15) has an intuitive interpretation. Suppose $C \ge \Lambda^{>\theta}$; then the capacity exceeds the expected number of transactions with scores above $\theta$, and a fraction $1 - F_1(\theta)$ of all fraudulent transactions is captured on average. Otherwise, when $C < \Lambda^{>\theta}$, only a fraction $C/\Lambda^{>\theta}$ of the transactions with scores above $\theta$ is captured on average, which reduces the detection rate by that factor.
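This interpretation suggests a direct numerical check. The sketch below assumes a made-up score model (non-fraud scores $\mathcal{N}(0,1)$, fraud scores $\mathcal{N}(2,1)$, fraud fraction, arrival volume, and capacity all illustrative) and writes the detection rate as $(1 - F_1(\theta))\min(1, C/\Lambda^{>\theta})$, where $\Lambda^{>\theta}$ is the expected number of score exceedances; the maximizing threshold should sit where expected exceedances match the capacity.

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

Lam, pi_f = 10_000, 0.02     # illustrative arrival volume and fraud fraction
C = 300                      # illustrative inspection capacity

# Hypothetical score model: non-fraud scores ~ N(0,1), fraud scores ~ N(2,1).
F0 = lambda th: Phi(th)
F1 = lambda th: Phi(th - 2.0)
F = lambda th: (1.0 - pi_f) * F0(th) + pi_f * F1(th)

def detection_rate(th):
    """(1 - F1(th)) * min(1, C / expected exceedances): all exceedances are
    inspected when capacity covers them, otherwise only a C/exceed fraction."""
    exceed = Lam * (1.0 - F(th))
    return (1.0 - F1(th)) * min(1.0, C / exceed)

grid = [i / 100 for i in range(-300, 500)]
best = max(grid, key=detection_rate)
# Smallest threshold whose expected exceedances fit within the capacity.
th_star = next(th for th in grid if Lam * (1.0 - F(th)) <= C)
print(best, th_star)
```

On this grid the argmax of the detection rate coincides with the smallest threshold for which expected exceedances do not exceed the capacity, matching the proof's characterization of the maximizer.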
Appendix D Proof of Theorem 5
Per Proposition 1, we denote the arrival processes of non-fraudulent and fraudulent transactions with scores exceeding critical curve $\theta_k(t)$ by $N_0^{>\theta_k}(t)$ and $N_1^{>\theta_k}(t)$, respectively, with intensities $\big(1 - F_0(\theta_k(t))\big)\lambda_0(t)$ and $\big(1 - F_1(\theta_k(t))\big)\lambda_1(t)$, where $F_0$ and $F_1$ denote the CDFs of scores assigned to non-fraudulent and fraudulent transactions, and $\lambda_0(t)$ and $\lambda_1(t)$ are the corresponding arrival intensities. Based on Proposition 3, a transaction arriving at time $t$ with a score exceeding the critical curve is fraudulent, i.e., belongs to $N_1^{>\theta_k}(t)$, with probability
$$p_k(t) = \frac{\big(1 - F_1(\theta_k(t))\big)\lambda_1(t)}{\big(1 - F_1(\theta_k(t))\big)\lambda_1(t) + \big(1 - F_0(\theta_k(t))\big)\lambda_0(t)}. \qquad (16)$$
Let $W_k$, $k = C, \dots, 1$, be the (random) waiting time, starting from the previous inspection, until the next transaction is selected for inspection when we have $k$ inspections left, i.e., the additional time until a transaction score exceeds critical curve $\theta_k(t)$. If the $j$-th inspection, $j = 1, \dots, C - 1$, happened at time $t_j$ with respect to critical curve $\theta_{C-j+1}(t)$, then the $(j+1)$-th inspection happens at time $t_j + W_{C-j}$ with respect to critical curve $\theta_{C-j}(t)$. Based on Proposition 4, $W_k$ is distributed according to (8) with intensity function $\big(1 - F_0(\theta_k(t))\big)\lambda_0(t) + \big(1 - F_1(\theta_k(t))\big)\lambda_1(t)$.
Let us denote the waiting times between inspections by $W_C, W_{C-1}, \dots, W_1$. Then, the expected fraction of fraudulent transactions that are selected for inspection follows by evaluating the probabilities (16) at the successive (random) inspection times and taking expectations.
Appendix E Proof of Theorem 5
For a given inspection capacity $C$, any of the $N$ (non-fraudulent or fraudulent) transactions is equally likely to be selected for inspection, each with probability $\min(1, C/N)$. Then, the expected number of fraudulent transactions arriving according to process $N_1(t)$ that are selected is derived as follows,
where (a) follows since the sampling is independent from the transaction being fraudulent, and (b) follows from Jensen's inequality: the function $f(x) = \min(C, x)/x$ is convex for $x > 0$, and therefore $\mathbb{E}\big[\min(C, N)/N\big] \ge \min\big(C, \mathbb{E}[N]\big)/\mathbb{E}[N]$.
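The Jensen step can be checked by simulation. The sketch below (Poisson arrival counts with an illustrative mean and capacity, stdlib only) estimates the expected inspected fraction $\mathbb{E}[\min(C, N)/N]$ under uniform random sampling and compares it against $\min(C, \mathbb{E}[N])/\mathbb{E}[N]$:

```python
import math
import random
import statistics

random.seed(2)

C = 50        # illustrative inspection capacity
mean_n = 80   # illustrative expected number of arrivals per period

def poisson(lam):
    """Knuth's multiplication method for a Poisson draw (stdlib only)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Under uniform random sampling each transaction is inspected with
# probability min(1, C/N), so the expected inspected fraction is E[min(C,N)/N].
samples = [poisson(mean_n) for _ in range(20_000)]
frac = statistics.fmean(min(C, n) / n for n in samples if n > 0)

# Jensen: f(x) = min(C, x)/x is convex for x > 0, so E[f(N)] >= f(E[N]).
jensen_bound = min(C, mean_n) / mean_n
print(round(frac, 4), ">=", round(jensen_bound, 4))
```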
Appendix F Proof of Theorem 6
We derive the detection rate using the following definition.
Definition 1.
Let $X_1, \dots, X_n$ be continuous random variables with a common PDF $f$. The ordered realizations of the random variables, sorted in increasing order as $X_{(1)} \le X_{(2)} \le \dots \le X_{(n)}$, are also random variables, and $X_{(j)}$ is referred to as the $j$-th order statistic.
As per Definition 1, we denote the sorted scores assigned to the $n_0$ non-fraudulent and $n_1$ fraudulent transactions, in increasing order, by $Y_{(1)} \le \dots \le Y_{(n_0)}$ and $Z_{(1)} \le \dots \le Z_{(n_1)}$, respectively.
Among the fraudulent transactions, consider the one with the largest score, $Z_{(n_1)}$, which is equivalent to the $n_1$-th order statistic of a sample of size $n_1$ with PDF $f_1$. Similarly, among the non-fraudulent transactions, consider the one with the largest score, $Y_{(n_0)}$, which is equivalent to the $n_0$-th order statistic of a sample of size $n_0$ with PDF $f_0$. Then, for any inspection capacity, the fraudulent transaction with the largest score is selected for inspection, only if
(17) 
Therefore, the fraction of fraudulent transactions that are inspected is given by
(18)  
(19) 
where (18) follows since the largest score among the non-fraudulent transactions is a random variable with CDF
(20) 
and the PDF of the order statistic is given by Lemma 2, provided in the following.
Lemma 2 ([Casella and Berger, 2002, Theorem 5.4.4]).
Let $X_{(1)}, \dots, X_{(n)}$ denote the order statistics of a random sample of size $n$ from a continuous distribution with CDF $F_X(x)$ and PDF $f_X(x)$. Then, the PDF of $X_{(j)}$ is
$$f_{X_{(j)}}(x) = \frac{n!}{(j-1)!\,(n-j)!}\, f_X(x)\, \big[F_X(x)\big]^{j-1} \big[1 - F_X(x)\big]^{n-j}. \qquad (21)$$
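Lemma 2 can be sanity-checked by simulation for Uniform(0,1) samples, where the PDF in (21) reduces to the Beta$(j, n-j+1)$ density with mean $j/(n+1)$, and the CDF takes the Binomial form $\sum_{k=j}^{n}\binom{n}{k}x^k(1-x)^{n-k}$. The sample size, index, and evaluation point below are arbitrary illustrative choices:

```python
import math
import random
import statistics

random.seed(3)

n, j = 10, 7   # illustrative sample size and order-statistic index

# Draw the j-th smallest of n i.i.d. Uniform(0,1) variables, repeatedly.
draws = [sorted(random.random() for _ in range(n))[j - 1] for _ in range(50_000)]

# For Uniform(0,1), the PDF in (21) is the Beta(j, n-j+1) density, whose
# mean is j / (n + 1).
emp_mean = statistics.fmean(draws)
theory_mean = j / (n + 1)

# CDF check at one point: P(X_(j) <= x) = sum_{k=j}^{n} C(n,k) x^k (1-x)^(n-k).
x = 0.6
theory_cdf = sum(math.comb(n, k) * x**k * (1.0 - x)**(n - k) for k in range(j, n + 1))
emp_cdf = sum(d <= x for d in draws) / len(draws)
print(round(emp_mean, 3), round(theory_mean, 3), round(emp_cdf, 3), round(theory_cdf, 3))
```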
Appendix G EndtoEnd Upper Bound
In this paper, we consider approaches that identify suspicious events in two steps: (1) learning a predictive classifier that discovers frauds based on the transaction features, and (2) threshold-based sampling based on the classifier scores. We have mainly focused on the second step, and compared sampling methods for a given classifier. By considering the end-to-end system that takes both steps into account simultaneously, we can define the fraud detection rate as a function of the inspection capacity, denoted by $D^{\mathrm{e2e}}(C)$, as follows:
(22) 
The following theorem provides an upper bound on $D^{\mathrm{e2e}}(C)$, which is also an upper bound on the tradeoff defined in eq. (1) of the main paper. Note that this bound is not necessarily achievable.
Theorem 1 (End-to-End Upper Bound).
For a given inspection capacity $C$, the end-to-end tradeoff satisfies
(23) 
Proof.
The ideal classifier would be able to, given an inspection budget of $C$ transactions, perfectly select (the first) fraudulent transactions as they arrive. With enough capacity, i.e., when the capacity exceeds the number of frauds $F$ in $[0, T]$, the inspector will detect all frauds without error. Therefore, the detected fraction is $\min(C, F)/F$, and the bound follows from Jensen's inequality since the function $f(x) = \min(C, x)/x$ is convex for $x > 0$, and therefore $\mathbb{E}\big[\min(C, F)/F\big] \ge \min\big(C, \mathbb{E}[F]\big)/\mathbb{E}[F]$.
∎
Appendix H Experiment Setup Details
Data Preprocessing:
The IEEE-fraud dataset contains more than one million online transactions, and each transaction contains more than features. While the original dataset contains a train set and a test set, only the train set contains fraud labels; therefore, we only use the train set, which has K samples, for our analysis. Each transaction has a timestamp feature (in seconds) that is provided with respect to a reference value. We assume that each time interval corresponds to a full day of 86,400 seconds, resulting in episodes for our experiments. We follow the analysis provided in [Deotte, 2019] for preprocessing and removing redundant features through correlation analysis, and we further engineer aggregate features that increase the local validation AUC.
As described in Sec. 5 of the paper, the original dataset contains around frauds. We create two additional datasets by modifying the number of fraudulent transactions, upsampling [Chawla et al., 2002] and downsampling (uniformly at random) the minority class. The average number of daily transactions and the average number of daily frauds are provided in Table 2.
Dataset       Class Imbalance   Average Number of     Average Number of
                                Daily Transactions    Daily Frauds
IEEE-fraud
IEEE-fraud
IEEE-fraud
Training Setup: We split the dataset into three portions by randomly selecting the days, such that there are days for training the classifier, days for estimating the arrival intensities and score distributions, and days for the empirical experiments. The parameters for training each classifier based on the area under the ROC curve (AUC) are as follows:

Gradient Boosted Decision Trees (GBDT): We use the XGBoost library to train our classifier, using a held-out portion of the data for cross-validation, with the following parameters:
{'n_estimators': 2000, 'max_depth': 12, 'learning_rate': 0.02, 'subsample': 0.8, 'colsample_bytree': 0.4, 'missing': -1, 'eval_metric': 'auc'}
Random Forests (RF): We use the scikit-learn library to train our classifier, using grid search cross-validation over the following hyperparameters in iterations, and use the best set of parameters for our final training:
{'bootstrap': [True], 'max_depth': [5, 12], 'max_features': [2, 3], 'min_samples_leaf': [3, 4, 5], 'min_samples_split': [8, 10, 12], 'n_estimators': [100, 200, 300, 1000]}
Logistic Regression (LR): We use the scikit-learn library to train our classifier, using grid search cross-validation in iterations over:
{'C': np.power(10.0, np.arange(-3, 3))}
Additional Experiment Results: In this section, we provide results similar to Fig. 4 for the Batch Processing (Fig. 5 (a), (b)) and Static Thresholds (Fig. 5 (c), (d)) methods. Both methods exhibit trends similar to the method with dynamic thresholds. Specifically, regarding the impact of class imbalance (left), for very small capacities the results are very close to the upper bound, and as the capacity increases, class imbalance impacts the detection rate more severely. Moreover, the classifier learning phase impacts the tradeoff: a classifier with inferior predictive power (low AUC) selects non-fraudulent transactions early on, and operates further from the upper bound.