Adaptive Fraud Detection System Using Dynamic Risk Features

10/10/2018 ∙ by Huiying Mao, et al. ∙ Microsoft ∙ Virginia Polytechnic Institute and State University

eCommerce transaction fraud keeps changing rapidly. This is the major issue that prevents eCommerce merchants from having a robust machine learning model for fraudulent transaction detection. The root cause of this problem is that rapidly changing fraud patterns alter the underlying data-generating system and degrade the performance of machine learning models. In statistical modeling, this phenomenon is called "concept drift". To overcome this issue, we propose an approach that adds dynamic risk features as model inputs. Dynamic risk features are a set of features built on an entity profile with fraud feedback. They are introduced to quantify the fluctuation, caused by concept drift, of the probability distribution of risk features from a given entity profile. In this paper, we also illustrate why this strategy successfully handles the effect of concept drift under a statistical learning framework. We validate our approach on multiple businesses in production and verify that the proposed dynamic model has a superior ROC curve compared with a static model built on the same data and training parameters.


1 Introduction

In the eCommerce industry, Fraud Detection Systems (FDSs) play an important role as online fraud increases and spreads rapidly. Online purchases are made in a so-called Card Not Present (CNP) environment, where no physical cards or cardholder signatures are required. This convenience, however, also generates fertile ground for cybercrime. In the CNP scenario, fraudsters only need to provide credit/debit card information to execute a purchase. If a purchase made by a fraudster succeeds, the legitimate cardholder, once the fraud is identified, can file a dispute (i.e., a chargeback) with the card-issuing bank. For CNP transactions, merchants also bear the financial liability for fraud. Therefore, an effective fraud detection system is essential for eCommerce merchants. The system needs to distinguish legitimate transactions from fraudulent ones and, most of the time, make prompt decisions (reject/approve) as required.

A simplified flowchart of a fraud detection system is depicted in Figure 1. Given an online purchase, an FDS can approve, reject, or send the transaction to the manual review team, if applicable. The decision is made based on the risk score estimated by a machine learning (ML) model. Different decisions result in different types of feedback, which are constantly looped back into the FDS to improve model performance. There are three types of fraud feedback. Different types of feedback have different delay schedules and provide different confidence levels indicating the true status of the transaction (either legitimate or fraudulent).

Figure 1: Mechanism flowchart of an FDS.
  1. Chargeback: this type of feedback often arrives late. It can take up to several weeks or months from the time the legitimate cardholder files the dispute to the time the merchant actually receives the chargeback request. Transactions that end with a chargeback request are usually labeled as fraudulent by the merchants.

  2. Manual review (MR) decisions: this type of feedback usually arrives within several minutes to a few hours. Although decisions made by MR can be subjective, MR rejections can be considered a fraudulent signal with a high level of confidence. Depending on budget constraints and practical needs, not every eCommerce business has human investigators, so not every FDS has this type of feedback.

  3. System rejects: this feedback is almost real-time from the FDS. The decision is made automatically without additional review. Therefore, the reliability of this type of feedback may be doubtful, and it needs to be used more carefully. However, in practice, a customer can always escalate a wrongful rejection by contacting customer service. The final decisions made by customer service can later be used for model evaluation and performance improvement.

The behaviors of legitimate customers and fraudsters keep evolving over time, whether intentionally or unintentionally. For example, customer behavior may change abruptly due to the launch of a marketing promotion, advertising, or new products. The behavior of online fraudsters, however, tends to adapt even more quickly. As soon as an online merchant adopts a new strategy to prevent fraud, fraudsters may easily find another weakness or loophole to exploit, or they may alternatively shift their attention elsewhere. Therefore, compared with traditional classification problems, FDSs have more challenges to overcome for eCommerce merchants.

Due to the aforementioned scenario, the stream of transactions in an eCommerce application is commonly not stationary, that is, the data are not drawn from a fixed distribution. This phenomenon is called "concept drift", and there is a wealth of research devoted to this topic; see (Andrea Dal Pozzolo and Bontempi, 2015, 2018; Joao Gama, 2013; Jing Gao, 2007; Gregory Ditzler and Polikar, 2015) and references therein.

In a non-stationary or drifting environment, a non-adaptive model trained under the false stationarity assumption would not perform well, or at worst would fail completely. The need for efficient, adaptive algorithms for learning in a drifting environment has existed since the beginning of machine learning, and it is now ever increasing, driven by the big-data phenomenon witnessed in the past decade. Research on learning with concept drift has also been growing, and many drift-aware adaptive learning algorithms have been developed; some similar adaptive strategies have been developed independently under different names in different contexts. There are two primary families of strategies, referred to as active and passive approaches (Elwell and Polikar, 2011). Active approaches have a mechanism to detect changes in the data distribution, which activates an adaptation once certain thresholds are reached. Passive approaches constantly update the model without requiring an explicit detection mechanism.

While there has been much excellent research on adaptive learning in a general setting, few published works have addressed real-life machine learning in the challenging eCommerce environment. We list some of the challenges (not exhaustive) here.

  1. A decision has to be made within a fraction of a second (typically 20-1000 milliseconds) with high accuracy. In practice, if the FDS takes too long to make a decision for a purchase, the system may time out, and generally the transaction is then approved; this opens a gate for fraudsters. If the response time is short but the decision is highly inaccurate, the result is costly, too. If too many fraudulent transactions are approved inline and many chargebacks are filed later, this incurs a huge loss to the merchants. If too many legitimate transactions are rejected, customers may choose to shop elsewhere, or customer service has to handle a high volume of escalation calls. In the presence of manual reviews, if the review agents are flooded with a huge volume of transactions, their review quality will suffer. When a fraud attack happens at the worst time, such as during the holiday season, or when the fraud detection system halts due to a software or hardware failure, the situation is no doubt exacerbated.

  2. The behavior of fraudsters often changes very rapidly, by exploiting a vulnerability of an FDS, such as using stolen payment instruments from legitimate customers to make a huge volume of purchases within a short time period.

  3. For the active strategy to work, detecting rapid changes in the data distribution effectively and efficiently often involves high computational cost, which might not be justified by the revenue gained.

The motivation for this research stemmed from many years of handling concept drift in industry and from the drawbacks of current fraud detection systems. Our main contribution in this paper is an effective solution to the typical challenges in the eCommerce industry mentioned in the previous paragraphs. Different from existing approaches, we propose a new method that uses dynamic risk features to track concept drift and, further, to build a dynamic model. Our strategy is to quantify concept drift using an entity profile with fraud feedback. This entity profile is continually updated by consuming fraud feedback signals. The statistics derived from the entity profile are then used as input features at both the training and scoring stages. This approach enables the machine learning model to be self-adaptive during concept drift. Instead of focusing on model architecture and training methodology, as in the existing literature, we focus on features that can effectively adapt to concept drift in the eCommerce industry. By keeping relatively few features, we can detect changes in the data stream without incurring high computational cost. The results were validated in a production environment, and we found that models built with this approach could effectively handle concept drift. In essence, our proposed approach is a delicate combination of active and passive strategies for learning in a non-stationary environment. In addition, experiments conducted using real data in a production setting showed that models thus built were scalable and robust.

The rest of the paper is structured as follows. Section 2 provides a brief explanation of concept drift. Section 3 gives a brief summary of approaches to adaptive learning algorithms. Section 4 introduces three types of dynamic risk features and details the dynamic modeling strategy for handling concept drift. The results of applying our model to real data are shown in Section 5. Section 6 concludes the paper with a few remarks and speculation on future research.

2 Concept Drift

In this section, we first describe a fraud detection machine learning problem in an environment where the concept drift issue exists. After specifying the root cause of concept drift, we discuss why its existence negatively impacts the performance of static models. Lastly, various statistics used to track and quantify concept drift are discussed. We refer to (Joao Gama, 2013) for a comprehensive survey of concept drift adaptations and to (Geoffrey I. Webb and Petitjean, 2016) for quantitative analysis of drift.

2.1 Definition of concept drift and two scenarios

Concept drift is the phenomenon in which the underlying data-generating system changes over time (Elwell and Polikar, 2011). We denote the features of a transaction as X and the status of this transaction as Y. X is a p-dimensional random vector, where p is the number of features that describe the transaction. Examples of features are "product name", "purchase dollar amount", "device type", and so forth. Y is a binary variable whose value is either 1 or 0, indicating that the transaction is fraudulent or legitimate, respectively. Concept drift is said to occur when

‖P_t(X, Y) − P_{t+1}(X, Y)‖ > δ,   (1)

where P_t(X, Y) represents the joint probability density function (PDF) of X and Y at time t, time t+1 refers to the next timestamp, ‖·‖ is a norm measuring the difference between the two density functions, and δ is a pre-defined threshold. Such a data-generating system with underlying concept drift is also referred to as a non-stationary environment (NSE) in the literature (e.g., (Ditzler and Polikar, 2013)).

We use P(X | Y), a conditional PDF giving the conditional probability of X given Y, to describe the likelihood of the presence of transaction features. For purchases made by good customers, the distribution of transaction features is P(X | Y = 0). Similarly, the distribution of transaction features from fraudsters is described by P(X | Y = 1). Because of

P(X, Y) = P(X | Y) P(Y),   (2)

the fluctuation of the joint PDF in (2) originates from the fluctuations of P(Y) and P(X | Y). We can divide concept drift into two scenarios:

  1. P_t(Y) ≠ P_{t+1}(Y): the overall population distribution shifts between good customers and fraudsters at the two time points t and t+1. For example, under economic prosperity at time t+1, a higher portion of purchases is made by good users compared with the regular economic status at time t. As another example, fraudsters test the FDS constantly; when they find a loophole in the system, they place many more orders, and the percentage of fraudulent transactions jumps. In other words, the change in the good/bad purchase-volume distribution leads to variation in P(Y) with respect to time t.

  2. P_t(X | Y) ≠ P_{t+1}(X | Y): the shopping features of good customers or fraudsters at time t+1 differ from their shopping features at time t. For instance, customers shift their purchases from product to product after an advertising campaign; fraudsters heavily attack a certain product when it is popular in the market.

These two scenarios are not necessarily mutually exclusive. They often occur simultaneously.

2.2 Performance degradation of static model

Supervised learning models, where the status of the subject is known, are commonly applied in an FDS. Classification and Regression Trees (CART), neural network models, and Support Vector Machines (SVM) are among the most commonly used classifiers. In supervised learning, the properties of the conditional PDF P(Y | X) are of the most interest (Jerome Friedman and Tibshirani, 2001). Under concept drift, P(Y | X) may or may not change, based on

P(Y | X) = P(X | Y) P(Y) / P(X).   (3)

The literature categorizes concept drift into "real" and "virtual" versions based on the existence of fluctuation in P(Y | X); see (Joao Gama, 2013; Jing Gao, 2007; T. Ryan Hoens and Chawla, 2012; Mark G. Kelly and Adams, 1999):

  1. Real concept drift if P_t(Y | X) ≠ P_{t+1}(Y | X).

  2. Virtual drift if P_t(Y | X) = P_{t+1}(Y | X).

This paper focuses on handling real concept drift. In the following, unless otherwise specified, "concept drift" refers to "real concept drift".

The objective of having a machine learning model in a fraud detection system is to use the model to effectively estimate the conditional probability, i.e.

F(x) ≈ P(Y = 1 | X = x).   (4)

F outputs a score or a probability, which measures the risk level of the transaction. That is, given a sufficient volume of stationary historical transactions, a sophisticated learning algorithm is used to estimate the probability P(Y = 1 | X = x) that a transaction with features x is fraudulent, where x is generated from the same stationary environment. For notational simplicity, P(Y = 1 | X = x) will from now on be denoted as P(Y | x).

When concept drift happens and is not taken into account, the probability P_{t+1}(Y | x) that a transaction with features x generated at time t+1 is fraudulent is wrongly predicted. In other words, suppose F_t is a model trained on historical data with fraud labels up to time t:

F_t = Train({X_s : s ≤ t}, {Y_s : s ≤ t}),   (5)

where {X_s : s ≤ t} represents the set of all transactions and {Y_s : s ≤ t} the set of corresponding labels. For a new transaction with features x generated at time t+1,

F_t(x) ≠ P_{t+1}(Y | x).   (6)

This inequality is the root cause of incorrect prediction. In this paper, such a model without a concept-drift-handling strategy is referred to as a "static model".

Due to concept drift, the model performs much worse than expected. The ROC curves in Figure 2 show an example comparison between in-time prediction (blue) and offline prediction (orange) from one of our business portfolios. In-time means that model performance is evaluated on a dataset sharing the same time range as the training dataset; offline means that the test dataset is collected outside the training dataset's time range.

Figure 2: Performance degradation of static model.

3 Survey of Adaptive Algorithms

In this section, we give a brief summary of approaches and algorithms that address concept drift issues which are mostly relevant to this paper. We recommend articles (Tsymbal, 2004; Joao Gama, 2013; Gregory Ditzler and Polikar, 2015) for a comprehensive treatment.

Fraud detection systems in eCommerce use adaptive learning algorithms to process high-speed streams of data. Based on whether there is a mechanism to detect changes in the data distribution, adaptive algorithms for learning in the presence of concept drift are primarily based on either an active or a passive approach. Active algorithms detect concept drift, while passive algorithms constantly update the model with new data, regardless of whether drift is present.

In an active approach to handling concept drift, there are two phases: change detection and adaptation. The change detection mechanism rarely operates directly on raw data, but instead on independent features that are often based on domain expert knowledge and are extracted from the incoming data stream. Once a change is detected, the classifier needs to adapt to the change using the newly available information and to discard obsolete information. The adaptation can be done through windowing, weighting, or random sampling (Gregory Ditzler and Polikar, 2015).

In the literature, two main approaches for change detection have been proposed and adopted, differing in the entity under analysis: one is based on the form of the distribution of the input data, and the other on the classification error. The first approach detects changes in the structure of the joint pdf, assessing the drift of the pdf of the inputs while disregarding their label values (Cesare Alippi, 2008). The second approach evaluates variations in the classification error on supervised data, using either a fixed or an adaptive threshold on classification accuracy (João Gama and Rodrigues, 2004); a change is detected as soon as the classifier's accuracy falls below the threshold.

A combination of both approaches was proposed in (Cesare Alippi, 2012). The proposed solution assesses stationarity in both the joint probability density function of the labeled data and the distribution of the inputs on unlabeled data. Dal Pozzolo et al. treated immediate feedback samples and delayed samples, whose labels are obtained only after some time, separately. They suggested training two distinct classifiers, one on each type of feedback, and then aggregating the outputs (Andrea Dal Pozzolo and Bontempi, 2015). Gao et al. proposed a method that uses an ensemble of classifiers built on sequential chunks of training samples to handle concept drift. In their approach, each classifier is trained on a short period of time and updated frequently (Jing Gao, 2007).

4 Model with dynamic risk features

In this paper, we propose a dynamic model that can overcome the model performance degradation caused by concept drift. Our strategy is to incorporate concept drift measures as dynamic risk features in model training and scoring. By design, the model learns not only from the original static features but also from the dynamic risk features, which provide an effective measurement of concept drift. As a result, the model self-adapts when concept drift happens. This approach saves the effort of constantly retraining the model to prevent performance degradation.

4.1 Measuring concept drift using entity profile with fraud feedback

As shown in Section 2, concept drift can be measured using the variation in P(Y), P(Y | X), and P(X | Y). Therefore, to track concept drift, we need to consistently monitor these probability distributions with respect to time t and estimate P_t(Y), P_t(Y | X), and P_t(X | Y). However, instantaneous estimates of these quantities are not adequate measurements of concept drift, since they could vary from the norm most of the time. Therefore, in our study, we use average probabilities within a sliding window as surrogates, and the average probabilities are approximated by frequency ratios.

To formulate the methodology: let P_t denote the probability of interest at time t, which changes over time. The average probability P̄_t, the surrogate of P_t, is defined as

P̄_t = (1/w) ∫_{t−w}^{t} P_s ds,   (7)

where w is the length of a sliding window. P̄_t is approximated through some statistic S_t, computed on the transactions in the window, by the law of large numbers. That is,

S_t ≈ P̄_t,   (8)

and thus

P_t ≈ P̄_t ≈ S_t.   (9)

The length w of the sliding window needs to be chosen carefully. It should be long enough to yield a good sample size for reliable estimation, yet short enough to be sensitive to fluctuation. Our approach is to use two sliding windows with lengths w_s and w_l (w_s < w_l). That is, at time t, statistics are calculated based on the transactions that happened within the short-term time span (t − w_s, t] and the long-term time span (t − w_l, t], respectively. The discussion below illustrates the statistics calculation only for the short-term span, as that for the long-term span is very similar.

4.1.1 Using overall fraud rate (FR) to measure P(Y)

By definition,

P_t(Y = 1) = E_t[Y],   (10)

and it is estimated by the overall fraud rate (FR) within the time window (t − w, t], which is defined as

FR = (number of fraudulent transactions in (t − w, t]) / (number of all transactions in (t − w, t]).   (11)

Besides the overall FR, the overall dollar-weighted fraud rate ($FR) is also a suitable candidate to include as a dynamic risk feature:

$FR = (dollar amount of fraudulent transactions in (t − w, t]) / (dollar amount of all transactions in (t − w, t]).   (12)

The overall $FR does not directly estimate P_t(Y = 1), but it puts more weight on higher-dollar-value transactions and helps the model learn changes in the fraud pattern.

Figure 3: Overall fraud rate with respect to time for one business partner.

Figure 3 shows the overall fraud rate calculated on the long-term sliding window (left) and the short-term sliding window (right). The solid lines are calculated at time t. There is a varying time delay before the chargeback signal is received, so for comparison we also draw the final fraud rates (dashed lines), which were calculated offline when this study was conducted. The two lines follow the same trending pattern, which justifies Formula (8). The gap between the solid and dashed lines reflects that more recent transactions had a higher percentage of chargebacks not yet received at the time of plotting.
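As a concrete illustration, the window statistics of Equations (11) and (12) can be computed as below. This is a minimal sketch: the record fields "time", "amount", and "is_fraud" are hypothetical names, not the production schema.

```python
from datetime import datetime, timedelta

def overall_fraud_rates(transactions, t, window):
    """Overall FR (Eq. 11) and dollar-weighted $FR (Eq. 12) over (t - window, t].

    `transactions` is a list of dicts with illustrative keys
    'time', 'amount', and 'is_fraud' (1 for fraud, 0 for legitimate).
    """
    in_window = [x for x in transactions if t - window < x["time"] <= t]
    if not in_window:
        return 0.0, 0.0
    n_fraud = sum(x["is_fraud"] for x in in_window)
    fraud_dollars = sum(x["amount"] for x in in_window if x["is_fraud"])
    total_dollars = sum(x["amount"] for x in in_window)
    fr = n_fraud / len(in_window)
    dollar_fr = fraud_dollars / total_dollars if total_dollars else 0.0
    return fr, dollar_fr

now = datetime(2017, 4, 6)
txns = [
    {"time": now - timedelta(days=1), "amount": 100.0, "is_fraud": 1},
    {"time": now - timedelta(days=2), "amount": 50.0, "is_fraud": 0},
    {"time": now - timedelta(days=40), "amount": 10.0, "is_fraud": 1},  # outside 4-week window
]
fr_short, dfr_short = overall_fraud_rates(txns, now, timedelta(weeks=4))
# fr_short == 0.5 (1 fraud of 2 in-window), dfr_short == 100/150
```

In practice the same function would be evaluated on both the short-term and long-term windows at every update timestamp.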

4.1.2 Using fraud rate of each transaction entity to measure P(Y | X)

In practice, a transaction entity is often described by a set of features. Therefore, we propose measuring fraud rates profiled on important features. Suppose E is the transaction entity, with associated feature values e_1, e_2, …, e_k. Within the time window (t − w, t], the numbers of good and bad transactions and their dollar amounts for each entity value, together with the marginal totals, are shown in Table 1.

             e_1         e_2         …    e_k         Total
Bad (Y=1)    n_1(e_1)    n_1(e_2)    …    n_1(e_k)    n_1
Good (Y=0)   n_0(e_1)    n_0(e_2)    …    n_0(e_k)    n_0
Total        n(e_1)      n(e_2)      …    n(e_k)      n

Table 1: Transaction count and dollar amount distribution profiled on entity E within time window (t − w, t].

Here n_1(e_i) (respectively, the dollar amount a_1(e_i)) represents the number (dollar amount) of fraudulent transactions with feature value e_i, i = 1, …, k, within the time window; n_0(e_i) (a_0(e_i)) is the number (dollar amount) of good transactions with feature value e_i within the same period. Their marginal totals are calculated by

n_1 = Σ_i n_1(e_i),  n_0 = Σ_i n_0(e_i),   (13)
a_1 = Σ_i a_1(e_i),  a_0 = Σ_i a_0(e_i).   (14)

Thus, the fraud rate and dollar-weighted fraud rate for entity value e_i within the time window are:

FR(e_i) = n_1(e_i) / (n_1(e_i) + n_0(e_i)),   (15)
$FR(e_i) = a_1(e_i) / (a_1(e_i) + a_0(e_i)).   (16)

These are the risk features used to measure P(Y | X).

Figure 4 shows the fraud rates of a selected object. For example, we can select "product name" as the entity and one particular product as the selected object. The red line is the fraud rate calculated over 4-week sliding windows, and the black line over 8-week sliding windows. As we can see, this object was under attack around November 2016. The FR calculated within the 4-week sliding window is more sensitive to a fraud attack. When the short-term FR is higher than the long-term FR, it indicates that fraudsters have been attacking this product recently. On the other hand, if the short-term FR is lower, it means fraudsters have diverted their attention to other products. The difference between long-term and short-term FR also reflects the severity of the fraud attack.

Figure 4: Entity profiled fraud rate for a selected object.
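The entity-profiled fraud rate of Equation (15), computed on both windows, can be sketched as follows; the two-window comparison flags a recent attack when the short-term rate exceeds the long-term one. Field names are illustrative assumptions, not the production schema.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def entity_fraud_rates(transactions, t, window):
    """FR(e_i) per entity value over (t - window, t], in the sense of Eq. (15).

    Each transaction is a dict with hypothetical keys 'time', 'entity', 'is_fraud'.
    """
    counts = defaultdict(lambda: [0, 0])  # entity value -> [n_fraud, n_total]
    for x in transactions:
        if t - window < x["time"] <= t:
            c = counts[x["entity"]]
            c[0] += x["is_fraud"]
            c[1] += 1
    return {e: fraud / total for e, (fraud, total) in counts.items()}

now = datetime(2016, 11, 15)
txns = [
    {"time": now - timedelta(days=3), "entity": "product_X", "is_fraud": 1},
    {"time": now - timedelta(days=5), "entity": "product_X", "is_fraud": 1},
    {"time": now - timedelta(weeks=6), "entity": "product_X", "is_fraud": 0},
    {"time": now - timedelta(days=2), "entity": "product_Y", "is_fraud": 0},
]
fr_short = entity_fraud_rates(txns, now, timedelta(weeks=4))
fr_long = entity_fraud_rates(txns, now, timedelta(weeks=8))
# product_X: short-term FR (1.0) above long-term FR (2/3) -> recent-attack signal
```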

4.1.3 Using Weight of Evidence on entity feature values, WoE(e_i), to measure P(X | Y)

Similarly, P(X | Y) could in principle be tracked directly, but

X = (x_1, x_2, …, x_p)   (17)

is high-dimensional. We propose to use the estimate of

log [ P(e_i | Y = 1) / P(e_i | Y = 0) ]   (18)

to measure P(X | Y). The estimate is known as the Weight of Evidence (WoE) in credit risk modeling (Anderson, 2007), formulated by

WoE(e_i) = log [ (n_1(e_i) / n_1) / (n_0(e_i) / n_0) ],   (19)
$WoE(e_i) = log [ (a_1(e_i) / a_1) / (a_0(e_i) / a_0) ],   (20)

where n_1(e_i), n_0(e_i), a_1(e_i), a_0(e_i) and their marginal totals are the counts and dollar amounts of bad and good transactions within the time window (t − w, t], as shown in Table 1.

For a particular entity feature value e_i, WoE(e_i) is the difference between the log odds of e_i and that of the overall population:

WoE(e_i) = log [ n_1(e_i) / n_0(e_i) ] − log [ n_1 / n_0 ].   (21)

Therefore, when WoE(e_i) > 0, the odds of feature value e_i being fraudulent are higher than the average; when WoE(e_i) < 0, they are lower. That is to say, WoE provides an indication of the fraudsters' attack target.

For example, in Figure 5, among the 10 products A-J in the market, Product B was targeted by fraudsters on March 1st, when its WoE was above zero. As time went by, Product B was no longer the attack object; instead, Product G became the attack target on April 15.

Figure 5: Snapshots of long-term and short-term WoE distribution for 10 products.
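A minimal sketch of the WoE computation in the sense of Equations (19) and (21); the per-entity count dictionaries are an assumed input representation, and zero cells are simply skipped here where production code would apply smoothing.

```python
import math

def weight_of_evidence(bad_counts, good_counts):
    """WoE(e_i) = log(n_1(e_i)/n_0(e_i)) - log(n_1/n_0), per Equation (21).

    bad_counts / good_counts: dicts mapping entity value -> transaction count
    within the sliding window (illustrative representation).
    """
    n1 = sum(bad_counts.values())   # total bad transactions
    n0 = sum(good_counts.values())  # total good transactions
    woe = {}
    for e in set(bad_counts) | set(good_counts):
        b, g = bad_counts.get(e, 0), good_counts.get(e, 0)
        if b == 0 or g == 0:
            continue  # log undefined for zero cells; smoothing omitted for brevity
        woe[e] = math.log(b / g) - math.log(n1 / n0)
    return woe

# Product "B" carries a disproportionate share of fraud -> positive WoE.
woe = weight_of_evidence({"A": 10, "B": 10}, {"A": 90, "B": 10})
# woe["B"] > 0 > woe["A"]
```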

One final note for this subsection: the selection of the entity feature is pivotal. Important features include product name, account email domain, billing country, merchant name, and so forth. A feature should be selected such that the number of distinct feature values is neither too few nor too many. If too few, each feature value is too broad for the probability fluctuation to show up. For example, if "market country" is chosen as the feature, then fraudsters attacking different products within the same country won't appear in the tracking. Likewise, with too many feature values, the frequency-ratio approximations for each value become unstable.

4.2 Constructing dynamic risk features as model inputs

So far, we have shown multiple tracking statistics that together provide a comprehensive description of concept drift. In this subsection, we show how to use these statistics to construct a set of dynamic risk features and use those features as model inputs.

Suppose {E_1, E_2, …, E_m} is the set of entities chosen. The dynamic risk feature set at time t is constructed as

D_t = { FR(e), $FR(e), WoE(e), $WoE(e) : e a value of E_j, j = 1, …, m, computed on both windows w_s and w_l },   (22)

together with the overall FR and $FR. To approximate real-time fluctuation, dynamic risk features need to be updated relatively frequently. Denote the pre-defined updating time stamps for the dynamic risk features as t_1 < t_2 < ⋯. At every time stamp t_j, the dynamic risk feature set D_{t_j} is calculated based on the transactions that happened within the sliding windows (t_j − w_s, t_j] and (t_j − w_l, t_j]. Then, for transactions happening within the time span (t_j, t_{j+1}], D_{t_j} becomes part of the transaction's attributes, matched on the entity value, as illustrated in Figure 6.

Figure 6: Attaching logic of assembling transactions.

Therefore, for a transaction with features x happening at time t ∈ (t_j, t_{j+1}], the FDS provides a set of associated dynamic risk features D_{t_j}(x). Since these features are calculated relatively frequently (the difference between t and t_j is small), we use D_t(x) and D_{t_j}(x) interchangeably to denote the dynamic risk features associated with transaction features x. Denote the assembled transaction as

x̃ = (x, D_t(x)),   (23)

where x̃ is (p + q)-dimensional: p is the number of original features and q is the number of dynamic risk features. x and x̃ refer to the same transaction; the only difference is that x̃ is described by both dynamic risk features and static features, while x has only static features.

Let D_t denote the collection of dynamic risk features calculated prior to time t, i.e.

D_t = { D_{t_j} : t_j ≤ t }.   (24)

Denote by F̃_t the model applied in production based on the training dataset of assembled transactions:

F̃_t = Train({X̃_s : s ≤ t}, {Y_s : s ≤ t}).   (25)

We call F̃_t a "dynamic model" since it includes the dynamic risk features as training inputs. At time t+1, for a transaction x̃ = (x, D_{t+1}(x)), the output score from the dynamic model, because the assembled features are used as inputs, satisfies

F̃_t(x̃) ≈ P_{t+1}(Y = 1 | X = x).   (26)

The justification for Equation (26) comes from the indicating power of D_{t+1}(x) for P_{t+1}, as shown in Equation (9).

With this approach, the dynamic model can overcome the concept drift phenomenon and predict the probability of a transaction being fraudulent at time t+1 more accurately, without having to retrain the existing model between times t and t+1.
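The assembling step of Equation (23) amounts to a feature join keyed on the entity value; a minimal sketch, with hypothetical field names rather than the production schema, is:

```python
def assemble(transaction, dynamic_features, entity_key="product_name"):
    """Attach the most recently computed dynamic risk features D_{t_j}
    to a transaction's static features (the assembled x-tilde of Eq. 23).

    `dynamic_features` maps entity value -> dict of risk features; all
    field names here are illustrative assumptions.
    """
    defaults = {"fr_short": 0.0, "fr_long": 0.0, "woe_short": 0.0}
    d = dynamic_features.get(transaction[entity_key], defaults)
    assembled = dict(transaction)  # static features x
    assembled.update(d)            # dynamic risk features D_t(x)
    return assembled

txn = {"product_name": "widget", "amount": 19.99}
risk = {"widget": {"fr_short": 0.4, "fr_long": 0.1, "woe_short": 1.2}}
xt = assemble(txn, risk)
# xt carries both the static fields and the joined dynamic risk features
```

At scoring time the same join runs against the latest pre-computed feature table, so no model retraining is needed between updates.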

5 Model Validation Using Real Data

Online purchase transaction data from two of Microsoft's business partners were used to validate our approach. Within each, we randomly selected 70% of the transactions for training and the remaining 30% for testing. For the same business partner's data, the static and dynamic models were trained on the same set of transactions and evaluated on the same test dataset. FastTree, known as an efficient implementation of the MART gradient boosting algorithm [2], was used as our model training algorithm. The configuration for the dynamic model is:

  1. Entity: product name, account email domain, billing country code, device type + currency + the first three digits/letters of a SKU;

  2. Sliding window size: 4 weeks and 8 weeks;

  3. Dynamic feature update frequency: daily.
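A rough stand-in for this experimental setup (not the production pipeline): scikit-learn's GradientBoostingClassifier substitutes for FastTree, since both implement MART-style gradient boosting, and the data below are synthetic, with one assumed dynamic risk feature appended to the static features.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
static = rng.normal(size=(n, 5))               # stand-in static features x
fr_short = rng.uniform(0.0, 0.3, size=(n, 1))  # stand-in dynamic risk feature
# Synthetic labels: fraud probability driven partly by the dynamic feature,
# so that feature carries signal, as the dynamic risk features do in the paper.
y = (rng.uniform(size=n) < 0.05 + fr_short[:, 0]).astype(int)

X_dynamic = np.hstack([static, fr_short])      # assembled features x-tilde
X_tr, X_te, y_tr, y_te = train_test_split(
    X_dynamic, y, train_size=0.7, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]       # risk scores in [0, 1]
```

The static baseline would be the same pipeline trained on `static` alone; comparing the two ROC curves on the held-out split mirrors the evaluation reported below.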

Figure 7 and Figure 8 show the ROC curve comparison between the static and dynamic models for business partner 1 and business partner 2, respectively. The ROC curves were generated from the test datasets. The blue line represents the dynamic model, and the orange line the static model. The lifts confirm that the dynamic model is superior for both business partners. For partner 1, while keeping the false positive rate (FPR) at 0.5% for both models, the dynamic model increases the true positive rate (TPR) by 2.23% relatively; if the TPR is controlled at 84.4%, the dynamic model reduces the FPR by about 20% relatively. For partner 2, at an FPR of 0.5%, the dynamic model increases the TPR by about 12.3% relatively; at a TPR of 41.6%, it reduces the FPR by about 31.1% relatively. We also found that dynamic risk features appeared among the top features, with high information values.

Figure 7: ROC curve comparison for partner 1.
Figure 8: ROC curve comparison for partner 2.

Figure 9 shows fraudulent transactions that the dynamic model caught but the static model missed. The red curve is the fraud rate for one entity value calculated over 4-week sliding windows, and the black curve over 8-week sliding windows. The system was under fraud attack on 04/06/2017, when the red line is significantly higher. Some fraudulent transactions are shown in the table on the right with their corresponding scores: D_Score is the score given by the dynamic model and S_Score the score given by the static model. If we had used the dynamic score with a cutoff of 85, we would have caught most of the frauds that the static model missed.

Figure 9: Fraudulent transactions dynamic model catches.

Returning to Figure 2: due to concept drift, the static model performed much worse than expected on offline data. On a similar dataset, Figure 10 shows that the performance degradation of the dynamic model, comparing in-time prediction with offline validation, is much smaller.

Figure 10: Performance degradation of dynamic model.

6 Discussion

The entity profile with fraud feedback is a critical feature that brings the most recent fraud patterns for the dynamic model to consume. The selection of features for the entity profile is crucial. This selection can be treated as a problem of partitioning the data space. If, for example, product title is the chosen entity, the data space will be cut into as many pieces as there are products. Since FR(e_i) and WoE(e_i) are calculated for each piece, the partition of the data space should be neither too coarse nor too fine. If the cut is too coarse, concept drift won't be captured. For example, if we choose country as the main entity feature, it will cut the data space into a USA part, a Mexico part, a China part, etc.; the dynamic risk features calculated on countries will not reflect fraud targets shifting within each country. If the cut is too fine, the estimates of FR and WoE will not be stable. How to wisely divide the sample space is an interesting topic worth further research.

The selection of the sliding window sizes is another important factor to consider. We selected the long-term and short-term sliding windows based on experience. This works well, but finding a systematic algorithm for automatic window selection would be an interesting topic. We have also explored long-term/short-term cascade modeling, which helps build a solid model that catches newly emerging fraud patterns without impacting the detection power on existing ones.

We mainly use ROC curves to validate our dynamic approach to adaptive learning, since they are a standard evaluation for binary classifiers. Other measures, such as revenue gained or losses prevented, could be interesting from a business point of view; however, such measures may not be comparable across different businesses.

As mentioned in the introduction, our approach in this paper combines active and passive strategies for handling concept drift in adaptive learning. We select dynamic risk features to build a lightweight detection mechanism for fraud pattern changes using signals from both the input and the output. The approach has been validated in real operational settings and proved effective; cf. Section 5 and Gama et al. (2013). Although this paper focuses on handling real concept drift, as described in Subsection 2.2, our approach works for virtual drift as well.

References


  • Anderson, R. (2007). The Credit Scoring Toolkit: Theory and Practice for Retail Credit Risk Management and Decision Automation. Oxford University Press.
  • Dal Pozzolo, A., Boracchi, G., Caelen, O., Alippi, C., and Bontempi, G. (2015). Credit card fraud detection and concept-drift adaptation with delayed supervised information. The 2015 International Joint Conference on Neural Networks (IJCNN), pages 1–8.
  • Dal Pozzolo, A., Boracchi, G., Caelen, O., Alippi, C., and Bontempi, G. (2018). Credit card fraud detection: a realistic modeling and a novel learning strategy. IEEE Transactions on Neural Networks and Learning Systems, 29(8):3784–3797.
  • Alippi, C. and Roveri, M. (2008). Just-in-time adaptive classifiers—part I: Detecting nonstationary changes. IEEE Transactions on Neural Networks, 19(7):1145–1153.
  • Alippi, C., Boracchi, G., and Roveri, M. (2012). Just-in-time ensemble of classifiers. The 2012 International Joint Conference on Neural Networks (IJCNN), pages 1–8.
  • Ditzler, G. and Polikar, R. (2013). Incremental learning of concept drift from streaming imbalanced data. IEEE Transactions on Knowledge and Data Engineering, 25(10):2283–2301.
  • Elwell, R. and Polikar, R. (2011). Incremental learning of concept drift in nonstationary environments. IEEE Transactions on Neural Networks, 22(10):1517–1531.
  • Webb, G. I., Hyde, R., Cao, H., Nguyen, H. L., and Petitjean, F. (2016). Characterizing concept drift. Data Mining and Knowledge Discovery, 30(4):964–994.
  • Ditzler, G., Roveri, M., Alippi, C., and Polikar, R. (2015). Learning in nonstationary environments: A survey. IEEE Computational Intelligence Magazine, 10(4):12–25.
  • Hastie, T., Tibshirani, R., and Friedman, J. (2001). The Elements of Statistical Learning, volume 1. Springer Series in Statistics, New York.
  • Gao, J., Fan, W., Han, J., and Yu, P. S. (2007). A general framework for mining concept-drifting data streams with skewed distributions. Proceedings of the 2007 SIAM International Conference on Data Mining, pages 3–14.
  • Gama, J., Žliobaitė, I., Bifet, A., Pechenizkiy, M., and Bouchachia, A. (2013). A survey on concept drift adaptation. ACM Computing Surveys, 1(1):1–44.
  • Gama, J., Medas, P., Castillo, G., and Rodrigues, P. (2004). Learning with drift detection. In Advances in Artificial Intelligence – SBIA 2004, volume 3171 of Lecture Notes in Computer Science. Springer, Berlin, Heidelberg.
  • Kelly, M. G., Hand, D. J., and Adams, N. M. (1999). The impact of changing populations on classifier performance. Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 367–371.
  • Hoens, T. R., Polikar, R., and Chawla, N. V. (2012). Learning from streaming data with concept drift and imbalance: an overview. Progress in Artificial Intelligence, 1(1):89–101.
  • Tsymbal, A. (2004). The problem of concept drift: Definitions and related work. Technical report, Department of Computer Science, Trinity College, Dublin, Ireland.