1 Framework and Terminology
For analyzing the problem of training set formation under concept drift, we adopt the following framework.
A sequence of instances is observed, one instance at a time, not necessarily at equally spaced time intervals. Let X_t be a vector in a p-dimensional feature space observed at time t, and let y_t be the corresponding label. For classification y_t ∈ {c_1, …, c_k}, for prediction y_t ∈ R. We call X_t an instance and the pair (X_t, y_t) a labeled instance. We refer to instances X_1, …, X_t as historical data and to instance X_{t+1} as the target (or testing) instance.
1.1 Incremental Learning with Concept Drift
We use the incremental learning framework. At every time step t we have labeled historical data X_1, …, X_t available. A target instance X_{t+1} arrives. The task is to predict a label y_{t+1}. For that we build a learner L_t, using all or a selection of the available historical data. We apply the learner to predict the label for X_{t+1}. The prediction process at time step t is illustrated in Figure 1. That is one time step.
At the next step, after the classification or prediction decision is cast, the true label y_{t+1} becomes available. Now the instance with its label becomes part of the historical data. The next testing instance X_{t+2} is observed. We picture a fragment of the incremental learning loop in Figure 2, where the classifier training phase at time t is zoomed in. Training set formation strategies are the subject of our investigation. They are depicted as a ‘black box’ in the figure.
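The loop just described can be sketched in a few lines of Python. The learner interface (a trivial majority-label model) and the `select_training_set` placeholder are illustrative assumptions, standing in for the ‘black box’ of training set formation:

```python
# A minimal sketch of the incremental learning loop under concept drift.
# `select_training_set` is the 'black box' under investigation; here it is
# a placeholder strategy that simply keeps all labeled history.
from collections import Counter

def select_training_set(history):
    """Placeholder strategy: use all labeled historical data."""
    return history

def majority_label(pairs):
    """A trivial stand-in 'learner': predict the most frequent label seen."""
    labels = [y for _, y in pairs]
    return Counter(labels).most_common(1)[0][0]

def incremental_loop(stream):
    """stream yields (x_t, y_t); the label arrives only after prediction."""
    history = []          # labeled instances (X_i, y_i), i = 1..t
    predictions = []
    for x, y in stream:
        if history:                                # build learner L_t
            train = select_training_set(history)
            predictions.append(majority_label(train))
        else:
            predictions.append(None)               # no model yet
        history.append((x, y))                     # label becomes available
    return predictions

preds = incremental_loop([(0, 'a'), (1, 'a'), (2, 'b'), (3, 'a')])
print(preds)  # [None, 'a', 'a', 'a']
```

Replacing `select_training_set` with a windowing or instance selection strategy is exactly the design choice studied in the following sections.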
Every instance X_t is generated by a source S_t. We delay a more formal definition of a source until the next section; for now assume that it is a distribution over the data. If all the data is sampled from the same source, i.e. S_1 = S_2 = … = S_{t+1} = S, we say that the concept is stable. If for any two time points t_i and t_j we have S_{t_i} ≠ S_{t_j}, we say that there is a concept drift.
Note that random noise (deviation) is not considered to be concept drift, because the data generating source is still the same.
The core assumption when dealing with the concept drift problem is uncertainty about the future. We assume that the source S_{t+1} of the target instance X_{t+1} is not known with certainty. It can be assumed, estimated or predicted, but there is no certainty. Otherwise the data can be decomposed into two separate data sets and learned as individual models or in a combined manner (in which case it is a multitask learning problem).
We do not consider known periodic seasonality to be a concept drift problem. But if the seasonality is not known with certainty, we consider it a concept drift problem. For instance, a peak in sales of ice cream is associated with summer, but it can start at a different time every year depending on the temperature and other factors; therefore it is not known exactly when the peak will start.
1.2 Causes of a concept drift
Before looking at what can actually cause the drift, let us return to the source and provide a more rigorous definition of it.
A classification problem, independently of the presence or absence of concept drift, may be described as follows. Let X ∈ R^p be an instance in a p-dimensional feature space and y ∈ {c_1, …, c_k}, where {c_1, …, c_k} is the set of class labels. The optimal classifier to classify X is completely determined by the prior probabilities for the classes P(c_i) and the class-conditional probability density functions (pdf) p(X|c_i), i = 1, …, k. We define the set of prior probabilities of the classes and the class-conditional pdf’s as a concept or data source: S = {(P(c_1), p(X|c_1)), …, (P(c_k), p(X|c_k))}. When referring to a particular source at time t we will use the term source and denote it S_t, while when referring to a fixed set of prior probabilities of the classes and class-conditional pdf’s we will use the term concept.
Recall that in Bayesian decision theory the classification decision for an instance X, at equal costs of mistake, is made based on the maximal a posteriori probability, which for a class c_i is: P(c_i|X) = P(c_i) p(X|c_i) / p(X), (2) where p(X) is the evidence of X, which is constant for all classes i = 1, …, k.
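As a numeric illustration of this a posteriori rule, the sketch below computes the posterior from assumed priors and class-conditional densities; the Gaussian form and its parameters are hypothetical choices for the example, not part of the framework:

```python
import math

def gaussian_pdf(x, mean, std):
    """Class-conditional pdf p(x | c_i), assumed Gaussian for illustration."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def posterior(x, priors, params):
    """P(c_i | x) = P(c_i) p(x | c_i) / p(x); the evidence p(x) normalizes."""
    joint = [p * gaussian_pdf(x, m, s) for p, (m, s) in zip(priors, params)]
    evidence = sum(joint)                 # constant across classes
    return [j / evidence for j in joint]

# Two classes with equal spread; class 1 is twice as likely a priori.
post = posterior(x=0.0, priors=[2/3, 1/3], params=[(-1.0, 1.0), (1.0, 1.0)])
print([round(p, 3) for p in post])  # [0.667, 0.333]: symmetric likelihoods cancel
```

Because the likelihoods at x = 0 are symmetric, the posterior reduces to the priors, which makes the role of each term in the rule easy to see.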
As first presented by Kelly et al., concept drift may occur in three ways.
Class priors P(c_i) might change over time.
The class-conditional distributions p(X|c_i) of one or several classes might change.
The posterior distributions P(c_i|X) of the class memberships might change.
Note that the distributions might change in such a way that the class membership is not affected (e.g. a symmetric movement in opposite directions).
Sometimes a change in p(X|c_i) (independently of whether it affects P(c_i|X) or not) is referred to as virtual drift, and a change in P(c_i|X) is referred to as real drift. We argue that from a practical point of view it is not essential whether the drift is real or virtual, since P(c_i|X) depends on p(X|c_i) as in Equation (2). In this thesis, from now on, we do not make a distinction between real and virtual drifts.
2 How Do Concept Drift Learners Work?
Following the framework set up in the previous section, the learner should provide the most accurate generalization for the data at time t+1. In order to build such a learner, four main design sub-problems need to be solved.
- A.1 Future assumption:
a designer needs to make an assumption about the future data source S_{t+1}.
- A.2 Change type:
a designer needs to identify possible change patterns.
- A.3 Learner adaptivity:
based on the change type and the future assumption a designer chooses the mechanisms which make the learner adaptive.
- A.4 Model selection:
a designer needs a criterion to choose a particular parametrization of the selected learner at every time step (e.g. the weights for ensemble members, the window size for a variable window method).
All these sub-problems are the choices to be made when designing a learner. In Figure 3 we depict a positioning of each design sub-problem within the established learning framework.
In the next subsections we discuss each of the design sub-problems individually.
2.1 Future assumption
The future assumption is the assumption to be made about the source S_{t+1} of the target instance X_{t+1}. We identify three types of choices here.
Assuming that S_{t+1} = S_t.
Estimating the source S_{t+1} based on the target instance X_{t+1}.
Predicting the change.
The first option, assuming S_{t+1} = S_t, is the most common among concept drift learners, although it is rarely explicitly stated. It is assumed that in the nearest future we will see data coming from the same source as we saw in the near past.
The second option utilizes information from the unlabeled target instance X_{t+1}. Estimation of the source is usually done by measuring the distance between X_{t+1} and historical reference instances. The algorithms presented in [148, 123, 37, 97] use this future assumption.
Generally, in the concept drift problem the future data source is not known with certainty. However, there are methods that use trainable prediction rules to estimate the future state and incorporate that estimation into the incremental learning process. The algorithms using future predictions are presented in [24, 157, 161, 21].
2.2 Change types
In Section 1.2 we identified the causes of a drift, i.e. what happens to the data generating source itself. Here, by change types we mean the configuration patterns of the data sources over time. The structural types of change are usually defined based on those configurations.
For an intuitive explanation, let us for now restrict the number of possible sources over time to two: S1 and S2.
The simplest pattern of change is sudden drift, when at time t0 a source S1 is suddenly replaced by a source S2. For example, Kate is reading the news. Her sudden interest in meat prices in New Zealand, when she gets an assignment to write an article, is a sudden drift.
Gradual drift is another type often met in the literature. However, in fact two types are mixed under this term. The first type of gradual drift refers to a period when both sources S1 and S2 are active (e.g. [141, 154, 112]). As time passes, the probability of sampling from source S1 decreases, while the probability of sampling from source S2 increases. Note that at the beginning of this gradual drift, before more instances are seen, an instance from the source S2 might easily be mixed up with random noise.
Another type of drift, also referred to as gradual, includes more than two sources; however, the difference between the sources is very small, thus the drift is noticed only when looking at a longer time period (e.g. [148, 37, 5]). We refer to the former type as gradual and to the latter as incremental (or stepwise) drift. For example, gradual drift is Kate's increasing interest in real estate: she prefers real estate news more and more over time as her interest in buying a flat increases.
Finally, there is another big type of drift, referred to as reoccurring context. That is when a previously active concept reappears after some time. It differs from the common notion of seasonality in that it is not certainly periodic; it is not clear when the source might reappear. In Kate’s example these are the biographies of Formula 1 drivers. The interest is related to the schedule of the races. But she does not look up the biographies at the time of the races, because she is watching them at that time. She might want to look them up later, in the middle of the week. And the particular drivers she will be interested in might depend on who won the races this time.
In Figure 4 we give an illustration of the main structural drift types, assuming one dimensional data, where a source is characterized by the mean of the data. We depict only the data from one class.
Note that the types of drifts discussed here are not exhaustive. If we think of a data segment of length t and just two data generating sources S1 and S2, the number of possible combinations of the sources (that is, of possible change patterns) would be 2^t, a lot. Moreover, in concept drift research it is often assumed that the data stream is endless, thus there could be an infinite number of possible change patterns. We define only the major structural types, since we argue that an assumption about the change types is absolutely needed for designing adaptivity strategies.
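The structural types above can be mimicked with a toy one-dimensional generator, in the spirit of Figure 4. The concrete means, change point t0 and schedules are illustrative assumptions:

```python
import random

def sudden(t, t0=50):
    """Source S1 (mean 0) is abruptly replaced by S2 (mean 5) at t0."""
    return 0.0 if t < t0 else 5.0

def gradual(t, t0=50, width=30):
    """Both sources active; the probability of sampling S2 grows over time."""
    p2 = min(max((t - t0) / width, 0.0), 1.0)
    return 5.0 if random.random() < p2 else 0.0

def incremental(t, t0=50, width=30):
    """Many slightly different sources: the mean drifts in small steps."""
    return 5.0 * min(max((t - t0) / width, 0.0), 1.0)

def reoccurring(t, period=40):
    """A previous concept reappears, not necessarily strictly periodic."""
    return 0.0 if (t // period) % 2 == 0 else 5.0

means = [sudden(t) for t in (0, 49, 50, 99)]
print(means)  # [0.0, 0.0, 5.0, 5.0]
```

Note the defining difference in code: gradual drift mixes samples from two fixed sources, while incremental drift moves through a continuum of sources.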
Recently there has been an attempt to categorize change types into mutually exclusive categories based on the number of reoccurrences, severity, speed and predictability. In principle, the proposed categorization tries to quantify the main aspects of the learner design process into a change categorization. We argue that the categories cannot be mutually exclusive, because the change frequency count, speed and severity are relative to the length of the subsequence at which one is looking. Thus we restrict our categorization to very few qualitative categories. We reserve predictability as a part of the learner design process (the future assumption), not of the change itself.
2.3 Learner adaptivity
We identify four main adaptivity areas:
Adaptive training set formation (e.g. training windows, instance selection) can be employed, which is the scope and focus of this thesis. Training set formation can be decomposed into
training set selection,
training set manipulation (e.g. bootstrapping, noise),
feature set manipulation.
The adaptivity strategies which are based on training set selection can be generally divided into windowing (selecting training instances consecutive in time) and instance selection (when instances that are not necessarily sequential in time are selected as a training set). The choice of adaptivity strategy strongly depends on the assumption about the change type, discussed in the previous section. For sudden drift, windowing strategies are generally preferred, while for gradual drift and reoccurring contexts, instance selection strategies are preferred.
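The two families of training set selection strategies can be contrasted in a short sketch; the one-dimensional distance rule for instance selection is a simplified illustration, not a prescribed similarity measure:

```python
def window_selection(history, window_size):
    """Windowing: take the most recent instances, consecutive in time."""
    return history[-window_size:]

def instance_selection(history, target_x, k):
    """Instance selection: take the k instances most similar to the target,
    regardless of their position in time (here: 1-D absolute distance)."""
    ranked = sorted(history, key=lambda pair: abs(pair[0] - target_x))
    return ranked[:k]

history = [(0.1, 'a'), (5.0, 'b'), (0.2, 'a'), (5.1, 'b'), (0.3, 'a')]
print(window_selection(history, 2))          # the two newest instances
print(instance_selection(history, 5.05, 2))  # the two nearest to the target
```

On this toy history the window keeps whatever came last, while instance selection recovers the instances relevant to the target even though they are scattered in time, which is why the latter suits gradual drift and reoccurring contexts.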
2.4 Model selection
In this thesis we use the generalization error as the primary measure of concept drift learner performance. Thus for model selection (training) purposes, the procedure for estimating the expected generalization error for the target instance X_{t+1} at every time step needs to be defined. The two main options are:
theoretical evaluation of the generalization error, and
estimation of the generalization error using cross validation.
In any case, the choice of error estimation is strongly related to the future assumption, because it depends on the expectation regarding the future data source S_{t+1}.
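A simple stand-in for such empirical error estimation is to evaluate the candidate learner on a recent labeled holdout, under the assumption that S_{t+1} is close to S_t; the threshold model and data below are purely hypothetical:

```python
def recent_holdout_error(model_predict, history, holdout=20):
    """Estimate the expected error for X_{t+1} by evaluating the learner on
    the most recent labeled instances, assuming S_{t+1} is close to S_t."""
    tail = history[-holdout:]
    mistakes = sum(1 for x, y in tail if model_predict(x) != y)
    return mistakes / len(tail)

# Hypothetical threshold model on 1-D data and a short labeled history.
model = lambda x: 'a' if x < 0.5 else 'b'
history = [(0.2, 'a'), (0.8, 'b'), (0.3, 'b'), (0.9, 'b')]
print(recent_holdout_error(model, history, holdout=4))  # 0.25: one mistake in four
```

Such an estimate can then drive model selection, e.g. picking the window size or ensemble member weights that minimize the recent error.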
The design process of a concept drift learner is graphically illustrated in Figure 5 (a). We see relations (1) and (2) as the key issues in designing concept drift learners: (1) the strategies selected to make the learners adaptive strongly depend on the assumption about the change type present in the data; (2) the model selection and evaluation strategies strongly depend on the assumption about the future data source on which the learner will be applied.
It is common to categorize concept drift learners into two major groups:
learner adaptivity is initiated by a trigger (or active change detector), and
a learner regularly evolves independently of the alarms or detectors.
The two categories can be positioned within the design framework we have just defined. In the first group the initiation of learner adaptivity comes from the ‘change type’ block, while in the second group the adaptivity is based on the ‘model evaluation and selection’ block. The process is illustrated in Figure 5 (b).
We will give more details about the categories of the drift learners in the next section, where we overview the related work.
3 Taxonomy of Available Concept Drift Learners
In this section we overview and map the related work. This section is intended to give a general view; the approaches specifically related to our work will be presented in the corresponding chapters. The overview concentrates on supervised learning under concept drift.
Schlimmer and Granger in 1986 formulated the problem of incremental learning from noisy data and presented an adaptive learning algorithm, STAGGER. They are the authors of the term ‘concept drift’. Since then a number of studies dealing with the concept drift problem have appeared. There were three ‘peaks’ in interest: one around 1998, followed by a special issue of the Machine Learning journal, another around 2004, followed by a special issue of the Intelligent Data Analysis journal. The third ‘peak’ started around 2007 and continues to this day, as a result of the increasing loads of streaming data and computational resources. Several PhD theses have directly addressed the problem of concept drift [12, 113, 155, 26, 140].
The learners responsive to concept drift can be divided into two big groups based on when the adaptivity is ‘switched on’. They are either trigger based or evolving. Trigger based means that there is a signal which indicates a need for model change. The trigger directly influences how the new model should be constructed. Most often change detectors are employed as triggers. The evolving methods, on the contrary, do not maintain an explicit link between the data progress and model construction and usually do not detect changes. They aim to build the most accurate classifier either by maintaining ensemble weights or by prototyping mechanisms. They usually keep a set of alternative models, and the models for a particular time point are selected based on their performance estimation. This is the ‘when’ dimension in the taxonomy.
Another dimension for grouping concept drift learners is based on how the learners adapt. What are the actual adaptation mechanisms? These mechanisms were discussed under the design sub-problem A.3 presented in Section 2. Generally the adaptation mechanisms are either related to training set formation or to the design and parametrization of the base learner.
Based on those two dimensions we overview the main methodological contributions available in the literature. The taxonomy is graphically presented in Figure 6. The positions of popular techniques (our interpretation) are indicated by ellipses.
3.1 Evolving learners
We start by overviewing the evolving techniques. Some of the techniques discussed below employ change detection mechanisms; still, these are not triggers of adaptation (‘detect and cut’), but rather tools to reduce computational complexity. First we discuss ensemble techniques, which make up the largest group, and then other evolving techniques.
3.1.1 Adaptive ensembles
The most popular evolving technique for handling concept drift is the classifier ensemble. The classification outputs of several models are combined or selected to get a final decision. The combination or selection rules are often called fusion rules.
There are a number of ensembles for concept drift where the ideas are not specific to a particular type of base learner (although some studies are limited to testing one base learner) [83, 141, 142, 150, 148, 72, 137, 7, 115, 151, 44, 163, 131]. There are also base learner specific ensembles, in which the classifier combination rules usually depend on base learner specific parameters of the learned models: for instance [81, 79] with SVM, or others with Gaussian mixture models.
In both cases adaptivity is achieved by the fusion rules, i.e. how the weights are assigned to the individual model outputs at each point in time. In the discrete case the output of a single model might be selected; in this case all but one model get zero weights. The weight indicates the ‘competence’ of a base learner expected in the ‘nearest future’ (future assumption A.1). The weight is usually a function of the historical performance [83, 141, 142, 150, 72, 7, 115, 163, 131], of estimated performance using selective cross validation [148, 137, 151, 44, 95], or of base learner specific performance estimates [81, 79, 157, 127]. The historical evaluation is restricted to sudden and incremental drifts, while cross validation allows taking gradual drifts and reoccurring contexts into account.
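A minimal sketch of weight assignment by historical performance follows, assuming a simple inverse-error rule; this is one of many possible fusion rules, not the specific scheme of any cited method:

```python
def historical_weights(errors, eps=1e-6):
    """Weight each ensemble member inversely to its recent error rate,
    then normalize so the weights sum to one."""
    raw = [1.0 / (e + eps) for e in errors]
    total = sum(raw)
    return [w / total for w in raw]

def fuse(predictions, weights):
    """Weighted vote over the members' class predictions."""
    scores = {}
    for label, w in zip(predictions, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# Hypothetical member errors on the recent past; member 0 was most accurate.
w = historical_weights([0.1, 0.4, 0.5])
print(fuse(['a', 'b', 'b'], w))  # 'a': the single accurate member outweighs two weak ones
```

The discrete selection case mentioned above corresponds to setting all weights except the best member's to zero.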
In adaptive ensemble learners much attention is given to model evaluation and fusion rules (A.4), while little attention is given to model construction (A.3). Still, there are a number of options for building diverse base classifiers. Usually the implicit aim is to have at least one classifier in the ensemble trained for each distinct concept. This can be achieved using different training set selection strategies.
The straightforward approach is to divide the historical data into blocks which include instances sequential in time. Often these blocks are non-overlapping [142, 150, 148, 7, 115, 72, 81, 79, 95], sometimes overlapping. These techniques are suitable for sudden and, to some extent, incremental drifts, and they favor reoccurring contexts. Another approach is to use different sized training windows [83, 141, 137, 127], which implicitly assumes that a one-off sudden drift has happened. Training windows are overlapping sequential blocks of instances, but all of them have a fixed ending ‘now’ (time t). The individual models in ensembles can also be constructed using non-sequential instance selection. This technique is more suitable for gradual drift, as well as reoccurring contexts.
Another approach to building diverse base classifiers is to use the same training data but different types of base learners (e.g. SVM, decision tree, Naive Bayes) [163, 131].
All these techniques build individual models from what has already been seen in the past. In principle, base classifiers could also be built by adding unseen data, for instance noise or unlabeled testing data, which we list as future work.
3.1.2 Instance weighting
Instance weighting methods make up another group of evolving adaptation techniques. The algorithms can consist of a single learner [85, 163, 117] or an ensemble [56, 14, 30], but the adaptivity here is achieved not by combination rules but by systematic training set formation. Ideas from boosting are often employed, giving more attention to the instances which were misclassified.
3.1.3 Feature space
There are models that manipulate the feature space to achieve adaptivity. One approach uses ideas from transfer learning: new features, which contain information from past model performances, are added to the training instances. Another augments the feature space with a time stamp. [73, 152] use a dynamic feature space over time. In yet another approach the variables to observe next are adaptively selected.
3.1.4 Base model specific
There are also models to be mentioned where adaptivity is achieved by managing specific model parameters or design. Some maintain a variable training window by adjusting the internal structure of decision trees; in others, regression parameters are adjusted, or past support vectors are transferred and combined with the recent training data. The latter examples illustrate the variety of possible specific model designs.
3.2 Learners with triggers
Another group of methods uses triggers, which determine how the models or the sampling should be changed at a given time.
3.2.1 Change detectors
The most popular trigger technique is change detection, which is often implicitly related to sudden drift. Change detection can be based on monitoring the raw data [13, 119], the parameters of the learners, or the outputs (error) of the learners [55, 5, 114]. Some works develop change detection methods in each of the three categories. The detection methods usually cut the training window at the change point, although the change point and the training window cut might not be the same.
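A sketch of an error-monitoring change detector in the spirit of these methods follows. The fixed window and threshold are illustrative assumptions; real detectors such as DDM derive their bounds statistically:

```python
def detect_change(error_stream, window=10, threshold=0.3):
    """Signal a change when the error rate over the most recent `window`
    predictions exceeds `threshold` (an illustrative fixed bound)."""
    recent = []
    for t, mistake in enumerate(error_stream, start=1):
        recent.append(int(mistake))
        if len(recent) > window:
            recent.pop(0)
        if len(recent) == window and sum(recent) / window > threshold:
            return t  # change detected: cut the training window here
    return None

# Low error for 30 steps, then the concept changes and mistakes pile up.
stream = [False] * 30 + [True] * 10
print(detect_change(stream))  # 34: four errors within the last ten steps
```

On detection, the typical ‘detect and cut’ response is to drop the history before the signaled point and retrain on the remainder.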
3.2.2 Training windows
There are methods using heuristics for determining the training window size [154, 98, 4, 161]. The heuristics are related to error monitoring. The training window is determined using look-up table principles, where there is an action for each possible value of a trigger. There are also base learner specific methods for determining training windows [68, 163, 92, 147], where the window size is likewise determined based on historical accuracy.
3.2.3 Adaptive sampling
The trigger based methods listed so far use training windows. Another group of trigger based methods uses instance selection. The incoming (unlabeled) testing instances are inspected. Based on the relation between the testing instance and predefined prototypes [37, 74, 80, 162] or historical training instances directly [123, 65, 97, 10], a training set for the given instance is selected.
In Table 1 we provide a summary of the listed algorithms. The properties are structured according to the four design assumptions, which were discussed in Section 2. The categorization is based on our interpretation of the methods.
Change detectors and ensembles are the two most popular techniques. Change detectors are naturally suitable for the data where sudden drift is expected. Ensembles, on the other hand, are more flexible in terms of change type, while they can be slower in reaction in case of a sudden drift.
We overviewed general methods for handling concept drift in supervised learning. A discussion of specific applications will follow in Section 5. Before proceeding to applications let us look at the broader context of learning with changing data.
| papers | trigger | A.1 Future | A.2 Change | A.3 Learner | A.4 Selection |
|---|---|---|---|---|---|
| [83, 141] | no (ensemb.) | last | sud./inc. | tr. windows | hist. err |
| [142, 150, 72, 7, 115] | no (ensemb.) | last | sud./inc. | time blocks | hist. err |
| [163, 131] | no (ensemb.) | last | sud./inc. | learner sp. | hist. err |
| [137, 44, 95] | no (ensemb.) | last | sud./inc. | tr. windows | cross v. |
| | no (ensemb.) | estim. | grad./sud. | time blocks | cross v. |
| | no (ensemb.) | estim. | grad./sud. | inst. select. | cross v. |
| [81, 79] | no (ensemb.) | last | sud./inc. | time blocks | learner dep. |
| | no (ensemb.) | last | sud./inc. | tr. windows | learner dep. |
| | no (ensemb.) | pred. | grad./sud. | learner sp. | learner dep. |
| [85, 163, 117] | no (inst. wght.) | last | inc. | inst. wght. | hist. err |
| [56, 14, 30] | no (inst. wght.) | last | inc. | inst. wght. | hist. err |
| [50, 73, 16, 152] | no (feature sp.) | last | various | feature sp. | hist. err |
| [76, 145] | no | last | various | learner sp. | hist. err |
| [13, 119, 55, 5, 114, 41] | yes (ch. detec.) | last | sud. | tr. windows | hist. err |
| [154, 98, 4, 161] | yes (windows) | last | sud./inc. | tr. windows | hist. err |
| [68, 163, 92, 147] | yes (windows) | last | sud./inc. | learner sp. | learner dep. |
| [37, 74, 80, 162] | yes (inst. sel.) | estim. | grad./sud. | inst. sel. | prototyping |
| [123, 65, 97, 10] | yes (inst. sel.) | estim. | grad./sud. | inst. sel. | cross v. |
4 Related Research Areas
After reviewing the adaptive techniques for supervised learning, which were mostly developed in the data mining and machine learning communities, we now give an interdisciplinary perspective on the concept drift problem. In this section we point to the ‘neighboring’ research fields. We pick works which are not necessarily the ‘key’ references in these fields, but the ones which touch the problem of dataset change.
We present the research fields in three categories, which we identified as connections with the concept drift problem: time, knowledge transfer and adaptivity. In Figure 7 we position the related areas within these three categories. We discuss them in the following sections.
4.1 Time context
The time context in concept drift problems means that the data is sequential in time and the models are also associated with time and need to be continuously updated. There are research fields focusing on the aspects of model update primarily for stationary data.
Incremental learning focuses on machine learning tasks where not all the training data is available at once [47, 59]. The data is received over time, thus the models need to be updated or retrained to increase the accuracy. Schlimmer and Granger introduced the assumption of concept change in the incremental learning context.
Over the decades the incremental learning area became less active. It was gradually overtaken by data stream mining, where the data flow is continuous and rapid. Data stream mining focuses on processing speed and complexity; thus, naturally, the attention toward timely change detection, including anomaly detection, has increased.
Dynamic Bayesian networks are causal models assuming a forward relation between the variables in time.
Finally, in time series analysis non-stationarity is handled using ARIMA models.
4.2 Knowledge transfer
Knowledge transfer means that there is regularly a potential difference between the distribution of the training data and the data to which the models will be applied (the testing data). Thus the information from the old data needs to be adapted to fit the new data. In the concept drift problem this discrepancy arises over time, due to changes in the data generating process. However, a dataset shift can have a number of other causes, for instance sample selection bias, domain shift due to changes in measurements, model shift due to imbalance of data, or discrimination in decision making, which are out of the scope of this thesis. In addition, the knowledge from one problem might be transferred to solve a related one.
Case based reasoning (CBR) is the process of solving new problems based on the solutions of similar past problems. Generally CBR can be treated as lazy learning. Lazy learning does not build generalizing models, but maintains a database of reference data and uses the relevant past data only when a related query is made. In this domain Aha introduced noise tolerant instance based algorithms; IB3 was the first instance based technique capable of handling concept drift.
A great part of lazy learning research is devoted to instance selection methods to increase accuracy. There is another related instance selection research area (not necessarily lazy learning) aiming to reduce the learning complexity by data reduction.
In machine learning the process of applying the knowledge gained from solving a similar problem is referred to as transfer learning or inductive transfer. The ideas of inductive transfer were extended to temporal representation and used for learning under concept drift.
Adaptive knowledge transfer has been exploited in multitask learning and learning from multiple sources [31, 103]. The non-stationarity problem in the machine learning community is sometimes called covariate shift [19, 70].
Finally, the field of active learning is remotely related to the problem of concept drift. In active learning the data is labeled on demand; the methods select the instances which need to be labeled to make the learner more accurate or to reduce labeling costs. The relation to the concept drift problem is in the ways the methods identify how well the unlabeled instances correspond to a particular concept.
4.3 Model adaptivity
Model adaptivity here means models which have the properties of adaptation incorporated into learning. The adaptation might be to a change, as in the concept drift problem. Adaptation can also refer to the learning process (in a stationary or non-stationary environment), where the accuracy of the model increases along with more incoming examples.
Artificial immune systems (AIS) are inspired by immunology. They are adaptive to changes, like biological immune systems. AIS use evolutionary computation and memory to learn to recognize changing patterns.
In evolutionary computation, dynamic optimization problems are actively studied. The goal is to track the optima, which are dynamically changing in time. The major approaches are related to maintaining and enhancing diversity, expecting that once the optima change, there are suitable models available within the pool. A next step in this direction is to seek a relation between the past models and the current task; a relation between the change type and magnitude and the evolutionary algorithm has also been introduced.
Ubiquitous knowledge discovery (UKD) is an emerging area which focuses on learning in distributed and mobile systems. The systems work in an environment; they need to be intelligent and adaptive. The objects of UKD systems exist in time and space in a dynamically changing environment; they can change location and might appear or disappear. The objects have information processing capabilities, know only their local spatio-temporal environment, act under real-time constraints and are able to exchange information with other objects. These objects are humans, animals and, increasingly, computing devices.
To sum up, the problem of change is far from limited to the data mining and machine learning communities. The concept drift problem lies in all three dimensions: time, the need for adaptivity, and knowledge transfer.
5 Applications
In this section we survey applications where the concept drift problem is relevant in both supervised and unsupervised learning. We present the real life problems, discuss the sources of drift and the actual learning tasks in the context of these problems.
We find four general types of applications: monitoring and control, personal assistance, decision making, and artificial intelligence. Monitoring and control often employs unsupervised learning to detect abnormal behavior. It includes detection of adversary activities on the web, in computer networks, telecommunications, and financial transactions. Personal assistance and information applications include recommender systems, categorization and organization of textual information, and customer profiling for marketing. Decision making includes diagnostics and evaluation of creditworthiness; here the ‘ground truth’ is usually delayed, i.e. the true answer whether the decision was correct becomes available only after a certain time. Artificial intelligence applications include a wide spectrum of moving and stationary systems which interact with a changing environment, for instance robots, mobile vehicles and smart household appliances.
We define five dimensions relevant to the applications facing concept drift:
the speed of learning and output,
classification or prediction accuracy,
costs of mistakes,
the availability of true labels,
the presence of adversary activities.
The speed of learning and output means what the relative volume of data is and how fast the decision needs to be made. For example, in credit card fraud detection the decision needs to be fast to stop the crime and the data loads are huge, while in credit evaluation a decision regarding the credit can be made even within a few days. In both cases adversary activities to cheat the system might be expected, while adversary activities in diagnostics would make less sense. Precise accuracy in diagnostics is generally much more significant than in movie recommendations; moreover, in movie recommendations the decision might be ‘soft’, in the sense that the viewer is not always deterministic about which movie he or she liked more.
Our global interpretation of the four types of applications according to these dimensions is provided in Table 2.
| application type | speed | accuracy | costs of mistakes | labels | adversary activities |
|---|---|---|---|---|---|
| 1. Monitoring and control | high | approximate | medium | hard | active |
| 2. Assistance and information | medium | approximate | low | soft | low |
| 3. Decision making | low | precise | high | delayed | possible |
| 4. AI and robotics | high | precise | high | hard | low |
In the following sections we discuss each of the application types separately and give arguments for the choices we made in the table.
5.1 Monitoring and Control
In monitoring and control applications the data volumes are large and need to be processed in real time. Two types of tasks can be distinguished: prevention of and protection against adversary actions, and monitoring for management purposes.
5.1.1 Monitoring against adversary actions
Monitoring against adversary actions is often an unsupervised learning task or a one-class classification task, where the properties of ‘normal behavior’ are well defined, while the properties of attacks can differ and change from case to case. The classes are typically highly imbalanced, with only a few real attacks.
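As a rough sketch of the one-class setting described above (not drawn from any of the cited systems), a monitor can model ‘normal behavior’ from attack-free data and flag anything that deviates too far from it. The class name and the threshold of three standard deviations are illustrative assumptions:

```python
import numpy as np

class NormalBehaviorDetector:
    """One-class detector: models 'normal' data with a per-feature
    Gaussian and flags observations that deviate too far from it."""
    def __init__(self, threshold=3.0):
        self.threshold = threshold  # allowed distance in standard deviations

    def fit(self, X):
        # learn the profile of normal behavior only (no attack examples needed)
        self.mean = X.mean(axis=0)
        self.std = X.std(axis=0) + 1e-9
        return self

    def is_attack(self, x):
        z = np.abs((x - self.mean) / self.std)  # per-feature z-score
        return bool(z.max() > self.threshold)

# train on attack-free traffic, then score new observations
normal = np.random.RandomState(0).normal(0, 1, size=(1000, 3))
det = NormalBehaviorDetector().fit(normal)
print(det.is_attack(np.array([0.1, -0.2, 0.3])))  # typical point -> False
print(det.is_attack(np.array([8.0, 0.0, 0.0])))   # far outlier -> True
```

Under drift, such a detector would additionally need to refresh its `mean` and `std` estimates as ‘normal’ behavior itself changes.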
Intrusion detection is one of the typical monitoring problems: the detection of unwanted access to computer systems, mainly through a network (e.g. the internet). There are passive intrusion detection systems, which only detect and alert the owner, and active systems, which take protective action. In both cases we refer here only to the detection part.
Adversary actions are the primary source of concept drift in intrusion detection: attackers try to invent new ways of attacking that overcome the existing security. The secondary source of concept drift is technological progress over time; as more advanced and powerful machines are created, they become accessible to intruders. ‘Normal’ behavior can also change over time.
Lane and Brodley explicitly formulated the problem of concept drift in intrusion detection a decade ago. They presented a detection system using instance-based learning. Current research directions and open problems in intrusion detection can be found in a general review. On the supervised learning side, ensemble techniques have lately been proposed. Artificial immune systems are also widely considered for intrusion detection.
Adversary behavior also applies to the telecommunications industry, both intrusion and fraud. The mobile masquerade detection problem is, from a research perspective, closely related to intrusion detection. The goal is to prevent adversaries from unauthorized access to private data. The sources of concept drift are again twofold: adversary behavior trying to overcome the control, as well as the changing behavior of legitimate users. Fraud detection and prevention in the telecommunications industry is also subject to concept drift for similar reasons.
In the financial sector, data mining techniques are employed to monitor streams of financial transactions (credit cards, internet banking) and alert for possible frauds. Insider trading in the stock market is one more application.
Both supervised and unsupervised learning techniques are used for the detection of fraudulent transactions. The data labeling might be imprecise due to unnoticed frauds, legitimate transactions might be misinterpreted, and the class imbalance is very high (few frauds compared to legitimate actions). Concept drift in user behavior is one of the challenges.
Insider trading is trading in the stock market based on non-public information about a company; in most countries it is prohibited by law. Inside information can come in many forms: knowledge of a corporate takeover, a terrorist attack, unexpectedly poor earnings, or the FDA’s acceptance of a new drug. Insider trading disadvantages regular investors. There is a potential for concept drift, since inside traders would try to come up with novel ways to distribute their transactions in order to hide.
5.1.2 Monitoring for management
Monitoring for management usually uses streaming data from sensors. It is also characterized by high volumes of data and real-time decision making; however, adversary cases are usually not present.
Traffic management systems use data mining to determine traffic states, e.g. car density in a particular area or accidents. Traffic control centers are the end users of such systems. Transportation systems are dynamic (always moving). Traffic patterns change both seasonally and permanently, thus the systems have to be able to handle concept drift.
Data mining can also be employed for the prediction of public transportation travel time, which is relevant for scheduling and planning. This task is also subject to concept drift due to changing traffic patterns, human driver factors and irregular seasonality.
Concept drift is also relevant in remote sensing at fixed geographic locations. Interactive road tracking is an image understanding system to assist a cartographer in annotating road segments in aerial photographs. In this problem change detection comes into play when generalizing to different roads over time. In place recognition or activity recognition, the dynamics of the environment cause concept drift in the learned models.
Climate patterns, such as floods, are expected to be stationary, but the detection systems have to incorporate irregularly reoccurring contexts. In the light of climate change, such systems might benefit from adaptive techniques, for instance sliding window training. Active learning of a non-stationary Gaussian process has been used for river monitoring.
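The sliding window training mentioned above can be sketched as follows. This is a generic illustration under our own assumptions (a least-squares learner, a window of 30 instances), not the cited flood detection method:

```python
from collections import deque
import numpy as np

class SlidingWindowRegressor:
    """Trains only on the most recent `window` labeled instances,
    so data from outdated concepts is automatically forgotten."""
    def __init__(self, window=50):
        self.X = deque(maxlen=window)  # deque drops the oldest instance
        self.y = deque(maxlen=window)  # when the window is full

    def update(self, x, y):
        self.X.append(x)
        self.y.append(y)

    def predict(self, x):
        # least-squares fit on the current window only (with intercept)
        A = np.column_stack([np.ones(len(self.X)), np.array(self.X)])
        coef, *_ = np.linalg.lstsq(A, np.array(self.y), rcond=None)
        return coef[0] + coef[1:] @ np.asarray(x)

# a drifting stream: the underlying slope flips halfway through
model = SlidingWindowRegressor(window=30)
for t in range(200):
    slope = 1.0 if t < 100 else -1.0   # concept drift at t = 100
    x = np.array([t % 10])
    model.update(x, slope * x[0])
print(round(float(model.predict(np.array([5.0]))), 2))  # tracks the new concept: -5.0
```

Because only the last 30 instances are kept, the model fits the post-drift concept exactly; a learner trained on the full history would average the two concepts.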
In production monitoring the human factor can be a source of concept drift. Consider a boiler used for heat production. The fuel feeding and burning stages might depend on the individual habits of a boiler operator when the fuel is manually fed into the system. The control task is to identify the start and end of the fuel feeding, thus the algorithms should be equipped with mechanisms to handle concept drift.
In service monitoring, the changing behavior of users can be the source of drift. For example, data mining is used to detect accidents or defects in a telecommunication network. A change in call volumes may result from an increased number of people calling friends or family to tell them what is happening, or from a decrease in network usage caused by people being unable to use the network. Or the change might be entirely unrelated to the telecommunication network. Fault detection techniques have to be able to handle such anomalies.
5.2 Personal Assistance and Information
These applications mainly organize and/or personalize the flow of information. They can be categorized into individual assistance for personal use, customer profiling for business (marketing), and public or specialized information. In any case, the class labels are mostly ‘soft’ and the costs of a mistake are relatively low. For example, if a movie recommendation is wrong it is not a disaster, and even the users themselves might not know for sure which of two given movies they like more.
5.2.1 Personal assistance
Personal assistance applications deal with user modeling, aiming to personalize the flow of information; this is referred to as information filtering. A rich technical presentation of user modeling can be found in the literature. One of the primary applications of user modeling is the representation of queries, news and blog entries with respect to current user interests. Changes in user interests over time are the main cause of concept drift.
A large part of personal assistance applications are related to textual data. The problem of concept drift has been addressed in news story classification [156, 15] and document categorization [99, 82, 111]. The issue of reoccurring contexts in the light of changing user interests has also been addressed. Drifting user interests are relevant in building personal assistance in digital libraries or networked media organizers.
There is also a large body of research addressing web personalization and dynamics [158, 135, 33, 23], which is again subject to drifting user interests. In contrast to end user text mining discussed before, here mostly interim system data (logs) is mined.
Finally, the concept drift problem is highly relevant for spam filtering [36, 46]. First of all, in contrast to the personal assistance applications listed before, there are adversary actions (spamming): the senders actively try to overcome the filters, therefore the content changes rapidly. The adversaries are intelligent and adaptive. Spam types are subject to seasonality and to the popularity of topics or merchandise. There is drift in the amount of spam over time as well as in the content of the classes. Spam messages are disjunctive in content. Besides, the personal interpretation of what is spam might differ and change.
5.2.2 Customer profiling
For customer profiling, aggregated data from many users is mined. The goal is to segment the customers according to their interests. Since individual interests change over time, customer profiling algorithms should take this non-stationarity into account.
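As an illustration of segmentation that follows drifting interests, here is a minimal online clustering sketch under our own assumptions (two segments, a fixed learning rate); it is not one of the segmentation methods cited below:

```python
import numpy as np

class DriftingKMeans:
    """Online k-means whose centroids track drifting customer
    segments: each new instance moves its nearest centroid toward
    it by a constant learning rate, so old behaviour is gradually
    forgotten."""
    def __init__(self, centroids, lr=0.1):
        self.centroids = np.asarray(centroids, dtype=float)
        self.lr = lr

    def update(self, x):
        x = np.asarray(x, dtype=float)
        j = int(np.argmin(((self.centroids - x) ** 2).sum(axis=1)))
        self.centroids[j] += self.lr * (x - self.centroids[j])
        return j  # segment the customer was assigned to

rng = np.random.RandomState(1)
model = DriftingKMeans(centroids=[[0.0, 0.0], [5.0, 5.0]])
# one customer segment slowly drifts from around (0, 0) to around (2, 2)
for t in range(500):
    center = np.array([2.0, 2.0]) if t > 250 else np.array([0.0, 0.0])
    model.update(center + rng.normal(0, 0.3, size=2))
print(np.round(model.centroids[0], 1))  # the first centroid followed the drift to ~(2, 2)
```

A batch clustering of the full history would instead place the centroid between the old and new segment positions, mixing outdated and current behaviour.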
Direct marketing is one of the applications. Adaptive data mining methods are used in customer segmentation based on product (car) preferences or service use (telecommunications). Lately, in addition to similarity measures between individual customers, social network analysis has been employed in customer segmentation. It is observed that user interests do not evolve simultaneously: users who had similar interests in the past might no longer share them in the future. The authors model this as an evolving graph. Adaptivity is also relevant to association rule mining applied to shopping basket identification and analysis.
Automatic recommendations can be related to both customer profiling and personal assistance. Recommender systems are characterized by sparsity of data: for example, there are only a few movie ratings per user, while recommendations need to be inferred over the whole movie pool. The publicity of recommender systems research increased rapidly with the Netflix movie recommendation competition. The winners used the temporal aspect as one of the keys to the problem [84, 8]. Three sources of drift were noted: movie biases (popularity changes over time), user bias (a natural drift of users’ rating scales, benchmarked against recent ratings) and changes in user preferences. There are earlier works on recommender systems in which changes over time were addressed via time weighting.
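The time weighting idea can be sketched as an exponentially decayed average of past ratings; the half-life parameter and the toy data are our illustrative assumptions, not the cited algorithms:

```python
import math

def time_weighted_mean(ratings, now, half_life=30.0):
    """Average of (timestamp, rating) pairs where older ratings
    receive exponentially smaller weights (half_life in days)."""
    decay = math.log(2) / half_life
    num = den = 0.0
    for t, r in ratings:
        w = math.exp(-decay * (now - t))  # weight halves every `half_life` days
        num += w * r
        den += w
    return num / den

# a user whose taste for a genre drifted from ~2 stars to ~5 stars
history = [(0, 2.0), (10, 2.0), (80, 5.0), (90, 5.0)]
print(round(time_weighted_mean(history, now=90), 2))  # leans toward the recent ratings
```

The plain mean of this history is 3.5; the time-weighted estimate sits noticeably closer to the user's recent preference.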
Information applications are related to changes in the data distribution over time, which is sometimes referred to as virtual drift in the concept drift literature; changes in the class assignment are then called real drift. Virtual drift would typically occur over a longer period of time. For example, in a news recommendation system, news about meat prices in New Zealand might suddenly become relevant for Kate (the label changes, but the documents come from the same distribution as before). It might also happen that the consumers in New Zealand switch from pork to beef, so that the distribution of articles about meat changes independently of Kate’s interests.
Document organization is the first category of information applications. Given e-mail, news or document streams, the task is to extract meaningful structures and organize the data into topics. Temporal order is necessary for making sense of the streams. The topics themselves, and even the vocabulary of particular topics, change over time.
The state-of-the-art Latent Dirichlet Allocation model for probabilistic document corpus modeling was recently equipped with a time dimension [18, 149]. The dynamics of scientific topics in articles of Science magazine from 1881 to 1999 (120 years) was analyzed: the emergence, peak and decline of topics was shown, and a topic vocabulary representation was built. Another work incorporated the time stamp into the static model. A method for the organization of e-mail messages has also been presented, to provide a framework for content analysis. Intuitively this is similar to including a time feature in the original observation.
Concept drift is relevant in making macroeconomic forecasts and predicting the phases of a business cycle. The data drifts primarily due to a large number of influencing factors which cannot feasibly be included in the prediction models. For the same reason, financial time series are known to be non-stationary and hard to predict.
In business management, in particular software project management, careful planning can turn out inaccurate if concept drift is not taken into account. Data mining models equipped with concept drift handling techniques have been employed for project time prediction.
5.3 Decision Making
Decision making and diagnostics applications usually involve a limited amount of data (which might be sequential or time stamped). Decisions are not required in real time, thus the applied models can be computationally expensive. However, high accuracy is essential in these applications and the costs of mistakes are large.
Bankruptcy prediction and individual credit scoring are typically considered stationary problems. However, in these problems concept drift is closely related to hidden context: changes in a context which is not observed or measured in the original model. The need for different bankruptcy prediction models under different economic conditions has been acknowledged, but the need for models able to deal with non-stationarity has rarely been addressed. Although the concept drift problem is present, adversaries might exploit full adaptivity of the models; thus offline adaptivity, restricted to already seen subtypes of customers, is needed.
Drug research can be subject to concept drift due to the adaptive nature of microorganisms [139, 148]. The effect of antibiotics on a patient often naturally diminishes over time, since microorganisms mutate and evolve antibiotic resistance. If a patient is treated with an antibiotic when it is not necessary, resistance might develop and the antibiotic might no longer help when it is really needed.
Clinical studies and systems need adaptivity mechanisms for changes caused by human demographics [89, 54]. Changes in disease progression can also be triggered by a change in the drug being used. In incremental drug discovery experiments, the drift between training and testing sets can be caused by non-uniform sampling.
Data mining can be used to discover emerging resistance and monitor nosocomial infections in hospitals (infections which result from the treatment there). Given patient and microbiology data as input, the task is to model the resistance, which changes over time.
5.4 AI and Robotics
In AI applications the problem of concept drift is often called a dynamic environment. Agents learn how to interact with the environment, and since the environment is changing, the learners need to be adaptive.
5.4.1 Mobile systems and robotics
Ubiquitous Knowledge Discovery (UKD) deals with distributed and mobile systems operating in a complex, dynamic and unstable environment. The word ‘ubiquitous’ refers to systems that are distributed and operate at the same time. Navigation systems, vehicle monitoring, household management systems and music mining are examples of UKD.
One example is the DARPA navigation challenge. A winning entry in 2005 used online learning to classify road images into drivable and not drivable. It used an adaptive Mixture of Gaussians: for gradual adaptation the internal Gaussians were adjusted, while for rapid adaptation Gaussians were replaced with new ones. The required speed of adaptation depends on the road conditions.
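A minimal one-dimensional sketch of such an adjust-or-replace scheme is given below. The parameters and update rules are our own illustrative assumptions, not the actual DARPA entry:

```python
import numpy as np

class AdaptiveGaussians:
    """Online mixture of 1-D Gaussians in the spirit of adaptive
    appearance models: a matching component is adjusted gradually
    (learning rate `alpha`); if no component matches, the weakest
    one is replaced by a new Gaussian centred on the observation."""
    def __init__(self, k=3, alpha=0.05, match_sigmas=2.5):
        self.mu = np.zeros(k)
        self.var = np.ones(k)
        self.w = np.ones(k) / k
        self.alpha = alpha
        self.match_sigmas = match_sigmas

    def update(self, x):
        d = np.abs(x - self.mu) / np.sqrt(self.var)
        matched = int(np.argmin(d))
        if d[matched] < self.match_sigmas:
            # gradual adaptation: nudge the matched Gaussian toward x
            self.mu[matched] += self.alpha * (x - self.mu[matched])
            self.var[matched] += self.alpha * ((x - self.mu[matched]) ** 2 - self.var[matched])
            self.w[matched] += self.alpha * (1 - self.w[matched])
        else:
            # rapid adaptation: replace the least supported Gaussian
            weakest = int(np.argmin(self.w))
            self.mu[weakest], self.var[weakest], self.w[weakest] = x, 1.0, self.alpha
        self.w /= self.w.sum()

model = AdaptiveGaussians()
for _ in range(200):
    model.update(10.0)   # road appearance changes abruptly to ~10
print(round(float(model.mu[np.argmax(model.w)]), 1))  # dominant mode tracks the change: 10.0
```

The first unmatched observation triggers a replacement (rapid adaptation); repeated matches then strengthen that component's weight gradually.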
5.4.2 Intelligent systems
5.4.3 Virtual reality
Finally, virtual reality needs mechanisms to take concept drift into account. In computer game design, adversary actions of the players (cheating) might be one of the drift sources. In flight simulation, strategies and skills differ across users.
In Table 3 we summarize the discussed applications with concept drift.
| Application type | Task group | Domain | Application | References |
|---|---|---|---|---|
| Monitoring and control | against adversaries | computer security | intrusion detection | [91, 104, 77] |
| | | telecommunications | intrusion detection, fraud | [106, 66] |
| | | finance | fraud, insider trading | [20, 40] |
| | for management | transportation | traffic management | [32, 109] |
| | | positioning | place, activity recognition | [164, 102, 100] |
| | | industrial mon. | boiler control, telecom mon. | [6, 120] |
| Assistance and information | personal assistance | textual information | news, document classification | [156, 15, 99, 82, 111] |
| | | | spam categorization | [36, 46] |
| | | web | web personalization | [158, 135, 33, 23] |
| | | libraries, media | | [64, 48] |
| | customer profiling | marketing | customer segmentation | [32, 16, 93, 134] |
| | | recommender systems | movie recommendations | [84, 8, 39] |
| | information | document organization | articles, mail | [18, 149, 161, 78] |
| | | economics | macroeconomics, forecasting | [58, 80, 62] |
| | | project management | software project mgmt. | |
| Decision making | | finance | creditworthiness, bankruptcy prediction | [144, 67, 165] |
| | | biomedicine (drug research) | antibiotic res., drug disc. | [148, 49, 69] |
| | | clinical research | disease monitoring | [89, 54, 17] |
| AI and robotics | | mobile systems | robots, vehicles | [146, 124, 94] |
| | | intelligent systems | ‘smart’ home, appliances | [126, 34] |
| | | virtual reality | computer games, flight sim. | [28, 63] |
Concept drift is a relatively new research field and the terminology is not yet fixed. Moreover, the problem of shifting data is discovered and handled in a very broad range of domains. With growing loads of data, more and more attention is drawn to the differences between the training and testing data distributions. We provide alternative terminology in Table 4.
| Research area | Terminology |
|---|---|
| Data mining | concept drift |
| Machine learning | concept drift, covariate shift |
| Evolutionary computation | changing environment |
| AI and robotics | dynamic environment |
| Statistics, time series | non-stationarity |
| Databases | concept drift, load shedding |
| Information retrieval | temporal evolution |
7 Concluding Remarks
We provided an overview of the available concept drift responsive techniques and of real learning tasks in which the concept drift problem is relevant.
The problem of concept drift is very broad. There has been plenty of general research and attempts to understand the phenomenon. Generalization is not possible without assumptions about the nature of the change, and these depend on the data and the problem. The focus on applications has been limited so far, perhaps due to a lack of real data.
We argue that the challenges are different for different types of applications, see Table 2. How quickly does the data change? Is it worth complicating the model? It is if we deal with a 100-year history of Science papers. Do we want full model adaptability? Is it secure? Would a simple training window not be enough in most practical cases? Maybe focusing on selecting a proper base model is essential?
In our opinion, focusing on specific models for specific problems is a promising direction.
-  D. Aha. Lazy learning. In Lazy learning, pages 7–10. Kluwer Academic Publishers, 1997.
-  D. Aha and D. Kibler. Instance-based learning algorithms. In Machine Learning, pages 37–66, 1991.
-  C. Anagnostopoulos, N. Adams, and D. Hand. Deciding what to observe next: adaptive variable selection for regression in multivariate data streams. In SAC ’08: Proc. of the 2008 ACM symposium on Applied computing, pages 961–965. ACM, 2008.
-  S. Bach and M. Maloof. Paired learners for concept drift. In Proc. of the 8th IEEE Int. Conf. on Data Mining, pages 23–32. IEEE Press, 2008.
-  M. Baena-Garcia, J. del Campo-Avila, R. Fidalgo, A. Bifet, R. Gavalda, and R. Morales-Bueno. Early drift detection method. In ECML PKDD 2006 Workshop on Knowledge Discovery from Data Streams, 2006.
-  J. Bakker, M. Pechenizkiy, I. Zliobaite, A. Ivannikov, and T. Karkkainen. Handling outliers and concept drift in online mass flow prediction in cfb boilers. In Proc. of the 3rd Int. Workshop on Knowledge Discovery from Sensor Data (SensorKDD 09), pages 13–22, 2009.
-  H. Becker and M. Arias. Real-time ranking with concept drift using expert advice. In KDD ’07: Proc. of the 13th ACM SIGKDD int. conf. on Knowledge discovery and data mining, pages 86–94. ACM, 2007.
-  R. Bell, Y. Koren, and C. Volinsky. The bellkor 2008 solution to the netflix prize. online, 2008.
-  S. Ben-David and R. Borbely. A notion of task relatedness yielding provable multiple-task learning guarantees. Mach. Learn., 73(3):273–287, 2008.
-  J. Beringer and E. Hullermeier. Efficient instance-based learning on data streams. Intell. Data Anal., 11(6):627–650, 2007.
-  S. Bickel, M. Bruckner, and T. Scheffer. Discriminative learning for differing training and test distributions. In ICML ’07: Proc. of the 24th int. conf. on Machine learning, pages 81–88. ACM, 2007.
-  A. Bifet. Adaptive Learning and Mining for Data Streams and Frequent Patterns. PhD thesis, Universitat Politecnica de Catalunya, 2009.
-  A. Bifet and R. Gavalda. Learning from time-changing data with adaptive windowing. In Proc. of SIAM Int. Conf. on Data Mining (SDM’07). SIAM, 2007.
-  A. Bifet, G. Holmes, B. Pfahringer, R. Kirkby, and R. Gavalda. New ensemble methods for evolving data streams. In KDD ’09: Proc. of the 15th ACM SIGKDD int. conf. on Knowledge discovery and data mining, pages 139–148. ACM, 2009.
-  D. Billsus and M. Pazzani. A hybrid user model for news story classification. In UM ’99: Proc. of the 7th int. conf. on User modeling, pages 99–108. Springer-Verlag, 1999.
-  M. Black and R. Hickey. Classification of customer call data in the presence of concept drift and noise. In Soft-Ware 2002: Proc. of the 1st Int. Conf. on Computing in an Imperfect World, pages 74–87. Springer-Verlag, 2002.
-  M. Black and R. Hickey. Detecting and adapting to concept drift in bioinformatics. In Proc. of Knowledge Exploration in Life Science Informatics, International Symposium, KELSI 2004, volume 3303 of LNCS, pages 161–168. Springer, 2004.
-  D. Blei and J. Lafferty. Dynamic topic models. In ICML ’06: Proc. of the 23rd int. conf. on Machine learning, pages 113–120. ACM, 2006.
-  J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. Wortman. Learning bounds for domain adaptation. In Advances in Neural Information Processing Systems 20, pages 129–136. MIT Press, 2008.
-  R. Bolton and D. Hand. Statistical fraud detection: A review. Statistical Science, 17(3):235–255, 2002.
-  M. Bottcher, M. Spott, and R. Kruse. Predicting future decision trees from evolving data. In ICDM ’08: Proc. of the 8th IEEE Int. Conf. on Data Mining, pages 33–42. IEEE Computer Society, 2008.
-  G. Box and G. Jenkins. Time Series Analysis, Forecasting and Control. Holden-Day, 1990.
-  P. De Bra, A. Aerts, B. Berden, B. de Lange, B. Rousseau, T. Santic, D. Smits, and N. Stash. Aha! the adaptive hypermedia architecture. In HYPERTEXT ’03: Proc. of the 14th ACM conf. on Hypertext and hypermedia, pages 81–84. ACM, 2003.
-  S. Brown and M. Steyvers. Detecting and predicting changes. Cognitive Psychology, 58(1):49–67, 2009.
-  G. Carpenter and S. Grossberg. Adaptive resonance theory (ART). In The handbook of brain theory and neural networks, pages 79–82. MIT Press, 1998.
-  G. Castillo. Adaptive Learning Algorithms for Bayesian Network Classifiers. PhD thesis, University of Aveiro, 2006.
-  V. Chandola, A. Banerjee, and V. Kumar. Anomaly detection - a survey. ACM Computing Surveys, 41(3):article no. 15, 2009.
-  D. Charles, A. Kerr, M. McNeill, M. McAlister, M. Black, J. Kücklich, A. Moore, and K. Stringer. Player-centred game design: Player modelling and adaptive digital games. In Digital Games Research Conference 2005, Selected Papers Publication, pages 285–298, 2005.
-  H. Cho, M. Fadali, and K. Lee. Online probability density estimation of nonstationary random signal using dynamic bayesian networks. Int. Journal of Control, Automation, and Systems, 6(1):109–118, 2008.
-  F. Chu and C. Zaniolo. Fast and light boosting for adaptive mining of data streams. In PAKDD, pages 282–292. Springer-Verlag, 2004.
-  K. Crammer, M. Kearns, and J. Wortman. Learning from multiple sources. J. Mach. Learn. Res., 9:1757–1774, 2008.
-  F. Crespo and R. Weber. A methodology for dynamic data mining based on fuzzy clustering. Fuzzy Sets and Systems, 150:267–284, 2005.
-  A. da Silva, Y. Lechevallier, F. Rossi, and F. de Carvalho. Construction and analysis of evolving data summaries: An application on web usage data. In ISDA ’07: Proc. of the 7th Int. Conf. on Intelligent Systems Design and Applications, pages 377–380. IEEE Computer Society, 2007.
-  D. Anguita. Smart adaptive systems: State of the art and future directions of research. In Proc. of the 1st European Symp. on Intelligent Technologies, Hybrid Systems and Smart Adaptive Systems, EUNITE 2001, 2001.
-  R. de Mantaras and E. Enric. Case-based reasoning: an overview. AI Communications, 10(1):21–29, 1997.
-  S. Delany, P. Cunningham, and A. Tsymbal. A comparison of ensemble and case-base maintenance techniques for handling concept drift in spam filtering. In Proc. of the 19th Int. Conf. on Artificial Intelligence (FLAIRS 2006), pages 340–345. AAAI Press, 2006.
-  S. Delany, P. Cunningham, A. Tsymbal, and L. Coyle. A case-based technique for tracking concept drift in spam filtering. Knowledge-Based Systems, 18(4–5):187–195, 2005.
-  T. Dietterich, G. Widmer, and M. Kubat. Special issue on context sensitivity and concept drift. Mach. Learn., 32(2), 1998.
-  Y. Ding and X. Li. Time weight collaborative filtering. In CIKM ’05: Proc. of the 14th ACM int. conf. on Information and knowledge management, pages 485–492. ACM, 2005.
-  S. Donoho. Early detection of insider trading in option markets. In KDD ’04: Proc. of the 10th ACM SIGKDD int. conf. on Knowledge discovery and data mining, pages 420–429. ACM, 2004.
-  A. Dries and U. Ruckert. Adaptive concept drift detection. In SDM, pages 233–244. SIAM, 2009.
-  R. Duda, P. Hart, and D. Stork. Pattern Classification (2nd Edition). Wiley-Interscience, 2000.
-  J. Ekanayake, J. Tappolet, H. Gall, and A. Bernstein. Tracking concept drift of software projects using defect prediction quality. In Proc. of the 6th IEEE Working Conference on Mining Software Repositories. IEEE Computer Society, 2009.
-  W. Fan. Systematic data selection to mine concept-drifting data streams. In KDD ’04: Proc. of the 10th ACM SIGKDD int. conf. on Knowledge discovery and data mining, pages 128–137. ACM, 2004.
-  T. Fawcett. “In vivo” spam filtering: a challenge problem for KDD. SIGKDD Explor. Newsl., 5(2):140–148, 2003.
-  F. Fdez-Riverola, E. Iglesias, F. Diaz, J. Mendez, and J. Corchado. Applying lazy learning algorithms to tackle concept drift in spam filtering. Expert Syst. Appl., 33(1):36–48, 2007.
-  D. Fisher and J. Schlimmer. Models of incremental concept learning: A coupled research proposal. Technical report CS-88-05, Vanderbilt University, 1988.
-  O. Flasch, A. Kaspari, K. Morik, and M. Wurst. Aspect-based tagging for collaborative media organization. In From Web to Social Web: Discovering and Deploying User and Content Profiles: Workshop on Web Mining, WebMine 2006. Revised Selected and Invited Papers, volume 4737 of LNAI, pages 122–141. Springer-Verlag, 2007.
-  G. Forman. Incremental machine learning to reduce biochemistry lab costs in the search for drug discovery. In 2nd Workshop on Data Mining in Bioinformatics, pages 33–36, 2002.
-  G. Forman. Tackling concept drift by temporal inductive transfer. In SIGIR ’06: Proc. of the 29th annual int. ACM SIGIR conf. on Research and development in information retrieval, pages 252–259. ACM, 2006.
-  A. Freitas and J. Timmis. Revisiting the foundations of artificial immune systems for data mining. IEEE Transactions on Evolutionary Computation, 11(4):521–540, 2007.
-  Y. Freund and R. Schapire. A short introduction to boosting. In Proc. of the 16th Int. Joint Conf. on Artificial Intelligence, pages 1401–1406. Morgan Kaufmann, 1999.
-  M. Gaber, A. Zaslavsky, and S. Krishnaswamy. Mining data streams: a review. SIGMOD Rec., 34(2):18–26, 2005.
-  P. Gago, A. Silva, and M. Santos. Adaptive decision support for intensive care. In Proc. of 13th Portuguese Conference on Artificial Intelligence, pages 415–425, 2007.
-  J. Gama, P. Medas, G. Castillo, and P. Rodrigues. Learning with drift detection. In Advances In Artificial Intelligence, Proc. of the 17th Brazilian Symposium on Artificial Intelligence (SBIA 2004), volume 3171 of LNAI, pages 286–295. Springer, 2004.
-  J. Gao, B. Ding, W. Fan, J. Han, and P. Yu. Classifying data streams with skewed class distributions and concept drifts. IEEE Internet Computing, 12(6):37–49, 2008.
-  S. Gauch, M. Speretta, A. Chandramouli, and A. Micarelli. User profiles for personalized information access. In The Adaptive Web, pages 54–89. Springer Berlin / Heidelberg, 2007.
-  R. Giacomini and B. Rossi. Detecting and predicting forecast breakdowns. Working Paper 638, ECB, 2006.
-  Ch. Giraud-Carrier. A note on the utility of incremental learning. AI Commun., 13(4):215–223, 2000.
-  S. Grossberg. Adaptive pattern classification and universal recoding: I. parallel development and coding of neural feature detectors. Biological Cybernetics, 23(3):121–134, 1976.
-  D. Hand. Classifier technology and the illusion of progress. Statistical Science, 21:1, 2006.
-  M. Harries and K. Horn. Detecting concept drift in financial time series prediction using symbolic machine learning. In In Proc. of the 8th Australian joint conf. on artificial intelligence, pages 91–98, 1995.
-  M. Harries, C. Sammut, and K. Horn. Extracting hidden context. Mach. Learn., 32(2):101–126, 1998.
-  M. Hasan and E. Nantajeewarawat. Towards intelligent and adaptive digital library services. In ICADL 08: Proc. of the 11th Int. Conf. on Asian Digital Libraries, pages 104–113. Springer-Verlag, 2008.
-  S. Hashemi, Y. Yang, M. Pourkashani, and M. Kangavari. To better handle concept change and noise: A cellular automata approach to data stream classification. In Australian Conf. on Artificial Intelligence, volume 4830 of LNCS, pages 669–674. Springer, 2007.
-  C. Hilas. Designing an expert system for fraud detection in private telecommunications networks. Expert Syst. Appl., 36(9):11559–11569, 2009.
-  R. Horta, B. de Lima, and C. Borges. Data pre-processing of bankruptcy prediction models using data mining techniques. Online, 2009.
-  G. Hulten, L. Spencer, and P. Domingos. Mining time-changing data streams. In KDD ’01: Proc. of the 7th ACM SIGKDD int. conf. on Knowledge discovery and data mining, pages 97–106. ACM, 2001.
-  C. Jermaine. Data mining for multiple antibiotic resistance. online, 2008.
-  J. Jiang and C. Zhai. Instance weighting for domain adaptation in nlp. In Proc. of the 45th Annual Meeting of the Association of Computational Linguistics, pages 264–271. Association for Computational Linguistics, 2007.
-  F. Kamiran and T. Calders. Classification without discrimination. In IEEE International Conference on Computer, Control & Communication (IEEE-IC4). IEEE press, 2009.
-  M. Karnick, M. Ahiskali, M. Muhlbaier, and R. Polikar. Learning concept drift in nonstationary environments using an ensemble of classifiers based approach. In Proc. of IEEE Int. Joint Conf. on Neural Networks (IJCNN 2008), pages 3455–3462, 2008.
-  I. Katakis, G. Tsoumakas, and I. Vlahavas. Dynamic feature space and incremental feature selection for the classification of textual data streams. In Proc. of ECML/PKDD-2006 Int. Workshop on Knowledge Discovery from Data Streams, pages 102–116, 2006.
-  I. Katakis, G. Tsoumakas, and I. Vlahavas. Tracking recurring contexts using ensemble classifiers: an application to email filtering. Knowledge and Information Systems, 2009.
-  I. Katakis, G. Tsoumakas, and I. P. Vlahavas. An ensemble of classifiers for coping with recurring contexts in data streams. In ECAI, volume 178 of Frontiers in Artificial Intelligence and Applications, pages 763–764. IOS Press, 2008.
-  M. Kelly, D. Hand, and N. Adams. The impact of changing populations on classifier performance. In KDD ’99: Proc. of the 5th ACM SIGKDD int. conf. on Knowledge discovery and data mining, pages 367–371. ACM, 1999.
-  J. Kim, P. Bentley, U. Aickelin, J. Greensmith, G. Tedesco, and J. Twycross. Immune system approaches to intrusion detection — a review. Natural Computing: an international journal, 6(4):413–466, 2007.
-  J. Kleinberg. Bursty and hierarchical structure in streams. In KDD ’02: Proc. of the 8th ACM SIGKDD int. conf. on Knowledge discovery and data mining, pages 91–101. ACM, 2002.
-  R. Klinkenberg. Learning drifting concepts: Example selection vs. example weighting. Intell. Data Anal., 8(3):281–300, 2004.
-  R. Klinkenberg. Meta-learning, model selection and example selection in machine learning domains with concept drift. In Proc. of Annual Workshop of the Special Interest Group on Machine Learning, Knowledge Discovery, and Data Mining (FGML-2005) of the German Computer Science Society (GI) Learning - Knowledge Discovery - Adaptivity (LWA-2005), pages 64–171, 2005.
-  R. Klinkenberg and T. Joachims. Detecting concept drift with support vector machines. In ICML ’00: Proc. of the 17th Int. Conf. on Machine Learning, pages 487–494. Morgan Kaufmann Publishers Inc., 2000.
-  R. Klinkenberg and I. Renz. Adaptive information filtering: Learning drifting concepts. In Proc. of AAAI-98/ICML-98 workshop Learning for Text Categorization, pages 33–40, 1998.
-  J. Kolter and M. Maloof. Dynamic weighted majority: An ensemble method for drifting concepts. Journal of Machine Learning Research, 8:2755–2790, 2007.
-  Y. Koren. Collaborative filtering with temporal dynamics. In KDD ’09: Proc. of the 15th ACM SIGKDD int. conf. on Knowledge discovery and data mining, pages 447–456. ACM, 2009.
-  I. Koychev. Gradual forgetting for adaptation to concept drift. In Proc. of ECAI 2000 Workshop Current Issues in Spatio-Temporal Reasoning, pages 101–106, 2000.
-  A. Krause and C. Guestrin. Nonmyopic active learning of Gaussian processes: an exploration-exploitation approach. In ICML ’07: Proc. of the 24th int. conf. on Machine learning, pages 449–456. ACM, 2007.
-  K. Ku-Mahamud, N. Zakaria, N. Katuk, and M. Shbier. Flood pattern detection using sliding window technique. In Proc. of the 3rd Asia International Conference on Modelling & Simulation, pages 45–50, 2009.
-  M. Kubat, J. Gama, and P. Utgoff. Special issue on incremental learning systems capable of dealing with concept drift. Intelligent Data Analysis, 8(3), 2004.
-  M. Kukar. Drifting concepts as hidden factors in clinical studies. In Proc. of AIME 2003, 9th Conference on Artificial Intelligence in Medicine in Europe, pages 355–364, 2003.
-  P. Kumar and V. Ravi. Bankruptcy prediction in banks and firms via statistical and intelligent techniques - a review. European Journal of Operational Research, 180(1):1–28, 2007.
-  T. Lane and C. Brodley. Temporal sequence learning and data reduction for anomaly detection. ACM Trans. Inf. Syst. Secur., 2(3):295–331, 1999.
-  M. Last. Online classification of nonstationary data streams. Intell. Data Anal., 6(2):129–147, 2002.
-  N. Lathia, S. Hailes, and L. Capra. kNN CF: a temporal social network. In RecSys ’08: Proc. of the 2008 ACM conf. on Recommender systems, pages 227–234. ACM, 2008.
-  A. Lattner, A. Miene, U. Visser, and O. Herzog. Sequential pattern mining for situation and behavior prediction in simulated robotic soccer. In RoboCup 2005: Robot Soccer World Cup IX, volume 4020 of LNCS, 2006.
-  Y. Law and C. Zaniolo. An adaptive nearest neighbor classification algorithm for data streams. In Proc. of PKDD, volume 3721 of LNCS, pages 108–120. Springer, 2005.
-  S. Laxman and P. Sastry. A survey of temporal data mining. In SADHANA, Academy Proceedings in Engineering Sciences, volume 31, 2006.
-  M. Lazarescu and S. Venkatesh. Using selective memory to track concept effectively. In Int. Conf. on Intelligent Systems and Control, pages 14–20, 2003.
-  M. Lazarescu, S. Venkatesh, and H. Bui. Using multiple windows to track concept drift. Intell. Data Anal., 8(1):29–59, 2004.
-  G. Lebanon and Y. Zhao. Local likelihood modeling of temporal text streams. In ICML ’08: Proc. of the 25th int. conf. on Machine learning, pages 552–559. ACM, 2008.
-  L. Liao, D. Patterson, D. Fox, and H. Kautz. Learning and inferring transportation routines. Artif. Intell., 171(5-6):311–331, 2007.
-  D. Lu, P. Mausel, E. Brondizio, and E. Moran. Change detection techniques. Int. Journal of Remote Sensing, 25(12):2365–2401, 2004.
-  J. Luo, A. Pronobis, B. Caputo, and P. Jensfelt. Incremental learning for place recognition in dynamic environments. In Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS07), pages 721–728, 2007.
-  Y. Mansour, M. Mohri, and A. Rostamizadeh. Domain adaptation with multiple sources. In NIPS, pages 1041–1048. MIT Press, 2008.
-  M. Masud, J. Gao, L. Khan, J. Han, and B. Thuraisingham. A multi-partition multi-chunk ensemble technique to classify concept-drifting data streams. In Proc. of Pacific-Asia Conf. on Knowledge Discovery and Data Mining (PAKDD’09), pages 363–375, 2009.
-  M. May, B. Berendt, A. Cornuejols, J. Gama, F. Giannotti, A. Hotho, D. Malerba, E. Menasalvas, K. Morik, R. Pedersen, L. Saitta, Y. Saygin, A. Schuster, and K. Vanhoof. Research challenges in ubiquitous knowledge discovery. In Next Generation of Data Mining, Data Mining and Knowledge Discovery, pages 131–150. Boca Raton, FL: Chapman & Hall/CRC Press, 2008.
-  O. Mazhelis and S. Puuronen. Comparing classifier combining techniques for mobile-masquerader detection. In ARES ’07: Proc. of the The 2nd Int. Conf. on Availability, Reliability and Security, pages 465–472. IEEE Computer Society, 2007.
-  K. Merrick and M. Maher. Motivated learning from interesting events: Adaptive, multitask learning agents for complex environments. Adaptive Behavior - Animals, Animats, Software Agents, Robots, Adaptive Systems, 17(1):7–27, 2009.
-  L. Minku, A. White, and X. Yao. The impact of diversity on on-line ensemble learning in the presence of concept drift. IEEE Transactions on Knowledge and Data Engineering, 99(1), 2009.
-  J. Moreira. Travel time prediction for the planning of mass transit companies: a machine learning approach. PhD thesis, Faculty of Engineering of University of Porto, 2008.
-  R. Morrison. Designing Evolutionary Algorithms for Dynamic Environments. Springer-Verlag, 2004.
-  F. Mourao, L. Rocha, R. Araujo, T. Couto, M. Goncalves, and W. Meira. Understanding temporal aspects in document classification. In WSDM ’08: Proc. of the int. conf. on Web search and web data mining, pages 159–170. ACM, 2008.
-  A. Narasimhamurthy and L. Kuncheva. A framework for generating data to simulate changing environments. In AIAP’07: Proc. of the 25th IASTED International Multi-Conference, pages 384–389. ACTA Press, 2007.
-  K. Nishida. Learning and Detecting Concept Drift. PhD thesis, Hokkaido University, Japan, 2008.
-  K. Nishida and K. Yamauchi. Detecting concept drift using statistical testing. In Proc. of Discovery Science, 10th Int. Conf., DS 2007, volume 4755 of LNCS, pages 264–269. Springer, 2007.
-  K. Nishida, K. Yamauchi, and T. Omori. ACE: Adaptive classifiers-ensemble system for concept-drifting environments. In Proc. of the 6th International Workshop on Multiple Classifier Systems, MCS 2005, volume 3541 of LNCS, pages 176–185. Springer, 2005.
-  M. Nunez, R. Fidalgo, and R. Morales. Learning in environments with unknown dynamics: Towards more robust concept learners. J. Mach. Learn. Res., 8:2595–2628, 2007.
-  J. Oommen and L. Rueda. Pattern Recogn., 39(3):328–341, 2006.
-  A. Patcha and J. Park. An overview of anomaly detection techniques: Existing solutions and latest technological trends. Comput. Netw., 51(12):3448–3470, 2007.
-  J. Patist. Optimal window change detection. In ICDMW ’07: Proc. of the 7th IEEE Int. Conf. on Data Mining Workshops, pages 557–562. IEEE Computer Society, 2007.
-  A. Pawling, N. Chawla, and G. Madey. Anomaly detection in a mobile communication network. Comput. Math. Organ. Theory, 13(4):407–422, 2007.
-  N. Pelekis, B. Theodoulidis, I. Kopanakis, and Y. Theodoridis. Literature review of spatio-temporal database models. Knowl. Eng. Rev., 19(3):235–274, 2004.
-  N. Poh, R. Wong, J. Kittler, and F. Roli. Challenges and research directions for adaptive biometric recognition systems. In Proc. of Advances in Biometrics, Third International Conference, ICB 2009, volume 5558 of LNCS, pages 753–764. Springer, 2009.
-  M. Pourkashani and M. Kangavari. A cellular automata approach to detecting concept drift and dealing with noise. In AICCSA ’08: Proc. of the 2008 IEEE/ACS Int. Conf. on Computer Systems and Applications, pages 142–148. IEEE Computer Society, 2008.
-  M. Procopio, J. Mulligan, and G. Grudic. Learning terrain segmentation with classifier ensembles for autonomous robot navigation in unstructured environments. J. Field Robot., 26(2):145–175, 2009.
-  J. Quionero-Candela, M. Sugiyama, A. Schwaighofer, and N. Lawrence. Dataset Shift in Machine Learning. The MIT Press, 2009.
-  P. Rashidi and D. Cook. Keeping the resident in the loop: Adapting the smart home to the user. IEEE Trans. on Systems, Man, and Cybernetics, Part A: Systems and Humans, 39(5):949–959, 2009.
-  S. Raudys and A. Mitasiunas. Multi-agent system approach to react to sudden environmental changes. In Proc. of Machine Learning and Data Mining in Pattern Recognition, 5th Int. Conf., MLDM 2007, volume 4571 of LNCS, pages 810–823. Springer, 2007.
-  T. Reinartz. A unifying view on instance selection. Data Min. Knowl. Discov., 6(2):191–210, 2002.
-  H. Richter and S. Yang. Learning behavior in abstract memory schemes for dynamic optimization problems. Soft Comput., 13(12):1163–1173, 2009.
-  J. Roddick and M. Spiliopoulou. A survey of temporal knowledge discovery paradigms and methods. IEEE Trans. on Knowl. and Data Eng., 14(4):750–767, 2002.
-  J. Rodriguez and L. Kuncheva. Combining online classification approaches for changing environments. In SSPR/SPR, volume 5342 of LNCS, pages 520–529. Springer, 2008.
-  P. Rohlfshagen, P. Lehre, and X. Yao. Dynamic evolutionary optimisation: an analysis of frequency and magnitude of change. In GECCO ’09: Proc. of the 11th Annual conf. on Genetic and evolutionary computation, pages 1713–1720. ACM, 2009.
-  M. Rosenstein, Z. Marx, L. Kaelbling, and T. Dietterich. To transfer or not to transfer. In NIPS 2005 Workshop on Transfer Learning, 2005.
-  A. Rozsypal and M. Kubat. Association mining in time-varying domains. Intell. Data Anal., 9(3):273–288, 2005.
-  J. Scanlan, J. Hartnett, and R. Williams. DynamicWEB: Adapting to concept drift and object drift in COBWEB. In AI ’08: Proc. of the 21st Australasian Joint Conf. on Artificial Intelligence, pages 454–460. Springer-Verlag, 2008.
-  J. Schlimmer and R. Granger. Incremental learning from noisy data. Machine Learning, 1(3):317–354, 1986.
-  M. Scholz and R. Klinkenberg. Boosting classifiers for drifting concepts. Intell. Data Anal., 11(1):3–28, 2007.
-  B. Settles. Active learning literature survey. Technical report, University of Wisconsin–Madison, 2009.
-  X. Song, C. Jermaine, S. Ranka, and J. Gums. A Bayesian mixture model with linear regression mixing proportions. In KDD ’08: Proc. of the 14th ACM SIGKDD int. conf. on Knowledge discovery and data mining, pages 659–667. ACM, 2008.
-  E. Spinosa. Novelty Detection in Data Streams. PhD thesis, University of Sao Paulo, 2008.
-  K. Stanley. Learning concept drift with a committee of decision trees. Technical Report UT-AI-TR-03-302, Computer Sciences Department, University of Texas, 2003.
-  N. Street and Y. Kim. A streaming ensemble algorithm (SEA) for large-scale classification. In KDD ’01: Proc. of the 7th ACM SIGKDD international conference on Knowledge Discovery and Data Mining, pages 377–382. ACM, 2001.
-  B. Su, Y. Shen, and W. Xu. Modeling concept drift from the perspective of classifiers. In Proc. of the 2008 IEEE Conference on Cybernetics and Intelligent Systems, pages 1055–1060, 2008.
-  T. Sung, N. Chang, and G. Lee. Dynamics of modeling in data mining: interpretive approach to bankruptcy prediction. J. Manage. Inf. Syst., 16(1):63–85, 1999.
-  N. Syed, H. Liu, and K. Sung. Handling concept drifts in incremental learning with support vector machines. In KDD ’99: Proc. of the 5th ACM SIGKDD int. conf. on Knowledge discovery and data mining, pages 317–321. ACM, 1999.
-  S. Thrun, M. Montemerlo, H. Dahlkamp, D. Stavens, A. Aron, J. Diebel, P. Fong, J. Gale, M. Halpenny, G. Hoffmann, K. Lau, C. Oakley, M. Palatucci, V. Pratt, P. Stang, S. Strohband, C. Dupont, L.-E. Jendrossek, C. Koelen, C. Markey, C. Rummel, J. van Niekerk, E. Jensen, P. Alessandrini, G. Bradski, B. Davies, S. Ettinger, A. Kaehler, A. Nefian, and P. Mahoney. Winning the darpa grand challenge. Journal of Field Robotics, 23(9):661–692, 2006.
-  C. Tsai, C. Lee, and W. Yang. Mining decision rules on data streams in the presence of concept drifts. Expert Syst. Appl., 36(2):1164–1178, 2009.
-  A. Tsymbal, M. Pechenizkiy, P. Cunningham, and S. Puuronen. Dynamic integration of classifiers for handling concept drift. Information Fusion, 9(1):56–68, 2008.
-  C. Wang, D. Blei, and D. Heckerman. Continuous time dynamic topic models. In Uncertainty in Artificial Intelligence [UAI], pages 579–586. AUAI Press, 2008.
-  H. Wang, W. Fan, P. Yu, and J. Han. Mining concept-drifting data streams using ensemble classifiers. In KDD ’03: Proc. of the 9th ACM SIGKDD int. conf. on Knowledge discovery and data mining, pages 226–235. ACM, 2003.
-  H. Wang, J. Yin, J. Pei, P. Yu, and J. Yu. Suppressing model overfitting in mining concept-drifting data streams. In KDD ’06: Proc. of the 12th ACM SIGKDD int. conf. on Knowledge discovery and data mining, pages 736–741. ACM, 2006.
-  B. Wenerstrom and C. Giraud-Carrier. Temporal data mining in dynamic feature spaces. In ICDM ’06: Proc. of the 6th Int. Conf. on Data Mining, pages 1141–1145. IEEE Computer Society, 2006.
-  G. Widmer and M. Kubat. Effective learning in dynamic environments by explicit context tracking. In European Conference on Machine Learning, pages 227–243. Springer-Verlag, 1993.
-  G. Widmer and M. Kubat. Learning in the presence of concept drift and hidden contexts. Machine Learning, 23(1):69–101, 1996.
-  D. Widyantoro. Concept drift learning and its application to adaptive information filtering. PhD thesis, Texas A&M University, 2003.
-  D. Widyantoro and J. Yen. Relevant data expansion for learning concept drift from sparsely labeled data. IEEE Trans. on Knowl. and Data Eng., 17(3):401–412, 2005.
-  J. Wu, D. Ding, X. Hua, and B. Zhang. Tracking concept drifting with an online-optimized incremental learning framework. In MIR ’05: Proc. of the 7th ACM SIGMM int. workshop on Multimedia information retrieval, pages 33–40. ACM, 2005.
-  T. Yamaguchi. Constructing domain ontologies based on concept drift analysis. In IJCAI-99 Workshop on Ontologies and Problem-Solving Methods, 1999.
-  R. Yampolskiy and V. Govindaraju. Direct and indirect human computer interaction based biometrics. Journal of computers, 2(10):76–88, 2007.
-  S. Yang and X. Yao. Population-based incremental learning with associative memory for dynamic environments. IEEE Trans. Evolutionary Computation, 12(5):542–561, 2008.
-  Y. Yang, X. Wu, and X. Zhu. Mining in anticipation for concept change: Proactive-reactive prediction in data streams. Data Min. Knowl. Discov., 13(3):261–289, 2006.
-  Y. Yang, X. Wu, and X. Zhu. Conceptual equivalence for contrast mining in classification learning. Data Knowl. Eng., 67(3):413–429, 2008.
-  P. Zhang, X. Zhu, and Y. Shi. Categorizing and mining concept drifting data streams. In KDD ’08: Proc. of the 14th ACM SIGKDD int. conf. on Knowledge discovery and data mining, pages 812–820. ACM, 2008.
-  J. Zhou, L. Cheng, and W. Bischof. Prediction and change detection in sequential data for interactive applications. In National Conference on Artificial Intelligence (AAAI), pages 805–810. AAAI, 2008.
-  I. Zliobaite and T. Krilavicius. CLAN: Clustering for credit risk assessment. An entry to the PAKDD 2009 data mining competition, Vilnius University and Vytautas Magnus University, 2009.
-  I. Zliobaite and L. Kuncheva. Determining the training window for small sample size classification with concept drift. In ICDM’09 Workshop Proceedings, The 1st International Workshop on Transfer Mining (TM’09), 2009. To appear.