
BINet: Multi-perspective Business Process Anomaly Classification

In this paper, we introduce BINet, a neural network architecture for real-time multi-perspective anomaly detection in business process event logs. BINet is designed to handle both the control flow and the data perspective of a business process. Additionally, we propose a set of heuristics for setting the threshold of an anomaly detection algorithm automatically. We demonstrate that BINet can be used to detect anomalies in event logs not only on a case level but also on event attribute level. Finally, we demonstrate that a simple set of rules can be used to utilize the output of BINet for anomaly classification. We compare BINet to eight other state-of-the-art anomaly detection algorithms and evaluate their performance on an elaborate data corpus of 29 synthetic and 15 real-life event logs. BINet outperforms all other methods both on the synthetic as well as on the real-life datasets.



1 Introduction

Anomaly detection is an important topic for today’s businesses because its application areas are so manifold. Fraud detection, intrusion detection, and outlier detection are only a few examples. However, anomaly detection can also be applied to business process executions, for example, to clean datasets for more robust predictive analytics and robotic process automation (RPA). Especially in RPA, anomaly detection is an integral part because the robotic agents must recognize tasks they are unable to execute so as not to halt the process. Naturally, businesses are interested in anomalies within their processes because these can be indicators of inefficiencies, insufficiently trained employees, or even fraudulent activities. Consequently, being able to detect such anomalies is of great value, for they can have an enormous impact on the economic well-being of the business.

In today’s digital world, companies rely more and more on process-aware information systems (PAISs) to accelerate their processes. Within these systems, anomaly detection should be an automatic task, and thus a fully autonomous anomaly detection system is desirable. Most anomaly detection algorithms still rely on a threshold that distinguishes normal from anomalous behavior. Typically, this threshold has to be set by a user and then remains fixed during the execution. To set the threshold adequately, the user requires a deep understanding of the underlying process, which, in the ever more complex processes of the future, is a challenging endeavor.

This paper is an extension of our previous work on BINet from 2018 nolle2018binet . Compared to the original publication, we have slightly simplified the architecture of BINet and present three different versions of BINet, each with different dependency modeling capabilities. Additionally, we elaborate on the threshold heuristics proposed in nolle2018binet and introduce a novel set of heuristics not mentioned in the original paper. We improved the dataset generation algorithm, which now uses an extended likelihood graph to generate causally dependent activities and event attributes. Finally, we propose a simple rule-based classifier to distinguish different anomaly classes, solely based on the outputs of BINet.

BINet (Business Intelligence Network) is a neural network architecture that allows detecting anomalies on event attribute level. Often, the cause of an anomaly is only represented by the value of a single attribute. For example, a user has executed an activity without permission. This anomaly is only represented by the user attribute of precisely this event. Anomaly detection algorithms must therefore work on the lowest (attribute) level to provide the most significant benefit. BINet has been designed to process both the control flow perspective (sequence of activities) and the data perspective (see van2016process ).

Due to the nature of the architecture of BINet, it can be used for ex-post analysis, but can also be deployed in a real-time setting to detect anomalies at runtime. Being able to detect anomalies at runtime is crucial because otherwise no countermeasures can be initiated in time. BINet can be trained during the execution of the process and therefore can adapt to concept drift. If unseen attribute values occur during the training, the network can be altered and retrained on the historical data to include the new attribute value in the future. Dealing with concept drift is also important since most business processes are flexible systems. BINet can detect point anomalies as well as contextual anomalies (see han2011data ).

BINet works under the following assumptions.

  1. No domain knowledge about the process

  2. No clean dataset (i.e., the dataset contains anomalous examples)

  3. No reference model

  4. No labels (i.e., no knowledge about anomalies)

  5. No manual threshold

In the context of business processes, an anomaly is defined as a deviation from a defined behavior, i.e., the business process. An anomaly is an event that does not typically occur as a consequence of preceding events; specifically, their order and the combination of attributes. Anomalies that are attributed to the order of activities (e.g., two activities are executed in the wrong order) are called control flow anomalies. Anomalies that are attributed to the event attributes (e.g., a user has executed an activity without permission) are called data anomalies.

We compare BINet to eight state-of-the-art anomaly detection methods and evaluate on a comprehensive dataset of 29 synthetic logs and 15 real-life logs, using artificial anomalies. This work contains five main contributions.

  1. BINet neural network architecture

  2. Automatic threshold heuristics

  3. Generation algorithm for synthetic event logs

  4. Comprehensive evaluation of state-of-the-art methods

  5. Method to classify anomalies based on outputs of BINet

Throughout this paper, we use a simple paper submission process as the primary example to illustrate concepts, methods, and results. The process model in Fig. 1 describes the creation of a scientific paper. Note that the process includes the peer review process, which is executed by a reviewer, whereas the paper is conceptualized and compiled by an author. We return to this process in Sec. 3 when describing the dataset generation.

Figure 1: A simple paper submission process which is used as an example throughout the paper

2 Related Work

In the field of process mining van2016process , it is popular to use discovery algorithms to mine a process model from an event log and then use conformance checking to detect anomalous behavior wen2007mining ; Bezerra2009Anomaly ; bezerra2008anomaly . However, the proposed methods do not utilize the event attributes, and therefore cannot be used to detect anomalies on attribute level.

A more recent publication proposes the use of likelihood graphs to analyze business process behavior bohmer2016multi . This method includes important characteristics of the process itself by including the event attributes as part of an extended likelihood graph. However, this method relies on a discrete order in which the attributes are connected to the graph, which may introduce a bias towards certain attributes. Furthermore, the same activities are mapped to the same node in the likelihood graph, thereby assigning a single probability distribution to each activity. In other words, control flow dependencies cannot be modeled by the likelihood graph, because the probability distribution of attributes following an activity does not depend on the history of events.

The main drawback of this method is that it uses the initial log to build the likelihood graph, and therefore no case of the original log is classified as anomalous. This is related to the method the authors chose to determine a threshold for the anomaly detection task. We address this caveat again in Sec. 5. Nevertheless, the notion of the likelihood graph inspired the generation method for synthetic event logs in Sec. 3.

A review of traditional anomaly detection methodology can be found in pimentel2014review . Here, the authors describe and compare many methods that have been proposed over the last decades. Another elaborate summary of anomaly detection in discrete sequences is given by Chandola et al. in chandola2012survey . The authors differentiate between five basic methods for novelty detection: probabilistic, distance-based, reconstruction-based, domain-based, and information-theoretic novelty detection.

Probabilistic approaches estimate the probability distribution of the normal class and thus can detect anomalies as they come from a different distribution. An important probabilistic technique is the sliding window approach warrender1999detecting . In window-based anomaly detection, an anomaly score is assigned to each window in a sequence. The anomaly score of the sequence can then be inferred by aggregating the window anomaly scores. Recently, Wressnegger et al. used this approach for intrusion detection and gave an elaborate evaluation in wressnegger2013acloselook . While being inexpensive and easy to implement, sliding window approaches show a robust performance in finding anomalies in sequential data, especially within short regions chandola2012survey .

Distance-based novelty detection does not require a clean dataset, yet it is only partly applicable to process cases, as anomalous cases are usually very similar to normal ones. A popular distance-based approach is the one-class support vector machine (OC-SVM). Schölkopf et al. scholkopf1999support first used support vector machines cortes1995support for anomaly detection.

Reconstruction-based novelty detection (e.g., neural networks) is based on the idea to train a model that can reconstruct normal behavior but fails to do so with anomalous behavior. Therefore, the reconstruction error can be used to detect anomalies Japkowicz2001 . This approach has successfully been used for the detection of control flow anomalies nolle2016unsupervised as well as data flow anomalies nolle2018analyzing in event logs of PAISs.

Domain-based novelty detection requires domain knowledge, which violates our assumption of no domain knowledge about the process. Information-theoretic novelty detection defines anomalies as the examples that influence an information measure (e.g., entropy) on the whole dataset the most. Iteratively removing the data with the highest impact yields a cleaned dataset and thus a set of anomalies.

The core of BINet is a recurrent neural network, trained to predict the next event and its attributes. The architecture is influenced by the works of Evermann evermann2016deep ; evermann2017predicting and Tax tax2017predictive , who utilized long short-term memory hochreiter1997long (LSTM) networks for next event prediction, demonstrating their utility. LSTMs have been used for anomaly detection in different contexts like acoustic novelty detection marchi2015novel and predictive maintenance malhotra2016lstm . These applications mainly focus on the detection of anomalies in time series and not, like BINet, on multi-perspective anomaly detection in discrete sequences of events.

The novelty of BINet lies in the tailored architecture for business processes, including the control flow and data perspective, the scoring function to assign anomaly scores, and the automatic threshold heuristic. It is a universally applicable method for anomaly detection both in the control flow and the data perspective of business process event logs. Furthermore, BINet fulfills all the requirements connected to the assumptions above. Lastly, BINet can handle multiple event attributes and model causal dependencies between control flow and data perspective, as well as dependencies between event attributes. This combination is, to the best of our knowledge, novel to the field.

3 Datasets

As a basis for the following sections, we first want to define the terms case, event, log, and attribute. A log consists of cases, each of which consists of events executed within a process. Each event is defined by an activity name and its attributes, e.g., a user who executed the event. We use a nomenclature adapted from van2016process . Case, Event, Log, Attribute. Let $\mathcal{C}$ be the set of all cases, and $\mathcal{E}$ be the set of all events. The event sequence of a case $c \in \mathcal{C}$, denoted by $\sigma(c)$, is defined as $\sigma(c) = \langle e_1, e_2, \ldots, e_n \rangle \in \mathcal{E}^{*}$, where $\mathcal{E}^{*}$ is the set of all sequences over $\mathcal{E}$. An event log is a set of cases $L \subseteq \mathcal{C}$. Let $A$ be a set of attributes and $V = \bigcup_{a \in A} V_a$ be a set of attribute values, where $V_a$ is the set of possible values for the attribute $a$. Note that $n = |\sigma(c)|$ is the number of events in case $c$, $|L|$ is the number of cases in $L$, and $|A|$ is the number of event attributes.
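For illustration, this nomenclature can be mirrored in a minimal data structure. The sketch below (a log as a list of cases, a case as a list of event dictionaries) is our own illustration, not the paper's implementation; the activity and user names are borrowed from the paper process.

```python
# A minimal sketch of the case/event/log/attribute nomenclature.
# Each event carries an activity name plus its event attributes;
# a case is its sequence of events; a log is a collection of cases.

def make_event(activity, **attributes):
    """An event is an activity name together with its attributes."""
    return {"activity": activity, **attributes}

log = [
    [  # one case = the event sequence of one process execution
        make_event("Research Related Work", user="Main Author"),
        make_event("Develop Hypothesis", user="Main Author"),
        make_event("Conduct Study", user="Author"),
    ],
    [
        make_event("Research Related Work", user="Student"),
        make_event("Develop Method", user="Student"),
    ],
]

num_cases = len(log)                      # |L|
case_lengths = [len(case) for case in log]  # |sigma(c)| per case
attributes = {k for case in log for e in case for k in e if k != "activity"}
```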

3.1 Synthetic Dataset Generation

To evaluate our method, we generated synthetic event logs from random process models of different complexities. We used PLG2 burattin2015plg2 to generate six random process models. The complexity of the models varies in the number of activities, breadth, and width. We also use a handmade procurement process model called P2P as in nolle2018binet . For demonstrative purposes, we also include the Paper process from Fig. 1 in the datasets since it features human readable activities.

We adopt the notion of the extended likelihood graph (cf. bohmer2016multi ) to generate causally dependent event attributes. For each activity in the process from Fig. 1, we create a group of possible users allowed to execute the activity. Additionally, we assign different probabilities to each user. Figure 2 demonstrates the final result.

Figure 2: A likelihood graph with user attribute; 1.0 probabilities omitted for simplicity

Note that the Experiment activity appears twice in the likelihood graph. This is to introduce a long-term control flow dependency. That is, Conduct Study always eventually follows Develop Hypothesis, and never eventually follows Develop Method. Note also that the user group, as well as the corresponding probabilities, are different.

This method can easily be extended to generate more than one event attribute. For simplicity, the visualization in Fig. 2 uses a table-like structure to depict the activities and the possible users. In reality, this is implemented as a directed graph, where each possible user is a direct follower of the activity, with the probabilities being the edge weights. Hence, we can add more attributes by adding additional successors to each user and so on. Hereby, causal dependencies between event attributes can be modeled (e.g., Main Author only works Mondays and Tuesdays).

We have described long-term control flow dependencies as well as data dependencies. Data to control flow dependencies are also possible, as in activity Research Related Work. Develop Method always directly follows Research Related Work if Student is the user.

Now, we can generate event logs by using a random walk through the likelihood graph, complying with the transition probabilities and generating activities and attributes along the way. We implemented the generation algorithm so that all these dependencies can be controlled by parameters and the event attributes are automatically generated. Please refer to the code repository, and specifically the notebooks section, for a detailed description of the algorithm as well as examples.
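The random walk can be sketched as follows. The graph below is a hypothetical fragment in the spirit of Fig. 2; node names, user groups, and probabilities are illustrative and not the paper's actual generation code.

```python
import random

# Toy likelihood graph: each activity points to its weighted successors,
# and each activity has a weighted group of users allowed to execute it
# (cf. Fig. 2). All names and probabilities are made up for illustration.
FLOW = {
    "START": [("Research Related Work", 1.0)],
    "Research Related Work": [("Develop Hypothesis", 0.6), ("Develop Method", 0.4)],
    "Develop Hypothesis": [("Conduct Study", 1.0)],
    "Develop Method": [("Experiment", 1.0)],
    "Conduct Study": [("END", 1.0)],
    "Experiment": [("END", 1.0)],
}
USERS = {
    "Research Related Work": [("Main Author", 0.8), ("Student", 0.2)],
    "Develop Hypothesis": [("Main Author", 0.8), ("Author", 0.2)],
    "Develop Method": [("Student", 1.0)],
    "Conduct Study": [("Author", 1.0)],
    "Experiment": [("Student", 0.5), ("Author", 0.5)],
}

def sample(options, rng):
    """Draw one successor according to the edge weights."""
    values, weights = zip(*options)
    return rng.choices(values, weights=weights, k=1)[0]

def generate_case(rng):
    """Random walk through the likelihood graph, emitting (activity, user) events."""
    case, activity = [], sample(FLOW["START"], rng)
    while activity != "END":
        case.append((activity, sample(USERS[activity], rng)))
        activity = sample(FLOW[activity], rng)
    return case
```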

In addition to the synthetic logs, we also use the real-life event logs from the Business Process Intelligence Challenge (BPIC): BPIC12, BPIC13, BPIC15, and BPIC17. Furthermore, we use a set of 4 event logs (Anonymous) from real-life procurement processes provided by a consulting company.

3.2 Artificial Anomalies

Like Bezerra bezerra2013algorithms and Böhmer bohmer2016multi , we apply artificial anomalies to the event logs, altering a fixed percentage of all cases. Inspired by the anomaly types used in bezerra2013algorithms ; bohmer2016multi (Skip, Insert, and Switch), we identified more elaborate anomalies that frequently occur in real business processes. These anomalies are defined as follows.

  1. Skip: A necessary sequence of up to 3 events has been skipped

  2. Insert: Up to 3 random activities have been inserted

  3. Rework: A sequence of up to 3 events has been executed a second time

  4. Early: A sequence of up to 2 events has been executed too early, and hence is skipped later in the case

  5. Late: A sequence of up to 2 events has been executed too late, and hence is skipped earlier in the case

  6. Attribute: An incorrect attribute value has been set in up to 3 events
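The first three anomaly types can be sketched as simple list manipulations on a case. The helper names below are ours, and the real implementation additionally records ground-truth labels per attribute; this is only a minimal sketch of the transformations.

```python
import random

def skip(case, rng, max_len=3):
    """Skip: remove a necessary sequence of up to max_len events."""
    n = rng.randint(1, min(max_len, len(case) - 1))
    start = rng.randrange(len(case) - n + 1)
    return case[:start] + case[start + n:]

def insert(case, rng, max_len=3):
    """Insert: add up to max_len random activities not taken from the process."""
    out = list(case)
    for _ in range(rng.randint(1, max_len)):
        pos = rng.randrange(len(out) + 1)
        out.insert(pos, f"Random Activity {rng.randrange(1000)}")
    return out

def rework(case, rng, max_len=3):
    """Rework: execute a sequence of up to max_len events a second time."""
    n = rng.randint(1, min(max_len, len(case)))
    start = rng.randrange(len(case) - n + 1)
    return case[:start + n] + case[start:start + n] + case[start + n:]
```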

Notice that we do apply the artificial anomalies to the real-life event logs as well, knowing that they likely already contain natural anomalies which are not labeled. Thereby, we can measure the performance of the algorithms on the real-life logs to demonstrate feasibility while using the synthetic logs to evaluate accuracy.

As indicated in Fig. 3 we can gather a ground truth dataset by marking the attributes with their respective anomaly types. Note that we introduce a Shift anomaly type, which is used to indicate the place where an Early or Late event used to be. Essentially this is equivalent to a Skip; however, we want to differentiate these two cases. This becomes important later on.

Figure 3: Anomalies applied to cases of the paper submission process

Notice that we insert a random event in case of Insert, i.e., the activity name does not come from the original process. This is to prevent a random insert from resembling, say, Rework, which is possible when randomly choosing an activity from the original process. Attribute values of inserted events are set in the same fashion. When applying an Attribute anomaly, we randomly select an attribute value from the likelihood graph that is not a direct successor of the last node.

So, we created datasets with ground truth data on attribute level. For the anomaly detection task, these labels are mapped to either Normal or Anomaly, thus creating a binary classification problem. The ground truth data can easily be adapted to case level by the following rule: A case is anomalous if any of the attributes in its events are anomalous.

We generated 4 likelihood graphs for each synthetic process model with different numbers of attributes, different transition probabilities, and dependencies. Then, we sampled logs from these likelihood graphs, resulting in 28 synthetic logs (29, including the Paper dataset). Together with BPIC12, BPIC13, BPIC15, BPIC17, and Anonymous, the corpus consists of 44 event logs. We refer to the datasets by their names as defined in Table 1, which gives a detailed overview of the corpus.

Name #Logs #Activities #Cases #Events #Attr. #Attr. Values
Paper 1 27 5K 66K 1 13
P2P 4 27 5K 48K–53K 1–4 13–386
Small 4 41 5K 53K–57K 1–4 13–360
Medium 4 65 5K 39K–42K 1–4 13–398
Large 4 85 5K 61K–68K 1–4 13–398
Huge 4 109 5K 47K–53K 1–4 13–420
Gigantic 4 154–157 5K 38K–42K 1–4 13–409
Wide 4 68–69 5K 39K–42K 1–4 13–382
BPIC12 1 73 13K 290K 0 0
BPIC13 3 11–27 0.8K–7.5K 4K–81K 2–4 23–1.8K
BPIC15 5 422–486 0.8K–1.4K 46K–62K 2–3 23–481
BPIC17 2 17–53 31K–43K 284K–1.2M 1 289–299
Anonymous 4 19–37 968–17K 6.9K–82K 1 160–362
Table 1: Overview showing dataset information

4 Method

In this section, we describe the BINet architecture and how it is utilized for anomaly detection.

4.1 Preprocessing

Due to the mathematical nature of neural networks, we must transform the logs into a numerical representation. To accomplish this, we encode all nominal attribute values by using an integer encoding. An integer encoding is a mapping of all possible attribute values for an attribute to a unique positive integer. The integer encoding is applied to all attributes of the log, including the activity name.

Now, event logs can be represented as third-order tensors. Each event is a first-order tensor $\mathbf{e} \in \mathbb{N}^{A}$, with $A$ being the number of attributes, the first attribute always being the activity name, representing the control flow perspective. Hence, an event is defined by its activity name and the event attributes. Each case is then represented as a second-order tensor $\mathbf{c} \in \mathbb{N}^{M \times A}$, with $M$ being the maximum case length of all cases in the log. To force all cases to have the same size, we pad all shorter cases with event tensors only containing zeros, which we call padding events (these are ignored by the neural network).

The log can now be represented as a third-order tensor $\mathbf{F} \in \mathbb{N}^{N \times M \times A}$, with $N$ being the number of cases in log $L$. Using matrix index notation, we can now obtain the second attribute of the third event in the ninth case with $F_{9,3,2}$. We can also obtain all the second attributes of the third event by $F_{:,3,2}$, using “:” to denote the cross-section of the tensor along the case axis. Likewise, we can obtain all the second attributes of case nine by $F_{9,:,2}$. Thus, we can define a preprocessor as follows: Preprocessor. Let $L$, $N$, $M$, and $A$ be defined as above; then a preprocessor is a mapping $L \mapsto \mathbb{N}^{N \times M \times A}$.

The preprocessor encodes all attribute values and then transforms the log into its tensor representation. In the following, we refer to the preprocessed log by $F$ (features), with $F \in \mathbb{N}^{N \times M \times A}$.
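A minimal sketch of such a preprocessor, using integer encoding plus zero-padding. Plain nested lists stand in for a tensor library, and the function and variable names are our own.

```python
def preprocess(log, attributes=("activity", "user")):
    """Integer-encode all attribute values and pad cases to equal length.

    Returns a nested list of shape (cases, max_case_len, num_attributes);
    0 is reserved for padding events."""
    # One integer encoding per attribute; start at 1 so that 0 means padding.
    encodings = {a: {} for a in attributes}

    def encode(attr, value):
        table = encodings[attr]
        if value not in table:
            table[value] = len(table) + 1
        return table[value]

    max_len = max(len(case) for case in log)
    tensor = []
    for case in log:
        rows = [[encode(a, event[a]) for a in attributes] for event in case]
        rows += [[0] * len(attributes)] * (max_len - len(rows))  # padding events
        tensor.append(rows)
    return tensor, encodings
```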

4.2 BINet Architecture

BINet is based on a neural network architecture that is trained to predict the attributes of the next event. To model the sequential nature of event log data, the core of BINet is a recurrent neural network using a Gated Recurrent Unit (GRU) cho2014learning , an alternative to the popular long short-term memory (LSTM) hochreiter1997long .

BINet processes the distinct sequence of events for each case. For each event, BINet has to predict the next event based on the history of events in the case. Thus, BINet is a sequence-to-sequence recurrent neural network.

It is important to understand that for each prediction, BINet has a recollection of not only the last event in the sequence but all of the events. The internal state of the GRU units changes with each new event and resembles a latent representation of the sequence of events up until that point. For each new case, these internal states are reset.
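The state update that gives the network its recollection of the whole prefix can be illustrated with a scalar GRU cell. The weights below are made-up constants and the state is a single number; real GRUs operate on vectors with learned weight matrices.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, p):
    """One GRU update: the new state mixes the old state with a candidate."""
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])        # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])        # reset gate
    h_cand = math.tanh(p["wh"] * x + p["uh"] * (r * h) + p["bh"])
    return (1.0 - z) * h + z * h_cand

# Hypothetical fixed weights (in practice these are learned).
params = {"wz": 0.5, "uz": -0.3, "bz": 0.1,
          "wr": 0.8, "ur": 0.2, "br": 0.0,
          "wh": 1.0, "uh": 0.5, "bh": -0.1}

def encode_sequence(events, p):
    """The final state is a latent representation of the whole event sequence."""
    h = 0.0  # the state is reset at the start of every case
    for x in events:
        h = gru_step(x, h, p)
    return h
```

Because the state is threaded through every step, the same events in a different order yield a different final state, which is exactly why the network can condition its predictions on the complete history.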

Figure 4 shows the internal architecture of BINet. We propose three versions of BINet (BINetv1, BINetv2, and BINetv3). These versions differ in their capability of modeling causal dependencies based on the inputs they receive. BINet consists of dedicated encoder GRUs (light green) for each input attribute (light blue) of the last event. These GRUs are responsible for creating a latent representation of the complete history of a single attribute. Note that each attribute is fed through an embedding layer to reduce the input dimension (see mikolov2013efficient ; dekoninck2018trace2vec ).

Figure 4: BINet architectures for a log with two event attributes, User and Day; the three versions of BINet differ only in the inputs they receive

The counterpart to the encoder GRUs are the decoder GRUs (dark green) which receive as input a concatenation of attribute representations. These GRUs are responsible for combining the control flow and the data perspective, and hence to create a latent representation of the complete history of events that is meaningful to the respective next attribute prediction in the next layer. For prediction, BINet uses a softmax output layer (pink) for each next attribute. A softmax layer outputs a probability distribution over all possible values of the respective attribute.

In its simplest form, BINet predicts the next activity and all next attributes solely based on past events. This architecture is called BINetv1 (black). However, there likely exist causal dependencies between the activity and the corresponding attribute values for that activity.

To model this dependency, we propose a second architecture, BINetv2 (red), which in addition to the input of BINetv1, gets access to the activity of the next event. This is to condition the attribute decoders onto the actual next activity, as opposed to inferring the next activity from the states of the encoders. Using the BINetv2 architecture, we can now model control flow to data dependencies.

There also likely exist dependencies between attributes (i.e., certain users only work certain days of the week). With BINetv1 or BINetv2, these dependencies are not modeled because the attributes are treated as though they were independent. To address this, the last architecture we propose is BINetv3 (orange), which gets access to the complete next event. Hence, BINetv3 can model data to data dependencies.

Note that the selective concatenation layer (dark grey) is special in the sense that it does not allow information to flow from one of the next event inputs to the respective next attribute decoder GRU. We must prevent this because otherwise, BINetv3 can simply predict correctly by observing the attributes in the next event. This problem does not arise with BINetv1 and BINetv2, because the activity GRU has no direct connection to the next event, and the user and day GRUs only have access to the next activity. Apart from the encoder-decoder structure, BINetv2 is essentially equivalent to the original BINet architecture from nolle2018binet .

We want to elaborate on the differences in the BINet versions by referring back to the paper submission process from Fig. 2. Suppose the last activity input to BINet is Research Related Work and the user was Main Author. The activity output should now give a probability of approximately 60 percent to Develop Hypothesis. In case of BINetv1, however, the user output does not match the 80 percent for Main Author and the 20 percent for Author, because the respective decoders also have to take into account the other 40 percent of not going to Develop Hypothesis, and hence output a higher probability for Student. BINetv2 does not suffer from this problem, because it knows for certain that Develop Hypothesis is the next activity (because of the next activity input), and thus can learn the probabilities appropriately.

To demonstrate the advantage of BINetv3, we have to imagine a third weekday attribute as part of the extended likelihood graph. Suppose for a given activity Main Author works only on Fridays, and Author works from Monday until Thursday. BINetv3 can correctly predict that if the weekday is Friday, the user must be Main Author, whereas BINetv2 cannot. Likewise, BINetv3 can infer that if the user is Main Author, the day must be Friday.

For BINetv1, activity, user, and day are entirely independent, for BINetv2, user and day are dependent on the activity, and for BINetv3, user is dependent on activity and day, whereas day is dependent on activity and user. Our implementation of BINet theoretically allows for any number of events and attributes.

4.3 Calculating Anomaly Scores

After the initial training phase, BINet can now be used for anomaly detection. This is based on the assumption that BINet assigns a lower probability to an anomalous attribute than to a normal one.

The last step of the anomaly detection process is the scoring of the events. Therefore, we use a scoring function in the last layer of the architecture. The scoring function for an attribute $a$ receives as input the output of the softmax layer for $a$, that is, a probability distribution $P$, and the actual value of the attribute.

Using the example above (the last activity being Research Related Work and the user being Main Author), the output of the activity softmax layer might look as depicted in Fig. 5. The probability for Develop Hypothesis is the highest, followed by Develop Method. Note that BINet gives a high probability to the two correct next activities with respect to the paper process.

Figure 5: Output of the activity softmax layer after reading activity Research Related Work and user Main Author

We can now define the anomaly score for a possible attribute value $a$ as the sum of all probabilities of the probability distribution tensor $P$ that are greater than the probability assigned to $a$, i.e., $P_a$. The scoring function is therefore defined as $s(a, P) = \sum_{i : P_i > P_a} P_i$, with $P_i$ being the $i$-th probability.

Figure 5 also shows the resulting anomaly scores, $s$, for each possible activity. Intuitively, an anomaly score of $0.55$ indicates that the probability of an attribute value lies within the top 55 percent (plus a small margin) confidence interval of BINet. Thus, we can set a threshold as indicated in Fig. 5 ($\tau = 0.8$), to flag all values as normal that lie within the first 80 percent of BINet’s confidence interval.

The scoring function is applied to each softmax output of BINet, transforming the probability distribution tensor into a scalar anomaly score. We can now obtain the anomaly score tensor $S$ by applying BINet to the feature tensor $F$, mapping an anomaly score to each attribute in each event in each case. The anomaly score for attributes of padding events is always 0.
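The scoring and flagging steps can be sketched directly (the function names are ours): the anomaly score of a value is the total probability mass that the softmax places on strictly more likely values, and a value is flagged once its score exceeds the threshold.

```python
def anomaly_score(probs, index):
    """Sum of all softmax probabilities strictly greater than the one
    assigned to the observed attribute value at `index`."""
    p = probs[index]
    return sum(q for q in probs if q > p)

def flag(probs, index, tau):
    """1 = anomalous, 0 = normal, given threshold tau."""
    return 1 if anomaly_score(probs, index) > tau else 0
```

For example, with `probs = [0.5, 0.3, 0.1, 0.1]` the most likely value scores 0, the second most likely scores 0.5, and the rare values score 0.8; with a threshold of 0.6, only the rare values are flagged.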

4.4 Training

BINet is trained without the scoring function. The GRU units are trained in a sequence-to-sequence fashion: with each event that is fed in, the network is trained to predict the attributes of the next event. We train BINet with a GRU size of two times the maximum case length, in mini batches, for 20 epochs using the Adam optimizer with the parameters stated in its original paper. Additionally, we use batch normalization ioffe2015batch after each GRU to counteract overfitting.

4.5 Detection

An anomaly detector only outputs anomaly scores. We need to define a function that maps anomaly scores to a label $l \in \{0, 1\}$, with $0$ indicating normal and $1$ indicating anomalous, by applying a threshold $\tau$. Whenever an anomaly score $s$ for an attribute is greater than $\tau$, this attribute is flagged as anomalous. Therefore, we define a threshold function $\theta(s, \tau) \in \{0, 1\}$, with inputs $s$ and $\tau$.

In the example from Fig. 5, setting $\tau = 0.8$ results in Develop Hypothesis and Develop Method being flagged as normal, whereas all other activities are flagged as anomalous.

4.6 Threshold Heuristic

Most anomaly detection algorithms rely on the user setting a threshold manually or define the threshold as a constant. To determine a threshold automatically, we propose a new heuristic that mimics how a human would set a threshold manually.

Let us consider the following example. A user is presented with a visualization of an anomaly detection result, say, a simple case overview showing all events and their attributes as depicted in Fig. 6. Anomalous attributes are shown in red and normal attributes are shown in green. The user is asked to set the threshold manually using a slider. Most people start with the slider either set to the maximum (all attributes are normal, all green) or the minimum (all attributes are anomalous, all red) and then move the slider while observing the change of colors in the visualization. Intuitively, most users fix the slider within a region where the number of shown anomalies is stable, that is, even when moving the slider to the left and right, the visualization stays the same. Furthermore, users likely prefer a threshold setting that shows significantly fewer anomalous than normal attributes, which corresponds to a slider setting closer to the maximum (the right side). In other words, a setting that produces fewer false positives.

Figure 6: Example of how an anomaly detection visualization changes with different threshold settings; the rightmost setting corresponds to how a user would likely set the slider manually

This behavior of a human setting the threshold can be modeled based on the anomaly ratio r, which can be defined as follows, with n denoting the number of non-padding events:

    r(τ) = (1/n) Σ θ(S, τ),

where the sum runs over all non-padding entries of θ(S, τ).

By dividing by n we calculate the average based only on non-padding events.
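A minimal sketch of the anomaly ratio, assuming the padding entries have already been removed and the scores are given as a flat list of attribute scores:

```python
def anomaly_ratio(scores, tau):
    """r(tau): share of (non-padding) anomaly scores flagged anomalous at tau."""
    return sum(s > tau for s in scores) / len(scores)

scores = [0.1, 0.4, 0.6, 0.9]
# reasonable candidate thresholds are the distinct scores themselves
candidates = sorted(set(scores))
print([anomaly_ratio(scores, t) for t in candidates])  # [0.75, 0.5, 0.25, 0.0]
```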

Figure 7 shows r for a run of BINetv1 on the Paper dataset. In addition to r, the figure shows the corresponding scores for the anomaly class.

Figure 7: Thresholds as defined by the heuristics in relation to the anomaly ratio and its plateaus (blue intervals)

Note that r is a discrete function that we sample for each reasonable threshold. Reasonable candidate thresholds are the distinct anomaly scores encountered in S; other values can be disregarded. The candidate thresholds are indicated by the minor ticks and the respective dotted grid lines in Fig. 7.

To define the heuristics in the following, we first have to define the first and second order derivatives of the discrete function r. They can be retrieved using the central difference approximation. Let r_i = r(τ_i) for the ordered candidate thresholds τ_1 < … < τ_k; then the derivatives are approximated by

    r′_i ≈ (r_{i+1} − r_{i−1}) / 2    and    r″_i ≈ r_{i+1} − 2 r_i + r_{i−1}.
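Sampling r at the ordered candidate thresholds, the central differences can be computed as in this sketch (leaving the boundary values at zero is our own simplification, not taken from the paper):

```python
def central_differences(r):
    """First and second discrete derivatives of the sampled anomaly ratio r,
    via central differences; boundary entries are left at zero."""
    k = len(r)
    d1 = [0.0] * k
    d2 = [0.0] * k
    for i in range(1, k - 1):
        d1[i] = (r[i + 1] - r[i - 1]) / 2.0       # r'_i
        d2[i] = r[i + 1] - 2.0 * r[i] + r[i - 1]  # r''_i
    return d1, d2

d1, d2 = central_differences([1.0, 0.5, 0.4, 0.35, 0.0])
print(d1[1], d2[1])  # -0.3 0.4
```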

To mimic the human intuition, we have to consider regions of r where the slope is close to zero. These are regions where |r′| < ε (we chose ε to be two times the average slope of r), which we refer to as plateaus (blue regions in Fig. 7). Based on these plateaus we can now define the lowest plateau heuristic as follows. Let τ_1, …, τ_m be the sequence of candidate thresholds that lie within the lowest plateau, then

    τ_lp← = τ_1,    τ_lp→ = τ_m,    τ_lp↔ = the τ_i closest to the mean of τ_1, …, τ_m,

corresponding to the left-most (lp←), the right-most (lp→), and the mean-centered (lp↔) threshold inside the lowest plateau.
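The following sketch combines the plateau detection with the lowest plateau heuristic; the choice of ε as two times the average absolute slope follows the text, while the run-length bookkeeping is our own simplification:

```python
def lowest_plateau_thresholds(taus, r):
    """Find runs of candidate thresholds where |r'| < eps (plateaus), take the
    plateau with the lowest anomaly ratio, and return its left-most,
    right-most, and mean-centered thresholds."""
    slopes = [abs(r[i + 1] - r[i]) for i in range(len(r) - 1)]
    eps = 2.0 * sum(slopes) / len(slopes)  # two times the average slope
    plateaus, run = [], [0]
    for i, s in enumerate(slopes):
        if s < eps:
            run.append(i + 1)
        else:
            if len(run) > 1:
                plateaus.append(run)
            run = [i + 1]
    if len(run) > 1:
        plateaus.append(run)
    lowest = min(plateaus, key=lambda p: min(r[i] for i in p))
    mean_tau = sum(taus[i] for i in lowest) / len(lowest)
    center = min(lowest, key=lambda i: abs(taus[i] - mean_tau))
    return taus[lowest[0]], taus[lowest[-1]], taus[center]

taus = [0.1, 0.2, 0.3, 0.6, 0.7, 0.8]
r = [1.0, 0.95, 0.9, 0.3, 0.28, 0.27]
print(lowest_plateau_thresholds(taus, r))  # (0.6, 0.8, 0.7)
```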

In nolle2018binet we proposed the elbow heuristic to mimic the same behavior. The definitions of the elbow heuristics are given by

    τ_elbow↓ = argmax_τ r″(τ),    τ_elbow↑ = argmin_τ r″(τ).

So, τ_elbow↓ is the threshold where the rate of change of r′ is maximized, whereas τ_elbow↑ is where it is minimized. With respect to r, these thresholds are the points where either a steep drop ends in a plateau (elbow↓) or a plateau ends in a steep drop (elbow↑). Although τ_elbow↓ and τ_elbow↑ can indicate the beginning and the end of a plateau, these are not necessarily the thresholds a human would naturally pick.
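A corresponding sketch of the elbow heuristics, taking the argmax and argmin of the discrete second derivative of the sampled r:

```python
def elbow_thresholds(taus, r):
    """Elbow heuristic sketch: the thresholds where the discrete second
    derivative of r is maximal (a steep drop ending in a plateau) or
    minimal (a plateau ending in a steep drop)."""
    d2 = [r[i + 1] - 2.0 * r[i] + r[i - 1] for i in range(1, len(r) - 1)]
    down = taus[1 + max(range(len(d2)), key=lambda i: d2[i])]
    up = taus[1 + min(range(len(d2)), key=lambda i: d2[i])]
    return down, up

taus = [0.1, 0.2, 0.3, 0.4, 0.5]
r = [1.0, 0.9, 0.3, 0.25, 0.2]
print(elbow_thresholds(taus, r))  # (0.3, 0.2)
```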

To compare our results to the best possible threshold, we define the best heuristic by use of the F1 score metric. The best heuristic is defined as follows, where y is the set of ground truth labels:

    τ_best = argmax_τ F1(y, θ(S, τ)).

It is important to understand that τ_best can only be used if the labels are available at runtime. However, in most cases, anomaly detection is an unsupervised problem, and hence no labels are available.
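When ground truth labels are available, τ_best can be found by exhaustive search over the candidate thresholds, as in this sketch (with a plain-Python F1 computation to stay self-contained):

```python
def f1(y_true, y_pred):
    """Binary F1 score for parallel lists of truthy/falsy labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def best_threshold(scores, y_true):
    """tau_best: candidate threshold maximizing F1 against ground truth
    labels; only usable when labels are available at runtime."""
    candidates = sorted(set(scores))
    return max(candidates, key=lambda t: f1(y_true, [s > t for s in scores]))

print(best_threshold([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # 0.2
```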

It might be beneficial to apply different thresholds to different dimensions of S. For example, it might be sensible to set a different threshold for the user attribute than for the activity because the inherent probability distributions can be different. This is possible by using “:” to apply heuristics on cross-sections of S using index notation. Let h be one of the heuristics defined above; then we can define the following threshold strategies

    τ = h(S),    τ_a[k] = h(S[:, :, k]),    τ_e[j] = h(S[:, j, :]),    τ_ea[j, k] = h(S[:, j, k]).

We only explicitly show the parameter S for clarity; other parameters are set according to the definition of the chosen heuristic.

Thus, τ_a returns a tensor that holds one threshold for each attribute in an event, whereas τ_e holds a threshold for each event position in a case. Lastly, τ_ea combines the two ideas and gives a threshold for each combination of event position and attribute. In other words, instead of applying the threshold heuristic once for all dimensions of S, we apply it multiple times for different cross-sections of S, obtaining multiple different thresholds.
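A sketch of the per-attribute and per-event strategies on a score tensor of shape cases × events × attributes; note that `np.median` stands in here as a placeholder heuristic for illustration and is not one of the heuristics proposed in the paper:

```python
import numpy as np

def per_axis_thresholds(S, heuristic, axis):
    """Apply a threshold heuristic to cross-sections of the score tensor S
    (cases x events x attributes): axis='attribute' yields one threshold
    per attribute, axis='event' one per event position."""
    if axis == 'attribute':
        return np.array([heuristic(S[:, :, k].ravel()) for k in range(S.shape[2])])
    if axis == 'event':
        return np.array([heuristic(S[:, j, :].ravel()) for j in range(S.shape[1])])
    raise ValueError(axis)

# placeholder heuristic: median score, for illustration only
S = np.random.rand(10, 5, 2)
print(per_axis_thresholds(S, np.median, 'attribute').shape)  # (2,)
```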

5 Evaluation

We evaluated BINet on all 44 event logs and compared it to eight state-of-the-art methods: two methods from chandola2012survey, a sliding window approach (t-STIDE+) warrender1999detecting and the one-class SVM (OC-SVM); two approaches from bezerra2013algorithms, the Naive algorithm and the Sampling algorithm; the denoising autoencoder (DAE) approach from nolle2018analyzing; and the approach from bohmer2016multi (Likelihood). Naive and Likelihood set the threshold automatically, so we extended the approaches to support the use of external threshold heuristics. These extensions are referred to as Naive+ and Likelihood+. For all non-deterministic methods (i.e., DAE, BINet, and Sampling), we executed five independent runs to counteract the randomness.

For the OC-SVM, we relied on the implementation of scikit-learn, using an RBF kernel of degree 3. The Naive, Sampling, Likelihood, and DAE methods were implemented as described in the original papers. Sampling, Likelihood, Naive, and the OC-SVM do not rely on a manual setting of the threshold and were unaltered. t-STIDE+ is an implementation of the t-STIDE method from warrender1999detecting, which we adapted to support the data perspective (see nolle2018analyzing). Naive+ is an implementation of Naive that removes the fixed threshold and sets the threshold according to the heuristic. Likelihood+ implements the first part of Likelihood (the generation of the extended likelihood graph from the log) and replaces the threshold algorithm with the aforementioned heuristics.

In the last section, we described the intuition of setting separate thresholds using different strategies (e.g., one threshold per attribute). To decide on the best strategy, we evaluate the four strategies (single, per attribute, per event, and per event and attribute) for all synthetic datasets and all methods that support the heuristics, using the best heuristic τ_best. The results of the experiments in Fig. 8 indicate that, indeed, it is sensible to set separate thresholds for individual attributes. Interestingly, we also find that setting a single threshold yields similar results. Setting a threshold per event or per event and attribute performs significantly worse.

Figure 8: Average F1 score by method and strategy over all synthetic datasets, using τ_best as the heuristic

Next, we repeated the same experiment for all of the aforementioned heuristics, using the per-attribute strategy. The results can be seen in Fig. 9. Intriguingly, the lowest plateau heuristics perform best for all methods except the DAE. Furthermore, it seems to work best to choose the mean-centered threshold within the lowest plateau (τ_lp↔).

Figure 9: Average F1 score by method and heuristic over all synthetic datasets, using the per-attribute strategy

Based on the results of the preliminary experiments, we set the mean-centered lowest plateau threshold τ_lp↔ as the heuristic for the following experiments for all methods apart from the DAE, for which we use an elbow heuristic instead. For Likelihood, Sampling, Naive, and OC-SVM we use the internal threshold heuristics.

The overall results are shown in Fig. 10. Note that for the real-life datasets we do not have complete information, and hence the F1 score is not a good representation of the quality of the detection algorithms. However, because we compare all methods on the same basis, the results are still meaningful. Furthermore, we only know about the artificial anomalies inside the real-life datasets, and therefore we expect a high recall (of the artificial anomalies), whereas we expect a low precision because the dataset likely contains natural anomalies (which are not labeled).

Figure 10: Average F1 score, precision, and recall by dataset type over all datasets; error bars indicate variance over datasets with different numbers of attributes and multiple runs

This theory is confirmed by the results in Fig. 10. Note also that the recall scores for both the synthetic and the real-life datasets are very similar, indicating comparable performance (for artificial anomalies) on both dataset types.

Finally, we find that BINetv1 works best on the synthetic datasets, whereas the field is mixed on the real-life datasets. However, all three BINet versions perform better than the other methods. DAE performs significantly worse on real-life data because it ran out of memory on some of the bigger datasets. Therefore, DAE has been penalized by defining precision and recall to be zero for these runs.

Detailed results can be found in Tab. 2, which also gives results for case level (i.e., only anomalous cases have to be detected, not the attributes). An interesting observation is that t-STIDE+ performs best on BPIC12 when evaluating on case level. This might be attributed to BPIC12 being a dataset without event attributes (the only one in the corpus). On attribute level, Likelihood+ is marginally better than BINet on BPIC13. For all other datasets, BINet shows the best performance.

Level Method Paper P2P Small Medium Large Huge Gigantic Wide BPIC12 BPIC13 BPIC15 BPIC17 Anonymous
Case OC-SVM warrender1999detecting 0.49 0.27 0.25 0.29 0.24 0.23 0.29 0.31 0.55 0.24 0.26 0.35 0.10
Naive Bezerra2009Anomaly 0.50 0.48 0.49 0.39 0.41 0.40 0.34 0.44 0.55 0.21 0.17 0.31 0.16
Sampling Bezerra2009Anomaly 0.50 0.49 0.49 0.47 0.49 0.49 0.45 0.49 0.55 0.21 0.17 0.32 0.23
Likelihood bohmer2016multi 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Naive+ 0.50 0.48 0.49 0.44 0.49 0.45 0.38 0.47 0.55 0.21 0.17 0.28 0.15
t-STIDE+ nolle2016unsupervised 0.40 0.51 0.53 0.43 0.45 0.45 0.41 0.47 0.68 0.32 0.29 0.32 0.22
Likelihood+ 0.85 0.74 0.76 0.72 0.73 0.73 0.73 0.73 0.62 0.44 0.33 0.45 0.51
DAE nolle2016unsupervised 0.46 0.71 0.72 0.71 0.71 0.70 0.63 0.70 0.60 0.21 0.00 0.30 0.35
BINetv1 0.74 0.77 0.78 0.75 0.75 0.75 0.74 0.76 0.62 0.41 0.37 0.51 0.51
BINetv2 nolle2018binet 0.76 0.77 0.77 0.72 0.71 0.70 0.68 0.73 0.61 0.40 0.38 0.43 0.45
BINetv3 0.79 0.77 0.76 0.71 0.69 0.69 0.66 0.74 0.66 0.45 0.36 0.49 0.50
Attribute OC-SVM warrender1999detecting 0.09 0.06 0.05 0.08 0.04 0.05 0.07 0.09 0.05 0.06 0.01 0.09 0.30
Naive Bezerra2009Anomaly 0.13 0.15 0.14 0.12 0.09 0.11 0.09 0.16 0.05 0.05 0.01 0.10 0.39
Sampling Bezerra2009Anomaly 0.33 0.33 0.34 0.32 0.34 0.34 0.31 0.32 0.08 0.07 0.01 0.14 0.39
Likelihood bohmer2016multi 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Naive+ 0.13 0.16 0.15 0.16 0.13 0.13 0.13 0.18 0.05 0.05 0.01 0.09 0.33
t-STIDE+ nolle2016unsupervised 0.28 0.33 0.32 0.25 0.25 0.26 0.19 0.28 0.40 0.12 0.05 0.17 0.39
Likelihood+ 0.74 0.61 0.63 0.58 0.57 0.58 0.55 0.57 0.35 0.29 0.07 0.28 0.63
DAE nolle2016unsupervised 0.25 0.61 0.61 0.56 0.56 0.55 0.46 0.56 0.06 0.09 0.00 0.24 0.52
BINetv1 0.64 0.68 0.69 0.65 0.65 0.65 0.64 0.65 0.42 0.28 0.21 0.38 0.60
BINetv2 nolle2018binet 0.67 0.65 0.67 0.60 0.59 0.60 0.56 0.60 0.34 0.25 0.19 0.29 0.55
BINetv3 0.67 0.65 0.66 0.59 0.57 0.59 0.54 0.61 0.48 0.29 0.19 0.35 0.63
Table 2: F1 score over all datasets by detection level and method; best results (before rounding) are shown in bold typeface

All results are given using the heuristics described above. Labels were not used in the process. Additional material (e.g., evaluation per perspective, per dataset, runtime) can be found in the respective code repository.

To validate the significance of the results, we apply the non-parametric Friedman test friedman1937use on the average ranks of all methods, based on the F1 score for all synthetic datasets. Then, we apply the Nemenyi post-hoc test nemenyi1963dist, as demonstrated in demvsar2006statistical, to calculate pairwise significance. Figure 11 shows a critical difference (CD) diagram as proposed in demvsar2006statistical to visualize the results with a confidence interval of 95 percent. Based on the critical difference, we recognize that BINetv1 performs significantly better than all other methods, except BINetv2 and BINetv3. That is, all three BINet versions lie in the same significance group with respect to the critical difference. DAE lies in the same group as BINetv2 and BINetv3, and Likelihood+ in the same as DAE and BINetv3. All other methods lie more than the critical difference away from the three BINets.

Figure 11: Critical difference diagram for all methods on all synthetic datasets; groups of methods that are not significantly different (at the 95 percent confidence level) are connected (cf. demvsar2006statistical)
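The critical difference computation can be sketched as follows; the q values are the two-tailed Nemenyi critical values at α = 0.05 as tabulated in demvsar2006statistical, and ties in the ranking are ignored for simplicity:

```python
import math

# Critical values q_alpha (alpha = 0.05) for the Nemenyi test, cf. Demsar (2006)
Q_05 = {2: 1.960, 3: 2.343, 4: 2.569, 5: 2.728, 6: 2.850,
        7: 2.949, 8: 3.031, 9: 3.102, 10: 3.164}

def critical_difference(k, n):
    """CD = q_alpha * sqrt(k (k + 1) / (6 n)) for k methods over n datasets."""
    return Q_05[k] * math.sqrt(k * (k + 1) / (6.0 * n))

def average_ranks(scores):
    """scores[dataset][method] -> average rank per method
    (higher score = better rank; ties ignored for simplicity)."""
    k = len(scores[0])
    ranks = [0.0] * k
    for row in scores:
        order = sorted(range(k), key=lambda m: -row[m])
        for rank, m in enumerate(order, start=1):
            ranks[m] += rank
    return [s / len(scores) for s in ranks]

print(average_ranks([[0.9, 0.5], [0.8, 0.6]]))  # [1.0, 2.0]
```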

6 Classifying Anomalies

Until now, we have not utilized the predictive capabilities that BINet possesses. Using the probability distribution output of the softmax layers in conjunction with the binarized anomaly scores, we can define simple rules to infer the type (or class) of an anomaly.

We use the term predictions to denote all possible attribute values that lie within the confidence interval determined by the threshold. Note that now the Shift class becomes relevant because it indicates the place where an early or late execution would belong. For each anomalous attribute (according to BINet), we apply the following rules in order.

  1. Skip: If none of the predictions appears anywhere in the case

  2. Insert: If one of the predictions appears somewhere in the case and that occurrence has not been flagged as anomalous

  3. Rework: If the same activity is present somewhere earlier in the case and is not flagged as anomalous

  4. Shift: If one of the predictions appears either somewhere earlier or later in the case and is flagged as anomalous

  5. Late: If the activity appears somewhere earlier in the predictions and is flagged as anomalous

  6. Early: If the activity appears somewhere later in the predictions and is flagged as anomalous

  7. Attribute: Trivially, all anomalous attributes in the data perspective are of type Attribute
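The first four rules can be sketched as a cascade like the following; this is a simplified, hypothetical rendering for the control-flow perspective only (the Late and Early rules require per-position predictions and are omitted here, with Attribute serving as the fallback):

```python
def classify(position, case, flagged, predictions):
    """Simplified rule cascade for one anomalous attribute.
    case: list of activities; flagged: parallel list of booleans marking
    attributes BINet deems anomalous; predictions: the plausible values
    at this position. Rules are applied in order."""
    act = case[position]
    others = [(i, a) for i, a in enumerate(case) if i != position]
    if not any(p in case for p in predictions):
        return 'Skip'      # no prediction appears anywhere in the case
    if any(a in predictions and not flagged[i] for i, a in others):
        return 'Insert'    # a prediction occurs elsewhere, unflagged
    if any(a == act and not flagged[i] for i, a in others if i < position):
        return 'Rework'    # same activity occurs earlier, unflagged
    if any(a in predictions and flagged[i] for i, a in others):
        return 'Shift'     # a prediction occurs elsewhere, flagged
    return 'Attribute'     # fallback for the remaining cases

print(classify(2, ['B', 'C', 'B'], [False, True, True], ['C']))  # Rework
```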

The result of the classification is visualized in Fig. 12. Interestingly, this set of simple rules performs remarkably well. Anomaly classes inferred by the rules are indicated by the color of the cells, whereas ground truth labels are shown as text in the cells (we omit Normal for clarity). Incidentally, this visualization also depicts the binarized anomaly scores according to the threshold since each classified attribute is also an anomalous attribute.

Figure 12: Classification of anomalies on the Paper dataset based on anomaly scores from BINetv1; colors indicate the prediction of the classifier (see legend) and actual classes are shown as text within the cells

We also included some examples where the classification is incorrect. An interesting case is the second Skip example because Evaluate has also been marked as anomalous ①. As we have defined in Fig. 2, Evaluate is an activity that always eventually follows Develop Method. However, Develop Method was skipped. Therefore, BINet is never presented with the causing activity, and hence regards Evaluate as anomalous.

A different example is that BINet misses the third Rework activity in the second example ②. We observed many of these errors, and they are related to the fact that BINet is conditioned on the last input activity and forgets the history of the case (forgetting problem). Under these conditions, Develop Method indeed directly follows Research Related Work, and hence BINet misses it. This forgetting problem is something we want to address in the future.

The most interesting case is the second Early example ③. Here, BINet misclassifies the Early activity as Shift. Upon closer inspection, we realize that this is indeed a way of explaining the anomaly, albeit not the one the labels indicate. With respect to Develop Method, Conclude indeed occurs too early in the case. Nevertheless, BINet fails to detect the actual Shift point ④, and hence the rules do not match the pattern correctly.

Using this simple set of rules we ran the classifier on all synthetic datasets, using BINetv1 as the anomaly detection method together with the best-performing heuristic for BINetv1. Figure 13 shows the results in a confusion matrix. Note that the classifier uses as input the anomaly detection result of BINet, and hence can never distinguish normal from anomalous examples. Thus, the errors for the normal class are those BINet commits in the anomaly detection task. Disregarding these errors, the rules achieve a high macro average F1 score over all datasets for the classification task; considering the simplicity of the rules and the average F1 score BINetv1 reaches on the detection task, this is a remarkable result. BINetv1 also performs well on the joint task of detection and classification.

Figure 13: Confusion matrix for all runs of BINetv1 on synthetic datasets; color indicates the distribution of the actual class

In Fig. 13, we notice that BINet errs especially often for Rework, Early, Late, and Shift. This is connected to the forgetting problem mentioned earlier. Remember that Rework, Early, and Late anomalies affect sequences of events, that is, up to 3 events can be part of a rework anomaly, and up to 2 events can be executed early or late. In the case of Rework, we have already seen an example in Fig. 12, where BINet misclassifies because of forgetting. Figure 13 confirms that this error occurs quite often since more than 50 percent of all Rework anomalies are misclassified as Normal. All of these misclassifications happen in cases where a sequence of more than one event has been executed again.

Note that not every repetition of an event is classified as a Rework, only the events identified to be anomalous are. Hence, a repeated event (a loop in the process) is classified as Normal, if BINet has learned that it can occur multiple times in a case. In the Paper process, this is demonstrated by the second Submit event, which can naturally occur multiple times in a case. In Fig. 13 we can see that BINet very rarely classifies a Normal activity as Rework (never in the Paper datasets); thus, we can conclude that BINet has learned to model the loop in the Paper process correctly.

As with Rework, we can explain the errors for Early and Late by the same argument. However, these two classes are also often misclassified as Insert or Shift. The latter goes back to the second Early example of Fig. 12 and the ambiguity of labels. The Insert errors are of a different kind. They occur because the rule set is not taking into account that multiple events can be executed early or late. We expect to find an early execution somewhere later in the case as the prediction; however, this can only be true for the first event of an early sequence. The same argument can be made for late executions.

The Shift errors are related to the fact that the random process models often allow skipping of events. When a Shift anomaly is applied to an optional event, BINet, or any other method, has no means of finding the anomaly. This could be accounted for by altering the generation algorithm.

Nevertheless, the results indicate that a simple set of rules can be used to classify the anomaly types we have introduced before. Note that this is a white-box approach and a human user can easily interpret the resulting classification. Even though the different classes are only a subset of all anomaly types, they do cover many of the anomalies encountered in real-life business processes. Importantly, it is quite easy to define new rules for new types of anomalies based on the predictive capabilities of BINet.

7 Conclusion

In this paper, we presented three versions of BINet, a neural network architecture for multi-perspective anomaly classification in business process event logs. Additionally, we proposed a set of heuristics for setting the threshold of an anomaly detection algorithm automatically, based on the anomaly ratio function. Finally, we demonstrated that a simple set of rules could be used for classification of anomaly types, solely based on the output of BINet.

BINet is a recurrent neural network, and can, therefore, be used for real-time anomaly detection, since it does not require a completed case for detection. BINet does not rely on any information about the process modeled by an event log, nor does it depend on a clean dataset. Utilizing the lowest plateau heuristic, BINet’s internal threshold can be set automatically, reducing manual workload and allowing fully autonomous operation.

It can be used to find point anomalies as well as contextual anomalies because it models the sequential nature of the cases utilizing both the control flow and the data perspective. Furthermore, BINet can cope with concept drift, for it can be set up to train continuously on new cases in real time.

Based on the empirical evidence obtained in the evaluation, BINet is a promising method for anomaly detection, especially in business process event logs. BINet outperformed the competing methods on all detection levels. Specifically, on the synthetic datasets, BINet's performance clearly surpasses that of the other methods. We demonstrated that BINet also performs well on the real-life datasets, as it shows high recall of the artificial anomalies introduced to the original real-life logs.

Even though the results look very promising, there is still room for improvement. For example, BINet suffers from forgetting when sequences of events are repeated in a case. This issue can be addressed in future work, for example, by using a special attention layer. An interesting option is the use of a bidirectional encoder-decoder structure to read in cases both from left to right and from right to left. This way, sequences of repeated events can be identified from two sides, as opposed to just one.

Overall, the results presented in this paper suggest that BINet is a reliable and versatile method for detecting—and classifying—anomalies in business process logs.


This project [522/17-04] is funded in the framework of Hessen ModellProjekte, financed with funds of LOEWE, Förderlinie 3: KMU-Verbundvorhaben (State Offensive for the Development of Scientific and Economic Excellence), and by the German Federal Ministry of Education and Research (BMBF) Software Campus project “AI.RPM” [01IS17050].



  • (1) T. Nolle, A. Seeliger, M. Mühlhäuser, BINet: Multivariate Business Process Anomaly Detection Using Deep Learning, in: Proceedings of the 16th International Conference on Business Process Management – BPM’18, 2018, pp. 271–287.
  • (2) W. M. P. van der Aalst, Process Mining: Data Science in Action, Springer, 2016.
  • (3) J. Han, J. Pei, M. Kamber, Data Mining: Concepts and Techniques, Elsevier, 2011.
  • (4) L. Wen, W. M. P. van der Aalst, J. Wang, J. Sun, Mining Process Models with Non-free-choice Constructs, Data Mining and Knowledge Discovery 15 (2) (2007) 145–180.
  • (5) F. Bezerra, J. Wainer, W. M. P. van der Aalst, Anomaly Detection Using Process Mining, in: Proceedings of the 10th International Workshop on Enterprise, Business-Process and Information Systems Modeling – BPMDS’09, Springer, 2009, pp. 149–161.
  • (6) F. Bezerra, J. Wainer, Anomaly Detection Algorithms in Logs of Process Aware Systems, in: Proceedings of the 23rd Annual ACM Symposium on Applied Computing – SAC ’08, 2008, pp. 951–952.
  • (7) K. Böhmer, S. Rinderle-Ma, Multi-perspective Anomaly Detection in Business Process Execution Events, in: Proceedings of On the Move to Meaningful Internet Systems, OTM’16, Springer, 2016, pp. 80–98.
  • (8) M. A. F. Pimentel, D. A. Clifton, L. Clifton, L. Tarassenko, A Review of Novelty Detection, Signal Processing 99 (2014) 215–249.
  • (9) V. Chandola, A. Banerjee, V. Kumar, Anomaly Detection for Discrete Sequences: A Survey, IEEE Transactions on Knowledge and Data Engineering 24 (5) (2012) 823–839.
  • (10) C. Warrender, S. Forrest, B. Pearlmutter, Detecting Intrusions Using System Calls: Alternative Data Models, in: Proceedings of the 1999 IEEE Symposium on Security and Privacy – SP’99, 1999, pp. 133–145.
  • (11) C. Wressnegger, G. Schwenk, D. Arp, K. Rieck, A Close Look on N-grams in Intrusion Detection: Anomaly Detection vs. Classification, in: Proceedings of the 2013 ACM Workshop on Artificial Intelligence and Security – AISec'13, 2013, pp. 67–76.

  • (12) B. Schölkopf, R. C. Williamson, A. J. Smola, J. Shawe-Taylor, J. C. Platt, et al., Support Vector Method for Novelty Detection, in: Proceedings of the 12th International Conference on Neural Information Processing Systems – NIPS’99, Vol. 12, 1999, pp. 582–588.
  • (13) C. Cortes, V. Vapnik, Support-vector Networks, Machine Learning 20 (3) (1995) 273–297.

  • (14) N. Japkowicz, Supervised versus unsupervised binary-learning by feedforward neural networks, Machine Learning 42 (1) (2001) 97–122.
  • (15) T. Nolle, A. Seeliger, M. Mühlhäuser, Unsupervised anomaly detection in noisy business process event logs using denoising autoencoders, in: Proceedings of the 19th International Conference on Discovery Science – DS’16, Springer, 2016, pp. 442–456.
  • (16) T. Nolle, S. Luettgen, A. Seeliger, M. Mühlhäuser, Analyzing Business Process Anomalies Using Autoencoders, Machine Learning 107 (11) (2018) 1875–1893.
  • (17) J. Evermann, J.-R. Rehse, P. Fettke, A Deep Learning Approach for Predicting Process Behaviour at Runtime, in: Proceedings of the 14th International Conference on Business Process Management – BPM’16, Springer, 2016, pp. 327–338.
  • (18) J. Evermann, J.-R. Rehse, P. Fettke, Predicting Process Behaviour Using Deep Learning, Decision Support Systems 100 (2017) 129–140.
  • (19) N. Tax, I. Verenich, M. La Rosa, M. Dumas, Predictive Business Process Monitoring with LSTM Neural Networks, in: Proceedings of the 29th International Conference on Advanced Information Systems Engineering – CAiSE’17, Springer, 2017, pp. 477–492.
  • (20) S. Hochreiter, J. Schmidhuber, Long Short-term Memory, Neural Computation 9 (8) (1997) 1735–1780.
  • (21) E. Marchi, F. Vesperini, F. Eyben, S. Squartini, B. Schuller, A Novel Approach for Automatic Acoustic Novelty Detection Using a Denoising Autoencoder with Bidirectional LSTM Neural Networks.
  • (22) P. Malhotra, A. Ramakrishnan, G. Anand, L. Vig, P. Agarwal, G. Shroff, Lstm-based encoder-decoder for multi-sensor anomaly detection, arXiv preprint arXiv:1607.00148.
  • (23) A. Burattin, PLG2: Multiperspective Processes Randomization and Simulation for Online and Offline Settings, arXiv preprint arXiv:1506.08415.
  • (24) F. Bezerra, J. Wainer, Algorithms for Anomaly Detection of Traces in Logs of Process Aware Information Systems, Information Systems 38 (1) (2013) 33–44.
  • (25) K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, Y. Bengio, Learning Phrase Representations Using RNN Encoder-decoder for Statistical Machine Translation, arXiv preprint arXiv:1406.1078.
  • (26) T. Mikolov, K. Chen, G. Corrado, J. Dean, Efficient Estimation of Word Representations in Vector Space, arXiv preprint arXiv:1301.3781.
  • (27) P. De Koninck, S. vanden Broucke, J. De Weerdt, act2vec, trace2vec, log2vec, and model2vec: Representation Learning for Business Processes, in: Proceedings of the 16th International Conference on Business Process Management – BPM’18, Springer, 2018, pp. 305–321.
  • (28) D. Kingma, J. Ba, Adam: A Method for Stochastic Optimization, arXiv preprint arXiv:1412.6980.
  • (29) S. Ioffe, C. Szegedy, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, in: Proceedings of the 32nd International Conference on Machine Learning – ICML’15, 2015, pp. 448–456.
  • (30) M. Friedman, The Use of Ranks to Avoid the Assumption of Normality Implicit in the Analysis of Variance, Journal of the American Statistical Association 32 (200) (1937) 675–701.
  • (31) P. Nemenyi, Distribution-Free Multiple Comparisons, Ph.D. thesis, Princeton University (1963).
  • (32) J. Demšar, Statistical Comparisons of Classifiers Over Multiple Data Sets, Journal of Machine Learning Research 7 (Jan) (2006) 1–30.