An Experiment on Using Bayesian Networks for Process Mining

03/25/2015
by Catarina Moreira, et al.

Process mining is a technique that performs an automatic analysis of business processes from a log of events, with the promise of understanding how processes are executed in an organisation. Several models have been proposed to address this problem; here, however, we propose a different approach to deal with uncertainty. By uncertainty, we mean estimating the probability of some sequence of tasks occurring in a business process, given that only a subset of tasks may be observable. In this sense, this work proposes a new approach to perform process mining using Bayesian Networks. These structures can take into account the probability of a task being present or absent in the business process. Moreover, Bayesian Networks are able to automatically learn these probabilities through mechanisms such as the maximum likelihood estimate and EM clustering. Experiments on a Loan Application case study suggest that Bayesian Networks are adequate structures for process mining and enable a deep analysis of the business process model that can be used to answer queries about that process.


1 Introduction

Process mining is a technique that enables the automatic analysis of business processes based on event logs. Instead of designing a workflow, process mining consists of gathering information about the tasks that take place during the workflow process and storing that data in structured formats called event logs (van der Aalst, 2011). While gathering this information, it is assumed that (1) each event refers to a task in the business process, (2) each event is associated with an instance of the workflow and (3) since the events are stored by their execution time, they are sorted (van der Aalst et al., 2004).

During the last decade, process mining has been gaining considerable attention in the scientific community due to its promise of providing techniques for process discovery that lead to increased productivity and reduced costs (van der Aalst & de Medeiros, 2005).

Process modelling can be seen as the set of techniques used to graphically represent a business process. This graphical representation describes dependencies between activities that need to be executed together in order to fulfil a business target (Weske, 2012).

Since in process mining the order of the events is taken into consideration, there are already many models that can be directly applied to represent the workflow. Some of those models include Markov Chains (Ferreira et al., 2007; Rebuge & Ferreira, 2012), Petri Nets (van der Aalst, 1998), Neural Networks (Cook & Wolf, 1998) and BPMN (van der Aalst, 2011). However, Markov Chains and Petri Nets are the models most used in the process mining literature (Tiwari et al., 2008).

In this work, we propose an alternative representation of business processes using Bayesian Networks. A Bayesian Network can be defined as an acyclic directed graph in which each node represents a random variable and each edge represents a direct influence from the source node to the target node (conditional dependencies) (Spirtes et al., 2001). They differ from Markov Chains because of their cycle-free, directed structure. Moreover, Bayesian Networks have the advantage of dealing with uncertainty differently from Markov Chains. In the latter, business processes are modelled as a chain of events that are observed to occur. Under a Bayesian Network perspective, this does not apply: each task can either be present or absent in the business process. Therefore, it is possible to perform analyses that compute the probability of some task of the business process occurring, given that we do not know which tasks have already been performed (Pearl, 2009).

With this research work, we argue that the capabilities of Bayesian Networks provide a promising technique to model business processes and to perform analyses regarding risk management, cost reduction, the identification of irrelevant or repetitive tasks, etc.

The outline of this work is as follows. Section 2 presents a brief summary of Markov Chains. Section 3 makes an introduction to Bayesian Networks. It shows how to compute probabilistic inferences and presents some learning techniques that are used to automatically learn conditional probabilities in Bayesian Networks. Section 4 presents how Bayesian Networks can be applied in the realm of process mining. This section demonstrates how one can define the structure of a Bayesian Network and how one can perform automatic learning. Section 5 presents a case study in which we apply the proposed network. Finally, Section 6 summarises the current work, presents the main conclusions achieved and some directions for future work.

2 Markov Chains

A Markov Chain is defined by a state space S and a transition model that defines, for every state s in S, a next-state distribution over S. More precisely, the transition model specifies, for each pair of states (s, s'), the probability P(s' | s) of going from state s to state s' (Koller & Friedman, 2009).

Figure 1: Example of a Markov Chain

In Markov Chains, the transition probability matrix must be stochastic, that is, each row of the matrix must sum to one. Matrix 1 represents the transition matrix of the Markov Chain in Figure 1.

(1)

Suppose that one is in state s at time t. In order to compute the evolution of the system at time t+1, one just needs to perform a matrix multiplication between the current state and the transition probability matrix. The current state is encoded as a row vector that places probability 1 on s and 0 on every other state.

(2)

The calculations in Equation 2 give, for each state of the chain, the probability of transitioning to it from the current state in one step.

Moreover, if one wishes to compute the probability of a particular sequence of states, one needs to multiply the corresponding transition probabilities:

(3)
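The state-evolution and sequence-probability computations above can be sketched in a few lines of code. The 3-state transition matrix below is purely illustrative; it is not the chain of Figure 1.

```python
# Hypothetical 3-state Markov Chain; each row of the transition
# matrix must sum to one (stochastic matrix). States: s0, s1, s2.
P = [
    [0.0, 0.7, 0.3],   # transitions from s0
    [0.5, 0.0, 0.5],   # transitions from s1
    [0.0, 0.0, 1.0],   # transitions from s2 (absorbing state)
]

def step(state, P):
    """One step of evolution: row vector times transition matrix."""
    n = len(P)
    return [sum(state[i] * P[i][j] for i in range(n)) for j in range(n)]

# Current state s0 at time t, encoded as a row vector.
state = [1.0, 0.0, 0.0]
print(step(state, P))       # [0.0, 0.7, 0.3]

# Probability of the sequence s0 -> s1 -> s2: the product of the
# individual transition probabilities, as in Equation 3.
print(P[0][1] * P[1][2])    # 0.35
```

Iterating `step` computes the state distribution at any future time, which is exactly the repeated matrix multiplication described above.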

3 Bayesian Networks

Bayesian Networks are directed acyclic graphs in which each node represents a different random variable from a specific domain and each edge represents a direct influence from the source node to the target node (Pearl, 1997). The graph represents independence relationships between variables, and each node is associated with a conditional probability table (CPT) which specifies a distribution over the values of a node given each possible joint assignment of values of its parents. The full joint distribution of a Bayesian Network, where X1, ..., Xn is the list of variables, is given by (Russell & Norvig, 2009):

Pr(X1, ..., Xn) = ∏_i Pr(Xi | Parents(Xi))   (4)

The formula for computing classical exact inferences in Bayesian Networks is based on the full joint distribution (Equation 4). Let e be the list of observed (evidence) variables and let Y be the remaining unobserved variables in the network. For some query variable X, the inference is given by:

Pr(X | e) = α ∑_y Pr(X, e, y)   (5)

The summation is over all possible y, i.e., all possible combinations of values of the unobserved variables Y. The parameter α corresponds to the normalisation factor for the distribution Pr(X | e) (Russell & Norvig, 2009).

3.1 Example of Application

Consider the Bayesian Network in Figure 2. Suppose that we want to determine the probability that it is raining given that we know that the grass is wet.

Figure 2: Example of a Bayesian Network.

In order to perform such inference on a Bayesian Network, one can use Equation 5 in the following way:

(6)
(7)
(8)

Given that this inference is based on Bayes' rule, one needs to normalise the final probabilities by a factor α. This normalisation factor corresponds to:

α = 1 / ( Pr(R = T, W = T) + Pr(R = F, W = T) )   (9)

So, in order to compute Pr(R = T | W = T), one also needs to compute the probability of it not raining given that the grass is wet, Pr(R = F | W = T):

(10)
(11)
(12)

Going back to the normalisation factor in Equation 9, one can substitute the two terms by the results obtained in Equations 8 and 12.

(13)

Now that we have computed the normalisation factor, the final probabilities are:

(14)
(15)
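The enumeration procedure of Equations 6 to 15 can be sketched directly in code. Since the conditional probability tables of Figure 2 are not reproduced here, the numbers below are illustrative only, with Rain (R) and Sprinkler (S) as parents of WetGrass (W).

```python
# Hypothetical CPTs for a small Rain (R) / Sprinkler (S) / WetGrass (W)
# network; the actual values of the paper's Figure 2 are not given here,
# so these numbers are illustrative.
p_rain = {True: 0.2, False: 0.8}
p_sprinkler = {True: 0.1, False: 0.9}
p_wet = {  # Pr(W = T | S, R)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.90, (False, False): 0.0,
}

def joint(r, s, w):
    """Full joint distribution via the chain rule (Equation 4)."""
    pw = p_wet[(s, r)] if w else 1.0 - p_wet[(s, r)]
    return p_rain[r] * p_sprinkler[s] * pw

# Inference by enumeration (Equation 5): Pr(R | W = T), summing out S.
unnormalised = {r: sum(joint(r, s, True) for s in (True, False))
                for r in (True, False)}
alpha = 1.0 / sum(unnormalised.values())  # normalisation factor, as in Equation 9
posterior = {r: alpha * p for r, p in unnormalised.items()}
print(round(posterior[True], 4))   # 0.7163
```

The same summing-out and normalisation steps are what SamIam's inference engine performs automatically on larger networks.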

3.2 Learning in Bayesian Networks

There are two main approaches to build a Bayesian Network. One is to construct the network by hand and to use the knowledge of an expert to estimate the conditional probability tables. The second is to use statistical models to automatically learn these probabilities (Koller & Friedman, 2009).

Estimating the conditional probabilities by hand with the knowledge of an expert is problematic for several reasons. In some situations, the network is so big that it is almost impossible for the expert to make a reliable assignment of probabilities to the random variables. Moreover, in many situations, the distribution of the data varies according to its application and over time. This makes it impossible for an expert to reliably estimate the probabilities associated with the random variables of the Bayesian Network.

Statistical models, on the other hand, offer a mechanism to automatically learn a model that represents the probability distribution of some population.

Depending on the situation that one is modelling, one can have a fully observed dataset or an incomplete (partially observed) dataset. For the scope of this work, we only address the problem of learning in Bayesian Networks with a fully observed dataset and a known graphical structure. The data are considered fully observed if each training instance contains a full instantiation of all the random variables of our sample space (Murphy, 2012).

3.2.1 Maximum Likelihood Estimation in Bayesian Networks

Maximum likelihood estimation is a statistical method that fits the parameters of an assumed probability distribution to observed data: it chooses the parameter values under which the observed sample is most probable. These parameters can be estimated from only a partial sample of the dataset (Bishop, 2007).

Suppose that we have a Bayesian Network as specified in Figure 3. This network is parameterised by a parameter vector θ which specifies the parameters of the conditional probability distributions of the network.

Figure 3: Example of a Bayesian Network structure with unspecified conditional probability tables.

The training instances regarding Figure 3 consist of tuples of the form (x[m], y[m]), where x[m] is an instance of the random variable X, y[m] is an instance of the random variable Y and m indexes the training example in a training dataset of size M.

The likelihood function is given by:

L(θ : D) = ∏_{m=1..M} Pr(x[m], y[m] : θ)   (16)

Since in a Bayesian Network we can specify the full joint probability distribution by the chain rule, Equation 16 becomes:

L(θ : D) = ∏_{m=1..M} Pr(x[m] : θ) Pr(y[m] | x[m] : θ)   (17)

L(θ : D) = ( ∏_{m=1..M} Pr(x[m] : θ) ) ( ∏_{m=1..M} Pr(y[m] | x[m] : θ) )   (18)

Equation 18 shows that the likelihood function can be decomposed into two separate terms. If we had n random variables, then Equation 18 would also have n terms. Each of these terms is called a local likelihood function and measures how well a variable is predicted given its parents.

Moreover, one can expand the second term of Equation 18 by grouping the training instances according to the value of x[m]:

(19)

Going back to the simple Bayesian Network in Figure 3, if we analyse the first term of Equation 19, we can see that it ranges over the instances of the training data in which X takes a particular value. These instances can in turn be split into two sets according to the value of Y. Equation 20 discriminates these instances.

(20)

Then, Equation 20 becomes:

(21)
(22)

From Equation 22, we can see that the maximum likelihood estimate for a Bayesian Network with a known structure and fully observed data consists in simply counting how many times each of the possible joint assignments of X and Y appears in the training data. In order to obtain a probability value, we normalise this count by the total number of instances in which the conditioning value appears.
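For the two-node network of Figure 3, this counting scheme can be sketched as follows; the training tuples are made up for illustration.

```python
from collections import Counter

# Hypothetical fully observed training data for the two-node network
# X -> Y of Figure 3; each tuple is one training instance (x[m], y[m]).
data = [("T", "T"), ("T", "T"), ("T", "F"),
        ("F", "F"), ("F", "F"), ("F", "T")]

# Maximum likelihood estimate: count joint assignments of (X, Y) and
# normalise each count by the number of instances of the conditioning
# value of X, as described above.
joint_counts = Counter(data)
x_counts = Counter(x for x, _ in data)

theta = {(x, y): joint_counts[(x, y)] / x_counts[x]
         for (x, y) in joint_counts}
print(round(theta[("T", "T")], 4))   # 0.6667  (2 of the 3 X=T instances)
```

With a fully observed log and a known structure, this is all the "learning" that is required, which is why EM collapses to these counts later in the paper.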

3.3 SamIam

SamIam (Sensitivity Analysis, Modeling, Inference and More) is a tool that enables the graphical modelling of Bayesian Networks. It was developed by the Automated Reasoning Group at the University of California, Los Angeles (http://reasoning.cs.ucla.edu/samiam/).

SamIam is composed of a graphical interface and a reasoning engine. The graphical interface provides an easy way to model Bayesian Networks by specifying the random variables as nodes, the causal connections as edges and the respective conditional probability tables. The reasoning engine, on the other hand, can perform classical inferences over the modelled Bayesian Network, estimate parameters through learning mechanisms, perform sensitivity analysis, etc. For the scope of this work, only classical inference and the learning mechanisms will be necessary.

Examples of SamIam's graphical interface are given in Figures 4 to 9.

Figure 4: SamIam representation of the Bayesian Network of Figure 2.
Figure 5: Example of SamIam's inference engine.
Figure 6: Example of SamIam's inference engine.

Figure 4 presents the Bayesian Network from Figure 2 under the SamIam graphical interface. The marginal probabilities for each node are automatically computed as soon as the user builds the Bayesian Network.

Figures 5 and 6 give a graphical representation of the inferences that were manually computed in Equations 8 and 12. The red markers represent variables that are observed, that is, variables which have occurred. They can be seen as the conditioning side of the probabilities. For instance, in the manually computed probability in Equation 6, the observed variable was W = T; that is, we are asking the probability of it raining given that it was observed that the grass was wet.

Figure 7: Example of SamIam's inference engine.
Figure 8: Example of SamIam's inference engine.
Figure 9: Example of SamIam's inference engine.

For large Bayesian Networks, the inference process becomes computationally heavy and hard to perform manually. Therefore, SamIam provides an easy interface that automatically performs such heavy operations.

In process mining, event logs are usually associated with a large number of tasks, which can be mapped into nodes of a Bayesian Network. Consequently, for the scope of this work, we use the capabilities of SamIam to automatically compute inferences related to the probability of certain sequences of tasks occurring. This mechanism is detailed in Section 4.2 of this work.

4 Bayesian Networks for Process Mining

Probabilistic graphical models, such as Bayesian Networks, are usually used for probabilistic inferences, that is, asking queries to the model and receiving answers in the form of probability values.

Under the realm of process mining, Bayesian Networks can represent activities as nodes (i.e. random variables), and the edges between activities can be seen as transitions between tasks. From this structure, it is possible to automatically learn the conditional probability tables from a complete log of events using Maximum Likelihood Estimation (Section 3.2.1). If the log is incomplete, a Bayesian Network can also estimate the probability tables through EM Clustering, as done in the work of Bobek et al. (2013), who developed a Bayesian Network to recommend business processes.

In the literature, business processes that are learnt from event logs are usually represented by either Markov Chains or Petri Nets (Weske, 2012). In this work, however, we propose another approach to model business processes using Bayesian Networks. We do this because Bayesian Networks can deal with uncertainty more easily.

Bayesian Networks provide advantages in situations where we do not know whether some task has occurred and we need to determine the probability of the process terminating or the probability of the process reaching some other task. Therefore, these structures provide more insight than Markov Chains when there are high levels of uncertainty.

4.1 Defining the Structure

Another advantage of Bayesian Networks is that they allow the direct representation of business process diagrams by capturing the direct dependencies between tasks. However, they do not allow an explicit representation of cycles, because Bayesian Networks are directed acyclic graphs. To represent a cycle in a Bayesian Network, one would need to create many instances of the same node, which makes inference intractable, since the inference problem is NP-Complete (Figures 10 and 11).

Figure 10: Example of a representation of a Bayesian Network with cycles.
Figure 11: Example of a representation of a Markov Chain with cycles.

In this work, in order to eliminate cycles from the log of events, we used a heuristic that chooses the most probable transitions between nodes. For instance, suppose that there is a transition from one node to another that occurred 900 times, and that the reverse transition occurred 100 times. Following the proposed heuristic, we would represent only the more frequent of the two transitions in the Bayesian Network. Figures 12 and 13 illustrate this example.

Figure 12: Example of a Markov Chain with a cycle.
Figure 13: Conversion of the Markov Chain to a Bayesian Network by removing the weakest edge.
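This cycle-removal heuristic can be sketched for two-node cycles as follows; the transition counts are illustrative, and longer cycles would need additional treatment.

```python
# Whenever a pair of nodes is connected in both directions, keep only
# the more frequent of the two transitions (illustrative counts).
counts = {("A", "B"): 900, ("B", "A"): 100, ("B", "C"): 500}

def remove_two_cycles(counts):
    """Drop the weaker edge of every two-node cycle."""
    kept = {}
    for (u, v), n in counts.items():
        reverse = counts.get((v, u), 0)
        # Keep the edge if it is strictly more frequent than its reverse;
        # break exact ties by node order so exactly one edge survives.
        if n > reverse or (n == reverse and (u, v) < (v, u)):
            kept[(u, v)] = n
    return kept

print(remove_two_cycles(counts))   # {('A', 'B'): 900, ('B', 'C'): 500}
```

The surviving edges form the acyclic skeleton over which the conditional probability tables are then learned.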

Another structure that Bayesian Networks cannot represent directly is mutual exclusion. Two events are mutually exclusive if they cannot occur at the same time. Bayesian Networks can nevertheless capture mutually exclusive events by manually adding new edges to the network, which introduce explicit dependencies between the nodes. For instance, consider the business process represented by the Bayesian Network in Figure 14. Nodes B and C represent the end of the process, while node A represents a task that begins the process. In this situation, and following the semantics of the business process, it is required that nodes B and C become mutually exclusive. That is, the process flow can only end in one of these nodes and not in both of them at the same time.

Figure 14: Example of a Bayesian Network with no mutually exclusive nodes.
Figure 15: Example of a Bayesian Network with no mutually exclusive nodes.
Figure 16: Example of a Bayesian Network with no mutually exclusive nodes.

As one can see in Figures 15 and 16, the Bayesian Network cannot represent this mutual exclusion by default: when computing inferences, nothing prevents nodes B and C from occurring at the same time. Therefore, in order to semantically represent that node B cannot occur at the same time as node C, one needs to add an extra edge between B and C. This additional edge creates a new dependency between these nodes. One can then manually configure the conditional probability table of node C to represent this mutual exclusion: when node B is set to present, the probability of C occurring is zero, and vice-versa. The mutual exclusion of the Bayesian Network in Figure 14 is illustrated in Figures 17 to 19.

Figure 17: Example of a Bayesian Network with mutual exclusion.
Figure 18: Example of a Bayesian Network with mutual exclusion.
Figure 19: Example of a Bayesian Network with mutual exclusion.

Note that, in Figures 17 to 19, the probabilities of the end nodes occurring when nothing is observed changed when compared to the Bayesian Network of Figure 14. This happened because of the extra edge that was added in the latter Bayesian Network, which changed the configuration of the conditional probability tables and, consequently, the final probability values.
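The manually configured table can be sketched as a simple mapping. The node names follow Figure 14; apart from the forced zeros, the probability values are illustrative.

```python
# Pr(C = present | A, B) for the network A -> B, A -> C, with an extra
# edge B -> C added to encode mutual exclusion between B and C.
# Keys are (state of A, state of B); values other than the forced zeros
# are illustrative.
cpt_C = {
    ("present", "present"): 0.0,  # B occurred, so C cannot occur
    ("present", "absent"):  0.6,  # B absent: the process may end in C
    ("absent",  "present"): 0.0,  # inconsistent state: A never occurred
    ("absent",  "absent"):  0.0,  # process never started, so C is absent
}

# Each row is completed with Pr(C = absent | A, B) = 1 - Pr(C = present | A, B),
# which is what SamIam's Complement button computes automatically.
for states, p in cpt_C.items():
    print(states, "present:", p, "absent:", 1.0 - p)
```

Setting the (present, present) entry to zero is exactly the manual correction described above: whenever B has occurred, C is forced to be absent.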

4.2 SamIam: Designing a Bayesian Network

SamIam provides an intuitive interface for constructing Bayesian Networks. There are two modes in SamIam: the query mode (for learning and inference) and the edit mode (for defining the network structure and the conditional probabilities). When SamIam is started, the edit mode appears by default. Figure 20 shows the general edit mode interface.

Figure 20: SamIam’s edit mode default interface.

The interface enables the creation/removal of nodes and the creation/removal of edges between nodes. For each node created, there will be a configuration window that can be accessed when the node is double-clicked. In this window, one must specify a unique identifier for the node and a name to be displayed in the SamIam interface. Additionally, one also needs to specify which states the node can have. For the scope of this work, we will only have binary random variables, so each node will have exactly two states: one representing the occurrence of the random variable and another representing its absence.

The conditional probability table can be accessed by clicking the tab Probabilities. A window similar to the one presented in Figure 21 will appear.

Figure 21: SamIam’s interface to assign conditional probabilities to random variables.

In this window, a user can manually specify the conditional probabilities of the random variable. By default, SamIam fills these tables with a uniform distribution, that is, each state of each node has the same probability of occurring (Pr = 0.5).

The button Complement can be used to automatically assign the last probability value of a table row, taking into account the constraint that the probabilities of an event must sum to one. This way, the user only needs to manually specify all but one of the entries of the row; SamIam computes the remaining probability by subtracting the sum of the specified values from 1.

The button Normalize normalizes all the entries of the conditional probability table.

4.3 Learning

Given a log of events and a graphical structure, SamIam is able to find a statistical model that can automatically estimate the conditional probability tables of the given Bayesian Network. This learning process can be computed using the Maximum Likelihood Estimation (Section 3.2.1) if the log of events is complete or using the EM Clustering algorithm if the log of events is incomplete (Bishop, 2007).

In the scope of this work, since we were given a complete event log, the process of filling the conditional probability tables was given by the maximum likelihood estimation, that is, by counting the number of times each instance of the log of events was present and then by normalizing to obtain a probability value.

SamIam can do this automatically in the query mode. In the main SamIam interface, one selects the query mode as presented in Figure 22. To reach the learning menu, one needs to find the option EM Learning (Figure 23).

Figure 22: Entering in query mode.
Figure 23: Entering in the learning menu.

Under the EM Learning menu, the user is presented with a window that asks for a training file, a probability threshold, the maximum number of iterations that the algorithm should perform and if the learning algorithm should ignore entries that lead to divisions by zero. Figure 24 illustrates these options.

Figure 24: SamIam learning menu.

In Figure 24, the field Max iterations corresponds to the total number of iterations that the EM Clustering should perform in case the algorithm does not converge. For the scope of this work, this entry is irrelevant, since we are dealing with a fully observed log of events. Consequently, the EM Clustering collapses to the Maximum Likelihood Estimate.

The field Log-likelihood threshold is also used in the scope of the EM Clustering. It specifies that the algorithm converges when the change in the log-likelihood function falls below the given threshold. It is common practice in the literature to set this threshold to a small value (Bishop, 2007; Koller & Friedman, 2009).

The option Use bias to prevent divisions by zero should always be used, otherwise the Maximum Likelihood Estimate formula will try to perform a division by zero when it tries to compute the probability of an instance that does not exist in the training set.

In process mining, a training set consists of a portion of the log of events that is used to fit (train) a model for the prediction of values. In the scope of this work, the training set consists of 70% of the entries of the log of events, selected at random. The format of the training file contains the names of all random variables (nodes) in the first line. The remaining lines of the file correspond to the instances of the nodes that are specified in the log of events. In this work, we modelled binary random variables, with the instance present representing the occurrence of a task in the business process and absent representing the non-occurrence of the task. Figure 25 shows the log of events (left) and the conversion of one instance of the log of events into a training file in the SamIam format (right).

Figure 25: Conversion of the log of events (left) into a training file in the SamIam format (right).
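A sketch of this conversion for one case of the log, assuming a comma-separated training file; the exact delimiter SamIam expects may differ, and only four of the loan-application tasks are shown.

```python
# Convert one case (trace) of the event log into a row of the training
# file: the first line lists all node names and each data row marks
# every task as "present" or "absent". The comma delimiter and the
# subset of task names used here are assumptions.
nodes = ["A_SUBMITTED", "A_PARTLYSUBMITTED", "A_PREACCEPT", "A_DECLINED"]

def trace_to_row(trace):
    """Mark each node as present or absent in the given trace."""
    return ["present" if n in trace else "absent" for n in nodes]

trace = {"A_SUBMITTED", "A_PARTLYSUBMITTED", "A_DECLINED"}
print(",".join(nodes))
print(",".join(trace_to_row(trace)))
# A_SUBMITTED,A_PARTLYSUBMITTED,A_PREACCEPT,A_DECLINED
# present,present,absent,present
```

Each case of the log thus becomes one fully observed training instance over the binary random variables.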

After SamIam learns the conditional probability tables, it is necessary to correct some semantics of the network, more specifically the inclusion of mutually exclusive relationships. For instance, Figure 26 presents a conditional probability table that was automatically learned by SamIam. As one can see, when the node A_PARTLYSUBMITTED is absent, SamIam did not update the default uniform distribution, so those probabilities remained in the conditional probability table. This means that there were no events in the log that did not have an instance of the A_PARTLYSUBMITTED node. This happens because, in process mining, the activities that are performed are usually mutually exclusive, unless a special structure is used to state the contrary. In order to correct these probabilities, such that the mutual exclusion is captured, one just needs to fill the conditional probability table as illustrated in Figure 27: when the preceding node is absent, then the posterior nodes should also become absent.

Figure 26: Learned conditional probability table.
Figure 27: Corrected conditional probability table denoting mutual exclusion between the nodes.

5 Case Study: Loan Application

The event log that we use in this work is taken from a Dutch financial institute (http://www.win.tue.nl/bpi/2012/challenge). The event log represents a loan application process belonging to a global financial organisation, in which a customer requests a certain amount of money. The process is composed of three different sub-processes, and the first letter of each task identifies the sub-process it belongs to. The tasks that start with the letter A correspond to states of the application. The tasks that start with the letter O correspond to offers belonging to the application. And the tasks that start with the letter W correspond to work items belonging to the application.

The general scenario is as follows. There is a webpage that enables the submission of loan applications. A customer selects a certain amount of money and then submits the request. The system then performs some automatic tasks and checks whether the application is eligible. If it is eligible, the customer is sent an offer by mail. After this offer is received, it is evaluated. In case of any missing information, the offer goes back to the client and is evaluated again until all the required information is gathered. A final evaluation is performed on the application. Finally, the application is approved and activated.

The log contains a large number of events and cases; its statistics are summarised in Table 1.

Event                             Num. Occurrences
A_SUBMITTED                       13 087
A_PARTLYSUBMITTED                 13 087
A_PREACCEPT                       7 367
A_CANCELLED                       2 807
A_APPROVED                        2 246
A_REGISTERED                      2 246
A_ACTIVATED                       2 246
A_DECLINED                        7 635
A_FINALIZED                       5 015
A_ACCEPTED                        5 113
W_Completeren aanvraag            23 967
W_Nabellen offertes               22 976
W_Nabellen incomplete dossiers    11 407
W_Valideren aanvraag              7 895
W_Afhandelen leads                5 898
W_Beoordelen fraude               270
W_Wijzigen contractgegevens       0
O_ACCEPTED                        2 243
O_SELECTED                        7 030
O_CREATED                         7 030
O_SENT                            7 030
O_SENT_BACK                       3 454
O_CANCELLED                       3 655
O_DECLINED                        802
Table 1: Summary of the statistics of the Loan Application event log. Only COMPLETE events were taken into account.

5.1 Converting the Log of Events into a SamIam Bayesian Network

In this work, a Java program was written that receives as input the log of events in csv format and returns a Bayesian Network in a special file format readable by the SamIam toolkit. The program parses every line of the log of events and groups all activities that are complete and belong to the same instance (i.e. have the same caseId). The program automatically creates a graph in a matrix representation and computes the frequency of the connections between nodes.
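The grouping-and-counting step can be sketched as follows (the paper's implementation was in Java; the csv column names used here are assumptions about the log layout):

```python
from collections import defaultdict

def transition_counts(rows):
    """Group COMPLETE events by caseId and count the transitions between
    consecutive tasks of the same case."""
    traces = defaultdict(list)
    for row in rows:                       # rows are sorted by execution time
        if row["lifecycle"] == "COMPLETE":
            traces[row["caseId"]].append(row["task"])
    counts = defaultdict(int)
    for tasks in traces.values():
        for a, b in zip(tasks, tasks[1:]):
            counts[(a, b)] += 1
    return counts

rows = [
    {"caseId": "1", "task": "A_SUBMITTED", "lifecycle": "COMPLETE"},
    {"caseId": "1", "task": "A_PARTLYSUBMITTED", "lifecycle": "COMPLETE"},
    {"caseId": "2", "task": "A_SUBMITTED", "lifecycle": "COMPLETE"},
    {"caseId": "2", "task": "A_PARTLYSUBMITTED", "lifecycle": "COMPLETE"},
]
print(transition_counts(rows)[("A_SUBMITTED", "A_PARTLYSUBMITTED")])   # 2
```

The resulting count dictionary is the matrix-form graph representation from which both the Bayesian Network and the Markov Chain are derived.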

Given this matrix graph representation, another Java program was written to convert the matrix into a network file recognised by SamIam. Figures 28 and 29 show an example of a network file readable by SamIam, encoding a small example network.

Figure 28: Example of a SamIam network file.
Figure 29: Example of a SamIam network file.

In a first attempt, we mapped the entire log of events into a Bayesian Network. However, the full log contained many tasks (about 24 random variables), which made the resulting model too big and complex to analyse. Figure 30 shows the network directly extracted from the log of events. The cycles present in this network were already expected, since the log of events contains many events that require cycles. We applied the heuristic described in Section 4.1 to remove such cyclic structures and turn the network into a directed acyclic graph.

Figure 30: Full representation of the Loan Application Bayesian Network.

Since the network in Figure 30 was too complex, we decided to keep only the nodes concerned with the application (A_) tasks of the log of events, as was done in the works of Adriansyah & Buijs (2012), Bautista et al. (2012), Bose & van der Aalst (2012) and Kang et al. (2012).

The resulting Bayesian Network was smaller, containing only the random variables corresponding to the A_ tasks. We then altered the Bayesian Network in order to add mutually exclusive relationships between the nodes A_DECLINED and A_CANCELLED and between the nodes A_APPROVED, A_DECLINED and A_CANCELLED.

The mutually exclusive relation between the nodes A_DECLINED and A_CANCELLED is straightforward: a loan application cannot be both declined and cancelled. Additionally, if an application is known to be declined, then the probability of it being cancelled will be zero, and vice-versa. Figure 31 presents this network and Figures 32 to 34 illustrate the mutual exclusion between nodes.

Figure 31: Bayesian Network representation of the loan application. Only A_ nodes were taken into account. Manually added mutually exclusive relationships between nodes A_DECLINED and A_CANCELLED and between nodes A_APPROVED, A_DECLINED and A_CANCELLED.
Figure 32: Mutual exclusion between nodes A_DECLINED, A_CANCELLED and A_APPROVED.
Figure 33: Mutual exclusion between nodes A_DECLINED, A_CANCELLED and A_APPROVED.
Figure 34: Mutual exclusion between nodes A_DECLINED, A_CANCELLED and A_APPROVED.

In order to compare our model with other works in the literature, we also created a Markov Chain from the same log of events (Section 2). We then computed the probability of each sequence of the test set occurring in the Bayesian Network and in the Markov Chain and then compared the results. Section 5.3 presents the main outcomes of these experiments.

5.2 Converting the Log of Events into a Markov Chain

As already mentioned, we also built a Markov Chain, using a Python script, from the same training set used to generate the Bayesian Network. The transition probabilities of the Markov Chain were computed by counting the number of occurrences of each transition between consecutive events and then normalizing the counts to obtain probability values. Figure 35 shows the computed Markov Chain.
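The counting-and-normalizing step can be sketched as follows (a minimal stand-in for the paper's Python script; the toy log below uses the paper's task encodings, but its counts are made up):

```python
from collections import Counter, defaultdict

def markov_transitions(traces):
    """Estimate transition probabilities by counting consecutive event
    pairs in each trace and normalizing per source event."""
    counts = defaultdict(Counter)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(succ.values()) for b, n in succ.items()}
            for a, succ in counts.items()}

# Toy log using the paper's task encodings (the counts are made up):
log = [
    ["A_SUB", "A_PART", "A_DEC"],
    ["A_SUB", "A_PART", "A_DEC"],
    ["A_SUB", "A_PART", "A_PRE"],
]
P = markov_transitions(log)
# P["A_PART"] -> {"A_DEC": 2/3, "A_PRE": 1/3}
```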

Figure 35: Markov Chain representation of the loan application.

5.3 Results

After defining the structure of the Bayesian Network for the loan application, we randomly selected 70% of the cases in the event log as a training set and used the remaining 30% as a test set to validate our model.

The training set was given as input to SamIam in order to learn the conditional probability tables. To test the application, we developed a MATLAB program to perform probabilistic inferences: it received SamIam's network file as input and returned a Bayesian Network structure, from which we could compute full joint probability distributions and marginal probabilities. A separate Java program received the test set as input and validated the model as follows: we computed the probability of certain events occurring in the test set and compared this value with the probability given by the trained Bayesian Network. Tables 2 to 8 show the results obtained for different queries on both the test set and the training set.
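The validation logic amounts to comparing an empirical conditional probability, counted on the test set, against the trained model's estimate. A minimal sketch (the trace data is hypothetical; in the paper the counting is done on the held-out 30% test set):

```python
def empirical_prob(traces, query, evidence):
    """Estimate P(query present | evidence present) from traces by counting."""
    with_evidence = [t for t in traces if evidence in t]
    if not with_evidence:
        return 0.0
    return sum(query in t for t in with_evidence) / len(with_evidence)

def error_pct(p_test, p_train):
    """Absolute difference between test-set and trained-model probabilities,
    expressed as a percentage (the ERROR % column of Tables 2-8)."""
    return abs(p_test - p_train) * 100

# Hypothetical traces, for illustration only:
test = [["A_SUB", "A_PRE", "A_DEC"], ["A_SUB", "A_DEC"], ["A_SUB", "A_PRE"]]
p = empirical_prob(test, "A_PRE", "A_DEC")  # 1 of 2 declined traces -> 0.5
```

With this reading, the A_ACCEPTED row of Table 2 (0.1063 vs. 0.1099) yields an error of about 0.36%, matching the reported 0.3592 once unrounded probabilities are used.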

Probability Test Set Training Set ERROR %
Pr( A_CANCELLED = present ) 0.0000 0.0000 0.0000
Pr( A_ACTIVATED = present ) 0.0000 0.0000 0.0000
Pr( A_ACCEPTED = present ) 0.1063 0.1099 0.3592
Pr( A_FINALIZED = present ) 0.1010 0.1067 0.5685
Pr( A_PREACCEPT = present ) 0.2307 0.2595 2.8799
Pr( A_SUBMITTED = present ) 1.0000 1.0000 0.0000
Table 2: Results obtained when the node A_DECLINED = present was observed.
Probability Test Set Training Set ERROR %
Pr( A_DECLINED = present ) 0.0000 0.0000 0.0000
Pr( A_ACTIVATED = present ) 0.0000 0.0000 0.0000
Pr( A_ACCEPTED = present ) 0.6098 0.6857 7.5916
Pr( A_FINALIZED = present ) 0.5882 0.6567 6.8532
Pr( A_PREACCEPT = present ) 1.0000 1.0000 0.0000
Pr( A_SUBMITTED = present ) 1.0000 1.0000 0.0000
Table 3: Results obtained when the node A_CANCELLED = present was observed.
Probability Test Set Training Set ERROR %
Pr( A_DECLINED = present ) 0.5773 0.5860 0.8715
Pr( A_ACTIVATED = present ) 0.1719 0.1715 0.0387
Pr( A_ACCEPTED = present ) 0.3911 0.3905 0.0638
Pr( A_FINALIZED = present ) 0.3830 0.3833 0.0310
Pr( A_PREACCEPT = present ) 0.5559 0.5659 1.0005
Pr( A_CANCELLED = present ) 0.2238 0.1315 9.2335
Table 4: Results obtained when the node A_SUBMITTED = present was observed.
Probability Test Set Training Set ERROR %
Pr( A_DECLINED = present ) 0.2396 0.2687 2.9121
Pr( A_ACTIVATED = present ) 0.3092 0.3030 0.6208
Pr( A_ACCEPTED = present ) 0.7036 0.6900 1.3619
Pr( A_FINALIZED = present ) 0.6890 0.6773 1.1660
Pr( A_SUBMITTED = present ) 1.0000 1.0000 0.0000
Pr( A_CANCELLED = present ) 0.4027 0.2324 17.0257
Table 5: Results obtained when the node A_PREACCEPT = present was observed.
Probability Test Set Training Set ERROR %
Pr( A_DECLINED = present ) 0.1523 0.1632 1.0939
Pr( A_ACTIVATED = present ) 0.4488 0.4475 0.1303
Pr( A_ACCEPTED = present ) 1.0000 1.0000 0.0000
Pr( A_PREACCEPT = present ) 1.0000 1.0000 0.0000
Pr( A_SUBMITTED = present ) 1.0000 1.0000 0.0000
Pr( A_CANCELLED = present ) 0.3438 0.2254 11.8350
Table 6: Results obtained when the node A_FINALIZED = present was observed.
Probability Test Set Training Set ERROR %
Pr( A_DECLINED = present ) 0.1569 0.1649 0.7999
Pr( A_ACTIVATED = present ) 0.4395 0.4392 0.0253
Pr( A_FINALIZED = present ) 0.9792 0.9815 0.2333
Pr( A_PREACCEPT = present ) 1.0000 1.0000 0.0000
Pr( A_SUBMITTED = present ) 1.0000 1.0000 0.0000
Pr( A_CANCELLED = present ) 0.3490 0.2310 11.7958
Table 7: Results obtained when the node A_ACCEPTED = present was observed.
Probability Test Set Training Set ERROR %
Pr( A_DECLINED = present ) 0.0000 0.0000 0.0000
Pr( A_ACCEPTED = present ) 1.0000 1.0000 0.0000
Pr( A_FINALIZED = present ) 1.0000 1.0000 0.0000
Pr( A_PREACCEPT = present ) 1.0000 1.0000 0.0000
Pr( A_SUBMITTED = present ) 1.0000 1.0000 0.0000
Pr( A_CANCELLED = present ) 0.0000 0.0000 0.0000
Table 8: Results obtained when the node A_ACTIVATED = present was observed.

The overall results show that the Bayesian Network learned from the log of events is a good approach for process mining, since the errors obtained were very low. The most significant errors are associated with the node A_CANCELLED. For instance, in Table 5, the probability Pr( A_CANCELLED = present | A_PREACCEPT = present ) achieved an error of about 17%. One possible explanation lies in the mutual exclusivity relationships added to this node: since the nodes of a Bayesian Network influence each other through their conditional dependencies, adding new relationships introduces non-trivial effects into the model.

Another experiment made was to compare the proposed Bayesian Network with a Markov Chain. We trained a Markov Chain in the same way we did for the Bayesian Network.

Process             Encoding    Process         Encoding
A_SUBMITTED         A_SUB       A_APPROVED      A_APPR
A_PARTLYSUBMITTED   A_PART      A_REGISTERED    A_REG
A_PREACCEPT         A_PRE       A_ACTIVATED     A_ACT
A_ACCEPTED          A_ACC       A_DECLINED      A_DEC
A_FINALIZED         A_FIN       A_CANCELLED     A_CAN
Table 9: Encodings of the nodes used in the Bayesian Network.

To validate both approaches, we used the test set and computed the probability of each sequence occurring in the Bayesian Network and in the Markov Chain. These probabilities were then weighted by the number of occurrences of each sequence in the test set. The results are presented in Table 10.

Chain Occ. Test Set BN MC ERROR %
A_SUB A_PART A_PRE 22 0.046426 0.0051 4.13 %
A_SUB A_PART A_PRE A_ACC 1 0.00154 0.0002 0.13 %
A_SUB A_PART A_PRE A_ACC A_FIN 83 0.0627 0.0266 3.61 %
A_SUB A_PART A_DEC 1744 0.433843 0.4340 0.01%
A_SUB A_PART A_PRE A_DEC 282 0.046369 0.0877 4.13 %
A_SUB A_PART A_PRE A_ACC A_DEC 12 0.000534 0.0019 0.13 %
A_SUB A_PART A_PRE A_ACC A_FIN A_DEC 229 0.02363 0.0626 3.90 %
A_SUB A_PART A_PRE A_CAN 343 0.041347 0.0826 4.13 %
A_SUB A_PART A_PRE A_ACC A_CAN 19 0.003809 0.0051 0.13 %
A_SUB A_PART A_PRE A_ACC A_FIN A_CAN 517 0.0864 0.1226 3.62 %
A_SUB A_PART A_PRE A_ACC A_FIN A_APPR A_REG A_ACT 675 0.1715 0.1715 0.0000 %
Total 0.2435 0.2561 1.2674 %
Table 10: Comparison of a Bayesian Network (BN) and a Markov Chain (MC) for process mining. The Error % was computed in the following way: Error % = | Pr_BN( s ) - Pr_MC( s ) | x 100, for each sequence s.
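The Error % column can be checked directly from the reported probabilities. A short sketch over three rows of Table 10 (the last chain is abbreviated with "..." here for brevity):

```python
# Recomputing three rows of the Error % column of Table 10 from the
# reported BN and MC probabilities, using Error % = |Pr_BN - Pr_MC| * 100.
rows = {
    "A_SUB A_PART A_PRE":       (0.046426, 0.0051),
    "A_SUB A_PART A_PRE A_ACC": (0.00154, 0.0002),
    "...A_APPR A_REG A_ACT":    (0.1715, 0.1715),
}
errors = {chain: round(abs(bn - mc) * 100, 2) for chain, (bn, mc) in rows.items()}
# -> {"A_SUB A_PART A_PRE": 4.13, "A_SUB A_PART A_PRE A_ACC": 0.13,
#     "...A_APPR A_REG A_ACT": 0.0}
```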

Table 10 shows that the probabilities computed by the Bayesian Network are almost identical to those computed by the Markov Chain. Individually, no sequence in the test set had an error percentage above 4.13%, which is negligible given the total amount of data tested. Moreover, the overall error percentage between the proposed Bayesian Network and the Markov Chain was around 1.27%, which is also negligible. This means that Bayesian Networks have a performance similar to that of a Markov Chain. Consequently, one can conclude that Bayesian Networks are also a good approach to model business processes, with the advantage of being able to represent uncertainty (i.e., computing probabilities of tasks that are not known to have occurred).

5.4 Queries

As already mentioned, one of the capabilities of Bayesian Networks for process mining is their ability to deal with uncertainty: they enable the analysis of tasks that are not known to have occurred. For instance, in the Loan Application Bayesian Network, one may be interested in the probability of the business process ending successfully, knowing only that a few tasks were observed to occur. Combining this ability with SamIam's graphical capabilities enables a fast analysis of business processes, as well as risk management.

Figure 36 shows the probabilities of some nodes of the Loan Application Bayesian Network, when it is only known that the application was declined, that is, the node A_DECLINED was observed to occur. From this analysis, one can conclude that the majority of the applications that are declined have a high probability of reaching the state A_PREACCEPT. Moreover, if an application is declined, then the nodes A_ACTIVATED and A_CANCELLED are never reached.

Figure 36: Analysis of the probabilities of reaching some nodes of the Loan Application Bayesian Network, when it is known that the application was declined.

Another example is given in Figure 37. When it is known that the application ended in a cancelled state, one can conclude that the process reached the task A_PREACCEPT with probability 1 and never reached the tasks A_DECLINED and A_ACTIVATED. Moreover, there is a high probability that the application was cancelled during the tasks A_ACCEPTED and A_FINALIZED.

Figure 37: Analysis of the probabilities of reaching some nodes of the Loan Application Bayesian Network, when it is known that the application was cancelled.

The maximum uncertainty in the loan application business process occurs when one only knows that the process was started, that is, when the task A_SUBMITTED is observed to occur. In this situation, the proposed Bayesian Network estimates a high probability of the process reaching the task A_PREACCEPT or being declined (A_DECLINED). If the process reaches the task A_PREACCEPT, then from Figure 39 one can conclude that there is a high probability that the process will be either accepted or finalised.
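Such queries amount to marginalizing out the unobserved nodes. The fragment below illustrates inference by enumeration on a minimal two-edge chain; the CPT values are hand-picked to roughly match Tables 4 and 5, and this is not the learned model:

```python
# A minimal chain A_SUBMITTED -> A_PREACCEPT -> A_ACCEPTED with
# hand-picked CPT values (roughly matching Tables 4 and 5); this
# illustrates inference by enumeration, not the paper's learned model.
p_preaccept = 0.56          # P(A_PREACCEPT = present | A_SUBMITTED = present)
p_accepted = {True: 0.70,   # P(A_ACCEPTED = present | A_PREACCEPT = present)
              False: 0.0}   # acceptance requires pre-acceptance

def joint(pre, acc):
    """Joint probability of (A_PREACCEPT, A_ACCEPTED) given A_SUBMITTED."""
    pp = p_preaccept if pre else 1 - p_preaccept
    pa = p_accepted[pre] if acc else 1 - p_accepted[pre]
    return pp * pa

# Marginalize out the unobserved A_PREACCEPT node:
p_acc_given_sub = sum(joint(pre, True) for pre in (True, False))
# 0.56 * 0.70 = 0.392, close to the 0.3911 reported in Table 4
```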

Figure 38: Analysis of the probabilities of reaching some nodes of the Loan Application Bayesian Network, when it is known that the application was submitted.
Figure 39: Analysis of the probabilities of reaching some nodes of the Loan Application Bayesian Network, when it is known that the application was pre accepted.

6 Conclusion and Future Work

In this work, we propose the usage of Bayesian Networks as a new approach to represent business processes automatically extracted from event logs.

In a first step, we extracted the relationships between nodes from the log of events and then used this log to train and validate the proposed Bayesian Network.

Experiments made over a Loan Application Case study suggest that Bayesian Networks have the same performance as Markov Chains, so they are good models to make accurate predictions about sequences of events in the scope of process mining.

Moreover, by modelling a business process through Bayesian Networks, one is able to take advantage of the ability of these structures to deal with uncertainty. More specifically, Bayesian Networks enable the reconstruction of a flow by only taking into account partial observations in the business process.

As for future work, it would be interesting to extend the capabilities of Bayesian Networks to learn from incomplete logs of events. One could train such a network using EM clustering in order to find an approximate probability distribution for the occurrence of the tasks. Moreover, together with SamIam, one could try to estimate the most probable sequences of business processes using the probabilities learned from the incomplete log.
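The core EM idea for incomplete logs can be sketched on a single binary task: traces that do not record the task contribute their current expected value in the E-step. This is a toy illustration under made-up data, not the SamIam/EM-clustering implementation:

```python
# Minimal EM sketch: estimate P(task = present) when some traces do not
# record the task at all (None = unobserved). Illustrative only.

def em_present_prob(observations, iters=50):
    theta = 0.5  # initial guess for P(present)
    for _ in range(iters):
        # E-step: unobserved entries contribute their expectation theta.
        expected = sum(theta if o is None else float(o) for o in observations)
        # M-step: re-estimate the parameter from the expected counts.
        theta = expected / len(observations)
    return theta

obs = [True, True, False, None, None, True]
# The iteration converges to the fraction of "present" among the
# observed entries: 3/4 = 0.75
print(round(em_present_prob(obs), 4))
```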

References

  • Adriansyah & Buijs (2012) Adriansyah, A. & Buijs, J. (2012), Mining process performance from event logs: The bpi challenge 2012 case study, in ‘Proceedings of the 8th International Workshop on Business Process Intelligence’.
  • Bautista et al. (2012) Bautista, A. D., Wangikar, L. & Akbar, S. M. K. (2012), Process mining-driven optimization of a consumer loan approvals process: The bpic 2012 challenge, in ‘Proceedings of the 8th International Workshop on Business Process Intelligence’.
  • Bishop (2007) Bishop, C. (2007), Pattern Recognition and Machine Learning, Springer.
  • Bobek et al. (2013) Bobek, S., Baran, M., Kluza, K. & Nalepa, G. (2013), Application of bayesian networks to recommendations in business process modeling, in ‘Proceedings of the Central Europe Workshop’.
  • Bose & van der Aalst (2012) Bose, J. C. & van der Aalst, W. (2012), Process mining applied to the bpi challenge 2012: Divide and conquer while discerning resources, in ‘Proceedings of the 8th International Workshop on Business Process Intelligence’.
  • Cook & Wolf (1998) Cook, J. & Wolf, A. (1998), ‘Discovering models of software processes from event-based data’, Journal of ACM Transactions on Software Engineering and Methodology 7, 215–249.
  • Ferreira et al. (2007) Ferreira, D., Zacarias, M., Malheiros, M. & Ferreira, P. (2007), Approaching process mining with sequence clustering: Experiments and findings, in ‘In Proceedings of the 5th International Conference on Business Process Management’.
  • Kang et al. (2012) Kang, C. J., Shin, C. K., Lee, E. S., Kim, J. H. & An, M. A. (2012), Analyzing application process for a personal loan or overdraft of dutch financial institute with process mining techniques, in ‘Proceedings of the 8th International Workshop on Business Process Intelligence’.
  • Koller & Friedman (2009) Koller, D. & Friedman, N. (2009), Probabilistic Graphical Models: Principles and Techniques, MIT Press.
  • Murphy (2012) Murphy, K. (2012), Machine Learning: A Probabilistic Perspective, MIT Press.
  • Pearl (1997) Pearl, J. (1997), Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann.
  • Pearl (2009) Pearl, J. (2009), Causality: Models, Reasoning and Inference, Cambridge University Press.
  • Rebuge & Ferreira (2012) Rebuge, A. & Ferreira, D. (2012), ‘Business process analysis in healthcare environments: A methodology based on process mining’, Journal of Information Systems: Management and Engineering of Process-Aware Information Systems 37, 99–116.
  • Russell & Norvig (2009) Russell, S. & Norvig, P. (2009), Artificial Intelligence: A Modern Approach, Prentice Hall.
  • Spirtes et al. (2001) Spirtes, P., Glymour, C. & Scheines, R. (2001), Causation, Prediction and Search, MIT Press.
  • Tiwari et al. (2008) Tiwari, A., Turner, C. & Majeed, B. (2008), ‘A review of business process mining: state-of-the-art and future trends’, Journal of Business Process Management 14, 5–22.
  • van der Aalst (1998) van der Aalst, W. (1998), ‘The application of petri nets to workflow management’, Journal of Circuit Systems and Computers 8, 21–66.
  • van der Aalst & de Medeiros (2005) van der Aalst, W. & de Medeiros, A. K. (2005), ‘Process mining and security: Detecting anomalous process executions and checking process conformance’, Journal of Electronic Notes in Theoretical Computer Science 121, 3–21.
  • van der Aalst et al. (2004) van der Aalst, W., Weijters, T. & Maruster, L. (2004), ‘Workflow mining: Discovering process models from event logs’, Journal of IEEE Transactions on Knowledge and Data Engineering 16, 1128 – 1142.
  • van der Aalst (2011) van der Aalst, W. (2011), Process Mining: Discovery, Conformance and Enhancement of Business Processes, Springer.
  • Weske (2012) Weske, M. (2012), Business Process Management: Concepts, Languages, Architectures, Springer.