From Low-Level Events to Activities -- A Session-Based Approach (Extended Version)

03/10/2019 ∙ by Massimiliano de Leoni, et al. ∙ Università di Padova

Process-mining techniques aim to use event data about past executions to gain insight into how processes are executed. While these techniques have proven very valuable, they are less successful in reaching their goal if the process is flexible and, hence, events can potentially occur in any order. Furthermore, information systems can record events at a very low level, which does not match the high-level concepts known at the business level. Without abstracting sequences of events to high-level concepts, the results of applying process mining (e.g., discovered models) easily become very complex and difficult to interpret, which ultimately means that they are of little use. A large body of research exists on event abstraction, but typically a large amount of domain knowledge needs to be fed in, which is often not readily available. Other abstraction techniques are unsupervised, which yields lower accuracy. This paper puts forward a technique that requires limited domain knowledge that can be easily provided. Traces are divided into sessions, and each session is abstracted as a single high-level activity execution. The abstraction is based on a combination of automatic clustering and visualization methods. The technique was assessed on two case studies that evidently exhibit a large amount of behavior. The results clearly illustrate the benefits of the abstraction in conveying knowledge to stakeholders.


1 Introduction

Nowadays, large, complex organizations leverage well-defined processes to carry out their business more effectively and efficiently than their competitors. In a highly competitive world, organizations aim to continuously improve their business performance, which ultimately boils down to improving their processes.

The first step towards improvement is to understand how processes are actually being executed. Understanding the actual process enactment is the goal of process mining. This research field focuses on providing insights by reasoning on the actual process executions, which are recorded in so-called event logs [1]. Event logs group process events in traces, each of which contains the events related to a specific process-instance execution. An event refers to the execution of an activity (e.g., Apply for a loan) for a specific process instance (e.g., customer Mr. Bean) at a specific moment in time (e.g., on January 1st, 2018 at 3:30pm).

Figure 1: A model for a very flexible process, which shows an ocean of variability.

While process mining has proven effective in a wide range of application fields, it shows its limitations when the process intrinsically allows for a high degree of flexibility [1], or when information systems record executions into logs whose events are at a lower granularity than the concepts that are relevant from a business viewpoint. Both problems lead to an “ocean” of observed process behavior. This means that, e.g., if one tries to discover a process model, one obtains a model that is very complex and/or low-level, and thus difficult to interpret. As a matter of fact, if the granularity is too low, even event-log visualization through dotted charts [1] is less useful: users are confronted with a chart with too many dots to draw insightful conclusions.

Extreme complexity and difficulty of interpretation conflict with the initial purpose of process mining: conveying interpretable insights and knowledge to process stakeholders and owners. Typical examples are in health care [16], customer journeys [20], online retail shops, supermarkets, hospitals, home automation, and IoT systems.

Similarly to existing related work (see Section 5), we advocate the need to abstract low-level events to high-level activities. However, differently from existing approaches, we do not want to rely on the provision of an extensive amount of domain knowledge, which can be hard to obtain in several domains. On the other hand, we want to avoid completely unsupervised approaches, which naturally show lower accuracy and/or rely on strong assumptions.

To balance accuracy and practical feasibility, we aim at a technique that requires process analysts to feed in only knowledge that is limited in quantity and easy to provide. In a nutshell, the idea is that events of the same trace can be grouped into sessions such that the time distance between the last event of a session and the first event of the subsequent session is larger than a user-defined threshold. Each trace is thus seen as a sequence of sessions of events. These sessions are encoded into data points to be clustered; this way, each session is assigned to one cluster. The abstract event log is created such that each session is replaced by a high-level event that indicates the cluster to which the session belongs. The high-level events need to be named: the centroids of the clusters provide meaningful information for a process stakeholder to identify the high-level activity that corresponds to each cluster. To support stakeholders in this identification, visualization techniques based on heat maps are foreseen. However, the latter are optional: e.g., without domain knowledge, each cluster may be given the name of the most frequent activity in the sessions of the cluster, or a concatenation of the names of the most frequent activities, if more than one clearly stands out.

The benefit and feasibility of the proposed technique were assessed on two real-life case studies. The first refers to the www.werk.nl web site. Results show that overcomplex, low-level process models can be converted into high-level counterparts that are accurate according to process-mining metrics, and that are simple enough to convey information that has business value.¹ However, the idea of session-based clustering goes beyond analysing web sites; it certainly applies to other domains, including online retail shops, supermarkets, hospitals, home automation, and IoT systems. In general, one can apply the proposed technique to any domain in which events happen in batches/sessions. A second case study showcases the wider applicability of the technique and focuses on the executions of a process to manage building-permit requests.

¹ Here, a process is intended as a set of activities that are executed while complying with given ordering constraints. The activities can be of any nature, ranging from the more traditional ones performed, e.g., by a bank or city-hall employee, to web-page visits or those executed by domotics or IoT systems, such as by/with TVs, ovens, bulbs, bathtubs, or heaters.

The technique is not only beneficial when discovering a model, but also in a wider range of applications of diverse process-mining techniques. To support this claim, we showcase an example: for the second case study, the abstract log is used to compare the management of building permits when different city-hall employees are responsible.

Section 2 introduces the initial motivating example of the www.werk.nl web site. Section 3 introduces the abstraction technique, while Section 4 reports on the evaluation of the two case studies. Section 5 compares with related work, while Section 6 concludes the paper, delineating avenues of future work.

2 Motivating example

The www.werk.nl web site is a very significant example of a customer journey, intended as the product of the interaction between an organization and a customer throughout the duration of their relationship. Gartner highlights the importance of managing the customer’s experience, which is seen as “the new marketing battlefront”.² The www.werk.nl web site is run by UWV, the social security institute that implements employee insurances and provides labour-market services to residents in the Netherlands. Specifically, the web site supports unemployed residents of the Netherlands in the process of job reintegration. Once logged into the web site, people can upload their own CVs, search for suitable jobs and, more in general, interact with UWV via messages, as well as ask questions, file complaints, etc. The www.werk.nl web site is structured into sections of pages, and logged-in users can arbitrarily switch from one to another. However, to improve the experience, it would be worthwhile to introduce supporting wizards. The starting point for designing such wizards is to gain insights into the typical ways in which the web site is actually used.

² Key Findings From the Gartner Customer Experience Survey - https://www.gartner.com/smarterwithgartner/key-findings-from-the-gartner-customer-experience-survey/

Publicly available is an event log that collects the browsing behavior of the logged-in visitors in the period from July 2015 to February 2016.³ The event log comprises 335,655 events divided into 2,624 traces. We tried to discover a model of the web-site interaction without abstracting the event log. Figure 1 shows the result obtained through the new Heuristic Miner [15]. Similar results are also obtained through other miners, and all show the problems mentioned above: the model is overcomplex, with an “ocean” of activity dependencies. While this is certainly not surprising because of the freedom of visiting the web site, one still wants to discover a model that provides insights for the stakeholders.

³ The dataset is available at https://doi.org/10.4121/uuid:01345ac4-7d1d-426e-92b8-24933a079412

Figure 2: The steps of the abstraction technique based on sessions.

3 Session-based Event-log Abstraction Technique

This section introduces the technique of clustering low-level events into high-level activities. The procedure consists of four main steps, as visualized in Figure 2. The starting point is an event log. All the traces of the event log are split into sessions, which are then clustered; the centroids of the found clusters are visualized on a heat map to support the assignment of a name to each cluster. Finally, the abstract event log is created: each session is replaced by two events of the same name as that given to the cluster to which the session belongs. The two events refer to the start and the completion of the session and, respectively, take on the timestamps of the first and the last event of the session.

3.1 Preliminaries

The starting point of our technique is an event log, which consists of a set of traces, each of which is a sequence of unique events:

Definition 1 (Event, Trace, Log)

Let E be the universe of events. A trace σ is a sequence of events, i.e. σ ∈ E*. An event log L consists of a set of traces, i.e. L ⊆ E*.

Events carry information: given an event e ∈ E, act(e) and time(e) respectively return the activity associated with e and the timestamp when e occurred. In the remainder, e ∈ L indicates that there is a trace σ ∈ L s.t. e occurs in σ. Given a trace σ, σ(i) returns the i-th event of the trace; also, |σ| returns the number of events of σ.

Furthermore, given a second trace σ′, σ · σ′ indicates the trace obtained by concatenating σ′ at the end of σ.

As mentioned in Section 1, we leverage clustering techniques. In a nutshell, these take a multiset of n-tuples, elements of a domain D = D₁ × … × Dₙ, and split it into a number of disjoint smaller multisets:⁴

Definition 2 (Clustering)

Let 𝔹(D) be the set of all multisets of data points defined over the cartesian product D = D₁ × … × Dₙ. A clustering technique can be abstracted as a function clust : 𝔹(D) → ℘(𝔹(D)) that, given a multiset P ∈ 𝔹(D), returns a clustering of P into a set of multisets {C₁, …, Cₖ} such that C₁ ⊎ … ⊎ Cₖ = P and, for any i ≠ j, Cᵢ ∩ Cⱼ = ∅.

⁴ Given a (multi)set S, ℘(S) denotes the powerset, namely the set of all sub(multi)sets of S. The operator ⊎ denotes the union of multisets, namely such that the cardinality of an element in the union is the sum of the cardinalities of that element in the joined multisets.

3.2 Creation of Sessions

The first step of the technique is to identify the sessions. We introduce a session threshold Δ, a time range. For each trace σ in an event log, we iterate over its events and create a sequence of sessions ⟨s₁, …, sₘ⟩. We create a session sᵢ, a subsequence of σ, such that (1) the timestamp difference between the last event of sᵢ and the first event of sᵢ₊₁ is larger than or equal to Δ and (2) the timestamp difference between two consecutive events in sᵢ is smaller than Δ:

Definition 3 (Sessions of a Trace)

Let σ be a log trace. Let Δ be a time interval. sessions_Δ(σ) = ⟨s₁, …, sₘ⟩ denotes the session sequence of σ: (1) for any 1 ≤ i ≤ m and 1 ≤ j < |sᵢ|, time(sᵢ(j+1)) − time(sᵢ(j)) < Δ, (2) for any 1 ≤ i < m, time(sᵢ₊₁(1)) − time(sᵢ(|sᵢ|)) ≥ Δ, and (3) s₁ · s₂ · … · sₘ = σ.

The third condition states that, if we concatenate the sessions in which σ was split, we obtain σ back. The following example further clarifies:

Example 1

Consider a trace σ = ⟨a₁, b₂, c₅, b₆, d₁₃, a₁₄⟩ of an event log. The letter indicates the activity name, and the subscript is the timestamp of the event’s occurrence (e.g., d occurred at time 13). Assume that the time interval Δ = 5. One can easily see that the time difference between the second occurrence of b and the first occurrence of d is greater than the given time interval (13 − 6 = 7 ≥ 5), thus resulting in two sessions: sessions_Δ(σ) = ⟨s₁, s₂⟩ where s₁ = ⟨a₁, b₂, c₅, b₆⟩ and s₂ = ⟨d₁₃, a₁₄⟩. Note that the concatenation s₁ · s₂ results in σ.

3.3 Clustering of Sessions

Once the sessions are identified, the next step is to cluster them. To apply clustering techniques, each session s needs to be encoded as a vector, a point of a cartesian space D = D₁ × … × Dₙ. This encoding can be made using different policies. As an example, a session s can be encoded into a vector that contains one integer dimension for each activity that, as value, takes on the number of occurrences of events referring to that activity in s. The encoding is abstracted as a function enc that returns a tuple enc(s) ∈ D encoding a session s of a trace of an event log. Given an event log L, we create the multiset of data points as follows:

P = ⊎_{σ ∈ L} ⊎_{s ∈ sessions_Δ(σ)} [enc(s)]    (1)

which is then clustered into clust(P) = {C₁, …, Cₖ}. The remainder illustrates two encodings that are of more general applicability. However, it is possible to seamlessly plug in new encodings.
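The collection of data points in Equation 1 can be sketched as a short Python helper. The `sessionize` and `encode` callables are assumptions standing in for the session-splitting and encoding functions described above:

```python
def build_data_points(log, sessionize, encode):
    """Collect the encoded sessions of every trace of the log into one
    multiset of data points (a list, since duplicate points must be kept),
    mirroring Equation 1. `sessionize` splits a trace into its sessions;
    `encode` maps a session to a tuple of the clustering space."""
    points = []
    for trace in log:
        for session in sessionize(trace):
            points.append(encode(session))
    return points
```

A list rather than a set is essential here: two sessions with the same encoding must both contribute to the clustering, as the multiset union in Equation 1 prescribes.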

Frequency-based Encoding. Let L be an event log defined over an activity set A. Given a session s of a trace of L, the frequency-based encoding returns a tuple where each element is associated with a different activity of A and takes on, as value, the number of occurrences of the respective activity in the session. For instance, the sessions s₁ and s₂ of Example 1 are encoded as quadruples where the elements from the first to the fourth dimension take on values equal to the number of occurrences of a, b, c and d, respectively. This encoding is useful when one wants to cluster on the basis of the frequency of occurrence of activities in sessions. Consider, for instance, an online retail shop where each log trace contains one event for each item of product that is added to the basket. Each web-site visit corresponds to a session. The frequency-based encoding makes a vector out of each session with as many dimensions as the products that can potentially be added to a basket: the value of a certain dimension coincides with the quantity bought of the product associated with that dimension. More formally:

Definition 4 (Frequency-based Encoding)

Let L be an event log; let A = {a₁, …, aₙ} be the activities of L. Given a trace σ ∈ L and a time interval Δ, let s be a session of σ. The frequency-based encoding of s is enc_f(s) = (v₁, …, vₙ) such that, for all 1 ≤ i ≤ n, vᵢ is the number of events for activity aᵢ, i.e. vᵢ = |{j : 1 ≤ j ≤ |s| ∧ act(s(j)) = aᵢ}|.
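Definition 4 reduces to counting occurrences per activity; a minimal sketch, again assuming events are (activity, timestamp) pairs:

```python
from collections import Counter

def frequency_encoding(session, activities):
    """Frequency-based encoding of Definition 4: one dimension per activity
    in `activities`, whose value is the number of events for that activity
    within the session."""
    counts = Counter(activity for activity, _ in session)
    return tuple(counts.get(a, 0) for a in activities)
```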

Duration-Based Encoding. Given a session s of a trace of an event log L, the duration-based encoding returns a tuple enc_d(s) = (v₁, …, vₙ) where, for each activity aᵢ in the activity set of L, vᵢ is the average duration of the executions of aᵢ in s. The duration of the execution recorded by event s(j) can be computed as

time(s(j+1)) − time(s(j))    (2)

for all j s.t. 1 ≤ j < |s|. For the last event s(|s|), we use the average duration of all executions of act(s(|s|)) that are associated with events that were not the last in their respective sessions, i.e. the events for which Equation 2 can be computed. For further clarification, let us again consider the sessions s₁ and s₂ of Example 1: they are encoded as quadruples where the elements from the first to the fourth dimension take on values equal to the average durations of a, b, c and d, respectively. Note that this way to compute durations is based on the idea that events record the start of an activity execution, and none records the completion. This specific choice was driven by the analysis of the www.werk.nl web site: events are associated with starting to visit a web-site page, and users remain on that page until they start visiting the next. However, new encodings can be put forward, which consider events as the execution’s completion, cross information about resource utilization and activity executions [17], or which are based on the exact duration, if derivable from or present in the event.
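The duration-based encoding above can be sketched as follows. The `fallback_avg` dictionary is an assumption of this sketch: it stands for the precomputed log-wide average duration per activity that the paper uses for a session's last event:

```python
from collections import defaultdict

def duration_encoding(session, activities, fallback_avg):
    """Duration-based encoding: one dimension per activity, whose value is
    the average duration of that activity's executions in the session.
    Events are taken to record activity *starts* (as on werk.nl), so an
    event's duration is the gap to the next event in the session; the last
    event, having no successor, falls back to a log-wide average duration
    per activity (`fallback_avg`)."""
    durations = defaultdict(list)
    for i, (activity, timestamp) in enumerate(session):
        if i + 1 < len(session):
            durations[activity].append(session[i + 1][1] - timestamp)
        else:  # last event of the session: Equation 2 cannot be computed
            durations[activity].append(fallback_avg.get(activity, 0.0))
    return tuple(
        sum(durations[a]) / len(durations[a]) if a in durations else 0.0
        for a in activities
    )
```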

(a) Heat map of the cluster centroids.

(b) Names given to the clusters:
  Cluster 0: Visit page mijn_cv
  Cluster 1: Visit page wijziging_doorgeven
  Cluster 2: Visit page vacatures_zoeken
  Cluster 3: Visit page vacatures_bij_mijn_cv
  Cluster 4: Visit page werkmap
  Cluster 5: Visit page mijn_werkmap
  Cluster 6: Visit mijn_documenten
  Cluster 7: Visit page vacatures
  Cluster 8: Visit page taken+home
  Cluster 9: Visit page mijn_tips

Figure 3: An example of a heat map of the cluster centroids (part a) and of the names that can be given to the clusters (part b)

3.4 Visualization of Heat Maps and Creation of Abstract Event Logs

Section 3.3 produces a set of clusters (cf. Equation 1), which is the input to build the abstract event log. As mentioned, clusters need to be given names. Here, we advocate the use of heat maps to visualize the cluster centroids and, hence, facilitate the assignment of names to clusters. An example is in Figure 3(a), which refers to the application to the werk.nl web site. Each row refers to a different low-level event, a dimension of the clustering space, and each column to a different cluster.

In particular, the centroid of each cluster is normalized between 0 and 1 and shown on the heat map through different red-color intensities, with 0 being white and 1 being the most intense red. The color for a column X and row Y is proportional to the value of the dimension for low-level event Y in the centroid of cluster X. The normalization of a given centroid c = (c₁, …, cₙ) is achieved by dividing each value by the sum of the centroid’s values: c̄ᵢ = cᵢ / Σⱼ cⱼ. The following example illustrates this:

Example 2

Let us assume the following centroids: c¹ = (1, 0, 2), c² = (42, 1, 2), c³ = (3, 39, 0), c⁴ = (2, 1, 0), c⁵ = (0, 1, 2). The normalization produces c̄¹ = (0.33, 0, 0.67), c̄² = (0.93, 0.02, 0.04), c̄³ = (0.07, 0.93, 0), c̄⁴ = (0.67, 0.33, 0), c̄⁵ = (0, 0.33, 0.67).

Note that we do not normalize by simply dividing by the largest value, such as 42 in Example 2. If we did so, the first, fourth and fifth centroids would be normalized to vectors with almost-zero values in all dimensions.
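The sum-based normalization is a one-liner; a minimal sketch:

```python
def normalize_centroid(centroid):
    """Normalize a centroid by the sum of its values, so that each
    component expresses the fraction it contributes to the whole.
    Centroids whose components are all zero are returned unchanged."""
    total = sum(centroid)
    return tuple(v / total for v in centroid) if total else tuple(centroid)
```

Unlike max-based normalization, this keeps centroids with small absolute values comparable to centroids with large ones, which is what the heat map needs.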

If one obtains a heat map such as that in Figure 3(a), the stakeholder is largely facilitated in assigning names to clusters, because almost every cluster is characterized by a centroid with predominant values for one or two dimensions, each associated with a different activity. This stakeholder involvement is optional: if domain knowledge is absent, clusters can be given a name that simply coincides with the predominant dimension, or with the concatenation of the predominant ones. In sum, each cluster Cᵢ is given a name nᵢ, which makes it possible to synthesize the abstract event log as follows.

Input: Event log L, a set of clusters C₁, …, Cₖ with names n₁, …, nₖ
Result: Abstract event log L′
L′ ← ∅
foreach σ ∈ L do
      σ′ ← ⟨⟩
      foreach session s ∈ sessions_Δ(σ) do
            p ← enc(s)
            Pick Cᵢ s.t. p ∈ Cᵢ
            Create events e_s and e_c s.t.
                  act(e_s) = act(e_c) = nᵢ
                  time(e_s) = time(s(1))
                  time(e_c) = time(s(|s|))
            σ′ ← σ′ · ⟨e_s, e_c⟩
      end foreach
      L′ ← L′ ∪ {σ′}
end foreach
return L′
Algorithm 1: Creation of an Abstract Event Log

Algorithm 1 illustrates the procedure. For each log trace σ, the algorithm builds a new trace σ′ to be added to the abstract log: for each session s of σ, it determines the cluster Cᵢ to which s belongs and appends two events e_s and e_c to σ′. Events e_s and e_c respectively represent the start and the end of session s, take on the timestamps of the first and the last event of s, and refer to the high-level activity nᵢ.
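The per-trace core of Algorithm 1 can be sketched in Python. This is an illustrative sketch: sessions are assumed to be precomputed, clusters are given as plain collections of encoded points, and the (name, lifecycle, timestamp) triple is our own representation of a high-level event:

```python
def abstract_trace(sessions, encode, clusters, names):
    """Replace each session with a start event and a complete event, both
    named after the cluster that contains the session's encoding and
    carrying the timestamps of the session's first and last low-level
    events."""
    abstract = []
    for session in sessions:
        point = encode(session)
        idx = next(i for i, cluster in enumerate(clusters) if point in cluster)
        abstract.append((names[idx], "start", session[0][1]))
        abstract.append((names[idx], "complete", session[-1][1]))
    return abstract
```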

4 Evaluation

The abstraction technique introduced in this paper has been implemented as a plug-in named Session-based Log Abstraction in the TimeBasedAbstraction package of the nightly-build version of ProM.⁵ To date, the implementation features the K-means and DBSCAN clustering algorithms. As discussed in Section 3.3, the cluster centroids are visualized on a heat map to provide users with the necessary help to determine the high-level activity names; the heat-map visualization is provided via the JHeatChart library.⁶ The rest of this section illustrates the application to two case studies, for process discovery and behavior comparison.

⁵ http://www.promtools.org/
⁶ http://www.javaheatmap.com/

4.1 Experiments on the werk.nl website

This section illustrates the successful application of the technique to the case study of the werk.nl web site.

To abstract the event log, we used the duration-based encoding (cf. Section 3.3): it is certainly more important to consider how long visitors stay on a web page than to merely count the number of visits to the different pages. For instance, three 1-minute visits to a page should not weigh more than one 30-minute visit. The session threshold Δ was set to 15 minutes, because it coincides with the session timeout of www.werk.nl.

Initially, the data points that encode the sessions of the log traces were clustered via DBSCAN. The generation of the clusters with DBSCAN took nearly 2 hours on a low-profile laptop with 8 GB of RAM. The cluster centroids were visualized through the heat map in Figure 3(a). To help stakeholders, the plug-in removes the rows referring to low-level events that, after normalization, are associated with nearly-zero values in all centroids. The results in Figure 3(a) are certainly very interesting: the sessions of a certain cluster are characterized by a few particular pages, visited long and often. Note that DBSCAN does not always return clusters: it would have failed if it had not been possible to cluster the data points. Without using additional domain knowledge, each cluster was named after the low-level event (i.e., web page) that refers to the dimension with the highest value in the centroid (the most intense red color). This led to the names in Figure 3(b).

Once the names were assigned to the clusters, we generated the corresponding abstract event log. To validate its quality, the abstract log was randomly split into a 70% part, used for discovery, and a 30% part, used for testing. DBSCAN naturally identifies outliers, namely points that are not assigned to any cluster. Results show that, if those outliers are simply filtered out, the quality of the discovered model drops significantly (see the discussion below, summarized in Table 1). Therefore, we performed a post-processing step in which each outlier session is inserted into the cluster with the closest centroid. The abstract event log with this cluster assignment of outliers was used as input for the new Heuristic Miner [15], thus discovering the model in Figure 4, using the Causal-Net notation [1].

Figure 4: Process model produced by the Heuristic Miner [15] on 70% of the abstract event log of the werk.nl dataset, clustering via DBSCAN.
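The outlier post-processing boils down to a nearest-centroid assignment; a minimal sketch, assuming Euclidean distance over the encoding space:

```python
import math

def nearest_centroid(point, centroids):
    """Index of the centroid at minimum Euclidean distance from the point,
    used to re-insert a DBSCAN outlier into the closest cluster."""
    return min(range(len(centroids)),
               key=lambda i: math.dist(point, centroids[i]))
```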

Figure 5: Process model produced by the Heuristic Miner [15] on 70% of the abstract event log of the werk.nl dataset, clustering via K-Means.

The same procedure was employed to discover a high-level model with K-means, using the same 70% of the traces for discovery and the same temporal threshold and encoding as for DBSCAN. Note that, compared with DBSCAN, K-means requires one to explicitly set the number of clusters to create. Our implementation features the Elbow Method to facilitate this setup [12]: when applied to the case study, creating ten clusters seemed to provide a good balance between minimizing the error and not scattering the sessions among too many clusters (i.e., high-level activities). The resulting model is in Figure 5.

The quality of these models was assessed through the classical process-mining metrics of fitness, precision, generalization and simplicity [1]. Fitness was computed on the 30% of the abstract log that was not used for discovery. This is consistent with typical machine-learning practice of verifying process-model “recall” on traces that were not used for discovery. Conversely, precision and generalization were computed on the entire abstract log. Finally, simplicity was measured as the sum of the activities, arcs and bindings of the causal nets. Since fitness, precision and generalization are traditionally defined on Petri nets [1], the causal nets were converted to Petri nets using the implementation in [15]. The resulting Petri nets were manually adjusted to ensure soundness while not adding extra behavior. Of course, to keep the comparison fair, all models were discovered by the Heuristic Miner [15], using the same configuration of parameters. This includes the model in Figure 1.

                 K-Means   DBSCAN with post-processing   DBSCAN no post-processing
Fitness          0.6637    0.6270                        0.2785
Precision        0.33192   0.74779                       0.68247
Generalization   0.99962   0.99996                       0.99998
Simplicity       81        91                            79

Table 1: Measures of the quality of the models discovered on the log abstracted through K-means and DBSCAN. For DBSCAN, we report the values obtained when the post-processing to insert outliers into the closest clusters was and was not performed.

Table 1 illustrates the results of the comparison of the models discovered through the abstract event logs obtained via K-means and DBSCAN. They generalize equally well and are of similar complexity (the variation in simplicity is around 10-12%). The abstract model obtained by applying DBSCAN without post-processing shows very poor fitness, which is conversely satisfactory when applying K-means or DBSCAN with post-processing. Focusing on precision, the model of DBSCAN with post-processing is characterized by a precision that is 2.25 times that of the K-means model. This leads to the conclusion that DBSCAN with post-processing produced a better model in terms of fitness, simplicity, precision and generalization. Intuitively, this is not surprising: DBSCAN is based on maximizing cluster density, ensuring that “similar” sessions are put in the same cluster.

In conclusion, the model in Figure 4 is the most preferable, and unarguably more understandable than the non-abstract model in Figure 1. From a business viewpoint, it illustrates that typical users navigate the werk.nl web site as follows. During the first session, users visit the home page and also page taken (Dutch for tasks), where they can see the tasks assigned by UWV (e.g., to upload certain documents). If no tasks are assigned via the web site, the interaction with the web site completes. If tasks are assigned, users look for jobs to apply for (page vacatures_zoeken) and/or amend the information that they previously provided (page wijziging_doorgeven). If information is amended, usually an updated curriculum is uploaded (cf. the branch of the model starting with page mijn_cv) and/or the visitor looks and possibly applies for jobs (cf. the branches of the model starting with pages vacatures and vacatures_bij_mijn_cv, which are either both executed or both skipped).

Looking at statistics, the mean and median duration of the web-site interactions (i.e., the log traces) is around 20 weeks (more than 4 months) and, hence, the visiting sessions are certainly temporally spread. One can also observe that every session type is usually repeated multiple times, likely because the corresponding tasks are carried out through similar sessions on consecutive days. It is, however, remarkable that the model does not contain larger loops involving different session types. This means that the web site is visited in conceptual sections: once users start accessing the pages of a given section, the pages of previous sections are no longer visited. Note that the web site neither defines sections nor restricts the order in which pages can be visited. In fact, this testifies to the benefits of introducing wizards. We acknowledge that information is lost in the abstraction. However, this loss is justified by the gain in comprehensible business knowledge. As a matter of fact, this model was shown to a UWV stakeholder, who literally said: “this is the most understandable analysis of the web-site behavior that I have seen, certainly beyond the results seen for the BPI Challenge”.⁷

⁷ Indeed, the BPI challenge in 2016 was based on the same event data - https://www.win.tue.nl/bpi/doku.php?id=2016:challenge

4.2 Evaluation on a Building-Permit Process

Figure 6: Building-permit process model produced by the Inductive Miner without abstraction: overly complex to be insightful.

This section discusses a second case study to illustrate the applicability of the technique beyond werk.nl. This case study refers to the execution of a process to manage building-permit applications in a Dutch municipality.⁸ There are 304 different activities, denoted by their respective English names as recorded in the attribute taskNameEN. The event log spans a period of approximately four years and consists of 44,354 events divided into 832 cases. Figure 6 shows the model discovered with the Inductive Miner - Infrequent Behavior [14], using the default configuration. The model exhibits exactly the same problems as that in Figure 1: the large variability has made the miner discover an overly complex model. See, e.g., the large OR-split in the area highlighted by a red circle in the picture. We applied the abstraction technique to the event log, using the frequency-based encoding (cf. Section 3.3) and the DBSCAN clustering algorithm with post-processing, which proved to perform better in the first case study reported in Section 4.1. A session threshold of 8 hours was employed so that the events of the same day were put in the same (work) session.

⁸ The event log is available at http://dx.doi.org/10.4121/uuid:63a8435a-077d-4ece-97cd-2c76d394d99c

The clustering step resulted in the heat map in Figure 7(a) where, similarly to the previous case study, infrequent activities are filtered out, and each cluster centroid has significantly non-zero values for the dimensions of one or a few low-level activities. Analogously to the first case study, clusters were given the same name as the low-level activity with the most intense red colour in the heat map, possibly concatenated with the names of the additional activities with a significantly red colour (see Figure 7(b)).

(a) Heat map of the cluster centroids.

(b) Names given to the clusters:
  Cluster 0: enter date publication decision environmental permit
  Cluster 1: completed subcases content
  Cluster 2: register submission date request
  Cluster 3: enter senddate procedure confirmation
  Cluster 4: enter senddate decision environmental permit
  Cluster 5: set phase: phase permitting irrevocable & register deadline
  Cluster 6: enter senddate acknowledgement
  Cluster 7: record date of decision environmental permit
  Cluster 8: forward to the competent authority & send confirmation receipt & regular procedure without MER & phase application received
  Cluster 9: register deadline

Figure 7: The heat map of the cluster centroids for the building-permit process (part a) and the names given to the clusters (part b)

The abstract event log was then generated and used as input for the Inductive Miner - Infrequent with default parameter values, namely the same as for the non-abstracted model in Figure 6. This yielded the model in Figure 8, which is unarguably simpler, emphasising the most salient behavioral aspects. This model is a good representation of the actual behavior: its fitness is 0.79. Unfortunately, it was not possible to compute precision and generalization because the reference ProM implementation (see [1]) got stuck and never terminated the computation.

Figure 8: Building-permit process model produced by the Inductive Miner with abstraction, clustering via DBSCAN. The fitness value is 0.79; the precision and generalization values are not reported because the reference software implementation never terminated.

We previously claimed that event abstraction is not only about model discovery: it enables a fruitful application of a large repertoire of process-mining techniques. The remainder of this section supports this claim. In particular, we show that abstraction makes it possible to highlight that the executions under the responsibility of certain resources are statistically different from those of other resources. To achieve this, we leveraged the technique proposed in [5], which allowed us (1) to find out that the executions under the responsibility of resource 560458 are remarkably different, and (2) to pinpoint what these differences are. The latter piece of knowledge can be gained by looking at the transition system obtained through the technique in [5], shown in Figure 9. In the transition system, nodes are the events' activities, and an arc between two activity nodes indicates that, in the event log, the source activity is sometimes followed by the destination activity. Nodes and arcs are coloured with different shades of blue and orange to indicate that the activity or transition is statistically more or less frequent for 560458, respectively. The thickness of arcs and node borders signifies the frequency of occurrence; the darkness of the colour is proportional to the average difference. In Figure 9, e.g., the high-level activity enter senddate procedure confirmation stands out: it occurs in 67% of the cases of resource 560458 versus 13.7% of the cases where others are responsible. Similar proportions are also observed for the high-level activity enter date publication decision environmental permit. Conversely, the high-level activity register deadline is coloured orange, showing that it is statistically more frequent for the cases in which resources other than 560458 are responsible. It follows quite naturally that, without abstraction, the behavioral complexity represented in Figure 6 would generate such a complex transition system that no fruitful insights could be derived.

Figure 9: Comparison of the building-permit process behavior between executions when resource 560458 is responsible and when others are.
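The per-resource comparison underlying Figure 9 boils down to case-level frequencies that are easy to reproduce. The following sketch is our simplified illustration of the statistic, not the implementation of [5]: for a given high-level activity, it computes the fraction of cases in which the activity occurs, separately for the cases under the responsibility of a given resource and for all the others.

```python
def case_frequency(cases, activity):
    """Fraction of cases in which the activity occurs at least once."""
    return sum(activity in case for case in cases) / len(cases)

def compare_resource(log, resource, activity):
    """log: list of (responsible_resource, [high-level activities]) cases.
    Returns (frequency for `resource`, frequency for all others)."""
    own    = [acts for res, acts in log if res == resource]
    others = [acts for res, acts in log if res != resource]
    return case_frequency(own, activity), case_frequency(others, activity)

# Toy log: resource identifiers other than 560458 are made up.
toy_log = [("560458", ["enter senddate procedure confirmation",
                       "register deadline"]),
           ("560458", ["enter senddate procedure confirmation"]),
           ("560462", ["register deadline"]),
           ("560462", ["register submission date request"])]
own, rest = compare_resource(toy_log, "560458",
                             "enter senddate procedure confirmation")
print(own, rest)  # → 1.0 0.0
```

A large gap between the two fractions, as in the 67% versus 13.7% reported above, is what the blue and orange shading in Figure 9 visualizes.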

5 Related Work

A large body of research has been conducted on log abstraction. It can be grouped into two categories: supervised and unsupervised abstraction. The difference is that supervised abstraction techniques require process analysts to provide domain knowledge, while unsupervised techniques do not rely on additional information.

Supervised Abstraction Methods. Baier et al. provide a number of approaches that, based on some process documentation, map events to higher-level activities [2, 3, 4], using log-replay techniques and solving constraint-satisfaction problems. The idea of replaying logs onto partial models is also found in [16]: the input is a set of models of the life cycles of the high-level activities, where each life-cycle step is manually mapped to low-level events. Ferreira et al. [10] rely on the provision of one Markov model, where each Markov-model transition is a different high-level activity. In turn, each transition is broken down into a new Markov model where low-level events are modelled. Fazzinga et al. [9] assume that process analysts provide a probabilistic process model with the high-level activities, along with a probabilistic mapping between low-level events and high-level activities. The technique returns an enumeration of all potential interpretations of each log trace in terms of high-level activities, ranked by their respective likelihood. In [18], the authors propose a supervised abstraction technique that is applicable in those cases in which annotations with the high-level interpretations of the low-level events are available for a subset of traces.

Unsupervised Abstraction Methods. Log abstraction is related to episode mining and its application to Process Mining (a.k.a. the discovery of local process models) [13, 19]. In fact, Mannhardt and Tax propose a method that combines local process model discovery with the supervised abstraction technique in [16]. However, the technique relies on the ability to discover a limited number of local process models that are accurate and cover most of the low-level activities. In [11], Günther et al. cluster events by looking at their correlation, which is based on the vicinity of occurrences of events for the same low-level activity in the entire log. Clustering is also the basic idea of [7], which clusters events through a fuzzy k-medoids algorithm. Both [7] and [11] share the drawback that the time aspect is not considered and, thus, they can cluster events that are temporally distant (e.g. web-site visits that are weeks apart). Also, [7] only aims to discover a fuzzy high-level model, instead of abstracting event logs to enable a broader process-mining application, whereas [11] assumes a transitive nature of the property of activity correlation, which does not always hold. See Figure 3(a): cluster 3 shows a correlation between Visit page werkmap and Visit page vacature_bij_mijn_CV, and cluster 4 shows a correlation between Visit page werkmap and Visit page taken, while no correlation exists between Visit page vacature_bij_mijn_CV and Visit page taken. Finally, van Eck et al. [8] illustrate a technique to gather observations from sensor data and to encode and cluster them in a similar way as our approach does. However, they assume that events (in fact, sensor observations) are generated at a constant rate.

Log Clustering vs Log Abstraction. This paper has discussed a log-abstraction technique that builds on machine-learning clustering techniques. Event-log clustering leverages the same techniques [6]. However, event-log clustering has a different purpose, because it is based on the idea of splitting the traces into homogeneous groups, without altering the contents of the traces themselves.

6 Final Remarks

Abstracting and grouping low-level events into high-level activities is a problem that is receiving a lot of attention. Often, event logs are not immediately ready to be used because they model concepts that are not at the right business level and/or they exhibit too broad a variety of behavior to be summarized into one simple model, map, diagram, etc.

Section 5 illustrates how, on the one hand, supervised methods often require vast domain knowledge (e.g. through process models, Markov chains or mapping ontologies), which is not always possible to provide. On the other hand, unsupervised methods show limitations related to the absence of any external knowledge. This paper reports on a third way, where very limited domain knowledge is necessary. The technique is based on the idea that a trace can be regarded as a sequence of sessions, each of which terminates when no additional events occur within a user-defined time interval. The sessions are then clustered; finally, a heat-map visualization of the clusters is provided to domain experts, so that they can assign meaningful high-level concepts to the sessions, i.e. sequences of low-level events. Admittedly, the concept of sessions and the use of clustering techniques and heat-map visualizations are not novel in process mining, if each is taken in isolation. The innovation here is that we combine them into a solution to the problem of abstracting low-level events to high-level concepts, with the advantages mentioned above.

Section 4 reports on the successful evaluation of the proposed technique on two case studies, discussing both a quantitative and a qualitative analysis. The qualitative analysis shows that our log-abstraction technique (1) allows overly complex models to be simplified by focusing on the higher-level concepts, and (2) is applicable beyond process discovery (see Figure 9). Quantitatively, the evaluation showed that the discovered models reasonably balance the typical process-mining metrics: fitness, precision, generalization and simplicity [1].

The technique does not depend on any specific clustering algorithm, and this explains why concrete algorithms are only mentioned in Section 4. While we acknowledge that a more thorough assessment is necessary, Section 4 shows that the best performance is obtained with DBSCAN, which has the advantage of automatically determining the best number of clusters. This further motivates that our technique requires little knowledge to be supplied: considering that the provision of the cluster names is optional (cf. Section 3.4), the only input for our technique is the session threshold.

The technique is applicable to all those domains where customers perform activities in batches/sessions, including retail shopping (e.g. Amazon or supermarkets), health care (e.g. hospitals), as well as scenarios of home automation and/or IoT. All of these domains loosely constrain the order in which the activities are executed, which ultimately leads to an “ocean” of alternative behavior.

In spite of the assessment reported in this paper, we acknowledge that further validation is needed, especially in such domains as those mentioned in the previous paragraph. In parallel, the technique can be further extended towards achieving better clustering. Firstly, the technique needs to be extended to consider the entire event payload, instead of limiting itself to the sole activity names. For instance, for the werk.nl case study, one could add clustering dimensions related to the customers' age, gender, geographic location, etc., providing extra information towards a more accurate clustering. This is also very relevant for such domains as domotics, where, e.g., the use of an oven at 180 °C may be conceptually different from using it at 240 °C. Secondly, we plan to explore hierarchical clustering because it would allow one to tune the level of aggregation that is achieved through the log abstraction. Thirdly, the number of low-level activities is generally large; therefore, it is worth investigating the benefits, if any, of reducing the low-level activities to consider when applying clustering.

Last but not least, we aim to specialize the general technique for the discovery of hierarchical processes and to analyze the structure of the sessions within each separate cluster. The sessions within each cluster can be seen as traces of a sub-log, which can be used as input to discover small (fragments of) models, to be later combined with the model discovered via the abstracted event log.

References

  • [1] van der Aalst, W.M.P.: Process Mining - Data Science in Action. Springer (2016)

  • [2] Baier, T.: Matching Events and Activities. PhD dissertation, University of Potsdam (2015)
  • [3] Baier, T., Mendling, J.: Bridging abstraction layers in process mining by automated matching of events and activities. In: Proceedings of the 11th International Conference on Business Process Management. pp. 17–32. Springer Berlin Heidelberg, Berlin, Heidelberg (2013)
  • [4] Baier, T., Rogge-Solti, A., Mendling, J., Weske, M.: Matching of events and activities: an approach based on behavioral constraint satisfaction. In: Proceedings of the 30th Annual Symposium on Applied Computing. pp. 1225–1230. ACM (2015)
  • [5] Bolt, A., de Leoni, M., van der Aalst, W.M.P.: A visual approach to spot statistically-significant differences in event logs based on process metrics. In: International Conference on Advanced Information Systems Engineering. LNCS, vol. 9694, pp. 151–166. Springer (2016)
  • [6] De Weerdt, J.: Trace Clustering. Springer International Publishing, Cham (2018)
  • [7] van Dongen, B.F., Adriansyah, A.: Process mining: fuzzy clustering and performance visualization. In: Proceedings of the 7th International Conference on Business Process Management. pp. 158–169. Springer (2009)
  • [8] van Eck, M.L., Sidorova, N., van der Aalst, W.M.P.: Enabling process mining on sensor data from smart products. In: Proceedings of the Tenth IEEE International Conference on Research Challenges in Information Science (RCIS) (June 2016)
  • [9] Fazzinga, B., Flesca, S., Furfaro, F., Masciari, E., Pontieri, L.: A probabilistic unified framework for event abstraction and process detection from log data. In: Proceedings of the 23rd OTM Confederated International Conference on Cooperative Information Systems. LNCS, vol. 9415, pp. 320–328. Springer (2015)
  • [10] Ferreira, D.R., Szimanski, F., Ralha, C.G.: Mining the low-level behaviour of agents in high-level business processes. International Journal of Business Process Integration and Management 6(2), 146–166 (2013)
  • [11] Günther, C.W., Rozinat, A., van der Aalst, W.M.P.: Activity mining by global trace segmentation. In: Proceeding of the 7th International Conference on Business Process Management. pp. 128–139. Springer (2009)
  • [12] Ketchen, D.J., Shook, C.L.: The application of cluster analysis in strategic management research: An analysis and critique. Strategic Management Journal 17(6), 441–458 (1996)
  • [13] Leemans, M., van der Aalst, W.M.P.: Discovery of frequent episodes in event logs. In: Data-Driven Process Discovery and Analysis - 4th International Symposium, SIMPDA 2014, Milan, Italy, November 19-21, 2014, Revised Selected Papers. LNBIP, vol. 237, pp. 1–31. Springer (2015)
  • [14] Leemans, S.J.J., Fahland, D., van der Aalst, W.M.P.: Discovering block-structured process models from event logs - A constructive approach. In: Proceedings of the 34th International Conference on Application and Theory of Petri Nets and Concurrency (Petri Nets 2013). LNCS, vol. 7927, pp. 311–329. Springer (2013)
  • [15] Mannhardt, F., de Leoni, M., Reijers, H.A.: Heuristic mining revamped: An interactive data-aware and conformance-aware miner. In: Proceedings of the BPM Demo Track and BPM Dissertation Award at 15th International Conference on Business Process Management. CEUR Workshop Proceedings, vol. 1920. CEUR-WS.org (2017)
  • [16] Mannhardt, F., de Leoni, M., Reijers, H.A., van der Aalst, W.M.P., Toussaint, P.J.: From low-level events to activities - A pattern-based approach. In: Proceedings of the 14th International Conference on Business Process Management. LNCS, vol. 9850, pp. 125–141. Springer (2016)
  • [17] Nakatumba, J.: Resource-aware Business Process Management: Analysis and Support. Ph.D. thesis, Eindhoven University of Technology (2013)
  • [18] Tax, N., Sidorova, N., Haakma, R., van der Aalst, W.M.P.: Event abstraction for process mining using supervised learning techniques. In: Bi, Y., Kapoor, S., Bhatia, R. (eds.) Proceedings of SAI Intelligent Systems Conference (IntelliSys) 2016. pp. 251–269. Springer International Publishing, Cham (2018)

  • [19] Tax, N., Sidorova, N., Haakma, R., van der Aalst, W.M.P.: Mining local process models. Journal of Innovation in Digital Ecosystems 3(2), 183–196 (2016)
  • [20] Terragni, A., Hassani, M.: Optimizing customer journey using process mining and sequence-aware recommendation. In: Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing. pp. 57–65. SAC ’19, ACM, New York, NY, USA (2019)