
A Framework for Efficient Memory Utilization in Online Conformance Checking

12/23/2021
by   Rashid Zaman, et al.
TU Eindhoven

Conformance checking (CC) techniques of the process mining field gauge the conformance of the sequence of events in a case with respect to a business process model, which, simply put, is an amalgam of certain behavioral relations or rules. Online conformance checking (OCC) techniques are tailored for assessing such conformance on streaming events. The realistic assumption of having a finite memory for storing the streaming events has largely not been considered by OCC techniques. We propose three incremental approaches to reduce the memory consumption of prefix-alignment-based OCC techniques while ensuring a minimum loss of conformance insights. Our first proposed approach bounds the maximum number of states that constitute a prefix-alignment to be retained by any case in memory. The second proposed approach bounds the number of cases that are allowed to retain more than a single state, referred to as multi-state cases. Building on top of the two proposed approaches, our third approach further bounds the maximum number of states that the multi-state cases can retain. All these approaches forget the states in excess of their defined limits and retain a meaningful summary of them. Computing prefix-alignments in the future is then resumed for such cases from the current position contained in the summary. We highlight the superiority of all proposed approaches compared to a state-of-the-art prefix-alignment-based OCC technique through experiments using real-life event data under a streaming setting. Our approaches substantially reduce memory consumption, by up to 80%.



1. Introduction

Process mining (van der Aalst, 2016; Carmona et al., 2018), a young discipline, bridges the gap between the data mining and process science (Brocke et al., 2021) fields. Business process models and event data are its most prominent inputs. A process model depicts a business process. It can also be viewed as an amalgam of the business rules or behavioral relations which guide the execution sequence of the various activities in a business process. An event represents the execution of an atomic activity in a business process. A business process therefore serves as a concept in the process mining landscape. Events belonging to the same instance, or run, of a process are treated as a single case. A case syntactically acts as a data point for process mining. Events, as the entities constituting a case, are therefore analogous to the features of a case. A case is usually expected to follow a sequence of the behavioral relations inscribed in the process model, implying that the order of the events is also important.

Figure 1. A simple overview of an online conformance checking (OCC) scenario.

Like data streams, event streams are characterized by a high throughput of events generated by a single concept instance or multiple concept instances running in parallel. Online process mining techniques (Burattin, 2018) are tailored to process such event streams. In this setting, online conformance checking (OCC) techniques analyze in-progress cases for the purpose of checking their conformance to the relevant process model. CC techniques require the sequence of all the events constituting a case rather than individual events. This requirement implies that even previously processed events must be retained in order to ensure the completeness of their relevant case. The restriction of having a finite memory for coping with infinite streams is an established concern in data stream processing (Hassani, 2015). The aforementioned requirement makes the finite memory constraint much more involved for OCC techniques.

In order to reduce memory consumption, the OCC literature (Burattin et al., 2018; van Zelst, 2019) suggests forgetting the least recently updated cases, assuming them to be concluded. While this assumption may hold in some process domains, cases in real-life processes usually exhibit very diverse behavior with respect to the temporal distribution of their events. Additionally, the number of active cases running in parallel may be higher than the number that the aforementioned techniques allow to be stored in memory simultaneously. Accordingly, events belonging to the forgotten cases may be observed in the future, referred to as orphan events in the literature (Zaman et al., 2021). Such orphan events, without undergoing any remedial treatment, will be considered by the CC techniques as belonging to newly initiated cases. Accordingly, these events will be improperly marked as nonconforming to the process model, primarily because of their forgotten prefix. This improper penalization is termed the missing-prefix problem in the literature (Zaman et al., 2021).

Alignment-based CC techniques (Adriansyah, 2014) discover a sequence of activities in the relevant process model, referred to as an alignment, which maximally matches the sequence of the activities represented by the events in a case. Alignments encode events, their counterpart activities in the reference process model, and their conformance statistics as moves. Prefix-alignments (Adriansyah et al., 2013) are a variant of conventional alignments which can properly deal with incomplete cases. Moves are termed states in incremental prefix-alignments (van Zelst et al., 2019). Figure 1 provides a simplified overview of a prefix-alignment-based OCC setup. An event observed on the event stream is appended to its respective case in an infinite memory. This updated case is accordingly subjected to CC against the reference process model through prefix-alignments. If we limit the memory by bounding the number of events per case or by bounding the maximum number of cases, then we have to forget prefixes of cases or entire cases, respectively. Orphan events for such partially or fully forgotten cases, for instance Case "14", are improperly marked as non-conforming to the reference process model, as depicted in its prefix-alignment.

We present three incremental approaches to effectively reduce the memory footprint or in other words accommodate more cases using the same storage in prefix-alignments-based OCC techniques. Our proposed approaches partially or entirely forget the states constituting the prefix-alignments of cases and yet avoid the missing-prefix problem.

Our first proposed approach imposes a limit on the number of states to be retained by any case in the storage. In a first-in-first-out fashion, states in excess of the specified limit are forgotten. We avoid the missing-prefix problem by resuming the prefix-alignment computation for the orphan events from the position reached by the forgotten prefix states, retaining their summary as a special state. Our second approach imposes a limit on the number of multi-state cases, i.e., cases that can grow freely in terms of the states constituting their prefix-alignments. For this purpose, we prudently forget cases through defined forgetting criteria in order to conform to this limit. The main intuition behind our forgetting criteria is to maximize the probability of correctly estimating the conformance of cases with orphan events. To further reduce memory consumption, our third proposed approach imposes a limit on the number of states to be retained by the fixed number of multi-state cases. As with the first approach, we retain a summary of the forgotten prefix states in the second and third approaches as well, in order to avoid the missing-prefix problem.

Through experiments with real-life event data, emitting its events as a stream on the basis of their actual timestamps, we demonstrate the effectiveness of our proposed approaches. We conclude that, without any loss on the estimated (non)conformance in the first approach and with tolerable loss in the second and third approaches, the memory footprint and the number of computations of prefix-alignment-based OCC techniques can be significantly reduced through adopting these approaches.

This paper extends the approaches and the results presented in the short paper of the authors (Zaman et al., 2022) with two additional novel memory utilization methods and a further extensive experimental evaluation.

The rest of this paper is organized as follows. Section 2 provides an overview of the existing relevant work. Section 3 defines and explains some key concepts which are necessary for elaborating our three proposed approaches which we present in Section 4. Details and findings of the experiments conducted for evaluation of the proposed approaches are provided in Section 5. Finally, Section 6 concludes the paper along with some ideas for future work.

2. Related Work

A landscape of the online process mining techniques was provided by (Burattin, 2018). Being part of the process mining manifesto (van der Aalst et al., 2011), OCC is receiving attention from the research community. Using regions theory (Van Dongen et al., 2007), (Burattin and Carmona, 2017) added deviating paths to the process model so that non-conformant cases following those paths can be detected. Alignments (Adriansyah, 2014) are the state-of-the-art underlying technique for CC of completed cases (Carmona et al., 2018). Accordingly, prefix-alignments (Adriansyah et al., 2013) were introduced for CC of incomplete cases against a process model, without penalizing them for their incompleteness. Decomposition (van der Aalst, 2013) and later recomposition (Lee et al., 2018) techniques were proposed to divide and then unite the alignment computation for efficiency.

Incremental prefix-alignments (van Zelst et al., 2019) combined prefix-alignments with a lightweight model semantics based method for efficiently checking the conformance of in-progress cases in streaming environments. The technique of (Burattin et al., 2018) determines the conformance of a pair of streaming events by comparing them to the behavioral patterns constituting the reference process model. Their technique is computationally less expensive than alignments and is also able to deal with a warm-start scenario, where the first observed event of a case does not represent a starting position. However, this approach abstracts from the reference process model and its markings, and it is therefore hard to closely relate cases to the reference process model. The online conformance solution of (Lee et al., 2021) used Hidden Markov Models (HMMs) to first locate cases in their reference process model and then assess their conformance.

The availability of limited memory, implying the inability to store the entire stream, has been sufficiently investigated in the data stream mining field (Bahri et al., 2021; Gomes et al., 2019). In contrast, the memory aspect of online process mining in general, and OCC in particular, has attracted less attention in the literature. The majority of such techniques therefore assume the availability of infinite memory. (Hassani et al., 2019) generically suggested maintaining an abstract intermediate representation of the stream to be used as input for various process discovery techniques. (Burattin et al., 2018) limited the number of cases to be retained in memory by forgetting inactive cases. Forgetting cases on an inactivity criterion may lead to the missing-prefix problem in many process domains. The prefix imputation approach (Zaman et al., 2021) has been proposed as a two-step approach for bounding memory while at the same time avoiding the missing-prefix problem in OCC. The technique selectively forgets cases from memory and then imputes their orphan events with a prefix guided by the normative process model.

3. Preliminaries

In this section, we briefly explain the concepts related to our proposed approaches.

Process model:

A business process is the concept in the process mining domain, and it is represented by a process model. A process generates the events which constitute the data points, i.e., cases, of process mining. Activities are spatially laid down in accordance with certain behavioral relations to synthesize a process model. For instance, activity A in a process must always be followed by activity B, a relation termed a sequence. Other relations include choice, concurrency, and looping. While multiple representations exist for modelling a process, we use the highly formal Petri nets. A Petri net is represented as a tuple N = (P, T, F, λ), where P represents a finite set of places, T a finite set of transitions, and F ⊆ (P × T) ∪ (T × P) a set of flow relations between places and transitions. λ is a labelling function assigning transitions in T labels from the set of activity labels Σ ⊆ 𝒜, where 𝒜 is the universe of activities.

The stage or position of a case is represented through a marking in the process model. A marking M is essentially a multiset of tokens over the places in the process model N, i.e., M ∈ 𝔹(P). The initial marking M_i is the stage in which ideally every case shall start, and the final marking M_f is the stage where ideally every case shall eventually end. Apart from regular transitions (representing business activities), silent transitions (taus) are used in Petri nets for the completion of routing. A transition t having a token in each of its input places, as part of a marking M, is said to be enabled, represented as (N, M)[t⟩. An enabled transition can fire or execute, thereby consuming a token from each of its input places and accordingly producing a token in each of its output places, resulting in a change of marking from M to M′, represented as (N, M)[t⟩(N, M′). The consecutive firing of a sequence σ of (enabled) transitions starting from a marking M and leading to M′ is referred to as a firing or execution sequence of N, represented as (N, M)[σ⟩(N, M′). Typically, the set of execution sequences for a process model is finite in the absence of loops and infinite in their presence. An execution sequence starting from M_i and ending in M_f is referred to as a complete execution sequence of N, i.e., (N, M_i)[σ⟩(N, M_f).
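The enabling and firing rules above can be sketched in Python, modeling a marking as a multiset (a `Counter`) over places. The net structure, place names, and class layout below are illustrative assumptions, not taken from the paper.

```python
from collections import Counter

class PetriNet:
    """Minimal Petri net: each transition maps to (input places, output places)."""
    def __init__(self, transitions):
        # transitions: {name: (list_of_input_places, list_of_output_places)}
        self.transitions = transitions

    def enabled(self, marking, t):
        """t is enabled iff every input place holds at least one token."""
        ins, _ = self.transitions[t]
        return all(marking[p] >= 1 for p in ins)

    def fire(self, marking, t):
        """Consume one token per input place, produce one per output place."""
        if not self.enabled(marking, t):
            raise ValueError(f"transition {t} not enabled")
        ins, outs = self.transitions[t]
        new = Counter(marking)
        new.subtract(Counter(ins))
        new.update(Counter(outs))
        return +new  # unary plus drops zero-count places

# A tiny sequential net (hypothetical): p0 -> A -> p1 -> B -> p2
net = PetriNet({"A": (["p0"], ["p1"]), "B": (["p1"], ["p2"])})
m0 = Counter({"p0": 1})   # initial marking M_i
m1 = net.fire(m0, "A")    # firing A moves the token to p1
m2 = net.fire(m1, "B")    # final marking M_f: token in p2
```

Firing the sequence ⟨A, B⟩ from m0 is thus a complete execution sequence of this toy net.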

The top left side of Figure 1 contains an example process model as a Petri net. The circles represent places, while the rectangle-shaped transitions are mapped to the corresponding process activities through the labels assigned by λ. Directed arcs connecting places and transitions enforce the behavioral relations. For instance, where the output arc of one transition enters a place whose own output arc leads to another transition, the two transitions form a sequential relation. Transitions connected through an XOR-split are in a choice relation, while transitions connected through an AND-split are parallel. With the start place holding a single token, the Petri net is in its initial marking M_i. Only a single transition is enabled in this marking, and its firing results in a new marking; every marking resulting from an execution sequence starting in M_i is a reachable marking of the net. A complete execution sequence puts a token in the end place, indicating that the final marking M_f is reached. For a sound understanding of the mentioned and other related concepts, interested readers are referred to (van der Aalst, 2016).

Event id Case id Activity Timestamp
1 1 A 2021-10-01 12:45
2 2 A 2021-10-01 13:03
3 1 B 2021-10-02 10:07
4 2 B 2021-10-09 14:31
5 3 A 2021-10-09 17:29
6 3 E 2021-10-13 16:49
7 3 F 2021-10-13 16:59
8 3 G 2021-10-20 11:23
9 1 C 2021-10-20 11:23
Table 1. An example event log.

Events:

The execution of activities in a business process is logged in the form of events. Events belonging to the same case are ordered to generate an event log L. Formally, an event e minimally consists of 1) the identifier c of the case to which the event belongs, 2) the name a of the corresponding activity, and 3) the timestamp of the execution of the corresponding activity. It is important to note that every event is unique and distinct: events referring to exactly the same activity, having the same timestamp, and belonging to the same case are by context different and distinct. The sequence of events belonging to a case c is referred to as the trace σ_c of the case. |σ_c| denotes the length, i.e., the number of events, of a trace.

Table 1 depicts an excerpt of an example event log obtained through the firing of some execution sequences of the process model contained in Figure 1. Each row in Table 1 represents an event. For instance, the first row is the event with event id "1", corresponding to the execution of activity A in the context of Case "1" at timestamp "2021-10-01 12:45". Events "5" through "8" constitute the trace for Case "3", which in essence corresponds to an execution sequence of the process model depicted in Figure 1. The trace for Case "3" can simply be denoted in sequence-of-activities form as ⟨A, E, F, G⟩. Notice that all these cases run in parallel; event streams are characterized by a large number of in-parallel running cases.
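The grouping of events into per-case traces can be illustrated with the log of Table 1. The case ids and activities below are copied from the table; the helper function itself is only a sketch.

```python
# Events from Table 1 as (event_id, case_id, activity) triples,
# already in timestamp order.
events = [
    (1, 1, "A"), (2, 2, "A"), (3, 1, "B"), (4, 2, "B"),
    (5, 3, "A"), (6, 3, "E"), (7, 3, "F"), (8, 3, "G"), (9, 1, "C"),
]

def traces(event_log):
    """Group events by case id, preserving their order of occurrence."""
    out = {}
    for _eid, case, activity in event_log:
        out.setdefault(case, []).append(activity)
    return out

t = traces(events)
# The trace of Case 3 corresponds to the execution sequence <A, E, F, G>.
```

Note how events of different cases interleave in the log, which is exactly the situation an event stream presents.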

Event stream:

An event log consists of historical, completed cases. On the contrary, the cases observed on a stream evolve, and new cases arrive as well. Formally, let 𝒞 be the universe of case identifiers and 𝒜 be the universe of activities. An event stream S is an infinite sequence of events over 𝒞 × 𝒜. A stream event is represented as (c, a), denoting that activity a has been executed in the context of case c. Like events in general, every stream event is unique and distinct, even if the respective activities, case identifiers, and even arrival times are exactly the same. Observed stream events are required to be stored under the notion of their respective cases. Event streams are characterized by the continuous and unbounded emission of stream events.

Conformance Checking:

By virtue of a variety of contextual factors (van der Aalst, 2016; Carmona et al., 2018), activities in cases are executed in diverse ways. The sequence of these activities may even deviate from the behavioral relations envisaged as a process model. Conformance Checking (CC) is the comparison of cases with their reference process model to highlight deviations, if any. The detected deviations provide insights for remedial actions to mitigate the sources of non-conformance.

Many techniques have been developed, but alignments have been positioned as the de facto standard technique for checking the conformance of cases. An alignment γ explains the sequence of events (or simply activities) in a case through a complete execution sequence of the reference process model (Carmona et al., 2018). A case is considered conformant, or fitting, if a complete execution sequence of the process model exists that fully explains or reproduces its trace; otherwise it is non-conformant. The extent of the non-conformance of a case is measured through the degree of mismatch between its trace and a maximally-explaining complete execution sequence, termed an optimal alignment γ_opt.

Figure 2. Example (prefix)alignments for Case “2” of event log of Table 1 with the process model of Figure 1.

The non-shaded part of Figure 2 shows two of the many possible alignments of the trace for Case "2", i.e., ⟨A, B⟩, of the event log depicted in Table 1 with the process model of Figure 1. The trace part of these alignments (neglecting the skip symbol ≫) is the same as the trace for Case "2", while their model part (neglecting ≫'s) is a complete execution sequence of the process model. A corresponding trace and model entry in the alignment is known as a move and is represented as a pair. Moves without the skip symbol are termed synchronous moves and imply that an enabled transition with the same label as the activity represented by the event of the pair is available in the current marking. Moves with ≫ in the trace part of the pair are referred to as model moves and illustrate that the trace is missing an activity for the transition in the pair, which is enabled in the current marking and required to be fired by the execution sequence. Moves with ≫ in the model part of the pair are referred to as log moves, which signal that the complete execution sequence is missing a fired transition for the activity in the trace part. For instance, the first move in the middle alignment of Figure 2 is a synchronous move, while the second and third moves are log and model moves, respectively.

As evident from Figure 2, multiple alignments of a trace with a process model are possible. Therefore, moves are associated with a move cost in order to rank alignments. Usually, synchronous moves and model moves on silent transitions are assigned a zero cost. The sum of the costs of all the individual moves of an alignment is referred to as its trace fitness cost. CC looks for an optimal alignment γ_opt, which bears the least trace fitness cost. It is worth mentioning that multiple optimal alignments may exist for the same trace.
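Under the cost scheme just described, the trace fitness cost of an alignment is a plain sum over its moves. The following sketch assumes ">>" as the skip symbol, a unit cost for log and visible model moves, and "tau" as a silent label; these concrete values are illustrative, not prescribed by the paper.

```python
SKIP = ">>"  # skip symbol of (prefix-)alignments

def move_cost(trace_part, model_part, silent=frozenset({"tau"})):
    """Standard cost scheme: synchronous moves and silent model moves
    are free; log moves and visible model moves cost one unit."""
    if trace_part != SKIP and model_part != SKIP:
        return 0                      # synchronous move
    if model_part in silent:
        return 0                      # model move on a silent transition
    return 1                          # log move or visible model move

def fitness_cost(alignment):
    """Trace fitness cost: the sum of the individual move costs.
    The optimal alignment is any alignment minimizing this value."""
    return sum(move_cost(t, m) for t, m in alignment)

# <(A,A), (B,>>), (>>,B)>: one synchronous, one log, and one model move.
aln = [("A", "A"), ("B", SKIP), (SKIP, "B")]
```

Ranking candidate alignments by `fitness_cost` and keeping a minimum is exactly how an optimal alignment would be selected among them.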

Alignments assume cases to be completed, while cases in an event stream may not be completed yet. For checking the conformance of such evolving cases, the prefix-alignment variant of alignments is more appropriate. A prefix-alignment γ̄ explains the sequence of events in a case through an execution sequence of the process model, rather than a complete execution sequence. The rest of the concepts, such as moves and their associated costs, are the same as for alignments.

Consider the trace for Case "2", i.e., ⟨A, B⟩, of the event log of Table 1. The shaded part of Figure 2 shows one of the possible prefix-alignments of this trace with the process model of Figure 1. The trace part of this prefix-alignment still corresponds to the trace, while the model part is an execution sequence of the process model. As with conventional alignments, the different types of moves in prefix-alignments are associated with move costs in order to rank them and identify the optimal prefix-alignment. The optimal prefix-alignment evolves and changes with the evolution of the trace.

In contrast to a single alignment computation per case with conventional alignments, a prefix-alignment needs to be computed upon observing every single stream event. (Prefix-)alignment computation through a shortest path search in a synchronous product (Adriansyah et al., 2011) is quite compute-intensive. Therefore, the approach in (van Zelst et al., 2019) tailored prefix-alignments to be efficient in checking conformance on streaming events, referred to as incremental prefix-alignments in this work. This approach first checks if the activity a of the observed stream event (c, a) corresponds to a transition that is enabled in the marking of the previously computed prefix-alignment of case c (or if the event is the first for the case). If so, a synchronous move is appended to the previously computed prefix-alignment existing in the case-based memory. We refer to this method of extending a prefix-alignment as model semantics based prefix-alignment. If unsuccessful, a fresh optimal prefix-alignment is computed for the trace of c through a shortest path search in the synchronous product, starting from the initial marking M_i. We refer to this method as shortest path search based prefix-alignment. The latter method is computationally more expensive than the former. Moves are stored as states constituting the prefix-alignments of cases. A prefix-alignment can therefore simply be represented as a sequence of states ⟨s_1, …, s_n⟩, where n is the total number of states. A state s_j additionally stores its move cost and the marking of its case reached through the states {s_1, …, s_j}. The term state should not be confused with its use in the process mining literature to denote a marking. Interested readers are referred to (Carmona et al., 2018) for a deeper understanding of CC, alignments, prefix-alignments, and related concepts.
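The two-tier update just described, a cheap model semantics based extension with a shortest path search as fallback, can be sketched as follows. The function names and the closures standing in for the model and for the search are hypothetical; the real shortest path search over a synchronous product is abstracted behind a callback.

```python
def extend_prefix_alignment(state_seq, activity, enabled_labels, recompute):
    """Two-tier update in the spirit of incremental prefix-alignments:
    try the cheap model semantics based extension first, and fall back
    to the expensive shortest path search based recomputation.

    state_seq      : list of (trace_part, model_part) moves so far
    activity       : label of the newly observed stream event
    enabled_labels : callback returning labels enabled in the current marking
    recompute      : callback computing a fresh optimal prefix-alignment
    """
    if activity in enabled_labels(state_seq):
        # model semantics based extension: append a synchronous move
        return state_seq + [(activity, activity)]
    # shortest path search based prefix-alignment (expensive path)
    return recompute(state_seq, activity)

# Toy sequential model A -> B: after n synchronous moves, only the
# (n+1)-th activity of <A, B> is enabled (purely illustrative).
expected = ["A", "B"]
def enabled_labels(seq):
    return {expected[len(seq)]} if len(seq) < len(expected) else set()

def recompute(seq, act):
    # Stand-in for the real shortest path search: record a log move.
    return seq + [(act, ">>")]

aln = extend_prefix_alignment([], "A", enabled_labels, recompute)
aln = extend_prefix_alignment(aln, "C", enabled_labels, recompute)  # log move
```

The first event extends the alignment synchronously; the deviating second event triggers the fallback, mirroring why the fallback path is the costly one in practice.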

4. Our Memory-Efficient Framework For Online Conformance Checking

For the activity a of an observed stream event (c, a), the model semantics based prefix-alignment method requires only the current marking of the previously computed prefix-alignment of case c in order to extend it. In contrast, the shortest path search based prefix-alignment reverts to the trace of c, but only for the sake of computing an optimal prefix-alignment for it. Exploiting these facts, we present three effective memory reduction approaches in the context of prefix-alignment-based OCC techniques.

4.1. Bounded states with carryforward marking and cost

As our first approach, we define a limit on the number of states that the prefix-alignment of any case in memory can retain. Upon observing an event for an existing case, after the computation of its prefix-alignment, we forget the earliest prefix state(s) in excess of the defined limit. However, we retain a summary of the forgotten states as a special state prepended to the surviving states. This summary consists of the marking reached through the forgotten states and the cumulative cost of their moves. When a shortest path search based prefix-alignment is necessitated, we resume the prefix-alignment computation of the orphan events from the retained marking instead of M_i, thereby avoiding the missing-prefix problem. Similarly, we minimize the underestimation of fitness costs by adding the cost incurred by the forgotten states as a residual to the cost incurred by the orphan events. The maximum memory required per case is therefore reduced to the defined state limit plus one summary state. Algorithm 1 provides an algorithmic summary of the proposed approach.

1: initialize the case-based memory of prefix-alignments;
2: fix the maximum number of states retained per case;
3: while true do
4:     receive the next stream event (c, a);
5:     retrieve the prefix-alignment of case c from memory (empty if c is new);
6:     compute the updated prefix-alignment through model semantics or shortest path search (van Zelst et al., 2019);
7:     if the number of states exceeds the defined limit then
8:         fold the earliest excess states into a summary state holding their reached marking and cumulative cost;
9:         prepend the summary state to the surviving states;
10:    store the updated prefix-alignment of c in memory;
Algorithm 1 Prefix-alignment-based OCC with bounded states

By forgetting states, we actually forget the embedded events as well. The model semantics based prefix-alignment will not be affected by this forgetting of events, as it requires only the current marking reached by the prefix-alignment of the previously observed events, not the events themselves. The shortest path search based prefix-alignment, however, will revisit the prefix-alignment computation from the retained summary marking for only the events represented by the retained states. As a consequence, the freshly computed prefix-alignment may not be a global optimum. This approach also reduces the number of computations performed in shortest path search based prefix-alignment, as prefix-alignments are computed only for a subsequence of the observed events for cases whose prefix states have been forgotten.
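The bounded-states bookkeeping of this approach can be sketched as follows. The `State` tuple, the `"<summary>"` marker, and the string markings are illustrative assumptions standing in for the paper's state representation.

```python
from collections import namedtuple

# A state stores its move, the move cost, and the marking reached so far.
State = namedtuple("State", "move cost marking")

def trim_states(states, limit):
    """Keep at most `limit` regular states; fold all older ones into a
    single prepended summary state carrying the carryforward marking and
    the cumulative (residual) cost of the forgotten moves."""
    if len(states) <= limit:
        return states
    cut = len(states) - limit
    excess, kept = states[:cut], states[cut:]
    summary = State(
        move="<summary>",
        cost=sum(s.cost for s in excess),   # carryforward cost
        marking=excess[-1].marking,         # carryforward marking
    )
    return [summary] + kept
```

Because a previous summary state sits at the front of the list, repeated trimming folds it into the new summary, so the residual cost stays cumulative across trims.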

4.2. Bounded cases with carryforward marking and cost

The approach presented in Section 4.1 bounds the number of states per case, but memory can still grow unboundedly with the number of stored cases. In order to further reduce memory consumption, we define a limit on the maximum number of multi-state cases; hence, only a bounded number of cases can retain more than a single state in their prefix-alignments. For all other cases, their prefix states are forgotten and only a single special state is retained in a repository. This single state contains the summary of the forgotten prefix states, i.e., the marking reached through the prefix states and the cumulative cost of their moves. The maximum memory consumption at any point is therefore the number of multi-state cases multiplied by their average number of states, plus one summary state for every other observed case.

Once the number of observed cases surpasses the defined limit, we forget the prefix-alignments of some cases to accommodate the newly arrived cases. Along with reducing memory consumption, our goal is to also minimize any loss of conformance information. Therefore, instead of naïvely forgetting, we prudently forget the prefix-alignments of cases according to forgetting criteria which are explained below. As discussed in the previous paragraph, while forgetting the states of the prefix-alignment of a case, we retain its summary as a single special state in the repository. Upon observing an orphan event, we compute its prefix-alignment starting from the marking of its forgotten prefix, which we retrieve from the repository. Also, we add the retained cost as a residual to the cost incurred by the orphan event(s), so that the effective trace fitness cost of the case accounts for its forgotten prefix. Through these measures, we increase the probability of correctly estimating the (non)conformance of cases even with forgotten prefixes. Algorithm 2 provides an algorithmic summary of this proposed approach.

1: initialize the memory of multi-state cases;
2: initialize the repository of single-state case summaries;
3: while true do
4:     receive the next stream event (c, a);
5:     if c exists in the memory or the repository then
6:         retrieve its prefix-alignment, or its summary state, accordingly;
7:     else
8:         if the memory has reached the case limit then
9:             select the most suitable case through the forgetting criteria;
10:            store its summary (marking and cumulative cost) as a single state in the repository;
11:            forget its prefix-alignment from the memory;
12:    compute the prefix-alignment of c through model semantics or shortest path search (van Zelst et al., 2019);
13:    store the result in the memory;
Algorithm 2 Prefix-alignment-based OCC with bounded cases

Forgetting Criteria:

Our forgetting criteria consist of a set of conditions. A single-pass forgetting approach traverses the memory and assigns the cases residing therein a forgetting preference score in accordance with the condition each qualifies. Once a case with a certain forgetting preference is found, we narrow the search to cases with higher forgetting preferences. The only exception to this optimization is finding a case qualifying Condition 1 below, in which case we stop the search entirely, as we have already found the most suitable case to forget. In the following, we briefly explain these conditions:

  1. The first condition looks for a compliant monuple case, i.e., a case with a single event whose prefix-alignment bears zero cost. In the absence of noise, the prefix-alignment of the orphan events of such a case will likely still be a global optimum. Such a case is therefore assigned the highest forgetting preference.

  2. Cases with a residual cost imply that their forgotten prefix was non-conformant. Since the prefix-alignment for a forgotten prefix cannot be revisited, these cases will remain non-conformant forever. Such cases are assigned the second highest forgetting preference.

  3. The prefix-alignments of completely conformant cases, i.e., cases with a 0.0 fitness cost, are optimal. The prefix-alignments of their orphan events are expected to remain optimal starting from their current marking. Such cases are therefore assigned the third highest forgetting preference.

  4. Cases with zero residual cost but a non-zero total fitness cost imply that their in-memory events are not fitting. We assign such cases the lowest forgetting preference, in light of the probability that they may obtain optimal prefix-alignments with a lower fitness cost in the future.
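The four conditions above can be condensed into a single preference function; a lower score means "forget first". The per-case statistics used as inputs (event count, residual cost, total fitness cost) are an assumed representation, not the paper's data structure.

```python
def forgetting_preference(n_events, residual_cost, total_cost):
    """Rank a case for forgetting (1 = forget first, 4 = forget last),
    mirroring the four conditions of the forgetting criteria.

    n_events      : number of events currently embedded in the case
    residual_cost : carried-over cost of an already forgotten prefix
    total_cost    : effective trace fitness cost of the case so far
    """
    if n_events == 1 and total_cost == 0:
        return 1  # compliant monuple: its future alignment stays optimal
    if residual_cost > 0:
        return 2  # forgotten prefix was non-conformant; verdict is fixed
    if total_cost == 0:
        return 3  # fully conformant so far; expected to remain optimal
    return 4      # in-memory deviations might still improve later
```

A single pass over memory would track the case with the minimum score, stopping early as soon as a score of 1 is seen, exactly as the search optimization above describes.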

Figure 3. Results of the experiment with bounded states: (a) stored states, (b) fitness costs, (c) classification of cases, (d) average processing time per event.

4.3. Combined bounded cases and states with carryforward marking and cost

The approach presented in Section 4.1 reduces memory consumption by limiting the number of states that each case in memory can retain. The approach presented in Section 4.2 reduces memory consumption by limiting the number of cases retained in memory. Our third approach combines the two by bounding both the number of cases in memory and the number of states that each of these cases can retain. Since both the number of cases and the number of states per case are bounded, this approach deterministically reduces memory consumption with respect to both previously presented approaches.

As in Section 4.1, on every prefix-alignment computation for the cases in memory, we forget the earliest prefix state(s) in excess of the state limit and prepend their summary as a special state to the surviving states. As in Section 4.2, the prefix-alignments of cases in excess of the case limit are forgotten using the forgetting criteria, and the summary of each is stored as a single state. Upon observing an orphan event of a case, we compute its prefix-alignment starting from the marking of its forgotten prefix, which we retrieve from the stored summary. We add the cost retained therein as residual cost to the cost incurred by the orphan event(s). Hence, the effective trace fitness cost of such a case also accounts for the cost of its forgotten prefix states. The maximum memory consumption at any point is therefore bounded by the number of allowed cases times their state limit, plus the single-state summaries. Algorithm 3 provides an algorithmic summary of this proposed approach.

Algorithm 3. Prefix-alignment-based OCC with bounded cases and states
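A minimal sketch of this combined approach, under stated assumptions: `State`, `extend`, and `pick_victim` are toy stand-ins for the real prefix-alignment step (a shortest-path search) and the forgetting criteria, and the limits are illustrative.

```python
from collections import deque, namedtuple

# A prefix-alignment state: the marking reached and the fitness cost so far.
State = namedtuple("State", ["marking", "cost"])

STATE_LIMIT = 3    # max retained states per case (plus one summary state)
CASE_LIMIT = 2     # max cases allowed to keep a prefix-alignment in memory

alignments = {}    # case_id -> deque[State], the retained prefix-alignment
forgotten = {}     # case_id -> single summary State (marking + residual cost)

def extend(state, event):
    """Toy stand-in for the prefix-alignment step: conformant events advance
    the marking at zero cost, others incur a unit log-move cost."""
    return State(state.marking + 1, state.cost + (0.0 if event == "ok" else 1.0))

def pick_victim():
    """Toy stand-in for the forgetting criteria: evict the cheapest case."""
    return min(alignments, key=lambda c: alignments[c][-1].cost)

def observe(case_id, event):
    """Process one streaming event under both bounds."""
    if case_id not in alignments:
        if len(alignments) >= CASE_LIMIT:                   # bound on cases
            victim = pick_victim()
            forgotten[victim] = alignments.pop(victim)[-1]  # one-state summary
        start = forgotten.pop(case_id, State(0, 0.0))       # resume from summary
        alignments[case_id] = deque([start])
    states = alignments[case_id]
    states.append(extend(states[-1], event))
    while len(states) > STATE_LIMIT + 1:                    # bound on states
        states.popleft()                                    # forget earliest state
    return states[-1].cost                                  # effective fitness cost
```

Because each state carries its cumulative cost, the cost of forgotten prefix states automatically survives as residual cost in the summary state, mirroring the carryforward-marking-and-cost idea described above.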

5. Experimental Evaluation

The proposed approaches are evaluated through a prototype implementation (https://www.github.com/rashidzaman84/MemoryEfficientOCC). It depends on the Online Conformance package (van Zelst et al., 2019), which computes prefix-alignments through shortest-path search. This package requires a Petri net process model together with its initial and final markings. Additionally, our first approach requires the state limit, the second approach requires the case limit, and the third approach requires both. The experiments are conducted on a Windows 10 64-bit machine with an Intel Core i7-7700HQ 2.80GHz CPU and 32GB of RAM.

In all our experiments, we use the event data of the application process and its integral offer subprocess from the Business Process Intelligence Challenge 2012 (BPIC'12, http://www.win.tue.nl/bpi/2012/challenge). This real-life event data relates to loan applications made to a Dutch financial institute and contains 13087 cases consisting of 92093 events. The reference process model was developed by a process modelling expert in consultation with domain experts from the financial institute. We selected this event data because of 1) the high complexity of the reference process model, 2) the multiple types of event noise prevailing in the data, and 3) the high arrival rate of cases. We realize an event stream by dispatching the events in the data on the basis of their actual timestamps. This way we ensure that the case and event distributions of the data are preserved and that the number of cases running in parallel surpasses the case limit.
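Realizing the stream amounts to a global sort of the log's events by timestamp, so that events of different cases interleave exactly as they did in reality. A minimal sketch with a hypothetical three-event log (the real experiments use BPIC'12, which is not bundled here):

```python
from datetime import datetime

# Hypothetical event log: (case id, activity, ISO timestamp).
log = [
    ("case1", "A_SUBMITTED", "2011-10-01T10:00:00"),
    ("case2", "A_SUBMITTED", "2011-10-01T10:00:05"),
    ("case1", "A_ACCEPTED",  "2011-10-01T10:00:03"),
]

def to_stream(event_log):
    """Order events globally by timestamp, preserving the original
    case and event distributions of the log."""
    return sorted(event_log, key=lambda e: datetime.fromisoformat(e[2]))

stream = to_stream(log)
```

Dispatching the sorted events one by one then mimics an online setting in which events of many cases arrive interleaved.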

Figure 4. Results of the experiment with bounded cases: (a) stored states, (b) fitness costs, (c) classification of cases, (d) average processing time per event.
Figure 5. Results of the experiment with bounded cases and states: (a) stored states, (b) fitness costs, (c) classification of cases, (d) average processing time per event.

We present the results in an event-window fashion, where all windows consist of the same number of observed events. As our baseline, we consider state-of-the-art incremental prefix-alignments with infinite memory, which always yield optimal prefix-alignments. For each event window, the results of our experiments comprise four statistics: the maximum number of states in memory (representing the memory footprint), the root mean square error (RMSE) of the fitness cost, the classification score for labelling cases as conformant or non-conformant, and the average processing time per event (APTE). To calculate the trace fitness cost, all our experiments use the default unit cost of 1 for log and model moves, while synchronous moves and model moves with silent transitions incur a cost of 0. The RMSE and the classification score are calculated with reference to the baseline. We use five state limits, i.e., 1, 2, 3, 4, and 5, in experiments with the first approach. For experiments with our second approach, we use case limits of 100, 200, 300, 400, and 500. To evaluate our third approach, we use combinations of the state limits 1 through 5 with the case limits 100 through 500. For our baseline, we consider both limits infinite. Note that the Y-axis of the classification-score plots starts at 0.94 rather than 0.0 to highlight the minor differences. For APTE, we replicate the event stream several times, renaming the cases in each replication; this effectively mimics a much larger event stream. We report the APTE as the mean value over these iterations.
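The per-window RMSE against the baseline can be computed as in the following sketch; the window size and the cost lists are illustrative, not taken from the experiments.

```python
import math

def windowed_rmse(approx_costs, baseline_costs, window=1000):
    """RMSE of per-event fitness costs against the infinite-memory
    baseline, reported per event window of fixed size."""
    rmses = []
    for start in range(0, len(approx_costs), window):
        pairs = list(zip(approx_costs[start:start + window],
                         baseline_costs[start:start + window]))
        mse = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        rmses.append(math.sqrt(mse))
    return rmses
```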

5.1. Results

Our first set of experiments, whose results are provided in Figure 3, relates to the bounded-states approach presented in Section 4.1. As expected and depicted in Figure 3(a), we store fewer cumulative states for all the state limits 1 through 5 in comparison to the baseline. The state storage is similar for the limits 1 and 2, since for the former we need to retain the special summary state in addition to the single allowed state. The results for fitness costs, depicted in Figure 3(b), are quite interesting: even when retaining very few states per case, the RMSE is not significant. It is close to zero for state limits of 3 and 4 and exactly zero for a limit of 5. Referring to Figure 3(c), the statistics regarding the classification of cases are also interesting. For all our state limits, a classification score of 1.0 indicates that our approach always correctly classifies cases as either conformant or non-conformant.

Referring to Figure 3(d), all the state limits also perform far better and almost constantly on the APTE metric. Interestingly, the baseline curve in Figure 3(d) depicts an increasing trend, suggesting that the APTE grows with the number of observed (and accordingly stored) events and cases, which is misleading. Through further experiments and analysis, we found that this increasing trend is mainly attributable to 1) the uneven number of shortest-path-search-based prefix-alignment computations among the event windows and 2) the trace length directly contributing to the computational complexity of the shortest-path search. The proposed approach is therefore stream-friendly in terms of computational performance.

Our second set of experiments, with results provided in Figure 4, relates to the bounded-cases approach presented in Section 4.2. As depicted in Figure 4(a), our proposed approach is highly frugal in storing states for all the case limits in comparison to the baseline. However, the differences between the varying case limits are not significant. For all these case limits, prefix-alignments are continuously forgotten to stay within the limit, so cases do not grow significantly in terms of the number of states in their prefix-alignments. Hence, the maximum number of states retained is comparable across these case limits.

Referring to Figure 4(b), we observe a higher RMSE than with the previous approach, implying that this approach generally overestimates the fitness cost of cases. The RMSE decreases, though not proportionally, as the case limit increases. Investigating this overestimation revealed an interesting factor. The process model contains two completely identical execution sequences that lead to two different reachable markings. This anomalous behavior exists because two transitions share the same label and input place and hence are in a choice relation. The incremental prefix-alignment approach, lacking any information about future events, always fires the transition from which the final marking can be reached earlier. For some cases, a fraction of the future events corresponds to transitions belonging to the execution sequence of the alternate transition; for such cases, the alternate transition should have been fired instead. A considerable number of cases are forgotten at this stage by our forgetting approach. Since the marking of the forgotten states is taken as the initial marking for computing the prefix-alignments of their orphan events, the mentioned fraction of orphan events no longer fits and is hence improperly treated as log moves. Linked to this effect, some events prematurely lead their cases to the final marking, and hence all subsequent events are improperly marked as log moves.

Referring to Figure 4(c), the classification score for all the case limits is almost 1.0. Hence, despite the overestimation of non-conformance discussed in the previous paragraph, this approach is highly accurate in the binary classification of cases as conformant or non-conformant. Regarding the APTE, due to the continuous forgetting of cases, the number of states of multi-state cases does not grow significantly, and hence even the shortest-path-search-based prefix-alignment computations do not take long. Therefore, all the case limits sustain an almost uniform APTE, as can be seen in Figure 4(d). Hence, this approach is also computationally stream-friendly.

Our third set of experiments, with results provided in Figure 5, relates to the approach combining bounded cases and bounded states presented in Section 4.3. For clarity, we report the results only for the state limits 1 and 5 in combination with the case limits 100 and 500. As evident in Figure 5(a), these state and case limit combinations are highly frugal in consuming states. Interestingly, the state consumption is comparable for the same state limit even under different case limits. This peculiarity is explained by the fact that, for all the limits, prefix-alignments are forgotten so frequently that the multi-state cases do not necessarily reach the state limit. Referring to Figure 5(b), as expected, the two-dimensional bounding causes a slight increase in RMSE with respect to the previous experiments. We notice that increasing the state and case limits (disproportionately) reduces the RMSE. Interestingly, we observe a classification score close to 1.0 for all the limit combinations in Figure 5(c). Regarding the APTE, referring to Figure 5(d), as in the previous experiments, this approach also performs far better and consistently for all limit combinations.

5.2. Discussion

We stressed our proposed approaches with high-noise event data in which only 17% of the cases with a prefix length of 9 or more are conformant. Of the 83% non-conformant cases, 45% have multiple events with associated noise, possibly of different types. The property of the reference process model that multiple identical execution sequences lead to different markings added interesting dimensions to the experiments. We conclude that the bounded-states approach with a suitable state limit is very light on memory and performs as well as the baseline. A suitable state limit mainly depends on the number of past events required to revisit a prefix-alignment for optimality. With the bounded-cases approach, though we save considerably on memory, the fitness costs are overestimated. A case limit equal to the maximum number of in-parallel running non-conformant cases, which is a likely situation in real event streams, will yield statistics as good as those of the baseline. The combined bounded-cases and bounded-states approach is most suitable for processes with long traces and a binary classification of cases as conformant or non-conformant. However, all the presented approaches may at some point grow unbounded, with the second and third approaches withstanding relatively long, since the summary of each forgotten case consists of just a single state. Therefore, a systematic mechanism to completely forget cases should complement these approaches. Besides the problem arising from anomalous execution sequences in process models, log moves at the very end of a prefix-alignment may remain unaccounted for in the future if the prefix-alignment is forgotten at that stage, even when retaining the current marking.

6. Conclusion and Future Work

We presented three incremental approaches that deal with scarcity of memory while avoiding the missing-prefix problem in prefix-alignment-based OCC of event streams. The effectiveness of these approaches is established through experiments replaying real-life event data as an event stream. The proposed approaches considerably reduce memory consumption and positively impact the overall processing time of events. They are equally applicable to general (prefix-)alignments and can easily be extended to other CC techniques.

Based on the observations and findings of the conducted experiments, we foresee the need for techniques that completely bound memory in event-stream processing applications. Our last two approaches suffer from the 'hard-coded initial marking' problem caused by anomalous execution sequences in process models. A technique that revisits past alignment decisions upon reaching a threshold of non-conformance in orphan events can mitigate this problem.

Acknowledgements.
The authors have received funding within the BPR4GDPR project from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 787149.

References

  • A. Adriansyah, N. Sidorova, and B. F. van Dongen (2011) Cost-based fitness in conformance checking. In 2011 Eleventh International Conference on Application of Concurrency to System Design, pp. 57–66.
  • A. Adriansyah, B. F. van Dongen, and N. Zannone (2013) Controlling break-the-glass through alignment. In 2013 International Conference on Social Computing, pp. 606–611.
  • A. Adriansyah (2014) Aligning observed and modeled behavior.
  • M. Bahri, A. Bifet, J. Gama, H. M. Gomes, and S. Maniu (2021) Data stream analysis: foundations, major tasks and tools. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, e1405.
  • J. v. Brocke, W. van der Aalst, T. Grisold, W. Kremser, J. Mendling, J. Recker, M. Roeglinger, M. Rosemann, and B. Weber (2021) Process science: the interdisciplinary study of continuous change.
  • A. Burattin and J. Carmona (2017) A framework for online conformance checking. In BPM, pp. 165–177.
  • A. Burattin, S. J. van Zelst, A. Armas-Cervantes, B. F. van Dongen, and J. Carmona (2018) Online conformance checking using behavioural patterns. In BPM, pp. 250–267.
  • A. Burattin (2018) Streaming process discovery and conformance checking. In Encyclopedia of Big Data Technologies, S. Sakr and A. Zomaya (Eds.), pp. 1–8. ISBN 978-3-319-63962-8.
  • J. Carmona, B. van Dongen, A. Solti, and M. Weidlich (2018) Conformance checking. Springer.
  • H. M. Gomes, J. Read, A. Bifet, J. P. Barddal, and J. Gama (2019) Machine learning for streaming data: state of the art, challenges, and opportunities. ACM SIGKDD Explorations Newsletter 21 (2), pp. 6–22.
  • M. Hassani, S. J. van Zelst, and W. M. van der Aalst (2019) On the application of sequential pattern mining primitives to process discovery: overview, outlook and opportunity identification. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 9 (6), e1315.
  • M. Hassani (2015) Efficient clustering of big data streams. Apprimus Wissenschaftsverlag.
  • W. L. J. Lee, A. Burattin, J. Munoz-Gama, and M. Sepúlveda (2021) Orientation and conformance: a HMM-based approach to online conformance checking. Information Systems 102, 101674. ISSN 0306-4379.
  • W. L. J. Lee, H. Verbeek, J. Munoz-Gama, W. M. van der Aalst, and M. Sepúlveda (2018) Recomposing conformance: closing the circle on decomposed alignment-based conformance checking in process mining. Information Sciences 466, pp. 55–91.
  • W. van der Aalst, A. Adriansyah, A. K. A. De Medeiros, F. Arcieri, T. Baier, T. Blickle, J. C. Bose, P. van den Brand, R. Brandtjen, J. Buijs, et al. (2011) Process mining manifesto. In International Conference on Business Process Management, pp. 169–194.
  • W. M. van der Aalst (2013) Decomposing Petri nets for process mining: a generic approach. Distributed and Parallel Databases 31 (4), pp. 471–507.
  • W. van der Aalst (2016) Data science in action. In Process Mining, pp. 3–23.
  • B. F. van Dongen, N. Busi, G. M. Pinna, and W. M. van der Aalst (2007) An iterative algorithm for applying the theory of regions in process mining. In FABPWS'07, pp. 36–55.
  • S. van Zelst (2019) Process mining with streaming data. Technische Universiteit Eindhoven.
  • S. J. van Zelst, A. Bolt, M. Hassani, B. F. van Dongen, and W. M. van der Aalst (2019) Online conformance checking: relating event streams to process models using prefix-alignments. International Journal of Data Science and Analytics 8 (3), pp. 269–284.
  • R. Zaman, M. Hassani, and B. F. Van Dongen (2021) Prefix imputation of orphan events in event stream processing. Frontiers in Big Data 4, 80. ISSN 2624-909X.
  • R. Zaman, M. Hassani, and B. F. Van Dongen (2022) Efficient memory utilization in conformance checking of process event streams. In SAC 2022.