Conformance checking techniques aim to assess the extent to which the execution of a process complies with a process model representing the expected behavior. Since deviating from the expected behavior can be costly and/or expose an organization to fraud, conformance checking represents a crucial asset for modern organizations. State-of-the-art approaches are able both to assess the overall level of compliance of executions and to pinpoint where deviations occurred, thus providing the analyst with valuable diagnostics.
Nevertheless, current techniques still suffer from some limitations. Among them, in this work we focus on the lack of flexibility in compliance analysis. Processes often involve several alternative execution paths, whose choice can depend on the values of one or more data variables. While this aspect has traditionally been neglected in conformance checking, which typically focuses on the control-flow perspective [22, 23, 4, 2], recently a few approaches have been proposed to assess process compliance with respect to multiple perspectives [12, 18]. However, existing techniques consider an activity performed at a given point of an execution either completely wrong or completely correct. Such a crisp distinction is not suitable in many real-world processes, where decisions on data guards are often characterized by some level of uncertainty, which makes it challenging to draw exact lines between acceptable and unacceptable values. As a result, in these domains there often exists some tolerance to deviations. For example, let us assume that in a medical process there is a guideline stating that between two procedures there must be an interval of at most five hours. Adopting a crisp evaluation, 4 hours and 59 minutes would be considered fully compliant, while 5 hours and 1 minute would be fully non-compliant, which is intuitively unreasonable. Such an approach can generate misleading diagnostics, where executions marked as deviating actually correspond to acceptable behaviors. Furthermore, the magnitude of the deviations is not considered; small and large deviations are placed at the same level of compliance, which can easily be misleading. It is worth noting that this approach can also hamper the overall process resilience, making it very sensitive even to small exceptions/disruptions. For instance, if process executions are monitored in real time, every small deviation can raise alarms and/or stop the execution.
To deal with these challenges, in the present work we perform an exploratory study on the use of fuzzy sets in conformance checking. Fuzzy sets have proven to be a valuable asset for representing human decision-making processes, since they allow one to formalize the uncertainty often related to these processes. In particular, elaborating upon fuzzy set concepts, we propose a new multi-perspective conformance checking technique that accounts for the degree of deviations. Taking into account the severity of the occurred deviations allows us to a) improve the quality of the provided diagnostics, generating a more accurate assessment of the deviations, and b) enhance the flexibility of compliance checking mechanisms, thus paving the way to improving the robustness of the process management system with respect to unforeseen exceptions, which is a necessary step towards the development of resilient systems. As a proof of concept, we tested the approach over a synthetic dataset.
The rest of this work is organized as follows. Section 2 discusses related work; Section 3 introduces a running example to discuss the motivation of this work; Section 4 introduces basic concepts used throughout the paper; Section 5 illustrates the approach; Section 6 discusses the results obtained by a set of synthetic experiments; finally, Section 7 draws conclusions and outlines future work.
2 Related work
During the last decades, several conformance checking techniques have been proposed. Some approaches [7, 9, 21] propose to check whether event traces satisfy a set of compliance rules, typically represented using declarative modeling. Rozinat and van der Aalst propose a token-based technique to replay event traces over a process model to detect deviations. Although this technique can deal with infinite behavior, it has been shown that token-based techniques can provide misleading diagnostics. More recently, alignments have been proposed as a robust approach to conformance checking. Alignments are able to pinpoint the deviations causing nonconformity based on a given cost function. While most alignment-based approaches use the standard distance cost function, some variants have been proposed to enhance the quality of the provided diagnostics. For example, the work of Alizadeh et al.
proposes an approach to compute the cost function by analyzing historical logging data, with the aim of obtaining probable explanations of nonconformity. Besides the control flow, there are also other perspectives, like data or resources, that are often crucial for compliance checking analysis. A few approaches in the literature have investigated how to include these perspectives in the analysis. One line of work extends the approach of Alizadeh et al. to enhance the accuracy of the probable nonconformity explanations by taking into account data describing the contexts in which the activities occurred in previous process executions. Some approaches propose to compute the control-flow alignment first, then assessing the compliance of process executions with respect to the data perspective. These methods assume that the control flow is more important than the other perspectives for an optimal alignment, with the result that some important deviations can be missed. Other work introduces a cost function able to account for all kinds of deviations at the same time, thus obtaining well-rounded diagnostics considering all the desired perspectives. The approaches mentioned so far assume a crisp evaluation of deviations, according to which a deviation is completely wrong or completely correct. In this work, we aim at considering the severity of the detected deviations by using fuzzy set notions. Several studies in the literature have proven that fuzzy sets can be successfully employed to represent humans' decision-making processes; among them, we can mention, for example, a fuzzy approach to modelling Vietnamese farmers' decision process in adopting integrated farming systems. However, to the best of our knowledge, no previous work has investigated the use of fuzzy set concepts for conformance checking.
3 Motivating Example
Consider, as a running example, a loan management process derived from previous work on the event log of a financial institute made available for the BPI2012 challenge [1, 14]. Fig. 1 shows the process in BPMN notation. The process starts with the submission of an application. Then, the application passes through a first assessment, aimed at verifying whether the applicant meets the requirements. If the requested amount is greater than 10000 euros, the application also goes through a more accurate analysis to detect possible frauds. If the application is not eligible, the process ends; otherwise, the application is accepted. An offer to be sent to the customer is selected, and the details of the application are finalized. After the offer has been created and sent to the customer, the latter is contacted to discuss the offer, possibly adjusting it according to his/her preferences. At the end of the negotiation, the agreed application is registered on the system. At this point, further checks can be performed on the application, if the overall duration is still below 30 days, before approving it.
Let us consider the following example traces (we use the notation to denote the occurrence of an activity in which the variables are assigned the corresponding values).
Both these executions violate the guard on the amount; indeed, the activity should have been skipped, since the requested loan amount is lower than 10000. It is worth noting, however, that there is a significant difference in terms of their magnitude. Indeed, while in the first execution the threshold was missed by only a few dozen euros, the second violation is several thousand euros below the limit. Since state-of-the-art conformance checking techniques adopt a crisp logic, where the value of a data variable can be marked only either as correct or wrong, this difference between the two executions remains undetected.
We argue that taking into account the severity of the violations when assessing execution compliance allows us to obtain more accurate diagnostics, especially in contexts where there exists some uncertainty related to the definition of the guards. Indeed, in these cases guards often represent guidelines rather than strict, sharp rules, and there might be some tolerance with respect to violations. In our example, the first execution could model an execution considered suspicious for some reason, making a fraud check worthwhile, since the amount is only slightly less than 10000. On the other hand, the violation in the second execution deserves some attention, since the amount is so far from the threshold that the additional costs needed for the fraud check are probably not justified.
Differentiating among different levels of violations also impacts the interpretation of the deviations. Often, multiple interpretations are returned by conformance checking techniques. For example, in our case possible interpretations can be 1) the activity should have been skipped, or 2) the execution of the activity is correct but it occurred with an unexpected value of the variable. Differentiating between the severities of the deviations would make the second interpretation the preferred one when the deviation is limited, as in the first execution, thus providing more guidance to the analyst during process diagnostics.
4 Preliminaries

This section introduces a set of definitions and concepts that will be used throughout the paper. First, we recall important conformance checking notions; secondly, we introduce basic elements of fuzzy set theory.
4.1 Conformance Checking: Aligning Event Logs and Models
Conformance checking techniques detect discrepancies between a process model describing the expected process behavior and the real process execution.
The expected process behavior is typically represented as a process model. Since the present work is not constrained to the use of a specific modeling notation, here we refer to the notation used in previous work, enriched with data-related notions.
Definition 1 (Process model)
A process model is a transition system defined over a set of activities and a set of variables, with states, initial states, final states and transitions. A domain function defines the admissible data values, i.e., the domain of each variable; a guard function associates an activity to a guard, i.e., a boolean formula expressing a condition on the values of the data variables; a write function associates an activity with the set of variables which are written/updated by the activity; finally, a state function associates each state with the corresponding variable=value pairs.
When a variable appears in a guard, it refers to its value just before the occurrence of the activity; however, if the variable is written by the activity, it can also appear in primed form, referring to its value after the occurrence. The firing of an activity in a state is valid if: 1) the activity is enabled in the state; 2) it writes all and only the variables in its write set; 3) its guard evaluates to true over the state. The set of valid process traces of a process model consists of all the valid firing sequences that, from an initial state, lead to a final state.
Process executions are often recorded by an information system in so-called event logs. In particular, an event log consists of traces, each collecting the sequence of events recorded during the same process execution. Formally, given the set of (valid and invalid) firings of activities of a process model, an event log is a multiset of traces over this set. Given an event log, conformance checking builds an alignment between the log and the model, whose goal consists in relating the activities that occurred in the event log to the activities in the model and vice versa. To this end, we need to map "moves" occurring in the event log to possible "moves" in the model. However, since the executions may deviate from the model and/or not all activities may have been modeled or recorded, we might have log/model moves which cannot be mimicked by model/log moves, respectively. These situations are modeled by a "no move" symbol "≫". A move is then represented by a pair whose first component is a log event (or ≫) and whose second component is a model activity (or ≫), such that:
a pair is a move in log if its log component is an event and its model component is ≫;
a pair is a move in model if its log component is ≫ and its model component is an activity;
a pair is a move in both without incorrect write operations if both components refer to the same activity and the write operations match those prescribed by the model;
a pair is a move in both with incorrect write operations if both components refer to the same activity but at least one write operation differs from what the model prescribes.
Given the set of all legal moves, an alignment between a log trace and a model trace is a sequence of legal moves such that the projection on the first element (ignoring ≫) yields the log trace, and the projection on the second element (ignoring ≫) yields the model trace.
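The four kinds of legal moves above can be sketched as a small classifier. This is purely illustrative: the function name, the `writes_ok` flag, and the use of the string `">>"` for the "no move" symbol are our own choices, not notation from the paper.

```python
GAP = ">>"  # the "no move" symbol

def classify_move(log_step, model_step, writes_ok=True):
    """Classify a pair (log_step, model_step) as one of the legal move kinds."""
    if log_step != GAP and model_step == GAP:
        return "move in log"
    if log_step == GAP and model_step != GAP:
        return "move in model"
    if log_step != GAP and model_step != GAP:
        # a synchronous move; write operations may or may not match the model
        return ("move in both (correct writes)" if writes_ok
                else "move in both (incorrect writes)")
    raise ValueError("(>>, >>) is not a legal move")
```

For example, `classify_move("a", GAP)` yields a move in log, while `classify_move("a", "a", writes_ok=False)` yields a move in both with incorrect write operations.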
Let us consider the simple model represented in Fig. 2 and an example trace. Table 1 shows two possible alignments for the trace. In the first alignment, the pair is a move in both with incorrect data, since the value of the variable is not allowed when the activity is executed; while in the second alignment the move is matched with a ≫, i.e., it is a move in log.
As shown in Example 1, there can be multiple possible alignments for a given log trace and process model. Our goal is to find the optimal alignment, i.e., a complete alignment as close as possible to a proper execution of the model. To this end, the severity of deviations is assessed by means of a cost function:
Definition 2 (Cost function, Optimal Alignment)
Let a log trace and a model trace be given. Given the set of all legal moves, a cost function assigns a non-negative cost to each legal move. The cost of an alignment between the log trace and the model trace is computed as the sum of the costs of all its moves. An optimal alignment of a log trace and a process trace is one of the alignments with the lowest cost according to the provided cost function.
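As a minimal illustration of Definition 2, the following sketch computes the cost of an alignment as the sum of its move costs and selects an optimal alignment among a set of candidate alignments. Moves are encoded as `(log_step, model_step)` pairs; the standard-distance cost shown here (1 for log/model moves, 0 for synchronous moves) is just one possible choice of cost function, and all names are our own.

```python
GAP = ">>"  # the "no move" symbol

def standard_cost(move):
    """Standard distance: 1 for a move in log/model, 0 for a synchronous move."""
    log_step, model_step = move
    return 1 if GAP in (log_step, model_step) else 0

def alignment_cost(alignment, cost=standard_cost):
    """Cost of an alignment = sum of the costs of all its moves."""
    return sum(cost(m) for m in alignment)

def optimal_alignment(candidates, cost=standard_cost):
    """An optimal alignment is any candidate with the lowest total cost."""
    return min(candidates, key=lambda a: alignment_cost(a, cost))
```

Swapping `standard_cost` for a data-aware or fuzzy cost function changes which candidate is deemed optimal, which is exactly the lever the rest of the paper exploits.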
4.2 Basic Fuzzy Sets Concepts
Classic set theory defines crisp, dichotomous functions to determine the membership of an object in a given set. For instance, the set N of real numbers smaller than 5 can be expressed as N = {x ∈ ℝ | x < 5}. In this setting, an object either belongs to N or it does not. Although crisp sets have proven to be useful in various applications, there are some drawbacks in their use. In particular, human thoughts and decisions are often characterized by some degree of uncertainty and flexibility, which are hard to represent in a crisp setting.
Fuzzy sets theory aims at providing a meaningful representation of measurement uncertainties, together with a meaningful representation of vague concepts expressed in natural language and close to human thinking . Formally, a fuzzy set is defined as follows:
Definition 3 (Fuzzy Set)
Let X be a collection of objects. A fuzzy set F over X is defined as a set of ordered pairs F = {(x, μF(x)) | x ∈ X}. μF is called the membership function (MF) for the fuzzy set F, and it is defined as μF : X → [0, 1]. The set of all points x in X such that μF(x) > 0 is called the support of the fuzzy set, while the set of all points in which μF(x) = 1 is called the core.
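A hypothetical triangular membership function makes the notions of support and core concrete. The universe and the parameters below are our own illustrative choices, not values from the paper.

```python
def triangular_mf(a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at 1 in b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)  # rising edge
        return (c - x) / (c - b)      # falling edge
    return mu

universe = range(0, 11)
mu = triangular_mf(2, 5, 8)
support = [x for x in universe if mu(x) > 0]   # points with mu > 0
core = [x for x in universe if mu(x) == 1]     # points with mu == 1
```

Here the support is {3, ..., 7} and the core is the single point {5}; trapezoidal functions, by contrast, have a whole interval as core, which is what the tolerance intervals of Section 5 rely on.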
5 Approach

The goal of this work is to introduce a compliance checking approach tailored to take into account the severity of deviations, in order to introduce some degree of flexibility when assessing the compliance of process executions and to generate diagnostics that are more accurate and possibly closer to human interpretation. To this end, we investigate the use of fuzzy set theory. In particular, we propose to use fuzzy membership functions to model the cost of moves involving data; then, we employ off-the-shelf techniques based on the A* algorithm to build the optimal alignment. The approach is detailed in the following subsections.
5.1 Fuzzy cost function
The computation of an optimal alignment relies on the definition of a proper cost function for the possible kinds of moves (see Section 4). Most state-of-the-art approaches adopt (variants of) the standard distance function, which sets a cost of 1 for every move in log/model (excluding invisible transitions) and a cost of 0 for synchronous moves. Furthermore, the analyst can use weights to differentiate between different kinds of moves.
The standard distance function accounts only for the control-flow perspective. However, in this work we are interested in the data perspective as well. In this regard, a cost function explicitly accounting for the data perspective has been introduced in previous work and is defined as follows.
Definition 4 (Data-aware cost function)
Let a move between a log trace and a model execution be given, and let, with a slight abuse of notation, the write operations refer to the activity involved in the move. The cost is defined as:
In the previous definition, data costs are computed as a) the number of data variables not written/updated because the corresponding activity was skipped, and b) the number of data variables in a move whose values are not allowed according to the process model. The previous function considers every move either as completely wrong or as completely correct; namely, it is a dichotomous function. To differentiate between different magnitudes of deviations, in this work we propose to use fuzzy membership functions as cost functions for the alignment moves. Note that here we focus on data moves. Indeed, when considering other perspectives, the meaning of the severity of a deviation is not that straightforward. For example, when considering control-flow deviations, usually an activity is either executed or skipped. Nevertheless, fuzzy costs can be defined also for other process perspectives, for instance, to differentiate between skips of activities under different conditions. We plan to explore these directions in future work.
Following the above discussion, we define our fuzzy cost function as follows:
Definition 5 (Data-aware fuzzy cost function)
Let a move between a process trace and a model execution be given, and let the fuzzy membership function return the degree of deviation of a data variable in a move with incorrect data. The cost is defined as:
To define the fuzzy cost function in (2), we first need to determine over which data constraints we want to define a membership function (note that multiple membership functions can be defined for the same data variable, if it is used in multiple guards). Then, for each of them, we first need to define a tolerance interval; in turn, this implies defining a) an interval for the core of the function, and b) an interval for the support of the function (see Section 4). This choice corresponds to determining, for a given data constraint, which values should be considered equivalent and which ones not optimal but still acceptable. Once the interval is chosen, we need to select a suitable membership function. In the literature, several different membership functions have been defined, with different levels of complexity and different interpretations. It is straightforward to see that determining the membership function that best captures the experts' knowledge is not a trivial task. For the sake of space, an extended discussion of membership function modeling is out of the scope of this paper and left for future work. Nevertheless, we would like to point out that this is a well-studied issue in the literature, for which guidelines and methodologies have been drawn. The approach can be used in combination with any of these methodologies, since it does not depend on the specific membership function chosen.
Let us consider again the alignment in Table 1 and the model in Fig. 2. According to a crisp cost function, the cost for the second move would be 1, since the variable does not fulfill the corresponding guard. Now, let us assume we interview an expert of this process, who tells us that values of up to 40 are still acceptable, even though not optimal. We represent this knowledge using a so-called R-function, commonly used for its simplicity when no further information is available, defined as follows.
where the upper bound represents the largest value the analyst is willing to accept, while the lower bound represents the ideal value expressed by the constraint. With the parameters of our example, we would obtain a move cost equal to 0.5.
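Under the assumption that the R-function makes the cost grow linearly from 0 at the ideal value to 1 at the upper bound the analyst is willing to accept, the move-cost computation can be sketched as follows. The function name and the concrete parameters are our own assumptions, since the exact values of the example are not reproduced here.

```python
def r_function_cost(x, ideal, upper):
    """Degree of deviation of value x for a guard with a tolerance interval."""
    if x <= ideal:
        return 0.0                           # inside the core: fully compliant
    if x >= upper:
        return 1.0                           # outside the support: full deviation
    return (x - ideal) / (upper - ideal)     # linearly increasing cost
```

For instance, with an assumed ideal value of 0 and an upper bound of 40, an observed value of 20 yields a cost of 0.5, consistent with the move cost reported in the example under these assumed parameters.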
5.2 Alignment building: using A* to find the optimal alignment
The problem of finding an optimal alignment is usually formulated as a search problem over a directed graph. Let a directed graph with edges weighted according to some cost structure be given. The A* algorithm finds the path with the lowest cost from a given source node v0 to a node of a given goal set. The cost for each node v is determined by an evaluation function f(v) = g(v) + h(v), where:
g(v) gives the smallest path cost from v0 to v;
h(v) gives an estimate of the smallest path cost from v to any of the target nodes.
If h is admissible, i.e., it underestimates the real distance of a path to any target node, A* finds a path that is guaranteed to have the overall lowest cost.
The algorithm works iteratively: at each step, the node v with the lowest cost is taken from a priority queue. If v belongs to the target set, the algorithm ends, returning node v. Otherwise, v is expanded: every successor v' is added to the priority queue with cost f(v').
Given a log trace and a process model, to employ A* to determine an optimal alignment we associate every node of the search space with a prefix of some complete alignment. The source node is the empty alignment, while the set of target nodes includes every complete alignment of the trace and the model. For every pair of successive nodes, the successor is obtained by adding one move to the alignment prefix of its predecessor.
The cost associated with a path leading to a graph node is then defined as the sum of a fuzzy cost term, defined as in (2), and a negligible cost proportional to the number of moves in the alignment, added to guarantee termination. Note that the cost has to be strictly increasing. While a formal proof is omitted for the sake of space, it is straightforward to see that the cost is obtained in our approach as the sum of non-negative elements; therefore, while moving from an alignment prefix to a longer one, the cost can never decrease. For the definition of the heuristic cost function, different strategies can be adopted. Informally, the idea is to compute, from a given alignment, the minimum number of moves (i.e., the minimum cost) that would lead to a complete alignment. Different strategies have been defined in the literature, e.g., one exploiting Petri-net marking equations, or one generating the possible state space of a BPMN model.
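The search described above can be sketched as follows. For compactness we align a log trace against a single model trace, so a search node is just a pair of prefix lengths; in the actual approach the model side would be explored through the process state space, and the move cost would be the fuzzy cost of (2). The names, the `EPS` termination cost, and the simple heuristic are all illustrative assumptions.

```python
import heapq

GAP, EPS = ">>", 1e-6  # "no move" symbol; negligible per-move cost

def astar_alignment(log_trace, model_trace, move_cost):
    """Return (cost, alignment) with the lowest total move cost via A*."""
    goal = (len(log_trace), len(model_trace))
    # h underestimates the remaining cost: at least max(remaining) moves,
    # each costing at least EPS, are still needed (admissible heuristic).
    h = lambda i, j: max(goal[0] - i, goal[1] - j) * EPS
    queue = [(h(0, 0), 0.0, (0, 0), [])]   # (f, g, node, alignment prefix)
    seen = {}
    while queue:
        _, g, (i, j), align = heapq.heappop(queue)
        if (i, j) == goal:
            return g, align
        if seen.get((i, j), float("inf")) <= g:
            continue                        # already expanded more cheaply
        seen[(i, j)] = g
        succs = []
        if i < goal[0] and j < goal[1] and log_trace[i] == model_trace[j]:
            succs.append((i + 1, j + 1, (log_trace[i], model_trace[j])))  # sync
        if i < goal[0]:
            succs.append((i + 1, j, (log_trace[i], GAP)))   # move in log
        if j < goal[1]:
            succs.append((i, j + 1, (GAP, model_trace[j]))) # move in model
        for ni, nj, move in succs:
            ng = g + move_cost(move) + EPS  # strictly increasing path cost
            heapq.heappush(queue, (ng + h(ni, nj), ng, (ni, nj), align + [move]))
    return float("inf"), []
```

With a standard cost (1 for log/model moves, 0 for synchronous ones), aligning `["a", "b", "c"]` against `["a", "c"]` yields the alignment with the single move in log on "b", mirroring the discussion in Section 4.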
6 Implementation and Experiments
This section describes a set of experiments we performed to obtain a proof of concept of the approach. To this end, we compared the diagnostics returned by a crisp conformance checking approach with the outcome obtained by our proposal. In order to get meaningful insights into the behavior we can reasonably expect when applying the approach in the real world, we employ a realistic synthetic event log, introduced in a former paper, obtained starting from a real-life log, i.e., the event log of the BPI2012 challenge (https://www.win.tue.nl/bpi/doku.php?id=2012:challenge). We evaluated the compliance of this log against a simplified version of the process model, to which we added a few data constraints (see Fig. 1). The approach has been implemented as an extension to a tool designed to deal with BPMN models. In the following, we describe the experimental setup and the obtained results.
The log consists of 5000 traces, in which a predefined set of deviations was injected. The values for the amount variable were collected from the BPI2012 log, while for calculating the duration a random time window ranging from 4 to 100 hours was put between each pair of subsequent activities, and the overall duration was then increased by 31 days for some traces. For more details on the log construction, please refer to the original paper.
Our process model involves two constraints for the data perspective: one on the requested amount, ruling the execution of the fraud-check activity, and one on the overall duration, ruling the execution of the further-check activity. For the crisp conformance checking approach, we use the cost function in (1); for the fuzzy approach, the cost function in (2).
Here we assume that the chosen intervals represent a tolerable violation range for the variables. Since we cannot refer to experts' knowledge here, we derived these values from simple descriptive statistics. In particular, we drew the distributions of the values for each variable, considering values falling within the third quartile as acceptable. The underlying logic is that values which tend to occur repeatedly are likely to indicate acceptable situations. Regarding the shape of the membership function, here we apply the R-function explained in Example 2.
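The derivation of the tolerance bound from the observed values can be sketched with Python's statistics module; using the third quartile as the acceptability threshold mirrors the choice described above, though the exact computation used in the experiments is an assumption on our part.

```python
from statistics import quantiles

def tolerance_upper_bound(values):
    """Third quartile (Q3) of the observed values, used as tolerance bound."""
    q1, q2, q3 = quantiles(values, n=4)  # quartile cut points (exclusive method)
    return q3
```

Values below the returned bound would fall inside the tolerance interval of the membership function, while larger values would be treated as full deviations.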
We compare the diagnostics obtained by the crisp approach and by our approach in terms of a) the kinds of moves involving the activities ruled by the guards, and b) the distribution of fitness values. Table 2 shows the differences in terms of number and kind of moves detected for the two activities within the crisp/fuzzy alignments, considering also the possible existence of multiple optimal alignments. Namely, when the same move got different interpretations in different alignments, we count the move as both a move in log and a move in data. Note, however, that multiple optimal alignments with the same interpretation for a move count as one. It is worth noting that while we obtained the same number of moves in log and moves in data for the crisp approach, these values change when considering the fuzzy approach, for which moves in log are in general fewer. Indeed, the alignments obtained by the fuzzy approach can differ according to the severity of the data deviations. In particular, when the deviation is within the tolerance defined by the membership function, the move in data has a smaller cost than the move in log; hence, there exists only one optimal alignment in these cases. For example, from Table 2 we can derive that for the first activity, in 567 traces (744-177) the data deviation was indeed within the range, and hence we obtained only the move in data in output.
The boxplots in Fig. 4 show the distributions of data deviation severity. We can see that the ranges are similar for both the constraints, with most of the values remaining below 0.65. These distributions suggest that data deviations are mostly quite limited in our dataset; therefore, we expect relevant differences in the fitness values computed by the fuzzy and the crisp approaches. Fig. 4 also shows a scatter plot in which each point represents one trace. The x-axis is the fitness level of the alignment with crisp costs, while the y-axis represents the value corresponding to the fuzzy cost. For all traces above the main diagonal, which amount to 24.3% of all traces, the fuzzy approach obtained higher fitness values. For all traces on the main diagonal, the fitness level remains unchanged.
In the following, we provide a practical example of the impact of the fuzzy cost on the diagnostic of a single trace.
Let us consider
Fig. 5 shows the different alignments obtained adopting a crisp (Fig. 5.a) and a fuzzy (Fig. 5.b) cost function. For the sake of space, here we report only the lines of the alignments related to the activities ruled by the data guards. For each move, we report its position in the alignment followed by the move itself. Note that here we report the default optimal alignment returned by the tool, even though alternative alignments were possible. In particular, while for the second deviation multiple interpretations were returned by both approaches, either as a move in log or a move in data, since the amount of the deviation is outside the tolerance range, the first deviation is always considered a move in data in the fuzzy approach. Furthermore, the fuzzy approach returned a higher fitness value for the trace than the crisp one; this is reasonable, since the first deviation is still close enough to the ideal value.
Summing up, the performed comparison highlighted how the use of a fuzzy cost leads to improved diagnostics. In particular, the results show that the fuzzy approach obtains a more fine-grained evaluation of trace compliance levels, allowing the analyst to differentiate between reasonably small and potentially critical deviations; furthermore, it allows establishing a preferred interpretation in cases in which the crisp function would consider the possible options equivalent, thus reducing ambiguities in interpretation.
7 Conclusion and Future work
The present work investigated the use of fuzzy set concepts in multi-perspective conformance checking. In particular, we showed how fuzzy set notions can be used to take into account the severity of deviations when building the optimal alignment. We implemented the approach and performed a proof of concept over a synthetic dataset, comparing the results obtained adopting a standard crisp logic and our fuzzy logic. The obtained results confirmed the capability of the approach to generate more accurate diagnostics, as shown both by a) the difference in terms of fitness over the overall set of executions, due to a more fine-grained evaluation of the magnitude of the occurred deviations, and b) the differences in terms of the preferred explanations provided by the alignments of the two approaches.
Since this is an exploratory work, there are several research directions still to be explored. First, in future work we plan to test our approach in real-world experiments, to generalize the results obtained so far. Furthermore, as mentioned in Section 5, in the present work we investigated fuzzy modeling only for the data perspective; we plan to extend it to other perspectives in future work. Similarly, we intend to address issues related to possible relations among data variables, incorporating this information to enhance the accuracy of the alignment. Another research direction we intend to explore consists in introducing different aggregation functions; while here we used the classic sum operator to assess the overall trace conformance, in the literature several fuzzy aggregation functions have been defined for membership functions, which can be used to tailor the cost function to the process analysts' needs. Finally, in future work we intend to investigate how to exploit our flexible conformance checking approach to enhance the on-line resilience of the system to exceptions and unforeseen events.
The research leading to these results has received funding from the Brain Bridge Project sponsored by Philips Research.
References

- (2012) Mining process performance from event logs: the BPI challenge 2012 case study. BPM Center Report BPM-12-15, BPMcenter.org.
- (2012) Alignment based precision checking. In International Conference on Business Process Management, pp. 137–149.
- (2010) Towards robust conformance checking. In International Conference on Business Process Management, pp. 122–133.
- (2013) Memory-efficient alignment of observed and modeled behavior. BPM Center Report 3.
- (2014) History-based construction of alignments for conformance checking: formalization and implementation. In International Symposium on Data-Driven Process Discovery and Analysis, pp. 58–78.
- (2015) Constructing probable explanations of nonconformity: a data-aware and history-based approach. In 2015 IEEE Symposium Series on Computational Intelligence, pp. 1358–1365.
- (2014) Conformance checking and diagnosis for declarative business process models in data-aware scenarios. Expert Systems with Applications 41 (11), pp. 5340–5352.
- (2005) Fuzzy modelling of farmer motivations for integrated farming in the Vietnamese Mekong delta. In The 14th IEEE International Conference on Fuzzy Systems (FUZZ '05), pp. 827–832.
- (2013) Comprehensive rule-based compliance checking and risk management with process mining. Decision Support Systems 54 (3), pp. 1357–1369.
- (1992) Fuzzy linear operators and fuzzy normed linear spaces. In First International Conference on Fuzzy Theory and Technology: Proceedings, Abstracts and Summaries, pp. 193–197.
- (2003) Elicitation of expert knowledge for fuzzy evaluation of agricultural production systems. Agriculture, Ecosystems & Environment 95 (1), pp. 1–18.
- Aligning event logs and process models for multi-perspective conformance checking: an approach based on integer linear programming. In Business Process Management, pp. 113–129.
- (1985) Generalized best-first search strategies and the optimality of A*. Journal of the ACM 32 (3), pp. 505–536.
- (2018) Discovering anomalous frequent patterns from partially ordered event logs. Journal of Intelligent Information Systems 51 (2), pp. 257–300.
- (2019) Predicting critical behaviors in business process executions: when evidence counts. In International Conference on Business Process Management, pp. 72–90.
- (1997) Neuro-fuzzy and soft computing: a computational approach to learning and machine intelligence. IEEE Transactions on Automatic Control 42 (10), pp. 1482–1484.
- (1995) Fuzzy sets and fuzzy logic: theory and applications. Prentice-Hall, Upper Saddle River, NJ, USA.
- (2016) Balanced multi-perspective checking of process conformance. Computing 98 (4), pp. 407–437.
- (2013) Resilience: a new research field in business information systems? In International Conference on Business Information Systems, pp. 3–14.
- (2008) Conformance checking of processes based on monitoring real behavior. Information Systems 33 (1), pp. 64–95.
- (2014) Compliance checking of data-aware and resource-aware compliance requirements. In OTM Confederated International Conferences "On the Move to Meaningful Internet Systems", pp. 237–257.
- (2011) Process mining manifesto. In International Conference on Business Process Management, pp. 169–194.
- (2012) Replaying history on process models for conformance checking and performance analysis. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 2 (2), pp. 182–192.
- Aligning event logs to task-time matrix clinical pathways in BPMN for variance analysis. IEEE Journal of Biomedical and Health Informatics 22 (2), pp. 311–317.