BePT: A Process Translator for Sharing Process Models

04/22/2019 · by Chen Qian, et al. · Penn State University, Tsinghua University

Sharing process models on the web has emerged as a widely used practice. Users can collect and share their experimental process models with others. However, shared process models often confuse users because they lack the necessary guidelines or instructions. Therefore, several process translators have been proposed to explain the semantics of process models in natural language (NL), in order to extract more value from process repositories. We find that previous studies suffer from information loss and generate semantically erroneous descriptions that diverge from the original model behaviors. In this paper, we propose a novel process translator named BePT (Behavior-based Process Translator) based on the encoder-decoder paradigm: it encodes a process model into an intermediate representation and decodes that representation into an NL text. Theoretical analysis demonstrates that BePT satisfies behavior correctness, behavior completeness and description minimality. Qualitative and quantitative experiments show that BePT outperforms state-of-the-art methods in terms of capability, detailedness, consistency, understandability and reproducibility.


1. Introduction

A process consists of a series of interrelated tasks. Its graphical description is called a process model (van der Aalst et al., 2004). Over the past decade, a specific kind of process model - the scientific workflow - has been established as a valuable means for scientists to create reproducible experiments (Starlinger et al., 2014). Several scientific workflow management systems (SWFM) have become freely available, easing the creation, management and execution of scientific models (Starlinger et al., 2014). However, creating scientific models using SWFMs is still a laborious task and complex enough to impede non-computer-savvy researchers from using these tools (Boulakia and Leser, 2011). Therefore, cloud repositories emerged to allow sharing of process models, thus facilitating their reuse and repurposing (Goble et al., 2010; Goble and Roure, 2007; Starlinger et al., 2014). As shown in Figure 1, model developers use modeling tools such as PIPE (http://pipe2.sourceforge.net), WoPED (http://woped.dhbw-karlsruhe.de) and Signavio (http://www.signavio.com) to build and manage process models, which can be hosted in the cloud and then run as a service or downloaded to users' local workspaces. Popular examples of such scientific model platforms or repositories include myExperiment, Galaxy, Kepler, CrowdLabs, Taverna, VisTrails, e-BioFlow, e-Science and SHIWA (Starlinger et al., 2014; Hull et al., 2006; Goecks et al., 2010; Goble et al., 2010; Goble and Roure, 2007). For users, reusing shared models from public repositories is much more cost-effective than creating, testing and tuning a new one.

Figure 1. The Model Sharing Scenario: The users collect the shared models developed by the third-party developers.

However, those models are difficult to reuse since they lack the NL guidelines or instructions needed to explain the steps, jump conditions and related resources (Gemino, 2004; Leopold et al., 2012a, 2014; Qian et al., 2017). For example, the repository offered with myExperiment currently contains more than 3910 process models from various disciplines including bioinformatics, astrophysics, earth sciences and particle physics (Starlinger et al., 2014), but only 1288 of them have NL documents (https://www.myexperiment.org/home), which shows the gap between shared models and their NL descriptions. This real-world scenario illustrates that cloud platforms have no effective means to address this translation problem (Qian et al., 2017; Wu et al., 2018), i.e., automatically translating the semantics of process models into NL, thus making it challenging for users to reuse shared models. Since means of translating a process model can help users understand models and improve the reusability of shared models (Gemino, 2004), a growing interest in automatic process translators - process-to-text (P2T) techniques - has emerged.

Related Work. The structure-based approach (Leopold et al., 2012a) was proposed to generate the text of a process model. It first extracts linguistic information and annotates each label (Leopold et al., 2011, 2012b; Leopold, 2013). Then it builds an annotated tree structure and traverses it by depth-first search. Once sentence generation is triggered, it employs NL tools to generate the corresponding NL sentences (Lavoie and Rambow, 1997). This work solved the annotation problem, but it only works for structured models and ignores unstructured parts. Another approach (Leopold et al., 2014) was subsequently proposed to handle unstructured parts. It recursively extracts the longest path of a model over the unstructured parts to linearize each activity. However, it only works for certain patterns and is hard to extend to more complex situations. Along this line, another structure-based method was proposed (Qian et al., 2017) which can handle more elements and complex patterns. It first preprocesses a model by reversing loop edges and splitting multiple-entry-multiple-exit gateways. Then, it employs heuristic rules to match every source gateway node with the defined goals. Next, it unfolds the original model based on those matched goals. Finally, it generates the texts of the unfolded models. Although this structure-based method maintains good paragraph indentation, it neglects behavior correctness and completeness. Other "to-text" works that take BPMN (Malik and Bajwa, 2013), EPC (Aysolmaz et al., 2018), UML (Meziane et al., 2008), images (Vinyals et al., 2015) or videos (Li et al., 2015) as input are difficult to apply in process-related scenarios or do not serve a translation purpose. Hence, our motivation is to design a novel process translator.

We define our problem as follows: given a process model, generate textual descriptions of its semantics. Note that although there are many process modeling languages, we choose Petri nets as our process modeling language (van der Aalst, 1998) because of: 1) their formal semantics; 2) the abundance of analysis techniques and tools; 3) the ease of transformation from/to other process modeling languages. Hence, a process model in any other language can easily be converted into a Petri net.

Our approach - BePT - first embeds the structure information and the language information of a Petri net into a tree representation, and then linearizes the tree representation by extracting its behavior paths. Finally, it generates sentences for each sorted path. The theoretical analysis and the conducted experiments demonstrate that BePT satisfies three desirable properties and outperforms the state-of-the-art P2T approaches in several dimensions. To summarize, our contributions are listed as follows:

  1. Behavior based: To the best of our knowledge, this work is the first attempt that fully considers model behaviors in process translation. BePT avoids the semantic error problem. The theoretical analysis proves its three key properties: behavior correctness, behavior completeness and description minimality.

  2. Powerful capability: BePT can handle more model patterns including easy structured parts and complex unstructured parts. Besides, the problematic modeling features can be detected and appended as auxiliary warning messages to the final output.

  3. Better expressiveness: BePT linearizes each element on a behavior-unfolded graph, organizes the text in an intuitive way into various templates and grammar patterns. The corresponding experiments show that it is more expressive in terms of detailedness, consistency and understandability.

  4. Better reproducibility: Due to the clean organization, detailedness and understandability, users can better understand the generated texts from which they can reproduce the original models.

  5. More accurate statistics: The experiments are conducted on datasets ten times larger than those of previous works, collected from industry and academia, to better reveal the statistical characteristics.

  6. Tool development: An easy-to-use web tool with a proper GUI has been implemented and we make it publicly available. It can be used as an independent software, and also be integrated into existing systems or platforms.

The rest of the paper is organized as follows. Section 2 introduces background knowledge. Section 3 illustrates a motivating example. Section 4 describes BePT and Section 5 evaluates current P2T methods before Section 6 concludes the paper.

2. Preliminaries

Before going further into the main idea, we introduce some background knowledge: Petri net (Murata, 1989; Nielsen et al., 1979; Liu and Kumar, 2005), Refined Process Structure Tree (RPST) (Vanhatalo et al., 2008), Complete Finite Prefix (CFP) (McMillan and Probst, 1995; Esparza et al., 2002; Engelfriet, 1991) and Deep Syntactic Tree (DSynT) (Lavoie and Rambow, 1997; Ballesteros et al., 2014). These four concepts are respectively used for process modeling, structure analysis, behavior unfolding and sentence generation.

(a) A bioinformatic Petri net.
(b) Its RPST.
(c) Its CFP.
(d) The DSynT of one of its transitions.
Figure 2. An example of a bioinformatic process model.

2.1. Petri Net

Definition 0 (Petri Net, Net System, Boundary Node).

A Petri net is a tuple N = (P, T, F), where P is a finite set of places, T is a finite set of transitions, and F ⊆ (P × T) ∪ (T × P) is a set of directed arcs. A marking of N, denoted M, is a bag of tokens over P. A net system S = (N, M0) is a Petri net N with an initial marking M0. The input set and output set of a node x are respectively denoted •x = {y | (y, x) ∈ F} and x• = {y | (x, y) ∈ F}. The source set and sink set of a net are respectively the nodes with an empty input set and those with an empty output set. These elements are called the boundary nodes of N.

Definition 0 (Firing Sequence, TAR, Trace).

Let S = (N, M0) be a net system. A transition t can be fired under a marking M, denoted M[t⟩, iff each place in •t contains at least one token. After t fires, the marking changes to M' = (M − •t) + t• (Firing Rule). A sequence of transitions σ = t1 t2 … tn is called a firing sequence iff M0[t1⟩M1[t2⟩…[tn⟩Mn holds. Any transition pair that fires contiguously, (ti, ti+1), is called a transition adjacency relation (TAR). A firing sequence is a trace of S iff the tokens flow completely from all source(s) to all sink(s).

Example 0 ().

Figure 2(a) shows a real-life bioinformatic process model expressed as a Petri net. The source place contains one token, which determines the current marking. According to the firing rule, each place in the input set of the first transition contains at least one token, so that transition can be fired; after firing, the token moves to the transition's output places. The TAR set and the trace set of the model are obtained accordingly.
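For illustration, the firing rule and TAR relation of Definitions 2.1-2.2 can be sketched in Python as follows. The toy net and all names (`p0`, `t1`, etc.) are ours, not elements of the paper's example model:

```python
from collections import Counter

class PetriNet:
    """Minimal Petri net sketch: places, transitions, directed arcs."""
    def __init__(self, places, transitions, arcs):
        self.places, self.transitions, self.arcs = set(places), set(transitions), set(arcs)

    def preset(self, t):   # input set of t
        return {p for (p, x) in self.arcs if x == t}

    def postset(self, t):  # output set of t
        return {p for (x, p) in self.arcs if x == t}

    def enabled(self, marking, t):
        # t is enabled iff every input place holds at least one token
        return all(marking[p] >= 1 for p in self.preset(t))

    def fire(self, marking, t):
        # firing rule: consume a token per input place, produce one per output place
        m = Counter(marking)
        for p in self.preset(t):
            m[p] -= 1
        for p in self.postset(t):
            m[p] += 1
        return m

def tars(firing_sequence):
    """Transition adjacency relations: contiguous pairs in a firing sequence."""
    return {(a, b) for a, b in zip(firing_sequence, firing_sequence[1:])}

# Toy linear net: p0 -> t1 -> p1 -> t2 -> p2
net = PetriNet(
    {"p0", "p1", "p2"}, {"t1", "t2"},
    {("p0", "t1"), ("t1", "p1"), ("p1", "t2"), ("t2", "p2")},
)
m = Counter({"p0": 1})
assert net.enabled(m, "t1") and not net.enabled(m, "t2")
m = net.fire(net.fire(m, "t1"), "t2")
print(sorted(tars(["t1", "t2"])))  # [('t1', 't2')]
```

The firing sequence t1 t2 is a trace of this toy net, since it moves the token from the single source place to the single sink place.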

2.2. Refined Process Structure Tree (RPST)

Definition 0 (Component, RPST, Structured, Unstructured).

A process component is a sub-graph of a process model with a single entry and a single exit (SESE) that does not overlap with any other component. The RPST of a process model is the set of all its process components. Let C1, …, Cn be the child components of a component C. C is a trivial component iff it contains only a single arc; C is a polygon component iff the exit node of each Ci is the entry node of Ci+1; C is a bond component iff all sub-components share the same boundary nodes; otherwise, C is a rigid component. A rigid component is a region of a process model that captures arbitrary structure. Hence, if a model contains no rigid components, we say it is structured; otherwise it is unstructured.

Example 0 ().

The colored backgrounds in Figure 2(a) mark the decomposed components, which naturally form a tree structure - the RPST - shown in Figure 2(b). The whole net (a polygon) can be decomposed into three first-layer SESE components, and these three components can in turn be decomposed into second-layer components. The recursive decomposition ends at single arcs (trivial components).
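The four component classes of Definition 2.4 can be sketched as a small classifier. This is a hedged illustration only: the dict shape (`arcs`, `entry`, `exit`, `children`) is our assumption, not the RPST data structure of the paper:

```python
def classify(component):
    """Classify an RPST component as trivial, polygon, bond or rigid."""
    children = component.get("children", [])
    if len(component["arcs"]) == 1 and not children:
        return "trivial"                      # a single arc
    if all(children[i]["exit"] == children[i + 1]["entry"]
           for i in range(len(children) - 1)):
        return "polygon"                      # children chain exit-to-entry
    if len({(c["entry"], c["exit"]) for c in children}) == 1:
        return "bond"                         # children share boundary nodes
    return "rigid"                            # arbitrary (unstructured) region

# Two trivial arcs chained through node "x" form a polygon:
t1 = {"arcs": [("a", "x")], "entry": "a", "exit": "x"}
t2 = {"arcs": [("x", "b")], "entry": "x", "exit": "b"}
poly = {"arcs": t1["arcs"] + t2["arcs"], "entry": "a", "exit": "b",
        "children": [t1, t2]}
print(classify(t1), classify(poly))  # trivial polygon
```

A bond would arise when both children share entry "a" and exit "b"; any child arrangement matching none of the three patterns falls through to "rigid".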

2.3. Complete Finite Prefix (CFP)

Definition 0 (Cut-off transition, mutual, CFP).

A branching process is a completely fired graph of a Petri net satisfying: 1) its mapping preserves the environment of each transition; 2) no element is in conflict with itself; 3) for each element, the set of its causal predecessors is finite. The mapping function maps each CFP element to the corresponding element in the original net. If two CFP places map to the same original place, we say they are mutual (places) to each other.

A transition t is a cut-off transition if there exists another transition t' such that firing the TAR closures of t and t' leads to the same marking, where the TAR closure of a transition is the set of transitions reachable from it through the TAR relation. A CFP is the greatest backward-closed subnet of a branching process containing no transitions after any cut-off transition.

Example 0 ().

Figure 2(c) shows the branching process of the model (including the light-gray part). Since each original node may correspond to one or more CFP nodes, we append an "id" to number each CFP node. Two CFP places mapping to the same original place are mutual (places). One transition satisfies the cut-off condition, so the transitions after it are cut; the cut graph (excluding the light-gray part) is the CFP of the model.

In particular, we emphasize two reasons for using the CFP concept (McMillan and Probst, 1995; Esparza et al., 2002; Engelfriet, 1991): 1) the CFP contains no false behaviors of its original model; 2) the CFP is a complete and minimal behavior-unfolded graph of the original model.

2.4. Deep Syntactic Tree (DSynT)

A DSynT is a dependency representation of a sentence. In a DSynT, each node carries a verb or noun decorated with meta-information such as the tense of the main verb or the number of a noun, and each edge denotes one of three dependencies - subject (I), object (II), modifier (ATTR) - between two adjacent nodes.

Example 0 ().

Figure 2(d) shows the DSynT of one transition in the model. The main verb "extract" is decorated with the class "verb" and the voice "active". The subject and the object of "extract" are "experimenter" (assigned by the model developer) and "gene". This DSynT represents the dependency relations of the sentence "the experimenter extracts the genes".
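A DSynT can be modeled as a nested structure of lexemes, meta-attributes and dependency-labelled children. The sketch below is a toy stand-in for a real DSynT realizer such as RealPro, handling only a single-verb, active-voice tree; the dict layout is our assumption:

```python
# A DSynT node: lexeme + meta-information + children labelled I (subject) / II (object).
dsynt = {
    "lexeme": "extract", "class": "verb", "voice": "active",
    "children": [
        {"dep": "I",  "lexeme": "experimenter", "class": "noun"},
        {"dep": "II", "lexeme": "gene", "class": "noun", "number": "plural"},
    ],
}

def naive_realize(node):
    """Produce a simple subject-verb-object sentence from a one-verb DSynT."""
    subj = next(c for c in node["children"] if c["dep"] == "I")
    obj  = next(c for c in node["children"] if c["dep"] == "II")
    verb = node["lexeme"] + "s"                                # 3rd-person singular
    noun = obj["lexeme"] + ("s" if obj.get("number") == "plural" else "")
    return f"The {subj['lexeme']} {verb} the {noun}."

print(naive_realize(dsynt))  # The experimenter extracts the genes.
```

A full realizer additionally handles tense, voice, modifiers (ATTR edges) and agreement, which is why BePT delegates this step to RealPro.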

3. A Motivating Example

In this part, we use a motivating example to illustrate issues that exist in previous P2T techniques. We name the three existing process translators Leo (Leopold et al., 2012a), Hen (Leopold et al., 2014) and Goun (Qian et al., 2017) respectively; Leo text, Hen text and Goun text denote the NL texts generated by the corresponding translators. Figure 3 shows two motivating examples. Due to limited space, we present only their structural skeletons.

For ease of representation, we use a bullet symbol to mark each paragraph, an underline to represent the subject-predicate-object sentence of a single activity, and a right arrow to represent a conjunction adverb.

For the first example, the Leo, Hen and Goun texts are almost the same:

One of the following branches is executed:
…
When all of the above branches are executed, …

Note that the two branches in this example are in a conflict relation (they start from the same place) but finally turn into a concurrent relation (they end at the same transition). Therefore, the joining activity is deadlocked according to the firing rule, i.e., it will never fire; the model is incorrectly modeled. Yet the generated text implies that this activity can be fired after either branch completes. Besides, the singular verb "is" contradicts the plural "are" in the text. This is the so-called semantic error problem (Qian et al., 2017), i.e., misleading descriptions that diverge from the semantics of the original model. As for the second example, Leo and Hen fail to handle it, since parallel behaviors (caused by transitions with multiple outgoing arcs) in a rigid region have no longest node-by-node path, whereas Goun applies a structure-based unfolding strategy. However, the Goun text also contains diverging descriptions of behavior that will never occur according to the firing rule. Thus, the second example also reveals the semantic error problem.

(a) A bond component.
(b) A rigid component.
Figure 3. Two representative components.

These two examples show that the state-of-the-art methods - Leo, Hen, Goun - can neither identify problematic modeling features nor guarantee that the generated descriptions preserve the model's semantics (Qian et al., 2017), i.e., semantic errors exist. Therefore, we investigated the literature and identified the main bottlenecks. Here, we list the challenges to be solved:

  1. How to analyze and decompose the structure of a complex model, such as an unstructured or multi-layered one?

  2. For each element, how to analyze the language pattern of a short label, extract the main linguistic information and create semantically correct descriptions?

  3. How to transform a non-linear process model into linear representations, especially when it contains complex patterns?

  4. How to extract the correct behaviors of process models and avoid behavior space explosion?

  5. How to design language templates and simplify textual descriptions to express more naturally? How to make the results more intuitive to read and understand?

  6. How to avoid semantic errors and redundant descriptions?

4. Our Method

To solve the six challenges (C1-C6) listed in the previous section, we propose BePT, which is built on the encoder-decoder framework inspired by machine translation systems (Schulz et al., 2018; Cho et al., 2014b, a). The encoder creates an intermediate tree representation from the original model, and the decoder generates the NL descriptions from it. Figure 4 presents a high-level framework of BePT, comprising four main phases: Structure Embedding, Language Embedding, Text Planning and Sentence Planning (Leopold et al., 2012a, 2014; Qian et al., 2017):

Figure 4. High-level view of BePT’s framework.
  1. Structure Embedding (C1): Embedding the structure information of the original model into the intermediate representation.

  2. Language Embedding (C2): Embedding the language information of the original model into the intermediate representation.

  3. Text Planning (C3, C4): Linearizing the non-linear tree representation by extracting its behavior paths (defined below). This phase is the most important and challenging.

  4. Sentence Planning (C5, C6): Generating NL text by employing pre-defined language templates and NL tools.

4.1. Structure Embedding

We take a simplified model, shown in Figure 5, as our running example due to its complexity and representativeness. A complex sub-component of the original model (any structure is possible) is replaced by a single black activity. The simplified model is still complex, since it contains a main path and two loops.

We employ a simplification algorithm from (Qian et al., 2017) to replace each sub-model with a single activity, obtaining a simplified but behavior-equivalent model, because a model containing many sub-models may complicate behavior extraction (Qian et al., 2017). The simplification causes no information loss (Qian et al., 2017), since each simplified part is visited in a deeper recursion. We emphasize that this simplification step is simple yet essential for behavior correctness (see Appendix A).

Next, we analyze its structural skeleton and then create the RPST of . Finally, we embed its structure information - RPST - into a tree representation (as shown in the upper part of Figure 6).

Figure 5. A simplified model. The original complex component (any structure is possible) is replaced by the black element (a single activity).

4.2. Language Embedding

4.2.1. Extract Linguistic Information

This step recognizes NL labels and extracts their main linguistic information (Leopold et al., 2011, 2012b; Leopold, 2013). For each NL label, we first examine prepositions and conjunctions; if any are found, the respective flags are set to true. Then we check whether the label starts with a gerund: if the first word has an "ing" suffix, the label is classified as a gerund verb phrase (e.g., "extracting gene"). Next, WordNet (Miller, 1995) is used to determine whether the first word is a verb; if so, the algorithm treats the label as a verb phrase (e.g., "extract gene"). Otherwise, the algorithm checks the prepositions in the label: a label whose first preposition is "of" is qualified as a noun phrase with an of prepositional phrase (e.g., "creation of database"). If the label matches none of the enumerated styles, it is treated as a noun phrase (e.g., "gene extraction"). In this way, each activity label is categorized into one of four labeling styles (gerund verb phrase, verb phrase, noun phrase, noun phrase with of prepositional phrase).

Lastly, we extract the linguistic information - role, action and objects - depending on which pattern the label triggers. For example, one activity label in the running example triggers the verb phrase style; accordingly, the action lemma "remove" and the noun lemma "impurity" are extracted.
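The classification cascade above can be sketched directly in code. Note that the paper uses WordNet for the verb check; here a tiny hard-coded verb list stands in for it, so this is an illustrative approximation rather than the actual implementation:

```python
KNOWN_VERBS = {"extract", "remove", "create", "sequence", "record"}  # WordNet stand-in
PREPOSITIONS = {"of", "in", "on", "for", "to", "from", "with"}

def classify_label(label):
    """Heuristic label-style classifier following the cascade of Section 4.2.1."""
    words = label.lower().split()
    first = words[0]
    if first.endswith("ing") and len(words) > 1:
        return "gerund verb phrase"      # e.g. "extracting gene"
    if first in KNOWN_VERBS:
        return "verb phrase"             # e.g. "extract gene"
    preps = [w for w in words if w in PREPOSITIONS]
    if preps and preps[0] == "of":
        return "noun phrase with of prepositional phrase"  # e.g. "creation of database"
    return "noun phrase"                 # e.g. "gene extraction"

assert classify_label("extracting gene") == "gerund verb phrase"
assert classify_label("extract gene") == "verb phrase"
assert classify_label("creation of database") == "noun phrase with of prepositional phrase"
assert classify_label("gene extraction") == "noun phrase"
```

The order of the checks matters: the gerund test must precede the verb test, since many gerunds are also listed as verbs in WordNet.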

4.2.2. Create DSynTs

Once this main linguistic information is extracted, we create a DSynT for each label by assigning the main verb and main nouns including other associated meta information (Ballesteros et al., 2014; Lavoie and Rambow, 1997) (as shown in the lower part of Figure 6).

For better representation, we concatenate each DSynT root node to its corresponding RPST leaf node, and we call this concatenated tree RPST-DSynTs (RDT). The RDT of is shown in Figure 6.

Figure 6. The RDT of . Some parts are replaced by the ellipsis due to the limited space.

Thus far, we have embedded the structural information (RPST) and the linguistic information (DSynTs) of the original process model into the intermediate representation RDT, which is then passed to the decoder phase.

4.3. Text planning

The biggest gap between a model and a text is that a model contains sequential and concurrent semantics (Goltz and Reisig, 1983), while a text only contains sequential sentences. Thus, this step focuses on transforming a non-linear model into its linear representations.

In order to maintain behavior correctness, we first create the CFP of the original model, because a CFP is a complete and minimal behavior-unfolded graph of the original model (McMillan and Probst, 1995; Esparza et al., 2002; Engelfriet, 1991). Figure 7 shows the CFP of the running example. According to Definition 2.6, it has two cut-off transitions; thus, no transitions follow them.

Besides, we introduce a basic concept: the shadow place. Shadow places are those CFP places that 1) are mutual with CFP boundary places or 2) map to boundary places of the original model.

Example 0 ().

In Figure 7, the five colored places are shadow places: some are mutual with CFP boundary places, and the others map to boundary places of the original model. Intuitively, a shadow place represents the repetition of a boundary place of the original model or its CFP.

Figure 7. The CFP of (). The five colored places are shadow places. A shadow place is shown in same (different) color as its mutual (non-mutual) places.

4.3.1. Behavior Segment

Having obtained the behavior-unfolded graph, i.e., the CFP, we now define (behavior) segments, which capture the minimal behavioral characteristics of a CFP.

Definition 0 (Behavior Segment).

Given a net and its CFP, a behavior segment is a connected sub-model of the CFP satisfying:

  1. All its boundary nodes are shadow places and none of its inner places are (shadow-bounded).

  2. If each of its entry places contains one token, then after firing all its transitions, each of its exit places contains exactly one token while all other places of the CFP are empty (replay-hold).

Example 0 ().

According to Definition 4.2, if we put a token on a segment's entry place, its transition can be fired, and after this firing only the segment's exit place contains a token. Therefore, the sub-model containing these nodes and their adjacent arcs is a behavior segment. All behavior segments of the running example are shown in Figure 9(a). (Careful readers might have noticed that these four segments are sequential structures, i.e., all segments contain only SESE nodes. However, a behavior segment can also be non-sequential, i.e., contain multiple-incoming or multiple-outgoing nodes; for example, the behavior segment of the rigid component in Figure 3 is homogeneous to the component itself, containing four multiple-incoming or multiple-outgoing nodes.)

4.3.2. Linking Rule

Behavior segments capture the minimal behavioral characteristics of a CFP. In order to portray the complete characteristics, we link these segments to obtain all possible behavior paths by applying the linking rule below.

Definition 0 (Linking Rule).

For two segments s1 and s2, if the exit places of s1 are mutual with the entry places of s2, we say they are linkable. If two places are mutual, the place retained after gluing them is called the joint place, given by the joint function. The linked segment of two linkable segments satisfies:

  1. Its places consist of all places of s1 and all non-entry places of s2.

  2. Its transitions consist of all transitions of s1 and s2.

  3. Its arcs are the arcs of s1 and s2, with each glued entry place of s2 replaced by its joint place.

Similarly, segments can be linked recursively. The graphical explanation of the linking rule is shown in Figure 8.
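The three conditions of the linking rule can be sketched as follows. This is a hedged illustration: the segment shape (sets of places, transitions and arcs) and the `joint` mapping from each entry place of s2 to the mutual exit place of s1 are our assumptions:

```python
def link(s1, s2, joint):
    """Link two linkable behavior segments at their mutual (joint) places."""
    substitute = lambda n: joint.get(n, n)
    return {
        # 1) places of s1 plus the non-entry places of s2
        "places": s1["places"] | (s2["places"] - set(joint)),
        # 2) all transitions of both segments
        "transitions": s1["transitions"] | s2["transitions"],
        # 3) arcs of s2 with each glued entry place replaced by its joint place
        "arcs": s1["arcs"] | {(substitute(a), substitute(b)) for a, b in s2["arcs"]},
    }

# s1: p0 -t1-> p1 ; s2: q1 -t2-> p2, where q1 is mutual with p1
s1 = {"places": {"p0", "p1"}, "transitions": {"t1"},
      "arcs": {("p0", "t1"), ("t1", "p1")}}
s2 = {"places": {"q1", "p2"}, "transitions": {"t2"},
      "arcs": {("q1", "t2"), ("t2", "p2")}}
linked = link(s1, s2, {"q1": "p1"})
print(sorted(linked["places"]))  # ['p0', 'p1', 'p2']
```

The linked segment replays as a single unit: a token on p0 can flow through t1 and t2 to p2, exactly the behavior obtained by gluing the two segments at the joint place.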

Figure 8. A graphical explanation of linking two segments. The joint nodes are shown in red/blue.
(a) The behavior segments of .
(b) The partial behavior paths of .
Figure 9. The behavior segments and the partial behavior paths of .

4.3.3. Behavior Path

According to the linking rule, we can obtain all linked segments. However, a linked segment might involve infinite linking due to concurrent and loop behaviors (Goltz and Reisig, 1983). Hence, we use truncation conditions to avoid infinite linking, which leads to the definition of a (behavior) path. Behavior paths capture the complete behavioral characteristics of a CFP.

Definition 0 (Behavior Path).

A segment is a behavior path iff one of the following conditions holds:

  1. It starts from the entry of the CFP and ends at one of the exits of the CFP.

  2. It starts from a shadow node (set) and ends at the same node (set), i.e., it forms a loop structure.

Example 0 ().

Take Figure 9(a) as an example. The exit place of one segment is mutual with an entry place of another, so the two segments are linkable, and their linked segment is a behavior path (shown in Figure 9(b)). Some behavior paths of the running example are shown in Figure 9(b).

Having defined behavior paths, we can extract all behavior paths from any process model. To linearize them, we employ heuristic strategies (path-level linearization) that correspond to the way a user reads a process, presented in Algorithm 1: a path starting from the source node(s) receives the highest priority, a path ending at the sink node(s) receives the lowest (Lines 2-3), and ties are broken by a shortest-path-first strategy (Lines 4-5).

Input: a CFP and its behavior path set.
Output: the sorted behavior paths.
1 foreach path p in the behavior path set do
2     if p starts from a source node of the CFP then
3         set the priority of p to high
4     else if p ends at a sink node of the CFP then
5         set the priority of p to low
6 return the paths sorted by priority, shortest first within each priority
Algorithm 1 Path-Linearization Algorithm
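One reading of Algorithm 1 can be sketched in a few lines. This is a hedged approximation: paths are modelled as lists of node names, and the exact priority scheme of the paper may differ in detail:

```python
def sort_paths(paths, sources, sinks):
    """Sort behavior paths: source-starting first, sink-ending last,
    shortest-first within the same rank."""
    def priority(path):
        if path[0] in sources:
            rank = 0          # highest priority: starts at a source
        elif path[-1] in sinks:
            rank = 2          # lowest priority: ends at a sink
        else:
            rank = 1
        return (rank, len(path))
    return sorted(paths, key=priority)

main = ["p0", "t1", "p1", "t2", "p2"]   # source-to-sink main path
loop = ["p1", "t3", "p1"]               # loop path over a shadow place
print(sort_paths([loop, main], {"p0"}, {"p2"})[0] is main)  # True
```

This ordering matches the generated text in Section 4.4, where the main path is described before the conditional loops.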

The node-level linearization is realized by recursion over the behavior paths. Each path is treated as a polygon component and fed into the recursive algorithm (BePT); the recursion ends at non-decomposable trivial components. When a gateway node (split or join) is encountered, the corresponding DSynT (a pre-defined language template) is retrieved from the RDT or from pre-defined XML-format files. When a SESE node is encountered, the corresponding DSynT is extracted from the embedded RDT. After obtaining all DSynTs, the sentence planning phase is triggered.

4.4. Sentence planning

Sentence planning generates a sentence for each node. The main idea is to use a DSynT to create an NL sentence (Lavoie and Rambow, 1997; Ballesteros et al., 2014; Leopold et al., 2014). The generation task is divided into two levels: template-sentence and activity-sentence generation.

  1. Template sentences describe the behavioral information related to the non-terminal RPST nodes. We provide 28 language-template DSynTs (available via an anonymized GitHub link), covering split, join, dead transition, deadlock (van der Aalst et al., 2004) etc., to represent the corresponding semantics. The choice of template depends on three parameters (Leopold et al., 2012a, 2014; Qian et al., 2017): 1) the existence of a gateway label; 2) the gateway type; 3) the number of outgoing arcs. For instance, for a place with multiple outgoing arcs, the corresponding template sentence "One of the branches is executed" is retrieved and output.

  2. Activity sentences focus on describing a single activity related to the terminal (leaf) RPST nodes. RDT representation has embedded all DSynT messages, thus, for each activity, we can directly access its DSynT from RDT.

After preparing all DSynTs in the text planning phase, we employ three steps to optimize the expression before final generation:

  1. Checking whether each DSynT lacks necessary grammar meta-information to guarantee its grammatical correctness.

  2. Pruning redundant TARs so that no selected TAR is described repeatedly (Pruning Rule). For example, a TAR derived from one path in Figure 9(b) is redundant if it is already covered by another path.

  3. Refining the DSynT messages containing the same linguistic component between two consecutive sentences and making use of three aggregation strategies: role aggregation, action aggregation and object aggregation (Leopold et al., 2014; Qian et al., 2017).
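Of the three aggregation strategies, role aggregation is the simplest to illustrate: consecutive sentences that share a subject are merged into one. The sketch below operates on (role, verb-phrase) pairs; this data shape is our assumption for illustration, not BePT's DSynT-level implementation:

```python
def role_aggregate(clauses):
    """Merge consecutive clauses sharing a role into one sentence."""
    out, i = [], 0
    while i < len(clauses):
        role, vp = clauses[i]
        parts = [vp]
        # absorb following clauses with the same role
        while i + 1 < len(clauses) and clauses[i + 1][0] == role:
            i += 1
            parts.append(clauses[i][1])
        out.append(f"The {role} " + " and ".join(parts) + ".")
        i += 1
    return out

print(role_aggregate([("experimenter", "extracts the genes"),
                      ("experimenter", "sequences the DNA"),
                      ("system", "records the data")]))
# ['The experimenter extracts the genes and sequences the DNA.',
#  'The system records the data.']
```

Action and object aggregation work analogously, merging on a shared verb or a shared object instead of a shared subject.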

After expression optimization, we employ the DSynT-based realizer RealPro (Lavoie and Rambow, 1997) to realize sentence generation. RealPro requires a DSynT as input and outputs a grammatically correct sentence (Ballesteros et al., 2014). In a loop, every DSynT is passed to the realizer. The resulting NL sentence is then added to the final output text. After all sentences have been generated, the final text is presented to the end user.

Example 0 ().

The generated text of the model in Figure 5 is as follows (other state-of-the-art methods cannot handle this model):

  1. The following main branches are executed:

  2. The experimenter extracts the genes. Then, he sequences the DNA. Subsequently, the experimenter records the data.

  3. Attention, there are two loops which may conditionally occur:

  4. After sequencing DNA, the experimenter can also remove impurities if it is not clean. Then, he continues extracting genes.

  5. After recording the data, there is a series of activities that need to be finished before DNA sequencing:

  6. ***

  7. Once the data is enough, the process ends.

Template sentences (1, 3, 7) describe where the process starts, splits, joins and ends. Activity sentences (2, 4, 5) describe each sorted behavior path. The paragraph placeholder (6) can be flexibly replaced by the sub-text of the simplified component. BePT first describes the main path and then the two possible loops. These three paragraphs of the generated text correspond to three correct firing sequences of the original model, and the text contains just enough description to reproduce the original model, without redundancy.

4.5. Property Analysis

We emphasize BePT's three strong properties: correctness, completeness and minimality. Specifically, given a net system and its TAR set, the behavior path set obtained by the linking rule (Definition 4.4) satisfies: 1) behavior correctness - every TAR described in the text is a TAR of the model; 2) behavior completeness - every TAR of the model is described in the text; 3) description minimality - under the pruning rule, each TAR is described only once in the final text. Please see Appendices A, B and C for detailed proofs.

Source | Type | N | SMR | Place (min/ave/max) | Transition (min/ave/max) | Arc (min/ave/max) | RPST depth (min/ave/max)
SAP | Industry | 72 | 100.00% | 2/3.95/13 | 1/3.12/12 | 2/6.75/24 | 1/1.85/5
DG | Industry | 38 | 94.74% | 3/7.65/22 | 2/7.85/17 | 4/16.02/44 | 1/2.55/7
TC | Industry | 49 | 81.63% | 6/10.10/17 | 6/10.62/19 | 14/21.87/38 | 1/3.92/7
SPM | Academic | 14 | 57.00% | 2/7.28/12 | 1/7.40/15 | 2/15.49/30 | 1/2.93/5
IBM | Industry | 142 | 53.00% | 4/39.00/217 | 3/26.46/145 | 6/79.84/456 | 1/5.21/12
GPM | Academic | 36 | 42.00% | 4/11.15/19 | 3/11.55/24 | 6/24.92/48 | 1/3.22/5
BAI | Academic | 38 | 28.95% | 4/10.54/21 | 2/9.93/24 | 6/22.92/49 | 1/3.24/5
Table 1. Statistics of the evaluation datasets. N = total number of models per source; SMR = ratio of structured models to all models; min/ave/max = minimum/average/maximum value per source.

5. Evaluation

We have conducted extensive qualitative and quantitative experiments. In this section, we report the experimental results to answer the following research questions:

  1. Capability: Can BePT handle more complex model patterns?

  2. Detailedness: How much information does BePT provide?

  3. Consistency: Is BePT text consistent with the original model?

  4. Understandability: Is BePT text easier to understand?

  5. Reproducibility: Can the original model be reproduced only from its generated text?

5.1. Experimental Setup

In this part, we describe our experimental datasets, the baselines and the experiment settings.

5.1.1. Datasets

We collected and tested on seven publicly accessible datasets (anonymous GitHub link): SAP, DG, TC, SPM, IBM, GPM, BAI (Leopold et al., 2012a, 2014; Qian et al., 2017; Qian and Wen, 2018). Among them, SAP, DG, TC and IBM come from industry (enterprises, etc.) and SPM, GPM and BAI come from academia (literature, online tutorials, books, etc.). The characteristics of the seven datasets are summarized in Table 1 (sorted by decreasing ratio of structured models). There are 389 process models in total, consisting of real-life enterprise models (87.15%) and synthetic models (12.85%). The number of transitions varies from 1 to 145 and the depth of RPSTs varies from 1 to 12. The statistics are heavily skewed due to the differing areas, amounts and model structures.

5.1.2. Baseline Methods

We compared our proposed process translator BePT with the following three state-of-the-art methods:

  1. Leo (Leopold et al., 2012a). The first structure-based method, focusing mainly on structured components: trivial, bond and polygon.

  2. Hen (Leopold et al., 2014). An extended version of Leo, focusing mainly on rigid components with a longest-first strategy.

  3. Goun (Qian et al., 2017). A state-of-the-art structure-based method, focusing mainly on unfolding the model structure without considering its behavior.

5.1.3. Parameter Settings

We implemented BePT based on jBPT (https://code.google.com/archive/p/jbpt/). An easy-to-use version of BePT is also publicly available (anonymous GitHub link). We include an editable parameter defining the size of a paragraph and predefine it with a value of 75 words. Once this threshold is reached, we use a change of the performing role or an intermediate activity as an indicator and introduce a new paragraph accordingly. Besides, we use the default grammar styles subject-predicate-object and object-be-predicated-by-subject to express a sentence (Leopold et al., 2012a, 2014; Qian et al., 2017). Finally, we enabled all parameters for all methods, i.e., all methods generate intact textual descriptions without any reduction.
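A minimal sketch of the paragraph-size rule described above. In BePT, the split point is additionally guided by a change of the performing role or an intermediate activity; this version simply splits greedily at sentence boundaries once the 75-word threshold would be exceeded.

```python
# Illustrative sketch of the paragraph-splitting parameter (75 words).
# BePT additionally picks the split point using a role change or an
# intermediate activity; here we split greedily at sentence boundaries.
PARAGRAPH_SIZE = 75  # editable parameter (words)

def split_into_paragraphs(sentences, limit=PARAGRAPH_SIZE):
    paragraphs, current, count = [], [], 0
    for sentence in sentences:
        words = len(sentence.split())
        if current and count + words > limit:
            paragraphs.append(" ".join(current))  # threshold reached: new paragraph
            current, count = [], 0
        current.append(sentence)
        count += words
    if current:
        paragraphs.append(" ".join(current))
    return paragraphs
```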

5.2. Results

5.2.1. Capability (RQ1)


As discussed earlier, a rigid is a region that captures arbitrary model structure. Thus, these seven datasets are representative enough, as the SMR varies from 100% (structured, easy models) to 28.95% (unstructured, complex models). We analyzed and compared all process models. Table 2 reports the handling capabilities w.r.t. some representative complex patterns (Qian et al., 2017; Liu and Kumar, 2005).

Type     Patterns
T, B, P  Trivial; Polygon; Easy Bond; Easy Loop; Unsymmetrical Bond
R        Place Rigid; Transition Rigid; Mix Rigid; Intersectant Loop; Non-free-choice Construct; Invisible or Duplicated Task; Multi-layered Embedded
Extra    Extra Modeling Information; Multi-layered Paragraph
Total patterns handled: Leo 4, Hen 5, Goun 9, BePT 14
Table 2. The handling capabilities of the four P2T methods w.r.t. some representative patterns.

First, BePT shows the best handling capability: it handles all 14 patterns, compared with the 9 handled by Goun. Second, all four methods handle structured models well, while only Goun and BePT handle unstructured models, and BePT further provides extra helpful messages. Third, the R and Extra parts show that BePT handles rigids of arbitrary complexity, even when the model is unsymmetrical, non-free-choice or multi-layered. From these results we conclude that the behavior-based method BePT performs best.

5.2.2. Detailedness (RQ2)


In the sentence planning phase, BePT checks the grammatical correctness of each DSynT so that the generated text accords with correct English grammar, including various conjunctions, adverbial clauses and transitional sentences. Here, instead of comparing grammatical correctness, we summarize the structural characteristics of all generated texts in Table 3.

Dataset  Words/Text: Leo / Hen / Goun / BePT   Sentences/Text: Leo / Hen / Goun / BePT
SAP      38.0 / 38.0 / 38.1 / 38.1             6.0 / 6.0 / 6.2 / 6.2
DG       74.0 / 79.7 / 79.6 / 85.3             13.0 / 15.0 / 15.0 / 15.7
TC       99.2 / 110.8 / 112.4 / 135.0          12.2 / 15.5 / 15.7 / 18.7
SPM      41.5 / 54.1 / 55.6 / 100.9            5.8 / 7.9 / 8.1 / 14.1
IBM      140.2 / 180.7 / 182.9 / 191.9         74.2 / 80.7 / 81.7 / 86.2
GPM      38.3 / 50.8 / 53.8 / 147.0            6.2 / 7.5 / 7.9 / 16.2
BAI      25.7 / 31.7 / 32.6 / 111.3            2.7 / 4.4 / 4.6 / 15.7
Total    66.7 / 78.0 / 79.3 / 115.3            17.2 / 19.6 / 19.9 / 22.3
Table 3. Average number of words and sentences per text.

A general observation is that BePT texts are longer than the others. Leo, Hen and Goun texts contain an average of 66.7, 78.0 and 79.3 words and 17.2, 19.6 and 19.9 sentences respectively, while BePT texts average 115.3 words and 22.3 sentences. However, this does not imply that BePT texts are verbose, using longer sentences to describe the same content. Rather, the main reason is that Leo, Hen and Goun ignore some modeling-level messages related to soundness and safety (van der Aalst, 2000; van der Aalst et al., 2004), which BePT supplements.

Therefore, we conclude that BePT generates more detailed texts on account of this additional useful information. Although all parameters were enabled in this experiment, BePT is configurable, i.e., users can set parameters to determine whether or not to generate these complementary details.

5.2.3. Consistency (RQ3)


We believe that textual paragraphing strongly influences readability, since paragraph indentation can reflect the number of components, the modeling depth of each activity, etc. Considering the generated text of the motivating example in Section 3, if the text contained no paragraph indentation (every paragraph starting from the same bullet point), it would be much harder to fully reproduce the model semantics (Leopold et al., 2014; Qian et al., 2017).

In this part, we aim to detect the structural consistency between a process model and its corresponding textual description. This task requires an alignment of a model and a text, i.e., activities in the text need to be related to model elements and vice versa (de Leoni et al., 2015; Weidlich et al., 2011). For an activity, its modeling depth is its RPST depth, and its description depth is how deeply it is indented in the text. For the activity set of a model, the modeling depth distribution is denoted as Dm and the description depth distribution as Dd. We employ a correlation coefficient to evaluate the consistency between the two distributions:

(1)  ρ(Dm, Dd) = (E[Dm·Dd] − E[Dm]·E[Dd]) / (√Var(Dm) · √Var(Dd))

where E(·) is the expectation function and Var(·) is the variance function. The value of ρ ranges from -1.0 (negatively related) to 1.0 (positively related).
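A minimal sketch of this consistency computation, using the standard Pearson correlation between the two depth distributions (the depth values below are illustrative, not taken from any dataset):

```python
# Pearson correlation between the modeling-depth and description-depth
# distributions: values near 1.0 mean the text indentation closely
# follows the RPST depth of the model.
def pearson(xs, ys):
    n = len(xs)
    ex = sum(xs) / n
    ey = sum(ys) / n
    cov = sum((x - ex) * (y - ey) for x, y in zip(xs, ys)) / n
    varx = sum((x - ex) ** 2 for x in xs) / n
    vary = sum((y - ey) ** 2 for y in ys) / n
    return cov / ((varx ** 0.5) * (vary ** 0.5))

modeling_depth    = [1, 2, 2, 3, 1]   # RPST depth of each activity (illustrative)
description_depth = [1, 2, 2, 3, 1]   # indentation depth in the text (illustrative)
print(pearson(modeling_depth, description_depth))  # ≈ 1.0 for identical distributions
```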

Figure 10 shows the consistency results of the four P2T methods. First, BePT obtains the highest consistency value on every dataset, meaning that BePT follows the depth distribution of the original models to the greatest extent. Notice that all methods obtain 1.00 consistency on the SAP dataset since all SAP models are structured. However, on the SPM dataset, BePT achieves 0.86 consistency while the other methods remain around 0.25; the main reason is that SPM contains many close-to-structured rigids, which directly exposes the other methods' drawbacks. Second, consistency decreases rapidly with lower SMR. The most pronounced drops occur on GPM and BAI, where Leo, Hen and Goun even obtain negative coefficients, i.e., their texts are negatively related to the depth distribution of the original models, while BePT obtains 0.42 and 0.25, remaining positively related even in unstructured situations. Hence, we conclude that BePT texts conform better to the original models.

Figure 10. The consistency distribution. The red color denotes the positive coefficient while the blue color denotes the negative coefficient.
(a) Information gain line.
(b) Perplexity distributions on all datasets.
Figure 11. The graphical representation of information gain line and the perplexity distributions.

5.2.4. Understandability (RQ4)


In this part, we discuss textual perplexity, which reflects document-level understandability: it quantifies how hard a model-text pair is to understand. This information entropy-based metric (Jianhua, 1991) is inspired by natural language processing techniques (Roark et al., 2007).

Consider a model-text pair in which the text consists of a sequence of paragraphs, and let the information gain of a paragraph be defined as:

(2)

The formula uses the information entropy over the described activity set and the neglected activity set of a paragraph to capture the confusion of all its activities. If any activity cannot be generated in the text, the text system should be less understandable relative to the original model, i.e., its perplexity should increase; hence the formula multiplies in a penalty term for the neglected activities.
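Since the exact form of Eq. (2) is given in the paper's notation, the following is only an illustrative sketch: the Shannon entropy of a uniformly weighted described activity set, with a hypothetical multiplicative penalty for neglected activities (the penalty factor is our assumption, not BePT's exact formula).

```python
import math

# Illustrative only: Shannon entropy over the activities described in one
# paragraph, times a HYPOTHETICAL penalty that grows with the number of
# neglected activities. This is not BePT's exact Eq. (2).
def paragraph_confusion(described, neglected):
    n = len(described)
    p = 1.0 / n                          # uniform probability per activity
    entropy = -sum(p * math.log2(p) for _ in described)
    penalty = 2 ** len(neglected)        # hypothetical neglected-activity penalty
    return entropy * penalty

print(paragraph_confusion(["extract", "sequence", "record"], []))  # ≈ 1.585 bits
```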

When describing the first paragraph, the information gain (Han et al., 2012) of the text system equals that paragraph's gain; after describing the next paragraph, the cumulative gain increases accordingly, and after describing all paragraphs it reaches its total. These values are mapped to the points shown in Figure 11(a). We call the broken line linking all points the information gain line. Then, we define the perplexity of the text system (the integral over all sentence perplexities):

(3)

This perplexity intuitively measures whether the model-text pair system is understandable. We calculated the metric for each dataset and report the results.

Figure 11(b) shows the perplexity results. BePT achieves the lowest perplexity on all datasets, i.e., the best understandability, and reduces the average perplexity substantially. The results also show that the perplexity trend is Leo > Hen > Goun > BePT, i.e., the understandability trend is Leo < Hen < Goun < BePT.

5.2.5. Reproducibility (RQ5)


This part evaluates the reproducibility of the generated text, i.e., could the original model be reproduced from the generated text?

For each model-text pair, we manually back-translate (extract) a process model from the generated text and compare the elements of the original and extracted models. The back-translators are given only the generated texts, without any knowledge of the original models, and reproduce the models from the texts according to their own understanding. After translation, we evaluate the structural and behavioral reproducibility between the original model and the extracted one. If an isomorphic model can be reproduced, the text contains enough information to reproduce the original model, i.e., excellent reproducibility.

We evaluate the P2T performance using the F-measure (harmonic average of recall and precision), inspired by the data mining field (Han et al., 2012):

(4)  Fβ = (1 + β²) · Precision · Recall / (β² · Precision + Recall)

where β is the balance weight. In our experiments, equal weights (β = 1) are assigned to balance recall and precision. The higher the F-measure, the better the reproducibility.
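The F-measure above can be computed as follows; β = 1 recovers the harmonic mean of precision and recall used in the experiments (the input values are illustrative):

```python
# F-measure with balance weight beta; beta = 1 gives the harmonic mean
# of precision and recall used in the reproducibility experiments.
def f_measure(precision, recall, beta=1.0):
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Illustrative: all original elements recovered (recall 1.0), but 20% of
# the extracted elements are spurious (precision 0.8).
print(f_measure(0.8, 1.0))  # ≈ 0.889
```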

Structural Reproducibility. Figure 12 shows the results of four dimensions (place, transition, gateway, element).

(a) score on places
(b) score on transitions
(c) score on gateways
(d) score on all elements
Figure 12. The measures on structural dimensions.

First, the F-measure of the four methods falls from 100% to lower values with decreasing SMR. On the GPM and BAI datasets, Leo achieves only around 40%; such low values significantly hinder understanding or reproducing the original model, reflecting the general risk that readers miss elements when a description loses around 60% of the information. Still, Hen achieves around 90% while BePT hits 100%; Goun and BePT lose the least information. We conclude that, among the four P2T methods, BePT achieves the highest structural reproducibility, showing the trend Leo < Hen < Goun < BePT.

Behavioral Reproducibility. Behavioral reproducibility evaluates the extent of correctly expressed behavior, i.e., how many correct behaviors are expressed in the generated texts. We again use the F-measure, with TARs (local) and traces (global) reflecting the model behaviors. Since trace behaviors suffer from the state-space explosion problem, we evaluate the trace F-measure only on models without loop behavior.

Figure 13 shows the results for the behavioral dimensions (TAR, trace). BePT significantly outperforms Leo, Hen and Goun on both TAR and trace performance. Leo's performance falls sharply with decreasing SMR, while Hen and Goun drop more gently and still achieve around 70% trace F-measure on BAI. BePT obtains the highest values, around 100% for both TAR and trace measures, a distinct improvement over the other methods. From these results, we conclude that BePT shows the best reproducibility among the state-of-the-art P2T methods.

(a) score on TARs
(b) score on traces
Figure 13. The measures on behavioral dimensions.

6. Conclusion and Future Work

We presented BePT, a behavior-based process translator. It first encodes a process model into an RDT and then decodes it by extracting behavior paths; NL tools then generate the textual descriptions. Our experiments show significant improvements in capability, detailedness, consistency, understandability and reproducibility. This approach can unlock the hidden value that lies in large process repositories in the cloud and make them more reusable.

Furthermore, empirical testing shows that BePT handles models efficiently on average (efficiency). Besides, BePT is model-independent and language-independent (expandability): adaptation to other modeling languages requires only a model transformation and the replacement of corresponding elements, while adaptation to other natural languages requires replacing three resources: the NL analysis, the DSynT templates and the realizer. BePT can be applied not only in cloud-sharing scenarios but also in engineering analysis (Cardoso, 2009), or integrated into real-life intelligent devices.

We also note some potential limitations of this study. First, when a model is unsound, BePT informs the user that the model contains unsound or erroneous parts but gives no correction advice. Second, the manual extraction of models from the NL texts involved a limited number of participants, so we cannot guarantee that every extraction decision was identical across texts. Generating correction advice and automating the reverse translation would therefore be of interest in future studies.

Appendices

Appendix A The Proof of Behavior Correctness

Property 1 ().

Given a net system and its TAR set, the behavior path set of its CFP derived by the linking rule (Definition 4.4) satisfies behavior correctness: every TAR derived from a behavior path is a TAR of the original net system.

Proof 1 ().

Assume a TAR is derived from a behavior path. Consider two situations: a) the TAR lies inside a single segment; b) the TAR spans the link between two segments:

  1. The initial (default) marking is also the initial marking of the net, i.e., it is reachable. According to the definition of behavior segment, each marking of a segment is reachable from the previous one, and the firing rule guarantees the corresponding firings. After executing a segment, its final marking is reachable, so the TARs inside the segment hold. Similarly, this holds for every segment.

  2. For two linked segments, consider the TAR set at their joint points. The linking rule guarantees that the second segment can be fired after firing the first, i.e., the TARs at the joint points are realizable. Therefore, every TAR across a segment link is also a TAR of the original model.

According to the above two points, we can conclude that every TAR derived from the behavior paths is a TAR of the original model, i.e., behavior correctness holds.

Appendix B The Proof of Behavior Completeness

Property 2 ().

Given a net system and its TAR set, the behavior path set of its CFP derived by the linking rule (Definition 4.4) satisfies behavior completeness: every TAR of the original net system is derived from some behavior path.

Proof 2 ().

For any TAR of the original model, consider its place set and the sub-model it induces in the net. Then, consider the following situations:

  1. In the first case, no node can be the boundary node of a segment according to Definition 4.2 (bounded). Hence, the TAR can only occur inside a segment, i.e., it is derived from some behavior path.

  2. In the second case, the node set is split, being the sink set of one segment and the source set of another (bounded); hence the TAR spans the link between two segments and is derived by the linking rule.

  3. In the third case, no node can be the boundary node of a segment, otherwise it contradicts Definition 4.2 (reply-hold). Hence, the TAR can only occur inside a segment and is derived from some behavior path.

  4. In the fourth case, the two transitions are in a concurrent relation, and there always exists a concurrent split transition. According to Definition 4.2 (reply-hold), the TAR is derived as well. Thus, behavior completeness holds.

Appendix C The Proof of Description Minimality

Property 3 ().

For a net system, the TAR set pruned by the Pruning Rule satisfies description minimality.

Proof 3 ().

According to Appendix B, any TAR can always be derived from some behavior path. For any two occurrences of the same TAR of the original model, the pruning rule keeps only the first occurrence; hence each TAR is described exactly once in the final text, i.e., the pruning rule satisfies description minimality.

References

  • Aysolmaz et al. (2018) Banu Aysolmaz, Henrik Leopold, Hajo A. Reijers, and Onur Demirörs. 2018. A semi-automated approach for generating natural language requirements documents based on business process models. Information and Software Technology 93 (2018), 14–29. https://doi.org/10.1016/j.infsof.2017.08.009
  • Ballesteros et al. (2014) Miguel Ballesteros, Bernd Bohnet, Simon Mille, and Leo Wanner. 2014. Deep-Syntactic Parsing. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. Dublin City University and Association for Computational Linguistics, 1402–1413. http://www.aclweb.org/anthology/C14-1133
  • Boulakia and Leser (2011) Sarah Cohen Boulakia and Ulf Leser. 2011. Search, adapt, and reuse: the future of scientific workflows. SIGMOD Record 40 (2011), 6–16.
  • Cardoso (2009) Joao Paulo A. Almeida Cardoso, Evellin Cristine Souza. 2009. Requirements engineering based on business process models: A case study. In 2009 13th Enterprise Distributed Object Computing Conference Workshops. 320–327. https://doi.org/10.1109/EDOCW.2009.5331974
  • Cho et al. (2014a) Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014a. On the Properties of Neural Machine Translation: Encoder–Decoder Approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation. Association for Computational Linguistics, 103–111. https://doi.org/10.3115/v1/W14-4012
  • Cho et al. (2014b) Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 1724–1734. https://doi.org/10.3115/v1/D14-1179
  • de Leoni et al. (2015) Massimiliano de Leoni, Fabrizio M. Maggi, and Wil M.P. van der Aalst. 2015. An Alignment-based Framework to Check the Conformance of Declarative Process Models and to Preprocess Event-log Data. Inf. Syst. 47, C (Jan. 2015), 258–277. https://doi.org/10.1016/j.is.2013.12.005
  • Engelfriet (1991) Joost Engelfriet. 1991. Branching processes of Petri nets. Acta Informatica 28, 6 (1991), 575–591. https://doi.org/10.1007/BF01463946
  • Esparza et al. (2002) Javier Esparza, Stefan Römer, and Walter Vogler. 2002. An Improvement of McMillan’s Unfolding Algorithm. Formal Methods in System Design 20, 3 (2002), 285–310. https://doi.org/10.1023/A:1014746130920
  • Gemino (2004) Andrew Gemino. 2004. Empirical comparisons of animation and narration in requirements validation. Requirements Engineering 9, 3 (2004), 153–168. https://doi.org/10.1007/s00766-003-0182-0
  • Goble et al. (2010) Carole A Goble, Jiten Bhagat, Sergejs Aleksejevs, Don Cruickshank, Danius Michaelides, David Newman, Mark Borkum, Sean Bechhofer, Marco Roos, Peter Li, and David De Roure. 2010. myExperiment: a repository and social network for the sharing of bioinformatics workflows. Nucleic Acids Res 38, Web Server issue (Jul 2010), W677–82. https://doi.org/10.1093/nar/gkq429
  • Goble and Roure (2007) Carole Anne Goble and David Charles De Roure. 2007. myExperiment: social networking for workflow-using e-scientists. In WORKS@HPDC.
  • Goecks et al. (2010) Jeremy Goecks, Anton Nekrutenko, James Taylor, and The Galaxy Team. 2010. Galaxy: a comprehensive approach for supporting accessible, reproducible, and transparent computational research in the life sciences. Genome Biology 11, 8 (2010), R86. https://doi.org/10.1186/gb-2010-11-8-r86
  • Goltz and Reisig (1983) U. Goltz and W. Reisig. 1983. The non-sequential behaviour of Petri nets. Information and Control 57, 2 (1983), 125–147. https://doi.org/10.1016/S0019-9958(83)80040-0
  • Han et al. (2012) Jiawei Han, Micheline Kamber, and Jian Pei. 2012. Data Mining: Concepts and Techniques. Morgan Kaufmann, Boston, 327–391. https://doi.org/10.1016/B978-0-12-381479-1.00008-3
  • Hull et al. (2006) Duncan Hull, Katy Wolstencroft, Robert Stevens, Carole Goble, Mathew R Pocock, Peter Li, and Tom Oinn. 2006. Taverna: a tool for building and running workflows of services. Nucleic Acids Research 34, Web Server issue (07 2006), W729–W732. https://doi.org/10.1093/nar/gkl320
  • Jianhua (1991) Lin Jianhua. 1991. Divergence measures based on the Shannon entropy. IEEE Transactions on Information Theory 37, 1 (1991), 145–151. https://doi.org/10.1109/18.61115
  • Lavoie and Rambow (1997) Benoit Lavoie and Owen Rambow. 1997. A Fast and Portable Realizer for Text Generation Systems. In Proceedings of the Fifth Conference on Applied Natural Language Processing (ANLC ’97). Association for Computational Linguistics, Stroudsburg, PA, USA, 265–268. https://doi.org/10.3115/974557.974596
  • Leopold (2013) Henrik Leopold. 2013. Parsing and Annotating Process Model Elements. In Natural Language in Business Process Models, Henrik Leopold (Ed.). Springer International Publishing, Cham, 49–80.
  • Leopold et al. (2012a) Henrik Leopold, Jan Mendling, and Artem Polyvyanyy. 2012a. Generating Natural Language Texts from Business Process Models. In Advanced Information Systems Engineering, Jolita Ralyté, Xavier Franch, Sjaak Brinkkemper, and Stanislaw Wrycza (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 64–79.
  • Leopold et al. (2014) Henrik Leopold, Jan Mendling, and Artem Polyvyanyy. 2014. Supporting Process Model Validation through Natural Language Generation. IEEE Transactions on Software Engineering 40, 8 (2014), 818–840. https://doi.org/10.1109/TSE.2014.2327044
  • Leopold et al. (2011) Henrik Leopold, Sergey Smirnov, and Jan Mendling. 2011. Recognising Activity Labeling Styles in Business Process Models. Enterprise Modelling and Information Systems Architectures 6 (2011), 16–29.
  • Leopold et al. (2012b) Henrik Leopold, Sergey Smirnov, and Jan Mendling. 2012b. On the refactoring of activity labels in business process models. Information Systems 37, 5 (2012), 443–459. https://doi.org/10.1016/j.is.2012.01.004
  • Li et al. (2015) Guang Li, Shubo Ma, and Yahong Han. 2015. Summarization-based Video Caption via Deep Neural Networks. In Proceedings of the 23rd ACM International Conference on Multimedia (MM ’15). ACM, New York, NY, USA, 1191–1194. https://doi.org/10.1145/2733373.2806314
  • Liu and Kumar (2005) Rong Liu and Akhil Kumar. 2005. An Analysis and Taxonomy of Unstructured Workflows. In Business Process Management, Wil M. P. van der Aalst, Boualem Benatallah, Fabio Casati, and Francisco Curbera (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 268–284.
  • Malik and Bajwa (2013) Saleem Malik and Imran Sarwar Bajwa. 2013. Back to Origin: Transformation of Business Process Models to Business Rules. In Business Process Management Workshops, Marcello La Rosa and Pnina Soffer (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 611–622.
  • McMillan and Probst (1995) K. L. McMillan and D. K. Probst. 1995. A technique of state space search based on unfolding. Formal Methods in System Design 6, 1 (1995), 45–65. https://doi.org/10.1007/BF01384314
  • Meziane et al. (2008) Farid Meziane, Nikos Athanasakis, and Sophia Ananiadou. 2008. Generating Natural Language specifications from UML class diagrams. Requirements Engineering 13, 1 (2008), 1–18. https://doi.org/10.1007/s00766-007-0054-0
  • Miller (1995) George A. Miller. 1995. WordNet: A Lexical Database for English. Commun. ACM 38, 11 (Nov. 1995), 39–41. https://doi.org/10.1145/219717.219748
  • Murata (1989) T. Murata. 1989. Petri nets: Properties, analysis and applications. Proc. IEEE 77, 4 (1989), 541–580. https://doi.org/10.1109/5.24143
  • Nielsen et al. (1979) Mogens Nielsen, Gordon Plotkin, and Glynn Winskel. 1979. Petri nets, event structures and domains. In Semantics of Concurrent Computation, Gilles Kahn (Ed.). Springer Berlin Heidelberg, Berlin, Heidelberg, 266–284.
  • Qian and Wen (2018) Chen Qian and Lijie Wen. 2018. Solving algorithm for scheduling problem with control-flow constraints. Computer Integrated Manufacturing Systems 24, 7 (2018), 1598–1607.
  • Qian et al. (2017) Chen Qian, Lijie Wen, Jianmin Wang, Akhil Kumar, and Haoran Li. 2017. Structural Descriptions of Process Models Based on Goal-Oriented Unfolding. In Advanced Information Systems Engineering, Eric Dubois and Klaus Pohl (Eds.). Springer International Publishing, Cham, 397–412.
  • Roark et al. (2007) Brian Roark, Murat Saraclar, and Michael Collins. 2007. Discriminative N-gram Language Modeling. Comput. Speech Lang. 21, 2 (April 2007), 373–392. https://doi.org/10.1016/j.csl.2006.06.006
  • Schulz et al. (2018) Philip Schulz, Wilker Aziz, and Trevor Cohn. 2018. A Stochastic Decoder for Neural Machine Translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 1243–1252. http://aclweb.org/anthology/P18-1115
  • Starlinger et al. (2014) Johannes Starlinger, Bryan Brancotte, Sarah Cohen-Boulakia, and Ulf Leser. 2014. Similarity Search for Scientific Workflows. Proc. VLDB Endow. 7, 12 (Aug. 2014), 1143–1154. https://doi.org/10.14778/2732977.2732988
  • van der Aalst et al. (2004) W. van der Aalst, T. Weijters, and L. Maruster. 2004. Workflow mining: discovering process models from event logs. IEEE Transactions on Knowledge and Data Engineering 16, 9 (2004), 1128–1142. https://doi.org/10.1109/TKDE.2004.47
  • van der Aalst (1998) W. M. P. van der Aalst. 1998. Three Good Reasons for Using a Petri-Net-Based Workflow Management System. Springer US, Boston, MA, 161–182. https://doi.org/10.1007/978-1-4615-5499-8_10
  • van der Aalst (2000) W. M. P. van der Aalst. 2000. Workflow Verification: Finding Control-Flow Errors Using Petri-Net-Based Techniques. Springer Berlin Heidelberg, Berlin, Heidelberg, 161–183. https://doi.org/10.1007/3-540-45594-9_11
  • Vanhatalo et al. (2008) Jussi Vanhatalo, Hagen Völzer, and Jana Koehler. 2008. The Refined Process Structure Tree. In Business Process Management, Marlon Dumas, Manfred Reichert, and Ming-Chien Shan (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 100–115.
  • Vinyals et al. (2015) Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3156–3164.
  • Weidlich et al. (2011) M. Weidlich, J. Mendling, and M. Weske. 2011. Efficient Consistency Measurement Based on Behavioral Profiles of Process Models. IEEE Transactions on Software Engineering 37, 3 (2011), 410–429. https://doi.org/10.1109/TSE.2010.96
  • Wu et al. (2018) Huijun Wu, Chen Wang, Jie Yin, Kai Lu, and Liming Zhu. 2018. Sharing Deep Neural Network Models with Interpretation. In Proceedings of the 2018 World Wide Web Conference (WWW ’18). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland, 177–186. https://doi.org/10.1145/3178876.3185995