Foundation for a series of efficient simulation algorithms

by Gérard Cécé, et al.

Computing the coarsest simulation preorder included in an initial preorder is used to reduce the resources needed to analyze a given transition system. This technique is applied to many models, such as Kripke structures, labeled graphs, labeled transition systems, and even word and tree automata. Let (Q, →) be a given transition system and Rinit an initial preorder over Q. Until now, algorithms computing Rsim, the coarsest simulation included in Rinit, have been either memory efficient or time efficient, but not both. In this paper we lay the foundation for a series of efficient simulation algorithms by introducing the notion of maximal transitions and the notion of stability of a preorder with respect to a coarser one. As an illustration, we solve an open problem by providing the first algorithm that has both the best published time complexity, O(|Psim|·|→|), and a bit space complexity in O(|Psim|²·log(|Psim|) + |Q|·log(|Q|)), where Psim is the partition induced by Rsim.



1 Introduction

The simulation relation was introduced by Milner [Mil71] as a behavioural relation between processes. It can also be used to speed up language-inclusion tests [ACH10], or as a sufficient condition when language inclusion is undecidable in general [CG11]. Another very helpful use of a simulation relation is to derive an equivalence relation over the states of a system. This makes it possible to reduce the state space of the system to be analyzed while preserving an important part of its properties, expressed in temporal logics for example [GL94]. Note that simulation equivalence yields a better reduction of the state space than the better-known bisimulation equivalence.

1.1 State of the Art

The paper that has most influenced the literature is that of Henzinger, Henzinger and Kopke [HHK95]. Their algorithm, designed over Kripke structures and here named HHK, computes Rsim, the coarsest simulation, in O(|→|·|Q|) time, with → the transition relation over the state space Q, and uses O(|Q|²·log(|Q|)) bits.

But Rsim is a preorder. As such, it can be represented more efficiently by a partition-relation pair: a partition, of the state space Q, whose blocks are classes of the simulation equivalence relation, together with a preorder over the blocks of that partition. Bustan and Grumberg [BG03] used this to propose an algorithm, here named BG, with an optimal bit-space complexity in O(|Psim|² + |Q|·log(|Psim|)), with |Psim| (in general significantly smaller than |Q|) the number of blocks of the partition Psim associated with Rsim. Unfortunately, BG suffers from a very bad time complexity. Gentilini, Piazza and Policriti [GPP03] then proposed an algorithm, here named GPP, with a better time complexity, in O(|Psim|²·|→|), and a claimed bit-space complexity like that of BG. This algorithm contained a mistake and was corrected in [vGP08]. It is very surprising that none of the authors citing [GPP03], including those of [vGP08, RT07, RT10, CRT11] and [GPP15], realized that the announced bit-space complexity was also incorrect. Indeed, as shown in [Céc13] and [Ran14], the real bit-space complexity of GPP is O(|Psim|²·log(|Psim|) + |Q|·log(|Q|)). In a similar way, [RT10] and [CRT11] made a minor mistake by considering that a bit space in O(|Q|·log(|Psim|)) was sufficient to represent the partition in their algorithms, while a space in O(|Q|·log(|Q|)) is needed.

Ranzato and Tapparo [RT07, RT10] made a major breakthrough with their algorithm, here named RT, which runs in O(|Psim|·|→|) time but uses O(|Psim|·|Q|·log(|Q|)) bits, which is more than GPP. The difficulty of the proofs and the abstract-interpretation framework put aside, RT is a reformulation of HHK, but with a partition-relation pair instead of a mere relation between states. Over unlabelled transition systems, this is the best algorithm regarding time complexity.

Since [RT07] a question has emerged: is there an algorithm with the time complexity of RT while preserving the space complexity of GPP?

Crafa, Ranzato and Tapparo [CRT11] modified RT to enhance its space complexity. They proposed an algorithm whose time and bit-space complexities are expressed in terms of |Psp|, a parameter between |Pbis|, the number of bisimulation classes, and |Psim|, and of a smaller abstraction of →. Unfortunately (although this algorithm provided new insights), for 22 of the 24 examples they provided, there is no difference between |Pbis|, |Psp| and |Psim|. For the two remaining examples the difference is marginal. With a little provocation, we can then consider that first computing the bisimulation equivalence (which should be done every time, as it produces a considerable speedup) and then computing the simulation equivalence with GPP on the obtained system is a better solution than the algorithm in [CRT11], even if an efficient computation of the bisimulation equivalence requires, see [PT87], a bit space in O(|→|·log(|Q|)).

Ranzato [Ran14] almost met the challenge by announcing an algorithm with the space complexity of GPP but with the time complexity of RT multiplied by an additional factor. He concluded that removing this factor seemed to him quite hard to achieve. Gentilini, Piazza and Policriti [GPP15] then met the challenge, but only in the special case of acyclic transition systems, by providing an algorithm with the space complexity of BG and the time complexity of RT.

1.2 Our Contributions

In this paper, we answer the question positively and propose the first simulation algorithm with the time complexity of RT and the space complexity of GPP.

Our main sources of inspiration are: [PT87], for its implicit notion of stability with respect to a coarser partition, which we generalize in the present paper to preorders, and for the counters it uses; [HHK95], for the extension of these counters to simulation algorithms; [BG03], for its use of little brothers, to which we prefer what we define as maximal transitions; [Ran14], for its implicit use of maximal transitions to split blocks and for keeping the intermediate relations of its algorithm preorders; and [Céc13], for its equivalent definition of a simulation in terms of compositions of relations.

Note that almost all simulation algorithms are defined for Kripke structures. However, in each of them, after an initial step consisting of the construction of an initial preorder Rinit, the algorithm is equivalent to computing the coarsest simulation inside Rinit over a classical transition system. We therefore start directly from a transition system and an initial preorder inside which we compute the coarsest simulation.

2 Preliminaries

Let Q be a set of elements, or states. The number of elements of Q is denoted |Q|. A relation over Q is a subset of Q × Q. Let R be a relation over Q. For all q, q' ∈ Q we may write q R q' when (q, q') ∈ R. We define R(q) = {q' ∈ Q | q R q'} for q ∈ Q, and R(X) = ∪_{q∈X} R(q) for X ⊆ Q. We write ¬(q R q') when (q, q') ∉ R. For X ⊆ Q and q ∈ Q, we also write X R q (resp. q R X) for ∃x ∈ X : x R q (resp. ∃x ∈ X : q R x). A relation R1 is said to be coarser than another relation R2 when R2 ⊆ R1. The inverse of R is R⁻¹ = {(q', q) | (q, q') ∈ R}. The relation R is said to be symmetric if R⁻¹ ⊆ R, and antisymmetric if q R q' and q' R q imply q = q'. Let S be a second relation over Q; the composition of R by S is S∘R = {(q, q'') ∈ Q × Q | ∃q' ∈ Q : q R q' ∧ q' S q''}. The relation R is said to be reflexive if for all q ∈ Q we have q R q, and transitive if R∘R ⊆ R. A preorder is a reflexive and transitive relation. A partition P of Q is a set of non-empty subsets of Q, called blocks, that are pairwise disjoint and whose union is Q. A partition-relation pair is a pair (P, ⊑) with P a partition and ⊑ a relation over P. To a partition-relation pair (P, ⊑) we associate the relation ∪_{(B,B')∈⊑} B × B'. Let R be a preorder on Q and q ∈ Q; we define [q]_R = R(q) ∩ R⁻¹(q) and P_R = {[q]_R | q ∈ Q}. It is easy to show that P_R is a partition of Q. Therefore, given any preorder R and a state q, we also call block, the block of q, the set [q]_R. A symmetric preorder R is totally represented by the partition P_R since R = ∪_{B∈P_R} B × B. Let us recall that a symmetric preorder is traditionally named an equivalence relation. Conversely, given a partition P, there is an associated equivalence relation ∪_{B∈P} B × B. In the general case, a preorder R is efficiently represented by the partition-relation pair (P_R, ⊑_R), with ⊑_R a reflexive, transitive and antisymmetric relation over P_R. Furthermore, for a preorder R, we note R_= the relation over Q which associates to a state the elements of its block. Said otherwise: R_= = ∪_{B∈P_R} B × B. Finally, for a set X of sets we note ∪X for ∪_{x∈X} x.
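As a concrete illustration, the partition-relation representation of a preorder can be computed directly from its definition. The following is a minimal Python sketch; the names `blocks_of` and `block_order` are ours, chosen for illustration only, not the paper's.

```python
# Sketch: representing a preorder R over Q by a partition-relation pair.
# A block groups the states q, q' with both q R q' and q' R q.

def blocks_of(states, R):
    """Partition induced by the preorder R (the blocks [q]_R)."""
    blocks = []
    for q in states:
        blk = frozenset(p for p in states if (q, p) in R and (p, q) in R)
        if blk not in blocks:
            blocks.append(blk)
    return blocks

def block_order(blocks, R):
    """Induced reflexive, transitive, antisymmetric relation over blocks:
    B is below C iff some (hence every) element of B is R-below C."""
    return {(B, C) for B in blocks for C in blocks
            if (next(iter(B)), next(iter(C))) in R}

states = {1, 2, 3}
# 1 and 2 are equivalent for R; both are below 3.
R = {(q, q) for q in states} | {(1, 2), (2, 1), (1, 3), (2, 3)}
P = blocks_of(states, R)
order = block_order(P, R)
```

Note that, by Proposition 1 below, testing the block order on a single representative per block is enough, which is what `block_order` does.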

Proposition 1.

Let B and B' be two blocks of a preorder R, and let X ⊆ B and Y ⊆ B'. Then: (X × Y) ∩ R ≠ ∅ ⇒ B × B' ⊆ R.

Said otherwise, when two subsets of two blocks of R are related by R, then all the elements of the first block are related by R to all the elements of the second block.


Thanks to the transitivity of R. ∎

A finite transition system (TS) is a pair (Q, →) with Q a finite set of states and → a relation over Q called the transition relation. A relation R is a simulation over (Q, →) if: R∘→⁻¹ ⊆ →⁻¹∘R. (1)


For a simulation R, when we have q R q', we say that q is simulated by q' (or that q' simulates q).

A relation R is a bisimulation if R and R⁻¹ are both simulations. The interesting bisimulations, such as the coarsest one included in a preorder, are equivalence relations. Since an equivalence relation is its own inverse, it is easy to show that an equivalence relation R is a bisimulation iff: R∘→⁻¹ ⊆ →⁻¹∘R. (2)


The classical definition states that a relation R is a simulation if: q1 R q2 ∧ q1 → q1' ⇒ ∃q2' : q2 → q2' ∧ q1' R q2'. However, we prefer formula (1), which is equivalent, because it is more global: to design efficient simulation algorithms we must abstract away from individual states.
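The classical, state-by-state condition translates directly into a brute-force check. The following Python sketch is ours (the name `is_simulation` is an assumption, not the paper's notation):

```python
# Sketch: testing the classical simulation condition naively.

def is_simulation(R, trans):
    """True iff for every q1 R q2 and every move q1 -> q1',
    q2 has a move q2 -> q2' with q1' R q2'."""
    succ = {}
    for (p, q) in trans:
        succ.setdefault(p, set()).add(q)
    return all(
        any((q1p, q2p) in R for q2p in succ.get(q2, set()))
        for (q1, q2) in R
        for q1p in succ.get(q1, set())
    )

trans = {("q0", "q1"), ("p0", "p1")}
R = {("q0", "p0"), ("q1", "p1")}   # here p0 simulates q0
```

This naive check is quadratic in |R|·|→| and serves only as a specification; the whole point of the paper is to avoid enumerating individual states like this.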

In the remainder of the paper, all relations are over the same finite set Q, and the underlying transition system is (Q, →).

3 Key Ideas

Let us start from equation (1). If a relation R is not a simulation, we have R∘→⁻¹ ⊄ →⁻¹∘R. This implies the existence of a relation R', strictly included in R, such that: R'∘→⁻¹ ⊆ →⁻¹∘R. It can be shown that most of the simulation algorithms cited in the introduction, like HHK, GPP and RT, are based on this last equation. In this paper, like in [Céc13], we make the following different choice. When R is not a simulation, we reduce the problem of finding the coarsest simulation inside R to the case where there is a preorder S, coarser than R, such that R∘→⁻¹ ⊆ →⁻¹∘S. We will say that R is S-stable since we have: R∘→⁻¹ ⊆ →⁻¹∘S. (3)


Our definition of stability is new. However, it is implicit in the bisimulation algorithm of [PT87, p. 979] where, with the notations of [PT87], a partition Q is said to be stable with every block of a coarser partition X. Within our formalism we can say the same thing with the formula R_Q∘→⁻¹ ⊆ →⁻¹∘R_X, where R_Q and R_X are the equivalence relations associated with Q and X.

Figure 1: R is S-stable and R', obtained after a split of the blocks of R and a refinement of R, is R-stable.

Consider a transition q → q' in Figure 1. The preorder R is assumed to be S-stable and we want to find the coarsest simulation included in R. Since R is a preorder, the set R(q') is a union of blocks of R. A state in R(q) which doesn't have an outgoing transition to R(q') has one to S(q'), thanks to (3), but cannot simulate q. Thus, we can safely remove it from R(q). But to do this effectively, we want to manage blocks of states and not individual states. Hence, we first do a split step by splitting the blocks of R such that a resulting block is either completely included in →⁻¹(R(q')), which means that its elements still have a chance to simulate q, or totally outside of it, which means that its elements cannot simulate q. Let us call ≡ the equivalence relation associated to the resulting partition. We will say that ≡ is R-block-stable. Then, to test whether a block C of ≡ which has an outgoing transition to S(q') is included in →⁻¹(R(q')), it is sufficient to do the test for only one of its elements, arbitrarily chosen, that we call the representative of C. To do this test in constant time we manage a counter which, at first, counts the number of transitions from the representative of C to S(q'). By scanning the transitions whose destination belongs to S(q') but no longer to R(q'), this counter is updated to count the transitions from the representative to R(q'). Therefore we get the equivalences: there is no transition from C to R(q') iff there is no transition from the representative of C to R(q') iff this counter is null. Remark that the total bit size of all these counters is in O(|Psim|²·log(|Q|)) since there are at most |Psim| blocks like C, at most |Psim| blocks forming the sets like R(q'), and at most |Q| transitions from a state like the representative of C. The difference is not so significant in practice, but we will reduce this size to O(|Psim|²·log(|Psim|)), at the cost of additional elementary steps, which is hopefully within our time budget. Removing from R(q) the blocks of ≡, like C, which do not have an outgoing transition to R(q') is called the refine step. After this refine step, R(q) has been reduced to R(q) ∩ →⁻¹(R(q')). Doing these split and refine steps for all transitions results in a relation R' that we will prove to be an R-stable preorder.

In summary, from an initial preorder R_init we will build a strictly decreasing series of preorders (R_i)_{i≥0} such that R_{i+1} is R_i-stable and contains, by construction, all the simulations included in R_init. Since all the relations are finite, this series has a limit, reached in a finite number of steps. Let us call R_sim this limit. We have: R_sim is R_sim-stable. Therefore, with (1) and (3), R_sim is a simulation and, by construction, contains all the simulations included in the initial preorder: it is the coarsest one.
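The fixpoint just described can be written down in its naive form, with no partitions and no counters. The Python sketch below is only a specification of what the efficient algorithm computes (the name `coarsest_simulation` is ours):

```python
# Sketch: iterated refinement of an initial preorder down to the
# coarsest simulation it contains (naive, for specification only).

def coarsest_simulation(states, trans, R_init):
    succ = {q: set() for q in states}
    for (p, q) in trans:
        succ[p].add(q)
    R = set(R_init)
    changed = True
    while changed:
        changed = False
        for (q1, q2) in sorted(R):
            # delete (q1, q2) if some move of q1 cannot be matched by q2
            if any(all((a, b) not in R for b in succ[q2]) for a in succ[q1]):
                R.discard((q1, q2))
                changed = True
    return R

states = {"u", "v", "w"}
trans = {("u", "v")}
R = coarsest_simulation(states, trans, {(a, b) for a in states for b in states})
```

Every deleted pair is outside every simulation included in the initial preorder, so the limit is indeed the coarsest one; the contribution of the paper is reaching this limit within the announced time and space bounds.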


The counters used in the previous paragraphs play a role similar to that of the counters used in [PT87]. Without them, the time complexity of the algorithm of the present paper would have been multiplied by a factor |Psim| and would have been that of GPP: O(|Psim|²·|→|).

4 Underlying Theory

In this section we give the theory needed to define what the ideal split step should be, and we justify the correctness of our refine step, which allows us to treat blocks as if they were single states. We begin by introducing the notion of maximal transition. This is the counterpart, for transitions, of the notion of little brothers, introduced in [BG03] for states. The main difference is that little brothers were defined relative to the final coarsest simulation in a Kripke structure, whereas we define maximal transitions relative to a current preorder R.

Definition 2.

Let R be a preorder. The transition q → q' is said to be maximal for R, or R-maximal, which is noted q →_R q', when: ∀q'' ∈ Q : (q → q'' ∧ q' R q'') ⇒ q'' R q'.

The set of R-maximal transitions and the induced relation are both noted →_R.
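Under this reading of Definition 2 (q → q' is R-maximal when every competing successor q'' of q with q' R q'' also satisfies q'' R q'; the reading and the name `maximal_transitions` are our assumptions), maximal transitions can be computed by a direct scan:

```python
# Sketch: q -> q' is R-maximal iff every successor q'' of q with
# q' R q'' also satisfies q'' R q'.

def maximal_transitions(trans, R):
    succ = {}
    for (p, q) in trans:
        succ.setdefault(p, set()).add(q)
    return {(p, q) for (p, q) in trans
            if all((q2, q) in R
                   for q2 in succ[p] if (q, q2) in R)}

trans = {("q", "a"), ("q", "b")}
# a is strictly below b in R: only q -> b is R-maximal.
R = {("a", "a"), ("b", "b"), ("a", "b")}
```

In the example, q → a is not maximal because q also reaches b with a R b but not b R a, which is exactly the "little brother" situation of [BG03] transposed to transitions.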

Figure 2: Illustration of the left property of Lemma 3.
Lemma 3 (Figure  2).

For a preorder R, the two following properties are verified: →⁻¹ ⊆ (→_R)⁻¹∘R (4) and →⁻¹∘R = (→_R)⁻¹∘R.


Let q → q' be a transition and let X = →(q) ∩ R(q'). Since R is reflexive, this set is not empty because it contains q'. Let 𝐁 be the set of blocks of R which contain an element from X. Since this set is finite (there is a finite number of blocks) there is at least a block B maximal in 𝐁. Said otherwise, there is no B' ∈ 𝐁, different from B, such that B ⊑_R B'. Let q'' ∈ X such that q'' ∈ B. From what precedes, the transition q → q'' is R-maximal and q' R q''. Hence: (q', q) ∈ (→_R)⁻¹∘R. So we have: →⁻¹ ⊆ (→_R)⁻¹∘R. (4)


Now, from (4) we get →⁻¹∘R ⊆ (→_R)⁻¹∘R∘R and thus →⁻¹∘R ⊆ (→_R)⁻¹∘R since R is a preorder. The relation →_R is a subset of →. Therefore we also have (→_R)⁻¹∘R ⊆ →⁻¹∘R, which concludes the proof. ∎

In the last section, we introduced the notions of stability and of block-stability. Let us define them formally.

Definition 4.

Let R be a preorder.

  • R is said to be S-stable, with S a preorder coarser than R, if: R∘→⁻¹ ⊆ →⁻¹∘S. (5)

  • An equivalence relation ≡ included in R is said to be R-block-stable if, for all blocks C and D of ≡: C ∩ →⁻¹(R(D)) ≠ ∅ ⇒ C ⊆ →⁻¹(R(D)). (6)


Saying that ≡ is included in R means that each block of ≡ is included in a block of R.

As seen in the following lemma, we have a nice equivalence: an equivalence relation is R-block-stable iff it is R-stable.

Lemma 5.

Let ≡ be an equivalence relation included in a preorder R. Then (6) is equivalent to: ≡∘→⁻¹ ⊆ →⁻¹∘R. (7)

Figure 3: Illustration of (8).

To show the equivalence of (6) and (7) we use an intermediate property: ≡∘→⁻¹∘R ⊆ →⁻¹∘R. (8)

With the help of Figure 3 it is straightforward to see the equivalence of (6) and (8). It therefore remains to show the equivalence of (7) and (8).

  • (7) ⇒ (8). From (7) we get ≡∘→⁻¹∘R ⊆ →⁻¹∘R∘R and thus (8) since, as a preorder, R is transitive.

  • (8) ⇒ (7). Let Id be the identity relation. We have ≡∘→⁻¹ = ≡∘→⁻¹∘Id ⊆ ≡∘→⁻¹∘R since R is a preorder and as such contains Id. With (8) we thus get (7).

With (5) and (7), the reader should now be convinced of the interest of (1) to define a simulation.

Following the key ideas given in Section 3, there is an interest, for the time complexity, in having a coarse R-block-stable equivalence relation ≡. Fortunately, there is a coarsest one.

Proposition 6.

Given a preorder R, there is a coarsest R-stable equivalence relation.


With Lemma 5 and by an easy induction based on the two following properties:

  • the identity relation Id is an R-stable equivalence relation;

  • the reflexive and transitive closure of the union of two R-stable equivalence relations is also an R-stable equivalence relation, coarser than both of them.
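The closure used in the second property is the join of two equivalence relations. It is cheap to compute with union-find over states, as the following sketch illustrates (the name `join` and the representation of equivalences as pair sets are our choices):

```python
# Sketch: the reflexive-transitive closure of the union of two
# equivalence relations, i.e. their join, via union-find.

def join(states, eq1, eq2):
    parent = {q: q for q in states}

    def find(q):
        while parent[q] != q:
            parent[q] = parent[parent[q]]  # path halving
            q = parent[q]
        return q

    for (a, b) in eq1 | eq2:
        parent[find(a)] = find(b)
    blocks = {}
    for q in states:
        blocks.setdefault(find(q), set()).add(q)
    return sorted(sorted(B) for B in blocks.values())

states = {1, 2, 3, 4}
eq1 = {(q, q) for q in states} | {(1, 2), (2, 1)}
eq2 = {(q, q) for q in states} | {(2, 3), (3, 2)}
```

The join is coarser than both arguments, which is what the induction of Proposition 6 needs to produce a coarsest R-stable equivalence relation.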

We are now ready to introduce the main result of this section. It is a formalization, and a justification, of the refine step given in Section 3. In the following theorem, the link with the decreasing sequence of relations mentioned at the end of Section 3 is: if R is the current value of R_i, then S is R_{i-1} and R' will be R_{i+1}. The reader can also ease their comprehension of the theorem by considering Figure 1.

Theorem 7.

Let S be a preorder, R be an S-stable preorder and ≡ be the coarsest R-stable equivalence relation. Let R' be the preorder obtained from R by the refine step, and let ≡' be the coarsest R'-stable equivalence relation. Then:


  1. with

  2. Any simulation included in R is also included in R'.

  3. R' is a preorder.

  4. R' is R-stable.

  5. Blocks of ≡' are blocks of R'.


  1. Since belongs to , and , from Lemma 3, we get . For the converse, let . By definition, there are such that , , , , and . From and Lemma 3 we have . From , Lemma 3, Lemma 5, and the hypothesis that is -stable, we have . Therefore, there is a state such that and . Let us suppose . Since , we would have had . Thus . We have , , , and thus since is a preorder and . With and the hypothesis that is -stable, we get and thus, with Lemma 3, and thus since is a preorder and . As seen above, . So we have . In summary: , , and . All of this implies that . So we have and thus .

  2. By contradiction. Let such that . This means that . From 1) and the hypothesis there is such that , and thus , from Lemma 3. From , thus , and the assumption that is a simulation there is with and thus . This contradicts . Therefore .

  3. If this is not the case, there are such that , and . Since and is a -stable relation there is such that and . The case would contradict . Therefore and all the conditions are met for belonging in which contradicts .

  4. Let us show that is both reflexive and transitive. If it is not reflexive, since is reflexive, from 1) there is in and a state such that and and . But this is impossible since is reflexive. Hence, is reflexive. We also prove by contradiction that is transitive. If it is not the case, there are such that , but . Since and is transitive then . With and 1), there is such that and . But from 3), there is such that and . With and the same reason, there is such that and . By transitivity of we get and thus . With Lemma 3 this contradicts . Hence, is transitive.

  5. This is a direct consequence of the two preceding items and the fact that by construction .

  6. By hypothesis, . This means that blocks of are made of blocks of . By definition, is obtained by deleting from relations between blocks of . This implies that blocks of are made of blocks of . To prove that a block of is made of a single block of , let us assume, by contradiction, that there are two different blocks, and , of in a block of . We show that is not the coarsest -block-stable equivalence relation. Let . Then is an equivalence relation strictly coarser than . Furthermore, since and are blocks of , we get . With 3) we get that is -stable and thus -block-stable with Lemma 5. This contradicts the hypothesis that was the coarsest one. Therefore, blocks of are blocks of .


1) means that blocks of ≡ are sufficiently small to perform the refinement step efficiently, as if they were states. 6) means that these blocks cannot be bigger. 5) means that we are ready for the next split and refinement steps.

In what precedes, we have assumed that for the preorder R, inside which we want to compute the coarsest simulation, there is another preorder S such that condition (5) holds. The fifth item of Theorem 7 says that if this is true at a given iteration (made of a split step and a refinement step) of the algorithm, then it is true at the next iteration. To end this section, we show that we can safely modify the initial preorder so that it is also true initially. This is indeed a simple consequence of the fact that a state with an outgoing transition cannot be simulated by a state with no outgoing transition.

Definition 8.

Let R be a preorder. We define R° such that: R° = R ∩ {(q, q') ∈ Q × Q | →(q) ≠ ∅ ⇒ →(q') ≠ ∅}.

Proposition 9.

Let R be a preorder and let R° be as in Definition 8. Then:

  1. R° is (Q × Q)-stable,

  2. a simulation included in R is also included in R°.


  1. Q × Q is trivially a preorder. It remains to show that R° is also a preorder and that (5) is true with S = Q × Q.

    Since R is a preorder and thus reflexive, R° is also trivially reflexive. Now, by contradiction, let us suppose that R° is not transitive. There are three states q, q', q'' such that: q R° q', q' R° q'' and ¬(q R° q''). From the fact that R° ⊆ R and R is a preorder, we get q R q''. With ¬(q R° q'') and the definition of R° this means that q has a successor while q'' has not. But the hypothesis that q has a successor and q R° q' implies that q' has a successor. With q' R° q'' we also get that q'' has a successor, which contradicts what is written above. Hence, R° is transitive and thus a preorder.

    The formula R°∘→⁻¹ ⊆ →⁻¹∘(Q × Q) just means that the two hypotheses q R° q' and q has a successor imply that q' also has a successor. This is exactly the meaning of the second part of the intersection in the definition of R°.

  2. By contradiction, if this is not true there is a pair of states (q, q') which belongs to a simulation R1 and to R but does not belong to R°. By definition of R° this means that q has a successor while q' has not. But the hypotheses q R1 q', q has a successor and R1 is a simulation imply that q' also has a successor, which contradicts what is written above. Hence, R1 is also included in R°.

The total relation Q × Q will thus play the role of the initial S in the algorithm.


In [PT87, p. 979] there is also a similar preprocessing of the initial partition, where states with no outgoing transition are put aside.

5 The Algorithm

The approach of the previous section can be applied to several algorithms, with different balances between time complexity and space complexity. It can also be extended to labelled transition systems. In this section, the emphasis is on the theoretically most memory-efficient of the fastest simulation algorithms of the moment.

5.1 Counters and Splitter Transitions

Let us remember that a partition and its associated equivalence relation (or an equivalence relation and its associated partition) denote essentially the same thing. The difference is that for a partition we focus on the set of blocks, whereas for an equivalence relation we focus on the relation which relates the elements of a same block. On a first reading, the reader may consider that a partition is an equivalence relation, and vice versa.

From a preorder R that satisfies (5) we will need to split its blocks in order to find its coarsest R-block-stable equivalence relation. Then, Theorem 7 will be used for a refine step. For all this, we first need the traditional Split function.

Definition 10.

Given a partition P and a set of states X ⊆ Q, the function Split(P, X) returns a partition similar to P, but with the difference that each block B of P such that B ∩ X ≠ ∅ and B ⊄ X is replaced by two blocks: B ∩ X and B \ X.
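The Split function transcribes directly into code. A minimal Python sketch, with blocks represented as sets:

```python
# Sketch: Split(P, X) replaces each block B meeting X but not
# included in X by the two blocks B & X and B - X.

def split(partition, X):
    out = []
    for B in partition:
        inter, rest = B & X, B - X
        if inter and rest:
            out.extend([inter, rest])
        else:
            out.append(B)
    return out

P = [{1, 2, 3}, {4}]
```

Blocks entirely inside or entirely outside X are left untouched, which is what makes iterated splitting converge.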

To efficiently perform the split and refine steps, we need a set of counters which associate to each representative state of a block of an equivalence relation the number of blocks, of that same equivalence relation, that it reaches in R(D), for D a block of the current preorder R.
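One plausible reading of these counters can be sketched as follows. This is an assumption about the definition given below, not the paper's exact formulation, and the representation and the name `build_counters` are ours:

```python
# Sketch (assumed reading): for the representative r of a block C of
# the equivalence, and a block D of the preorder R, count the blocks
# of the equivalence inside R(D) that r reaches in one step.

def build_counters(eq_blocks, reps, R, trans, preorder_blocks):
    succ = {}
    for (p, q) in trans:
        succ.setdefault(p, set()).add(q)
    block_of = {q: B for B in eq_blocks for q in B}
    counters = {}
    for C in eq_blocks:
        r = reps[C]
        for D in preorder_blocks:
            RD = {qp for (q, qp) in R if q in D}          # R(D)
            hit = {block_of[s] for s in succ.get(r, set()) if s in RD}
            counters[(r, D)] = len(hit)
    return counters

eq_blocks = [frozenset({1}), frozenset({2}), frozenset({3, 4})]
reps = {B: min(B) for B in eq_blocks}
R = {(q, q) for q in (1, 2, 3, 4)} | {(3, 4), (4, 3)}
trans = {(1, 2), (1, 3)}
counters = build_counters(eq_blocks, reps, R, trans, eq_blocks)
```

A counter of zero then means the representative, and hence the whole block, has no transition into R(D), which is the constant-time test used by the refine step.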

Definition 11.

Let ≡ be an equivalence relation included in a preorder R. We assume that for each block of ≡ a representative state has been chosen. Let C be a block of ≡, D be a block of R and q be the representative of C. We define: count(q, D) = |{B ∈ P_≡ | B ⊆ R(D) ∧ →(q) ∩ B ≠ ∅}|.

Proposition 12.

Let ≡ be an equivalence relation included in a preorder R, C be a block of ≡, D be a block of R and X be a non-empty subset of D. Then: R(X) = R(D).


Thanks to the transitivity of R. ∎

Following Section 3, the purpose of these counters is to check in constant time whether a block of an equivalence relation is included in →⁻¹(R(q')) for a given state q'. But this is correct only if the equivalence relation is already