Learning to Prove Safety over Parameterised Concurrent Systems (Full Version)

by Yu-Fang Chen, et al.

We revisit the classic problem of proving safety over parameterised concurrent systems, i.e., an infinite family of finite-state concurrent systems represented by some finite (symbolic) means. An example of such an infinite family is a dining philosopher protocol with any number n of processes (n being the parameter that defines the infinite family). Regular model checking is a well-known generic framework for modelling parameterised concurrent systems, where an infinite set of configurations (resp. transitions) is represented by a regular set (resp. regular transducer). Although verifying safety properties in the regular model checking framework is undecidable in general, many sophisticated semi-algorithms have been developed in the past fifteen years that can successfully prove safety in many practical instances. In this paper, we propose a simple solution to synthesise regular inductive invariants that makes use of Angluin's classic L* algorithm (and its variants). We provide a termination guarantee when the set of configurations reachable from a given set of initial configurations is regular. We have tested the L* algorithm on standard (as well as new) examples in regular model checking, including the dining philosopher protocol, the dining cryptographer protocol, and several mutual exclusion protocols (e.g. Bakery, Burns, Szymanski, and German). Our experiments show that, despite the simplicity of our solution, it can perform at least as well as existing semi-algorithms.








I Introduction

Parameterised concurrent systems are infinite families of finite-state concurrent systems, parameterised by the number n of processes. There are numerous examples of parameterised concurrent systems, including models of distributed algorithms, which are typically designed to handle an arbitrary number of processes [33, 54]. Verification of such systems, then, amounts to proving that a desired property holds for all permitted values of n. For example, proving that the safety property holds for a dining philosopher protocol entails proving that the protocol with any given number n of philosophers can never reach a state in which two neighbouring philosophers eat simultaneously. For each given value of n, verifying safety/liveness is decidable, albeit with an exponential state-space explosion in the parameter n. However, when the property has to hold for each value of n, the number of system configurations a verification algorithm has to explore is potentially infinite. Indeed, even safety checking is already undecidable for parameterised concurrent systems [9, 12, 31]; see [13] for a comprehensive survey of the decidability aspects of the parameterised verification problem.

Various sophisticated semi-algorithms for verifying parameterised concurrent systems are available. These semi-algorithms typically rely on a symbolic framework for representing infinite sets of system configurations and transitions. Regular model checking [43, 7, 14, 15, 6, 1, 22, 46, 66] is one well-known symbolic framework for modelling and verifying parameterised concurrent systems. In regular model checking, configurations are modelled as words over a finite alphabet, sets of configurations are represented as regular languages, and the transition relation is defined by a regular transducer. The research programme of regular model checking has shown not only that regular languages/transducers are highly expressive symbolic representations for modelling parameterised concurrent systems, but also that they are amenable to an automata-theoretic approach (thanks to the many closure properties of regular languages/transducers), which has often proven effective in verification.

In this paper, we revisit the classic problem of verifying safety in the regular model checking framework. Many sophisticated semi-algorithms for dealing with this problem have been developed in the literature using methods such as abstraction [4, 5, 21, 20], widening [15, 23], acceleration [58, 43, 11], and learning [55, 56, 39, 64, 63]. One standard technique for proving safety of an infinite-state system is to exhibit an inductive invariant Inv (i.e. a set of configurations that is closed under an application of the transition relation) such that (i) Inv subsumes the set I of all initial configurations, but (ii) Inv does not intersect the set B of unsafe configurations. In regular model checking, the sets I and B are given as regular sets. For this reason, a natural method for proving safety in regular model checking is to exhibit a regular inductive invariant satisfying (i) and (ii). The regular set Inv then serves as a “regular proof” for safety, since checking that a candidate regular set is such a proof is decidable. A few semi-algorithms inspired by automata learning — some based on passive learning algorithms [39, 56, 2] and some on active learning algorithms [56, 63] — have been proposed to synthesise a regular inductive invariant in regular model checking. Despite these semi-algorithms, not much attention has been paid to applications of automata learning in regular model checking.

In this paper, we are interested in one basic research question in regular model checking: can we effectively apply Angluin’s classic automata learning algorithm L* [8] (or its variants [59, 45]) to learn a regular inductive invariant? Hitherto this question, perhaps surprisingly, has had no satisfactory answer in the literature. A more careful consideration reveals at least two problems. Firstly, membership queries (i.e. is a word w reachable from I?) may be asked by the learning algorithm, which amounts to checking reachability in an infinite-state system and is undecidable in general. This problem was already noted in [55, 56, 63, 64]. Secondly, a regular inductive invariant satisfying (i) and (ii) might not be unique, so strictly speaking we are not dealing with a well-defined learning problem. More precisely, consider the question of what the teacher should answer when the learner asks whether a word w is in the desired invariant, but w turns out not to be reachable from I. Discarding w might not be a good idea, since this could force the learning algorithm to look for a minimal (in the sense of set inclusion) inductive invariant, which might not be regular. Similarly, consider what the teacher should answer when we have found a pair of configurations (v, w) such that (1) v is in the candidate invariant Inv, (2) w is not in Inv, and (3) there is a transition from v to w. In the ICE-learning framework [36, 35, 55], the pair (v, w) is called an implication counterexample. To satisfy the inductive invariant constraint, the teacher may respond that w should be added to Inv, or that v should be removed from Inv. Some works in the literature have proposed using a three-valued logic/automaton (with “don’t know” as an answer) because of the teacher’s incomplete information [38, 26].


In this paper, we propose a simple and practical solution to the problem of applying the classic automata learning algorithm and its variants to synthesise a regular inductive invariant in regular model checking. To deal with the first problem mentioned in the previous paragraph, we propose to restrict to length-preserving regular transducers. In theory, length-preservation is not a restriction for safety analysis, since it merely implies that each instance of the considered parameterised system operates on a bounded memory of size n (but the parameter n itself is unbounded). Experience shows that many practical examples of parameterised concurrent systems can be captured naturally in terms of length-preserving systems, e.g., see [53, 7, 6, 43, 22, 58, 1]. The benefit of the restriction is that membership queries become decidable, since the set of configurations that may reach (or be reachable from) any given configuration is finite, and so the queries can be answered by a standard finite-state model checker. For the second problem mentioned in the previous paragraph, we propose that a strict teacher be employed when learning regular inductive invariants in regular model checking. A strict teacher attempts to teach the learner the minimal inductive invariant (be it regular or not), but is satisfied when the candidate posed by the learner is an inductive invariant satisfying (i) and (ii) without being minimal. [In this sense, perhaps a more appropriate term is a strict but generous teacher, who tries to let a student pass a final exam whenever possible.] For this reason, when the learner asks whether a word w is in the desired inductive invariant, the teacher will reply NO if w is not reachable from I. The same goes for an implication counterexample (v, w): the teacher will reply that an unreachable v is not in the desired inductive invariant.

We have implemented the learning-based approach in a prototype tool with an interface to the libalf library, which includes the L* algorithm and its variants. Despite the simplicity of our solution, it (perhaps surprisingly) works extremely well in practice, as our experiments suggest. We have taken numerous standard examples from regular model checking, including cache coherence protocols (German’s Protocol), self-stabilising protocols (Israeli-Jalfon’s Protocol and Herman’s Protocol), synchronisation protocols (Lehmann-Rabin’s Dining Philosopher Protocol), secure multi-party computation protocols (the Dining Cryptographers Protocol [25]), and mutual exclusion protocols (Szymanski’s Protocol, Burns’ Protocol, Dijkstra’s Protocol, Lamport’s Bakery Algorithm, and the Resource-Allocator Protocol). We show that the L*-based algorithm can perform at least as well as (and, in fact, often outperforms) existing semi-algorithms. We compared the performance of our algorithm with well-known and established techniques, such as SAT-based learning [56, 55, 52, 53]; abstract regular model checking (ARMC), which is based on abstraction-refinement using predicate abstractions and finite-length abstractions [20, 21]; and T(O)RMC, which is based on extrapolation (a widening technique) [16].

Related Work

The work of Vardhan et al. [64, 63] applies learning to infinite-state systems and, amongst others, regular model checking. Their learning algorithm attempts to learn an inductive invariant enriched with “distance” information, which is one way to make membership queries (i.e. reachability for general infinite-state systems) decidable. This often makes the resulting set non-regular, even if the set of reachable configurations is regular; in the latter case, our algorithm is guaranteed to terminate (recall that our algorithm learns a regular invariant without distance information). Conversely, when an inductive invariant enriched with distance information is regular, so is the projection that omits the distance information. Unfortunately, neither their tool Lever [64] nor the models used in their experiments are available, so we cannot make a direct comparison with our approach. A learning algorithm allowing incomplete information [38] has been applied in [56] for inferring inductive invariants in regular model checking. Although the learning algorithm of [38] uses the same data structure as the standard L* algorithm, it is essentially a SAT-based learning algorithm (its termination is not guaranteed by the Myhill-Nerode theorem).

Although our results suggest that SAT-based learning is less efficient than L* learning for synthesising regular inductive invariants in regular model checking, SAT-based learning is more general and more easily applicable when verifying other properties, e.g., liveness [53], fair termination [49], and safety games [57]. View abstraction [5] is a novel technique for parameterised verification. Compared to parameterised verification based on view abstraction, our framework (i.e. the general regular model checking framework with transducers) provides a more expressive modelling language, which is required for specifying protocols with near-neighbour communication (e.g. Dining Cryptographers and Dining Philosophers).

When preparing the final version, we found that a very similar algorithm had already appeared in Vardhan’s thesis [62, Section 6] from 2006, including, in particular, the trick of making a membership query (i.e. point-to-point reachability) decidable by bounding the search space of the transducers. The research presented here was conducted independently, and considers several aspects that were not yet present in [62], including experimental results on systems that are not counter systems (parameterised concurrent systems with non-trivial topologies), and heuristics such as the use of shortest counterexamples and caching. We cannot compare our implementation in detail with the one from [62], since the latter tool is not publicly available.


The notations are defined in Section II. Brief introductions to regular model checking and automata learning are given in Sections III and IV, respectively. The learning-based algorithm is presented in Section V. The results of the experiments are reported in Section VI.

II Preliminaries

General Notations

Let Σ be a finite set of symbols called an alphabet. A word over Σ is a finite sequence of symbols of Σ. We use ε to represent the empty word. For a set S ⊆ Σ* and a relation R ⊆ Σ* × Σ*, we define R(S) to be the post-image of S under R, i.e., R(S) = {w′ : (w, w′) ∈ R for some w ∈ S}. Let R^0 be the identity relation. We define R^k for all k ≥ 0 in the standard way by induction: R^0 = id, and R^{k+1} = R^k ∘ R, where ∘ denotes the composition of relations. Let R^+ = ⋃_{k≥1} R^k denote the transitive closure of R, and R* = ⋃_{k≥0} R^k its reflexive transitive closure. For any two sets S₁ and S₂, we use S₁ ⊕ S₂ to denote their symmetric difference, i.e., the set (S₁ \ S₂) ∪ (S₂ \ S₁).
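The relational notation above can be illustrated on explicitly enumerated finite relations. The following Python sketch (illustration only, not part of the paper's tooling) computes post-images and the transitive closure as a fixpoint:

```python
def post_image(S, R):
    """R(S) = {w' : (w, w') in R for some w in S}."""
    return {w2 for (w1, w2) in R if w1 in S}

def transitive_closure(R):
    """R+ = union of R^k for k >= 1, computed as a fixpoint."""
    closure = set(R)
    while True:
        # Compose the closure with itself and keep any new pairs.
        new = {(a, c) for (a, b) in closure for (b2, c) in closure if b == b2}
        if new <= closure:
            return closure
        closure |= new

R = {("a", "b"), ("b", "c")}
print(post_image({"a"}, R))                 # {'b'}
print(("a", "c") in transitive_closure(R))  # True
```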

Finite Automata and Transducers

In this paper, automata/transducers are denoted in calligraphic fonts, while the corresponding languages/relations are denoted in roman fonts.

A finite automaton (FA) is a tuple A = (Q, Σ, δ, q_0, F), where Q is a finite set of states, Σ is an alphabet, δ ⊆ Q × Σ × Q is a transition relation, q_0 ∈ Q is the initial state, and F ⊆ Q is the set of final states. A run of A on a word w = a_1 a_2 ⋯ a_n is a sequence of states q_0 q_1 ⋯ q_n such that (q_{i−1}, a_i, q_i) ∈ δ for all 1 ≤ i ≤ n. A run is accepting if the last state q_n ∈ F. A word w is accepted by A if it has an accepting run. The language of A, denoted by L(A), is the set of words accepted by A. A language is regular if it can be recognised by a finite automaton. A is a deterministic finite automaton (DFA) if |{q′ : (q, a, q′) ∈ δ}| ≤ 1 for each q ∈ Q and a ∈ Σ.
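As an illustration of the FA definition, the sketch below simulates the (possibly nondeterministic) transition relation on a word; the automaton shown, accepting words over {T, N} that contain at least one T, is a made-up example:

```python
def accepts(delta, q0, final, word):
    """delta: set of (q, a, q') triples; True iff some run ends in a final state."""
    current = {q0}                      # set of states reachable on the prefix read so far
    for a in word:
        current = {q2 for (q1, sym, q2) in delta if q1 in current and sym == a}
    return bool(current & final)

# FA over {T, N} accepting words that contain at least one T (states 0 and 1).
delta = {(0, "N", 0), (0, "T", 1), (1, "N", 1), (1, "T", 1)}
print(accepts(delta, 0, {1}, "NNT"))  # True
print(accepts(delta, 0, {1}, "NNN"))  # False
```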

Let Σ_ε = Σ ∪ {ε}. A (finite) transducer is a tuple T = (Q, Σ_ε × Σ_ε, δ, q_0, F), where Q is a finite set of states, δ ⊆ Q × (Σ_ε × Σ_ε) × Q is a transition relation, q_0 ∈ Q is the initial state, and F ⊆ Q is the set of final states. We say that T is length-preserving if δ ⊆ Q × (Σ × Σ) × Q. We define the relation →_T as the smallest relation satisfying (1) q →_T^{(ε,ε)} q for any q ∈ Q, and (2) if q →_T^{(u,v)} q′ and (q′, (a, b), q″) ∈ δ, then q →_T^{(ua,vb)} q″. The relation represented by T is the set {(u, v) ∈ Σ* × Σ* : q_0 →_T^{(u,v)} q_f for some q_f ∈ F}. A relation is regular and length-preserving if it can be represented by a length-preserving transducer.
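A length-preserving transducer can be simulated in the same style: its transitions read pairs of symbols, so a pair (u, v) of equal-length words is in the relation iff the pair sequence drives the transducer to a final state. The one-state identity transducer below is a made-up example:

```python
def in_relation(delta, q0, final, u, v):
    """Membership of (u, v) in the relation of a length-preserving transducer."""
    if len(u) != len(v):               # length-preserving: only equal lengths relate
        return False
    current = {q0}
    for a, b in zip(u, v):
        current = {q2 for (q1, pair, q2) in delta if q1 in current and pair == (a, b)}
    return bool(current & final)

# One-state transducer relating every word over {T, N} to itself (the identity).
delta = {(0, ("T", "T"), 0), (0, ("N", "N"), 0)}
print(in_relation(delta, 0, {0}, "TN", "TN"))  # True
print(in_relation(delta, 0, {0}, "TN", "NT"))  # False
```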

III Regular model checking

Regular model checking (RMC) is a uniform framework for modelling and automatically analysing parameterised concurrent systems. In this paper, we focus on the regular model checking framework for safety properties. Under the framework, each system configuration is represented as a word over an alphabet Σ. The set I of initial configurations and the set B of bad configurations are captured by regular languages over Σ. The transition relation T is captured by a regular and length-preserving relation on Σ*. We use a triple (I, T, B) to denote a regular model checking problem, given by an FA recognising the set I of initial configurations, a transducer representing the transition relation T, and an FA recognising the set B of bad configurations. The regular model checking problem then asks whether T*(I) ∩ B = ∅. A standard way to prove T*(I) ∩ B = ∅ is to find a proof based on a set Inv satisfying the following three conditions: (1) I ⊆ Inv (all initial configurations are contained in Inv), (2) Inv ∩ B = ∅ (Inv does not contain bad configurations), and (3) T(Inv) ⊆ Inv (Inv is inductive: applying T to any configuration in Inv does not take it outside Inv). We call such a set Inv an inductive invariant for the regular model checking problem (I, T, B). In the framework of regular model checking, a standard method for proving safety (e.g. see [56, 7]) is to find a regular proof, i.e., an inductive invariant that can be captured by a finite automaton. Because regular languages are effectively closed under Boolean operations and under taking pre-/post-images w.r.t. finite transducers, an algorithm for verifying whether a given regular language is an inductive invariant can be obtained using language inclusion algorithms for FA [3, 19].
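The three conditions can be checked mechanically. Over explicitly enumerated finite sets, standing in for the regular languages and the transducer (the toy instance below is purely illustrative), the check is a few lines:

```python
def is_inductive_invariant(inv, init, bad, trans):
    """Check conditions (1)-(3) over explicit finite sets (illustration only)."""
    cond1 = init <= inv                              # (1) I is a subset of Inv
    cond2 = not (inv & bad)                          # (2) Inv and B are disjoint
    post = {w2 for (w1, w2) in trans if w1 in inv}   # T(Inv)
    cond3 = post <= inv                              # (3) Inv is inductive
    return cond1 and cond2 and cond3

init, bad = {"TN"}, {"NN"}
trans = {("TN", "NT"), ("NT", "TN")}
print(is_inductive_invariant({"TN", "NT"}, init, bad, trans))  # True
print(is_inductive_invariant({"TN"}, init, bad, trans))        # False (not closed)
```

In the actual framework these checks are language-inclusion tests on automata; the finite-set version only makes the proof obligations concrete.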

Example 1 (Herman’s Protocol).

Herman’s Protocol is a self-stabilising protocol for n processes (say with ids 1, …, n) organised in a ring structure. A configuration of Herman’s Protocol is correct iff exactly one process has a token. The protocol ensures that any system configuration in which the processes collectively hold an odd number of tokens will almost surely recover to a correct configuration. More concretely, the protocol works iteratively. In each iteration, the scheduler randomly chooses a process. If the process with number i is chosen by the scheduler, it tosses a coin to decide whether to keep the token or pass the token to the next process, i.e., the one with number (i mod n) + 1. If a process holds two tokens in the same iteration, it discards both tokens. One safety property the protocol guarantees is that every system configuration has at least one token.

The protocol and the corresponding safety property can be modelled as a regular model checking problem (I, T, B). Each process has two states: the symbol T denotes that the process has a token, and N denotes that it does not. For example, the word NNTTNN denotes a system configuration with six processes, in which only the processes with numbers 3 and 4 hold tokens. The set of initial configurations is I = N*T(N*TN*T)*N*, i.e., an odd number of processes hold tokens. The set of bad configurations is B = N*, i.e., all tokens have disappeared. We use the regular language Id = (T,T) + (N,N) to denote that a process is idle. The transition relation T can then be specified as the union of the following regular expressions: (1) Id* [Idle], (2) Id*(T,N)(T,N)Id* [Discard both tokens], and (3) Id*(T,N)(N,T)Id* + (N,T)Id*(T,N) [Pass the token].
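The relation can also be read operationally: given one configuration word, its successors under cases (2) and (3) move or cancel a token at the chosen position, and case (1) leaves the word unchanged. A small Python sketch, enumerating successors directly rather than via the transducer encoding (illustration only):

```python
def herman_successors(w):
    """Successor configurations of the word w under Herman's transition relation."""
    n, succs = len(w), set()
    for i in range(n):
        if w[i] != "T":
            continue                       # only a token-holding process can act
        j = (i + 1) % n                    # the next process in the ring
        conf = list(w)
        if conf[j] == "T":                 # passing onto a token: both are discarded
            conf[i] = conf[j] = "N"
        else:                              # the token moves to process j
            conf[i], conf[j] = "N", "T"
        succs.add("".join(conf))
    succs.add(w)                           # the chosen process may keep its token
    return succs

print(sorted(herman_successors("TTT")))  # ['NNT', 'NTN', 'TNN', 'TTT']
```

Note that every successor of "TTT" still contains a token, consistent with the safety property for odd-token configurations.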

IV Automata Learning

Suppose L is a regular target language whose definition is not directly accessible. Automata learning algorithms [8, 59, 45, 17] automatically infer an FA A recognising L. The setting of an online learning algorithm assumes a teacher who has access to L and can answer the following two queries: (1) Membership query Mem(w): is the word w a member of L, i.e., w ∈ L? (2) Equivalence query Equ(A): is the language of the FA A equal to L, i.e., L(A) = L? If not, the teacher returns a counterexample w in the symmetric difference of L(A) and L. The learning algorithm constructs an FA A such that L(A) = L by interacting with the teacher. Such an algorithm works iteratively: in each iteration, it performs membership queries to obtain information about L from the teacher. Using the results of the queries, it constructs a candidate automaton A and makes an equivalence query Equ(A). If L(A) = L, the algorithm terminates with A as the resulting FA. Otherwise, the teacher returns a word w distinguishing L(A) from L, and the learning algorithm uses w to refine the candidate automaton for the next iteration. In the last decade, automata learning algorithms have frequently been applied to solve formal verification and synthesis problems, cf. [27, 24, 39, 38, 26, 32].

More concretely, below we explain the details of the automata learning algorithm proposed by Rivest and Schapire [59] (RS), an improved version of Angluin’s classic L* algorithm [8]. The foundation of the learning algorithm is the Myhill-Nerode theorem, from which one can infer that the states of the minimal DFA recognising L are isomorphic to the equivalence classes of the relation x ≡_L y ⟺ ∀z ∈ Σ*. (xz ∈ L ⟺ yz ∈ L). Informally, two strings x and y belong to the same state of the minimal DFA recognising L iff they cannot be distinguished by any suffix z. In other words, if one can find a suffix z such that xz ∈ L and yz ∉ L, or vice versa, then x and y belong to different states of the minimal DFA.
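The Myhill-Nerode criterion suggests a simple (bounded, hence incomplete) test: search for a distinguishing suffix up to some length. The sketch below, with a toy target language, is illustrative only:

```python
from itertools import product

def distinguishing_suffix(x, y, in_L, alphabet, max_len):
    """Return a suffix z with in_L(x+z) != in_L(y+z), or None if none is found
    up to max_len (a bounded check; Myhill-Nerode quantifies over all z)."""
    for k in range(max_len + 1):
        for z in map("".join, product(alphabet, repeat=k)):
            if in_L(x + z) != in_L(y + z):
                return z
    return None

in_L = lambda w: w.endswith("ab")   # toy target: words ending in "ab"
print(distinguishing_suffix("a", "b", in_L, "ab", 2))   # 'b'  ("ab" in L, "bb" not)
print(distinguishing_suffix("ba", "a", in_L, "ab", 2))  # None (the two are equivalent)
```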

The algorithm uses a data structure called an observation table (S, E, f) to find the equivalence classes of ≡_L, where S ⊆ Σ* is a set of strings denoting the identified states, E ⊆ Σ* is a set of suffixes used to distinguish whether two strings belong to the same state of the minimal DFA, and f is a mapping from (S ∪ S·Σ)·E to {0, 1}. The value of f(w) is 1 iff w ∈ L. We write x ≡_E y as shorthand for ∀z ∈ E. f(xz) = f(yz); that is, the strings x and y cannot be identified as two different states using only strings in E as suffixes. Observe that x ≡_L y implies x ≡_E y for every E ⊆ Σ*. We say that an observation table is closed iff for all x ∈ S and a ∈ Σ there exists y ∈ S with xa ≡_E y. Informally, in a closed table every state can find its successors w.r.t. all symbols in Σ. Initially, S = E = {ε}, and f is filled in using membership queries.
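Closedness is a purely syntactic check on the table. A minimal sketch, with f standing for membership in a toy target language (all names below are illustrative):

```python
def row(s, E, f):
    """The row of s: the values f(s·e) for every suffix e in E."""
    return tuple(f(s + e) for e in E)

def find_unclosed(S, E, alphabet, f):
    """Return (s, a) with row(s·a) different from every row of S, else None."""
    rows = {row(s, E, f) for s in S}
    for s in S:
        for a in alphabet:
            if row(s + a, E, f) not in rows:
                return (s, a)
    return None

f = lambda w: w.endswith("ab")              # membership in the toy target language
print(find_unclosed({""}, ["b"], "ab", f))  # ('', 'a'): S must be extended with 'a'
```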

Input: A teacher answering Mem(w) and Equ(A) about a target regular language L, and the initial observation table (S, E, f) with S = E = {ε}
1 repeat
2       while (S, E, f) is not closed do
3             Find a pair (x, a) ∈ S × Σ such that xa ≢_E y for all y ∈ S. Extend S to S ∪ {xa} and update f using membership queries accordingly;
4      Build a candidate DFA A = (S, Σ, δ, ε, F), where δ = {(x, a, y) : xa ≡_E y}, the empty string ε is the initial state, and F = {x ∈ S : f(x) = 1};
5       if Equ(A) returns a counterexample w then  Analyse w and add a suffix of w to E;
6 until Equ(A) = yes;
7 return A, the minimal DFA for L;
Algorithm 1 The improved L* algorithm by Rivest and Schapire

The details of the improved L* algorithm by Rivest and Schapire are given in Algorithm 1. Observe that, in the algorithm, two strings x and y with x ≡_E y will never be simultaneously contained in the set S. When the equivalence query returns a counterexample w, the algorithm performs a binary search over w using membership queries to find a suffix e of w and extends E to E ∪ {e}. The suffix e has the property that adding it to E distinguishes at least two strings that were previously indistinguishable, i.e., it identifies at least one more state. The existence of such a suffix is guaranteed; we refer the reader to [59] for the proof.
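To make the loop structure concrete, here is a compact, simplified variant of Algorithm 1: it closes the table and builds the candidate as above, but replaces Rivest and Schapire's binary search with the simpler rule of adding every suffix of the counterexample to E, and replaces the teacher's equivalence query with a bounded exhaustive check. Both are simplifications for illustration, and `max_len` is an assumed bound:

```python
from itertools import product

def lstar(is_member, alphabet, max_len=6):
    """Learn a DFA (as an acceptance function) for the language decided by is_member."""
    S, E = {""}, [""]
    row = lambda s: tuple(is_member(s + e) for e in E)
    while True:
        # Close the table: every row(s+a) must appear among the rows of S.
        while True:
            rows = {row(s) for s in S}
            pair = next(((s, a) for s in S for a in alphabet
                         if row(s + a) not in rows), None)
            if pair is None:
                break
            S.add(pair[0] + pair[1])
        # Candidate DFA: states are rows, initial state row(""),
        # a state is accepting iff its representative string is in the language.
        reps = {row(s): s for s in S}
        def accepts(w, reps=reps):
            state = reps[row("")]
            for a in w:
                state = reps[row(state + a)]
            return is_member(state)
        # Bounded equivalence check standing in for the teacher (illustration).
        cex = next((w for k in range(max_len + 1)
                    for w in map("".join, product(alphabet, repeat=k))
                    if accepts(w) != is_member(w)), None)
        if cex is None:
            return accepts
        for i in range(len(cex)):   # simplification: add every suffix of cex to E
            if cex[i:] not in E:
                E.append(cex[i:])

learned = lstar(lambda w: w.endswith("ab"), "ab")
print(learned("aab"), learned("aba"))  # True False
```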

Proposition 1.

[59] Algorithm 1 finds the minimal DFA A for L using at most n equivalence queries and O(|Σ|n² + n log m) membership queries, where n is the number of states of A and m is the length of the longest counterexample returned by the teacher.

Because each equivalence query with a negative answer increases the size (number of states) of the candidate DFA by at least one, and the size of the candidate DFA is bounded by n according to the Myhill-Nerode theorem, the learning algorithm uses at most n equivalence queries. The number of membership queries required to fill in the entire observation table is bounded by O(|Σ|n²). Since a binary search is used to analyse each counterexample, and the number of counterexamples returned by the teacher is bounded by n, the number of membership queries required for counterexample analysis is bounded by O(n log m).

We would also like to introduce two other important variants of the L* algorithm. The algorithm proposed by Kearns and Vazirani [45] (KV) uses a classification tree data structure in place of the observation table of the classic L* algorithm. The algorithm of Kearns and Vazirani has a query complexity similar to that of Rivest and Schapire [59]. However, its worst-case bound on the number of membership queries is very loose: it assumes that the structure of the classification tree is linear, i.e., that each node has at most one child, which happens very rarely in practice. In our experience, the algorithm of Kearns and Vazirani usually requires a few more equivalence queries, but a significantly lower number of membership queries, compared to Rivest and Schapire when applied to verification problems.

The NL* algorithm [17] learns a nondeterministic finite automaton instead of a deterministic one. More concretely, it makes use of a canonical form of nondeterministic finite automaton, named residual finite-state automaton (RFSA), to express the target regular language. In some examples, an RFSA can be exponentially more succinct than a DFA recognising the same language. In the worst case, the NL* algorithm uses O(n²) equivalence queries and a number of membership queries polynomial in n and m to infer a canonical RFSA for the target language.

V Algorithm



Fig. 1: Overview: using automata learning to solve the regular model checking problem (I, T, B); the teacher answers either Safe together with an inductive invariant, or Unsafe together with a word witnessing reachability of B. Recall that we use calligraphic fonts for automata/transducers and roman fonts for the corresponding languages/relations.

We apply automata learning algorithms, including Angluin’s L* and its variants, to solve the regular model checking problem (I, T, B). These learning algorithms require a teacher answering both equivalence and membership queries. Our strategy is to design a “strict teacher” targeting the minimal inductive invariant T*(I). For a membership query on a word w, the teacher checks whether w ∈ T*(I), which is decidable under the assumption that T is length-preserving. For an equivalence query on a candidate FA with language C, the teacher analyses whether C can be used as an inductive invariant in a proof of the problem (I, T, B). It performs one of the following actions, depending on the result of the analysis (Fig. 1):

  • Determine that C does not represent an inductive invariant, and return a counterexample together with an explanation to the learner.

  • Conclude that T*(I) ∩ B = ∅, and terminate the learning process with the inductive invariant C as the proof.

  • Conclude that T*(I) ∩ B ≠ ∅, and terminate the learning process with a word w ∈ T*(I) ∩ B as evidence.

Similar to the typical regular model checking approach, our learning-based technique tries to find a “regular proof”, which amounts to finding an inductive invariant in the form of a regular language. Our approach is incomplete in general, since it could happen that only non-regular inductive invariants exist. Such pathological cases do not, however, seem to occur frequently in practice, cf. [21, 39, 20, 22, 61, 58, 51].

Answering a membership query on a word w, i.e., checking whether w ∈ T*(I), is the easy part: since T is length-preserving, we can construct an FA recognising the finite set T*(I) ∩ Σ^{|w|} and then check whether w is accepted. In practice, this set can be efficiently computed and represented using BDDs and symbolic model checking.
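Since all configurations involved have the same length |w|, the check reduces to an explicit fixpoint over a finite set. The sketch below enumerates successors directly, as a stand-in for the transducer and the BDD-based computation; the token-passing ring is a made-up example:

```python
def is_reachable(w, init_of_len, successors):
    """Decide w in T*(I) for a length-preserving system, by exhaustive search
    over the (finite) set of configurations of length |w|."""
    seen = set(init_of_len(len(w)))      # initial configurations of length |w|
    frontier = list(seen)
    while frontier:
        conf = frontier.pop()
        for nxt in successors(conf):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return w in seen

def rotate(w):
    """Toy transition: the single token moves one step around the ring."""
    return {w[-1] + w[:-1]}

init_of_len = lambda n: ["T" + "N" * (n - 1)]
print(is_reachable("NNT", init_of_len, rotate))  # True
print(is_reachable("TTN", init_of_len, rotate))  # False
```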

For an equivalence query on a candidate FA with language C, we need to check whether C can be used as an inductive invariant for the regular model checking problem (I, T, B). More concretely, we check the three conditions (1) I ⊆ C, (2) C ∩ B = ∅, and (3) T(C) ⊆ C using Algorithm 2.

Input: An FA with language C, and an RMC problem (I, T, B)
1 if I ⊈ C then
2       Find a word w ∈ I \ C;
3       return (positive, w) to the learner;
4 else if C ∩ B ≠ ∅ then
5       Find a word w ∈ C ∩ B;
6       if w ∈ T*(I) then  Output {Unsafe, w} and halt;
7       else  return (negative, w) to the learner;
8 else if T(C) ⊈ C then
9       Find a pair of words (v, w) such that v ∈ C and (v, w) ∈ T, but w ∉ C;
10      if v ∈ T*(I) then  return (positive, w) to the learner;
11      else  return (negative, v) to the learner;
12 else  Output {Safe, C} and halt;
Algorithm 2 Answering an equivalence query on a candidate FA with language C

If condition (1) is violated, i.e., I ⊈ C, there is a word w ∈ I \ C. Since I ⊆ T*(I), the teacher can infer that w ∈ T*(I) and return w as a positive counterexample to the learner. A counterexample is positive if it is a word in the target language that is missing from the candidate language; the definition of negative counterexamples is symmetric.

If condition (2) is violated, i.e., C ∩ B ≠ ∅, there is a word w ∈ C ∩ B. The teacher checks whether w ∈ T*(I) by constructing T*(I) ∩ Σ^{|w|} and testing membership of w. If w ∉ T*(I), the teacher concludes that w does not belong to the invariant and returns w as a negative counterexample to the learner. Otherwise, the teacher infers that T*(I) ∩ B ≠ ∅ and outputs Unsafe with the word w as evidence.

The case where condition (3) is violated, i.e., T(C) ⊈ C, is more involved. There exists a pair of words (v, w) such that v ∈ C, (v, w) ∈ T, and w ∉ C. The teacher checks whether v ∈ T*(I). If it is, then the teacher knows that w ∈ T*(I) as well, and hence returns w as a positive counterexample to the learner. If v ∉ T*(I), then the teacher knows that v is not in the minimal inductive invariant, and hence returns v as a negative counterexample to the learner.

If all three conditions hold, the “strict teacher” shows its generosity (C might not equal T*(I), but the learner still passes) and concludes that T*(I) ∩ B = ∅, with a proof using C as the inductive invariant.
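Over explicitly enumerated finite sets, the teacher's case analysis of Algorithm 2 can be sketched as follows (the tags `cex+`/`cex-` for positive/negative counterexamples and the toy instance are illustrative):

```python
def answer_equivalence(C, I, B, T, reach):
    """Finite-set sketch of Algorithm 2. C: candidate set; I, B: initial and
    bad sets; T: one-step relation as pairs; reach: the set T*(I)."""
    if not I <= C:                             # condition (1) violated
        w = next(iter(I - C))
        return ("cex+", w)                     # w is reachable, so it belongs in C
    if C & B:                                  # condition (2) violated
        w = next(iter(C & B))
        return ("unsafe", w) if w in reach else ("cex-", w)
    post = {w2 for (w1, w2) in T if w1 in C}
    if not post <= C:                          # condition (3) violated
        v, w = next(p for p in T if p[0] in C and p[1] not in C)
        return ("cex+", w) if v in reach else ("cex-", v)
    return ("safe", C)                         # C is an inductive invariant

I, B = {"TN"}, {"NN"}
T = {("TN", "NT"), ("NT", "TN")}
reach = {"TN", "NT"}
print(answer_equivalence({"TN", "NT"}, I, B, T, reach)[0])  # safe
print(answer_equivalence({"TN"}, I, B, T, reach))           # ('cex+', 'NT')
```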

Theorem 1 (Correctness).

If the algorithm from Fig. 1 terminates, it gives a correct answer to the RMC problem (I, T, B).

To see this, observe that the algorithm provides an inductive invariant when it concludes Safe, and a word in T*(I) ∩ B when it concludes Unsafe. In addition, if one of the learning algorithms from Section IV is used (for NL*, the bound in Theorem 2 increases to O(n²)), we can obtain an additional result about termination:

Theorem 2 (Termination).

When T*(I) is regular, the algorithm from Fig. 1 is guaranteed to terminate in at most n iterations, where n is the number of states of the minimal DFA recognising T*(I).


Observe that, in each iteration of the algorithm, the counterexample obtained by the learner lies in the symmetric difference of the candidate language and T*(I). Hence, when T*(I) can be recognised by a DFA with n states, the algorithm will not execute more than n iterations by Proposition 1. ∎

Two remarks are in order. Firstly, the set T*(I) tends to be regular in practice, e.g., see [21, 39, 20, 22, 61, 58, 10, 11, 51, 50]. In fact, it is known that T*(I) is regular for many subclasses of infinite-state systems that can be modelled in regular model checking [61, 51, 41, 11, 50], including pushdown systems, reversal-bounded counter systems, two-dimensional VASS (Vector Addition Systems with States), and other subclasses of counter systems. Secondly, even in the case when T*(I) is not regular, termination may still occur thanks to the “generosity” of the teacher, who will accept any inductive invariant as an answer.

Considerations on Implementation

The implementation of the learning-based algorithm is very simple. Since it is based on standard automata learning algorithms and uses only basic automata/transducer operations, one can find existing libraries for them; the implementation only needs to take care of how to answer queries. The core of our implementation has only around 150 lines of code (excluding the parser for the input models). We offer a few suggestions for making the implementation more efficient. First, each time an FA recognising the set R_n = T*(I) ∩ Σ^n is produced, we store the pair (n, R_n) in a cache, which can be reused whenever a query on a word of length n is posed. We can also check whether R_n ∩ B ≠ ∅; if so, the algorithm can immediately terminate and return Unsafe. Second, for each language inclusion test, if the inclusion does not hold, we suggest returning the shortest counterexample. This heuristic shortens the average length of the strings sent in membership queries and hence reduces the cost of answering them. Recall that the algorithm needs to build an FA for R_n in order to answer a membership query on a word of length n; the shorter the average length of the query strings, the fewer instances of R_n have to be built.
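The caching suggestion amounts to memoising the reachable-set computation by configuration length. A sketch, with an enumerated successor function standing in for the transducer and the toy ring example purely illustrative:

```python
from functools import lru_cache

def make_membership(init_of_len, successors):
    """Build a membership oracle that computes one reachable set per length."""
    @lru_cache(maxsize=None)            # the cache keyed by configuration length
    def reachable_of_len(n):
        seen, frontier = set(init_of_len(n)), list(init_of_len(n))
        while frontier:
            for nxt in successors(frontier.pop()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return frozenset(seen)
    return lambda w: w in reachable_of_len(len(w))

member = make_membership(lambda n: ["T" + "N" * (n - 1)],
                         lambda w: {w[-1] + w[:-1]})   # single token rotates
print(member("NNT"), member("TTN"))  # True False
```

Repeated membership queries on words of the same length then reuse one fixpoint computation, mirroring the (n, R_n) cache described above.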

VI Evaluation

To evaluate our techniques, we have developed a prototype in Java (available at https://github.com/ericpony/safety-prover) and used the libalf library [18] as the default inference engine. We used our prototype to check safety properties for a range of parameterised systems, including cache coherence protocols (German’s Protocol), self-stabilising protocols (Israeli-Jalfon’s Protocol and Herman’s Protocol), synchronisation protocols (Lehmann-Rabin’s Dining Philosopher Protocol), secure multi-party computation protocols (David Chaum’s Dining Cryptographers Protocol), and mutual exclusion protocols (Szymanski’s Protocol, Burns’ Protocol, Dijkstra’s Protocol, Lamport’s Bakery Algorithm, and the Resource-Allocator Protocol). Most of the examples we consider are standard benchmarks in the literature on regular model checking (cf. [4, 5, 20, 22, 58]). Among them, German’s Protocol and Kanban are more difficult than the other examples for fully automatic verification (cf. [4, 5, 44]).

Based on these examples, we compare our learning method with existing techniques: SAT-based learning [55, 56, 52, 53], extrapolation [16, 47], and abstract regular model checking (ARMC) [20, 21]. The SAT-based learning approach encodes automata as Boolean formulae and exploits a SAT solver to search for candidate automata representing inductive invariants; it uses automata-based algorithms either to verify the correctness of a candidate or to obtain a counterexample that can be encoded as a further Boolean constraint. T(O)RMC [16, 47] extrapolates the limit of the reachable configurations represented by an infinite sequence of automata. The extrapolation is computed by first identifying the increment between successive automata, and then over-approximating the repetition of the increment by adding loops to the automata. ARMC is an efficient technique that integrates abstraction refinement into the fixed-point computation. It begins with an existential abstraction obtained by merging states in the automata/transducers; each time a spurious counterexample is found, the abstraction is refined by splitting some of the merged states. ARMC is among the most efficient algorithms for regular model checking [39].
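The candidate-checking step that these techniques share, namely verifying a candidate invariant or extracting a counterexample, is in our setting exactly the teacher's equivalence check against the three conditions of a regular proof. A minimal sketch with finite sets standing in for the regular sets and the transducer (all names are illustrative, not the paper's API):

```python
def check_candidate(inv, init, bad, post):
    """Return (True, None) if inv is an inductive invariant proving safety,
    otherwise (False, counterexample)."""
    for c in init:
        if c not in inv:
            return False, c            # condition 1: inv must contain Init
    for c in inv:
        if c in bad:
            return False, c            # condition 2: inv must avoid Bad
        for d in post(c):
            if d not in inv:
                return False, (c, d)   # condition 3: inv closed under one step
    return True, None
```

In the real implementation each of the three checks is a regular language inclusion (or emptiness of intersection) test on automata, and the returned counterexample drives the next refinement of the learner.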

The RMC problems RS SAT T(O)RMC ARMC
Name #label Sinit Tinit Strans Ttrans Sbad Tbad Time Sinv Tinv Time Sinv Tinv Time Sinv Tinv Time
Bakery [33] 3 3 3 5 19 3 9 0.0s 6 18 0.5s 2 5 0.0s 6 11 0.0s
Burns [4] 12 3 3 10 125 3 36 0.2s 8 96 1.1s 2 10 0.1s 7 38 0.0s
Szymanski [60] 11 9 9 118 412 13 40 0.3s 43 473 1.6s 2 21 2.0s 51 102 0.1s
German [37] 581 3 3 17 9.5k 4 2112 4.8s 14 8134 t.o. t.o. 10s
Dijkstra [4] 42 1 1 13 827 3 126 0.1s 9 378 1.7s 2 24 6.1s 8 83 0.3s
Dijkstra, ring [29, 34] 12 3 3 13 199 3 36 1.4s 22 264 0.9s 2 14 t.o. 0.1s
Dining Crypto. [25] 14 10 30 17 70 12 70 0.1s 32 448 t.o. t.o. 7.2s
Coffee Can [53] 6 8 18 13 34 5 8 0.0s 3 18 0.2s 2 7 0.1s 6 13 0.0s
Herman, linear [40] 2 2 4 4 10 1 1 0.0s 2 4 0.2s 2 4 0.0s 2 4 0.0s
Herman, ring [40] 2 2 4 9 22 1 1 0.0s 2 4 0.4s 2 4 0.0s 2 4 0.0s
Israeli-Jalfon [42] 2 3 6 24 62 1 1 0.0s 4 8 0.1s 2 4 0.0s 4 8 0.0s
Lehmann-Rabin [48] 6 4 4 14 96 3 13 0.1s 8 48 0.5s 2 11 0.8s 19 105 0.0s
LR Dining Philo. [53] 4 4 4 3 10 3 4 0.0s 4 16 0.2s 2 6 0.1s 7 18 0.0s
Mux Array [34] 6 3 3 4 31 3 18 0.0s 5 30 0.4s 2 7 0.2s 4 14 0.0s
Res. Allocator [30] 3 3 3 7 25 4 9 0.0s 5 15 0.0s 1 3 0.0s 4 9 0.0s
Kanban [5, 44] 3 25 48 98 250 37 68 t.o. t.o. t.o. 3.5s
Water Jugs [65] 11 5 6 23 132 5 12 0.1s 24 264 t.o. t.o. 0.0s
TABLE I: Comparing the performance of different RMC techniques. #label stands for the size of the alphabet; Sx and Tx stand for the numbers of states and transitions, respectively, in the automata/transducers. RS is the result of our prototype using Rivest and Schapire’s version of L*; SAT, T(O)RMC, and ARMC are the results of the other three techniques.

The comparison of these algorithms is reported in Table I; the experiments ran on a MinGW64 system with a 3 GHz Intel i7 processor, a 2 GB memory limit, and a 60-second timeout. The experiments show that the learning method is quite efficient: the results of our prototype are comparable with those of the ARMC algorithm (available at http://www.fit.vutbr.cz/research/groups/verifit/tools/hades) on all examples but Kanban, for which the minimal inductive invariant, if it is regular, has at least 400 states. On the other hand, our algorithm is significantly faster than ARMC in two cases, namely German’s Protocol and Dining Cryptographers. ARMC comes with a bundle of options and heuristics, but not all of them work for our benchmarks. We tested all the heuristics available in the tool and adopted the ones that performed best in our experiments (structure preserving, backward computation, and backward collapsing with all states being predicates; see [21] for explanations). The performance of SAT-based learning is comparable to the previous two approaches whenever inductive invariants representable by automata with few states exist. However, as its runtime grows exponentially with the size of the candidate automata, the SAT-based algorithm fails to solve four examples that do not have small regular inductive invariants. T(O)RMC seems to suffer from a similar problem, as it times out on all examples that cannot be proved by the SAT-based approach.

Table II reports the results of the learning-based algorithm equipped with the different automata learning algorithms implemented in libalf. As the table shows, these algorithms have similar performance on small examples; however, the algorithms of Rivest and Schapire [59] and of Kearns and Vazirani [45] are significantly more efficient than the others on some large examples such as Szymanski and German. Table II also shows that Kearns and Vazirani’s algorithm can often find smaller inductive invariants (fewer states) than the other variants, which explains the performance difference. For NL* [17], our implementation pays an additional cost to determinise the learned FA in order to answer the equivalence queries; this cost is significant when a large invariant is needed.
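Part of the efficiency of Rivest and Schapire's variant comes from its counterexample analysis, which binary-searches the counterexample for a single distinguishing suffix instead of adding all of its suffixes as columns. A hedged sketch of that search (`hyp_access` and `member` are illustrative stand-ins for the hypothesis's access strings and the teacher's membership oracle):

```python
def find_suffix(cex, hyp_access, member):
    """Binary search for the position where replacing the prefix of cex by
    the hypothesis's access string flips the target's answer; the remaining
    suffix is a new distinguishing column for the observation table."""
    lo, hi = 0, len(cex)
    # invariant: substituting at lo agrees with member(cex), at hi it disagrees
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if member(hyp_access(cex[:mid]) + cex[mid:]) == member(cex):
            lo = mid
        else:
            hi = mid
    return cex[hi:]
```

For example, against the target "odd number of a's" and a (wrong) two-state hypothesis that merely tracks whether an `a` has occurred, the counterexample `aab` yields the single distinguishing suffix `b`, at a logarithmic rather than linear number of membership queries.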

Recall that our approach uses a “strict but generous teacher”: for a given RMC problem, the target language of the teacher is the set of configurations reachable from the initial configurations. We have also tried a version with a “flexible and generous teacher”, whose target language is instead the complement of the set of configurations from which a bad configuration is reachable. The performance, however, is worse than that of our current version. This result may reflect the fact that the former set is “more regular” (i.e., can be expressed by a DFA with fewer states) than the latter in practical cases.

Time Sinv Tinv Time Sinv Tinv Time Sinv Tinv Time Sinv Tinv Time Sinv Tinv
Bakery 0.0s 6 18 0.0s 6 18 0.1s 6 18 0.0s 6 18 0.1s 6 18
Burns 0.2s 8 96 0.5s 8 96 0.2s 8 96 0.2s 8 96 0.4s 6 72
Szymanski 0.3s 43 473 2.4s 51 561 1.2s 41 451 0.3s 41 451 1.4s 59 649
German 4.8s 14 8134 13s 15 8715 26s 15 8715 4.2s 14 8134 40s 15 8715
Dijkstra 0.1s 9 378 0.4s 9 378 0.1s 9 378 0.2s 9 378 0.2s 10 420
Dijkstra, ring 1.4s 22 264 2.7s 20 240 8.9s 22 264 1.5s 14 168 1.8s 20 240
Dining Crypto. 0.1s 32 448 0.2s 34 476 0.2s 38 532 0.1s 19 266 0.3s 36 504
Coffee Can 0.0s 3 18 0.0s 3 18 0.0s 4 24 0.0s 3 18 0.0s 4 24
Herman, linear 0.0s 2 4 0.0s 2 4 0.0s 2 4 0.0s 2 4 0.0s 2 4
Herman, ring 0.0s 2 4 0.0s 2 4 0.0s 2 4 0.0s 2 4 0.0s 2 4
Israeli-Jalfon 0.0s 4 8 0.0s 4 8 0.0s 4 8 0.0s 4 8 0.0s 4 8
Lehmann-Rabin 0.1s 8 48 0.2s 8 48 0.1s 8 48 0.1s 8 48 0.2s 8 48
LR D. Philo. 0.0s 4 16 0.2s 4 16 0.0s 5 20 0.0s 4 16 0.0s 8 32
Mux Array 0.0s 5 30 0.0s 5 30 0.0s 5 30 0.0s 5 30 0.0s 5 30
Res. Allocator 0.0s 5 15 0.0s 4 12 0.0s 5 15 0.0s 5 15 0.0s 5 15
Kanban >60s >60s >60s >60s >60s
Water Jugs 0.1s 24 264 0.5s 25 275 0.5s 25 275 0.1s 24 264 0.5s 25 275
TABLE II: Comparing the performance based on different automata learning algorithms. The five groups of columns are the results of the original L* algorithm by Angluin [8], a variant of L* that adds all suffixes of the counterexample to the columns, the version by Rivest and Schapire [59], the version by Kearns and Vazirani [45], and the NL* algorithm [17], respectively.

VII. Conclusion

The encouraging experimental results suggest that the performance of the L* algorithm for synthesising regular inductive invariants is comparable to that of the most sophisticated algorithms for proving safety in regular model checking. From a theoretical viewpoint, learning-based approaches (including ours and [55, 56, 39]) have a termination guarantee when the set of reachable configurations is regular, which is not guaranteed by approaches based on a fixed-point computation (e.g., ARMC [21]). An interesting research question is whether the L* algorithm can be effectively used for verifying other properties, e.g., liveness.


This article is the full version of [28]. We thank the anonymous referees for their useful comments. Rümmer was supported by the Swedish Research Council under grant 2014-5484.


  • [1] P. A. Abdulla. Regular model checking. STTT, 14(2):109–118, 2012.
  • [2] P. A. Abdulla, M. F. Atig, Y. Chen, L. Holík, A. Rezine, P. Rümmer, and J. Stenman. String constraints for verification. In CAV’14, pages 150–166.
  • [3] P. A. Abdulla, Y. Chen, L. Holík, R. Mayr, and T. Vojnar. When simulation meets antichains. In TACAS’10, pages 158–174.
  • [4] P. A. Abdulla, G. Delzanno, N. B. Henda, and A. Rezine. Regular model checking without transducers (on efficient verification of parameterized systems). In TACAS’07, pages 721–736.
  • [5] P. A. Abdulla, F. Haziza, and L. Holík. All for the price of few. In VMCAI’13, pages 476–495.
  • [6] P. A. Abdulla, B. Jonsson, M. Nilsson, J. d’Orso, and M. Saksena. Regular model checking for LTL(MSO). STTT, 14(2):223–241, 2012.
  • [7] P. A. Abdulla, B. Jonsson, M. Nilsson, and M. Saksena. A survey of regular model checking. In CONCUR’04, pages 35–48.
  • [8] D. Angluin. Learning regular sets from queries and counterexamples. Inf. Comput., 75(2):87–106, 1987.
  • [9] K. R. Apt and D. Kozen. Limits for automatic verification of finite-state concurrent systems. IPL, 22(6):307–309, 1986.
  • [10] S. Bardin, A. Finkel, J. Leroux, and L. Petrucci. FAST: acceleration from theory to practice. STTT, 10(5):401–424, 2008.
  • [11] S. Bardin, A. Finkel, J. Leroux, and P. Schnoebelen. Flat acceleration in symbolic model checking. In ATVA’05, pages 474–488.
  • [12] N. Bertrand and P. Fournier. Parameterized verification of many identical probabilistic timed processes. In FSTTCS’13, volume 24 of LIPIcs, pages 501–513. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik.
  • [13] R. Bloem, S. Jacobs, A. Khalimov, I. Konnov, S. Rubin, H. Veith, and J. Widder. Decidability of Parameterized Verification. Synthesis Lectures on Distributed Computing Theory. Morgan & Claypool Publishers, 2015.
  • [14] B. Boigelot. Symbolic Methods for Exploring Infinite State Spaces. PhD thesis, Université de Liège, 1999.
  • [15] B. Boigelot, A. Legay, and P. Wolper. Iterating transducers in the large (extended abstract). In CAV’03, pages 223–235.
  • [16] B. Boigelot, A. Legay, and P. Wolper. Omega-regular model checking. In TACAS’04, pages 561–575.
  • [17] B. Bollig, P. Habermehl, C. Kern, and M. Leucker. Angluin-style learning of NFA. In IJCAI’09, pages 1004–1009.
  • [18] B. Bollig, J.-P. Katoen, C. Kern, M. Leucker, D. Neider, and D. R. Piegdon. libalf: The automata learning framework. In CAV’10, pages 360–364.
  • [19] F. Bonchi and D. Pous. Checking NFA equivalence with bisimulations up to congruence. In POPL’13, pages 457–468.
  • [20] A. Bouajjani, P. Habermehl, A. Rogalewicz, and T. Vojnar. Abstract regular tree model checking. ENTCS, 149(1):37–48, 2006.
  • [21] A. Bouajjani, P. Habermehl, and T. Vojnar. Abstract regular model checking. In CAV’04, pages 372–386.
  • [22] A. Bouajjani, B. Jonsson, M. Nilsson, and T. Touili. Regular model checking. In CAV’00, pages 403–418.
  • [23] A. Bouajjani and T. Touili. Widening techniques for regular tree model checking. STTT, 14(2):145–165, 2012.
  • [24] M. Chapman, H. Chockler, P. Kesseli, D. Kroening, O. Strichman, and M. Tautschnig. Learning the language of error. In ATVA’15, pages 114–130.
  • [25] D. Chaum. The dining cryptographers problem: Unconditional sender and recipient untraceability. Journal of Cryptology, 1(1):65–75, 1988.
  • [26] Y. Chen, A. Farzan, E. M. Clarke, Y. Tsay, and B. Wang. Learning minimal separating DFA’s for compositional verification. In TACAS’09, pages 31–45.
  • [27] Y. Chen, C. Hsieh, O. Lengál, T. Lii, M. Tsai, B. Wang, and F. Wang. PAC learning-based verification and model synthesis. In ICSE’16, pages 714–724.
  • [28] Y.-F. Chen, C.-D. Hong, A. W. Lin, and P. Rümmer. Learning to prove safety over parameterised concurrent systems. In FMCAD’17.
  • [29] E. W. Dijkstra, R. Bird, M. Rogers, and O.-J. Dahl. Invariance and non-determinacy [and discussion]. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 312(1522):491–499, 1984.
  • [30] A. F. Donaldson. Automatic techniques for detecting and exploiting symmetry in model checking. PhD thesis, University of Glasgow, 2007.
  • [31] J. Esparza. Parameterized verification of crowds of anonymous processes. Dependable Software Systems Engineering, 45:59–71, 2016.
  • [32] A. Farzan, Y. Chen, E. M. Clarke, Y. Tsay, and B. Wang. Extending automated compositional verification to the full class of omega-regular languages. In TACAS’08, pages 2–17.
  • [33] W. Fokkink. Distributed Algorithms. MIT Press, 2013.
  • [34] L. Fribourg and H. Olsén. Reachability sets of parameterized rings as regular languages. ENTCS, 9:40, 1997.
  • [35] P. Garg, C. Löding, P. Madhusudan, and D. Neider. ICE: A robust framework for learning invariants. In CAV’14, pages 69–87.
  • [36] P. Garg, C. Löding, P. Madhusudan, and D. Neider. Learning universally quantified invariants of linear data structures. In CAV’13, pages 813–829.
  • [37] S. M. German and A. P. Sistla. Reasoning about systems with many processes. JACM, 39(3):675–735, 1992.
  • [38] O. Grinchtein, M. Leucker, and N. Piterman. Inferring network invariants automatically. In IJCAR’06, pages 483–497.
  • [39] P. Habermehl and T. Vojnar. Regular model checking using inference of regular languages. ENTCS, 138(3):21–36, 2005.
  • [40] T. Herman. Probabilistic self-stabilization. IPL, 35(2):63–67, 1990.
  • [41] O. H. Ibarra. Reversal-bounded multicounter machines and their decision problems. J. ACM, 25(1):116–133, 1978.
  • [42] A. Israeli and M. Jalfon. Token management schemes and random walks yield self-stabilizing mutual exclusion. In PODC’90, pages 119–131.
  • [43] B. Jonsson and M. Nilsson. Transitive closures of regular relations for verifying infinite-state systems. In TACAS’00, pages 220–234.
  • [44] A. Kaiser, D. Kroening, and T. Wahl. Dynamic cutoff detection in parameterized concurrent programs. In CAV’10, pages 645–659.
  • [45] M. J. Kearns and U. V. Vazirani. An Introduction to Computational Learning Theory. MIT Press, 1994.
  • [46] Y. Kesten, O. Maler, M. Marcus, A. Pnueli, and E. Shahar. Symbolic model checking with rich assertional languages. TCS, 256(1-2):93–112, 2001.
  • [47] A. Legay. T(O)RMC: A tool for (ω)-regular model checking. In CAV’08, pages 548–551.
  • [48] D. Lehmann and M. O. Rabin. On the advantages of free choice: a symmetric and fully distributed solution to the dining philosophers problem. In POPL’81, pages 133–138.
  • [49] O. Lengál, A. W. Lin, R. Majumdar, and P. Rümmer. Fair termination for parameterized probabilistic concurrent systems. In TACAS’17.
  • [50] J. Leroux and G. Sutre. Flat counter automata almost everywhere! In ATVA’05, pages 489–503.
  • [51] A. W. Lin. Accelerating tree-automatic relations. In FSTTCS’12, pages 313–324.
  • [52] A. W. Lin, T. K. Nguyen, P. Rümmer, and J. Sun. Regular symmetry patterns. In VMCAI’16, pages 455–475.
  • [53] A. W. Lin and P. Rümmer. Liveness of randomised parameterised systems under arbitrary schedulers. In CAV’16, pages 112–133.
  • [54] N. A. Lynch, I. Saias, and R. Segala. Proving time bounds for randomized distributed algorithms. In PODC’94, pages 314–323.
  • [55] D. Neider. Applications of Automata Learning in Verification and Synthesis. PhD thesis, RWTH Aachen, 2014.
  • [56] D. Neider and N. Jansen. Regular model checking using solver technologies and automata learning. In NFM, pages 16–31, 2013.
  • [57] D. Neider and U. Topcu. An automaton learning approach to solving safety games over infinite graphs. In TACAS’16, pages 204–221.
  • [58] M. Nilsson. Regular Model Checking. PhD thesis, Uppsala Univ., 2005.
  • [59] R. L. Rivest and R. E. Schapire. Inference of finite automata using homing sequences. Inf. Comput., 103(2):299–347, 1993.
  • [60] B. K. Szymanski. A simple solution to Lamport’s concurrent programming problem with linear wait. In ICS’88, pages 621–626.
  • [61] A. W. To and L. Libkin. Algorithmic metatheorems for decidable LTL model checking over infinite systems. In FoSSaCS’10, pages 221–236.
  • [62] A. Vardhan. Learning To Verify Systems. PhD thesis, Department of Computer Science, University of Illinois at Urbana-Champaign, 2006.
  • [63] A. Vardhan, K. Sen, M. Viswanathan, and G. Agha. Learning to verify safety properties. In ICFME’04, pages 274–289.
  • [64] A. Vardhan and M. Viswanathan. LEVER: A tool for learning based verification. In CAV’06, pages 471–474.
  • [65] Wikipedia. Liquid water pouring puzzles. https://en.wikipedia.org/w/index.php?title=Liquid_water_pouring_puzzles&oldid=764748113, 2017. [Accessed: 24-February-2017].
  • [66] P. Wolper and B. Boigelot. Verifying systems with infinite but regular state spaces. In CAV’98, pages 88–97.

VIII. Appendix

We provide some more examples for regular model checking in the appendix.

Fig. 2: Using learning to solve the RMC problem of Herman’s Protocol. The table on the left is the content of the observation table used by the automata learning algorithm of Rivest and Schapire [59], and the automaton on the right is the inferred candidate DFA.
Example 2 (RMC of Herman’s Protocol).

Consider the RMC problem of Herman’s Protocol from Example 1. Initially, several membership queries are posed to the teacher to produce the closed observation table on the left of Fig. 2. In this example, the teacher answers positively only for words containing an odd number of the symbol T. The learner then constructs the candidate FA on the right of Fig. 2 and poses an equivalence query for it. Observe that the candidate can be used as an inductive invariant in a regular proof. It is easy to verify that it contains all initial configurations and excludes all bad configurations. Condition (3) of a regular proof can be established from the following observation: the candidate FA recognises exactly the set of all configurations with an odd number of tokens. When tokens are discarded in a transition, the number of discarded tokens is always two, and the other two types of transitions do not change the total number of tokens in the system. It follows that taking a transition from any configuration in the candidate language leads to a configuration with an odd number of tokens, which is again in the language.
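The parity argument in this example can also be checked exhaustively for small instances. A minimal sketch, assuming the ring formulation with configurations written as strings over {T, N} (`successors` is an illustrative encoding of the transitions, not the paper's transducer):

```python
from itertools import product

def successors(c):
    """One-step successors of a Herman-style ring configuration: a process
    holding a token passes it to the right; if the right neighbour also
    holds a token, the two tokens are discarded."""
    n, out = len(c), set()
    for i in range(n):
        if c[i] == 'T':
            j = (i + 1) % n
            s = list(c)
            if c[j] == 'T':
                s[i] = s[j] = 'N'        # two tokens meet and are discarded
            else:
                s[i], s[j] = 'N', 'T'    # pass the token to the right
            out.add(''.join(s))
    return out

def parity(c):
    return c.count('T') % 2

# every transition preserves the parity of the number of tokens, so the set
# of configurations with an odd number of tokens is closed under transitions
for n in range(2, 7):
    for c in map(''.join, product('TN', repeat=n)):
        assert all(parity(s) == parity(c) for s in successors(c))
```

Discarding always removes exactly two tokens and passing removes none, which is precisely why parity is an invariant.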

The verification of Herman’s Protocol finishes after the first iteration of learning and hence we cannot see how the learning algorithm uses a counterexample for refinement. Below we introduce a slightly more difficult problem.

Example 3 (RMC of Israeli-Jalfon’s Protocol).

Israeli-Jalfon’s Protocol is a self-stabilising protocol for processes organised in a ring-shaped topology, where a process may hold a token. Again we assume the processes are numbered from 0 to n−1. If the process with number i is chosen by the scheduler, it tosses a coin to decide whether to pass its token to the left or to the right, i.e., to the process with number i−1 or i+1 (modulo n). When two tokens are held by the same process, they are merged. The safety property of interest is that every system configuration has at least one token. The protocol and the corresponding safety property can be modelled as a regular model checking problem whose set of initial configurations contains the configurations in which at least two processes have tokens, and whose set of bad configurations contains those in which all tokens have disappeared. Again we use a regular language to denote the relation that a process is idle, i.e., the process does not change its state. The transition relation can then be specified as a union of the regular expressions in Fig. 3.

Fig. 3: The transition relation of Israeli-Jalfon’s Protocol, given as regular expressions for passing the token right and passing the token left.
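The ring dynamics just described can be explored exhaustively for small rings to confirm the safety property. A hedged sketch, with configurations again written as strings over {T, N} (the encoding of the moves is illustrative):

```python
from itertools import product

def ij_successors(c):
    """Israeli–Jalfon moves on a ring: a process holding a token passes it
    to its left or right neighbour; if the neighbour already holds a token,
    the two tokens merge into one."""
    n, out = len(c), set()
    for i in range(n):
        if c[i] == 'T':
            for j in ((i - 1) % n, (i + 1) % n):
                s = list(c)
                s[i], s[j] = 'N', 'T'    # merging happens implicitly: T stays T
                out.add(''.join(s))
    return out

def check(n):
    """From every configuration with at least two tokens, every reachable
    configuration keeps at least one token."""
    frontier = {c for c in map(''.join, product('TN', repeat=n))
                if c.count('T') >= 2}
    seen = set()
    while frontier:
        c = frontier.pop()
        if c in seen:
            continue
        seen.add(c)
        assert c.count('T') >= 1
        frontier |= ij_successors(c)
    return True
```

Passing never changes the token count and merging reduces it by exactly one, so the count can drop to one but never to zero.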

When the automata learning algorithm of Rivest and Schapire is applied to solve this RMC problem, we obtain an inductive invariant with 4 states in 3 iterations. The first candidate FA in Fig. 4(a) is incorrect because it does not include the initial configuration TT. By analysing the counterexample TT, the learning algorithm adds the suffix T to the set of column labels of the observation table. The second candidate FA in Fig. 4(b) is still incorrect because it contains an unreachable bad configuration NNN. The learning algorithm analyses the counterexample NNN and adds the suffix N to the set of column labels. This time it obtains the candidate FA in Fig. 4(c), which is a valid regular inductive invariant.
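The refinement mechanism at work here — the suffix extracted from a counterexample becomes a new column that separates rows the observation table previously conflated — can be illustrated on a small hypothetical target language (words containing TT, chosen purely for illustration):

```python
member = lambda w: 'TT' in w          # hypothetical target language

def row(prefix, columns):
    """Row of an L*-style observation table: the membership values of the
    prefix extended by each column suffix."""
    return tuple(member(prefix + e) for e in columns)

E = ('',)                             # initially only the empty suffix
conflated = row('N', E) == row('T', E)
E = ('', 'T')                         # counterexample analysis adds suffix T
separated = row('N', E) != row('T', E)
```

Under the empty suffix alone, the prefixes N and T look identical (both rejected), so the hypothesis merges their states; the added column T separates them, forcing the next candidate automaton to distinguish the two.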

(a) First candidate automaton

(b) Second candidate automaton

(c) Third candidate automaton
Fig. 4: Using learning to solve the RMC problem of Israeli-Jalfon’s Protocol. The table on the left of each sub-figure is the content of the observation table used by the automata learning algorithm of Rivest and Schapire [59].