
A Constructive Equivalence between Computation Tree Logic and Failure Trace Testing

01/30/2019
by   Stefan D. Bruda, et al.

The two major systems of formal verification are model checking and algebraic model-based testing. Model checking is based on some form of temporal logic such as linear temporal logic (LTL) or computation tree logic (CTL). One powerful and realistic logic being used is CTL, which is capable of expressing most interesting properties of processes such as liveness and safety. Model-based testing is based on some operational semantics of processes (such as traces, failures, or both) and its associated preorders. The most fine-grained preorder beside bisimulation (mostly of theoretical importance) is based on failure traces. We show that these two most powerful variants are equivalent; that is, we show that for any failure trace test there exists a CTL formula equivalent to it, and the other way around. All our proofs are constructive and algorithmic. Our result allows for parts of a large system to be specified logically while other parts are specified algebraically, thus combining the best of the two (logic and algebraic) worlds.


1 Introduction

Computing systems are already ubiquitous in our everyday life, from entertainment systems at home, to telephone networks and the Internet, and even to health care, transportation, and energy infrastructure. Ensuring the correct behaviour of software and hardware has been one of the goals of Computer Science since the dawn of computing. Since then computer use has skyrocketed and so has the need for assessing correctness.

Historically the oldest verification method, which is still widely used today, is empirical testing [22, 27]. This is a non-formal method which provides input to a system, observes the output, and verifies that the output is the one expected given the input. Such testing cannot check all the possible input combinations and so it can disprove correctness but can never prove it. Deductive verification [16, 18, 25] is chronologically the next verification method developed. It consists of providing proofs of program correctness manually, based on a set of axioms and inference rules. Program proofs provide authoritative evidence of correctness but are time consuming and require highly qualified experts.

Various techniques have been developed to perform program verification with the same effect as deductive reasoning but in an automated manner. These efforts are grouped together in the general field of formal methods. The general technique is to verify a system automatically against some formal specification. Model-based testing and model checking are the two approaches to formal methods that became mainstream. Their roots can be traced to simulation and deductive reasoning, respectively. These formal methods however are sound, complete, and to a large extent automatic. They have proven themselves through the years and are currently in wide use throughout the computing industry.

In model-based testing [4, 13, 29] the specification of a system is given algebraically, with the underlying semantics given in an operational manner as a labeled transition system (LTS for short), or sometimes as a finite automaton (a particular, finite kind of LTS). Such a specification is usually an abstract representation of the system’s desired behaviour. The system under test is modeled using the same formalism (either finite or infinite LTS). The specification is then used to systematically and formally derive tests, which are then applied to the system under test. The way the tests are generated ensures soundness and completeness. In this paper we focus on arguably the most powerful method of model-based testing, namely failure trace testing [20]. Failure trace testing also introduces a smaller set (of sequential tests) that is sufficient to assess the failure trace relation.

By contrast, in model checking [9, 10, 24] the system specification is given in some form of temporal logic. The specification is thus a (logical) description of the desired properties of the system. The system under test is modeled as Kripke structures, another formalism similar to transition systems. The model checking algorithm then determines whether the initial states of the system under test satisfy the specification formulae, in which case the system is deemed correct. There are numerous temporal logic variants used in model checking, including CTL*, CTL and LTL. In this paper we focus on CTL.

There are advantages as well as disadvantages to each of these formal methods techniques. Model checking is a complete verification technique, which has been widely studied and also widely used in practice. The main disadvantage of this technique is that it is not compositional. It is also the case that model checking is based on the system under test being modeled using a finite state formalism, and so does not scale very well with the size of the system under test. By contrast, model-based testing is compositional by definition (given its algebraic nature), and so has better scalability. In practice however it is not necessarily complete given that some of the generated tests could take infinite time to run and so their success or failure cannot be readily ascertained. The logical nature of specification for model checking allows us to only specify the properties of interest, in contrast with the labeled transition systems or finite automata used in model-based testing which more or less require that the whole system be specified.

Some properties of a system may be naturally specified using temporal logic, while others may be specified using finite automata or labeled transition systems. Such a mixed specification could be given by somebody else, but most often algebraic specifications are just more convenient for some components while logic specifications are more suitable for others. However, such a mixed specification cannot be verified. Parts of it can be model checked and some other parts can be verified using model-based testing. However, no global algorithm for the verification of the whole system exists. Before even thinking of verifying such a specification we need to convert one specification to the form of the other.

We describe in this paper precisely such a conversion. We first propose two equivalence relations between labeled transition systems (the semantic model used in model-based testing) and Kripke structures (the semantic model used in model checking), and then we show that for each CTL formula there exists an equivalent failure trace test suite, and the other way around. In effect, we show that the two (algebraic and logic) formalisms are equivalent. All our proofs are constructive and algorithmic, so that implementing back and forth automated conversions is an immediate consequence of our result.

We believe that we are thus opening the domain of combined, algebraic and logic methods of formal system verification. The advantages of such a combined method stem from the above considerations but also from the lack of compositionality of model checking (which can thus be side-stepped by switching to algebraic specifications), from the lack of completeness of model-based testing (which can be side-stepped by switching to model checking), and from the potentially attractive feature of model-based testing of incremental application of a test suite insuring correctness to a certain degree (which the all-or-nothing model-checking lacks).

The remainder of this paper is organized as follows: We introduce basic concepts including model checking, temporal logic, model-based testing, and failure trace testing in the next section. Previous work is reviewed briefly in Section 3. Section 4 defines the concept of equivalence between LTS and Kripke structures, together with an algorithmic function for converting an LTS into its equivalent Kripke structure. Two such equivalence relations and conversion functions are offered (Sections 4.1 and 4.2, respectively). Section 5 then presents our algorithmic conversions from failure trace tests to CTL formulae (Section 5.1, with an improvement in Section 5.2) and the other way around (Section 5.3). We discuss the significance and consequences of our work in Section 6. For the remainder of this paper results proved elsewhere are introduced as Propositions, while original results are stated as Theorems, Lemmata, or Corollaries.

2 Preliminaries

This section is dedicated to introducing the necessary background information on model checking, temporal logic, and failure trace testing. For technical reasons we also introduce TLOTOS, a process algebra used for describing algebraic specifications, tests, and systems under test. The reason for using this particular language is that earlier work on failure trace testing uses this language as well.

Given a set of symbols A we use A* as usual to denote exactly all the strings of symbols from A. The empty string, and only the empty string, is denoted by ε. We use ω to refer to ℵ0, the cardinality of the set of natural numbers. The power set of a set A is denoted as usual by 2^A.

2.1 Temporal Logic and Model Checking

A specification suitable for model checking is described by a temporal logic formula. The system under test is given as a Kripke structure. The goal of model checking is then to find the set of all states in the Kripke structure that satisfy the given logic formula. The system then satisfies the specification provided that all the designated initial states of the respective Kripke structure satisfy the logic formula.

Formally, a Kripke structure [10] over a set AP of atomic propositions is a tuple K = (S, S0, R, L), where S is a set of states, S0 ⊆ S is the set of initial states, R ⊆ S × S is the transition relation, and L : S → 2^AP is a function that assigns to each state exactly all the atomic propositions that are true in that state. As usual we write s → s′ instead of (s, s′) ∈ R. It is usually assumed [10] that R is total, meaning that for every state s ∈ S there exists a state s′ ∈ S such that s → s′. Such a requirement can however be easily established by creating a “sink” state that has no atomic proposition assigned to it, is the target of all the transitions from states with no other outgoing transitions, and has one outgoing “self-loop” transition back to itself.
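The sink-state construction can be sketched directly. The following Python fragment is illustrative (the names and the representation are our own): a Kripke structure is given by its states, a set of (s, s′) transition pairs, and a labeling dictionary, and the fragment makes the transition relation total exactly as described above:

```python
def make_total(states, transitions, labels):
    """Make the transition relation total by adding a label-free 'sink' state.

    states: set of state names; transitions: set of (s, s') pairs;
    labels: dict mapping each state to its set of atomic propositions.
    The sink carries no propositions, receives a transition from every
    state with no outgoing transition, and loops back to itself.
    """
    sources = {s for (s, _) in transitions}
    dead = {s for s in states if s not in sources}  # states with no successor
    if not dead:
        return states, transitions, labels
    new_states = states | {"sink"}
    new_transitions = transitions | {(s, "sink") for s in dead} | {("sink", "sink")}
    new_labels = dict(labels)
    new_labels["sink"] = set()
    return new_states, new_transitions, new_labels
```

After the call every state has at least one successor, so the resulting relation is total.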

A path π in a Kripke structure is a sequence of states s0 s1 s2 … such that si → si+1 for all i ≥ 0. The path π starts from state s0. Any state may be the start of multiple paths. It follows that all the paths starting from a given state s can be represented together as a computation tree with nodes labeled with states. Such a tree is rooted at s, and (s1, s2) is an edge in the tree if and only if s1 → s2. Some temporal logics reason about computation paths individually, while some other temporal logics reason about whole computation trees.

There are several temporal logics currently in use. We will focus in this paper on the CTL* family [10, 15] and more precisely on the CTL variant. CTL* is a general temporal logic which is usually restricted for practical considerations. One such restriction is the linear-time temporal logic or LTL [10, 23], which is an example of a temporal logic that represents properties of individual paths. Another restriction is the computation tree logic or CTL [8, 10], which represents properties of computation trees.

In CTL* the properties of individual paths are represented using five temporal operators: X (for a property that has to be true in the next state of the path), F (for a property that has to eventually become true along the path), G (for a property that has to hold in every state along the path), U (for a property that has to hold continuously along a path until another property becomes true and remains true for the rest of the path), and R (for a property that has to hold along a path until another property becomes true and releases the first property from its obligation). These path properties are then put together so that they become state properties using the quantifiers A (for a property that has to hold on all the outgoing paths) and E (for a property that needs to hold on at least one of the outgoing paths).

CTL is a subset of CTL*, with the additional restriction that the temporal constructs X, F, G, U, and R must be immediately preceded by one of the path quantifiers A or E. More precisely, the syntax of CTL formulae is defined as follows:

  φ ::= ⊤ | ⊥ | a | ¬φ1 | φ1 ∧ φ2 | φ1 ∨ φ2 | AX φ1 | EX φ1 | AF φ1 | EF φ1 | AG φ1 | EG φ1 | A[φ1 U φ2] | E[φ1 U φ2] | A[φ1 R φ2] | E[φ1 R φ2]

where a ∈ AP, and φ1, φ2 are all state formulae.
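As a standard illustration (our examples, not the paper’s), typical liveness and reachability properties are written in this syntax as follows:

```latex
% every request is eventually followed by a grant, on all paths:
\mathbf{AG}\,(\mathit{request} \rightarrow \mathbf{AF}\,\mathit{grant})
% some execution reaches an unsafe state:
\mathbf{EF}\,\mathit{unsafe}
```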

CTL formulae are interpreted over states in Kripke structures. Specifically, the CTL semantics is given by the operator ⊨ such that K, s ⊨ φ means that the formula φ is true in the state s of the Kripke structure K. All the CTL formulae are state formulae, but their semantics is defined using the intermediate concept of path formulae. In this context the notation K, π ⊨ φ means that the formula φ is true along the path π in the Kripke structure K. The operator ⊨ is defined inductively as follows:

  1. s ⊨ ⊤ is true and s ⊨ ⊥ is false for any state s in any Kripke structure K.

  2. s ⊨ a, a ∈ AP, if and only if a ∈ L(s).

  3. s ⊨ ¬φ if and only if ¬(s ⊨ φ) for any state formula φ.

  4. s ⊨ φ1 ∧ φ2 if and only if s ⊨ φ1 and s ⊨ φ2 for any state formulae φ1 and φ2.

  5. s ⊨ φ1 ∨ φ2 if and only if s ⊨ φ1 or s ⊨ φ2 for any state formulae φ1 and φ2.

  6. s ⊨ E φ for some path formula φ if and only if there exists a path π = s s1 s2 … starting from s such that π ⊨ φ.

  7. s ⊨ A φ for some path formula φ if and only if π ⊨ φ for all paths π starting from s.

We use π(i) to denote the i-th state of a path π, with the first state being π(0). The operator ⊨ for path formulae is then defined as follows:

  1. π ⊨ X φ if and only if π(1) ⊨ φ for any state formula φ.

  2. π ⊨ φ1 U φ2 for any state formulae φ1 and φ2 if and only if there exists k ≥ 0 such that π(j) ⊨ φ1 for all 0 ≤ j < k, and π(j) ⊨ φ2 for all j ≥ k.

  3. π ⊨ φ1 R φ2 for any state formulae φ1 and φ2 if and only if for all j ≥ 0, if π(i) ⊭ φ1 for every i < j then π(j) ⊨ φ2.
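On a finite Kripke structure these clauses turn into simple fixed-point computations, which is how CTL model checkers evaluate the branching operators. As an illustrative sketch (names are ours, not the paper’s), the set of states satisfying a formula of the shape EF φ, i.e. the states from which some path reaches a φ-state, is a least fixed point:

```python
def ef_states(transitions, sat_phi):
    """Least-fixed-point computation of the states from which some path
    eventually reaches a state satisfying phi (the reachability behind EF phi).

    transitions: set of (s, s') pairs of a finite Kripke structure;
    sat_phi: the set of states already known to satisfy phi.
    """
    result = set(sat_phi)
    changed = True
    while changed:  # propagate backwards until nothing new is added
        changed = False
        for (s, t) in transitions:
            if t in result and s not in result:
                result.add(s)
                changed = True
    return result
```

The same backward-propagation pattern, with different update rules, handles the remaining E- and A-quantified operators.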

2.2 Labeled Transition Systems and Stable Failures

CTL semantics is defined over Kripke structures, where each state is labeled with atomic propositions. By contrast, the common model used for system specifications in model-based testing is the labeled transition system (LTS), where the labels (or actions) are associated with the transitions instead.

An LTS [19] is a tuple (P, p0, A, →) where P is a countable, non-empty set of states, p0 ∈ P is the initial state, and A is a countable set of actions. The actions in A are called visible (or observable), by contrast with the special, unobservable action τ ∉ A (also called internal action). The relation → ⊆ P × (A ∪ {τ}) × P is the transition relation; we use p −a→ q instead of (p, a, q) ∈ →. A transition p −a→ q means that state p becomes state q after performing the (visible or internal) action a.

The notation p −a→ stands for ∃q : p −a→ q. The sets of states and transitions can also be considered global, in which case an LTS is completely defined by its initial state. We therefore blur whenever convenient the distinction between an LTS and a state, calling them both “processes”. Given that → is a relation rather than a function, and also given the existence of the internal action, an LTS defines a nondeterministic process.

A path (or run) π starting from state p0 is a sequence p0 a1 p1 a2 p2 … of length k ≤ ω such that pi−1 −ai→ pi for all 0 < i ≤ k. We use |π| to refer to k, the length of π. If |π| < ω, then we say that π is finite. The trace of π, written trace(π), is the sequence of all the visible actions that occur in the run π listed in their order of occurrence and including duplicates. Note in particular that internal actions do not appear in traces. The set of finite traces of a process p is defined as Tr(p) = {trace(π) : π is a finite run starting from p}. If we are not interested in the intermediate states of a run then we use the notation p =w⇒ p′ to state that there exists a run π starting from state p and ending at state p′ such that trace(π) = w. We also use p =w⇒ instead of ∃p′ : p =w⇒ p′.
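For a finite-state LTS the finite traces up to a given length can be enumerated directly from this definition. The sketch below is illustrative (names are ours; τ is written as the string "tau", and transitions are (state, action, state) triples); internal actions contribute nothing to a trace, exactly as stated above:

```python
from collections import deque

TAU = "tau"  # stand-in for the internal action

def finite_traces(transitions, start, max_len):
    """Collect all visible-action traces of length <= max_len.

    transitions: set of (state, action, state) triples; internal 'tau'
    steps contribute no symbol to the trace.
    """
    traces = {()}
    queue = deque([(start, ())])
    seen = {(start, ())}
    while queue:
        state, trace = queue.popleft()
        for (s, a, t) in transitions:
            if s != state:
                continue
            new_trace = trace if a == TAU else trace + (a,)
            if len(new_trace) > max_len:
                continue
            traces.add(new_trace)
            if (t, new_trace) not in seen:  # avoids looping on tau cycles
                seen.add((t, new_trace))
                queue.append((t, new_trace))
    return traces
```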

A process that has no outgoing internal action cannot make any progress unless it performs a visible action. We say that such a process is stable [26]. We write p↓ whenever we want to say that process p is stable. Formally, p↓ if and only if ¬(p −τ→). A stable process responds predictably to any set of actions X ⊆ A, in the sense that its response depends exclusively on its outgoing transitions. Whenever there is no action a ∈ X such that p −a→ we say that p refuses the set X. Only stable processes are able to refuse actions; unstable processes refuse actions “by proxy”: they refuse a set X whenever they can internally become a stable process that refuses X. Formally, p refuses X (written p ref X) if and only if ∃p′ : p =ε⇒ p′ ∧ p′↓ ∧ ∀a ∈ X : ¬(p′ −a→).

To describe the behaviour of a process in terms of refusals we need to record each refusal together with the trace that causes that refusal. An observation of a refusal plus the trace that causes it is called a stable failure [26]. Formally, (w, X) is a stable failure of process p if and only if ∃p′ : p =w⇒ p′ ∧ p′ ref X. The set of stable failures of p is then SF(p) = {(w, X) : ∃p′ : p =w⇒ p′ ∧ p′ ref X}.

Several preorder relations (that is, binary relations that are reflexive and transitive but not necessarily symmetric or antisymmetric) can be defined over processes based on their observable behaviour (including traces, refusals, stable failures, etc.) [5]. Such preorders can then be used in practice as implementation relations, which in turn create a process-oriented specification technique. The stable failure preorder is defined based on stable failures and is one of the finest such preorders (but not the absolute finest) [5].

Let p and q be two processes. The stable failure preorder ⊑SF is defined as follows: p ⊑SF q if and only if Tr(p) ⊆ Tr(q) and SF(p) ⊆ SF(q). Given the preorder ⊑SF one can naturally define the stable failure equivalence ≃SF: p ≃SF q if and only if p ⊑SF q and q ⊑SF p.
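On finite processes both Tr and SF can be enumerated up to a trace-length bound, which gives a bounded check of the preorder as set inclusion. The sketch below is illustrative (all names are ours; τ is written as the string "tau", transitions are (state, action, state) triples, and refusal sets are drawn from a given finite alphabet):

```python
from itertools import combinations

TAU = "tau"  # stand-in for the internal action

def _succ(transitions, state, action):
    """Successor states of `state` under `action`."""
    return {t for (s, a, t) in transitions if s == state and a == action}

def _eps_closure(transitions, states):
    """All states reachable through zero or more internal (tau) steps."""
    result, frontier = set(states), set(states)
    while frontier:
        frontier = {t for s in frontier
                    for t in _succ(transitions, s, TAU)} - result
        result |= frontier
    return result

def _after(transitions, start, word):
    """States reachable from `start` by the visible word, tau steps interleaved."""
    current = _eps_closure(transitions, {start})
    for a in word:
        current = _eps_closure(
            transitions, {t for s in current for t in _succ(transitions, s, a)})
    return current

def stable_failures(transitions, start, alphabet, max_len):
    """Enumerate (trace, refusal) pairs with |trace| <= max_len."""
    words, frontier = [()], [()]
    for _ in range(max_len):
        frontier = [w + (a,) for w in frontier for a in sorted(alphabet)]
        words += frontier
    sf = set()
    for w in words:
        for p in _after(transitions, start, w):
            if _succ(transitions, p, TAU):
                continue  # only stable states refuse
            enabled = {a for a in alphabet if _succ(transitions, p, a)}
            refusable = sorted(set(alphabet) - enabled)
            for k in range(len(refusable) + 1):
                for combo in combinations(refusable, k):
                    sf.add((w, frozenset(combo)))
    return sf
```

A bounded check of p ⊑SF q then amounts to inclusion of the enumerated stable-failure sets (together with trace inclusion) for the chosen bound.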

2.3 Failure Trace Testing

In model-based testing [4] a test runs in parallel with the system under test and synchronizes with it over visible actions. A run of a test t and a process p represents a possible sequence of states and actions of t and p running synchronously. The outcome of such a run is either success (⊤) or failure (⊥). The precise definition of synchronization, success, and failure depends on the particular type of tests being considered. We will present below such definitions for the particular framework of failure trace testing.

Given the nondeterministic nature of LTS there may be multiple runs for a given process and a given test, and so a set of outcomes is necessary to give the results of all the possible runs. We denote by Obs(p, t) the set of exactly all the possible outcomes of all the runs of p and t. Given the existence of such a set of outcomes, two definitions of a process passing a test are possible. More precisely, a process p may pass a test t whenever some run is successful (formally, p may t if and only if ⊤ ∈ Obs(p, t)), while p must pass t whenever all runs are successful (formally, p must t if and only if Obs(p, t) = {⊤}).

In what follows we use the notation init(p) = {a ∈ A : p −a→}. A failure trace [20] is a string f = A0 a1 A1 a2 A2 … an An, n ≥ 0, with ai ∈ A* (sequences of actions) and Ai ⊆ A (sets of refusals). Let p be a process such that p = p0 =a1⇒ p1 =a2⇒ ⋯ =an⇒ pn; f is then a failure trace of p whenever the following two conditions hold:

  • If pi↓, then Ai ⊆ A ∖ init(pi); for a stable state the failure trace refuses any set of events that cannot be performed in that state (including the empty set).

  • If ¬(pi↓) then Ai = ∅; whenever pi is not a stable state it refuses an empty set of events by definition.

In other words, we obtain a failure trace of by taking a trace of and inserting refusal sets after stable states.

Systems and tests can be concisely described using the testing language TLOTOS [3, 20], which will also be used in this paper. A is the countable set of observable actions, ranged over by a. The set of processes or tests is ranged over by t, t1, and t2, while T ranges over the sets of tests or processes. The syntax of TLOTOS is then defined as follows:

  t ::= stop | a; t1 | τ; t1 | θ; t1 | pass | t1 [] t2 | Σ T

The semantics of TLOTOS is then the following:

  1. inaction (stop): no rules.

  2. action prefix: a; t1 −a→ t1 for any a ∈ A ∪ {τ}.

  3. deadlock detection: θ; t1 −θ→ t1.

  4. successful termination: pass −γ→ stop.

  5. choice: with x ∈ A ∪ {γ, θ, τ}, whenever t1 −x→ t1′ then t1 [] t2 −x→ t1′ and t2 [] t1 −x→ t1′.

  6. generalized choice: with x ∈ A ∪ {γ, θ, τ}, whenever t −x→ t′ for some t ∈ T then Σ T −x→ t′.

Failure trace tests are defined in TLOTOS using the special actions γ, which signals the successful completion of a test, and θ, which is the deadlock detection label (the precise behaviour will be given later). Processes (or LTS) can also be described as TLOTOS processes, but such a description does not contain γ or θ. A test t runs in parallel with the system under test p according to the parallel composition operator. This operator also defines the semantics of θ as the lowest priority action: a θ transition of the test is taken only when no synchronization on a visible action and no internal step is available.

Given that both processes and tests can be nondeterministic we have a set of possible runs of a process and a test. The outcome of a particular run e of a test t and a process p under test is success (⊤) whenever the last symbol in e is γ, and failure (⊥) otherwise. One can then distinguish the possibility and the inevitability of success for a test as mentioned earlier: p may t if and only if ⊤ ∈ Obs(p, t), and p must t if and only if Obs(p, t) = {⊤}.

The set ST of sequential tests is defined as follows [20]: pass ∈ ST, if t ∈ ST then a; t ∈ ST for any a ∈ A, and if t ∈ ST then Σ{a; stop : a ∈ A′} [] θ; t ∈ ST for any A′ ⊆ A.

A bijection between failure traces and sequential tests exists [20]. For a sequential test t the failure trace ftr(t) is defined inductively as follows: ftr(pass) = ε, ftr(a; t) = a ftr(t), and ftr(Σ{a; stop : a ∈ A′} [] θ; t) = A′ ftr(t). Conversely, let f be a failure trace. Then we can inductively define the sequential test st(f) as follows: st(ε) = pass, st(a f) = a; st(f), and st(A′ f) = Σ{a; stop : a ∈ A′} [] θ; st(f). For all failure traces f we have that ftr(st(f)) = f, and for all tests t we have st(ftr(t)) = t. We then define the failure trace preorder ⊑FT as follows: p ⊑FT q if and only if FT(p) ⊆ FT(q), where FT(p) is the set of failure traces of p.

The above bijection effectively shows that the failure trace preorder (which is based on the behaviour of processes) can be readily converted into a testing-based preorder (based on the outcomes of tests applied to processes). Indeed, there exists a successful run of a process p in parallel with the test st(f) if and only if f is a failure trace of both p and st(f). Furthermore, these two preorders are equivalent to the stable failure preorder introduced earlier:

Proposition 1.

[20] Let be a process, a sequential test, and a failure trace. Then if and only if , where .

Let and be processes. Then if and only if if and only if for all failure trace tests if and only if .

Let be a failure trace test. Then there exists such that if and only if .

We note in passing that unlike other preorders, ⊑FT (or equivalently ⊑SF) can be in fact characterized in terms of may testing only; the must operator need not be considered any further.

3 Previous Work

The investigation into connecting logical and algebraic frameworks of formal specification and verification has not been pursued in much depth. To our knowledge the only substantial investigation on the matter is based on linear-time temporal logic (LTL) and its relation with Büchi automata [28]. Such an investigation started with timed Büchi automata [1] approaches to LTL model checking [10, 13, 17, 30, 31].

An explicit equivalence between LTL and the may and must testing framework of De Nicola and Hennessy [13] was developed as a unified semantic theory for heterogeneous system specifications featuring mixtures of labeled transition systems and LTL formulae [11]. This theory uses Büchi automata [28] rather than LTS as the underlying semantic formalism. The Büchi must-preorder for a certain class of Büchi processes was first established by means of trace inclusion. Then LTL formulae were converted into Büchi processes whose languages contain the traces that satisfy the formula.

The relation between may and must testing and temporal logic mentioned above [11] was also extended to the timed (or real-time) domain [6, 12]. Two refinement timed preorders similar to may and must testing were introduced, together with behavioural and language-based characterizations for these relations (to show that the new preorders are extensions of the traditional preorders). An algorithm for automated test generation out of formulae written in a timed variant of LTL called Timed Propositional Temporal Logic (TPTL) [2] was then introduced.

To our knowledge there were only two efforts on the equivalence between CTL and algebraic specifications [7, 14], one of which is our preliminary version of this paper. This earlier version [7] presents the equivalence between LTS and Kripke structures (Section 4 below) and also a tentative (but not complete) conversion from failure trace tests to CTL formulae. The other effort [14] is the basis of our second equivalence relation between LTS and Kripke structures (again see Section 4 below).

4 Two Constructive Equivalence Relations between LTS and Kripke Structures

We believe that the only meaningful basis for constructing a Kripke structure equivalent to a given LTS is by taking the outgoing actions of an LTS state as the propositions that hold on the equivalent Kripke structure state. This idea is amenable to at least two algorithmic conversion methods.

4.1 Constructing a Compact Kripke Structure Equivalent with a Given LTS

We first define an LTS satisfaction operator similar to the one on Kripke structures in a natural way (and according to the intuition presented above).

Definition 1.

Satisfaction for processes: A process p satisfies a ∈ A, written by abuse of notation p ⊨ a, iff p −a→. That p satisfies some (general) CTL* state formula is defined inductively as follows: Let φ1 and φ2 be some state formulae unless stated otherwise; then,

  1. p ⊨ ⊤ is true and p ⊨ ⊥ is false for any process p.

  2. p ⊨ ¬φ1 iff ¬(p ⊨ φ1).

  3. p ⊨ φ1 ∧ φ2 iff p ⊨ φ1 and p ⊨ φ2.

  4. p ⊨ φ1 ∨ φ2 iff p ⊨ φ1 or p ⊨ φ2.

  5. p ⊨ E φ1 for some path formula φ1 iff there is a path π starting from p such that π ⊨ φ1.

  6. p ⊨ A φ1 for some path formula φ1 iff π ⊨ φ1 for all paths π starting from p.

We use π(i) to denote the i-th state of a path π (with the first state being state 0, or π(0)). The definition of ⊨ for LTS paths is:

  1. π ⊨ X φ1 iff π(1) ⊨ φ1.

  2. π ⊨ φ1 U φ2 iff there exists k ≥ 0 such that π(k) ⊨ φ2, π(j) ⊨ φ1 for all 0 ≤ j < k, and π(j) ⊨ φ2 for all j ≥ k.

  3. π ⊨ φ1 R φ2 iff for all j ≥ 0, if π(i) ⊭ φ1 for every i < j then π(j) ⊨ φ2.

We also need to define a weaker satisfaction operator for CTL. Such an operator is similar to the original, but is defined over a set of states rather than a single state. By abuse of notation we denote this operator by ⊨ as well.

Definition 2.

Satisfaction over sets of states: Consider a Kripke structure K = (S, S0, R, L) over AP. For some set Q ⊆ S and some CTL state formula φ we define Q ⊨ φ as follows, with φ1 and φ2 state formulae unless stated otherwise:

  1. Q ⊨ ⊤ is true and Q ⊨ ⊥ is false for any set Q in any Kripke structure K.

  2. Q ⊨ a iff for some s ∈ Q, a ∈ L(s).

  3. Q ⊨ ¬φ1 iff ¬(Q ⊨ φ1).

  4. Q ⊨ φ1 ∧ φ2 iff Q ⊨ φ1 and Q ⊨ φ2.

  5. Q ⊨ φ1 ∨ φ2 iff Q ⊨ φ1 or Q ⊨ φ2.

  6. Q ⊨ E φ1 for some path formula φ1 iff for some s ∈ Q there exists a path π starting from s such that π ⊨ φ1.

  7. Q ⊨ A φ1 for some path formula φ1 iff for some s ∈ Q it holds that π ⊨ φ1 for all paths π starting from s.

With these definitions we can introduce the following equivalence relation between Kripke structures and LTS.

Definition 3.

Equivalence between Kripke structures and LTS: Given a Kripke structure K and a set of states Q of K, the pair (K, Q) is equivalent to a process p, written p ≃ (K, Q) (or (K, Q) ≃ p), if and only if for any CTL* formula φ, p ⊨ φ if and only if Q ⊨ φ.

It is easy to see that the relation is indeed an equivalence relation. This equivalence has the useful property that given an LTS it is easy to construct its equivalent Kripke structure.

Theorem 2.

There exists an algorithmic function f which converts a labeled transition system p into a Kripke structure f(p) and a set of states Q such that p ≃ (f(p), Q).

Specifically, for any labeled transition system , its equivalent Kripke structure is defined as where:

  1. .

  2. .

  3. contains exactly all the transitions such that , and

    1. for any , ,

    2. for some and for any , , and

    3. if then and (these loops ensure that the relation is complete).

  4. such that , where .


Each state of the Kripke structure is labeled with the LTS state it came from, and the set of propositions that hold in that Kripke state.

Figure 1: A conversion of an LTS to an equivalent Kripke structure.

Figure 1 illustrates graphically how we convert a labelled transition system to its equivalent Kripke structure. As illustrated in the figure, we combine each state in the labelled transition system with its actions, provided as properties, to form new states in the equivalent Kripke structure. The transition relation of the Kripke structure is formed by the new states and the corresponding transition relation in the original labelled transition system. The labeling function in the equivalent Kripke structure links the actions to their relevant states. An LTS state is split into multiple Kripke states whenever it can evolve differently by performing different actions (like state p in the figure).
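One plausible reading of this construction, matching the figure, can be sketched as follows. The names and the representation are ours, not the paper’s exact definition, and the sketch deliberately omits the treatment of deadlocked states and the self-loops that keep the transition relation total: a Kripke state is a pair of an LTS state and one of its outgoing actions, labeled with that action.

```python
def lts_to_kripke(transitions, start):
    """Illustrative sketch of the LTS-to-Kripke conversion of Figure 1.

    transitions: set of (state, action, state) triples; start: initial state.
    A Kripke state is a pair (LTS state, outgoing action), labeled by that
    action; (p, a) -> (q, b) whenever p -a-> q and b is outgoing from q.
    """
    out = {}
    for (s, a, t) in transitions:
        out.setdefault(s, set()).add(a)
    k_states = {(s, a) for s in out for a in out[s]}
    k_init = {(start, a) for a in out.get(start, set())}
    k_trans = {((s, a), (t, b))
               for (s, a, t) in transitions
               for b in out.get(t, set())}
    labels = {ks: {ks[1]} for ks in k_states}
    return k_states, k_init, k_trans, labels
```

Note how an LTS state with several distinct outgoing actions yields several Kripke states, one per action, as in the figure.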

Using such a conversion we can define the semantics of CTL* formulae with respect to a process rather than Kripke structure. One problem—that required the new satisfaction operator for sets of Kripke states as defined in Definition 2—is introduced by the fact that one state of a process can generate multiple initial Kripke states. We believe that the weaker satisfaction operator from Definition 2 is introduced without loss of generality and may even be worked around by such mechanisms as considering processes with one outgoing transition (a “start” action) followed by their normal behaviour.

Proof of Theorem 2. The proof relies on the properties of the syntax and semantics of CTL* formulae and is done by structural induction.

For the basis of the induction, we note that is true for any process and for any state in any Kripke structure. iff is therefore immediate. The same goes for (no process and no state in any Kripke structure satisfy ). iff by the definition of ; indeed, (so that ) iff for some state that is, .

On to the inductive step. iff for any formula by induction hypothesis, so we take and so iff .

Suppose that and [or] (so that []). This is equivalent by induction hypothesis to and [or] , that is, [], as desired.

Let now be a path starting from a process . According to the definition of , all the equivalent paths in the Kripke structure have the form , such that for all . Clearly, such a path exists. Moreover, given some path of form , a path of form also exists (because no path in the Kripke structure comes out of the blue; instead all of them come from paths in the original process). By abuse of notation we write , with the understanding that this incarnation of is not necessarily a function (it could be a relation) but is complete (there exists a path for any path ). With this notion of equivalent paths we can now proceed to path formulae.

Consider the formula such that some path satisfies it. Whenever , and therefore (by inductive assumption, for indeed is a state, not a path formula) and therefore , as desired. Conversely, , that is, means that by inductive assumption, and so .

The proofs for the F, G, U, and R operators proceed similarly. Whenever π ⊨ F φ, there is a state s along π such that s ⊨ φ. By induction hypothesis the corresponding Kripke state satisfies φ, and so the equivalent Kripke path satisfies F φ. The other way (from the Kripke path back to π) is similar. The G operator requires that all the states along π satisfy φ, which implies that all the states in any equivalent Kripke path satisfy φ, and thus that path satisfies G φ (and again things proceed similarly in the other direction). In all, the induction hypothesis establishes a bijection between the states of π and the states of any equivalent Kripke path. This bijection is used in the proofs for U and R just as it was used in the above proofs for F and G. Indeed, the states along the path π will satisfy φ1 or φ2 as appropriate for the respective operator, but this translates into the same set of states satisfying φ1 and φ2 in the equivalent Kripke path, so the whole formula (using U or R) holds in π iff it holds in its equivalent path.

Finally, given a formula , implies that there exists a path starting from that satisfies . By induction hypothesis there is then a path starting from that satisfies (there is at least one such path) and thus . The other way around is similar, and so is the proof for (all the paths satisfy so all the paths satisfy as well; there are no supplementary paths, since all the paths in come from the paths in ). ∎

4.2 Yet Another Constructive Equivalence between LTS and Kripke Structures

The function developed earlier produces a very compact Kripke structure. However, a state in the original LTS can result in multiple equivalent states in the resulting Kripke structure, which in turn requires a modified notion of satisfaction (over sets of states, see Definition 2) and thus a non-standard model checking algorithm. A different conversion algorithm [14] avoids this issue, at the expense of a considerably larger Kripke structure. We now explore a similar equivalence.

The just mentioned conversion algorithm [14] is based on introducing intermediate states in the resulting Kripke structure. These states are labelled with the special proposition , which is understood to mark a state that is ignored in the process of determining the truth value of a CTL formula; if this proposition labels a state then it is its only label. We therefore base our construction on the following definition of equivalence between processes and Kripke structures:

Definition 4.

Satisfaction for processes: Given a Kripke structure and a state of , the pair is equivalent to a process , written as (or ) iff for any CTL* formula iff . The operator is defined for processes in Definition 1 and for Kripke structures as follows:

  1. iff

  2. iff

  3. iff

  4. iff

  5. iff

  6. iff

  7. iff

  8. iff

  9. iff

  10. iff

  11. iff

Note that the definition above is stated in terms of CTL* rather than CTL; however, CTL* is stronger and so equivalence under CTL* implies equivalence under CTL.

Most of the equivalence is immediate. However, some cases need to make sure that the states labelled are ignored. This happens first in , which is equivalent to . Indeed, needs to hold immediately, except that any preceding states labelled must be ignored; hence must eventually become true, and when it does so it releases the chain of labels. The formula for is constructed using the same idea (except that the formula releasing the possible chain of starts from the next state).

Then the expression means that must remain true, with possible interleavings of , until becomes true. Similarly, requires that holds (with the usual interleaved ) until it is released by becoming true.
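The clauses just discussed can be written out as follows. The exact shapes below are an assumption reconstructed from the surrounding prose (with $\pi$ standing for the special proposition and primes marking recursively translated subformulae), not a verbatim quote of Definition 4:

```latex
% A plausible, reconstructed rendering of the pi-skipping clauses
\begin{align*}
  a &\;\rightsquigarrow\; \pi \,\mathbf{U}\, a
      && \text{skip any leading chain of $\pi$-labelled states}\\
  \mathbf{X}\,\varphi &\;\rightsquigarrow\; \mathbf{X}\,(\pi \,\mathbf{U}\, \varphi')
      && \text{the same idea, released starting from the next state}\\
  \varphi\,\mathbf{U}\,\psi &\;\rightsquigarrow\; (\varphi' \lor \pi)\,\mathbf{U}\,\psi'
      && \text{$\varphi$ holds, with possible $\pi$ interleavings, until $\psi$}\\
  \varphi\,\mathbf{R}\,\psi &\;\rightsquigarrow\; \varphi'\,\mathbf{R}\,(\psi' \lor \pi)
      && \text{$\psi$ holds, with $\pi$ interleavings, until released by $\varphi$}
\end{align*}
```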

Based on this equivalence we can define a new conversion of LTS into equivalent Kripke structures. This conversion is again based on a similar conversion [14] developed in a different context.

Theorem 3.

There exist at least two algorithmic functions for converting LTS into equivalent Kripke structures. The first is the function described in Theorem 2.

The new function is defined as follows, with a fresh symbol not in : given an LTS , the Kripke structure is given by:

  1. ;

  2. ;

  3. ;

  4. ;

  5. For .

Then .

Proof.

We prove the stronger equivalence over CTL* rather than CTL by structural induction. Since is effectively handled by the satisfaction operator introduced in Definition 4, it will turn out that there is no need to mention it at all.

For the basis of the induction, we note that is true for any process and for any state in any Kripke structure. iff is therefore immediate. The same goes for (no process and no state in any Kripke structure satisfy ). iff ; indeed, (so that ) iff .

That iff is immediately given by the induction hypothesis that iff .

Suppose that and [or] (so that []). This is equivalent by induction hypothesis to and [or] , that is, [], as desired.

Now let be a path starting from a process . According to the definition of , all the equivalent paths in the Kripke structure have the form , such that for all . Clearly, such a path exists. According to the function , is a symbol that stands for states in the LTS and has no meaning in the Kripke structure. The satisfaction operator for Kripke structures (Definition 4) is specifically designed to ignore the label , and this ensures that the path is equivalent to the path with for all ; we will therefore use this form for the remainder of the proof.

Consider the formula such that some path satisfies it. Whenever , and therefore (by inductive assumption, for is a state formula, not a path formula) and therefore , as desired. Conversely, , that is, , means that by inductive assumption, and so .

The proofs for the , , , and  operators proceed similarly. Whenever , there is a state such that . By induction hypothesis then and so . The other way (from to ) is similar. The  operator requires that all the states along satisfy , which implies that all the states in any satisfy , and thus (and again things proceed similarly in the other direction). In all, the induction hypothesis establishes a bijection between the states in and the states in (any) . This bijection is used in the proof for  and  just as it was used in the above proof for  and . Indeed, the states along the path will satisfy or as appropriate for the respective operator, but this translates into the same set of states satisfying and in , so the whole formula (using  or ) holds in iff it holds in .

Finally, given a formula , implies that there exists a path starting from that satisfies . By induction hypothesis there is then a path starting from that satisfies (there is at least one such path) and thus . The other way around is similar, and so is the proof for (all the paths satisfy so all the paths satisfy as well; there are no supplementary paths, since all the paths in come from the paths in ). ∎


Figure 2: Conversion of an LTS to its equivalent Kripke structure .

The conversion described in Theorem 3 is most easily explained graphically; refer for this purpose to Figure 2. Specifically, the function converts the LTS given in Figure 2 into the equivalent Kripke structure shown in Figure 2. In this new structure, instead of combining each state with its corresponding actions in the LTS (and thus possibly splitting one LTS state into multiple Kripke structure states), we use the new symbol to stand for the original LTS states. Every state of the Kripke structure labelled by this symbol corresponds to an LTS state, and all the other states in the Kripke structure correspond to the actions in the LTS. This ensures that all the states in the Kripke structure corresponding to actions outgoing from a single LTS state have the same parent, which in turn eliminates the need for the weaker satisfaction operator over sets of states (Definition 2).
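The shape of the conversion can be sketched in code. The sketch below is hypothetical (names like `lts_to_kripke` and the tuple encodings are ours, not the paper's), but it follows the description above and the edges of Figure 2: every LTS state becomes a Kripke state labelled only with the fresh symbol, and every transition contributes an intermediate state labelled with its action, wired between the source and target states.

```python
# Hypothetical sketch of the Kripke conversion from Theorem 3: LTS
# states become states labelled only with the fresh symbol PI, and each
# transition p --a--> q becomes an intermediate state labelled {a},
# placed between the PI-states of p and q.
PI = "pi"  # the fresh symbol, assumed not to occur as an LTS action

def lts_to_kripke(states, initial, transitions):
    """states: iterable of LTS state names; transitions: (src, action, dst)."""
    labels = {("st", s): {PI} for s in states}   # one PI-state per LTS state
    succ = {("st", s): set() for s in states}
    for i, (src, a, dst) in enumerate(transitions):
        t = ("tr", i)                            # intermediate action state
        labels[t] = {a}
        succ[t] = {("st", dst)}                  # single child: target state
        succ[("st", src)].add(t)
    return labels, succ, ("st", initial)

# The LTS of Figure 2: p -a-> q, p -b-> t, q -c-> r, q -d-> s, t -e-> u
labels, succ, init = lts_to_kripke(
    "pqtrsu", "p",
    [("p", "a", "q"), ("p", "b", "t"),
     ("q", "c", "r"), ("q", "d", "s"), ("t", "e", "u")])
```

Note how all action states outgoing from a single LTS state (here `a` and `b` under `p`) share one parent, which is what removes the need for satisfaction over sets of states.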

5 CTL Is Equivalent to Failure Trace Testing

We now proceed to show the equivalence between CTL formulae and failure trace tests. Let  be the set of all processes,  the set of all failure trace tests, and  the set of all CTL formulae. We have:

Theorem 4.
  1. For some and , we say that and are equivalent whenever if and only if for any . Then, for every failure trace test there exists an equivalent CTL formula, and the other way around. Furthermore, a failure trace test can be algorithmically converted into its equivalent CTL formula, and the other way around.

  2. For some and , we say that and are equivalent whenever if and only if for any . Then, for every failure trace test there exists an equivalent CTL formula, and the other way around. Furthermore, a failure trace test can be algorithmically converted into its equivalent CTL formula, and the other way around.

Proof.

The proof of Item 1 follows from Lemma 5 (in Section 5.1 below) and Lemma 11 (in Section 5.3 below). The algorithmic nature of the conversion is shown implicitly in the proofs of these two results. The proof of Item 2 is fairly similar and is summarized in Lemma 6 (Section 5.1) and Lemma 12 (Section 5.3). ∎

The remainder of this section is dedicated to the proofs of the lemmata mentioned above, and thus to the actual proof of this result. Note incidentally that Lemmata 11 and 12 will be further improved in Theorem 7.

5.1 From Failure Trace Tests to CTL Formulae

Lemma 5.

There exists a function such that if and only if for any .

Proof.

The proof is done by structural induction over tests. In the process we also construct (inductively) the function .

We put . Any process passes pass and any Kripke structure satisfies , thus it is immediate that iff . Similarly, we put . No process passes stop and no Kripke structure satisfies .

On to the induction steps now. We put : an internal action in a test is not seen by the process under test by definition. We then put . We note that iff and for some . Now, iff by the construction of , and also iff by induction hypothesis. By Theorem 2, when we convert to an equivalent Kripke structure we take as new states the original states together with their outgoing actions. So once we are (in ) in a state that satisfies , all the next states of that state correspond to the states following after executing . Therefore, is satisfied in exactly those states in which must succeed. Thus iff . For illustration purposes note that in Figure 1 the initial state becomes two initial states and ; the next state of the state satisfying the property in the Kripke structure contains only (and never ).
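The recursive construction of can be sketched as follows. This is a hypothetical illustration, not the paper's definition: the test AST encoding is ours, and the clause for an action prefix is assumed here to be the conjunction of the action with an AX-guarded translation of the rest, which is consistent with the argument above that the next states are exactly those in which the remainder of the test must succeed.

```python
# Hypothetical sketch of the translation T from failure trace tests to
# CTL formulae, following the base cases in the proof: T(pass) = true,
# T(stop) = false, and an internal action (theta) is invisible to the
# process under test, so it is skipped.
def T(test):
    kind = test[0]
    if kind == "pass":
        return "true"                       # every process passes pass
    if kind == "stop":
        return "false"                      # no process passes stop
    if kind == "theta":                     # ("theta", rest): internal move
        return T(test[1])
    if kind == "prefix":                    # ("prefix", action, rest)
        _, a, rest = test
        # Assumed clause: the action holds here, and the rest of the
        # test must succeed in all next states.
        return f"({a} & AX {T(rest)})"
    raise ValueError(f"unknown test constructor: {kind!r}")

formula = T(("prefix", "a", ("theta", ("pass",))))  # -> "(a & AX true)"
```

The internal-action clause simply discards the constructor, mirroring the remark that an internal action in a test is not seen by the process under test.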

Note now that is just syntactic sugar, for indeed is perfectly equivalent with