
Interval vs. Point Temporal Logic Model Checking: an Expressiveness Comparison

11/22/2017
by   Laura Bozzelli, et al.

In recent years, model checking with interval temporal logics has been emerging as a viable alternative to model checking with standard point-based temporal logics, such as LTL, CTL, CTL*, and the like. The behavior of the system is modeled by means of (finite) Kripke structures, as usual. However, while temporal logics which are interpreted "point-wise" describe how the system evolves state-by-state, and predicate properties of system states, those which are interpreted "interval-wise" express properties of computation stretches, spanning a sequence of states. A proposition letter is assumed to hold over a computation stretch (interval) if and only if it holds over each component state (homogeneity assumption). A natural question arises: is there any advantage in replacing points by intervals as the primary temporal entities, or is it just a matter of taste? In this paper, we study the expressiveness of Halpern and Shoham's interval temporal logic (HS) in model checking, in comparison with those of LTL, CTL, and CTL*. To this end, we consider three semantic variants of HS: the state-based one, introduced by Montanari et al., that allows time to branch both in the past and in the future, the computation-tree-based one, that allows time to branch in the future only, and the trace-based variant, that disallows time to branch. These variants are compared among themselves and to the aforementioned standard logics, obtaining a complete picture. In particular, we show that HS with trace-based semantics is equivalent to LTL (but at least exponentially more succinct), HS with computation-tree-based semantics is equivalent to finitary CTL*, and HS with state-based semantics is incomparable with all of them (LTL, CTL, and CTL*).


1 Introduction

Point-based temporal logics (PTLs) provide a standard framework for the specification of the behavior of reactive systems, making it possible to describe how a system evolves state-by-state ("point-wise" view). PTLs have been successfully employed in model checking (MC), which enables one to automatically verify complex finite-state systems, usually modelled as finite propositional Kripke structures. The MC methodology considers two types of PTLs, linear and branching, which differ in the underlying model of time. In linear PTLs, like LTL [37], each moment in time has a unique possible future: formulas are interpreted over paths of a Kripke structure, and thus they refer to a single computation of the system. In branching PTLs, like CTL and CTL* [17], each moment in time may evolve into several possible futures: formulas are interpreted over states of the Kripke structure, hence referring to all the possible system computations.

Interval temporal logics (ITLs) have been proposed as an alternative setting for reasoning about time [20, 36, 42]. Unlike standard PTLs, they assume intervals, instead of points, as their primitive entities. ITLs allow one to specify relevant temporal properties that involve, e.g., actions with duration, accomplishments, and temporal aggregations, which are inherently "interval-based" and thus cannot be naturally expressed by PTLs. ITLs have been applied in various areas of computer science, including formal verification, computational linguistics, planning, and multi-agent systems [26, 36, 38]. Halpern and Shoham's modal logic of time intervals (referred to as HS) [20] is the most popular among the ITLs. It features one modality for each of the 13 possible ordering relations between pairs of intervals (the so-called Allen's relations [1]), apart from equality. Its satisfiability problem turns out to be highly undecidable for all interesting (classes of) linear orders [20]; the same happens with most of its fragments [11, 25, 29], but there are some noteworthy exceptions, like the logic of temporal neighbourhood, over all relevant (classes of) linear orders [13, 14], and the logic of sub-intervals, over the class of dense linear orders [12, 35].

In this paper, we focus on the MC problem for HS. In order to check interval properties of computations, one needs to collect information about states into computation stretches, that is, finite paths of the Kripke structure (traces for short). Each trace is interpreted as an interval, whose labelling is defined on the basis of the labelling of the component states. Such an approach to HS MC has been simultaneously and independently proposed by Montanari et al. in [34, 30] and by Lomuscio and Michaliszyn in [26, 27].

In [34, 30], Montanari et al. assume a state-based semantics, according to which intervals/traces are "forgetful" of the history leading to their initial state. Since the initial (resp., final) state of an interval may feature several predecessors (resp., successors), such an interpretation induces a branching reference both in the future and in the past. A graphical account of the state-based semantics can be found in Figure 1; a detailed explanation will be given in the following. The other fundamental choice made in [34, 30] concerns the labeling of intervals: a natural principle, known as the homogeneity assumption, is adopted, which states that a proposition letter holds over an interval if and only if it holds over each component state (such an assumption turns out to be the most appropriate choice for many practical applications). In this setting, the MC problem for full HS turns out to be decidable. More precisely, it is EXPSPACE-hard [7], while the only known upper bound is non-elementary [30] (here and in the following, we refer to the combined complexity of MC, which accounts for the size of both the Kripke structure and the formula). The exact complexity of MC for almost all the meaningful syntactic fragments of HS, which spans several complexity classes, has been determined in a subsequent series of papers [7, 9, 10, 30, 31, 32, 33].

Figure 1: State-based semantic variant: past and future are branching.

In [26, 27], Lomuscio and Michaliszyn address the MC problem for some fragments of HS extended with epistemic modalities. Their semantic assumptions differ from those made in [34, 30]: the fragments are interpreted over the unwinding of the Kripke structure (computation-tree-based semantics; see Figure 2 for a graphical account), and the interval labeling takes into account only the endpoints of intervals. In [26], they focus on the HS fragment of Allen's relations started-by and finished-by, extended with epistemic modalities. They consider a restricted form of MC (local MC), which checks the specification against a single (finite) initial computation interval, and establish its exact complexity. In [27], they show that the picture drastically changes with other fragments of HS that allow one to access infinitely many intervals. In particular, they prove that the MC problem for the HS fragment of Allen's relations meets and starts, extended with epistemic modalities, is decidable with a non-elementary upper bound. The decidability status of MC for full epistemic HS is not known.

To summarize, the MC problem for HS (and its fragments) has been extensively studied under the state-based and the computation-tree-based semantics, mainly focusing on complexity issues. What is missing is a formal comparison of the expressiveness of HS MC and MC for standard point-based temporal logics. A comparison of the expressiveness of the MC problem for HS under the state-based and the computation-tree-based semantics is missing as well.

Figure 2: Computation-tree-based semantic variant: future is branching, past is linear, finite, and cumulative.

Our contribution.

In this paper, we study the expressiveness of HS, in the context of MC, in comparison with that of the standard PTLs LTL, CTL, and CTL*. The analysis is carried out under the homogeneity assumption.

We prove that HS endowed with the state-based semantics proposed in [34, 30] (hereafter, the state-based variant) is not comparable with LTL, CTL, and CTL*. On the one hand, this result supports the intuition that the state-based variant gains expressiveness from its ability to branch in the past. On the other hand, it does not feature the possibility of forcing the verification of a property over an infinite path, which also contributes to the incomparability. With the aim of having a more "effective" comparison base, we consider two other semantic variants of HS, namely, the computation-tree-based variant and the trace-based one.

The state-based (see Figure 1) and computation-tree-based (see Figure 2) approaches rely on a branching-time setting and differ in the nature of the past. In the latter approach, the past is linear: each interval may have several possible futures, but only a unique past. Moreover, the past is assumed to be finite and cumulative, that is, the history of the current situation increases with time and is never forgotten. The trace-based approach relies on a linear-time setting (see Figure 3), where the infinite paths (computations) of the given Kripke structure are the main semantic entities; branching is allowed neither in the past nor in the future. Note that the linear-past approach (rather than the branching one) is better suited to the specification of dynamic behaviors, because it considers states of a computation tree, whereas the branching-past approach considers machine states, for which the past is not very meaningful when specifying behavioral constraints [23].

Figure 3: Trace-based semantic variant: neither the past nor the future is branching.

The computation-tree-based variant is a natural candidate for an expressiveness comparison with the branching-time logics CTL and CTL*. The most interesting and technically involved result is the characterization of its expressive power: the computation-tree-based variant turns out to be expressively equivalent to finitary CTL*, that is, the variant of CTL* with quantification over finite paths. As for CTL, a non-comparability result can be stated.

The trace-based variant is a natural candidate for an expressiveness comparison with LTL. We prove that it is equivalent to LTL (this result holds true even for a very small fragment of HS), but the former is at least exponentially more succinct than the latter.

Figure 4: Overview of the expressiveness results.

We complete the picture with a comparison of the three semantic variants. We prove that, as expected, the trace-based variant is not comparable with either of the branching versions, the state-based and the computation-tree-based ones. The interesting result is that, on the other hand, the computation-tree-based variant is strictly included in the state-based one: this supports the state-based semantics, adopted in [30, 31, 32, 33, 7, 9], as a reasonable and adequate semantic choice. The complete picture of the expressiveness results is reported in Figure 4, which uses distinct symbols for incomparability, equivalence, and strict inclusion.

Structure of the paper.

In Section 2, we introduce basic notation and preliminary notions. In Subsection 2.1 we define Kripke structures and interval structures, in Subsection 2.2 we recall the well-known PTLs LTL, CTL, and CTL*, and in Subsection 2.3 we present the interval temporal logic HS. Then, in Subsection 2.4 we define the three semantic variants of HS (state-based, computation-tree-based, and trace-based). Finally, in Subsection 2.5 we provide a detailed example which gives an intuitive account of the three semantic variants and highlights their differences. In the next three sections, we analyze and compare their expressiveness. In Section 3 we show the expressive equivalence of LTL and the trace-based variant. Then, in Section 4 we prove the expressive equivalence of the computation-tree-based variant and finitary CTL*. Finally, in Section 5 we compare the expressiveness of the three variants. Conclusions summarize the work done and outline some directions for future research.

2 Preliminaries

In this section, we introduce the notation and some fundamental notions that will be extensively used in the rest of the paper. Let ℕ be the set of natural numbers, equipped with the standard linear ordering. For all i, j ∈ ℕ, with i ≤ j, we denote by [i, j] the set of natural numbers h such that i ≤ h ≤ j. Let Σ be an alphabet and w be a non-empty finite or infinite word over Σ. We denote by |w| the length of w (with |w| = ∞ if w is infinite). For all i, j, with 0 ≤ i ≤ j < |w|, w(i) denotes the i-th letter of w, while w(i, j) denotes the finite subword of w given by w(i) ⋯ w(j). If w is finite, its first and last letters are w(0) and w(|w| − 1), respectively. The sets of all proper prefixes and of all proper suffixes of w are denoted by Pref(w) and Suff(w), respectively. The set of all the finite words over Σ is denoted by Σ*, and Σ⁺ = Σ* \ {ε}, where ε is the empty word.

2.1 Kripke structures and interval structures

Systems are usually modelled as Kripke structures. Let AP be a finite set of proposition letters, which represent the predicates decorating the states of the given system.

Definition 2.1 (Kripke structure).

A Kripke structure over a finite set AP of proposition letters is a tuple K = (AP, S, δ, μ, s0), where S is a set of states, δ ⊆ S × S is a left-total transition relation, μ : S → 2^AP is a total labelling function assigning to each state the set of proposition letters that hold over it, and s0 ∈ S is the initial state. For (s, s') ∈ δ, we say that s' is a successor of s, and s is a predecessor of s'. Finally, we say that K is finite if S is finite.

Figure 5: An example Kripke structure.

For example, Figure 5 depicts a finite Kripke structure; its initial state is marked by a double circle.

Let K be a Kripke structure. An infinite path of K is an infinite word ρ over S such that (ρ(i), ρ(i+1)) ∈ δ for all i ≥ 0. A trace (or finite path) of K is a non-empty prefix of some infinite path of K. A finite or infinite path is initial if it starts from the initial state of K. We consider the (infinite) set of all traces of K and the set of all initial traces of K. For a trace ρ, we denote by states(ρ) the set of states occurring in ρ.
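To make the notions of Kripke structure and trace concrete, the following minimal Python sketch (class and field names are our own, not the paper's notation) represents a finite Kripke structure with a dict-based transition relation and enumerates its traces up to a given length.

```python
class KripkeStructure:
    """A finite Kripke structure: states, a left-total successor map,
    a labelling of states with proposition letters, and an initial state."""

    def __init__(self, states, successors, labelling, initial):
        self.states = set(states)
        self.successors = successors   # dict: state -> set of successor states
        self.labelling = labelling     # dict: state -> set of proposition letters
        self.initial = initial
        # the transition relation must be left-total
        assert all(successors[s] for s in self.states)

    def traces(self, max_length):
        """Enumerate all traces (finite paths) of length <= max_length."""
        frontier = [(s,) for s in self.states]
        while frontier:
            trace = frontier.pop()
            yield trace
            if len(trace) < max_length:
                frontier.extend(trace + (s,) for s in self.successors[trace[-1]])

    def initial_traces(self, max_length):
        """Traces starting from the initial state."""
        return (t for t in self.traces(max_length) if t[0] == self.initial)


# A toy two-state example (hypothetical, not the structure of Figure 5):
# state 'a' is labelled {p}, state 'b' is labelled {q}.
K = KripkeStructure(
    states={'a', 'b'},
    successors={'a': {'a', 'b'}, 'b': {'a'}},
    labelling={'a': {'p'}, 'b': {'q'}},
    initial='a',
)
print(sorted(K.initial_traces(3)))
```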

We now introduce the notion of D-tree structure, namely, an infinite tree-shaped Kripke structure whose branches are sequences over a set D of directions.

Definition 2.2 (D-tree structure).

Given a set D of directions, a D-tree structure (over AP) is a Kripke structure whose initial state belongs to D, whose set of states is a prefix-closed subset of D⁺, and whose transition relation consists of all the pairs (τ, τ ⋅ d), with d ∈ D, such that both τ and τ ⋅ d are states (note that the transition relation is completely specified by the set of states). The states of a D-tree structure are called nodes.

A Kripke structure K induces an S-tree structure, where the set of directions S is the set of states of K, called the computation tree of K, which is obtained by unwinding K from its initial state. Formally, the set of nodes is the set of initial traces of K; each initial trace is labelled by the set of proposition letters holding at its last state, and its successors are the initial traces obtained by extending it with a successor of that state. See Figure 6 for an example.

Figure 6: Computation tree of the Kripke structure of Figure 5.
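The unwinding construction can be prototyped directly: nodes of the computation tree are initial traces, and the children of a node are its one-step extensions. A minimal, self-contained sketch (bounded depth, since the tree is infinite; the successor map below is a toy one of our own, not the structure of Figure 5):

```python
def computation_tree(successors, initial, depth):
    """Unwind a Kripke structure (given as a successor map and an initial
    state) into its computation tree, truncated at the given depth.
    Nodes are initial traces; each node maps to the list of its children."""
    tree = {}
    frontier = [(initial,)]
    while frontier:
        node = frontier.pop()
        if len(node) >= depth:
            tree[node] = []
            continue
        children = [node + (s,) for s in sorted(successors[node[-1]])]
        tree[node] = children
        frontier.extend(children)
    return tree


# Toy successor map (hypothetical, for illustration only).
successors = {'a': {'a', 'b'}, 'b': {'a'}}
tree = computation_tree(successors, 'a', depth=3)
for node, children in sorted(tree.items()):
    print(node, '->', children)
```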

Given a strict partial ordering (X, <), an interval in (X, <) is an ordered pair [x, y] such that x, y ∈ X and x ≤ y. The interval [x, y] denotes the subset of X given by the set of points z such that x ≤ z ≤ y. We denote by I(X) the set of intervals in (X, <).

Definition 2.3 (Interval structure).

An interval structure over AP is a pair S = ((X, <), L), where (X, <) is a strict partial ordering and L is a labeling function assigning a set of proposition letters to each interval over (X, <).

2.2 Standard temporal logics

In this subsection, we recall the standard propositional temporal logics CTL*, CTL, and LTL [17, 37]. Given a set of proposition letters AP, the formulas ψ of CTL* are defined by the following grammar:

ψ ::= p | ¬ψ | ψ ∧ ψ | X ψ | ψ U ψ | ∃ψ,

where p ∈ AP, X and U are the "next" and "until" temporal modalities, and ∃ is the existential path quantifier. (Hereafter, we denote the existential and universal path quantifiers by ∃ and ∀, instead of the usual E and A, in order not to confuse them with the HS modalities.) We also use the standard shorthands ∀ψ := ¬∃¬ψ ("universal path quantifier"), F ψ := ⊤ U ψ ("eventually" or "in the future"), and its dual G ψ := ¬F¬ψ ("always" or "globally"), where ⊤ denotes true. Hereafter, we denote by |ψ| the size of ψ, that is, the number of its symbols/subformulas.

The logic CTL is the fragment of CTL* where each temporal modality is immediately preceded by a path quantifier, whereas LTL corresponds to the path-quantifier-free fragment of CTL*.

Given a Kripke structure K, an infinite path π of K, and a position i ≥ 0 along π, the satisfaction relation K, π, i |= ψ for CTL* (written simply π, i |= ψ when K is clear from the context) is defined as follows (Boolean connectives are treated as usual):

  • π, i |= p if and only if p ∈ μ(π(i)), for any p ∈ AP;

  • π, i |= Xψ if and only if π, i+1 |= ψ;

  • π, i |= ψ1 U ψ2 if and only if there exists j ≥ i such that π, j |= ψ2 and π, k |= ψ1 for all k with i ≤ k < j;

  • π, i |= ∃ψ if and only if there exists an infinite path π' of K starting from π(i) such that π', 0 |= ψ.

The model checking (MC) problem is defined as follows: K is a model of ψ, written K |= ψ, if for all initial infinite paths π of K, it holds that π, 0 |= ψ.

We also consider a variant of CTL*, called finitary CTL*, where the path quantifier ∃ of CTL* is replaced by the finitary path quantifier, which ranges over the traces (finite paths) starting from the current state. The satisfaction relation, now defined for a trace ρ and a position i along ρ, is similar to that given for CTL*, the only differences being the finiteness of paths and the fact that a formula prefixed by the finitary path quantifier holds at position i of a trace if and only if its argument holds (at the first position) over some trace starting from the state in position i. A Kripke structure K is a model of a finitary CTL* formula ψ if every initial trace of K satisfies ψ at its first position.
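The difference between standard and finitary path quantification can be illustrated with a small evaluator for the X/U core over finite traces, plus an existential quantifier ranging over the traces that start at a given state. This is a sketch under our own naming and encoding (formulas as nested tuples), not the paper's definitions; we assume X fails at the last position of a trace, and quantification is bounded to keep the search finite and executable.

```python
def finite_paths(state, successors, bound):
    """All traces of length <= bound starting at the given state."""
    stack = [(state,)]
    while stack:
        t = stack.pop()
        yield t
        if len(t) < bound:
            stack.extend(t + (s,) for s in successors[t[-1]])


def holds(formula, trace, i, labelling, successors, bound):
    """Evaluate a formula at position i of a finite trace.
    Formulas are tuples: ('p', letter), ('not', f), ('and', f, g),
    ('X', f), ('U', f, g), ('E', f) -- 'E' is the finitary path quantifier."""
    op = formula[0]
    if op == 'p':
        return formula[1] in labelling[trace[i]]
    if op == 'not':
        return not holds(formula[1], trace, i, labelling, successors, bound)
    if op == 'and':
        return all(holds(f, trace, i, labelling, successors, bound) for f in formula[1:])
    if op == 'X':
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1, labelling, successors, bound)
    if op == 'U':
        return any(holds(formula[2], trace, j, labelling, successors, bound)
                   and all(holds(formula[1], trace, k, labelling, successors, bound)
                           for k in range(i, j))
                   for j in range(i, len(trace)))
    if op == 'E':
        # finitary quantification: some finite path from trace[i] (bounded here)
        return any(holds(formula[1], t, 0, labelling, successors, bound)
                   for t in finite_paths(trace[i], successors, bound))
    raise ValueError(op)


# Toy example (hypothetical labelling/successors): does some finite path
# from state 'a' satisfy p U q?
labelling = {'a': {'p'}, 'b': {'q'}}
successors = {'a': {'a', 'b'}, 'b': {'a'}}
print(holds(('E', ('U', ('p', 'p'), ('p', 'q'))), ('a',), 0, labelling, successors, 4))
```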

The MC problem for both CTL* and LTL is PSPACE-complete [18, 40]. It is not difficult to show that, as happens with finitary LTL [15], MC for finitary CTL* is PSPACE-complete as well.

2.3 The interval temporal logic HS

An interval algebra to reason about intervals and their relative order was proposed by Allen in [1], while a systematic logical study of interval representation and reasoning was carried out a few years later by Halpern and Shoham, who introduced the interval temporal logic HS, featuring one modality for each Allen relation but equality [20]. Table 1 depicts 6 of the 13 Allen's relations, together with the corresponding HS (existential) modalities. The other 7 relations are the 6 inverse relations (given a binary relation R, its inverse relates y to x if and only if R relates x to y) and equality.

Allen relation | HS modality | Definition w.r.t. interval structures ([x, y] vs. [v, z])

meets | ⟨A⟩ | y = v
before | ⟨L⟩ | y < v
started-by | ⟨B⟩ | x = v and z < y
finished-by | ⟨E⟩ | y = z and x < v
contains | ⟨D⟩ | x < v and z < y
overlaps | ⟨O⟩ | x < v < y < z

Table 1: Allen's relations and corresponding HS modalities.

For a set of proposition letters AP, the formulas ψ of HS are defined by the grammar

ψ ::= p | ¬ψ | ψ ∧ ψ | ⟨X⟩ψ,

where p ∈ AP and X ranges over the Allen's relations of Table 1 and their inverses. For any modality ⟨X⟩, the dual universal modality [X]ψ is defined as ¬⟨X⟩¬ψ. For any subset of Allen's relations, the corresponding fragment of HS features (universal and existential) modalities for those relations only.

We assume the non-strict semantic version of HS, which admits intervals consisting of a single point. (All the results we prove in the paper hold for the strict version as well.) Under such an assumption, all HS modalities can be expressed in terms of the modalities ⟨B⟩ and ⟨E⟩ for the relations started-by and finished-by and of the modalities for their inverse relations [42]. We also use a derived operator of HS (and its dual), which allows one to select arbitrary subintervals of a given interval and can be defined in terms of ⟨B⟩ and ⟨E⟩.
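As an illustration of how such derived operators are obtained from ⟨B⟩ and ⟨E⟩, the modality for the contains relation of Table 1 admits the following standard derivation (a reconstruction, not necessarily the exact shorthand adopted by the authors): an interval strictly contained in the current one is reached by first moving to a proper prefix and then to a proper suffix of it.

```latex
\[
  \langle D \rangle\, \psi \;\equiv\; \langle B \rangle \langle E \rangle\, \psi ,
  \qquad
  [D]\, \psi \;\equiv\; [B]\,[E]\, \psi .
\]
```

An operator selecting arbitrary (not necessarily strict) subintervals can then be obtained by adding the disjuncts ⟨B⟩ψ, ⟨E⟩ψ, and ψ itself.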

HS can be viewed as a multi-modal logic with ⟨B⟩, ⟨E⟩, and the corresponding inverse modalities as primitives, and its semantics can be defined over a multi-modal Kripke structure, called an abstract interval model, where intervals are treated as atomic objects and Allen's relations as binary relations over intervals.

Definition 2.4 (Abstract interval model [30]).

An abstract interval model over AP is a tuple A = (AP, I, B_I, E_I, σ), where I is a set of worlds, B_I and E_I are two binary relations over I, and σ : I → 2^AP is a labeling function assigning a set of proposition letters to each world.

Let A = (AP, I, B_I, E_I, σ) be an abstract interval model. In the interval setting, I is interpreted as a set of intervals, B_I and E_I as Allen's relations B (started-by) and E (finished-by), respectively, and σ assigns to each interval in I the set of proposition letters that hold over it. Given an interval i ∈ I, the truth of an HS formula over i is inductively defined as follows (the Boolean connectives are treated as usual):

  • A, i |= p if and only if p ∈ σ(i), for any p ∈ AP;

  • A, i |= ⟨B⟩ψ (resp., A, i |= ⟨E⟩ψ) if and only if there exists j ∈ I such that i B_I j (resp., i E_I j) and A, j |= ψ;

  • for the inverse modalities, A, i |= ⟨B̄⟩ψ (resp., A, i |= ⟨Ē⟩ψ) if and only if there exists j ∈ I such that j B_I i (resp., j E_I i) and A, j |= ψ.

The next definition shows how to derive an abstract interval model from an interval structure.

Definition 2.5 (Abstract interval model induced by an interval structure).

An interval structure S over (X, <), with labeling function L, induces the abstract interval model whose set of worlds is I(X), whose labeling is L, and whose two relations are defined as follows: [x, y] is related by the started-by relation to [v, z] if and only if v = x and z < y, and by the finished-by relation to [v, z] if and only if z = y and v > x.

For an interval i and an HS formula ψ, we write S, i |= ψ to mean that ψ holds at i in the abstract interval model induced by S.

2.4 Three semantic variants of HS for MC

In this section we define the three variants of HS semantics, state-based, computation-tree-based, and trace-based, for model checking HS formulas against Kripke structures. For each variant, the related (finite) MC problem consists of deciding whether or not a finite Kripke structure is a model of an HS formula under that semantic variant.

Let us start with the state-based variant [34, 30], where an abstract interval model is naturally associated with a given Kripke structure K by taking the set of traces of K as the set of intervals.

Definition 2.6 (Abstract interval model induced by a Kripke structure).

The abstract interval model induced by a Kripke structure K is the one whose set of worlds is the set of traces of K, whose started-by (resp., finished-by) relation relates each trace to its proper prefixes (resp., proper suffixes), and whose labeling assigns to each trace ρ the set of proposition letters that hold at all the states of ρ.

According to this definition, a proposition letter p holds over a trace ρ if and only if it holds over all the states of ρ. This conforms to the homogeneity principle, according to which a proposition letter holds over an interval if and only if it holds over all its subintervals [39].

Definition 2.7 (State-based HS).

Let K be a Kripke structure and ψ be an HS formula. A trace ρ of K satisfies ψ under the state-based semantic variant if ψ holds at ρ in the abstract interval model induced by K. Moreover, K is a model of ψ under the state-based semantic variant if all the initial traces of K satisfy ψ.
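A minimal sketch of how the state-based semantics can be evaluated in practice (our own prototype, not the paper's algorithm): proposition letters are checked under homogeneity, ⟨B⟩ and ⟨E⟩ inspect proper prefixes and suffixes of the trace, and the inverse modality ⟨B̄⟩, which extends the trace to the right inside the Kripke structure, is explored only up to a bound here to keep the search finite.

```python
def sat(trace, formula, labelling, successors, bound=8):
    """State-based satisfaction of an HS formula over a trace (a tuple of states).
    Formulas: ('p', letter), ('not', f), ('and', f, g), ('B', f), ('E', f), ('Binv', f)."""
    op = formula[0]
    if op == 'p':
        # homogeneity: the letter must hold at every state of the trace
        return all(formula[1] in labelling[s] for s in trace)
    if op == 'not':
        return not sat(trace, formula[1], labelling, successors, bound)
    if op == 'and':
        return (sat(trace, formula[1], labelling, successors, bound)
                and sat(trace, formula[2], labelling, successors, bound))
    if op == 'B':   # some proper prefix of the trace satisfies f
        return any(sat(trace[:i], formula[1], labelling, successors, bound)
                   for i in range(1, len(trace)))
    if op == 'E':   # some proper suffix of the trace satisfies f
        return any(sat(trace[i:], formula[1], labelling, successors, bound)
                   for i in range(1, len(trace)))
    if op == 'Binv':  # some proper right extension satisfies f (bounded search)
        frontier = [trace + (s,) for s in successors[trace[-1]]]
        while frontier:
            t = frontier.pop()
            if sat(t, formula[1], labelling, successors, bound):
                return True
            if len(t) < len(trace) + bound:
                frontier.extend(t + (s,) for s in successors[t[-1]])
        return False
    raise ValueError(op)


# Toy example (hypothetical labelling/successors): does some proper right
# extension of the trace ('a',) satisfy p under homogeneity?
labelling = {'a': {'p'}, 'b': {'p', 'q'}}
successors = {'a': {'a', 'b'}, 'b': {'a'}}
print(sat(('a',), ('Binv', ('p', 'p')), labelling, successors))
```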

We now introduce the computation-tree-based semantic variant, where we simply consider the abstract interval model induced by the computation tree of the Kripke structure. Notice that since each state in a computation tree has a unique predecessor (with the exception of the initial state), this HS variant enforces a linear reference in the past.

Definition 2.8 (Computation-tree-based HS).

A Kripke structure K is a model of an HS formula ψ under the computation-tree-based semantic variant if the computation tree of K is a model of ψ under the state-based semantic variant.

Finally, we define the trace-based semantic variant, which exploits the interval structures induced by the infinite paths of the Kripke structure.

Definition 2.9 (Interval structure induced by an infinite path).

For a Kripke structure K and an infinite path π of K, the interval structure induced by π has the set of positions of π, with their natural ordering, as its domain, and it labels each interval of positions with the set of proposition letters that hold at all the states of π occurring at those positions.

Definition 2.10 (Trace-based HS).

A Kripke structure K is a model of an HS formula ψ under the trace-based semantic variant if and only if, for each initial infinite path π of K and for each initial interval of positions [0, i], ψ holds over [0, i] in the interval structure induced by π.

In the next sections, we compare the expressiveness of the three semantic variants of HS and of LTL, CTL, and CTL* when interpreted over finite Kripke structures. Given two logics L1 and L2, and two formulas φ1 of L1 and φ2 of L2, we say that φ1 in L1 is equivalent to φ2 in L2 if, for every finite Kripke structure K, K is a model of φ1 in L1 if and only if K is a model of φ2 in L2. We say that L1 is subsumed by L2 if for each formula φ1 of L1 there exists a formula φ2 of L2 such that φ1 in L1 is equivalent to φ2 in L2. Moreover, L1 is as expressive as L2 (or L1 and L2 have the same expressive power) if each of the two subsumes the other. We say that L2 is (strictly) more expressive than L1 if L1 is subsumed by L2 but not vice versa. Finally, L1 and L2 are expressively incomparable if neither subsumes the other.

2.5 An example: a vending machine

In this section, we give an example highlighting the differences among the three HS semantic variants: state-based, computation-tree-based, and trace-based.

Figure 7: Kripke structure representing a vending machine.

The Kripke structure of Figure 7 represents a vending machine, which can dispense water, hot dogs, and candies. In the initial state, no coin has been inserted into the machine (hence, a proposition letter recording this fact holds there). Three edges, labelled by "ins_$1", "ins_$2", and "ins_$0.50", connect the initial state to three credit states. Edge labels do not convey semantic value (they are neither part of the structure definition nor associated with proposition letters) and are simply used for easy reference to edges. In each credit state, a proposition letter holds representing the fact that 1 Dollar, 2 Dollars, or 0.50 Dollars, respectively, has been inserted into the machine. The cost of a bottle of water (resp., a candy, a hot dog) is $0.50 (resp., $1, $2). A credit state is connected to an item-selection state only if the available credit allows one to buy the corresponding item. Then, edges labelled by "dispensed" connect the item states to the state in which the machine gives change; from the latter, the machine can nondeterministically move back to the initial state (ready for dispensing another item), or to a state where it begins an automatic maintenance activity (a proposition letter recording that maintenance is ongoing holds there). Afterwards, a state is reached where maintenance ends. From there, if the maintenance activity fails (edge "maint_failed"), the maintenance state is entered again (another maintenance cycle is attempted); otherwise, maintenance concludes successfully (edge "maint_success") and the initial state is reached. Since the machine is operating in all the states but the two maintenance ones, a proposition letter asserting that the machine is operational holds over the former and not over the latter.
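To fix ideas, the structure just described can be encoded along the lines of the earlier sketches. The state and proposition names below are our own hypothetical choices, since the figure's labels are only described informally; only the shape of the structure follows the description above.

```python
# A possible encoding of the vending machine of Figure 7 (names are hypothetical).
labelling = {
    'idle':      {'no_coin', 'op'},
    'c050':      {'dollar_050', 'op'},
    'c1':        {'dollar_1', 'op'},
    'c2':        {'dollar_2', 'op'},
    'water':     {'op'},
    'candy':     {'op'},
    'hotdog':    {'op'},
    'change':    {'op'},
    'maint_on':  {'maint_ongoing'},
    'maint_end': set(),
}
successors = {
    'idle':      {'c050', 'c1', 'c2'},          # ins_$0.50 / ins_$1 / ins_$2
    'c050':      {'water'},                      # $0.50 buys water only
    'c1':        {'water', 'candy'},             # $1 buys water or a candy
    'c2':        {'water', 'candy', 'hotdog'},   # $2 buys any item
    'water':     {'change'},                     # dispensed
    'candy':     {'change'},
    'hotdog':    {'change'},
    'change':    {'idle', 'maint_on'},           # give change, then maybe maintenance
    'maint_on':  {'maint_end'},
    'maint_end': {'maint_on', 'idle'},           # maint_failed / maint_success
}
# the transition relation must be left-total
assert all(successors[s] for s in labelling)
```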

In the following, we will make use of a family of formulas, one for each k ≥ 1, characterizing the intervals of length k (i.e., those consisting of exactly k states); one possible definition is sketched below.
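Under homogeneity, such length constraints can be expressed with modality ⟨B⟩ alone. The following is a standard way of doing so (a reconstruction, including the name of the formula, not necessarily the authors' exact definition):

```latex
\[
  \ell_k \;:=\; \underbrace{\langle B \rangle \cdots \langle B \rangle}_{k-1} \top
  \;\wedge\;
  \underbrace{[B] \cdots [B]}_{k} \bot ,
\]
```

where the first conjunct states that the interval has at least k states (it admits a chain of k − 1 proper prefixes), and the second one that it has at most k.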

We now give some examples of properties that can be formalized under all, or some, of the three HS semantic variants.

  • In any run of length 50, during which the machine never enters maintenance mode, it dispenses at least a hot dog, a bottle of water, and a candy.

    Clearly this property is false, as the machine can possibly dispense only one or two kinds of items. We start by observing that the HS formula expressing the property is equivalent under all of the three semantic variants of HS: since modalities ⟨B⟩ and ⟨E⟩ only allow one to "move" from an interval to its subintervals, the state-based, computation-tree-based, and trace-based interpretations coincide on it (for this reason, no variant-specific satisfaction symbol is needed). Homogeneity plays a fundamental role here: asking the proposition letter for operativity to be true over an interval implies that such a letter is true along the whole trace (thus the maintenance states are always avoided).

    It is worth observing that the same property can also be expressed in LTL; however, the length of the resulting LTL formula is exponential in the number of items (in this case, 3), whereas the length of the corresponding HS formula is only linear. As a matter of fact, we will prove (Theorem 3.5) that HS under the trace-based semantics is at least exponentially more succinct than LTL.

  • If the credit is $0.50, then no hot dog or candy may be provided.

    We observe that a trace satisfies the premise of the property (the credit is $0.50) if and only if it ends in the corresponding credit state. This property is satisfied under all of the three semantic variants, even though the nature of the future differs among them (recall Figures 1, 2, and 3). As we have already mentioned, a linear setting (rather than a branching one) is suitable for the specification of dynamic behaviors, because it considers states of a computation; conversely, a branching approach focuses on machine states (and thus on the structure of the system).

    In this case, only the water-selection state can be reached from the $0.50 credit state, regardless of the nature of the future. For this reason, the three semantic variants behave in the same way.

  • Let us now exemplify a difference between the state-based variant (and the computation-tree-based one) and the trace-based one.

    This is a structural property, requiring that whenever the machine enters the state where maintenance ends, it can become operative again by reaching the initial state (that is, the maintenance-end state is not a lock state for the system). This is clearly true when the future is branching, and it is not when the future is linear: the trace-based variant refers to system computations, and some of these may ultimately loop between the two maintenance states.

  • Conversely, some properties make sense only if they are predicated over computations. This is the case, for instance, of fairness.

    Assuming the trace-based semantics, the property requires that if a system computation enters maintenance mode infinitely often, then it also enters operation mode infinitely often. Again, this is not true, as some system computations may ultimately loop between the two maintenance states (hence, they are not fair). On the contrary, such a property is trivially true under the state-based and the computation-tree-based variants, since no (finite) initial trace can enter maintenance mode infinitely often, and the property is thus vacuously satisfied.

  • We conclude with a property showing the difference between linear and branching past, that is, between the state-based variant on the one hand and the computation-tree-based and trace-based ones on the other. The requirement is the following: the machine may dispense water with any amount of (positive) credit.

    Again, this is a structural property, which cannot be expressed under the computation-tree-based or the trace-based variant, as these refer to a specific computation in the past. Conversely, it is true under the state-based variant, since the water-selection state is backward reachable in one step from each of the three credit states.

3 Equivalence between LTL and the trace-based variant of HS

In this section, we show that HS under the trace-based semantics is as expressive as LTL, even for small syntactical fragments of it. To this end, we exploit the well-known equivalence between LTL and the first-order fragment of monadic second-order logic over infinite words (FO for short). Recall that, given a countable set of (position) variables, the FO formulas over a set of proposition letters AP are built from atomic formulas of the forms p(x), with p ∈ AP, and x < y and x = y, for position variables x and y, by means of the Boolean connectives and of first-order quantification over position variables.

We interpret FO formulas over the infinite paths of a Kripke structure K. Given an infinite path π and a variable valuation g, assigning a position of π to each variable, the satisfaction relation K, π, g |= φ corresponds to the standard satisfaction relation over the infinite word obtained from π by replacing each state with its set of proposition letters: an atomic formula p(x) is satisfied if and only if p holds at the state of π in position g(x), x < y (resp., x = y) is satisfied if and only if g(x) < g(y) (resp., g(x) = g(y)), and quantification ranges over the positions of π (we omit the standard rules for the Boolean connectives). Note that the satisfaction relation depends only on the values assigned to the variables occurring free in the given formula φ. An FO sentence is a formula with no free variables; for sentences, we simply write K, π |= φ, as the valuation is immaterial. The following is a well-known result (Kamp's theorem [21]).

Proposition 3.1.

Given an FO sentence φ over AP, one can construct an LTL formula ψ such that, for all Kripke structures K over AP and all infinite paths π of K, it holds that K, π |= φ if and only if π, 0 |= ψ.

Given an HS formula, we now construct an FO sentence which is satisfied by every initial infinite path of a Kripke structure K if and only if K is a model of the HS formula under the trace-based semantics.

We start by defining a mapping that assigns to each triple, consisting of an HS formula ψ and two distinct position variables x and y, an FO formula having x and y as its only free variables: the returned FO formula defines the semantics of ψ interpreted over the interval bounded by the positions x and y.

The mapping is homomorphic with respect to the Boolean connectives, and it is defined for proposition letters and modal operators by clauses of the shape sketched below (where z denotes a fresh position variable).
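The clauses for proposition letters and for the four basic modalities are essentially forced by the trace-based semantics under the homogeneity assumption. The following sketch (with a hypothetical name h for the mapping, and ≤ as the obvious abbreviation in terms of < and =) shows their intended shape:

```latex
\begin{align*}
  h(p, x, y)                                   &= \forall z\,\bigl(x \le z \land z \le y \rightarrow p(z)\bigr) && \text{for } p \in AP\\
  h(\langle B \rangle \psi, x, y)              &= \exists z\,\bigl(x \le z \land z < y \land h(\psi, x, z)\bigr)\\
  h(\langle E \rangle \psi, x, y)              &= \exists z\,\bigl(x < z \land z \le y \land h(\psi, z, y)\bigr)\\
  h(\langle \overline{B} \rangle \psi, x, y)   &= \exists z\,\bigl(y < z \land h(\psi, x, z)\bigr)\\
  h(\langle \overline{E} \rangle \psi, x, y)   &= \exists z\,\bigl(z < x \land h(\psi, z, y)\bigr)
\end{align*}
```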

It is worth noting that homogeneity plays a crucial role in the clause for proposition letters (without it, a binary predicate would be necessary to encode the truth of a proposition letter over an interval).

Given a Kripke structure K, an infinite path π of K, an interval of positions [i, j], and an HS formula ψ, by a straightforward induction on the structure of ψ, we can show that