1 Introduction
Runtime Verification is a lightweight verification technique where a computational entity, called a monitor, observes a system run in order to verify a given property. That property, which we choose to formalize in Hennessy-Milner logic with recursion [13], can be a property either of the system [12, 2] or of the current system run, encoded as a trace of events [5] — see also, for example, [14, 10, 8] for earlier work on the monitoring of trace properties, mainly formalized in LTL.
To address the case of verifying trace properties, the authors introduced in [5] a class of monitors that can generate multiple parallel components that analyse the same system trace. These were called parallel monitors. When some of them reach a verdict, they can combine these verdicts into one. In the same paper, it was shown that this monitoring system has the same monitoring power as its restriction to a single monitoring component, called regular monitors, as used in [12, 2]. However, the cost of the translation from the more general monitoring system to this fragment, as given in [5], is doubly exponential with respect to the syntactic size of the monitors. Furthermore, if the goal is a deterministic regular monitor [4, 3], then the resulting monitor from [5] is quadruply-exponentially larger than the original, parallel one.
In this paper, we show that the double-exponential cost for translating from parallel to equivalent regular monitors is tight. Furthermore, we improve the translation cost from parallel monitors to equivalent deterministic monitors to a triple exponential, and we show that this bound is tight. We define monitor equivalence in two ways, the first one stricter than the second. For the first definition, two monitors are equivalent when they reach the same verdicts for the same finite traces, while for the second one it suffices to reach the same verdicts for the same infinite traces. We prove the upper bounds for a transformation that gives monitors that are equivalent with respect to the stricter definition, while we prove the lower bounds with respect to transformations that satisfy the coarser definition. Therefore, our bounds hold for both definitions of monitor equivalence. This treatment allows us to derive stronger results, which yield similar bounds for the case of logical formulae, as well.
In [5], we show that, when interpreted over traces, the fragment of the logic that does not use least fixed points is equivalent to the syntactically smaller safety fragment. That is, every formula of the former fragment can be translated to a logically equivalent formula of the latter. Similarly to the aforementioned translation of monitors, this translation of formulae results in a formula that is syntactically at most doubly-exponentially larger than the original formula. We show that this upper bound is tight.
The first four authors have previously worked on the complexity of monitor transformations in [4, 3], where the cost of determinizing monitors is examined. As in [4, 3] and in [5], in this paper we use results and techniques from Automata Theory, specifically about alternating automata [9, 11].
In Sec. 2, we introduce the necessary background on monitors and on infinite traces, as these were used in [5]. In Sec. 3, we describe the monitor translations mentioned above and provide upper bounds for them, which we prove to be tight in Sec. 4. In Sec. 5, we extrapolate these bounds to the case of translating logical formulae. In Sec. 6, we conclude the paper. Omitted proofs can be found in the appendix.
2 Preliminaries
Monitors are expected to monitor for a specification, which, in our case, is written in the logic. We use the linear-time interpretation of the logic, as it was given in [5]. According to that interpretation, formulae are interpreted over infinite traces.
2.1 The model and the logic
We assume a finite set of actions with a distinguished silent action, and we refer to the remaining actions as visible actions. The metavariables range over (infinite) sequences of visible actions, which abstractly represent system runs. We also use a metavariable to range over sets of traces. We often need to refer to finite traces, which represent objects such as finite prefixes of a system run, and to traces that may be finite or infinite (finfinite traces, as they were called in [5]). A trace (finite trace, finfinite trace) with a given action at its head, and a trace with a given prefix, are denoted accordingly.
(Fig. 1: the syntax and the linear-time semantics of the logic.)
The logic [13, 7] assumes a countable set of logical variables, and is defined as the set of closed formulae generated by the grammar of Fig. 1. Apart from the standard constructs for truth, falsehood, conjunction and disjunction, the logic is equipped with possibility and necessity modal operators labelled by visible actions, together with recursive formulae expressing least or greatest fixpoints; formulae and bind free instances of the logical variable in , inducing the usual notions of open/closed formulae and formula equality up to alpha-conversion.
We interpret formulae over traces, using an interpretation function that maps formulae to sets of traces, relative to an environment , which intuitively assigns to each variable the set of traces that are assumed to satisfy it, as defined in Fig. 1. The semantics of a closed formula is independent of the environment and is simply written . Intuitively, denotes the set of traces satisfying . For a formula , we use to denote the length of as a string of symbols.
2.2 Two monitoring systems
(Fig. 2: the syntax and dynamics of monitors — regular monitor rules Act, RecF, RecB, SelL, SelR and Ver; parallel tracing rules Par and TauL; parallel evaluation rules VrE, VrC1, VrC2, VrD1 and VrD2.)
We now present two monitoring systems, parallel and regular monitors, which were introduced in [12, 2, 5]. A monitoring system is a Labelled Transition System (LTS) over the set of actions, consisting of the monitor states, or monitors, and a transition relation. The set of monitor states and the monitor transition relation are defined in Fig. 2. There and elsewhere, a metavariable ranges over both parallel operators. When discussing a monitor with free variables (an open monitor), we assume it is part of a larger monitor without free variables (a closed monitor), in which every variable appears at most once in a recursive operator. Therefore, we assume an injective mapping from each monitor variable to the unique recursive submonitor that binds it.
The suggestive notation denotes ; we also write to denote . We employ the usual notation for weak transitions and write in lieu of and for . We write sequences of transitions as , where . The monitoring system of parallel monitors is defined using the full syntax and all the rules from Fig. 2; regular monitors are parallel monitors that do not use the parallel operators and . Regular monitors were defined and used already in [2] and [12], while parallel monitors were defined in [5]. We observe that the rules RecF and RecB are not the standard recursion rules from [2] and [12], but they are equivalent to those rules [5, 1] and more convenient for our arguments.
A transition denotes that the monitor in state can analyse the (visible) action and transition to state . Monitors may reach any one of three verdicts after analysing a finite trace: acceptance, rejection, and the inconclusive verdict. We highlight the transition rule for verdicts in Fig. 2, describing the fact that from a verdict state any action can be analysed by transitioning to the same state; verdicts are thus irrevocable. Rule Par states that both submonitors need to be able to analyse an external action for their parallel composition to transition with that action. The rules in Fig. 2 also allow transitions for the reconfiguration of parallel compositions of monitors. For instance, rules VrC1 and VrC2 describe the fact that, in conjunctive parallel compositions, whereas acceptance verdicts are uninfluential, rejection verdicts supersede the verdicts of other monitors (Fig. 2 omits the obvious symmetric rules). The dual applies for acceptance and rejection verdicts in a disjunctive parallel composition, as described by rules VrD1 and VrD2 (again, we omit the obvious symmetric rules). Rule VrE applies to both forms of parallel composition and consolidates multiple inconclusive verdicts. Finally, rule TauL and its omitted dual TauR are contextual rules for these monitor reconfiguration steps.
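The verdict-evaluation behaviour described above can be sketched operationally. The following Python fragment is an illustrative sketch only, not the paper's formal LTS: it shows how the verdicts of two submonitors combine under the conjunctive and disjunctive parallel operators, with the names yes, no and end standing in for the acceptance, rejection and inconclusive verdicts.

```python
# Sketch of the verdict-evaluation rules: how two submonitor verdicts
# combine under the two parallel operators. Verdict names are stand-ins.
YES, NO, END = "yes", "no", "end"

def combine_conj(v1, v2):
    """Conjunctive composition: a rejection supersedes any other verdict
    (rules VrC1/VrC2), while acceptance verdicts are uninfluential."""
    if NO in (v1, v2):
        return NO
    if v1 == YES and v2 == YES:
        return YES
    return END  # rule VrE consolidates inconclusive verdicts

def combine_disj(v1, v2):
    """Disjunctive composition: the dual situation, acceptance supersedes
    (rules VrD1/VrD2) and rejection verdicts are uninfluential."""
    if YES in (v1, v2):
        return YES
    if v1 == NO and v2 == NO:
        return NO
    return END
```

Note that combine_conj(YES, END) yields END: an acceptance verdict in a conjunctive composition simply defers to the remaining monitor, matching the decomposition stated later in Lem. 3.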
Definition 1 (Acceptance and Rejection)
We say that rejects (accepts) when (). We similarly say that rejects (accepts) if rejects (accepts) some prefix of .
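To make Def. 1 concrete, here is a minimal, hypothetical interpreter for a deterministic regular monitor represented as a transition table (state to action to state). It is a sketch under simplifying assumptions — in particular, a stuck monitor is mapped to the inconclusive verdict, and the verdict names are stand-ins — not the paper's formal semantics.

```python
VERDICTS = {"yes", "no", "end"}

def run(monitor, state, trace):
    """Run a monitor (dict: state -> {action: state}) on a finite trace.
    Once a verdict is reached it is irrevocable (rule Ver)."""
    for action in trace:
        if state in VERDICTS:
            return state  # verdicts absorb all further actions
        # Simplification: a stuck monitor is treated as inconclusive.
        state = monitor.get(state, {}).get(action, "end")
    return state

# A monitor that rejects any trace starting with the actions a, b.
reject_ab = {"m0": {"a": "m1"}, "m1": {"b": "no"}}
```

For instance, run(reject_ab, "m0", "ab") returns "no", and by verdict persistence so does running it on any extension of the trace ab.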
Just like for formulae, we use to denote the length of as a string of symbols. In the sequel, for a finite nonempty set of indices , we use to denote any combination of the monitors in using the operator . The notation is justified, because is commutative and associative with respect to the transitions that a resulting monitor can exhibit. For each , is called a summand of (and the term is called a sum of ). The regular monitors in Fig. 2 have an important property, namely that their state space, the set of reachable states, is finite (see Remark 1). On the other hand, parallel monitors can be infinitestate, but they are convenient when one synthesizes monitors. However, the two monitoring systems are equivalent (see Prop. 2). For a monitor , is the set of monitor states reachable through a transition sequence from .
Lemma 2
Every submonitor of a closed regular monitor can only transition to submonitors of .
Remark 1
An immediate consequence of Lem. 2 is that regular monitors are finitestate. This is not the case for parallel monitors, in general. For example, consider parallel monitor . We can see that there is a unique sequence of transitions that can be made from :
One basic requirement that we maintain on monitors is that they are not allowed to give conflicting verdicts for the same trace.
Definition 2 (Monitor Consistency)
A monitor is consistent when there is no finite trace such that and .
We identify a useful monitor predicate that allows us to neatly decompose the behaviour of a parallel monitor in terms of its constituent submonitors.
Definition 3 (Monitor Reactivity)
We call a monitor reactive when for every and , there is some such that .
The following lemma states that parallel monitors behave as expected with respect to the acceptance and rejection of traces as long as the constituent submonitors are reactive.
Lemma 3 ([5])
For reactive and :
1. rejects if and only if either or rejects .
2. accepts if and only if both and accept .
3. rejects if and only if both and reject .
4. accepts if and only if either or accepts .
The following example, which stems from [5], indicates why the assumption that and are reactive is needed in Lem. 3.
Example 1
Assume that . The monitors and are both reactive. The monitor , however, is not reactive. Since the submonitor can only transition with , according to the rules of Fig. 2, cannot transition with any action that is not . Similarly, as the submonitor can only transition with , cannot transition with any action that is not . Thus, cannot transition to any monitor, and therefore it cannot reject or accept any trace.
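The failure described in Example 1 can be replayed with a small, hypothetical sketch of rule Par: a parallel composition can analyse a visible action only when both submonitors can. Monitors are again represented as transition tables, and the action names a and b are illustrative.

```python
def par_step(m1, m2, action):
    """Rule Par (sketch): the composition transitions with `action`
    only if both components can; otherwise it is stuck (None)."""
    if action in m1 and action in m2:
        return (m1[action], m2[action])
    return None

only_a = {"a": "yes"}  # can only analyse action a
only_b = {"b": "yes"}  # can only analyse action b
```

The composition of only_a and only_b is stuck on every action: par_step(only_a, only_b, "a") and par_step(only_a, only_b, "b") both return None, so, as in Example 1, the composed monitor can never reach a verdict.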
In general, we are interested in reactive parallel monitors, and the parallel monitors that we use will have this property.
2.3 Automata, Languages, Equivalence
In [5], we describe how to transform a parallel monitor to a verdict equivalent regular one. This transformation goes through alternating automata [9, 11]. For our purposes, we only need to define nondeterministic and deterministic automata.
Definition 4 (Finite Automata)
A nondeterministic finite automaton (NFA) is a quintuple , where is a finite set of states, is a finite alphabet (here it coincides with the set of actions), is the starting state, is the set of accepting, or final, states, and is the transition relation. An NFA is deterministic (DFA) if is a function from to .
Given a state and a symbol , returns a set of possible states where the NFA can transition, and we typically use instead of . We extend the transition relation to , so that and . We say that the automaton accepts when , and that it recognizes when is the set of strings accepted by the automaton.
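The extended transition relation can be evaluated on the fly by tracking the set of reachable states. The following sketch assumes a particular representation (the transition relation as a dict from (state, symbol) pairs to sets of states) and checks whether an NFA accepts a finite word.

```python
def nfa_accepts(delta, start, finals, word):
    """Track the set of states reachable from `start` on `word`;
    accept iff some reachable state is final."""
    current = {start}
    for symbol in word:
        current = set().union(*(delta.get((q, symbol), set()) for q in current))
    return bool(current & finals)

# Example NFA over {a, b} accepting exactly the words that end in "ab".
delta = {("q0", "a"): {"q0", "q1"},
         ("q0", "b"): {"q0"},
         ("q1", "b"): {"q2"}}
```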
Definition 5 (Monitor Language Recognition)
A monitor recognizes positively (negatively) a set of finite traces (a language) when for every , if and only if accepts (rejects) . We call the set that recognizes positively (negatively) (). Similarly, we say that recognizes positively (negatively) a set of infinite traces when for every , if and only if accepts (rejects) .
Observe that, by Lem. 1, and are closed under finite extensions.
Lemma 4
The set of infinite traces that is recognized positively (negatively) by is exactly ().
Proof
The lemma is a consequence of verdict persistence (Lem. 1). ∎
To compare different monitors, we use a notion of monitor equivalence from [4] that focusses on how monitors can reach verdicts.
Definition 6 (Verdict Equivalence)
Monitors and are verdict equivalent, denoted as , if and .
One may consider the notion of verdict equivalence, as defined in Def. 6, to be too strict. After all, verdict equivalence is defined with respect to finite traces, so if we want to turn a parallel monitor into a regular or deterministic one, the resulting monitor not only needs to accept and reject the same infinite traces, but is required to do so at the same time as the original parallel monitor. However, one may prefer a smaller, though not tight, monitor, if possible, as long as it accepts the same infinite traces.
Definition 7 (ω-Verdict Equivalence)
Monitors and are ω-verdict equivalent, denoted as , if and .
From Lem. 4 we observe that verdict equivalence implies ω-verdict equivalence. The converse does not hold, because , but .
Definition 8 ([3])
A closed regular monitor is deterministic iff every sum of at least two summands that appears in is of the form , where .
Example 2
The monitor a.b.yes + a.a.yes is not deterministic, while the verdict equivalent monitor a.(b.yes + a.yes) is deterministic.
2.4 Synthesis
There is a tight connection between the logic from Sec. 2.1 and the monitoring systems from Sec. 2.2. Ideally, we would want to be able to synthesize a monitor from any formula , such that the monitor recognizes positively and negatively. However, as shown in [5], neither goal is possible for all formulae. Instead, we identify the following fragments of .
Definition 9 (MAX and MIN Fragments of )
The greatestfixedpoint and leastfixedpoint fragments of are, respectively, defined as:
Definition 10 (Safety and coSafety Fragments of )
The safety and cosafety fragments of are, respectively, defined as:
Theorem 2.1 (Monitorability and Maximality, [5])

1. For every (), there is a reactive parallel monitor , such that and ().
2. For every reactive parallel monitor , there are and , such that , , and .
3. For every (), there is a regular monitor , such that and ().
4. For every regular monitor , there are and , such that , , and .
We say that a logical fragment is monitorable for a monitoring system, such as parallel or regular monitors, when for each of the fragment’s formulae there is a monitor that detects exactly the satisfying or violating traces for that formula. One of the consequences of Thm. 2.1 is that the fragments defined in Defs. 9 and 10 are semantically the largest monitorable fragments of the logic for parallel and regular monitors, respectively. As we will see in Sec. 3, every parallel monitor has a verdict equivalent regular monitor (Props. 4 and 3), and therefore all formulae in and can be translated into equivalent and formulae respectively, as Thm. 5.1 later on demonstrates. However, Thm. 5.2 to follow shows that the cost of this translation is significant.
3 Monitor Transformations: Upper Bounds
In this section we explain how to transform a parallel monitor into a regular or deterministic monitor, and at what cost, in monitor size, this transformation comes. The various relevant transformations, including some from [3] and [9, 11], are summarized in Fig. 3, where each edge is labelled with the best-known worst-case upper bound for the cost of the corresponding transformation (AFA abbreviates alternating finite automaton [9]). As we see in [4, 3] and in Sec. 4, these bounds cannot be improved significantly.
Proposition 1 ([5])
For every reactive parallel monitor , there are an alternating automaton that recognizes and one that recognizes , each with states.
Proof
The construction from [5, Proposition 3.6] gives an automaton that has the submonitors of as states. The automaton’s transition function corresponds to the semantics of the monitor. ∎
Corollary 1 (Corollary 3.7 of [5])
For every reactive and closed parallel monitor , there are an NFA that recognizes and an NFA that recognizes , each with at most states.
Proposition 2
For every reactive and closed parallel monitor , there exists a verdict equivalent regular monitor such that .
Proof
Theorem 3.1 (Corollary 3 of [4])
For every consistent closed regular monitor , there is a deterministic monitor such that and .
Proposition 3 (Proposition 3.11 of [5])
For every consistent reactive and closed parallel monitor , there is a verdict equivalent deterministic regular monitor such that .
However, the bound given by Prop. 3 for the construction of deterministic regular monitors from parallel ones is not optimal, as we observe below.
Proposition 4
For every consistent reactive and closed parallel monitor , there is a deterministic monitor such that and .
In the following Sec. 4, we see that the upper bounds of Props. 4 and 2 are tight, even for monitors that can only accept or reject, and even when the constructed regular or deterministic monitor is only required to be ω-verdict equivalent, and not necessarily verdict equivalent, to the original parallel monitor. As we only need to focus on acceptance monitors, in the following we say that a monitor recognizes a language to mean that it recognizes the language positively.
4 Lower Bounds
We now prove that the transformations of Sec. 3 are optimal, by establishing the corresponding lower bounds. To this end, we introduce a family of suffix-closed languages . Each is a variation of a language introduced in [9] to prove the lower bound for the transformation from an alternating automaton to a deterministic one. In this section, we only need to consider closed monitors, and as such, all monitors are assumed to be closed.
A variation of the language that was introduced in [9] is the following:
An alternating automaton that recognizes can nondeterministically skip to the first occurrence of and then verify that, for every number between and , the ’th bit matches the ’th bit after the symbol. This verification can be done using up to states, to count the position of the bit that is checked. On the other hand, a DFA that recognizes must remember all possible candidates for that have appeared before , and hence requires states. We can also conclude that any NFA for must have at least states, because a smaller NFA could be determinized to a smaller DFA.
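The determinization step invoked in this argument is the textbook subset construction, which is the source of the exponential blowup. Below is a minimal sketch of it, not tailored to the specific language discussed here; the NFA representation (a dict from (state, symbol) pairs to sets of states) is an assumption of the sketch.

```python
from collections import deque

def determinize(delta, start, finals):
    """Subset construction: DFA states are *sets* of NFA states,
    which is why a DFA may need exponentially many of them."""
    alphabet = {a for (_, a) in delta}
    init = frozenset({start})
    dfa_delta, seen, todo = {}, {init}, deque([init])
    while todo:
        S = todo.popleft()
        for a in alphabet:
            T = frozenset(q2 for q in S for q2 in delta.get((q, a), ()))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    dfa_finals = {S for S in seen if S & finals}
    return dfa_delta, init, dfa_finals
```

On a small NFA few subsets are reachable, but for the languages of this section the reachable subsets must record all candidate bitstrings seen so far, which is what forces the exponential lower bound.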
A gap language
For our purposes, we use a similar family of suffixclosed languages, which are designed to be recognized by small parallel monitors, but such that each regular monitor recognizing must be “large”. We fix two numbers , such that . First, we observe that we can encode every string as a string , where is a permutation of and, for all , . Then, gives the information that, for being the number with binary representation , the ’th position of holds bit . Let
Let , where and , and for , and . We define to mean that for every , implies . Let ; then, is the ordered encoding of , where is the binary encoding of . Then, is called an encoding of if .
Let and . Then,
In other words, a finite trace is in exactly when it has a substring of the form , where and are encodings of the same string and there is only one between them. Intuitively, is there to delimit bitstrings that may be elements of , and delimits sequences of such bitstrings. So, the language asks whether there are two such consecutive sequences where the last bitstring of the second sequence comes from and matches an element from the first sequence. We observe that is suffix-closed.
Lemma 5
if and only if .
Conventions
For the conclusions of Lem. 3 to hold, monitors need to be reactive. However, a reactive monitor can have a larger syntactic description than an equivalent non-reactive one, vs. , when . This last monitor is also verdict equivalent to . In what follows, for brevity and clarity, whenever we write a sum of a monitor of the form , we will mean , which is reactive, so it can be safely used with a parallel operator, and is verdict equivalent to . We use set notation for monitors: for , stands for (or under the above convention). Furthermore, we define and . Notice that . We can also similarly define for .
Auxiliary monitors
We start by defining auxiliary monitors. Given a (closed) monitor , let
These monitors read the trace until they reach a certain symbol, and then they activate submonitor . Intuitively, nondeterministically skips to some occurrence of that comes before the first occurrence of ; and respectively skip to the next occurrence of and ; and skips to the last occurrence of before the next occurrence of .
Lemma 6
accepts iff there are and , such that , accepts , and .
The following lemmata are straightforward and explain how the remaining monitors defined above are used.
Lemma 7
accepts iff there are and , such that , accepts , and .
Lemma 8
accepts iff there are and , such that , accepts , and .
Lemma 9
accepts iff there are , and , such that , accepts , , and .
The following monitors help us ensure that a bitstring from is actually a member of . Monitor ensures that all bit positions appear in the bitstring; assures us that the bit position does not appear any more in the remainder of the bitstring; and guarantees that each bit position appears at most once. Monitor combines these monitors together.
for ,
The purpose of is to ensure that a certain block of bits before the appearance of the symbol is a member of the set : it accepts exactly when is a sequence of blocks of bits with length exactly (by ) and for every there is some such that is one of these blocks (by ), and that for each such only one block is of the form (by ).
Lemma 10
accepts iff is a prefix of , for some .
Lemma 11
.
Given a block of bits, monitor accepts a sequence of blocks of bits exactly when is one of the blocks of :
Lemma 12
For , accepts if and only if there is some , such that is a prefix of .
For , ensures that right before the second occurrence of , there is a , where and is a bit block in .
Lemma 13
For , accepts if and only if there are , such that , , and there is a prefix of , such that .
Recognizing with a parallel monitor
We can now define a parallel acceptance monitor of length that recognizes . Monitor ensures that every one of the consecutive blocks of bits that follow also appears in the block of bits that appears right before the occurrence of that follows the next (and that there is no other between these and ). Therefore, if what follows from the current position in the trace and what appears right before that occurrence of are elements of , ensures that . Then, nondeterministically chooses an occurrence of and verifies that the block of bits that follows is an element of that ends with , and that what follows is of the form , where , which matches the description of .
Lemma 14
recognizes and .
Proof
The lemma follows from this section’s previous lemmata and from counting the sizes of the various monitors that we have constructed. ∎
Lemma 15
If is a deterministic monitor that recognizes , then
Thm. 4.1 gathers our conclusions about .
Theorem 4.1
For every , is recognized by an alternating automaton of states and a parallel monitor of length , but by no DFA with states and no deterministic monitor of length . is recognized by a parallel monitor of length , but by no deterministic monitor of length .
Proof
Lem. 14 informs us that there is a parallel monitor of length that recognizes . Therefore, it also recognizes by Lem. 1. Prop. 1 tells us that can be turned into an alternating automaton with states that recognizes . Lem. 15 yields that there is no deterministic monitor of length that recognizes that language. From [4], we know that if there were a DFA with states that recognizes , then there would be a deterministic monitor of length that recognizes , which, as we argued, cannot exist. ∎
Hardness for regular monitors
Thm. 4.1 does not guarantee that the upper bound for the transformation from a parallel monitor to a nondeterministic regular monitor is tight. To prove a tighter lower bound, let be the language that includes all strings of the form where for , , and , and for every , encodes a string that is smaller, in the lexicographic order, than the string encoded by .
Lemma 16
if and only if .
We describe how can be recognized by a parallel monitor of size . The idea is that we need to compare the encodings of two consecutive blocks of bits. Furthermore, a string is smaller than another, in the lexicographic order, if there is a position where the first string has the bit 0 and the second the bit 1, and at every earlier position the bits of the two strings are the same. We define the following monitors:
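The comparison this construction must implement is the standard characterization of the strict lexicographic order on equal-length bitstrings. As a plain sketch, independent of the monitor definitions themselves:

```python
def lex_smaller(u, v):
    """u is strictly smaller than v (equal-length bitstrings) iff at the
    first position where they differ, u has '0' and v has '1'."""
    assert len(u) == len(v)
    for x, y in zip(u, v):
        if x != y:
            return x == "0" and y == "1"
    return False  # equal strings are not strictly smaller
```

The monitors below distribute exactly this check over the trace: one component guesses the first differing position, and others verify that all earlier bits agree.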