1 Introduction
Runtime Verification (RV) [LeuckerS09, RVTutorial, Bartocci2017] is a lightweight formal method which consists in verifying that a run of a system is correct with respect to a specification. The specification formalizes the behavior of the system, typically in logics (such as variants of Linear-Time Temporal Logic, LTL) or finite-state machines. Typically, the system is considered as a black box that feeds events to a monitor. An event usually consists of a set of atomic propositions that describe some abstract operations or states in the system. The sequence of events transmitted to the monitor is referred to as the trace. Based on the received events, the monitor emits verdicts in a truth domain that indicate whether or not the run complies with the specification. A typical truth domain is a set where verdicts ⊤ and ⊥ indicate respectively that a program complies with or violates the specification, and verdict ? indicates that no final verdict could be reached yet. Truth domains can also include additional verdicts such as currently true and currently false, to indicate a finer-grained truth value. RV techniques have been used, for instance, in the context of decentralized automotive [ex:autosar] and medical [ex:medical] systems. In both cases, RV is used to verify correct communication patterns between the various components and their adherence to the architecture and their formal specifications. While RV comprehensively deals with monolithic systems, multiple challenges arise when scaling existing approaches to decentralized systems, that is, systems with multiple components and no central observation point. These challenges are inherent to the nature of decentralization; the monitors have a partial view of the system and need to account for communication and consensus.
Our assumptions on the system are as follows: no monitor is malicious, i.e., messages do not contain wrong information; no messages are lost, i.e., they are eventually delivered in their entirety, but possibly out of order; and all components share one logical discrete time, marked by round numbers indicating relevant transitions in the system specification.
Challenges.
Several algorithms have been designed [BauerF12, FalconeCF14, BonakdarpourFRT16, DecentMon] and used [Bartocci13] to monitor decentralized systems. Algorithms are primarily designed to address one issue at a time and are typically evaluated experimentally by considering runtime and memory overheads. However, such algorithms are difficult to compare as they may combine multiple approaches at once. For example, algorithms that use LTL rewriting [BauerF12, DecentMon, HavelundR01] not only exhibit variable runtime behavior due to the rewriting, but also incorporate different monitor synthesis approaches that separate the specification into multiple smaller specifications depending on the monitor. In this case, we would like to separate the problem of generating equivalent decentralized specifications from a centralized one (synthesis) from the problem of monitoring. In addition, works on characterizing what one can monitor (i.e., monitorability [KimVBKLS99, PnueliZ06, FalconeFM12]) for centralized specifications exist [LTL3Tools, FalconeFM12, Diekert201429], but do not extend to decentralized specifications. For example, by splitting an LTL formula ad hoc, it is possible to obtain a non-monitorable subformula, which interferes with the completeness of a monitoring algorithm. (We use the example from [DecentMon], involving propositions that should hold infinitely often: the formula is monitorable, but its subformulas are both non-monitorable.)
Contributions.
We tackle the presented challenges using two complementary approaches. The first approach consists in using the Execution History Encoding (EHE), a data structure that encodes automata executions. Since by using the EHE one only needs to rewrite Boolean expressions, we are able to determine the parameters and their respective effect on the size of expressions, and fix upper bounds. In addition, the EHE is designed to be particularly flexible in processing, storing and communicating the information in the system. The EHE operates on an encoding of atomic propositions and guarantees strong eventual consistency [CRDT]. The second approach introduces decentralized specifications: we define their semantics and interdependencies, and study some of their properties. We aim at abstracting the high-level steps of decentralized monitoring. By identifying these steps, we elaborate a general decentralized monitoring algorithm. We view a decentralized system as a set of components. A decentralized specification is thus a set of finite-state automata with specific properties, which we call monitors. We associate monitors with these components, with the possibility of multiple monitors being associated with a component. Therefore, we generalize monitoring algorithms to multiple monitors. Monitoring a centralized system can be seen as a special case with one component, one specification, and one monitor. As such, we present a general decentralized monitoring algorithm that uses two high-level steps: setup and monitor. The setup phase creates the monitors, defines their dependencies, and attaches them to components; it thus defines a topology of monitors and their dependencies. The monitor phase allows the monitors to begin monitoring and propagating information to reach a verdict when possible. Therefore, the two high-level operations help decompose monitoring into different subproblems and define them independently.
For example, the problem of generating a decentralized specification from a centralized specification is separated from checking the monitorability of a specification, and also from the computation and communication performed by the monitors. We formulate and solve the problems of deciding compatibility and monitorability for decentralized specifications. Compatibility ensures that a monitor topology can be deployed on a given system; monitorability ensures that, given a specification, monitors are able to eventually emit a verdict for all possible traces. We present THEMIS, a Java tool that implements the concepts in this paper, and show how it can be used to design and analyze new algorithms. We use THEMIS to create new metrics related to load-balancing and our data structures. We use two scenarios to compare four existing algorithms. The first scenario is a synthetic benchmark, using random traces and specifications, while the second scenario is a real example that uses the publish-subscribe pattern in the Chiron graphical user interface system. The synthetic scenario examines the trends of the analysis, and the Chiron scenario examines more specific differences in behavior.
This paper extends the work presented at the ACM SIGSOFT International Symposium on Software Testing and Analysis 2017 [themisissta], as follows:

adding the property that the EHE construction guarantees its determinism (Proposition 2);

elaborating and adding properties of decentralized specifications (monitorability, compatibility) as well as the algorithms for checking them (Section 6);

improving THEMIS by optimizing the EHE performance, and adding distributed and multithreaded support (Section 8);

elaborating on the results and providing a discussion of the synthetic benchmarks (Section 9.1);

evaluating the algorithms on a new use case based on the Chiron example, which relies on publish-subscribe and has a formalized specification (Section 9.2).
Overview.
After presenting related work in Section 2, we lay out the basic blocks by introducing our basic data structure (dict) and the basic notions of monitoring with expressions in Section 3. Then, we present our first approach, a middle ground between rewriting and automata evaluation, by introducing the Execution History Encoding (EHE) data structure in Section 4. We then shift the focus to studying decentralized specifications by defining their semantics (Section 5) and their properties (Section 6). In Section 7, we use our analysis of the EHE to study the behavior of three existing algorithms and discuss the situations that favor certain algorithms over others. In Section 8, we present the THEMIS tool, which we use in Section 9 to compare the algorithms presented in Section 7 under two different scenarios: a synthetic random benchmark, and an example of a publish-subscribe system. In Section 10, we present future work and formulate additional interesting properties for decentralized specifications. Finally, we conclude in Section 11.
2 Related Work
Several approaches have been taken to handle decentralized monitoring. The first class of approaches consists in monitoring by rewriting formulae, the second class handles faulttolerance, and the third class defines specifications for monitoring streams.
Formula rewriting.
The first class of approaches consists in monitoring by LTL formula rewriting [HavelundR01, BauerF12, DecentMon]. Given an LTL formula specifying the system, a monitor rewrites the formula based on information it has observed or received from other monitors, to generate a formula that has to hold on the next timestamp. Typically, a formula is rewritten and simplified until it is equivalent to ⊤ (true) or ⊥ (false), at which point the algorithm terminates. Another approach [MTL] extends rewriting to focus on real-time systems. It uses Metric Temporal Logic (MTL), an extension of LTL with timing constraints on temporal operators. This approach also covers lower-bound analysis on monitoring MTL formulae. While these techniques are simple and elegant, rewriting varies significantly at runtime based on observations; thus, analyzing the runtime behavior could prove difficult if not unpredictable. For example, when excluding specific syntactic simplification rules, a rewritten formula can keep growing as a function of the number of timestamps. To tackle the unpredictability of rewriting LTL formulae, another approach [FalconeCF14] uses automata for monitoring regular languages, and therefore (i) can express richer specifications, and (ii) has predictable runtime behavior. These approaches use a centralized specification to describe the system behavior.
Fault-tolerant monitoring.
Another class of research focuses on handling a different problem that arises in distributed systems. In [BonakdarpourFRT16], monitors are subject to many faults, such as failing to receive correct observations or to communicate state with other monitors. Therefore, the problem handled is that of reaching consensus with fault tolerance, and it is solved by determining the verdict domain necessary to be able to reach a consensus. To remain general, we do not impose the restriction that all monitors must reach the verdict when it is known, as we allow different specifications per monitor. Since we have heterogeneous monitors, we are not particularly interested in consensus. However, for multiple monitors tasked to monitor the same specification, we are interested in strong eventual consistency. We maintain the 3-valued verdict domain, and tackle the problem from a different angle by considering eventual delivery of messages. Similar work [FAMTL] extends the MTL approach to deal with failures by modeling knowledge gaps and working on resolving these gaps. We also highlight that the mentioned approaches [BauerF12, FAMTL, DecentMon], and other works [DistVolker, DistSen, DistSchmitz], do in effect define separate monitors with different specifications, typically by splitting the formula into subformulas; they then describe the collaboration between such monitors. However, they primarily focus on presenting one global formula of the system from which they derive multiple specifications. In our approach, we generalize the notions from a centralized to a decentralized specification, and separate the problem of generating multiple specifications equivalent to a centralized specification from the monitoring of a decentralized specification (Section 10).
Specifications over streams.
Specification languages have been developed that monitor synchronous systems as streams [LOLA, TESSLA]. In this setting, events are grouped as a stream, and streams are then aggregated by various operators. The output domain extends beyond the Boolean domain and encompasses types. The stream approach to monitoring has the advantage of aggregating types; as such, operations such as summing, averaging, or pulling statistics across multiple streams are also possible. Stream combination is thus provided by general-purpose functions, which are more complex to analyze than automata. This is similar to complex event processing, of which RV is a special case [CEPRV]. Specification languages such as LOLA [LOLA] even define dependency graphs between various stream information, and define properties such as well-formed and efficiently monitorable LOLA specifications. The former ensures that dependencies in the trace can be resolved before they are needed, and the latter ensures that the memory requirement is no more than constant with respect to the length of the trace. While streams are general enough to express monitoring, they do not address decentralized monitoring explicitly. As such, there is no explicit assignment of monitors to components and parts of the system, nor consideration of the architecture. Furthermore, there is no algorithmic consideration addressing monitoring in a decentralized fashion, even though some works such as [BEEPBEEP3] do provide multithreaded implementations.
3 Common Notions
We begin by introducing the dict data structure (Section 3.1), used to build more complex data structures, and defining the basic concepts for decentralized monitoring (Section 3.2).
3.1 The dict Data Structure
In monitoring decentralized systems, monitors typically have a state and attempt to merge other monitors' states with theirs to maintain a consistent view of the running system; that is, at no point in the execution should two monitors receive updates that conflict with one another. We would like, in addition, that any two monitors receiving the same information be in equivalent states. Therefore, we are interested in designing data structures that can replicate their state under strong eventual consistency (SEC) [CRDT]; such data structures are known as state-based convergent replicated data types (CvRDTs). We use a dictionary data structure (noted dict) as our basic building block that assigns a value to a given key. Data structure dict will be used to define the memory of a monitor (Section 3.2), and the data structure EHE, which encodes the execution of an automaton (Section 4.2).
We model dict as a partial function. The domain of the function is its set of keys, while its codomain is its set of values. dict supports two operations: query and merge. The query operation checks whether a key is in the domain of the function and, if so, returns the associated value; otherwise, the result is undefined. The merge operation of a dict with another dict is modeled as function composition: two partial functions are composed using an operator parameterized by a binary function that combines the values of shared keys.
On sets of functions, the composition applies pairwise. Two such operators are used in the rest of the paper; we define both of them to be commutative, idempotent, and associative to ensure SEC.
The first operator acts as a replace function based on a total order between the elements, always choosing the greatest element to guarantee idempotence, while the second uses the logical or to combine elements. We denote the associated pairwise set operators accordingly.
Data structure dict can be composed only using operation merge. Since modifications never remove entries, the state of a dict is monotonically increasing with respect to the order induced by merge. By ensuring that merge is idempotent, commutative, and associative, we fulfill the necessary conditions [CRDT] for our data structure to be a CvRDT.
Proposition 1
Data structure dict with operations query and merge is a CvRDT.
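The mechanics of query and merge can be sketched as follows. This is a minimal Python illustration, not a THEMIS API: the names query, merge, and lor are ours, and a dict is modeled as a plain Python mapping.

```python
# Sketch of the dict data structure: query on a partial function, and
# a pairwise merge parameterized by a binary operator `op`. If `op` is
# commutative, idempotent, and associative, merging in any order and
# any number of times converges to the same state (CvRDT behavior).

def query(d, key):
    """Return the value bound to key, or None if key is not in the
    domain (i.e., the entry is undefined)."""
    return d.get(key)

def merge(d1, d2, op):
    """Pairwise composition: keep the entries of both dicts, combining
    the values of shared keys with op."""
    result = dict(d1)
    for k, v in d2.items():
        result[k] = op(result[k], v) if k in result else v
    return result

# The logical-or operator (idempotent, commutative, associative):
lor = lambda a, b: a or b
```

Merging with `max` over a total order instead of `lor` gives the replace-style operator; both satisfy the three laws required for strong eventual consistency.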
3.2 Basic Monitoring Concepts
We recall the basic building blocks of monitoring. We consider the set of verdicts {⊤, ⊥, ?}, denoting the verdicts true, false, and not reached (or inconclusive), respectively. A verdict in {⊤, ⊥} is a final verdict. It indicates that the monitor has concluded its monitoring, and any further input will not affect it. Abstract states of a system are represented as a set of atomic propositions (AP). A monitoring algorithm typically includes additional information, such as a timestamp associated with the atomic propositions. We capture this information as an encoding of the atomic propositions into atoms; this encoding is left to the monitoring algorithm to specify. We consider Boolean expressions over atomic propositions or atoms; when omitted, an expression is over atoms. An encoder is a function that encodes the atomic propositions into atoms. In this paper, we use two encoders: the identity encoder, which does not modify the atomic proposition, and a timestamping encoder, which adds a timestamp to each atomic proposition. A decentralized monitoring algorithm requires retaining, retrieving and communicating observations.
Definition 1 (Event)
An observation is a pair consisting of an atomic proposition and a Boolean value, indicating whether or not the proposition has been observed to hold. An event is a set of observations.
Example 1 (Event)
Event over indicates that proposition has been observed to be true, while has been observed to be false.
Definition 2 (Memory)
A memory is a dict, and is modeled as a partial function that associates an atom to a verdict. The set of all memories is defined as .
A monitor stores its events in a memory with some encoding (e.g., adding a timestamp). An event can be converted to a memory by encoding the atomic propositions to atoms, and associating their truth value: .
Example 2 (Memory)
Given an event at a timestamp, the resulting memories using the two encoders are:
If we impose that the set of verdicts be totally ordered, then two memories can be merged by applying the replace operator pairwise. The total ordering is needed for the replace operator; this ensures that the operation is idempotent, associative and commutative. Monitors that exchange their memories and merge them have a consistent snapshot of the memory, regardless of the ordering. Since a memory is a dict and the merge operator is idempotent, associative, and commutative, it follows from Proposition 1 that a memory is a CvRDT.
Corollary 1
A memory with operation is a CvRDT.
In this paper, we perform monitoring by manipulating Boolean expressions. The first operation we provide is rewriting, which rewrites an expression with a memory to attempt to eliminate atoms.
Definition 3 (Rewriting an expression)
An expression is rewritten with a memory using function defined as follows:
Using information from a memory, the expression is rewritten by replacing atoms with a final verdict (a truth value in {⊤, ⊥}) when possible. Atoms that are not associated with a final verdict are kept in the expression. The operation yields a smaller formula to work with.
Example 3 (Rewriting)
We consider and . We have , , . Since is associated with then it will not be replaced when the expression is evaluated. The resulting expression is .
We eliminate additional atoms using Boolean logic. We denote the simplification of expression expr by a dedicated operator (simplification is also known as the Minimum Equivalent Expression problem [CircuitMin]).
Example 4 (Simplification)
Consider and . We have . Atoms can be eliminated with . We finally get .
We combine both rewriting and simplification in an evaluation function, which determines a verdict from an expression.
Definition 4 (Evaluating an expression)
The evaluation of a Boolean expression using a memory yields a verdict. Function is defined as:
The evaluation function returns the verdict ⊤ (resp. ⊥) if the simplification after rewriting is (Boolean) equivalent to ⊤ (resp. ⊥); otherwise, it returns verdict ?.
Example 5 (Evaluating expressions)
Consider and . We have , and which depends on : we cannot emit a final verdict before observing .
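The rewriting and evaluation pipeline of Definitions 3 and 4 can be sketched as follows. This is a hypothetical Python encoding of our own design: expressions are nested tuples over atoms, and the simplifier is only a constant folder, since full minimization is the Minimum Equivalent Expression problem mentioned above.

```python
# Boolean expressions as nested tuples: ('atom', name), ('const', v),
# ('not', e), ('and', e1, e2), ('or', e1, e2). A memory maps atoms to
# verdicts; True/False are final verdicts, None is the '?' verdict.
TOP, BOT, UNKNOWN = True, False, None

def rw(expr, mem):
    """Replace atoms bound to a final verdict in mem; keep the rest."""
    kind = expr[0]
    if kind == 'const':
        return expr
    if kind == 'atom':
        v = mem.get(expr[1])
        return ('const', v) if v in (TOP, BOT) else expr
    if kind == 'not':
        return ('not', rw(expr[1], mem))
    return (kind, rw(expr[1], mem), rw(expr[2], mem))  # 'and' / 'or'

def simplify(expr):
    """Constant-fold the expression (a cheap approximation of full
    Boolean minimization)."""
    kind = expr[0]
    if kind in ('atom', 'const'):
        return expr
    if kind == 'not':
        e = simplify(expr[1])
        return ('const', not e[1]) if e[0] == 'const' else ('not', e)
    l, r = simplify(expr[1]), simplify(expr[2])
    unit, absorb = (True, False) if kind == 'and' else (False, True)
    for a, b in ((l, r), (r, l)):
        if a == ('const', absorb):   # x and false = false; x or true = true
            return ('const', absorb)
        if a == ('const', unit):     # x and true = x; x or false = x
            return b
    return (kind, l, r)

def evaluate(expr, mem):
    """Rewrite then simplify; emit a final verdict or ? (None)."""
    e = simplify(rw(expr, mem))
    return e[1] if e[0] == 'const' else UNKNOWN
```

A conjunction over an atom still missing from the memory evaluates to ?, mirroring Example 5: no final verdict can be emitted before observing the missing proposition.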
A decentralized system is a set of components. We assign a sequence of events to each component using a decentralized trace function.
Definition 5 (Decentralized trace)
A decentralized trace of length n is a function from components and timestamps to events, where timestamps range over the interval of the first n non-zero natural numbers.
The function assigns an event to a component for a given timestamp. We also consider the set of all possible decentralized traces, and we additionally define a function that assigns each atomic proposition to a component. We assume that (1) no two components can observe the same atomic propositions, and (2) a component has at least one observation at all times (a component with no observations to monitor can simply be considered excluded from the system under monitoring).
We consider timestamp 0 to be associated with the initial state; therefore, our traces start at 1. The length of a trace tr is denoted by |tr|; an empty trace has length 0. Monitoring using LTL or finite-state automata relies on sequencing the trace: events must be totally ordered. A timestamp simply indicates the order of the sequence of events. As such, a timestamp represents a logical time; it can be seen as a round number. Every round consists in a transition taken on the automaton after reading a part of the word. While a decentralized trace gives us a view of what components can locally see, we reconstruct the global trace to reason about all observations. A global trace of the system is therefore a sequence of events that encompasses all observations observed locally by components.
Definition 6 (Reconstructing a global trace)
Given a decentralized trace of length , we reconstruct the global trace using function defined as s.t. .
For each timestamp, we take the observations of all components and union them to get a global event. Consequently, an empty decentralized trace yields an empty global trace.
Example 6 (Traces)
We consider a system of two components and , that are associated with atomic propositions and respectively. An example decentralized trace of the system is given by . That is, component observes proposition to be at both timestamps 1 and 2, while observes to be at timestamp 1 and at timestamp 2. The associated global trace is: .
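The reconstruction of Definition 6 can be sketched as follows. This is an assumed Python modeling (names are ours): a decentralized trace is a mapping from (component, timestamp) pairs to events, an event being a set of (proposition, value) observations.

```python
# Sketch of reconstructing a global trace from a decentralized trace.

def global_trace(rho, components, n):
    """rho maps (component, t) to a set of observations; n is the
    trace length. For each t in [1, n], union the per-component events
    into one global event, yielding the global trace [e_1, ..., e_n]."""
    return [set().union(*(rho.get((c, t), set()) for c in components))
            for t in range(1, n + 1)]
```

On a two-component trace shaped like Example 6, each global event is simply the union of the two local events at that timestamp, and an empty decentralized trace (n = 0) yields the empty global trace.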
4 Centralized Specifications
We now focus on a decentralized system specified by one global automaton. We consider automata that emit 3-valued verdicts in the domain {⊤, ⊥, ?}, similar to those in [DecentMon, LTL3Tools] for centralized systems. Using automata with 3-valued verdicts has been the topic of much of the runtime verification literature [LTL3Tools, BauerF12, DecentMon, FalconeCF14, FAMTL]; we focus on extending the approach for decentralized systems in [DecentMon] to use a new data structure called Execution History Encoding (EHE). Typically, monitoring is done by labeling an automaton with events, then playing the trace on the automaton and determining the verdict based on the reached state. We present the EHE, a data structure that encodes the necessary information from an execution of the automaton. Monitoring using EHEs ensures strong eventual consistency. We begin by defining the specification automaton used for monitoring in Section 4.1; then we present the EHE data structure and illustrate its usage for monitoring in Section 4.2, and describe its use to reconcile partial observations in Section 4.3.
4.1 Preliminaries
Specifications are similar to the Moore automata generated by [LTL3Tools]. We modify labels to be Boolean expressions over atomic propositions. We choose to label the transitions with Boolean expressions as opposed to events, to keep a homogeneous representation with the EHE. Indeed, an event can be converted to an expression by taking the conjunction of all observations, negating the terms that are associated with the verdict ⊥.
Definition 7 (Specification)
The specification is a deterministic Moore automaton where is the initial state, is the transition function and is the labeling function.
The labeling function associates a verdict with each state. When using multiple automata, we use labels in subscript to separate them. We fix one specification automaton for the remainder of this section. For monitoring, we are interested in events (Definition 1); we therefore extend the transition function to events. We note that in this case, we are not using any encoding (i.e., the identity encoder is used).
Definition 8 (Transition over events)
Given an event , we build the memory . Then, function is defined as follows:
A transition is taken only when an event contains observations (i.e., the event is not empty). This allows the automaton to wait on observations before evaluating; as such, it remains in the same state. Upon receiving observations, we evaluate each label of an outgoing transition to determine whether the transition can be taken (i.e., whether the label evaluates to ⊤).
To handle a trace, we extend the transition function to its reflexive and transitive closure in the usual way. For the empty trace, the automaton makes no moves.
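Definition 8 and its closure can be sketched as follows. This is a simplified Python model of our own (not the paper's notation): transition labels are given directly as three-valued predicates over a memory, returning True, False, or None for ⊤, ⊥, ?.

```python
def step(delta, state, event):
    """Take one transition over an event (Definition 8).
    delta: state -> list of (guard, next_state) pairs;
    a guard is a predicate over a memory returning True/False/None.
    On an empty event, the automaton waits in place."""
    if not event:
        return state
    mem = {ap: v for ap, v in event}  # encode the event as a memory
    for guard, nxt in delta[state]:
        if guard(mem) is True:        # determinism: at most one fires
            return nxt
    return state

def run(delta, q0, trace):
    """Reflexive-transitive closure: play a whole trace of events."""
    q = q0
    for event in trace:
        q = step(delta, q, event)
    return q

def tneg(v):
    """Three-valued negation helper for writing guards."""
    return None if v is None else not v
```

For a complete automaton, exactly one guard per state evaluates to ⊤ on any fully observed event (Remark 1), so the fallback `return state` is only exercised on partial information.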
Example 7 (Monitoring using expressions)
Remark 1 (Properties and normalization)
We recall that the specification is a deterministic and complete automaton. Hence, there are properties on the expressions that label the transition function. For any , we have:

; and

the disjunction of the labels of all outgoing transitions results in an expression that is a tautology.
The first property states that, for all possible memories, no two (or more) labels can evaluate to ⊤ at once. It results from determinism: no two (or more) transitions can be taken at once. The second property results from completeness: given any input, the automaton must be able to take a move. Furthermore, we note that for each pair of states, we can rewrite the transition function such that there exists at most one expression labeling the transition between them, without loss of generality. This is because, for a pair of states, we can always disjoin the expressions to form only one expression, as it suffices that one of the expressions evaluates to ⊤ to reach the target state. By having at most one transition between any pair of states, we simplify the topology of the automaton.
4.2 Execution History Encoding
The execution of the specification automaton is, in fact, the process of monitoring: upon running the trace, the reached state determines the verdict. An execution of the specification automaton can be seen as a sequence of states, indicating for each timestamp the state the automaton is in. In a decentralized system, a component receives only local observations and does not necessarily have enough information to determine the state at a given timestamp. Typically, when sufficient information is shared between the various components, it is possible to know the state reached in the automaton at a given timestamp (we say that the state has been found in such a case). The aim of the EHE is to construct a data structure which follows the current state of an automaton and, in the case of partial information, tracks the possible states the automaton can be in. For that purpose, we need to ensure strong eventual consistency in determining the state of the execution of an automaton. That is, after two different monitors share their EHEs, they should both be able to find the same state (if there exists enough information to infer the global state), or, if not enough information is available, they should both find no state at all.
Definition 9 (Execution History Encoding  Ehe)
An Execution History Encoding (EHE) of the execution of an automaton is a partial function .
For a given execution, we encode the conditions to be in a state at a given timestamp as a Boolean expression. The expression associated with a state and a timestamp tracks whether the automaton is in that state at that timestamp. Using information from the execution stored in a memory, if the expression evaluates to ⊤, then we know that the automaton is indeed in that state at that timestamp. We also use a notation for the set of all timestamps that the EHE encodes. Similarly to automata notation, if multiple EHEs are present, we use a label in the subscript to identify them and their respective operations.
To compute the encoding for a timestamp range, we next define some (partial) functions. The purpose of these functions is to extract information from the EHE at a given timestamp, which we can use to recursively build the encoding for future timestamps. Given a memory which stores atoms, a selection function determines if a state is reached at a timestamp. If the memory does not contain enough information to evaluate the expressions, then no state is selected.
A shorthand function retrieves the verdict at a given timestamp.
The automaton is in the initial state at timestamp 0. We start building the encoding with the initial state, associating it with the expression ⊤. Then, for a given timestamp, we check the next possible states in the automaton by looking at the outgoing transitions of all possible states at that timestamp.
We now build the expression necessary to reach a state from multiple states by disjoining the transition labels. Since the labels consist of expressions over atomic propositions, we use an encoder to get an expression over atoms. To get to a state at the next timestamp, we conjoin the label with the condition to reach the source state at the current timestamp.
By considering the disjunction, we cover all possible paths to reach a given state. Updating the conditions for the same state at the same timestamp is done by disjoining the conditions.
Finally, the encoding is obtained by considering the next states and merging all their expressions into the EHE. We use a superscript to denote the encoding up to a given timestamp.
Table 1: The EHE encoding of the execution up to timestamp 2 (columns: timestamp t, state q, expression e; the expressions themselves are given in the running example).
Example 8 (Monitoring with Ehe)
We encode the execution of the automaton presented in Example 7. For this example, we use the encoder which appends the timestamp to an atomic proposition. From the initial state, it is possible to go to either successor state. To move to a state at timestamp 1, we must be at the initial state at timestamp 0, and the corresponding condition must hold. The encoding up to timestamp 2 is shown in Table 1. We consider the same event as in Example 2 at timestamp 1. It is possible to infer the state of the automaton after computing the encoding only up to timestamp 1; using the resulting memory, we evaluate:
We find that is the selected state, with verdict .
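The construction and querying of an EHE can be sketched as follows. This is a simplified Python model of our own design: expressions are represented as closures over a timestamped memory (atoms are (proposition, timestamp) pairs), and `build_ehe` folds the successor and update steps into one pass.

```python
def tneg(v):
    """Three-valued negation over {True, False, None}."""
    return None if v is None else not v

def build_ehe(delta, q0, n):
    """Build the EHE up to timestamp n. delta: state -> list of
    (label, next_state), where label(mem, t) reads timestamped atoms
    (ap, t) from mem. Entry (t, q) maps to a closure: the condition to
    be in q at t. Base case: I(0, q0) = true; step: the condition to
    reach q' at t disjoins, over each predecessor q, the conjunction
    of I(t-1, q) and the encoded label."""
    ehe = {(0, q0): lambda mem: True}
    for t in range(1, n + 1):
        prev = [(q, c) for (tt, q), c in list(ehe.items()) if tt == t - 1]
        for q, cond in prev:
            for label, nxt in delta[q]:
                def expr(mem, c=cond, l=label, ts=t):  # c AND l (3-valued)
                    a, b = c(mem), l(mem, ts)
                    if a is False or b is False:
                        return False
                    return True if (a is True and b is True) else None
                if (t, nxt) in ehe:                    # disjoin paths to nxt
                    old = ehe[(t, nxt)]
                    def disj(mem, p=old, e=expr):
                        x, y = p(mem), e(mem)
                        if x is True or y is True:
                            return True
                        return False if (x is False and y is False) else None
                    ehe[(t, nxt)] = disj
                else:
                    ehe[(t, nxt)] = expr
    return ehe

def state_at(ehe, t, mem):
    """sel: the unique state whose condition evaluates to true at t,
    or None if the memory is insufficient to decide."""
    for (tt, q), cond in ehe.items():
        if tt == t and cond(mem) is True:
            return q
    return None
```

Because the encoded automaton is deterministic, at most one entry per timestamp evaluates to true (Proposition 2), so `state_at` never has to arbitrate between candidates.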
Since we are encoding deterministic automata, we recall from Remark 1 that when a state is reachable at a timestamp , no other state is reachable at . Moreover, the EHE construction using operation and encoder preserves determinism.
Proposition 2 (Deterministic Ehe)
Given an EHE constructed with operation using encoder , we have:
Determinism is preserved since, by using the timestamping encoder, we only change an expression by adding the timestamp. By construction, when there exists a state whose expression evaluates to ⊤, such a state is unique, since the EHE is built from a deterministic automaton. The full proof is in Appendix 0.A.
While the construction of an EHE preserves the determinism found in the automaton, an important property is ensuring that the EHE correctly encodes the automaton's execution.
Proposition 3 (Soundness)
Given a decentralized trace of length , we reconstruct the global trace , we have: , with:
The EHE is sound with respect to the specification automaton: both the automaton and the EHE indicate the same state reached with a given trace. Thus, the verdict is the same as it would be in the automaton. The proof is by induction on the reconstructed global trace.
Proof sketch.
We first establish that the EHE and the automaton memories evaluate two similar expressions, modulo encoding, to the same result. That is, for the given length, the memories generated with the two encodings yield similar evaluations for the same expression. Then, starting from the same state reached at the given length, we assume the property holds. We prove that it holds at the next timestamp by building the expression (for each encoding) to reach the state, and showing that the generated expression is the only expression that evaluates to ⊤. As such, we determine that both evaluations point to the same next state. The full proof is in Appendix 0.A.
4.3 Reconciling Execution History
The EHE provides interesting properties for decentralized monitoring. Since the data structure EHE is a reification of dict (it maps a timestamp-state pair to an expression), and since the combination is done using an operator which is idempotent, commutative and associative, it follows from Proposition 1 that the EHE is a CvRDT.
Corollary 2
An EHE with operation is a CvRDT.
Two (or more) components that share and merge EHEs are able to infer the same execution history of the automaton. That is, components can aggregate the information of various EHEs to determine the reached state, if possible, or that no state was reached. Merging two EHEs of the same automaton with allows us to aggregate information from two partial histories.
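This merge can be sketched in Python under a toy encoding, where an EHE is a plain dict mapping (timestamp, state) pairs to symbolic Boolean expressions; the tuple encoding and function names below are illustrative assumptions, not the paper's notation:

```python
# Toy encoding (illustrative, not the paper's notation): an EHE is a
# dict mapping (timestamp, state) pairs to Boolean expressions, where
# an expression is True, False, ('atom', name), ('and', e1, e2),
# ('or', e1, e2), or ('not', e).

def disj(e1, e2):
    """Disjoin two expressions, simplifying Boolean constants."""
    if e1 is True or e2 is True:
        return True
    if e1 is False:
        return e2
    if e2 is False:
        return e1
    if e1 == e2:  # idempotent on syntactically equal expressions
        return e1
    return ('or', e1, e2)

def merge(ehe1, ehe2):
    """CvRDT-style merge: pointwise disjunction of two histories."""
    out = dict(ehe1)
    for key, expr in ehe2.items():
        out[key] = disj(out[key], expr) if key in out else expr
    return out
```

On entries present in both EHEs, the disjunction keeps the information from either side; the merge is idempotent, commutative, and associative (up to syntactic equality of expressions), which is what the CvRDT argument requires.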
However, two EHEs for the same automaton contain the same expressions if constructed with . To incorporate the memory in an EHE, we generate a new EHE that contains the rewritten and simplified expressions for each entry. To do so, we define function to apply to a whole EHE and a memory, generating a new EHE: . We note that, for a given and , maintains the invariant of Proposition 2: we are simplifying expressions or rewriting atoms with their values in the memory, which is what already does for each entry in the EHE. That is, is a valid representation of the same deterministic and complete automaton as ; however, it additionally incorporates the information from the memory.
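Incorporating a memory into an EHE can be sketched as follows, assuming the same kind of toy encoding (an EHE as a dict of (timestamp, state) pairs to tuple-encoded expressions, a memory as a dict of observed atom values; all names are hypothetical):

```python
# Illustrative sketch: rewrite each entry's expression using a memory
# that maps observed atom names to Boolean values, simplifying as we go.

def rw(expr, memory):
    """Rewrite atoms with their observed truth values and simplify."""
    if isinstance(expr, bool):
        return expr
    op = expr[0]
    if op == 'atom':
        return memory.get(expr[1], expr)   # unknown atoms stay symbolic
    if op == 'not':
        e = rw(expr[1], memory)
        return (not e) if isinstance(e, bool) else ('not', e)
    e1, e2 = rw(expr[1], memory), rw(expr[2], memory)
    if op == 'and':
        if e1 is False or e2 is False:
            return False
        if e1 is True:
            return e2
        if e2 is True:
            return e1
        return ('and', e1, e2)
    # op == 'or'
    if e1 is True or e2 is True:
        return True
    if e1 is False:
        return e2
    if e2 is False:
        return e1
    return ('or', e1, e2)

def apply_memory(ehe, memory):
    """Generate a new EHE with the memory incorporated in every entry."""
    return {key: rw(expr, memory) for key, expr in ehe.items()}
```

Atoms not present in the memory remain symbolic, so the resulting EHE can still be merged later with information from other monitors.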
Proposition 4 (Memory obsolescence)
Proof
Follows directly by construction of and the definition of (which uses functions and ).
Proposition 4 ensures that it is possible to directly incorporate a memory in an EHE, making the memory no longer necessary. This is useful for algorithms that communicate the EHE, as they do not need to also communicate the memory.
By rewriting the expressions, the EHEs of two different monitors receiving different observations contain different expressions. However, since they still encode the same automaton, and observations do not conflict, merging with shares useful information.
Corollary 3
Given an EHE constructed using function , and two memories and that do not observe conflicting observations, the two EHEs and have the following properties:

1. is deterministic (Proposition 2);
2. ;
3. ;
4. .
The first property ensures that the merge of two EHEs that incorporate memories still represents a deterministic and complete automaton; this follows from Propositions 2 and 4. Since operation disjoins the two expressions, and since the two expressions come from EHEs that each maintain the property, the additional disjunction does not affect the outcome of . The second property extends Proposition 4 to the merging of EHEs with incorporated memories. It follows directly from Proposition 4 and the assumption that the memories have no conflicts. The third property adds a stronger condition: merging two EHEs with incorporated memories results in an EHE that does not evaluate differently under the different memories. This follows from the second property and the fact that the memories do not have conflicting observations. Finally, the fourth property ensures that merging an EHE with an entry that evaluates to does not result in an entry that evaluates to . That is, if an EHE has already determined that a state is not reachable, merging it with another EHE does not make the state reachable. This ensures consistency when sharing information. The property follows from the merging operator , which uses to merge entries in two EHEs. We recall that an entry in is constructed as: . For to be , either or has to be ; if one is already , then the other has to be . This leads to a contradiction, since both and encode the same deterministic automaton, and the automaton cannot be in two states at once.
Example 9 (Reconciling information)
We consider specification (Figure 2), and two components: and monitored by and respectively. The monitors can observe the propositions and respectively and use one EHE each: and respectively. Their memories are respectively and . Table 2 shows the EHEs at . Constructing the EHE follows similarly from Example 8. We show the rewriting for both and respectively in the next two columns. Then, we show the result of combining the rewrites using . We notice initially that since is , could evaluate and know that the automaton is in state . However, for , this is not possible until the expressions are combined. By evaluating the combination , determines that the automaton is in state . In this case, we are only looking for expressions that evaluate to . We notice that monitor can determine that is not reachable (since ) while cannot, as the expression cannot yet be evaluated to a final verdict. This does not affect the outcome, as we are only looking for one expression that evaluates to , since both and are encoding the same execution.
[Table 2: one row per timestamp t and state q, listing the expressions of each EHE, their rewrites under each memory, and their combination.]
5 Decentralized Specifications
In this section, we shift the focus to specifications that are decentralized: a set of automata represent various requirements (and dependencies) for different components of a system. We define the notion of a decentralized specification and its semantics; in Section 6, we define various properties of such specifications.
Decentralizing a specification.
We recall that a decentralized system consists of a set of components . To decentralize the specification, instead of having one automaton, we have a set of specification automata (Definition 7) , where is a set of monitor labels. We refer to these automata as monitors. To each monitor, we associate a component using a function . However, the transition labels of a monitor are expressions restricted to either observations local to the component the monitor is attached to (i.e., ) or references to other monitors. Transitions are labeled over . This ensures that a monitor is labeled only with observations it can locally observe, or depends on other monitors. To evaluate a trace as one would on a centralized specification, we require one of the monitors to be a starting point; we refer to that monitor as the root monitor ().
Definition 10 (Decentralized specification)
A decentralized specification is a tuple , , , , .
We note that a centralized specification is a special case of a decentralized specification, with one component (global system, ), and one monitor () attached to the sole component, i.e. .
As automata expressions now include references to monitors, we first define function , which determines monitor dependencies. Then, we define the semantics of evaluating (decentralized) specifications with references.
Definition 11 (Monitor dependency)
The set of monitor dependencies in an expression e is obtained by function , defined as follows (we note that this definition can be trivially extended to any encoding of such expressions that contains the monitor id):
match with:
Function finds all monitors referenced by expression e, by syntactically traversing it.
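Such a traversal can be sketched in Python, assuming an illustrative tuple encoding of expressions (not the paper's notation) in which monitor references carry the monitor id:

```python
# Illustrative sketch: expressions are True/False, ('atom', name),
# ('mon', monitor_id), ('not', e), ('and', e1, e2), or ('or', e1, e2).

def dep(expr):
    """Return the set of monitor ids referenced in an expression."""
    if isinstance(expr, bool):
        return set()
    op = expr[0]
    if op == 'atom':
        return set()        # atomic propositions carry no dependency
    if op == 'mon':
        return {expr[1]}    # a reference to another monitor
    if op == 'not':
        return dep(expr[1])
    return dep(expr[1]) | dep(expr[2])   # 'and' / 'or': union of both sides
```

The function is purely syntactic: it collects every monitor id occurring in the expression, regardless of whether the surrounding Boolean context could short-circuit it.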
Example 10 (Decentralized specification)
Figure 3 shows a decentralized specification corresponding to the centralized specification in Example 7. It consists of two monitors, and . We consider two atomic propositions, and , which can be observed by components and respectively. The monitor labeled (resp. ) is attached to component (resp. ). depends on the verdict from and on observations local to , while is labeled only with observations local to . Given the expression , we have .
Semantics of a decentralized specification.
The transition function of the decentralized specification is similar to the centralized automaton with the exception of monitor ids.
Definition 12 (Semantics of a decentralized specification)
Consider the root monitor and a decentralized trace with index representing the timestamps. Monitoring starting from emits the verdict where for a given monitor label :
For a monitor , we determine the new state of the automaton starting at , and running the trace from timestamp to timestamp by applying . To do so, we evaluate one transition at a time using , as one would with (see Definition 8). To evaluate at any state , we need to evaluate the expressions to determine the next state . The expressions contain atomic propositions and monitor ids. For atomic propositions, the memory is constructed using , which is based on the event with observations local to the component the monitor is attached to (i.e., ). However, for monitor ids, the memory represents the verdicts of the monitors. To evaluate each reference in the expression, the remainder of the trace starting from the current event timestamp is evaluated recursively on the automaton from the initial state . Then, the verdict of the monitor is associated with in the memory.
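The recursive evaluation can be sketched in Python under simplifying assumptions: a specification is a dict of monitors, each with an initial state, a transition list, and a Boolean verdict per state; an event is a dict of atom values (we do not model the restriction to local observations); all structures and names below are hypothetical, not the paper's notation:

```python
# Illustrative sketch of the decentralized semantics. A specification
# maps monitor ids to {'init': state, 'trans': [(source, expr, target)],
# 'verdict': {state: bool}}. Expressions are True/False, ('atom', name),
# ('mon', id), ('not', e), ('and', e1, e2), or ('or', e1, e2).

def verdict(spec, m, trace, t=0):
    """Run monitor m on the trace suffix starting at timestamp t."""
    q = spec[m]['init']
    for i in range(t, len(trace)):
        # deterministic, complete automaton: exactly one transition fires
        q = next(dst for (src, e, dst) in spec[m]['trans']
                 if src == q and ev(spec, e, trace, i))
    return spec[m]['verdict'][q]

def ev(spec, expr, trace, i):
    """Evaluate an expression against the event at timestamp i."""
    if isinstance(expr, bool):
        return expr
    op = expr[0]
    if op == 'atom':
        return trace[i][expr[1]]            # local observation
    if op == 'mon':
        # a monitor reference is evaluated recursively on the remainder
        # of the trace, from the referenced monitor's initial state
        return verdict(spec, expr[1], trace, i)
    if op == 'not':
        return not ev(spec, expr[1], trace, i)
    if op == 'and':
        return ev(spec, expr[1], trace, i) and ev(spec, expr[2], trace, i)
    return ev(spec, expr[1], trace, i) or ev(spec, expr[2], trace, i)

# Hypothetical two-monitor specification: m1 (root) waits for its local
# atom a together with m2's verdict; m2 waits for its local atom b.
spec = {
    'm1': {'init': 'q0',
           'trans': [('q0', ('and', ('atom', 'a'), ('mon', 'm2')), 'q1'),
                     ('q0', ('not', ('and', ('atom', 'a'), ('mon', 'm2'))), 'q0'),
                     ('q1', True, 'q1')],
           'verdict': {'q0': False, 'q1': True}},
    'm2': {'init': 's0',
           'trans': [('s0', ('atom', 'b'), 's1'),
                     ('s0', ('not', ('atom', 'b')), 's0'),
                     ('s1', True, 's1')],
           'verdict': {'s0': False, 's1': True}},
}
trace = [{'a': True, 'b': True}, {'a': False, 'b': True}]
```

Evaluating the root with `verdict(spec, 'm1', trace)` triggers the recursive evaluation of `('mon', 'm2')` on the trace suffix, mirroring the recursive semantics described above.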
Example 11 (Monitoring of a decentralized specification)
Consider monitors (root) and , associated with components and respectively, and the trace