1 Introduction
In this work we present near-linear (hence near-optimal) randomized algorithms for strong fairness verification in graphs and Markov decision processes (MDPs). In the fundamental model-checking problem, the input is a model and a specification, and the algorithmic verification problem is to check whether the model satisfies the specification. We first describe the models and the specifications we consider, then the notion of satisfaction, and then previous results followed by our contributions.
Models: Graphs and MDPs. Graphs and Markov decision processes (MDPs) are two classical models of reactive systems. The states of a reactive system are represented by the vertices of a graph, the transitions of the system are represented by the edges, and non-terminating trajectories of the system are represented as infinite paths of the graph. Graphs are a classical model for reactive systems with nondeterminism, and MDPs extend graphs with probabilistic transitions that represent reactive systems with both nondeterminism and uncertainty. Thus graphs and MDPs are the standard models of reactive systems with nondeterminism, and nondeterminism with stochastic aspects, respectively [15, 2]. Moreover, MDPs are used as models for concurrent finite-state processes [16, 28] as well as probabilistic systems in open environments [26, 23, 17, 2].
Specification: Strong fairness (aka Streett) objectives. A basic and fundamental specification formalism in the analysis of reactive systems is the strong fairness condition. Strong fairness conditions (aka Streett objectives) consist of types of requests and corresponding grants, and the requirement is that, for each type, if the request happens infinitely often, then the corresponding grant must also happen infinitely often. Beyond safety, reachability, and liveness objectives, the most standard properties that arise in the analysis of reactive systems are Streett objectives, and chapters of standard textbooks in verification are devoted to them (e.g., [15, Chapter 3.3], [24, Chapter 3], [1, Chapters 8, 10]). In addition, ω-regular objectives can be specified as Streett objectives, e.g., LTL formulas and nondeterministic automata can be translated to deterministic Streett automata [25], and efficient translations have been an active research area [6, 18, 22]. Consequently, Streett objectives are a canonical class of objectives that arise in verification.
Satisfaction.
The notions of satisfaction for graphs and MDPs are as follows: for graphs, the notion of satisfaction requires that there is a trajectory (infinite path) that belongs to the set of paths specified by the Streett objective. For MDPs, the satisfaction requires that there is a strategy to resolve the nondeterminism such that the Streett objective is ensured almost-surely (with probability 1). Thus the algorithmic model-checking problem of graphs and MDPs with Streett objectives is a central problem in verification, and is at the heart of many state-of-the-art tools such as SPIN and NuSMV for graphs [20, 14], and PRISM, LiQuor, and Storm for MDPs [23, 13, 17]. Our contributions are related to the algorithmic complexity of graphs and MDPs with Streett objectives. We first present previous results and then our contributions.
Previous results.
The most basic algorithm for the problem for graphs is based on repeated SCC (strongly connected component) computation, and informally can be described as follows: for a given SCC, (a) if for every request type that is present in the SCC the corresponding grant type is also present in the SCC, then the SCC is identified as “good”, (b) else vertices of each request type that have no corresponding grant type in the SCC are removed, and the algorithm recursively proceeds on the remaining graph. Finally, reachability to good SCCs is computed. The algorithm for MDPs is similar, where the SCC computation is replaced with maximal end-component (MEC) computation, and reachability to good SCCs is replaced with probability-1 reachability to good MECs. The basic algorithms for graphs and MDPs with Streett objectives have been improved in several works, such as for graphs in [19, 10], for MEC computation in [7, 8, 9], and for MDPs with Streett objectives in [5]. For graphs/MDPs with $n$ vertices, $m$ edges, and $k$ request-grant pairs, with $b$ denoting the total size of the request-grant pairs, the current best-known bound is $O(\min(n^2, m\sqrt{m \log n}) + b \log n)$.
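The basic repeated-SCC algorithm sketched above can be made concrete as follows. This is an illustrative Python sketch (the helper names are ours); it recomputes SCCs from scratch with Kosaraju's algorithm, so it matches the basic algorithm only, not the improved bounds discussed next.

```python
def sccs(vertices, adj):
    """Kosaraju's algorithm (iterative): list of SCCs as sets of vertices."""
    order, seen = [], set()
    for s in vertices:
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(adj.get(s, ())))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, iter(adj.get(v, ()))))
                    break
            else:  # all successors of u finished
                order.append(u)
                stack.pop()
    radj = {v: [] for v in vertices}
    for u in vertices:
        for v in adj.get(u, ()):
            radj[v].append(u)
    comps, assigned = [], set()
    for s in reversed(order):
        if s in assigned:
            continue
        comp, stack = {s}, [s]
        assigned.add(s)
        while stack:
            u = stack.pop()
            for v in radj[u]:
                if v not in assigned:
                    assigned.add(v)
                    comp.add(v)
                    stack.append(v)
        comps.append(comp)
    return comps

def has_good_component(vertices, adj, pairs):
    """pairs: list of (L_i, U_i) sets. True iff some SCC contains a good SCS."""
    return any(_good(c, adj, pairs) for c in sccs(vertices, adj))

def _good(comp, adj, pairs):
    comp = set(comp)
    bad = set()
    for L, U in pairs:
        if not (U & comp):       # pair i has no grant inside: its requests are bad
            bad |= (L & comp)
    if not bad:
        # comp is strongly connected; good iff non-trivial (contains an edge)
        return any(v in comp for u in comp for v in adj.get(u, ()))
    rest = comp - bad            # remove bad vertices and recurse on sub-SCCs
    sub = {u: [v for v in adj.get(u, ()) if v in rest] for u in rest}
    return any(_good(c, sub, pairs) for c in sccs(list(rest), sub))
```

The full verification step would additionally compute reachability to the good SCCs; the sketch only decides whether a good component exists.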
Our contributions.
In this work, our main contributions are randomized near-linear time (i.e., linear times a polylogarithmic factor) algorithms for graphs and MDPs with Streett objectives. In detail, our contributions are as follows:

First, we present a near-linear time randomized algorithm for graphs with Streett objectives where the expected running time is $\tilde{O}(m + b)$, where the $\tilde{O}$-notation hides polylog factors. Our algorithm is based on a recent randomized algorithm for maintaining the SCC decomposition of graphs under edge deletions, where the expected total running time is near-linear [4].

Second, by exploiting the results of [4] we present a randomized near-linear time algorithm for computing the MEC decomposition of an MDP where the expected running time is $\tilde{O}(m)$. We extend the results of [4] from graphs to MDPs and present a randomized algorithm to maintain the MEC decomposition of an MDP under edge deletions, where the expected total running time is near-linear.

Finally, we use the result of the above item to present a near-linear time randomized algorithm for MDPs with Streett objectives where the expected running time is $\tilde{O}(m + b)$.
All our algorithms are randomized, and since they are near-linear in the size of the input, they are optimal up to polylog factors. An important open question is whether there are deterministic algorithms that can improve the existing running-time bounds for graphs and MDPs with Streett objectives. Our algorithms are deterministic except for the invocation of the decremental SCC algorithm presented in [4].
2 Preliminaries
A Markov decision process (MDP) $P = (V, E, \langle V_1, V_R \rangle, \delta)$ consists of a finite set of vertices $V$ partitioned into the player-1 vertices $V_1$ and the random vertices $V_R$, a finite set of edges $E \subseteq V \times V$, and a probabilistic transition function $\delta$. The probabilistic transition function maps every random vertex in $V_R$ to an element of $\mathcal{D}(V)$, where $\mathcal{D}(V)$ is the set of probability distributions over the set of vertices $V$. A random vertex $v$ has an edge to a vertex $w$, i.e., $(v, w) \in E$, iff $\delta(v)(w) > 0$. An edge $(u, v)$ is a random edge if $u \in V_R$; otherwise it is a player-1 edge. W.l.o.g. we assume $\delta(v)$ to be the uniform distribution over the vertices $u$ with $(v, u) \in E$. Graphs are a special case of MDPs with $V_R = \emptyset$. The set $\mathit{In}(v)$ ($\mathit{Out}(v)$) describes the set of predecessors (successors) of a vertex $v$. More formally, $\mathit{In}(v)$ is defined as the set $\{u \in V \mid (u, v) \in E\}$ and $\mathit{Out}(v) = \{w \in V \mid (v, w) \in E\}$. When $S$ is a set of vertices, we define $E(S)$ to be the set of all edges incident to the vertices in $S$. More formally, $E(S) = \{(u, v) \in E \mid u \in S \text{ or } v \in S\}$. With $G[S]$ we denote the subgraph of a graph $G$ induced by the set of vertices $S$. Let $\mathit{Reachable}(T, G)$ be the set of vertices in $V$ that can reach a vertex of $T$ in $G$. The set can be found in linear time using depth-first search [27]. When a vertex $u$ can reach another vertex $v$ and vice versa, we say that $u$ and $v$ are strongly connected.
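The set of vertices that can reach a target set can be computed by one backward search over the reversed edges; a minimal sketch (the function name is ours):

```python
from collections import deque

def vertices_reaching(vertices, edges, target):
    """All vertices that can reach some vertex of `target`:
    one backward BFS over the reversed edges, O(n + m) time."""
    radj = {v: [] for v in vertices}
    for u, v in edges:
        radj[v].append(u)
    reach, frontier = set(target), deque(target)
    while frontier:
        v = frontier.popleft()
        for u in radj[v]:
            if u not in reach:
                reach.add(u)
                frontier.append(u)
    return reach
```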
A play is an infinite sequence $\omega = \langle v_0, v_1, v_2, \ldots \rangle$ of vertices such that $(v_i, v_{i+1}) \in E$ for all $i \geq 0$. The set of all plays is denoted by $\Omega$. A play is initialized by placing a token on an initial vertex. If the token is on a vertex owned by player 1, he moves the token along one of the outgoing edges, whereas if the token is at a random vertex $v$, the next vertex is chosen according to the probability distribution $\delta(v)$. The infinite sequence of vertices (infinite walk) formed in this way is a play.
Strategies are recipes for player 1 to extend finite prefixes of plays. Formally, a player-1 strategy is a function $\sigma$ which maps every finite prefix $w \in V^* \cdot V_1$ of a play that ends in a player-1 vertex $v$ to a successor vertex $\sigma(w) \in V$, i.e., $(v, \sigma(w)) \in E$. A player-1 strategy is memoryless if $\sigma(w) = \sigma(w')$ for all $w, w' \in V^* \cdot V_1$ that end in the same vertex $v \in V_1$, i.e., the strategy does not depend on the entire prefix, but only on the last vertex. We write $\Sigma$ for the set of all strategies for player 1.
The outcome of strategies is defined as follows: In graphs, given a starting vertex, a strategy for player 1 induces a unique play in the graph. In MDPs, given a starting vertex $v$ and a strategy $\sigma \in \Sigma$, the outcome of the game is a random walk for which the probability of every event is uniquely defined, where an event $A \subseteq \Omega$ is a measurable set of plays [28]. For a vertex $v$, a strategy $\sigma$, and an event $A \subseteq \Omega$, we denote by $\Pr_v^{\sigma}(A)$ the probability that a play belongs to $A$ if the game starts at $v$ and player 1 follows $\sigma$.
An objective $\psi$ for player 1 is an event, i.e., objectives describe the set of winning plays. A play $\omega$ satisfies the objective if $\omega \in \psi$. In MDPs, a player-1 strategy $\sigma$ is almost-sure (a.s.) winning from a starting vertex $v$ for an objective $\psi$ iff $\Pr_v^{\sigma}(\psi) = 1$. The winning set $\langle\langle 1 \rangle\rangle_{as}(\psi)$ for player 1 is the set of vertices from which player 1 has an almost-sure winning strategy. We consider reachability objectives and $k$-pair Streett objectives.
Given a set $T \subseteq V$ of vertices, the reachability objective $\mathit{Reach}(T)$ requires that some vertex in $T$ be visited. Formally, the set of winning plays is $\mathit{Reach}(T) = \{\langle v_0, v_1, v_2, \ldots \rangle \in \Omega \mid \exists j \geq 0: v_j \in T\}$. We say player 1 can reach $T$ almost-surely (a.s.) from $v$ if $v \in \langle\langle 1 \rangle\rangle_{as}(\mathit{Reach}(T))$.
The $k$-pair Streett objective consists of $k$ Streett pairs $(L_1, U_1), \ldots, (L_k, U_k)$ where $L_i, U_i \subseteq V$ for all $1 \leq i \leq k$. An infinite path satisfies the objective iff for all $1 \leq i \leq k$: if some vertex of $L_i$ is visited infinitely often, then some vertex of $U_i$ is visited infinitely often.
Given an MDP $P$, an end-component is a set of vertices $X \subseteq V$ s.t. (1) the subgraph induced by $X$ is strongly connected (i.e., $G[X]$ is strongly connected) and (2) all random vertices of $X$ have their outgoing edges in $X$. More formally, for all $v \in X \cap V_R$ and all $(v, u) \in E$ we have $u \in X$. In a graph, if (1) holds for a set of vertices $X$ we call the set a strongly connected subgraph (SCS). An end-component, SCS respectively, is trivial if it only contains a single vertex with no edges. All other end-components, SCSs respectively, are non-trivial. A maximal end-component (MEC) is an end-component which is maximal under set inclusion. The importance of MECs is as follows: (i) it generalizes strongly connected components (SCCs) in graphs (with $V_R = \emptyset$) and closed recurrent sets of Markov chains (with $V_1 = \emptyset$); and (ii) in a MEC $X$, player 1 can almost-surely reach all vertices $u \in X$ from every vertex $v \in X$. The MEC-decomposition of an MDP is the partition of the vertex set into MECs and the set of vertices which do not belong to any MEC. The condensation of a graph $G$, denoted by $\mathit{COND}(G)$, is the graph where all vertices in the same SCC of $G$ are contracted. The vertices of $\mathit{COND}(G)$ are called nodes to distinguish them from the vertices in $G$. Let $C$ be a strongly connected component (SCC) of $G$. The SCC $C$ is a bottom SCC if no vertex $v \in C$ has an edge to a vertex in $V \setminus C$, i.e., $C$ has no outgoing edges. Consider an MDP $P$ and notice that every bottom SCC in the graph of $P$ is a MEC because no vertex (and thus no random vertex) has an outgoing edge.
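The two defining conditions of an end-component can be checked directly; the following sketch (names ours) verifies that no random vertex leaves the set and tests strong connectivity of the induced subgraph with a forward and a backward search:

```python
def is_end_component(X, adj, random_vertices):
    """Check the two end-component conditions for a vertex set X:
    (1) the induced subgraph is strongly connected, and
    (2) every random vertex in X keeps all its outgoing edges inside X."""
    X = set(X)
    if not X:
        return False
    # (2): a random vertex with an edge leaving X rules X out
    for v in X & set(random_vertices):
        if any(w not in X for w in adj.get(v, ())):
            return False
    # (1): forward and backward closure inside X from an arbitrary start
    radj = {v: [] for v in X}
    for u in X:
        for w in adj.get(u, ()):
            if w in X:
                radj[w].append(u)

    def closure(start, nbrs):
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for w in nbrs(u):
                if w in X and w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen

    s = next(iter(X))
    return closure(s, lambda u: adj.get(u, ())) >= X and closure(s, lambda u: radj[u]) >= X
```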
A decremental graph algorithm allows the deletion of player-1 edges while maintaining the solution to a graph problem. It usually supports three kinds of operations: (1) preprocessing, which is run when the initial input is first received, (2) delete, which deletes a player-1 edge and updates the data structure, and (3) query, which computes the answer to the problem. The query time is the time needed to compute the answer to a query. The update time of a decremental algorithm is the cost of a delete operation. We sometimes refer to the delete operations as update operations. The running time of a decremental algorithm is characterized by the total update time, i.e., the sum of the update times over the entire sequence of deletions. Sometimes a decremental algorithm is randomized and assumes an oblivious adversary who fixes the sequence of updates in advance. When we use a decremental algorithm which assumes such an oblivious adversary as a subprocedure, the sequence of deleted edges must not depend on the random choices of the decremental algorithm.
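The decremental interface can be illustrated by a naive baseline that simply recomputes SCCs after every deletion (operation names are ours). The point of the algorithm of [4], recalled in the next section, is to replace the linear work per deletion by near-linear total update time:

```python
class NaiveDecrementalSCC:
    """Baseline decremental SCC structure: preprocessing builds the SCCs,
    delete() recomputes them from scratch (O(m) per deletion), query()
    answers same-SCC questions in O(1) via a vertex -> SCC-id map."""

    def __init__(self, vertices, edges):      # (1) preprocessing
        self.vertices = list(vertices)
        self.edges = set(edges)
        self._recompute()

    def delete(self, u, v):                   # (2) delete one edge
        self.edges.discard((u, v))
        self._recompute()

    def query(self, u, v):                    # (3) same-SCC query
        return self.scc_id[u] == self.scc_id[v]

    def _recompute(self):
        # Kosaraju's algorithm: finish order on G, then DFS on reversed G
        adj = {v: [] for v in self.vertices}
        radj = {v: [] for v in self.vertices}
        for (u, v) in self.edges:
            adj[u].append(v)
            radj[v].append(u)
        order, seen = [], set()
        for s in self.vertices:
            if s in seen:
                continue
            seen.add(s)
            stack = [(s, iter(adj[s]))]
            while stack:
                u, it = stack[-1]
                for v in it:
                    if v not in seen:
                        seen.add(v)
                        stack.append((v, iter(adj[v])))
                        break
                else:
                    order.append(u)
                    stack.pop()
        self.scc_id, cid = {}, 0
        for s in reversed(order):
            if s in self.scc_id:
                continue
            self.scc_id[s] = cid
            stack = [s]
            while stack:
                u = stack.pop()
                for v in radj[u]:
                    if v not in self.scc_id:
                        self.scc_id[v] = cid
                        stack.append(v)
            cid += 1
```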
3 Decremental SCCs
We first recall the result on decremental strongly connected component maintenance from [4] (cf. Theorem 3 below) and then augment the result for our purposes.
[Theorem 1.1 in [4]] Given a graph $G = (V, E)$ with $m$ edges and $n$ vertices, we can maintain a data structure that supports the operations

(update operation): Deletes the edge $(u, v)$ from the graph $G$.

(query operation): Returns whether $u$ and $v$ are in the same SCC in $G$,
in total expected update time $O(m \log^4 n)$ and with worst-case constant query time. The bound holds against an oblivious adversary.
The preprocessing time of the algorithm is $O(m + n)$ using [27]. To use this algorithm we extend the query and update operations with three new operations described in Corollary 3.
Given a graph $G = (V, E)$ with $m$ edges and $n$ vertices, we can maintain a data structure that supports the operations

(query operation): Returns a reference to the SCC containing the vertex $v$.

(update operation): Deletes a set $D$ of edges from the graph $G$. If the edge deletion creates new SCCs, the operation returns a list of references to the new SCCs.

(update operation): Deletes a set $D$ of edges from the graph $G$. The operation returns a list of references to all new SCCs with no outgoing edges.
in total expected update time $O(m \log^4 n)$ and worst-case constant query time for the first operation. The bound holds against an oblivious adversary. The first function is available in the algorithm described in [4]. The second function can be implemented directly from the construction of the data structure maintained in [4]. The key idea for the third function is that when an SCC splits, we consider the new SCCs. We distinguish between the largest of them and the others, which we call small SCCs. We then consider all edges incident to the small SCCs: note that as the new outgoing edges of the large SCC are also incident to a small SCC, we can also determine the outgoing edges of the large SCC. Observe that whenever an SCC splits, all the small SCCs are at most half the size of the original SCC. That is, each vertex can appear only $O(\log n)$ times in small SCCs during the whole algorithm. As an edge is only considered if one of its incident vertices is in a small SCC, each edge is considered $O(\log n)$ times and the additional running time is bounded by $O(m \log n)$. Furthermore, we define $T_{SCC}(m, n)$ as the total running time of the best decremental SCC algorithm which supports the operations in Corollary 3. Currently, $T_{SCC}(m, n) = O(m \log^4 n)$ expected.
4 Graphs with Streett Objectives
In this section, we present an algorithm which computes the winning regions for graphs with Streett objectives. The input is a directed graph $G = (V, E)$ and $k$ Streett pairs $(L_i, U_i)$ for $1 \leq i \leq k$. The size of the input is measured in terms of $m$, $n$, $k$, and $b = \sum_{1 \leq i \leq k}(|L_i| + |U_i|)$.
Algorithm and good component detection.
Let $C$ be an SCC of $G$. In the good component detection problem, we compute (a) a non-trivial SCS induced by a set of vertices $X \subseteq C$, such that for all $1 \leq i \leq k$ either $L_i \cap X = \emptyset$ or $U_i \cap X \neq \emptyset$, or (b) the information that no such SCS exists. In the first case, there exists an infinite path that eventually stays in $X$ and satisfies the Streett objective, while in the latter case, there exists no path which satisfies the Streett objective in $C$. From the results of [1, Chapter 9, Proposition 9.4] the following algorithm suffices for the winning set computation:

Compute the SCC decomposition of the graph;

For each SCC for which the good component detection returns an SCS, label the SCC as satisfying.

Output the set of vertices that can reach a satisfying SCC as the winning set.
Since the first and the last step are computable in linear time, the running time is dominated by the detection of good components in SCCs. In the following, we assume that the input graph is strongly connected and focus on good component detection.
Bad vertices. A vertex is bad if there is some index $1 \leq i \leq k$ such that the vertex is in $L_i$ but it is not strongly connected to any vertex of $U_i$. All other vertices are good. Note that a good vertex might become bad if a vertex deletion disconnects an SCS or removes a vertex of a set $U_i$. A good component is a non-trivial SCS that contains only good vertices.
Decremental strongly connected components. Throughout the algorithm, we use the algorithm described in Section 3 to maintain the SCCs of a graph when deleting edges. In particular, we use Corollary 3 to obtain a list of the new SCCs which are created by removing bad vertices. Note that we can ‘remove’ a vertex by deleting all its incident edges. Because the decremental SCC algorithm assumes an oblivious adversary we sort the list of the new SCCs as otherwise the edge deletions performed by our algorithm would depend on the random choices of the decremental SCC algorithm.
Data structure. During the course of the algorithm, we maintain a decomposition of the vertices of $G$: we maintain a list $L$ of certain sets $S \subseteq V$ such that every SCC of the current graph is contained in some set $S$ stored in $L$. The list provides two operations: one enqueues a set $S$ to $L$, and one dequeues an arbitrary element from $L$. For each set $S$ in the decomposition, we store a data structure $D(S)$ in the list $L$. This data structure supports the following operations:

(1) construction: initializes the data structure $D(S)$ for the set $S$.

(2) removal of a set $B \subseteq S$: updates $D(S)$ to $D(S \setminus B)$ and returns the bad vertices of the new set $S \setminus B$.

(3) reference: returns a reference to the set $S$.

(4) SCC listing: returns the set of SCCs currently in $S$. We implement the additional structure as a balanced binary search tree which allows logarithmic lookups, updates, and deletions.
In [19] an implementation of this data structure with functions (1)–(3) is described that achieves the following running times. For a set of vertices $S$, let $\mathit{size}(S)$ be defined as $|S| + \sum_{1 \leq i \leq k}(|L_i \cap S| + |U_i \cap S|)$.
[Lemma 2.1 in [19]] After a one-time preprocessing of time $O(k + \mathit{size}(V))$, the data structure can be implemented in time $O(\mathit{size}(S))$ for operation (1), time $O(\mathit{size}(B))$ for operation (2), and constant running time for operation (3).
We augment the data structure with the function (4), which over the whole algorithm runs in the total time $T_{SCC}(m, n)$ of a decremental SCC algorithm supporting the first operation of Corollary 3.
Algorithm Description. The key idea is that the algorithm maintains the list $L$ of data structures as described above when deleting bad vertices. Initially, we enqueue the data structure for the whole vertex set $V$ to $L$. As long as $L$ is non-empty, the algorithm repeatedly pulls a set $S$ from $L$ and identifies and deletes bad vertices from $S$. If no edge is contained in $G[S]$, the set $S$ is removed as it can only induce trivial SCCs. Otherwise, the subgraph $G[S]$ is either determined to be strongly connected and output as a good component, or we identify and remove an SCC with at most half of the vertices of $S$. Consider Figure 1 for an illustration of an example run of Algorithm 1.
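Ignoring the decremental data structures, the worklist loop just described can be sketched as follows. SCCs are recomputed per iteration here, so this illustrative version does not achieve the near-linear bound:

```python
def _sccs(vertices, edge_list):
    """Kosaraju's algorithm on an edge list restricted to `vertices`."""
    vs = list(vertices)
    adj = {v: [] for v in vs}
    radj = {v: [] for v in vs}
    for u, v in edge_list:
        adj[u].append(v)
        radj[v].append(u)
    order, seen = [], set()
    for s in vs:
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(adj[s]))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, iter(adj[v])))
                    break
            else:
                order.append(u)
                stack.pop()
    comps, assigned = [], set()
    for s in reversed(order):
        if s in assigned:
            continue
        comp, stack = {s}, [s]
        assigned.add(s)
        while stack:
            u = stack.pop()
            for v in radj[u]:
                if v not in assigned:
                    assigned.add(v)
                    comp.add(v)
                    stack.append(v)
        comps.append(comp)
    return comps

def find_good_component(vertices, edges, pairs):
    """Worklist loop: pop a set, delete its bad vertices, then discard it
    (no edges left), output it (single SCC, nothing was bad), or split it."""
    work = [set(vertices)]
    while work:
        S = work.pop()
        bad = set()
        for L, U in pairs:
            if not (U & S):
                bad |= (L & S)
        S -= bad
        inner = [(u, v) for (u, v) in edges if u in S and v in S]
        if not inner:
            continue                    # S induces only trivial SCCs
        comps = _sccs(S, inner)
        if len(comps) == 1 and not bad:
            return comps[0]             # non-trivial, strongly connected, all good
        work.extend(comps)              # split further
    return None
```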
Outline correctness and running time. In the following, when we talk about the input graph we mean the unmodified, strongly connected graph which we use to initialize Algorithm 1. In contrast, with the current graph we refer to the graph where we already deleted vertices and their incident edges in the course of finding a good component. For the correctness of Algorithm 1, we show that if a good component exists, then there is a set stored in list which contains all vertices of this good component.
To obtain the running time bound of Algorithm 1, we use the fact that we can maintain the SCC decomposition under deletions in total expected time $T_{SCC}(m, n)$. With the properties of the data structure described in Lemma 4 we get a running time of $O((m + b) \log n)$ for the maintenance of the data structure and the identification of bad vertices over the whole algorithm. Combined, these ideas lead to a total running time of $O(T_{SCC}(m, n) + (m + b) \log n)$, which is $\tilde{O}(m + b)$ using Corollary 3.
Algorithm 1 runs in expected time $\tilde{O}(m + b)$.
Proof.
The preprocessing and initialization of the data structure and the removal of bad vertices over the whole algorithm take time $O((m + b) \log n)$ using Lemma 4: since each vertex is deleted at most once, the data structure can be constructed and maintained in total time $O((m + b) \log n)$. Announcing the new SCCs after deleting the bad vertices at Line 1 takes total expected time $T_{SCC}(m, n)$ by Corollary 3.
Consider an iteration of the while-loop at Line 1: a set $S$ is removed from $L$. Let us denote by $|S|$ the number of vertices of $S$. If $S$ does not contain any edge after the removal of bad vertices, then $S$ is not considered further by the algorithm. Otherwise, the for-loop at Line 1 considers all new SCCs. Note that we can implement the for-loop in a lock-step fashion: in each step, for each SCC, we access one more vertex, and as soon as all of the vertices of an SCC are accessed we add it to the list $L$. When only one SCC is left we compute its size using the original set $S$ and the sizes of the other SCCs. If its size is at most $|S|/2$ we add it to $L$. Note that this can be done in time proportional to the number of vertices in the SCCs of size at most $|S|/2$. The sorting operation at Line 1 takes time proportional to the size of all the SCCs added to $L$, up to a logarithmic factor. Note that during the algorithm each vertex can appear at most $O(\log n)$ times in the list $L$: this is by the fact that $L$ only receives SCCs that are at most half the size of the original set $S$. We obtain a running time bound of $O((m + b) \log n)$ for Lines 11.
Consider the second for-loop at Line 1: let $B$ be the set of bad vertices removed from a set. The operations (2) and (3) are called once per found SCC and take time $O(\mathit{size}(B))$ by Lemma 4. Whenever a vertex is in $B$, the size of the set in $L$ originally containing it is reduced by at least a factor of two, due to the fact that only SCCs of at most half the size are added back to $L$. This happens at most $O(\log n)$ times. By charging the cost to the vertices in $B$ and, respectively, to the occurrences of these vertices in the Streett pairs, the total running time for Lines 1 & 1 can be bounded by $O((m + b) \log n)$, as each vertex and each bit is only charged $O(\log n)$ times. Combining all parts yields the claimed running time bound of $O(T_{SCC}(m, n) + (m + b) \log n) = \tilde{O}(m + b)$.
∎
The correctness of the algorithm is similar to the analysis given in [11, Lemmas 3.6 & 3.7], except that we additionally have to prove that $L$ holds the SCCs of the current graph. Lemma 4 shows that we maintain the sets of SCCs properly for all the data structures in $L$.
After each iteration of the outer while-loop, every non-trivial SCC of the current graph is contained in one of the subgraphs $G[S]$ for which the data structure $D(S)$ is maintained in $L$, and $D(S)$ stores a list of all SCCs contained in $G[S]$.
We prove the next lemma by showing that we never remove edges of vertices of good components. After each iteration of the outer while-loop, every good component of the input graph is contained in one of the subgraphs $G[S]$ for which the data structure $D(S)$ is maintained in the list $L$.
Algorithm 1 outputs a good component if one exists, otherwise the algorithm reports that no such component exists.
Proof.
First consider the case where Algorithm 1 outputs a subgraph $G[S]$. We show that $G[S]$ is a good component: Line 1 ensures only non-trivial SCSs are considered. After the removal of bad vertices from $S$ in Lines 11, we know for all $1 \leq i \leq k$ that if $L_i \cap S \neq \emptyset$ then $U_i \cap S \neq \emptyset$. Due to Line 1 there is only one SCC in $G[S]$ and thus $G[S]$ is a good component. Second, if Algorithm 1 terminates without outputting a good component, by Lemma 4, the initial graph has no good component and thus the result is correct as well. ∎
The running time bounds for the decremental SCC algorithm of [4] (cf. Corollary 3) only hold against an oblivious adversary. Thus we have to show that in our algorithm the sequence of edge deletions does not depend on the random choices of the decremental SCC algorithm. The key observation is that only the order of the computed SCCs depends on the random choices of the decremental SCC algorithm, and we eliminate this effect by sorting the SCCs. The sequence of deleted edges does not depend on the random choices of the decremental SCC algorithm but only on the given instance.
5 Algorithms for MDPs
In this section, we present expected near-linear time algorithms for computing a MEC decomposition, deciding almost-sure reachability, and maintaining a MEC decomposition in a decremental setting. In the last subsection, we present an algorithm for MDPs with Streett objectives by using the new algorithm for the decremental MEC decomposition.
Random attractor. First, we introduce the notion of a random attractor $\mathit{Attr}_R(T)$ for a set $T \subseteq V$. The random attractor is defined inductively as follows: $A_0 = T$ and $A_{i+1} = A_i \cup \{v \in V_R \mid \mathit{Out}(v) \cap A_i \neq \emptyset\} \cup \{v \in V_1 \mid \mathit{Out}(v) \subseteq A_i\}$ for all $i \geq 0$, and $\mathit{Attr}_R(T) = \bigcup_{i \geq 0} A_i$. In other words, given a set $T$, the random attractor includes all vertices (1) in $T$, (2) random vertices with an edge to the attractor, and (3) player-1 vertices with all outgoing edges in the attractor. Due to [21, 3] we can compute the random attractor of a set $T$ in time $O(\sum_{v \in \mathit{Attr}_R(T)} |\mathit{In}(v)| + |T|)$.
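The attractor computation can be implemented within the stated time bound using out-degree counters: a random vertex is added as soon as one of its edges enters the attractor, and a player-1 vertex once all of its outgoing edges do. A sketch (names ours):

```python
from collections import deque

def random_attractor(T, adj, random_vertices):
    """Attr_R(T) via counters: every edge is inspected at most once,
    when its head is dequeued, giving time linear in the incident edges."""
    radj = {v: [] for v in adj}
    for u in adj:
        for v in adj[u]:
            radj[v].append(u)
    out_deg = {v: len(adj[v]) for v in adj}
    attr, queue = set(T), deque(T)
    while queue:
        v = queue.popleft()
        for u in radj[v]:
            if u in attr:
                continue
            if u in random_vertices:
                attr.add(u)              # one edge into attr suffices
                queue.append(u)
            else:
                out_deg[u] -= 1          # one more of u's edges leads into attr
                if out_deg[u] == 0:      # player 1 cannot avoid attr from u
                    attr.add(u)
                    queue.append(u)
    return attr
```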
5.1 Maximal EndComponent Decomposition
In this section, we present an expected near-linear time algorithm for MEC decomposition. Our algorithm is an efficient implementation of the static algorithm presented in [9, p. 29]: the difference is that the bottom SCCs are computed with a dynamic SCC algorithm instead of recomputing them with a static SCC algorithm. A similar algorithm was independently proposed in an unpublished extended version of [12].
Algorithm Description. The MEC algorithm described in Algorithm 2 repeatedly removes bottom SCCs and the corresponding random attractor. After removing bottom SCCs the new SCC decomposition with its bottom SCCs is computed using a dynamic SCC algorithm.
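Replacing the dynamic SCC algorithm by plain recomputation yields the following illustrative version of the loop; the sketch is correct but runs in roughly $O(nm)$ time rather than near-linear, and it assumes every vertex of the MDP has an outgoing edge:

```python
def mec_decomposition(vertices, edges, random_vertices):
    """Rounds of: compute SCCs, output the bottom SCCs as MECs, then
    delete the random attractor of their union (those extra attractor
    vertices belong to no MEC). SCCs/attractors are recomputed naively."""
    V, E = set(vertices), set(edges)
    mecs = []
    while V:
        comps = _sccs(V, E)
        bottoms = [C for C in comps
                   if all(v in C for (u, v) in E if u in C)]  # no edge leaves C
        mecs.extend(set(C) for C in bottoms)
        B = set().union(*bottoms)        # at least one bottom SCC always exists
        A = _attractor(B, V, E, random_vertices)
        V -= A
        E = {(u, v) for (u, v) in E if u in V and v in V}
    return mecs

def _attractor(T, V, E, rand):
    """Naive fixed-point computation of the random attractor of T."""
    adj = {v: [] for v in V}
    for (u, v) in E:
        adj[u].append(v)
    attr, changed = set(T), True
    while changed:
        changed = False
        for v in V - attr:
            succ = adj[v]
            if v in rand:
                take = any(w in attr for w in succ)
            else:
                take = bool(succ) and all(w in attr for w in succ)
            if take:
                attr.add(v)
                changed = True
    return attr

def _sccs(vertices, edge_list):
    """Kosaraju's algorithm: list of SCCs as sets of vertices."""
    vs = list(vertices)
    adj = {v: [] for v in vs}
    radj = {v: [] for v in vs}
    for u, v in edge_list:
        adj[u].append(v)
        radj[v].append(u)
    order, seen = [], set()
    for s in vs:
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(adj[s]))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, iter(adj[v])))
                    break
            else:
                order.append(u)
                stack.pop()
    comps, assigned = [], set()
    for s in reversed(order):
        if s in assigned:
            continue
        comp, stack = {s}, [s]
        assigned.add(s)
        while stack:
            u = stack.pop()
            for v in radj[u]:
                if v not in assigned:
                    assigned.add(v)
                    comp.add(v)
                    stack.append(v)
        comps.append(comp)
    return comps
```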
Correctness follows because our algorithm just removes attractors of bottom SCCs and marks bottom SCCs as MECs. This is precisely the second static algorithm presented in [9, p. 29], except that the bottom SCCs are computed using a dynamic data structure. By using the decremental SCC algorithm described in Section 3 we obtain the following lemma. Algorithm 2 returns the MEC-decomposition of an MDP in expected time $\tilde{O}(m)$.
Proof.
The decremental SCC algorithm runs in total expected time $O(m \log^4 n)$ by Theorem 3 and Corollary 3. Initially, computing the SCC decomposition and determining the SCCs with no outgoing edges takes time $O(m + n)$ by using [27]. Each time we compute the attractor of a bottom SCC at Line 2, we remove it from the graph by deleting all its edges and never process these edges and vertices again. Since we can compute the attractor at Line 2 in time proportional to the number of edges incident to the removed vertices, we need total time $O(m)$ for computing the attractors of all bottom SCCs. Hence, the running time is dominated by the decremental SCC algorithm, which takes expected time $O(m \log^4 n)$. ∎
The algorithm uses $\tilde{O}(m)$ space due to the fact that the decremental SCC algorithm uses $\tilde{O}(m)$ space and all other data structures only contain vertices, i.e., use $O(n)$ space. Given an MDP, the MEC-decomposition can be computed in expected time $\tilde{O}(m)$. The algorithm uses $\tilde{O}(m)$ space.
Note that we can use the decremental SCC algorithm of [4] even though this algorithm only works against an oblivious adversary, as the sequence of deleted edges does not depend on the random choices of the decremental SCC algorithm.
5.2 AlmostSure Reachability
In this section, we present an expected near-linear time algorithm for the almost-sure reachability problem. In the almost-sure reachability problem, we are given an MDP $P$ and a target set $T$, and we ask for which vertices player 1 has a strategy to reach $T$ almost surely, i.e., we compute $\langle\langle 1 \rangle\rangle_{as}(\mathit{Reach}(T))$. Due to [5, Theorem 4.1] we can determine this set in time $O(\mathit{MEC} + m)$, where $\mathit{MEC}$ is the running time of the fastest MEC algorithm. We use Theorem 5.1 to compute the MEC decomposition and obtain the following theorem.
Given an MDP $P$ and a set $T$ of vertices, we can compute $\langle\langle 1 \rangle\rangle_{as}(\mathit{Reach}(T))$ in expected time $\tilde{O}(m)$.
5.3 Decremental Maximal EndComponent Decomposition
We present an expected near-linear time algorithm for the MEC-decomposition which supports player-1 edge deletions and a query that answers whether two vertices are in the same MEC. We need the following lemma from [7] to prove the correctness of our algorithm. Given an SCC $C$, we consider the set $U$ of the random vertices in $C$ with edges leaving $C$. The lemma states that for all non-trivial MECs $X$ in $C$ the intersection with the random attractor of $U$ is empty, i.e., $X \cap \mathit{Attr}_R(U) = \emptyset$. [Lemma 2.1(1), [7]] Let $C$ be an SCC in the graph of $P$. Let $U$ be the random vertices in $C$ with edges leaving $C$, and let $A = \mathit{Attr}_R(U) \cap C$. Then for all non-trivial MECs $X$ in $C$ we have $X \cap A = \emptyset$, and for any edge $(u, v)$ with $u \in C \setminus A$ and $v \in A$, $u$ must belong to $V_1$.
The pure MDP graph $G_P$ of an MDP $P$ is the graph which contains only the edges in non-trivial MECs of $P$. More formally, the pure MDP graph is defined as follows: Let $\{X_1, \ldots, X_\ell\}$ be the set of non-trivial MECs of $P$. Then $G_P = (V, E_P)$ where $E_P = \{(u, v) \in E \mid u, v \in X_j \text{ for some } 1 \leq j \leq \ell\}$, and for each $v \in V_R$ with outgoing edges in $G_P$, the transition function is the uniform distribution over the vertices $u$ with $(v, u) \in E_P$.
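Given the list of non-trivial MECs, the edge set of the pure MDP graph can be extracted in one pass; a minimal sketch (the helper name is ours):

```python
def pure_mdp_edges(edges, nontrivial_mecs):
    """Edge set of the pure MDP graph: keep exactly the edges whose
    endpoints lie in the same non-trivial MEC."""
    owner = {}
    for i, M in enumerate(nontrivial_mecs):
        for v in M:
            owner[v] = i
    return {(u, v) for (u, v) in edges
            if u in owner and v in owner and owner[u] == owner[v]}
```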
Throughout the algorithm, we maintain the pure MDP graph $G_P$ for the input MDP $P$. Note that every non-trivial SCC in $G_P$ is also a MEC due to the fact that there are only edges inside of MECs. Moreover, a trivial SCC of $G_P$ is a MEC of $P$ iff the corresponding vertex forms an end-component by itself. Note furthermore that when a player-1 edge of an MDP is deleted, existing MECs might split up into several MECs, but no new vertices are added to existing MECs.
Initially, we compute the MEC-decomposition in expected time $\tilde{O}(m)$ using the algorithm described in Section 5.1. Then we remove every edge that is not in a MEC. The resulting graph is the pure MDP graph $G_P$. Additionally, we invoke a decremental SCC algorithm on $G_P$ which is able to (1) announce new SCCs under edge deletions and return a list of their vertices and (2) answer queries that ask whether two vertices belong to the same SCC. When a player-1 edge $(u, v)$ is deleted, we know that (i) the MEC-decomposition stays the same or (ii) one MEC splits up into new MECs and the rest of the decomposition stays the same. We first check whether $u$ and $v$ are in the same MEC, i.e., whether the edge exists in $G_P$. If not, we are done. Otherwise, $u$ and $v$ are in the same MEC and either (1) the MEC does not split or (2) the MEC splits. In case (1) the SCCs of the pure MDP graph remain intact and nothing needs to be done. In case (2) we need to identify the new SCCs $C_1, \ldots, C_j$ in $G_P$ using the decremental SCC algorithm. Let, w.l.o.g., $C_1$ be the SCC with the most vertices. We iterate through every edge of the vertices in the SCCs $C_2, \ldots, C_j$. By considering all these edges, we identify all SCCs (including $C_1$) which are also MECs. We remove all edges $(x, y)$ where $x$ and $y$ are not in the same SCC to maintain the pure MDP graph $G_P$. For the SCCs that are not MECs, let $U$ be the set of random vertices with edges leaving their SCC. We compute and remove $\mathit{Attr}_R(U)$ (these vertices belong to no MEC due to Lemma 5.3) and recursively start the procedure on the new SCCs generated by the deletion of the attractor. The algorithm is illustrated in Figure 2.
Lemma 5.3 describes the key invariants of the while-loop at Line 3. We prove it with a straightforward induction on the number of iterations of the while-loop and apply Lemma 5.3. Assume that the data structure maintains the pure MDP graph before the deletion of a player-1 edge; then the while-loop at Line 3 maintains the following invariants:

For the graph stored in the data structure and all lists of SCCs still to be processed, there are only edges inside the SCCs or between SCCs of the same list.

If a non-trivial SCC of the maintained graph is not a MEC of the current MDP, then it is contained in one of the lists still to be processed.

If $X$ is a MEC of the current MDP, then we do not delete an edge of $X$ in the while-loop.
Algorithm 3 maintains the pure MDP graph $G_P$ in the data structure under player-1 edge deletions.
Proof.
We show that after deleting an edge using Algorithm 3, (i) every non-trivial SCC is a MEC and vice versa, and (ii) there are no edges going from one MEC to another. Initially, we compute the pure MDP graph and both conditions are fulfilled.
When we delete an edge and the while-loop at Line 3 terminates, (i) holds due to Lemma 5.3(2,3): as we never delete edges within MECs, they are still strongly connected, and when the while-loop terminates there are no unprocessed SCCs left, which means that all non-trivial SCCs are MECs.
For (ii) notice that each SCC is processed once as part of a list of SCCs. Consider an arbitrary SCC $C$ and the corresponding list of SCCs of the iteration in which $C$ was identified as a MEC. By Lemma 5.3(1) there are no edges to SCCs not in the list. Additionally, due to Line 3 we remove all edges from $C$ to other SCCs in the list. ∎
Now that we maintain the pure MDP graph in the data structure, we can answer MEC queries of the form “are $u$ and $v$ in the same MEC of $P$?” by an SCC query on the pure MDP graph $G_P$.
The key idea for the running time of Algorithm 3 is that we do not look at the edges of the largest SCC but determine the new SCC decomposition by inspecting the edges of the smaller SCCs. Note that we identify the largest SCC by processing the SCCs in a lock-step manner. Each edge can be inspected only $O(\log n)$ times in this way. Additionally, when we sort the SCCs, we only look at the vertex ids of the smaller SCCs, and when we charge this cost to the vertices, each vertex is charged only $O(\log n)$ times.
Algorithm 3 maintains the MEC-decomposition of $P$ under player-1 edge deletions in expected total time $\tilde{O}(m)$. Algorithm 3 answers queries that ask whether two vertices belong to the same MEC in constant time. The algorithm uses $\tilde{O}(m)$ space.
Due to the fact that the decremental SCC algorithm we use in Corollary 3 only works against an oblivious adversary, we prove the following proposition. The key idea is that we sort the SCCs returned by the decremental SCC algorithm. Thus, the order in which new SCCs are returned depends only on the given instance. The sequence of deleted edges does not depend on the random choices of the decremental SCC algorithm but only on the given instance.
The algorithm presented in [4] fulfills all the conditions of Proposition 5.3 due to Corollary 3. Therefore we obtain the following theorem due to Proposition 5.3 and Proposition 5.3.
Given an MDP with vertices and edges, the MEC decomposition can be maintained under the deletion of player-1 edges in total expected time , and queries that ask whether two vertices belong to the same MEC can be answered in time. The algorithm uses space. The bound holds against an oblivious adversary.
5.4 MDPs with Streett Objectives
Similar to graphs, we compute the winning region of a Streett objective with pairs for an MDP as follows:

1. We compute the MEC decomposition of the MDP.

2. For each MEC, we find good end-components, i.e., end-components in which, for every pair, either no request vertex appears or some grant vertex appears, and label such a MEC as satisfying.

3. We output the set of vertices that can almost-surely reach a satisfying MEC.
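The labeling condition of step 2 can be sketched as a simple set check. This only verifies the request/grant condition for a candidate vertex set; it does not check that the set is actually an end-component (names and representation are assumptions):

```python
def is_good_end_component(component, streett_pairs):
    """A vertex set X satisfies the Streett labeling condition if, for
    every pair (requests, grants), X contains no request vertex of the
    pair or contains at least one grant vertex of the pair."""
    return all(not (requests & component) or bool(grants & component)
               for requests, grants in streett_pairs)
```

For example, a component that contains a request vertex but no corresponding grant vertex fails the check, while a component touching neither set of a pair trivially satisfies that pair.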
For step 2, we find good end-components similarly to how we find good components in Section 4. The key idea is to use the decremental MEC algorithm described in Section 5.3 instead of the decremental SCC algorithm. We modify the algorithm presented in Section 4 as follows to detect good end-components: First, we use the decremental MEC algorithm instead of the decremental SCC algorithm. To this end, we augment the decremental MEC algorithm with a function that returns a list of references to the new MECs when we delete a set of edges. Second, the decremental MEC algorithm does not allow the deletion of arbitrary edges, but only of player-1 edges. To overcome this obstacle, we create an equivalent instance in which we remove player-1 edges whenever we remove 'bad' vertices. Given an MDP with edges and vertices, we can maintain a data structure that supports the operation

: Deletes a given set of player-1 edges from the MDP. If the edge deletion creates new MECs, the operation returns a list of references to the new non-trivial MECs.
in total expected update time . The bound holds against an oblivious adversary.

Deleting bad vertices. As the decremental MEC algorithm only allows the deletion of player-1 edges, we first modify the original instance to a new instance in which we can remove bad vertices by deleting player-1 edges only. Each random vertex v is split into two vertices v_in and v_out such that all edges into v now lead into v_in, the only outgoing edge of v_in is the player-1 edge (v_in, v_out), and v_out inherits the probability distribution of v. Note that for each vertex, the corresponding out-copy has the same probabilities to reach the representation of a vertex as the original vertex. The described reduction allows us to remove bad vertices from MECs by removing the player-1 edge (v_in, v_out).
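The splitting construction can be sketched as follows, with random vertices given by their probability distributions; the tuple names ('in', v) and ('out', v) are illustrative, not from the paper:

```python
def split_random_vertices(player1_edges, random_dist):
    """Build the modified instance: every random vertex v becomes an
    in-copy and an out-copy joined by a single deletable player-1 edge
    ('in', v) -> ('out', v). Edges into v are redirected to the in-copy,
    and v's probability distribution moves to the out-copy."""
    random_vs = set(random_dist)

    def redirect(w):
        # Any edge that pointed at a random vertex now hits its in-copy.
        return ('in', w) if w in random_vs else w

    new_edges = {(u, redirect(w)) for u, w in player1_edges}
    new_edges |= {(('in', v), ('out', v)) for v in random_vs}
    new_dist = {('out', v): {redirect(w): p for w, p in dist.items()}
                for v, dist in random_dist.items()}
    return new_edges, new_dist
```

Deleting the single edge ('in', v) -> ('out', v) then disconnects the two copies, which is how a bad vertex is removed from a MEC using a player-1 edge deletion only.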
The key idea for the following lemma is that, for each original vertex, either both of its copies are part of a good end-component or neither is. Note that the two copies can only be strongly connected with other vertices if the respective other copy is also in the strongly connected component, because the in-copy has only one outgoing edge (to the out-copy) and the out-copy has only one incoming edge (from the in-copy).
There is a good end-component in the modified instance iff there is a good end-component in the original instance.
On the modified instance, the algorithm for MDPs is identical to Algorithm 1, except that we use a dynamic MEC algorithm instead of a dynamic SCC algorithm.
In an MDP, the winning set for a Streett objective with pairs can be computed in expected time.
References
 [1] R. Alur and T. A. Henzinger. Computer-Aided Verification. Unpublished, 2004. URL: https://web.archive.org/web/20041207121830/http://www.cis.upenn.edu/group/cis673/.
 [2] C. Baier and J.-P. Katoen. Principles of Model Checking. MIT Press, 2008.
 [3] C. Beeri. On the Membership Problem for Functional and Multivalued Dependencies in Relational Databases. ACM Trans. Database Syst., 5(3):241–259, 1980. doi:10.1145/320613.320614.
 [4] A. Bernstein, M. Probst, and C. Wulff-Nilsen. Decremental Strongly-Connected Components and Single-Source Reachability in Near-Linear Time. In STOC, pages 365–376, 2019. doi:10.1145/3313276.3316335.
 [5] K. Chatterjee, W. Dvořák, M. Henzinger, and V. Loitzenbauer. Model and Objective Separation with Conditional Lower Bounds: Disjunction is Harder than Conjunction. In LICS, pages 197–206, 2016. doi:10.1145/2933575.2935304.
 [6] K. Chatterjee, A. Gaiser, and J. Kretínský. Automata with Generalized Rabin Pairs for Probabilistic Model Checking and LTL Synthesis. In CAV, pages 559–575, 2013. doi:10.1007/978-3-642-39799-8_37.
 [7] K. Chatterjee and M. Henzinger. Faster and Dynamic Algorithms for Maximal End-Component Decomposition and Related Graph Problems in Probabilistic Verification. In SODA, pages 1318–1336, 2011. doi:10.1137/1.9781611973082.101.
 [8] K. Chatterjee and M. Henzinger. An O(n^2) Time Algorithm for Alternating Büchi Games. In SODA, pages 1386–1399, 2012. URL: http://portal.acm.org/citation.cfm?id=2095225.
 [9] K. Chatterjee and M. Henzinger. Efficient and Dynamic Algorithms for Alternating Büchi Games and Maximal End-Component Decomposition. J. ACM, 61(3):15.1–15.40, 2014. doi:10.1145/2597631.
 [10] K. Chatterjee, M. Henzinger, and V. Loitzenbauer. Improved Algorithms for One-Pair and k-Pair Streett Objectives. In LICS, pages 269–280, 2015. doi:10.1109/LICS.2015.34.
 [11] K. Chatterjee, M. Henzinger, and V. Loitzenbauer. Improved Algorithms for Parity and Streett Objectives. Logical Methods in Computer Science, 13(3):1–27, 2017. doi:10.23638/LMCS-13(3:26)2017.
 [12] S. Chechik, T. D. Hansen, G. F. Italiano, J. Lacki, and N. Parotsidis. Decremental Single-Source Reachability and Strongly Connected Components in Õ(m√n) Total Update Time. In FOCS, pages 315–324, 2016. doi:10.1109/FOCS.2016.42.
 [13] F. Ciesinski and C. Baier. LiQuor: A Tool for Qualitative and Quantitative Linear Time Analysis of Reactive Systems. In QEST, pages 131–132, 2006. doi:10.1109/QEST.2006.25.
 [14] A. Cimatti, E. Clarke, F. Giunchiglia, and M. Roveri. NuSMV: A New Symbolic Model Checker. International Journal on Software Tools for Technology Transfer (STTT), 2(4):410–425, 2000. doi:10.1007/s100090050046.
 [15] E. M. Clarke, Jr., O. Grumberg, and D. A. Peled. Model Checking. MIT Press, Cambridge, MA, USA, 1999.
 [16] C. Courcoubetis and M. Yannakakis. The Complexity of Probabilistic Verification. J. ACM, 42(4):857–907, 1995. doi:10.1145/210332.210339.
 [17] C. Dehnert, S. Junges, J.-P. Katoen, and M. Volk. A Storm is Coming: A Modern Probabilistic Model Checker. In CAV, pages 592–600, 2017. doi:10.1007/978-3-319-63390-9_31.
 [18] J. Esparza and J. Kretínský. From LTL to Deterministic Automata: A Safraless Compositional Approach. In CAV, pages 192–208, 2014. doi:10.1007/978-3-319-08867-9_13.
 [19] M. Rauch Henzinger and J. A. Telle. Faster Algorithms for the Nonemptiness of Streett Automata and for Communication Protocol Pruning. In SWAT, pages 16–27, 1996. doi:10.1007/3-540-61422-2_117.
 [20] G. J. Holzmann. The Model Checker SPIN. IEEE Trans. Softw. Eng., 23(5):279–295, 1997. doi:10.1109/32.588521.
 [21] N. Immerman. Number of Quantifiers is Better Than Number of Tape Cells. J. Comput. Syst. Sci., 22(3):384–406, 1981. doi:10.1016/0022-0000(81)90039-8.
 [22] Z. Komárková and J. Kretínský. Rabinizer 3: Safraless Translation of LTL to Small Deterministic Automata. In ATVA, pages 235–241, 2014. doi:10.1007/978-3-319-11936-6_17.
 [23] M. Z. Kwiatkowska, G. Norman, and D. Parker. PRISM 4.0: Verification of Probabilistic Real-Time Systems. In CAV, pages 585–591, 2011. doi:10.1007/978-3-642-22110-1_47.
 [24] Z. Manna and A. Pnueli. Temporal Verification of Reactive Systems: Progress (draft). http://theory.stanford.edu/~zm/tvors3.html, 1996.
 [25] S. Safra. On the Complexity of ω-Automata. In FOCS, pages 319–327, 1988. doi:10.1109/SFCS.1988.21948.
 [26] R. Segala. Modeling and Verification of Randomized Distributed Real-Time Systems. PhD thesis, MIT, 1995.
 [27] R. Tarjan. Depth-First Search and Linear Graph Algorithms. SIAM J. Comput., 1(2):146–160, 1972.
 [28] M. Y. Vardi. Automatic Verification of Probabilistic Concurrent Finite-State Programs. In FOCS, pages 327–338, 1985. doi:10.1109/SFCS.1985.12.