# A o(d) ·polylog n Monotonicity Tester for Boolean Functions over the Hypergrid [n]^d

We study monotonicity testing of Boolean functions over the hypergrid [n]^d and design a non-adaptive tester with 1-sided error whose query complexity is Õ(d^5/6)·poly(n, 1/ε). Prior to our work, the best known testers had query complexity linear in d but independent of n. We improve upon these testers as long as n = 2^d^o(1). To obtain our results, we work with what we call the augmented hypergrid, which adds extra edges to the hypergrid. Our main technical contribution is a Margulis-style isoperimetric result for the augmented hypergrid, and our tester, like previous testers for the hypercube domain, performs directed random walks on this structure.


## 1 Introduction

Monotonicity testing is a classic property testing problem that asks whether a function defined over a partial order is monotone or not. Consider a function f : D → R, where D is a partial order and R is an ordered range. The function f is monotone if f(x) ≤ f(y) whenever x ⪯ y in the partial order D. The distance between two functions f and g is the fraction of points at which they differ. The distance to monotonicity of f is ε_f := min_{g ∈ M} dist(f, g), where M is the set of monotone functions. Given a parameter ε ∈ (0, 1), the aim of a property tester is to correctly determine, with high probability, whether f is monotone or the distance to monotonicity is at least ε. When the distance to monotonicity of f is at least ε, we say that f is ε-far from being monotone. In recent years, there has been a lot of work [GGL00, CS14a, CST14, CDST15, KMS15, BB16, CWX17] on understanding the testing question for Boolean functions defined over the d-dimensional hypercube domain {0,1}^d. This line of work has unearthed a connection between monotonicity testing and isoperimetric theorems on the directed hypercube. In this paper, we investigate monotonicity testing of Boolean functions over the d-dimensional, n-hypergrid, f : [n]^d → {0,1}. Apart from being a natural property testing question, our motivation is to unearth isoperimetry theorems for richer structures. Indeed, our main technical contribution is a Margulis-type isoperimetry theorem for a structure called the augmented hypergrid. Such a theorem allows us to design a tester with query complexity Õ(d^5/6)·poly(n, 1/ε) for Boolean functions defined on [n]^d. As long as n = 2^d^o(1), this tester has the best query complexity among the testers known so far.

###### Theorem 1.1.

Given a function f : [n]^d → {0,1} and a parameter ε > 0, there is a randomized algorithm that makes Õ(d^5/6)·poly(n, 1/ε) non-adaptive queries and (a) returns YES with probability 1 if the function is monotone, and (b) returns NO with probability at least 2/3 if the function is ε-far from being monotone.

### 1.1 Perspective

#### The Hypercube.

Goldreich et al. [GGL00] gives the first monotonicity testers for Boolean functions over the hypercube. Their tester makes O(d/ε) queries. Chakrabarty and Seshadhri [CS14a] describes a o(d)-query tester via so-called directed isoperimetry theorems. Given f : {0,1}^d → {0,1}, define S_f^- to be the set of edges of the hypercube with their "lower endpoint" evaluating to 1 and their "upper endpoint" evaluating to 0. That is, S_f^- is the set of edges violating monotonicity. Define I_f^- := |S_f^-|/2^d and Γ_f^- := (cardinality of the largest matching in S_f^-)/2^d. [CS14a] prove that if f is ε-far from being monotone, then I_f^- · Γ_f^- = Ω(ε²). If the edges of the hypercube were to be oriented from the point with fewer ones to the one with more, then this result connects the structure of directed edges leaving the set of points evaluating to one with the distance to monotonicity. In this sense, this result is a directed analogue of a result by Margulis [Mar74], which proves a similar statement on the undirected hypercube. Using this directed isoperimetry result, [CS14a] designs an Õ(d^7/8)-query tester. Chen et al. [CST14] refines the analysis in [CS14a] to give a tester with query complexity Õ(d^5/6)·poly(1/ε). A remarkable paper of Khot, Minzer, and Safra [KMS15] proves the following directed analogue of Talagrand's [Tal93] isoperimetry theorem. If I_f^-(x) is the number of edges in S_f^- incident on x, then [KMS15] proves E_x[√(I_f^-(x))] = Ω(ε/polylog d). The directed analogue of Talagrand's [Tal93] isoperimetry theorem is stronger than the Margulis-type theorem (albeit with an extra polylogarithmic factor in the denominator). Khot et al. [KMS15] uses this stronger directed isoperimetry result to obtain a Õ(√d/ε²)-query monotonicity tester. This bound is nearly optimal for non-adaptive testers [CDST15, KMS15]. However, the proof techniques of both these isoperimetry results are very different. Chakrabarty-Seshadhri [CS14a] use the combinatorial structure of the "violation graph" to explicitly find either a large number of edges in S_f^-, or a large matching in S_f^-. Khot et al. [KMS15] instead propose an operator (the split operator) which converts a function that is far from monotone into a function with sufficient structure that allows them to prove the Talagrand-type isoperimetry theorem in a relatively easier way. This technique of [KMS15] is reminiscent of the original result of Goldreich et al. [GGL00], which also defines an operator (the switch operator) to convert a function into a monotone function while accounting for the number of violated edges. It appears that methods which change function values are harder to generalize to the hypergrid domain. In particular, it is not clear how to generalize the switch or the split operators to hypergrids.

#### The Hypergrid.

Dodis et al. [DGL99] is the first paper to study property testing on the d-dimensional hypergrid [n]^d. For Boolean functions, this paper describes a tester whose query complexity is linear in d (up to logarithmic factors in d/ε); note that the query complexity is independent of n. The proof follows via a dimension reduction theorem for Boolean functions. This result asserts that if a Boolean function on the hypergrid is ε-far from being monotone, then the function restricted to a random axis-parallel line has an expected distance of Ω(ε/d) to monotonicity. On a line, it is not too hard to see that Boolean functions can be tested with O(1/ε) queries. This style of analysis was refined by Berman, Raskhodnikova and Yaroslavtsev [BRY14a], which shaves logarithmic factors and gives the current best known tester; its query complexity is again linear in d and independent of n. Since these testers project to a line, they (a) have no dependence on n, and (b) seem to need the linear dependence on d, since the violations may be restricted to unknown dimensions which, if naively searched, may take Ω(d) queries to detect. For real-valued functions over [n]^d, Dodis et al. [DGL99] give an O((d/ε)·log n·log |R|)-query tester, where R is the range of the function. They do so by a clever range-reduction technique that reduces to testing Boolean functions over [n]^d. One of the key ideas to emerge from the results of Ergun et al. [EKK00] and Bhattacharyya et al. [BGJ12] on monotonicity testing on the line (and richer structures) is to compare points that are far apart. Chakrabarty and Seshadhri [CS13] exploit this idea to give an optimal O((d/ε)·log n)-query tester for real-valued functions over [n]^d, removing the dependence on |R|. Specifically, their tester queries pairs in the hypergrid that may differ in one coordinate by an arbitrary power of 2. One can think of adding these extra edges to get an augmented hypergrid. (This is the central theme of the transitive closure spanner idea of Bhattacharyya et al. [BGJ12].) This notion of the augmented hypergrid is central to our paper.
The main result of [CS13] was to show that if f (even a real-valued f) is ε-far from being monotone, then this augmented hypergrid has many violated edges. For ε-far Boolean-valued functions, this implies that the "out-edge-boundary" of the set of 1s must be large. The main technical result of this paper is a Margulis-style result for the augmented hypergrid generalizing the result of [CS14a]. It states that either the "out-edge-boundary" is very large, or the "out-vertex-boundary" is large (details in §2). One of the main tools that [CS14a] use is a routing theorem in the hypercube due to Lehman and Ron [LR01]. One of the ways this theorem is proved and used exploits the fact that the "directed hypercube" is a layered DAG with vertices of the same Hamming weight forming the layers. The "directed hypergrid" is also a layered DAG, but the augmented hypergrid is not. This poses many technical challenges, and our way out is to define "good portions" of the hypergrid where a certain specified subgraph is indeed layered. We generalize Lehman-Ron, but, more crucially, we can show that if a function is ε-far, then large good portions exist. The definition of these good portions is perhaps our main conceptual combinatorial contribution.

### 1.2 Reducing to the case when n is a power of 2

It greatly simplifies the presentation to assume that n is a power of 2. For monotonicity testing, this is no loss of generality. In §A, we show that monotonicity testing over general hypergrids can be reduced to the case when n is a power of 2. Specifically, in Theorem A.1 we reduce testing over general [n]^d to testing over [n']^d, where n' is a power of 2 and n' ≥ n. In our case, this incurs only a loss that is absorbed in the poly(n, 1/ε) factor of the query complexity. Thus, we assume that n is a power of 2 throughout the paper except in Theorem 1.1, where the query complexity is stated for general n to reflect this loss. To be specific, §3 and §4 do not need n to be a power of 2, while we stress that §5, §B and §C do need n to be a power of 2.

### 1.3 The Augmented Hypergrid

Given the d-dimensional, n-hypergrid [n]^d, we define the augmented hypergrid, which is simply the standard hypergrid with additional edges connecting any two vertices that differ in exactly one dimension by a power of two. This construction was explicitly introduced in [CS13]. It is useful to partition the edges of the augmented hypergrid into a collection of matchings: for each dimension i ∈ [d] and each power 2^p < n,

• H⁰_{i,p} consists of the pairs (x, x + 2^p·e_i) with ⌊(x_i − 1)/2^p⌋ even;

• H¹_{i,p} consists of the pairs (x, x + 2^p·e_i) with ⌊(x_i − 1)/2^p⌋ odd.

Note that H⁰_{i,p} is a perfect matching (since n is a power of 2), but H¹_{i,p} is not. We let d_A(x, y) denote the shortest-path distance between two points in the augmented hypergrid.
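To make the construction concrete, here is a minimal sketch in Python (our own illustration, not code from the paper), assuming 0-indexed coordinates {0, …, n−1}; the helper names `augmented_neighbors` and `dist_A` are ours. It enumerates the power-of-two edges in each dimension and computes the shortest-path distance d_A by breadth-first search:

```python
from collections import deque

def augmented_neighbors(x, n):
    """Yield the neighbors of point x in the augmented hypergrid [n]^d:
    in each dimension, move by +/- 2^p for every power 2^p < n."""
    d = len(x)
    for i in range(d):
        p = 1
        while p < n:
            for delta in (p, -p):
                xi = x[i] + delta
                if 0 <= xi < n:
                    yield x[:i] + (xi,) + x[i + 1:]
            p *= 2

def dist_A(x, y, n):
    """Shortest-path distance d_A(x, y) in the augmented hypergrid, by BFS."""
    x, y = tuple(x), tuple(y)
    if x == y:
        return 0
    dist = {x: 0}
    queue = deque([x])
    while queue:
        u = queue.popleft()
        for v in augmented_neighbors(u, n):
            if v not in dist:
                dist[v] = dist[u] + 1
                if v == y:
                    return dist[v]
                queue.append(v)
    return None  # unreachable (cannot happen inside [n]^d)
```

On the line with n = 8, for instance, the distance from 0 to 7 is 3 (jumps of 4, 2, and 1); in general any two points of the augmented hypergrid are within O(d log n) of each other.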

### 1.4 The Monotonicity Tester

Our tester is a generalization of the tester described by Khot, Minzer, and Safra [KMS15] over the Boolean hypercube, which itself is inspired by the path tester described in [CST14, CS14a]. Instead of taking a random walk on the hypergrid, however, we perform a random walk on the augmented hypergrid.
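As an illustration only (the precise walk distribution and parameter choices of the actual tester are those of Fig. 2 and §5), a stripped-down upward path tester might look as follows; `walk_len` and `trials` are placeholder parameters of our own:

```python
import random

def directed_random_walk_tester(f, n, d, walk_len, trials):
    """One-sided path tester (sketch): pick a random point x, take a short
    *upward* random walk in the augmented hypergrid to reach y >= x, and
    reject iff f(x) = 1 but f(y) = 0."""
    for _ in range(trials):
        x = tuple(random.randrange(n) for _ in range(d))
        y = list(x)
        for _ in range(walk_len):
            i = random.randrange(d)                       # random dimension
            p = 2 ** random.randrange(max(1, n.bit_length() - 1))
            if y[i] + p < n:                              # move up by 2^p in dim i
                y[i] += p
        if f(x) == 1 and f(tuple(y)) == 0:
            return "REJECT"        # found a violated pair x <= y
    return "ACCEPT"
```

One-sidedness is evident: for a monotone f, every queried pair satisfies x ⪯ y coordinate-wise, so the tester never rejects a monotone function.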

The main result of this paper is the following theorem which easily implies Theorem 1.1.

###### Theorem 1.2.

Given a function f : [n]^d → {0,1} which is ε-far from being monotone, the tester described in Fig. 2, with inputs f and ε, detects a violation with probability Ω(1/(d^5/6·poly(n, 1/ε))).

### 1.5 Related Work and Remarks

Monotonicity testing has been extensively studied [EKK00, GGL00, DGL99, LR01, FLN02, HK03, AC06, HK08, ACCL07, Fis04, SS06, Bha08, BCSM12, FR10, BBM12, RRSW11, BGJ12, CS13, CS14a, CST14, BRY14a, BRY14b, CDST15, CDJS15, KMS15, BB16, CWX17] in the past two decades; in this section we discuss a few works most relevant to this paper. In property testing, the notion of distance between functions is usually the Hamming distance between them, that is, the fraction of points at which they differ. More generally, one can think of a general measure over the domain, the distance being the measure of the points at which the two functions differ. Monotonicity testing has been studied [AC06, HK08, CDJS15] over general product measures. It is now known [CDJS15] that for functions over {0,1}^d, there exist monotonicity testers over any product distribution; in fact, there exist better testers if the distribution is known. A simple argument (Claim 3.6 in [CDJS15]) shows that testing monotonicity of Boolean functions over {0,1}^d over any product distribution reduces to testing over [n]^d over the uniform distribution. Thus our result gives o(d)·poly(n, 1/ε)-query monotonicity testers for {0,1}^d, even over p-biased distributions; this holds even when the biases p_i are not constants and depend on d. Once again, it is not clear how to generalize the tester of Khot, Minzer, and Safra [KMS15] to obtain such a result.

In a different take on the distance function, Berman, Raskhodnikova, and Yaroslavtsev [BRY14a] study property testing when the distance function is not the Hamming distance but could be a more general L_p norm. That is, the distance between f and g is the L_p norm of f − g (the measure over which the norm is taken is the uniform measure). One of their results is a black-box reduction (Lemma 2.2 in [BRY14a]) of L_1-testing of functions with range [0,1] to "usual" testing of Boolean functions. In particular, our result along with their reduction implies o(d)·poly(n, 1/ε)-query testers for L_1-testing over [n]^d. Another interesting result in the same paper is a separation between non-adaptive and adaptive testers: for Boolean functions defined over [n]^2, Berman et al. [BRY14a] describe an adaptive tester whose query complexity is provably smaller than their lower bound for non-adaptive testers. It is an interesting question (even for the hypercube) whether adaptivity helps in Boolean monotonicity testing; it is known that for real-valued functions it doesn't [CS14b]. Some recent results [BB16, CWX17] prove some very interesting lower bounds for adaptive testers.

Monotonicity testing is well defined over any arbitrary poset, but our knowledge here is limited. Fischer et al. [FLN02] prove there exist O(√(N/ε))-query testers over any poset of cardinality N, even for real-valued functions; they also prove lower bounds even for Boolean functions. On the other hand, there are good testers for the hypercube and hypergrid even for real-valued functions. Can we understand the structure that allows for efficient testers? Our notion of "good portions" (Lemma 2.9) holds for any poset, and may provide some directions towards this question.

Finally, we comment on our tester's dependence on n. It does not seem possible to remove this dependence with our current line of attack, since the number of edges in the augmented hypergrid (when divided by n^d) depends on log n. One direction may be to sparsify the augmented hypergrid in such a way that we do not lose the Margulis-type inequality. It is an interesting direction to gain a greater understanding of such isoperimetric inequalities and possibly remove this dependence on n.

## 2 Isoperimetric Theorems on the Augmented Hypergrid

Given a function f : [n]^d → {0,1}, we consider it to be defined over the vertices of the augmented hypergrid. We let S_f^- denote the set of edges (x, y) of the augmented hypergrid with f(x) = 1 and f(y) = 0, where x is the lower endpoint. We let I_f^- := |S_f^-|/n^d. If f is ε-far from being monotone, then I_f^- = Ω(ε). This result is implicit in many earlier papers [DGL99, EKK00, CS13] on monotonicity testing over the hypergrid. If one considers the edges as being oriented from the lower to the upper endpoint, then the above statement lower bounds the normalized "out-edge-boundary" of the indicator set of a function which is far from monotone. It is instructive to note that to obtain this result one needs to look at the augmented hypergrid: if one considered the standard hypergrid, then one would need an extra factor of n in the denominator of the RHS. This is apparent even when d = 1 and the function is 1 on the first half of the line and 0 on the second half. One can also think about the normalized out-vertex-boundary of f, defined as the fraction of vertices incident to an edge of S_f^-. In fact, we focus on the following smaller quantity:

 Γ_f^- := (1/n^d) · (size of the maximum-cardinality matching in S_f^-)
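For intuition, both quantities can be computed by brute force on tiny examples (our own sketch, not from the paper). Since every violated edge goes from a 1-point to a 0-point, the maximum matching in S_f^- is a bipartite matching, computable by augmenting paths:

```python
from itertools import product

def violated_edges(f, n, d):
    """Edges (x, y) of the augmented hypergrid (y above x in one dimension
    by a power of two) with f(x) = 1 and f(y) = 0; coordinates 0-indexed."""
    edges = []
    for x in product(range(n), repeat=d):
        for i in range(d):
            p = 1
            while x[i] + p < n:
                y = x[:i] + (x[i] + p,) + x[i + 1:]
                if f(x) == 1 and f(y) == 0:
                    edges.append((x, y))
                p *= 2
    return edges

def max_matching(edges):
    """Maximum matching among the violated edges via Kuhn's
    augmenting-path algorithm (the graph is bipartite)."""
    adj = {}
    for x, y in edges:
        adj.setdefault(x, []).append(y)
    match = {}                         # 0-point -> matched 1-point
    def augment(x, seen):
        for y in adj.get(x, []):
            if y not in seen:
                seen.add(y)
                if y not in match or augment(match[y], seen):
                    match[y] = x
                    return True
        return False
    return sum(augment(x, set()) for x in adj)
```

For example, on the line [4] with f equal to 1 on {0, 1} and 0 on {2, 3}, the violated edges are (0,2), (1,2), (1,3), so I_f^- = 3/4 while Γ_f^- = 2/4, illustrating that Γ_f^- is the smaller quantity.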

Our main technical result is the following Margulis-style [Mar74] directed isoperimetry theorem over the augmented hypergrid.

###### Theorem 2.1.

If f : [n]^d → {0,1} is ε-far from being monotone, then

 I_f^- · Γ_f^- = Ω(ε²)

As in the Boolean hypercube case, the Margulis-style isoperimetry theorem allows one to analyze the tester in Fig. 2. We follow the clean analysis of Khot et al. [KMS15] and discuss this in §5. The proof of Theorem 2.1 follows the structure of that in Chakrabarty and Seshadhri [CS14a] for functions over the Boolean hypercube. Fix a function f which is ε-far from being monotone. Consider a matching M on the vertices of [n]^d consisting of disjoint pairs of violations (a matching in the violation graph of f). A folklore theorem states that any such matching that is maximal has cardinality at least εn^d/2. We focus on maximal matchings in the violation graph of f which minimize the average shortest-path distance in the augmented hypergrid between their endpoints. That is, we consider M which minimizes (1/|M|)·Σ_{(x,y)∈M} d_A(x, y). Let ℓ be this minimum value. The following theorem can be proved using the techniques developed in [CS14a, CS13]. In [CS14a], Chakrabarty and Seshadhri prove the analogous bound for Boolean functions over the hypercube. A similar observation holds for real-valued functions over [n]^d; indeed, [CS13] proves such a bound for such functions. We show that the bound holds for Boolean functions over the augmented hypergrid, and defer the complete proof to the appendix §C, as it is not the main contribution of this paper and the techniques are very similar to the previous works discussed.

###### Theorem 2.2.

If the average distance between the endpoints of M is ℓ, then I_f^- = Ω(εℓ).

Therefore, if ℓ is large then the edge boundary is large. Theorem 2.1 follows from the next theorem, which shows that if ℓ is small, then there is a large matching in S_f^-.

###### Theorem 2.3.

If the average distance between the endpoints of M is ℓ, then Γ_f^- = Ω(ε/ℓ).

The above theorem is where the novelty of this paper lies. In the next subsection, we make key definitions and outline the roadmap of the proof of Theorem 2.3, which constitutes the bulk of the paper. §5 contains the final analysis of the tester, which follows from the above theorem via by-now-standard arguments. The interested reader can read §5 independently of the remainder of the paper.

### 2.1 Proof of Theorem 2.3: A Roadmap

Among all maximal matchings which have average distance ℓ, choose M to be one which maximizes the following potential function:

 Ψ(M) := Σ_{(x,y)∈M} d_A(x, y)²

Maximizing Ψ has the effect of uncrossing pairs in the matching, which is useful when we use M for finding structured subgraphs in the augmented hypergrid. We also point out that this is the same potential function used in [CS14a] for the hypercube case. Let M_i ⊆ M be the pairs (x, y) with d_A(x, y) = i. Since the average distance of M is ℓ, we get Σ_i i·|M_i| = ℓ|M|. For any i, let X_i be the "lower endpoints" of M_i, which evaluate to 1, and let Y_i be the "upper endpoints", which evaluate to 0. We now make a few definitions.

###### Definition 2.4 (Consistent Sets).

A pair (X, Y) of subsets of any poset is said to be i-consistent if there exists a bijection σ : X → Y such that x ≺ σ(x) and d(x, σ(x)) = i for every x ∈ X.

Note that for all i, the sets (X_i, Y_i) are i-consistent. The following definitions are key for proving the theorem.

###### Definition 2.5 (Cover Graph induced by Consistent Sets).

Given a pair (X, Y) of i-consistent sets in a poset D, we let P(X, Y) denote the collection of paths in D which originate from some vertex x ∈ X, terminate in some vertex y ∈ Y, are shortest paths from x to y, and have length exactly i. The (X, Y)-cover graph G(X, Y) is the subgraph of D formed by taking the union of all paths in P(X, Y).

We remark here that in an i-consistent pair (X, Y) there may be x ∈ X and y ∈ Y such that d(x, y) ≠ i. In that case P(X, Y) doesn't contain any path from x to y. However, G(X, Y) may contain a path from x to y. We illustrate an example of this fact in Fig. 3.

An i-layered DAG is a directed acyclic graph with nodes partitioned into layers L_0, L_1, …, L_i, where each edge goes from a vertex in some L_j to a vertex in L_{j+1}. A layered DAG is a very structured subgraph.
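The layered condition is easy to verify mechanically. The following sketch (ours, not from the paper) assigns BFS layers from the source side and checks that every edge advances by exactly one layer:

```python
from collections import deque

def is_layered_dag(edges, sources):
    """Check the layered-DAG condition: compute BFS layers from the
    sources, then verify every edge goes from layer j to layer j+1."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    layer = {s: 0 for s in sources}
    queue = deque(sources)
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in layer:
                layer[v] = layer[u] + 1
                queue.append(v)
    return all(u in layer and v in layer and layer[v] == layer[u] + 1
               for u, v in edges)
```

For instance, a 3-vertex path 0 → 1 → 2 is layered, but adding the "shortcut" edge 0 → 2 breaks the condition, since that edge skips a layer.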

###### Definition 2.6 (Good Pairs of Consistent Sets).

An i-consistent pair (X, Y) is i-good if the graph G(X, Y) is an i-layered DAG.

If the poset is the hypercube {0,1}^d, and (X, Y) is a consistent pair such that the Hamming weight of each vertex in X is the same, and that of each vertex in Y is the same (that is, X lies in one "level" of the hypercube and so does Y), then it is easy to see that (X, Y) is a good pair. Lehman and Ron [LR01] prove that for such X and Y, one can find |X| vertex-disjoint paths. Our first lemma is a generalization of the Lehman-Ron theorem [LR01] to arbitrary good pairs in the augmented hypergrid. This, in some sense, abstracts out the sufficient conditions needed for a Lehman-Ron-like theorem. We prove this lemma in §4.

###### Lemma 2.7 (Generalized Lehman-Ron).

If X, Y are two subsets of [n]^d such that (X, Y) is an i-good, consistent pair for some i, then there exist |X| vertex-disjoint paths from X to Y.

###### Definition 2.8 (Independent Good Pairs).

Two i-good pairs (X, Y) and (X′, Y′) are said to be independent if any path in P(X, Y) is vertex-disjoint from any path in P(X′, Y′).

Independent good pairs are, in some sense, "far away" from each other: if we find vertex-disjoint paths from X to Y, and from X′ to Y′, then these paths will not intersect each other. Recall the definition of the matching M_i from the beginning of this section. Our second lemma shows that if M is the Ψ-maximizing matching in any poset (not necessarily the augmented hypergrid), then for all i there is a collection of pairwise-independent, i-good, consistent pairs of total size |M_i|. This is proved in §3.

###### Lemma 2.9 (Existence of large pairwise-independent, good consistent pairs).

Given any poset D and a function f, let M be a maximal-cardinality matching in the violation graph with minimum average distance, and with maximum Ψ among these. Let M_i be the subset of M whose endpoints are at distance exactly i. Then there exists a collection {(X_j, Y_j)} of pairwise-independent i-good pairs such that the X_j's partition the lower endpoints of M_i, the Y_j's partition the upper endpoints, and Σ_j |X_j| = |M_i|.

###### Proof of Theorem 2.3.

We know there exists some i ≤ 2ℓ with |M_i| ≥ |M|/(4ℓ). Lemma 2.9 gives us a collection {(X_j, Y_j)} of pairwise-independent, i-good sets with Σ_j |X_j| = |M_i|. Lemma 2.7 implies that for each j, we have a collection of |X_j| vertex-disjoint paths from X_j to Y_j. Since the pairs are pairwise independent, the union of these collections is vertex-disjoint. This implies we have |M_i| = Ω(εn^d/ℓ) vertex-disjoint paths, each from a point that evaluates to 1 to a point that evaluates to 0. Since each path must contain at least one edge from S_f^-, the theorem follows. ∎

## 3 Finding Good Portions in General Posets: Proof of Lemma 2.9

We are given the matching M: the maximal-cardinality, minimum-average-distance matching that maximizes Ψ. M_i is the subset of M consisting of pairs at distance exactly i. We find the sets (X_j, Y_j) via a recursive algorithm (the "Procedure to Get Pairwise Conflict-free Sets" below). To describe the algorithm, we first make a definition.

###### Definition 3.1 (Conflicting Sets).

Given a pair of disjoint subsets S, T ⊆ M_i, let X_S and Y_S be the lower and upper endpoints of the pairs in S, and similarly define X_T and Y_T. We say that S and T conflict if there exist shortest paths P going from some s ∈ X_S to some t ∈ Y_S, and Q going from some s′ ∈ X_T to some t′ ∈ Y_T, such that (a) P and Q have a vertex z in common, and (b) d(s, z) = d(s′, z).

Therefore, two sets conflict if there are shortest paths from their respective X's intersecting "at the same level". Note that the paths needn't be paths of the cover graphs, nor do we say the sets conflict if the paths intersect but "at different levels". However, as seen later in this section, the pairwise conflict-free sets we obtain from M_i via our recursive algorithm indeed have pairwise-disjoint cover graphs, as otherwise we would obtain another matching with either (a) smaller average distance or (b) the same average distance and larger Ψ, contradicting our definition of M. The following procedure returns a collection of subsets of M_i which are pairwise conflict-free. The sets (X_j, Y_j) are obtained by taking the lower and upper endpoints of these subsets.

Procedure to Get Pairwise Conflict-free Sets:

Suppose M_i = {(s_1, t_1), …, (s_m, t_m)}. The procedure is recursively defined:

Base Step: Define the leaves as the singletons S_j^(0) := {(s_j, t_j)}. Construct the base conflict graph G_0 as follows: each S_j^(0) is a vertex, and S_j^(0) is connected by an edge to S_k^(0) if j ≠ k and they conflict (Definition 3.1). Exit if G_0 has no edges.

Recursive Step: For r ≥ 1: for the t-th connected component of G_{r−1}, construct a set S_t^(r) which is the union of all the sets in that component. Construct the conflict graph G_r on these nodes indexed by the S_t^(r)'s. Exit if G_r has no edges. Note that the number of nodes in G_r is strictly less than that in G_{r−1}, since the latter has at least one edge. Also note G_r may have new edges, since the conflict sets are getting bigger.

Termination: Since the number of vertices in the conflict graphs strictly decreases, this procedure terminates. Let S_1, …, S_k be the collection of sets at the final level. By definition, these are pairwise conflict-free. Let X_j (resp., Y_j) be the lower (resp., upper) set of endpoints of the pairs in S_j.

Return {(X_j, Y_j)} for j = 1, …, k.

First note that the lower and upper endpoints of any set S_j are i-consistent, since they can be paired using the matching M_i. Also note that at any iteration r, every matched pair of M_i is in some set. Therefore, the X_j's partition the lower endpoints of M_i, and similarly the Y_j's partition the upper endpoints. What remains to be proven is that (a) each (X_j, Y_j) is good, and (b) they are pairwise independent. Before we do so, we need the following "rematching lemma", which is key. Fix one of the sets in the conflict-free collection, and let (X, Y) be the pair obtained. For the sake of the rematching lemma, let us forsake the subscript j.
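The merge-until-conflict-free loop of the procedure can be sketched abstractly (our illustration; `conflict` is a placeholder for Definition 3.1, and the union-find bookkeeping is an implementation choice, not the paper's):

```python
def merge_until_conflict_free(items, conflict):
    """Start from singleton sets, build the conflict graph on the current
    sets, merge each connected component into one set, and repeat until
    no two sets conflict.  `conflict(A, B)` is a user-supplied predicate."""
    sets = [frozenset([it]) for it in items]
    while True:
        n = len(sets)
        parent = list(range(n))          # union-find over current sets
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        any_edge = False
        for a in range(n):
            for b in range(a + 1, n):
                if conflict(sets[a], sets[b]):
                    any_edge = True
                    parent[find(a)] = find(b)
        if not any_edge:                 # pairwise conflict-free: done
            return sets
        groups = {}                      # merge each connected component
        for a in range(n):
            groups.setdefault(find(a), set()).update(sets[a])
        sets = [frozenset(g) for g in groups.values()]
```

Termination mirrors the argument in the text: whenever the conflict graph has an edge, merging its connected components strictly decreases the number of sets.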

###### Lemma 3.2 (Rematching Lemma).

For any ŝ ∈ X and t̂ ∈ Y, it is possible to rearrange M_i to form a new matching σ with the following properties:

• For any x ∈ X and y ∈ Y with σ(x) = y: d(x, y) = i.

• ŝ and t̂ are the only vertices which become unmatched.

We defer the proof of the rematching lemma and first note how it helps us. Once again, since X and Y arise from M_i, they are i-consistent, and indeed M_i is the pairing. What the above lemma says is that for any ŝ ∈ X and t̂ ∈ Y, we can "rewire" the matching so that the average distance still remains the same. In particular, if d(ŝ, t̂) < i, we would have a contradiction, since we would get a different maximal matching with strictly smaller distance. With this in mind, let's use the rematching lemma to prove that the (X_j, Y_j)'s are good and pairwise independent (Definition 2.8), thus proving Lemma 2.9.

###### Lemma 3.3.

Let {(X_j, Y_j)} be the pairs of sets returned by the "Procedure to Get Pairwise Conflict-free Sets" on input M_i. Each (X_j, Y_j) is i-good. Moreover, (X_j, Y_j) and (X_k, Y_k) are independent for j ≠ k.

###### Proof.

Recall the definition of the cover graph: we need to show G(X, Y) is a layered DAG. Suppose not. Then there must exist a vertex z which (a) lies on a path from some ŝ ∈ X to some t ∈ Y, (b) also lies on a path from some s ∈ X to some t̂ ∈ Y, where both paths are of length i and are shortest paths between their endpoints, and (c) sits at different distances, a and b, from the two starting points. The situation is illustrated as follows:

 ŝ —a→ z —(i−a)→ t

and

 s —b→ z —(i−b)→ t̂,

where we assume wlog that a < b. Now, by Lemma 3.2 (rematching lemma), there exists a rearrangement σ of the endpoints such that every matched pair is at distance exactly i, and only ŝ and t̂ are unmatched. However, ŝ and t̂ are joined, through z, by a path of length a + (i − b) < i. Therefore, we can add (ŝ, t̂) to σ to obtain a matching whose average length is strictly smaller than that of M. Contradiction. Therefore (X, Y) must be i-good.

We now claim that (X_j, Y_j) and (X_k, Y_k) are independent for j ≠ k. Suppose not, and there is a shortest path from ŝ ∈ X_j to t ∈ Y_j which intersects a shortest path from s ∈ X_k to t̂ ∈ Y_k. Suppose z is the first vertex (nearest to the X's) at which they meet. Since each pair is good, the corresponding cover graphs are layered, and therefore these paths have to be shortest paths of length i. Let a = d(ŝ, z) and b = d(s, z). Two cases arise. (a) a = b: but then the corresponding sets S_j and S_k conflict. Contradiction. (b) a ≠ b: again apply the rematching Lemma 3.2 to get two rewired matchings σ_j and σ_k which leave ŝ, t, s and t̂ unmatched while all the other pairs are at distance i. Now add the pairs (ŝ, t̂) and (s, t) to the matching. Observe that this has a larger Ψ, since we replaced two pairs at distance i with two pairs at unequal distances summing to 2i. Contradiction again. In sum, all the (X_j, Y_j)'s are pairwise independent and good. ∎

### 3.1 Proof of the Rematching Lemma 3.2

Just for this proof, we drop the subscript i from M_i. This is purely for brevity's sake. Fig. 4 accompanies the inductive step. Let r be the smallest index such that ŝ and t̂ lie in the same set of the r-th level of the recursion. We prove the lemma by induction on r.

Base Case: r = 1. Since ŝ and t̂ lie in the same connected component of the base conflict graph G_0, there is a path in G_0 from the pair containing ŝ to the pair containing t̂. Suppose the length of the shortest such path is m, and denote its nodes by the pairs (s_0, t_0), …, (s_m, t_m), with ŝ = s_0 and t̂ = t_m. By definition, (s_k, t_k) conflicts with (s_{k+1}, t_{k+1}) for each k. Therefore, we can rewire the matching: σ maps s_{k+1} to t_k for all 0 ≤ k < m, and agrees with M on the rest of the pairs. By the definition of conflict, each of these new pairs is at distance exactly i (at most i via the common vertex, and at least i since M has minimum average distance), and ŝ = s_0 and t̂ = t_m are the ones left unmatched.

Inductive Step: Since r is the smallest value such that ŝ and t̂ lie in the same level-r set, we know there are disjoint level-(r−1) sets C and C′ with ŝ ∈ C and t̂ ∈ C′. Moreover, by construction of the level-r sets, we know there is a path from C to C′ in the conflict graph G_{r−1}. Let the shortest such path be C = C_0, C_1, …, C_m = C′. For any k in the range 0 ≤ k < m, the sets C_k and C_{k+1} conflict. Therefore, we know there exist a pair (s_k, t_k) in C_k, a pair (s_{k+1}, t_{k+1}) in C_{k+1}, and a common vertex z_{k+1} such that

 s_k ≺ z_{k+1} ≺ t_k and s_{k+1} ≺ z_{k+1} ≺ t_{k+1}.

Also, we have d(s_k, z_{k+1}) = d(s_{k+1}, z_{k+1}), and hence d(s_{k+1}, t_k) = i for all such k. By induction (applied within C_0, since ŝ and t_0 are both endpoints from C_0), we can rearrange to get σ_0 where ŝ and t_0 are the only unmatched endpoints from C_0. Next, for all k in the range 1 ≤ k < m, by induction, we can rearrange to get σ_k where s_k and t_k are the only unmatched endpoints from C_k. Finally, by induction, we can rearrange to get σ_m where s_m and t̂ are the only unmatched endpoints from C_m. Our matching is now σ = σ_0 ∪ σ_1 ∪ ⋯ ∪ σ_m, and the sets of unmatched endpoints are {ŝ, s_1, …, s_m} and {t_0, …, t_{m−1}, t̂}. By the existence of the z_{k+1}'s, we can set σ(s_{k+1}) = t_k for all k in the range 0 ≤ k < m. Moreover, d(s_{k+1}, t_k) = i for all such k. The only remaining unmatched endpoints are ŝ and t̂. This completes the proof of the rematching Lemma 3.2. ∎

## 4 Routing on the Augmented Hypergrid: Proof of Lemma 2.7

In this section we prove the generalization of the routing theorem of Lehman and Ron [LR01] for good pairs in the augmented hypergrid. This proof is akin to the proof in [LR01]. Suppose (X, Y) is an i-good consistent pair with |X| = |Y| = m. We show that there exist m vertex-disjoint paths from X to Y in the (X, Y)-cover graph G(X, Y). Since (X, Y) is i-consistent, there is a bijection σ : X → Y with d(x, σ(x)) = i for all x ∈ X (recall Definitions 2.4, 2.5 and 2.6). The proof is by induction on m and i. The base cases are trivial. If m = 1, then any path we choose from X to Y suffices. If i = 1, then σ immediately gives a matching of m edges in G(X, Y), and this gives us our vertex-disjoint paths. Since (X, Y) is good, G(X, Y) is a layered graph. Let L_j be its j-th layer for 0 ≤ j ≤ i. For a vertex v, let indeg(v) and outdeg(v) denote the in- and out-degree of v in G(X, Y).
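Lemma 2.7 asserts a Menger-type conclusion, and on concrete instances the number of vertex-disjoint paths can be computed by unit-capacity max flow with each vertex split into in/out copies (a standard reduction; this sketch is ours, not the paper's routing argument):

```python
from collections import deque

def vertex_disjoint_paths(edges, sources, sinks):
    """Maximum number of vertex-disjoint source-to-sink paths in a DAG,
    via unit-capacity max flow with each vertex v split into
    (v,'in') -> (v,'out')  (Menger's theorem)."""
    cap = {}
    def add(u, v):
        cap.setdefault(u, {})[v] = cap.get(u, {}).get(v, 0) + 1
        cap.setdefault(v, {}).setdefault(u, 0)   # residual back-edge
    nodes = set(sources) | set(sinks)
    for u, v in edges:
        nodes |= {u, v}
    for v in nodes:
        add((v, 'in'), (v, 'out'))               # vertex capacity 1
    for u, v in edges:
        add((u, 'out'), (v, 'in'))
    S, T = 'SRC', 'SNK'
    for s in sources:
        add(S, (s, 'in'))
    for t in sinks:
        add((t, 'out'), T)
    flow = 0
    while True:                                  # BFS augmenting paths
        prev = {S: None}
        queue = deque([S])
        while queue and T not in prev:
            u = queue.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in prev:
                    prev[v] = u
                    queue.append(v)
        if T not in prev:
            return flow
        v = T
        while prev[v] is not None:
            u = prev[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1
```

For instance, if two sources both route through a single middle vertex, only one vertex-disjoint path exists, which the flow computation certifies.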

###### Claim 4.1.

If v ∈ L_j is reachable from X in G(X, Y), then outdeg(v) ≥ 1 whenever j < i, and indeg(v) ≥ 1 whenever j > 0.

###### Proof.

For 0 ≤ j ≤ i, let V_j := L_j, and let E_j be the set of edges from L_j to L_{j+1}. For notational convenience, in the following proof we let V and E stand for the vertex and edge sets of G(X, Y). To establish the out-degree claim, we show that v ∈ V_j with j < i implies outdeg(v) ≥ 1. Suppose v ∈ V_j where j < i. Since v ∈ V, we know there are some x ∈ X and y ∈ Y such that v lies on a path of P(X, Y) from x to y; this path has length i and is a shortest path. Let w be the vertex following v on this path. Clearly, v ≺ w and d(x, w) = j + 1. This gives a path from x to y of length i passing through the edge (v, w). Finally, we cannot have w ∈ L_b for b > j + 1, since this would contradict the fact that G(X, Y) is an i-layered DAG. That is, this would imply there is an edge in E joining a vertex in some L_a to a vertex in L_b where b > a + 1. Thus, w lies on a shortest path of length i from X to Y, and so (v, w) ∈ E and outdeg(v) ≥ 1. Similarly, v ∈ V_j with j > 0 implies indeg(v) ≥ 1. The proof is analogous to the previous paragraph and so is omitted. ∎

We make use of Claim 4.1