Sparsifying, Shrinking and Splicing for Minimum Path Cover in Parameterized Linear Time

07/12/2021 · by Manuel Cáceres et al. · Montana State University, University of Verona, Helsingin yliopisto

A minimum path cover (MPC) of a directed acyclic graph (DAG) G = (V,E) is a minimum-size set of paths that together cover all the vertices of the DAG. Computing an MPC is a basic polynomial problem, dating back to Dilworth's and Fulkerson's results in the 1950s. Since the size k of an MPC (also known as the width) can be small in practical applications, research has also studied algorithms whose complexity is parameterized on k. We obtain two new MPC parameterized algorithms for DAGs running in time O(k^2|V|log|V| + |E|) and O(k^3|V| + |E|). We also obtain a parallel algorithm running in O(k^2|V| + |E|) parallel steps and using O(log|V|) processors (in the PRAM model). Our latter two algorithms are the first solving the problem in parameterized linear time. Finally, we present an algorithm running in time O(k^2|V|) for transforming any MPC to another MPC using less than 2|V| distinct edges, which we prove to be asymptotically tight. As such, we also obtain edge sparsification algorithms preserving the width of the DAG with the same running time as our MPC algorithms. At the core of all our algorithms we interleave the usage of three techniques: transitive sparsification, shrinking of a path cover, and the splicing of a set of paths along a given path.


1 Introduction

A Minimum Path Cover (MPC) of a (directed) graph is a minimum-sized set of paths such that every vertex appears in some path of the set. While computing an MPC is NP-hard in general, it is a classic result, dating back to Dilworth [15] and Fulkerson [19], that this can be done in polynomial time on directed acyclic graphs (DAGs). Computing an MPC of a DAG has applications in various fields. In bioinformatics, it allows efficient solutions to the problems of multi-assembly [16, 50, 46, 9, 35], perfect phylogeny haplotyping [5, 25], and alignment to pan-genomes [40, 37]. Other examples include scheduling [13, 14, 7, 51, 55, 41], computational logic [6, 20], distributed computing [49, 27], databases [28], evolutionary computation [29], program testing [43], cryptography [38], and programming languages [33]. Since in many of these applications the size k (number of paths, also known as the width) of an MPC is bounded, research has also focused on solutions whose complexity is parameterized by k. This approach is also related to the line of research "FPT inside P" [23] of finding natural parameterizations for problems already in P (see also e.g. [18, 32, 1]).

MPC algorithms can be divided into those based on a reduction to maximum matching [19], and those based on a reduction to minimum flow [43]. The former compute an MPC of a transitive DAG by finding a maximum matching in a bipartite graph with 2|V| vertices and |E| edges. Thus, one can compute an MPC of a transitive DAG in O(|E|√|V|) time with the Hopcroft-Karp algorithm [26]. Further developments of this idea include the algorithm of Felsner et al. [17] and the two algorithms of Chen and Chen [10, 11], whose running times are parameterized on the width k.

The reduction to minimum flow consists in building a flow network from G, where a global source s and a global sink t are added, and each vertex of G is split into an edge with a demand (lower bound) of one unit of flow (see Section 2 for details). A minimum-valued (integral) flow of this network corresponds to an MPC of G, which can be obtained by decomposing the flow into paths. This reduction (or a similar one) has been used several times in the literature to compute an MPC (or a similar object) [43, 42, 22, 28, 12, 45, 44, 41], and it is used in the recent O(k|E|log|V|)-time solution of Mäkinen et al. [40]. Furthermore, by noting that a path cover of size |V| is always valid (one path per vertex), the problem can be reduced to maximum flow with capacities at most |V| (see for example [3, Theorem 4.9.1]), and it can thus be solved by using maximum flow algorithms outputting integral solutions, such as the Goldberg-Rao algorithm [24], plus the additional time needed for decomposing the flow into an MPC. More recent maximum flow algorithms [34, 39, 36, 31, 52, 21] provide an abundance of trade-offs, though none of them leads to a parameterized linear-time solution for the MPC problem. Next, we describe our techniques and results.

Sparsification, shrinking and splicing.

Across our solutions we interleave three techniques.

Transitive sparsification consists in the removal of some transitive edges (transitive edges are edges whose removal does not disconnect their endpoints) while preserving the reachability relation among vertices, and thus the width of the DAG (every edge of an MPC removed by a transitive sparsification can be re-routed through an alternative path). We sparsify the edges down to O(k|V|), in O(k|V| + |E|) total time, thus obtaining a linear dependency on |E| in our running times. Our idea is inspired by the work of Jagadish [28], which proposed a compressed index for answering reachability queries in constant time: for each vertex v and path P of an MPC, it stores the last vertex of P that reaches v (thus using O(k|V|) space in total). However, three issues arise when trying to apply this idea inside an MPC algorithm: (i) it depends on an initial MPC (whereas we are trying to compute one), (ii) it can be computed in O(k|E|) time only [40], and (iii) the edges stored in the index are not necessarily edges of the DAG. We address (i) by using a suboptimal (but still bounded) path cover whose gradual computation is interleaved with transitive sparsifications, and we address (ii) and (iii) by keeping only at most k incoming edges per vertex in a single linear pass over the edges.
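For concreteness, the following minimal Python sketch (ours, not from the paper; the names build_reach_index, last2reach and adj_in are illustrative) shows such an MPC-based index: it is built in O(k|E| + k|V|) time over a topological order and answers reachability queries in constant time, assuming an MPC is already available.

```python
from typing import Dict, List

def build_reach_index(order: List[int], adj_in: Dict[int, List[int]], mpc: List[List[int]]):
    """Jagadish-style index: last2reach[v][i] is the position, on path i of the MPC,
    of the last vertex of that path which reaches v (-1 if no vertex of path i reaches v)."""
    k = len(mpc)
    pos: Dict[int, Dict[int, int]] = {}        # pos[v][i] = position of v on path i, if v lies on it
    for i, path in enumerate(mpc):
        for j, v in enumerate(path):
            pos.setdefault(v, {})[i] = j
    last2reach = {v: [-1] * k for v in order}
    for v in order:                            # topological order: in-neighbors are finished first
        for u in adj_in.get(v, []):
            for i in range(k):                 # O(k) work per edge -> O(k|E|) in total
                last2reach[v][i] = max(last2reach[v][i], last2reach[u][i])
        for i, j in pos.get(v, {}).items():    # v itself lies on path i at position j
            last2reach[v][i] = max(last2reach[v][i], j)
    return pos, last2reach

def reaches(u: int, v: int, pos, last2reach) -> bool:
    """Constant-time query: pick any path containing u and compare positions."""
    i, j = next(iter(pos[u].items()))
    return last2reach[v][i] >= j
```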

By shrinking we refer to the process of transforming an arbitrary path cover into an MPC. For example, using the flow network built from the given path cover, we can search for decrementing paths, until obtaining a minimum flow corresponding to an MPC. Given an approximation of an MPC, both algorithms of [17, 40] shrink this path cover in a separate step. In both of our algorithms, we do not use shrinking as a separate black-box, but instead interleave shrinking steps in the gradual computation of the MPC. Moreover, in the second algorithm we further guide the search for decrementing paths to amortize the search time to parameterized linear time.

Finally, by splicing we refer to the general process of reconnecting the paths of a path cover so that (after splicing) at least one of them contains a certain path as a subpath, while working in time proportional to the length of that path. In particular, we show how to perform splicing to apply the changes required by a decrementing path on a flow decomposition for obtaining an MPC (see Section 4.2.2), and also to reconnect paths for reducing the number of edges used by an MPC (see Section 5).

A simple divide-and-conquer approach.

As a first simple example of sparsification and shrinking interleaved inside an MPC algorithm, in Section 3 we show how these two techniques enable the first divide-and-conquer MPC algorithm.

Theorem 1.

Given a DAG G = (V,E) of width k, we can compute an MPC of G in time O(k^2|V|log|V| + |E|).

Theorem 1 works by splitting a topological ordering of the vertices in half, and recursing on each half. When combining the MPCs from the two halves, we need to (i) account for the new edges between the two parts (here we exploit sparsification), and (ii) efficiently combine the two partial path covers into one for the entire graph (and here we use shrinking). Since this divides the problem into disjoint subgraphs, we also obtain the first linear-time parameterized parallel algorithm.

Theorem 2.

Given a DAG G = (V,E) of width k, we can compute an MPC of G in O(k^2|V| + |E|) parallel steps, using O(log|V|) single processors in the PRAM model [54].

The first linear-time parameterized algorithm.

Our second algorithm works on top of the minimum flow reduction, but instead of running a minimum flow algorithm and then extracting the corresponding paths (as previous approaches do [43, 42, 28, 12, 45, 44, 41, 40]), it processes the vertices in topological order, and incrementally maintains an MPC (i.e., a flow decomposition) of the corresponding induced subgraph. When a new vertex is processed, the current MPC is used to sparsify the edges incoming to it to at most k (see Section 4). After that, the path cover is shrunk by searching for a single decrementing path in the corresponding residual graph. The search is guided by assigning an integer level to each vertex. We amortize the time of performing all the searches to O(k^3) per vertex, thus obtaining the final running time.

Theorem 3.

Given a DAG G = (V,E) of width k, we can compute an MPC of G in time O(k^3|V| + |E|).

The amortization is achieved by guiding the search through the assignment of integer levels to the vertices, which allows us to perform the traversal in a layered manner, from the vertices of largest level to the vertices of smallest level (see Section 4.2.1). If a decrementing path is found, the flow decomposition is updated by splicing its paths along the decrementing path (see Section 4.2.2).

An antichain is a set of pairwise non-reachable vertices, and it is a well-known result, due to Dilworth [15], that the maximum size of an antichain equals the size of an MPC. Our level assignment defines a series of size-decreasing one-way cuts (Lemma 8). Moreover, by noting that these cuts in the network correspond to antichains (see e.g. [44]), the levels implicitly maintain a structure of antichains that sweep the graph during the algorithm. The high-level idea of maintaining a collection of antichains has been used previously by Felsner et al. [17] and Cáceres et al. [8] for the related problem of computing a maximum antichain. However, apart from being restricted to this related problem, these two approaches have intrinsic limitations. More precisely, Felsner et al. [17] maintain a tower of right-most antichains for transitive DAGs of small constant width, mentioning that the next case "already seems to require an unpleasantly involved case analysis" [17, p. 359]. Cáceres et al. [8] overcome this by maintaining many frontier antichains, obtaining a parameterized linear-time maximum antichain algorithm whose dependency on k is exponential.

Based on the relation between maximum one-way cuts in the minimum flow reduction and maximum antichains in the original DAG (see for example [42, 44, 41]), we obtain algorithms computing a maximum antichain from any of our existing algorithms, preserving their running times (see Lemma 2). In particular, by using our second algorithm we obtain an exponential improvement on the function of k of the algorithm of Cáceres et al. [8].

Edge sparsification in parameterized linear time.

Our last result, in Section 5, is a structural result concerning the problem of edge sparsification preserving the width of the DAG. Edge sparsification is a general concept that consists in finding spanning subgraphs (usually with significantly fewer edges) while (approximately) preserving a certain property of the graph. For example, spanners are distance-preserving (up to multiplicative factors) sparsifiers, and it is a well-known result that cut sparsifiers can be computed efficiently [4]. We show that if the property we want to maintain is the (exact) width of a DAG, then its edges can be sparsified to less than 2|V|. Moreover, we show that such a sparsification is asymptotically tight (Remark 1), and that it can be computed in O(k^2|V|) time if an MPC is given as additional input. Therefore, by using our second algorithm we obtain the following result.

Corollary 1.

Given a DAG G = (V,E) of width k, we can compute a spanning subgraph G' = (V, E') of G with |E'| < 2|V| and width(G') = k, in time O(k^3|V| + |E|).

The main ingredient to obtain this result is an algorithm for transforming any path cover into one of the same size using less than 2|V| distinct edges, a surprising structural result.

Theorem 4.

Let G = (V,E) be a DAG, and let P be a path cover of G. Then, we can compute, in O(|P|^2|V|) time, a path cover Q of G with |Q| = |P|, whose number of distinct edges is less than 2|V|.

We obtain Corollary 1 by using Theorem 4 with an MPC and defining E' as the distinct edges used by the resulting path cover. Our approach adapts the techniques used by Schrijver [47] for finding a perfect matching in a regular bipartite graph. In our algorithm, we repeatedly search for undirected cycles of edges joining vertices of high degree (in the graph induced by the path cover), and splice paths along such a cycle (according to the multiplicity of its edges) to remove edges from the path cover.

Paper structure.

Section 2 presents basic concepts, the main preliminary results needed to understand the technical content of this paper, and results related to the three common techniques used in later sections (the full version of this section is included in Appendix B for completeness). Sections 3 and 4 present our O(k^2|V|log|V| + |E|)- and O(k^3|V| + |E|)-time algorithms for MPC, respectively (in Appendix C we show that our second algorithm implicitly maintains a structure of antichains). Section 5 presents the algorithm of Theorem 4. Omitted proofs can be found in the Appendices.

2 Preliminaries

Basics.

We denote by N^+(v) (respectively, N^-(v)) the set of out-neighbors (in-neighbors) of v, and by E^+(v) (E^-(v)) the edges outgoing from (incoming to) v. A graph G' = (V', E') is said to be a subgraph of G = (V, E) if V' ⊆ V and E' ⊆ E; if V' = V, it is called a spanning subgraph. If V' ⊆ V, then G[V'] is the subgraph of G induced by V', defined as G[V'] = (V', E ∩ (V' × V')). A directed acyclic graph (DAG) is a directed graph without proper cycles. A topological ordering of a DAG is a total order v_1, ..., v_{|V|} of V such that, for every edge (v_i, v_j) ∈ E, it holds that i < j. A topological ordering can be computed in O(|V| + |E|) time [30, 48]. If there exists a path from u to v, then it is said that u reaches v. The multiplicity of an edge e with respect to a set of paths P, denoted μ_P(e) (or just μ(e) if P is clear from the context), is defined as the number of paths of P that contain e. The width of a graph G, width(G), is the size of an MPC of G. We will work with subgraphs induced by a consecutive subsequence of vertices in a topological ordering. The following lemma shows that we can bound the width of these subgraphs by the width of G.

Lemma 1 ([8]).

Let G = (V, E) be a DAG, and let v_1, ..., v_{|V|} be a topological ordering of its vertices. Then, for all 1 ≤ i ≤ j ≤ |V|, width(G[{v_i, ..., v_j}]) ≤ width(G).
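As a quick, code-level restatement of the basic definitions (a minimal sketch of ours; vertex labels and container choices are arbitrary), the following computes a topological ordering with Kahn's algorithm and the multiplicity μ_P(e) of the edges used by a set of paths.

```python
from collections import deque
from typing import Dict, List, Tuple

def topological_order(n: int, edges: List[Tuple[int, int]]) -> List[int]:
    """Kahn's algorithm: O(|V| + |E|) topological ordering of a DAG with vertices 0..n-1."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in range(n) if indeg[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order                      # contains all vertices, since the graph is acyclic

def multiplicity(paths: List[List[int]]) -> Dict[Tuple[int, int], int]:
    """mu_P(e): the number of paths of P containing each edge e."""
    mu: Dict[Tuple[int, int], int] = {}
    for path in paths:
        for e in zip(path, path[1:]):
            mu[e] = mu.get(e, 0) + 1
    return mu
```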

Minimum Flow.

Given a (directed) graph G = (V, E), a source s ∈ V, a sink t ∈ V, and a function d: E → N of lower bounds or demands on its edges, an st-flow (or just flow, when s and t are clear from the context) is a function f: E → N on the edges satisfying f(e) ≥ d(e) for all e ∈ E (f satisfies the demands) and, for every v ∈ V \ {s, t}, the total flow on the edges entering v equals the total flow on the edges leaving v (flow conservation). If a flow exists, the tuple (G, s, t, d) is said to be a flow network. The size of f is the net amount of flow exiting s, formally |f| = Σ_{e ∈ E^+(s)} f(e) − Σ_{e ∈ E^-(s)} f(e). An st-cut (or just cut, when s and t are clear from the context) is a partition (S, T) of V such that s ∈ S and t ∈ T. An edge (u, v) crosses the cut if u ∈ S and v ∈ T, or vice versa. If there are no edges crossing the cut from T to S, then (S, T) is a one-way cut (ow-cut). The demand of an ow-cut is the sum of the demands of the edges crossing the cut, formally d(S, T) = Σ_{(u,v) ∈ E, u ∈ S, v ∈ T} d(u, v). An ow-cut whose demand is maximum among the demands of all ow-cuts is a maximum ow-cut.

Given a flow network (G, s, t, d), the problem of minimum flow consists of finding a flow of minimum size among the flows of the network; such a flow is called a minimum flow. If a minimum flow exists, then (G, s, t, d) is a feasible flow network. It is a known result [2, 12, 3] that the demand of a maximum ow-cut equals the size of a minimum flow.

Given a flow f in a feasible flow network (G, s, t, d), the residual network of (G, s, t, d) with respect to f is defined as R(f) = (V, E_R), where E_R contains the reverse edges of E, plus the edges of E on which the flow can be decreased without violating the demands (direct edges), that is, E_R = {(v, u) : (u, v) ∈ E} ∪ {(u, v) ∈ E : f(u, v) > d(u, v)}. Note that a path from s to t in R(f) can be used to create another flow of smaller size by increasing the flow on reverse edges and decreasing the flow on direct edges of the path; such a path is called a decrementing path. A flow is a minimum flow if and only if there is no decrementing path in its residual network (see Section B.2). A flow decomposition of f is a set of paths P such that μ_P(e) = f(e) for all e ∈ E; in this case it is said that f is the flow induced by P. If P is a flow decomposition of f, then the residual network of (G, s, t, d) with respect to f is also denoted R(P).
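To make these definitions concrete, here is a minimal sketch (ours; the edge-list/dictionary representation of the flow network is an assumption made for illustration) that computes the flow induced by a decomposition, builds the residual network R(f), and searches it for a decrementing path with a plain BFS.

```python
from collections import deque
from typing import Dict, List, Optional, Tuple

Edge = Tuple[int, int]

def induced_flow(edges: List[Edge], paths: List[List[int]]) -> Dict[Edge, int]:
    """Flow induced by a flow decomposition P: f(e) = mu_P(e)."""
    flow = {e: 0 for e in edges}
    for path in paths:
        for e in zip(path, path[1:]):
            flow[e] += 1
    return flow

def residual_network(edges: List[Edge], demand: Dict[Edge, int],
                     flow: Dict[Edge, int]) -> Dict[int, List[int]]:
    """R(f): the reverse of every edge, plus the direct edges whose flow exceeds their demand."""
    res: Dict[int, List[int]] = {}
    for (u, v) in edges:
        res.setdefault(v, []).append(u)            # reverse edge, always present
        if flow[(u, v)] > demand.get((u, v), 0):
            res.setdefault(u, []).append(v)        # direct edge with slack
    return res

def decrementing_path(res: Dict[int, List[int]], s: int, t: int) -> Optional[List[int]]:
    """BFS from s to t in the residual; None means the flow is already minimum."""
    parent = {s: s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            path = [t]
            while path[-1] != s:
                path.append(parent[path[-1]])
            return path[::-1]
        for v in res.get(u, []):
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None
```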

MPC in DAGs through Minimum Flow.

The problem of finding an MPC of a DAG G = (V, E) can be solved by a reduction to the problem of minimum flow on an appropriate feasible flow network (G' = (V', E'), s, t, d) [43], defined as follows: every vertex v ∈ V is split into the edge (v_in, v_out); a global source s and a global sink t are added, with an edge from s to every v_in and from every v_out to t; every edge (u, v) ∈ E becomes the edge (u_out, v_in); and d(e) = 1 if e = (v_in, v_out) for some v ∈ V, and d(e) = 0 otherwise. The tuple (G', s, t, d) is the flow reduction of G. Note that |V'| = 2|V| + 2, |E'| = 3|V| + |E|, and G' is a DAG. Every flow f of G' can be decomposed into |f| paths corresponding to a path cover of G (by removing s and t and contracting each edge (v_in, v_out) back into v, see Section B.3). A minimum flow of G' has size width(G), thus providing an MPC of G after decomposing it (see Section B.3). Moreover, the set of edges of the form (v_in, v_out) crossing a maximum ow-cut corresponds to a maximum antichain of G (again by contracting the edges (v_in, v_out) back into v, see [42, 45, 44, 41]). By further noting that, if f is a minimum flow of G' and S is the set of vertices reachable from s in R(f) (with T = V' \ S), then (S, T) corresponds to a maximum ow-cut, we obtain the following result.

Lemma 2.

Given a DAG G = (V, E) of width k and an MPC of G, we can compute a maximum antichain of G in O(k|V| + |E|) time.

As such, this allows us to obtain algorithms computing a maximum antichain from any of our MPC algorithms, preserving their running times.
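A minimal sketch (ours) of the flow reduction follows; the encoding of v_in and v_out as the integers 2v and 2v+1, and of s and t as 2n and 2n+1, is an arbitrary choice made for illustration.

```python
from typing import Dict, List, Tuple

Edge = Tuple[int, int]

def flow_reduction(n: int, edges: List[Edge]) -> Tuple[int, int, List[Edge], Dict[Edge, int]]:
    """Flow reduction of a DAG with vertices 0..n-1.
    Vertex v is split into v_in = 2v and v_out = 2v + 1; s = 2n and t = 2n + 1.
    Only the split edges (v_in, v_out) carry a demand of 1."""
    s, t = 2 * n, 2 * n + 1
    red_edges: List[Edge] = []
    demand: Dict[Edge, int] = {}
    for v in range(n):
        red_edges.append((s, 2 * v))            # s -> v_in
        red_edges.append((2 * v + 1, t))        # v_out -> t
        red_edges.append((2 * v, 2 * v + 1))    # v_in -> v_out, demand 1
        demand[(2 * v, 2 * v + 1)] = 1
    for (u, v) in edges:
        red_edges.append((2 * u + 1, 2 * v))    # u_out -> v_in
    return s, t, red_edges, demand
```

A path cover of size t induces a flow of size t on this network (one unit per path), and decomposing a minimum flow, as in the residual-network sketch above, yields an MPC.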

Sparsification, shrinking, splicing.

We say that a spanning subgraph G' = (V, E') of a DAG G = (V, E) is a transitive sparsification of G if, for every u, v ∈ V, u reaches v in G if and only if u reaches v in G'. Since G and G' have the same reachability relation on their vertices, they share their antichains, thus width(G) = width(G'). As such, an MPC of G' is also an MPC of G, and the edges E \ E' can be safely removed for the purpose of computing an MPC of G. If we have a path cover of G of size t, then we can sparsify (remove some transitive edges among) the incoming edges of a particular vertex v to at most t. If v has more than t in-neighbors, then two of them belong to the same path, and we can remove the edge from the in-neighbor appearing first in that path. We create an array A of t entries, initialized with a sentinel value that precedes every vertex in topological order. Then, we process the edges (u, v) incoming to v: we set i = path(u) (path(u) gives the ID of some path of the cover containing u) and, if A[i] is before u in topological order, we replace it by u. Finally, the edges in the sparsification are the edges (A[i], v) for the non-sentinel entries A[i].

Observation 1.

Let G = (V, E) be a DAG, P a path cover of G with |P| = t, v a vertex of G, and path(·) a function that answers in constant time, for a vertex u, the ID of some path of P containing u. Then we can sparsify the incoming edges of v to at most t, in O(t + |N^-(v)|) time.
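A minimal sketch of this procedure (ours; path_id and topo_pos stand for the assumed path-ID function and the topological positions, both precomputed):

```python
from typing import Dict, List, Tuple

def sparsify_incoming(v: int, in_neighbors: List[int], t: int,
                      path_id: Dict[int, int], topo_pos: Dict[int, int]) -> List[Tuple[int, int]]:
    """Keep at most t incoming edges of v: for each path ID, only the edge from the
    in-neighbor of v appearing last (in topological order) on that path is kept."""
    best = [None] * t                       # best[i] = latest in-neighbor of v on path i seen so far
    for u in in_neighbors:
        i = path_id[u]
        if best[i] is None or topo_pos[best[i]] < topo_pos[u]:
            best[i] = u
    return [(u, v) for u in best if u is not None]
```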

By first computing a path(·) function, and then applying Observation 1 to every vertex, we obtain the following.

Lemma 3.

Let G = (V, E) be a DAG, and let P, |P| = t, be a path cover of G. Then, we can sparsify E to E_s ⊆ E, such that P is a path cover of G_s = (V, E_s) and |E_s| ≤ t|V|, in O(t|V| + |E|) time.

The following lemma shows that we can locally sparsify a subgraph and apply these changes to the original graph to obtain a transitive sparsification.

Lemma 4.

Let G = (V, E) be a graph, G' = (V', E') a subgraph of G, and G'' = (V', E'') a transitive sparsification of G'. Then (V, (E \ E') ∪ E'') is a transitive sparsification of G.

As explained before, shrinking is the process of transforming an arbitrary path cover into an MPC, and it can be done by finding decrementing paths in the residual network of the flow induced by the path cover, and then decomposing the resulting minimum flow into an MPC. Mäkinen et al. [40] apply this idea to shrink a path cover of size O(k log|V|). We generalize this approach in the following lemma.

Lemma 5.

Given a DAG G = (V, E) of width k, and a path cover P, |P| = t, of G, we can obtain an MPC of G in O(t|V| + (t − k)|E|) time.
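Combining the sketches above, a (deliberately unoptimized) shrinking routine could look as follows; it reuses the hypothetical flow_reduction, induced_flow, residual_network and decrementing_path helpers from the earlier sketches, and it only illustrates the flow-based mechanics, not the amortized bookkeeping behind Lemma 5.

```python
from typing import Dict, List, Tuple

Edge = Tuple[int, int]

def shrink(n: int, dag_edges: List[Edge], cover: List[List[int]]) -> List[List[int]]:
    """Turn a path cover of a DAG (vertices 0..n-1) into an MPC: build the flow
    reduction, cancel decrementing paths, and decompose the resulting minimum flow."""
    s, t, red_edges, demand = flow_reduction(n, dag_edges)
    edge_set = set(red_edges)
    # Each cover path v1..vm induces one unit of flow on s -> v1_in -> v1_out -> ... -> t.
    red_paths = [[s] + [x for v in p for x in (2 * v, 2 * v + 1)] + [t] for p in cover]
    flow = induced_flow(red_edges, red_paths)
    while True:
        res = residual_network(red_edges, demand, flow)
        dec = decrementing_path(res, s, t)
        if dec is None:
            break                                   # no decrementing path: the flow is minimum
        for u, v in zip(dec, dec[1:]):
            if (u, v) in edge_set:
                flow[(u, v)] -= 1                   # direct edge: remove one unit of flow
            else:
                flow[(v, u)] += 1                   # reverse edge: push one unit back on (v, u)
    # Decompose the minimum flow into paths of the original DAG.
    out: Dict[int, List[int]] = {x: [] for e in red_edges for x in e}
    for (u, v), f in flow.items():
        out[u].extend([v] * f)
    mpc = []
    while out[s]:
        u, walk = s, []
        while u != t:
            u = out[u].pop()
            walk.append(u)
        mpc.append([x // 2 for x in walk[:-1:2]])   # keep one vertex per (v_in, v_out) pair
    return mpc
```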

As said before, splicing consists in reconnecting the paths of a path cover so that (after reconnecting) at least one of the paths contains a certain path S as a subpath, in time proportional to the length of S. Splicing additionally requires that every edge of S is contained in at least one path of the path cover.

Lemma 6.

Let G = (V, E) be a DAG, S a proper path of G, and P a path cover of G such that, for every edge e of S, there exists a path of P containing e. Then, we can obtain a path cover Q of G such that |Q| = |P| and some path of Q contains S as a subpath, in time proportional to the length of S. Moreover, μ_Q(e) = μ_P(e) for all e ∈ E.

Because of the last property of Lemma 6, the flow induced by Q is the same as the flow induced by P. As such, if P is a flow decomposition of a flow f, then Q is also a flow decomposition of f.
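A simplified sketch of splicing (ours): it performs suffix swaps at shared vertices and preserves edge multiplicities, but it naively re-indexes the swapped suffixes, so, unlike Lemma 6, it does not run in time proportional to the length of the given path.

```python
from typing import Dict, List, Tuple

Edge = Tuple[int, int]

def splice(paths: List[List[int]], target: List[int]) -> List[List[int]]:
    """Reconnect the paths of a cover so that one of them contains `target` as a
    subpath. Every edge of `target` must already appear in some path; the multiset
    of edges used by the cover (and hence the induced flow) is preserved."""
    paths = [list(p) for p in paths]
    loc: Dict[Edge, Tuple[int, int]] = {}          # edge -> (path index, position of its tail)

    def register(i: int, start: int = 0) -> None:
        for j in range(start, len(paths[i]) - 1):
            loc[(paths[i][j], paths[i][j + 1])] = (i, j)

    for i in range(len(paths)):
        register(i)
    cur, at = loc[(target[0], target[1])]          # path holding the prefix of `target` built so far
    for a, b in zip(target[1:], target[2:]):       # extend that prefix edge by edge
        other, j = loc[(a, b)]
        if other != cur:                           # suffix swap at the shared vertex a
            assert paths[cur][at + 1] == a and paths[other][j] == a
            paths[cur][at + 2:], paths[other][j + 1:] = paths[other][j + 1:], paths[cur][at + 2:]
            register(cur, at + 1)                  # re-index the two moved suffixes
            register(other, j)
        at += 1
    return paths
```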

3 Divide and Conquer Algorithm

(a) Input graph
(b) Result of recursion
(c) Result of sparsification
(d) Result of shrinking
Figure 1: Main steps of the divide-and-conquer algorithm applied to a DAG G. Figure 1(a) shows the input graph, a maximum antichain, and the division into the two halves. Figure 1(b) shows the resulting graph after applying the algorithm recursively to each half, the corresponding sparsifications, and the two partial path covers. Figure 1(c) shows the result of the sparsification algorithm run with the union of the two path covers. Figure 1(d) shows the result after shrinking.

Theorem 1 (restated). Given a DAG G = (V,E) of width k, we can compute an MPC of G in time O(k^2|V|log|V| + |E|).

Proof.

Before starting the recursion, compute a topological ordering v_1, ..., v_{|V|} of the vertices in O(|V| + |E|) time. Solve recursively in the subgraph induced by the first half of the ordering, obtaining an MPC P_1 of a transitive sparsification G_1 of it, and in the subgraph induced by the second half, obtaining an MPC P_2 of a transitive sparsification G_2 of it. By Lemma 1, |P_1| ≤ k and |P_2| ≤ k. Applying Lemma 4 with G_1 and G_2, we obtain that replacing the two halves by their sparsifications yields a transitive sparsification G_S of G, whose edges are those of G_1 and G_2 plus the edges of G going from the first half to the second half. We consider the path cover P_1 ∪ P_2 of G_S (of size at most 2k) and use Lemma 3 to obtain a sparsification of G_S with at most 2k|V| edges. Finally, we shrink P_1 ∪ P_2 in this sparsified graph to an MPC of size width(G_S) = width(G) in O(k^2|V|) time (Lemma 5).

The complexity analysis considers the recursion tree of the algorithm. Note that every vertex of the corresponding subgraph costs O(k^2) in a recursion step, and every edge going from the left subgraph to the right subgraph costs O(1). Since the division of the graph generates disjoint subgraphs, every vertex appears in O(log|V|) nodes of the recursion tree, and every edge going from left to right appears in exactly one node of the recursion tree. Therefore, the total cost is O(k^2|V|log|V| + |E|). Figure 1 illustrates the algorithm. ∎
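The recursion can be summarized by the following skeleton (ours); sparsify_cover and shrink_cover are hypothetical helpers standing in for Lemmas 3 and 5 (they could be realized with the sparsification and shrinking sketches of Section 2 after relabeling vertices), and the naive edge filtering is for clarity only, so this sketch does not by itself match the bound of Theorem 1.

```python
from typing import List, Set, Tuple

Edge = Tuple[int, int]

def mpc_divide_and_conquer(order: List[int], edges: Set[Edge]):
    """Divide-and-conquer skeleton: returns (sparsified edge set, MPC) of the subgraph
    induced by `order`, a contiguous slice of a topological ordering of the whole DAG.
    `sparsify_cover` / `shrink_cover` are assumed implementations of Lemmas 3 and 5."""
    if len(order) <= 1:
        return set(), [[v] for v in order]
    mid = len(order) // 2
    left, right = set(order[:mid]), set(order[mid:])
    e_left, cover_left = mpc_divide_and_conquer(order[:mid],
                                                {e for e in edges if e[0] in left and e[1] in left})
    e_right, cover_right = mpc_divide_and_conquer(order[mid:],
                                                  {e for e in edges if e[0] in right and e[1] in right})
    crossing = {e for e in edges if e[0] in left and e[1] in right}   # only left -> right edges exist
    combined = e_left | e_right | crossing      # a transitive sparsification of the subgraph (Lemma 4)
    cover = cover_left + cover_right            # path cover of size at most 2k (Lemma 1)
    sparse = sparsify_cover(order, combined, cover)      # Lemma 3 (hypothetical helper)
    return sparse, shrink_cover(order, sparse, cover)    # Lemma 5 (hypothetical helper)
```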

Since our algorithm is based on divide and conquer, we can parallelize the work done on every sub-part of the input, and obtain a linear-time parallel algorithm for the MPC problem.

Theorem 2 (restated). Given a DAG G = (V,E) of width k, we can compute an MPC of G in O(k^2|V| + |E|) parallel steps, using O(log|V|) single processors in the PRAM model [54].

4 Progressive Flows Algorithm

In this section we prove Theorem 3. To achieve this result we rely on the reduction from MPC in a DAG to minimum flow (see Section 2). We process the vertices of G one by one in a topological ordering v_1, ..., v_{|V|}. At each step i, we maintain a set P_i of st-flow paths that corresponds to a flow decomposition of a minimum flow of the flow reduction of G[{v_1, ..., v_i}], that is, an MPC of G[{v_1, ..., v_i}]. When the next vertex v_{i+1} is considered, we first use P_i to sparsify its incoming edges to at most k, in O(k + |N^-(v_{i+1})|) time (see Observation 1 and Lemma 1). Then, we set P = P_i ∪ {((v_{i+1})_in, (v_{i+1})_out)}, where ((v_{i+1})_in, (v_{i+1})_out) is the edge representing v_{i+1} in the flow reduction (for convenience, we represent st-flow paths either as a sequence of vertices or as a sequence of edges, excluding the extremes s and t). P represents a path cover of G[{v_1, ..., v_{i+1}}], and we use it to try to find a decrementing path in R(P). If such a decrementing path is found, some flow paths along it are spliced to generate P_{i+1}, with |P_{i+1}| = |P| − 1 (see Section 4.2.2). Otherwise, if no decrementing path is found, we set P_{i+1} = P.

We guide the traversal for a decrementing path by assigning an integer level ℓ(v) to each vertex v in R(P). The search is performed in a layered manner: it starts from the highest reachable layer (the reachable vertices of highest level according to ℓ), and it only continues to the next highest reachable layer once all reachable vertices of the current layer have been visited (see Section 4.2.1). To allow the layered traversal and to achieve O(k^3) amortized time per vertex, we maintain three invariants in the algorithm (see Section 4.1) and update the level assignment accordingly (see Section 4.2.3).
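The step structure can be summarized as follows (a sketch of ours; it reuses sparsify_incoming and shrink from the sketches of Section 2, rebuilds the flow reduction of the prefix at every step, and replaces the layered, level-guided search of Section 4.2.1 with a plain BFS, so it illustrates only the incremental structure, not the O(k^3|V| + |E|) bound).

```python
from typing import List, Tuple

Edge = Tuple[int, int]

def mpc_progressive(n: int, edges: List[Edge]) -> List[List[int]]:
    """Progressive flows, unoptimized sketch. Vertices are assumed to be numbered
    0..n-1 in topological order; `edges` are the edges of the DAG."""
    adj_in = [[] for _ in range(n)]
    for u, v in edges:
        adj_in[v].append(u)
    topo = {u: u for u in range(n)}          # vertex labels coincide with topological positions
    cover: List[List[int]] = []              # MPC of the processed prefix
    sparse: List[Edge] = []                  # sparsified edges of the processed prefix
    for v in range(n):
        if cover:                            # Observation 1: keep at most |cover| incoming edges of v
            path_id = {u: i for i, p in enumerate(cover) for u in p}
            sparse += sparsify_incoming(v, adj_in[v], len(cover), path_id, topo)
        cover.append([v])                    # tentative new path containing only v
        cover = shrink(v + 1, sparse, cover) # cancels at most one decrementing path per step
    return cover
```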

4.1 Levels, layers and invariants

We now define the level assignment ℓ given to the vertices of G, and the invariants maintained on ℓ. A layer is a maximal set of vertices with the same level; thus layer j is the set of vertices v with ℓ(v) = j, and all layers form a partition of the processed vertices. We extend the level assignment to paths: the level ℓ(P) of a path P is the maximum level of a vertex of P. We denote by P_{i, ≥ j} (or just P_{≥ j}) the set of flow paths of P_i whose level is at least j. Note that P_{≥ j'} ⊆ P_{≥ j} whenever j ≤ j'.

At the beginning we fix the level of the first processed vertex to 1. We also maintain that ℓ(v) ≥ 1 for every processed vertex v. Additionally, we maintain the following invariants:

Invariant A

If (u, v) is an edge in R(P_i) and u, v ∉ {s, t}, then ℓ(u) ≥ ℓ(v).

Invariant B

If (u, v) is the last edge of some P ∈ P_i, then ℓ(v) = ℓ(P).

Invariant C

If h < j are positive integers, with j at most the maximum level in use, then |P_{≥ h}| > |P_{≥ j}|.

Note that, since we do not include s and t in the representation of flow paths, ℓ(P) is well defined and at least 1 for every P ∈ P_i; moreover, by Invariant B, ℓ(P) is attained at the last vertex of P, thus P_{≥ j} is determined by the levels of the last vertices of the flow paths. Also note that Invariant C implies that no layer (up to the maximum level in use) is empty.

4.2 Progressive flows algorithm

Our algorithm starts by using P_i to obtain at most k edges incoming to v_{i+1}, in O(k + |N^-(v_{i+1})|) time (see Observation 1). This procedure requires answering path(u) queries (the ID of some path of P_i containing u) in constant time. To satisfy this requirement, we maintain path IDs on every vertex/edge of every flow path of P_i. In each iteration of our algorithm, these path IDs can be broken by the splicing algorithm (Section 4.2.2) but are repaired before the beginning of the next iteration (Section 4.2.3). The following lemma states that the sparsification of incoming edges in G produces a sparsification of outgoing edges in the residual.

Lemma 7.

For every vertex x of R(P) with x ∉ {s, t}, the number of edges outgoing from x in R(P) is O(k).

Proof.

If x is of the form v_in, then its only possible direct edge is (v_in, v_out) (present only if v appears in more than one path of P), and its reverse edges are of the form (v_in, u_out), such that (u, v) is an edge of the sparsified graph; thus there are at most k of them because of sparsification (recall that the width of the processed prefix is at most k, by Lemma 1). On the other hand, if x is of the form v_out, then its only reverse edge is (v_out, v_in). To bound the number of direct edges, consider the ow-cut separating s and the vertices processed up to v from the rest of the network. The flow induced by P crossing this cut cannot be more than |P|, and thus the number of direct edges is at most |P| = O(k). ∎

4.2.1 Layered traversal

Our layered traversal performs a BFS in each reachable layer, from the highest to the lowest. If t is reached, the search stops and the algorithm proceeds to splice the flow paths along the decrementing path found. Since P_i represents a minimum flow of the flow reduction of the previous prefix, every decrementing path in R(P) starts with the edge (s, (v_{i+1})_in) and ends with an edge of the form (u_out, t) such that some flow path of P ends at u. Moreover, since the direct edge ((v_{i+1})_in, (v_{i+1})_out) does not exist in R(P) (its flow equals its demand), the second edge of a decrementing path must be a reverse edge of the form ((v_{i+1})_in, u_out), such that u is an in-neighbor of v_{i+1} in the sparsified graph.

We work with queues (one per layer), where Q_j contains the enqueued vertices of layer j; Q_j is initialized with the out-neighbors of (v_{i+1})_in in R(P) that belong to layer j. By Lemma 7, this initialization takes O(k) time, and it is charged to v_{i+1}. We start working with the highest layer. When working with layer j, we obtain the first element u of the queue Q_j (if no such element exists we move to layer j − 1 and work with Q_{j−1}); then we visit u and, for each non-visited out-neighbor w of u in R(P), we add w to Q_{ℓ(w)}. Adding the out-neighbors of u to the corresponding queues is charged to u, which amounts to O(k) by Lemma 7. Since edges in the residual do not increase the level (Invariant A), out-neighbors can only be added to queues of an equal or lower layer. As such, the traversal advances in a layered manner, and it finds a decrementing path if one exists.
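A minimal sketch of the layered traversal (ours; res is the residual as an adjacency dictionary, level maps vertices to their levels, start plays the role of (v_{i+1})_in, and parent pointers are kept so that the decrementing path can be recovered).

```python
from collections import deque
from typing import Dict, List, Optional

def layered_search(res: Dict[int, List[int]], level: Dict[int, int],
                   start: int, t: int) -> Optional[List[int]]:
    """Visit reachable vertices one layer at a time, from the highest level downwards.
    Returns a decrementing path (vertex list from `start` to t) if t is reached."""
    top = max(level.values(), default=0)
    queues = {j: deque() for j in range(1, top + 1)}    # one queue per layer
    parent = {start: None}

    def enqueue(w, u):
        if w in level and w not in parent:              # s and t carry no level; t is handled explicitly
            parent[w] = u
            queues[level[w]].append(w)

    for w in res.get(start, []):                        # initialization, charged to `start`
        if w == t:
            return [start, t]
        enqueue(w, start)
    for j in range(top, 0, -1):                         # layers from highest to lowest
        while queues[j]:
            u = queues[j].popleft()
            for w in res.get(u, []):
                if w == t:                              # decrementing path found
                    path = [t, u]
                    while path[-1] != start:
                        path.append(parent[path[-1]])
                    return path[::-1]
                enqueue(w, u)                           # level[w] <= j by Invariant A
    return None
```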

Note that the running time of the layered traversal can be bounded by O(k) per visited vertex. If no decrementing path is found, we update the levels of the vertices as explained in Section 4.2.3. Otherwise, we first splice flow paths along the decrementing path (Section 4.2.2).

4.2.2 Splicing algorithm

Given a decrementing path D in R(P), we splice flow paths along D to obtain P_{i+1}. Reverse edges of D indicate that we should push one unit of flow in the opposite direction, thus an edge representing this flow unit should be created. On the other hand, direct edges of D indicate that we should subtract one unit of flow from that edge, in other words, that this edge should be removed from some flow path containing it. As explained in Section 4.2.1, D starts with the direct edge (s, (v_{i+1})_in), followed by a reverse edge ((v_{i+1})_in, u_out) such that (u, v_{i+1}) is an edge of the sparsified graph. It then continues with a (possibly empty) sequence of reverse and direct edges, and it finishes with a direct edge (w_out, t), such that some flow path of P ends at w.

A direct (reverse) segment is a maximal subpath of direct (reverse) edges of D. The splicing algorithm processes direct and reverse segments interleaved, as they appear in D. It starts by processing the first reverse segment (the one starting with ((v_{i+1})_in, u_out)). The procedure that processes reverse segments receives as input the suffix of a flow path (the first call receives the new path consisting of v_{i+1} alone). It creates the corresponding flow subpath (the reverse of the segment), appends it to the path it received as input, and provides the resulting path as input to the procedure handling the next direct segment. The procedure that handles direct segments also receives as input the suffix of a flow path. It splices the paths of the flow decomposition along the direct segment using the procedure of Lemma 6, obtaining a new flow decomposition such that one of its paths contains the segment as a subpath. It then removes the segment from that path, reconnects the prefix of the path before the segment with the path given as input, and provides the suffix of the path after the segment as input to the procedure handling the next reverse segment.

Note that both procedures run in time proportional to the length of the corresponding segment (see Lemma 6 for direct segments). As such, the splicing algorithm takes time proportional to the length of the decrementing path. Moreover, since all vertices of the decrementing path are also vertices visited by the traversal, this running time is bounded above by the running time of the layered traversal, that is, O(k) per visited vertex.

(a) Before splicing
(b) After splicing
Figure 2: Effect of the splicing along a decrementing path D. We only show a few vertices, four flow paths in blue, green, brown and purple (with some overlap), and two red vertices where splicing of flow paths occurs (splicing points). Figure 2(a) shows the four flow paths before splicing. Path D is highlighted in dashed red (direct segments) and solid red (reverse segment). Figure 2(b) shows that splicing along D transforms the four flow paths into three. The reverse segment creates a subpath (black) of one of these. The direct segments remove subpaths of previous flow paths. The splicing points now join subpaths of the previous brown and blue, and purple and brown, paths respectively.

Figure 2 illustrates the effect of the splicing algorithm on flow paths.

4.2.3 Level and path updates

After obtaining P_{i+1}, we update the levels of some vertices to maintain the invariants (Section 4.1) of the level assignment ℓ. Moreover, to sparsify (Section 4.2) in the next iteration, we also repair the path IDs on the vertices/edges of P_{i+1} that could be in an inconsistent state after running the splicing algorithm.

If the smallest layer visited during the traversal is layer l, then we set ℓ(v_{i+1}) = l (to maintain Invariant B, see Section 4.4), and change the level of every vertex visited during the traversal to l (to maintain Invariant A, see Section 4.4).

If a decrementing path was found (and the splicing algorithm was executed), we first repair the path IDs by traversing every flow path of P_{i+1} backwards from its last vertex, until we arrive at a vertex of level less than l, from which we obtain the corresponding path ID, which we then propagate by going back (forwards) along the flow path. After that, the following observations hold.

Observation 2.

Let U be the singleton set containing the last vertex of the decrementing path found by the layered traversal in R(P), or the empty set if no decrementing path was found. Then, the last vertices of the flow paths of P_{i+1} are exactly the last vertices of the flow paths of P, minus U.

Proof.

If no decrementing path was found, the observation easily follows. On the other hand, if a decrementing path is found, the observation follows from the fact that the only edge of the form (u_out, t) in the decrementing path is its last edge, whose endpoint is precisely the vertex in U. ∎

Observation 3.

If l is the smallest level visited by the layered traversal in R(P), then |P_{i+1, ≥ j}| = |P_{≥ j}| for every j ≤ l, and |P_{i+1, ≥ l+1}| ≤ |P_{≥ l+1}|.

Therefore, this is the only way Invariant C can be broken by the algorithm. As such, after the level and path ID updates, we check whether |P_{i+1, ≥ l}| = |P_{i+1, ≥ l+1}|, and in that case we decrease the level of every vertex v with ℓ(v) ≥ l + 1 by one. If this happens, we say that we merge layer l + 1.

The running time of all these updates is bounded by O(k) per vertex of level l or more, which dominates the running time of a step of the algorithm (except for the initial sparsification).

(a) Layered traversal
(b) Level updates
(c) Merge of layer
Figure 3: Execution of our second algorithm on an abstract example graph. Edges and flow paths are omitted for simplicity. Layers are separated by dotted vertical strokes. Figure 3(a) shows a decrementing path (red) found by the layered traversal, as well as all visited vertices (red and orange); the smallest visited layer is the leftmost one containing visited vertices. Figure 3(b) shows the updates to the level assignment: all vertices visited by the traversal, as well as the newly processed vertex, get the smallest visited level. Figure 3(c) shows the result of merging a layer: all vertices of larger level decrease their level by one.

Figure 3 illustrates the evolution of the level assignment in a step of the algorithm.

4.3 Running time

Note that the running time of step i is bounded by O(k + |N^-(v_{i+1})|) (from sparsification) plus O(k) per vertex whose level is l or more, where l is the smallest level visited by the layered traversal in step i. The first part adds up to O(k|V| + |E|) over the entire algorithm, whereas for the second part we show that every vertex is charged only O(k^2) times in the entire algorithm, thus adding up to O(k^3|V|) in total. Every time a vertex is charged O(k), the minimum level visited in that step must be at most the level of that vertex. Consider the sequence and its evolution until its final state