We study the problem of interdicting a directed graph by deleting nodes with the goal of minimizing the local edge connectivity of the remaining graph from a given source to a sink. We show hardness of obtaining strictly unicriterion approximations for this basic vertex interdiction problem. We also introduce and study a general downgrading variant of the interdiction problem where the capacity of an arc is a function of the subset of its endpoints that are downgraded, and the goal is to minimize the downgraded capacity of a minimum source-sink cut subject to a node downgrading budget. This models, for example, the case when both ends of an arc must be downgraded to remove it. For this generalization, we provide a bicriteria (4,4)-approximation that downgrades nodes with total weight at most 4 times the budget and provides a solution where the downgraded connectivity from the source to the sink is at most 4 times that in an optimal solution. We accomplish this with an LP relaxation and round using a ball-growing algorithm based on the LP values. We further generalize the downgrading problem to one where each vertex can be downgraded to one of k levels, and the arc capacities are functions of the pairs of levels to which its ends are downgraded. We generalize our LP rounding to get a (4k,4k)-approximation for this case.


## 1 Introduction

Interdiction problems arise in evaluating the robustness of infrastructure and networks. For an optimization problem on a graph, the interdiction problem can be formulated as a game consisting of two players: an attacker and a defender. Every edge/vertex of the graph has an associated interdiction cost and the attacker interdicts the network by modifying the edges/vertices subject to a budget constraint. The defender solves the problem on the modified graph. The goal of the attacker is to hamper the defender as much as possible. Ford and Fulkerson initiated the study of interdiction problems with the maximum flow/minimum cut theorem [7, 17, 24]. Other examples of interdiction objectives include matchings, minimum spanning trees [20, 30], shortest paths [13, 18], s-t flows [23, 26, 28] and global minimum cuts [29, 6].

Most of the interdiction literature today involves the interdiction of edges, while the study of interdicting vertices has received less attention (e.g. [27, 28]). The various applications for these interdiction problems, including drug interdiction, hospital infection control, and protecting electrical grids or other military installations against terrorist attacks, all naturally motivate the study of the vertex interdiction variant. In this paper, we focus on vertex interdiction problems related to the minimum s-t cut (which is equal to the maximum s-t flow and hence is also termed network flow interdiction or network interdiction in the literature).

For s-t cut vertex interdiction problems, the setup is as follows. Consider a directed graph G = (V, A) with n vertices and m arcs, an arc cost function c defined on A, and an interdiction cost function r defined on the set of vertices V. A set of arcs F ⊆ A is an s-t cut if G ∖ F no longer contains a directed path from s to t. Define the cost of F as c(F) = Σ_{e∈F} c(e). For any subset of vertices Y ⊆ V, we denote its interdiction cost by r(Y) = Σ_{v∈Y} r(v). Let mincut(G) denote the cost of a minimum s-t cut in the graph G.

###### Problem 1.

Weighted Network Vertex Interdiction Problem (WNVIP) and its special cases. Given two specific vertices s (source) and t (sink) in G and an interdiction budget b, the Weighted Network Vertex Interdiction Problem (WNVIP) asks to find an interdicting vertex set Y ⊆ V ∖ {s, t} such that r(Y) ≤ b and mincut(G ∖ Y) is minimum. The special case of WNVIP where all the interdiction costs are unit will be termed NVIP, while the further special case when even the arc costs are unit will be termed NVIP with unit costs.

$$c^Y(e) = \begin{cases} c_e & u, v \notin Y \\ c_e^u & u \in Y,\ v \notin Y \\ c_e^v & u \notin Y,\ v \in Y \\ c_e^{uv} & u, v \in Y \end{cases}$$

Given a set of arcs F, we define c^Y(F) = Σ_{e∈F} c^Y(e).
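To make the case analysis above concrete, the downgraded cost function c^Y can be sketched as follows. The dict-based cost encoding (keys `"none"`, `"u"`, `"v"`, `"uv"`) is our own illustration, not the paper's notation.

```python
# Sketch of the downgraded cost function c^Y. Each arc uv carries its four
# costs as a dict with keys "none" (no end downgraded), "u" (tail only),
# "v" (head only), and "uv" (both ends downgraded).
def downgraded_cost(arc, costs, Y):
    """Cost of cutting arc (u, v) when the vertex set Y has been downgraded."""
    u, v = arc
    if u in Y and v in Y:
        return costs[arc]["uv"]
    if u in Y:
        return costs[arc]["u"]
    if v in Y:
        return costs[arc]["v"]
    return costs[arc]["none"]

def downgraded_cut_cost(F, costs, Y):
    """c^Y(F): total downgraded cost of an arc set F."""
    return sum(downgraded_cost(e, costs, Y) for e in F)
```

For instance, with costs 4/2/3/0 for the four cases, downgrading only the tail yields cost 2 and downgrading both ends yields 0, matching the case analysis above.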

###### Problem 2.

Network Vertex Downgrading Problem (NVDP). Let G be a directed graph with a source s and a sink t. For every arc e = uv, we are given non-negative costs c_e, c_e^u, c_e^v, c_e^{uv} as defined above. Given a (downgrading) budget b, find a set Y ⊆ V ∖ {s, t} and an s-t cut F such that r(Y) ≤ b and which minimizes c^Y(F).

While it is not as immediately obvious as it is for WNVIP, we can still show that detecting a zero-cost solution for NVDP is easy. The proof of the following theorem is in Appendix 0.B.

###### Theorem 1.1.

Given an instance of NVDP on graph G with budget b, there exists a polynomial time algorithm to determine if there exist Y ⊆ V ∖ {s, t} and an s-t cut F such that r(Y) ≤ b and c^Y(F) = 0.

First we present some useful reductions between the above problems.

1. In the NFI (Network Flow Interdiction) problem defined in , the given graph is undirected instead of directed and the adversary interdicts edges instead of vertices. The goal is to minimize the cost of the minimum s-t cut after interdiction. NFI can be reduced to the undirected version of WNVIP (where the underlying graph is undirected): simply subdivide every undirected edge uv with a vertex x_uv. The interdiction cost of x_uv is the interdiction cost of the edge uv, while all original vertices have an interdiction cost of infinity (or a very large number). The cut costs of the edges ux_uv and x_uv v are equal to the original cost of cutting the edge uv.

2. The undirected version of WNVIP can be reduced to the (directed) WNVIP by replacing every edge with two parallel arcs going in opposite directions. Each new arc has the same cut cost as the original edge.

3. WNVIP is a special case of NVDP with costs c_e^u = c_e^v = c_e^{uv} = 0 for all arcs e = uv, since interdicting either endpoint of an arc removes it entirely.

The first two observations above imply that any hardness result for NFI in  also applies to WNVIP. Based on the second observation, we prove our hardness results for the (more specific) undirected version of WNVIP. As a consequence of the third observation, all of these hardness results also carry over to the more general NVDP.

###### Problem 3.

Network Vertex Leveling Downgrading Problem (NVLDP). Let G be a directed graph with a source s and a sink t. For every vertex v and level i ∈ {0, 1, …, k}, we have a non-negative downgrading cost r_i(v). For every arc uv and pair of levels i, j, we are given a non-negative cut cost c_{ij}(uv). Given a (downgrading) budget b, find a map ℓ assigning a level to each vertex and an s-t cut F such that Σ_{v∈V} r_{ℓ(v)}(v) ≤ b and which minimizes Σ_{uv∈F} c_{ℓ(u)ℓ(v)}(uv).

Note that when k = 1 we recover NVDP.

### Related Work.

###### Definition 1.

An (α, β) bicriteria approximation for the interdiction (or downgrading) problem returns a solution that violates the interdiction budget by a factor of at most α and provides a final cut (in the interdicted graph) with cost at most β times the optimal cost of a minimum cut in a solution of interdiction budget at most b.

Chestnut and Zenklusen  study the network flow interdiction problem (NFI), which is the undirected, edge-interdiction version of WNVIP. NFI is also known to be essentially equivalent to the Budgeted minimum cut problem . NFI is also a recasting of the k-route s-t cut problem [8, 16], where a minimum cost set of edges must be deleted to reduce the node or edge connectivity between s and t below k. The results of Chestnut and Zenklusen, and Chuzhoy et al.  show that an approximation for WNVIP implies a related approximation for the notorious Densest k-Subgraph (DkS) problem. The results of Chuzhoy et al.  (Theorem 1.9 and Appendix section B) also imply such a hardness for NVIP even with unit edge costs. Furthermore, Chuzhoy et al.  also show that there is no bicriteria approximation for WNVIP with both factors close to 1, assuming Feige's Random k-AND Hypothesis. Under this hypothesis, they show hardness of approximation for WNVIP.

Chestnut and Zenklusen give a 2(n−1)-approximation algorithm for NFI for any graph with n vertices. In the special case where the graph is planar, Phillips  gave an FPTAS and Zenklusen  extended it to handle the vertex interdiction case.

Burch et al.  give a pseudo-approximation algorithm for NFI. Given any ε > 0, this algorithm returns either a (1 + ε)-approximation, or a solution violating the budget by a factor of (1 + 1/ε) but with a cut no more expensive than the optimal cost. However, we do not know a priori which case occurs. In this line of work, Chestnut and Zenklusen  have extended the technique of Burch et al. to derive pseudo-approximation algorithms for a larger class of NFI problems that have good LP descriptions (such as duals that are box-TDI). Chuzhoy et al.  provide an alternate proof of this result by subdividing edges with nodes of appropriate costs.

### Our Contributions.

1. We define and initiate the study of multi-level node downgrading problems by defining the Network Vertex Leveling Downgrading Problem (NVLDP) and provide the first results for it. This problem extends the prior study of the vertex interdiction problem in order to consider a richer set of interdiction functions.

2. Chuzhoy et al. , and Chestnut and Zenklusen  showed that an approximation for NFI leads to a corresponding approximation for the Densest k-Subgraph problem, implying a similar conclusion for WNVIP. The results of Chuzhoy et al.  also imply such a hardness for NVIP even with unit costs. Using similar techniques but more directly, we show that an approximation for WNVIP also leads to a comparable approximation for DkS. These results show that designing unicriterion approximations for WNVIP is at least “DkS-hard”. Along with the hardness of approximation for WNVIP due to Chuzhoy et al.  mentioned above, these suggest that we focus on bicriteria approximation results.

We sharpen our hardness result further and show that it is also “DkS-hard” to obtain a bicriteria approximation for NVIP and for NVIP with unit costs. Note that this is in sharp contrast to the edge interdiction case: NFI with unit interdiction cost and unit cut cost can be solved by first finding a minimum cut and then interdicting edges in that cut . (Details are in Appendix 0.D.)

3. Burch et al.  gave a polynomial time algorithm that, for any ε > 0, finds either a (1 + ε)-approximation or a budget-violating solution with optimal cut cost for WNVIP in digraphs. This was reproved more directly by Chuzhoy et al.  by converting both interdiction and arc costs into costs on nodes. We show that this strategy can also be extended to give a simple bicriteria approximation for the multiway cut generalization in directed graphs and for the multicut vertex interdiction problem in undirected graphs, for any ε > 0. (Details are in Appendix 0.E.)

4. For the downgrading variant NVDP, we show that the problem of detecting whether there exists a downgrading set that gives a zero cost cut can be solved in polynomial time. We then design a new LP rounding approximation algorithm that provides a (4, 4)-approximation to NVDP. We use a carefully constructed auxiliary graph so that the level-cut algorithm based on ball growing for showing integrality of s-t cut LPs in digraphs (see, e.g., ) can be adapted to choose nodes to downgrade and arcs to cut based on the LP solution. (Section 2, Appendix 0.B)

5. For the most general version NVLDP, with k levels to which each vertex can be downgraded and cut costs that depend on the pair of levels of an arc's endpoints, we generalize the LP rounding method for NVDP to give a (4k, 4k)-approximation (Section 0.C).

## 2 Network Vertex Downgrading Problem (NVDP)

As an introduction and motivation to the LP model and techniques used to solve NVLDP, in this section we focus on the special case NVDP, where each vertex can be downgraded to only one other level. Our main goal is to show the following theorem.

###### Theorem 2.1.

There exists a polynomial time algorithm that provides a (4, 4)-approximation to NVDP on an n-node digraph.

#### LP Model for NVDP.

To formulate NVDP as an LP, we begin with the following standard formulation for minimum s-t cuts.

$$\begin{array}{llll} \min & \sum_{e \in A(G)} c(e)\, x_e & & \\ \text{s.t.} & d_v \le d_u + x_{uv} & \forall\, uv \in A(G) & (1) \\ & d_s = 0,\ d_t \ge 1 & & \\ & x_{uv} \ge 0 & \forall\, uv \in A(G) & (2) \end{array}$$

An integer solution for an s-t cut can be interpreted as setting d_v to 0 for nodes on the s shore and to 1 for nodes on the t shore of the cut. Constraint (1) then insists that the x-value for arcs crossing the cut be set to 1. The potential d_v at node v can also be interpreted as a distance label starting from s and using the nonnegative x values as distances on the arcs. Any optimal solution to the above LP can be rounded to an optimal integer solution of no greater value by using the x-values on the arcs as lengths, growing a ball around s, and cutting it at a random threshold between 0 and the distance to t (which is 1 in this case). The expected cost of the random cut can be shown to be the LP value (see e.g., ), and the minimum such ball can be found efficiently using Dijkstra's algorithm. Our goal in this section is to generalize this formulation and ball-growing method to NVDP.
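The ball-growing rounding just described can be sketched as follows, assuming the LP has already been solved and its x-values are given. The function names and data layout are illustrative; the graph must have t reachable from s.

```python
import heapq

def dijkstra(adj, s):
    # adj: {u: [(v, length), ...]} with nonnegative lengths (the LP x-values)
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def ball_growing_cut(arcs, cost, x, s, t):
    """Round LP values x into an s-t cut: grow a ball around s using x as
    arc lengths, then return the cheapest threshold (level) cut."""
    adj = {}
    for (u, v) in arcs:
        adj.setdefault(u, []).append((v, x[(u, v)]))
    dist = dijkstra(adj, s)
    # Candidate thresholds: the distinct distance labels strictly below dist[t].
    levels = sorted(set(d for d in dist.values() if d < dist[t]))
    best_cut, best_cost = None, float("inf")
    for theta in levels:
        cut = [(u, v) for (u, v) in arcs
               if dist.get(u, float("inf")) <= theta < dist.get(v, float("inf"))]
        c = sum(cost[e] for e in cut)
        if c < best_cost:
            best_cut, best_cost = cut, c
    return best_cut, best_cost
```

Trying every distinct distance label plays the role of the random threshold in the analysis: the cheapest level cut costs no more than the expectation, hence no more than the LP value.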

One difficulty in NVDP comes from the fact that every arc has four associated costs and we need to write an objective function that correctly captures the final cost of a chosen cut. One way to overcome this issue is to have a distinct arc associated with each cost. In other words, for every original arc e = uv, we create four new arcs with costs c_e, c_e^u, c_e^{uv}, c_e^v respectively. Then every arc has its unique cost and it is now easier to characterize the final cost of a cut. We consider the following auxiliary graph. See Figure 1.

#### Constructing Auxiliary Graph H

Let H = (V(H), A(H)) where V(H) = V_0 ∪ V_1, V_0 = {(vv) : v ∈ V(G)} and V_1 = {(uv)_i : uv ∈ A(G), 1 ≤ i ≤ 3}. Essentially, the vertices (vv) ∈ V_0 correspond to the original vertices and for every arc uv ∈ A(G), we replace it with a path (uu)(uv)_1(uv)_2(uv)_3(vv) where the four arcs on the path are [uv]_0, [uv]_1, [uv]_2, [uv]_3. For convenience and consistency in notation, we define (uv)_0 = (uu) and (uv)_4 = (vv). Note that the vertices of H will always be denoted as two lowercase letters in parentheses while arcs in H will be two lowercase letters in square brackets with a subscript. The cost function is as follows: c([uv]_0) = c_e, c([uv]_1) = c_e^u, c([uv]_2) = c_e^{uv} and c([uv]_3) = c_e^v. Since we can only downgrade vertices in V_0, to simplify the notation, we retain r(v) as the cost to downgrade vertex (vv). Note that |V(H)| = n + 3m.
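A minimal sketch of this construction follows, assuming the path ordering of the four costs (c_e, then c_e^u, then c_e^{uv}, then c_e^v) consistent with Constraint (5); the tuple-based vertex naming is our own.

```python
def build_auxiliary_graph(arcs, costs):
    """Build H: each original arc uv becomes the path
    (uu) -[c_e]-> (uv)_1 -[c_e^u]-> (uv)_2 -[c_e^uv]-> (uv)_3 -[c_e^v]-> (vv).
    Returns (V0, arcs_H) where V0 holds the copies of original vertices and
    arcs_H maps each arc of H to its cost."""
    V0 = set()
    arcs_H = {}
    for (u, v) in arcs:
        V0.add(("node", u))
        V0.add(("node", v))
        path = [("node", u)] + [("mid", u, v, i) for i in (1, 2, 3)] + [("node", v)]
        keys = ["none", "u", "uv", "v"]   # cost key of the i-th arc on the path
        for i in range(4):
            arcs_H[(path[i], path[i + 1])] = costs[(u, v)][keys[i]]
    return V0, arcs_H
```

Each original arc thus contributes exactly four arcs to H, one per possible downgrading state of its endpoints.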

Given the auxiliary graph H, we can now construct an LP similar to the one for s-t cuts. For vertices (vv) ∈ V_0(H) corresponding to original vertices of G, we define a downgrading variable y_v representing whether vertex v is downgraded or not in G. For every arc [uv]_i ∈ A(H), we have a cut variable x_{[uv]_i} to indicate if the arc belongs in the final cut of the graph. Lastly, for all vertices p ∈ V(H), we have a potential variable d_p representing its distance from the source (ss).

The idea is to construct an LP that forces (ss) and (tt) to be at least distance 1 apart as before. This distance can only be contributed by the arc variables x. The downgrading variables impose limits on how large the distances of some of their incident arcs can be. The motivation is that the larger y_u and y_v are, the more we should allow arc [uv]_2 to appear in the final cut over the other arcs in order to incur the cheaper cost c_e^{uv}. Now consider the following downgrading LP, henceforth called DLP.

$$\begin{array}{llll} \min & \sum_{[uv]_i \in A(H)} c([uv]_i)\, x_{[uv]_i} & & \\ \text{s.t.} & \sum_{(vv) \in V_0(H)} r(v)\, y_v \le b & & (3) \\ & d_{(uv)_{i+1}} \le d_{(uv)_i} + x_{[uv]_i} & \forall\, \text{arc } [uv]_i,\ 0 \le i \le 3 & (4) \\ & x_{[uv]_2} + x_{[uv]_3} + x_{[vw]_1} + x_{[vw]_2} \le y_v & \forall\, \text{path } (uv)_3 (vv) (vw)_1 & (5) \\ & d_{(ss)} = 0,\ d_{(tt)} = 1,\ y_s = 0,\ y_t = 0 & & \end{array}$$

Figure 1 also includes the list of variables associated with . Our objective is to minimize the cost of the final cut. Constraint (3) corresponds to the budget constraint for the downgrading variables. Constraint (4) corresponds to the triangle inequality for each arc in , analogous to the triangle inequality Constraint (1) in the LP for min-cuts.

Constraint (5) relates cut and downgrade variables. If we did not impose any constraints involving the downgrading variables, the LP would always choose the cheapest arc among [uv]_0, …, [uv]_3 when cutting somewhere between (uu) and (vv). However, the cut should not be allowed to go through [uv]_2 if one of u, v is not downgraded. In other words, x_{[uv]_2} should be at most min(y_u, y_v). This reasoning gives the constraint x_{[uv]_2} ≤ y_v, and similarly the cut variables of in-arcs [uv]_3 and out-arcs [vw]_1 need to be at most y_v. Now consider an arc uv. In an integral solution, if v is downgraded, the arc incurs a cost of either c_e^v or c_e^{uv} but not both, since v must lie on one side of the cut. This translates to an LP solution where only one of the arcs [uv]_2, [uv]_3 is in the final cut, so the sum of the variables corresponding to them is at most y_v. Thus a better constraint to impose is x_{[uv]_2} + x_{[uv]_3} ≤ y_v. To push it further, consider a path (uv)_3 (vv) (vw)_1 in H. In an integral solution, at most one of the four arcs [uv]_2, [uv]_3, [vw]_1, [vw]_2 appears in the final cut. This implies that if v is downgraded, only one of the four corresponding costs is incurred, which corresponds to the tighter Constraint (5). Note that for every vertex v and every pair of an incoming and an outgoing arc of v, we add one such constraint, so a vertex may contribute up to indeg(v)·outdeg(v) constraints. In total, the number of constraints is still only O(m²). The last few constraints in DLP make sure (ss) and (tt) are distance 1 apart and that s and t cannot themselves be downgraded.
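The enumeration of Constraint (5), one inequality per (in-arc, out-arc) pair at each vertex, can be sketched as follows; the string variable names are purely illustrative.

```python
def downgrade_constraints(arcs):
    """Enumerate Constraint (5): for every vertex v and every pair of an
    in-arc uv and an out-arc vw, the four cut variables that charge a
    downgrade of v must together stay at most y_v.
    Returns (v, [cut-variable names]) pairs."""
    in_arcs, out_arcs = {}, {}
    for (u, v) in arcs:
        in_arcs.setdefault(v, []).append((u, v))
        out_arcs.setdefault(u, []).append((u, v))
    cons = []
    for v in set(in_arcs) | set(out_arcs):
        for (u, _) in in_arcs.get(v, []):
            for (_, w) in out_arcs.get(v, []):
                cons.append((v, [f"x[{u}{v}]_2", f"x[{u}{v}]_3",
                                 f"x[{v}{w}]_1", f"x[{v}{w}]_2"]))
    return cons
```

Summing indeg(v)·outdeg(v) over all vertices confirms the O(m²) bound on the number of such constraints.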

The validity of this LP model is shown by proving that a solution of NVDP corresponds to a feasible solution of DLP and an integral solution of DLP can be translated to a feasible solution of NVDP with the same or even better cost. These proofs are included in Appendix 0.B.

#### Bicriteria Approximation for NVDP.

We now prove Theorem 2.1. We will work with an optimal solution (x*, y*, d*) of DLP on the auxiliary graph H. The idea is to use a ball-growing algorithm that greedily finds cuts until one with the promised guarantee is produced. The reason this algorithm succeeds is proved by analyzing a randomized algorithm that picks a number α uniformly at random and chooses a cut at distance α from the source. We then choose vertices to downgrade and arcs to cut based on the arcs in this cut at distance α. By computing the expected downgrading cost and the expected cost of the cut arcs, the analysis shows the existence of a level cut that satisfies our approximation guarantee.

To achieve the desired result, we cannot work with the graph H directly. This is because the ball-growing analysis works if the probability of cutting some arc can be bounded within some range. Such a bound exists for the final cut arcs but not for the final downgraded vertices. Consider a vertex v; it is downgraded if any arc of the form [uv]_2, [uv]_3, [vw]_1 or [vw]_2 is cut. Thus it has the potential of being downgraded for thresholds anywhere in a range we cannot directly control. We would like to use Constraint (5) to bound this range but we cannot, since we do not know how long the arc [uv]_0 is. Thus we first contract some arcs in order to properly use Constraint (5).

Let (x*, y*, d*) be an optimal solution to DLP with optimal cost c*. It follows from the validity of our model that c* is at most the cost of an optimal integral solution.

#### Constructing Graph H′

For every arc uv ∈ A(G), we compare the values x*_{[uv]_0} and x*_{[uv]_1} + x*_{[uv]_2} + x*_{[uv]_3}. The reason we separate this way is that the variables in the second term are influenced by the downgrading values on u and v. Thus the more we downgrade u and v, the larger we are allowed to make the second sum, and the more distance we can place between (uu) and (vv). For an arc uv, if x*_{[uv]_1} + x*_{[uv]_2} + x*_{[uv]_3} ≥ x*_{[uv]_0}, we say uv is an aided arc since most of its distance is contributed by the downgrading values on its endpoints, and thus the downgrading values help to reduce its cost. For all other arcs, we say uv is an unaided arc since most of the distance is contributed by the arc [uv]_0, corresponding to simply paying the original cost of deletion without help from downgrading. To construct H', if uv is an aided arc, then contract [uv]_0. Otherwise, contract [uv]_1, [uv]_2 and [uv]_3.

Consider a path (uu)(uv)_1(uv)_2(uv)_3(vv) in H. Note that the length of this path is shortened in H' depending on whether uv is an aided or unaided arc. However, since we always retain the larger of x*_{[uv]_0} and x*_{[uv]_1} + x*_{[uv]_2} + x*_{[uv]_3} in H', the path's length is reduced by at most one half. It then follows that the distance between any two vertices in H' is reduced by at most one half. In particular, the shortest path distance between the source and the sink in H' is at least 1/2. This property will be crucial in arguing that the solution chosen by our algorithm has low cost relative to the LP optimum.
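The aided/unaided classification and the resulting contraction can be sketched as follows, assuming each original arc's four LP values are given as a tuple (x0, x1, x2, x3) for [uv]_0 through [uv]_3.

```python
def contract_for_Hprime(x):
    """For each original arc uv with LP lengths x0 (for [uv]_0) and
    x1, x2, x3 (for [uv]_1..[uv]_3), contract the shorter side:
    drop x0 if the arc is aided (x1 + x2 + x3 >= x0), otherwise drop
    x1, x2, x3. The retained length is always at least half the
    original path length, which preserves at least half of every
    distance in H."""
    retained = {}
    for e, (x0, x1, x2, x3) in x.items():
        if x1 + x2 + x3 >= x0:           # aided arc: contract [uv]_0
            retained[e] = ("aided", x1 + x2 + x3)
        else:                            # unaided arc: contract the other three
            retained[e] = ("unaided", x0)
    return retained
```

Since the retained value is the maximum of the two sides, it is at least half of x0 + x1 + x2 + x3, which is exactly the halving argument used above.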

Algorithm 1 follows the general ball-growing technique and examines cuts at various distances from the source. Note that the algorithm adds at least one vertex to a node set at each iteration, so it runs for at most O(n + m) steps when applied to the graph H', where m denotes the number of arcs in the original graph G.

At each iteration, it associates with the current cut a set F_α of original arcs corresponding to those in the cut, and a vertex subset Y_α representing the set of vertices we should downgrade based on the arcs in the cut. For example, if [uv]_2 is in the cut, then we should downgrade both u and v. Note that since the chosen arc set is a cut in H', it follows that F_α is also an s-t cut in G.

First, we show that there exists a cut at some distance from the source such that the associated sets provides the approximation guarantee.

###### Lemma 1.

There exists α such that r(Y_α) ≤ 4b and c^{Y_α}(F_α) ≤ 4c*.

The main idea of the proof is to pick a distance α uniformly at random and study the cut at that distance. We then argue that the extent to which an arc is cut in the random cut is at most twice its x-value, using the properties of the model. For this, we crucially use the fact that the random cut distance is chosen between zero and at least one half, since even after the arc contractions, the distance of the sink from the source is still at least one half. When nodes are chosen in the random cut to be downgraded, we use Constraint (5) in the LP along with the properties of the algorithm to argue similarly that the probability of downgrading a node in the process is at most twice its y-value. To obtain a cut where we simultaneously do not exceed both bounds, we use Markov's inequality to show a probability of at least half of being within twice each respective expectation, hence giving us a single cut with both bounds within four times their respective LP values. Details of the proof of the lemma are in Appendix 0.A.

Lastly, in order to prove the validity of Algorithm 1, we need to show that it eventually finds the correct cut at distance α.

###### Lemma 2.

Let α be the smallest value such that the associated cut at distance α provides the promised approximation guarantee. Then Algorithm 1 can find it by checking all cuts at distance at most α.

The proof is included in Appendix 0.A. Theorem 2.1 is then proved by simply running Algorithm 1 on the auxiliary graph.

## References

•  A. Agarwal, N. Alon, and M. S. Charikar. Improved approximation for directed cut problems. In Proceedings of the thirty-ninth annual ACM symposium on Theory of computing, pages 671–680. ACM, 2007.
•  A. Bhaskara, M. Charikar, E. Chlamtac, U. Feige, and A. Vijayaraghavan. Detecting high log-densities: an O(n^{1/4}) approximation for densest k-subgraph. In Proceedings of the forty-second ACM symposium on Theory of computing, pages 201–210. ACM, 2010.
•  C. Burch, R. Carr, S. Krumke, M. Marathe, C. Phillips, and E. Sundberg. A decomposition-based pseudoapproximation algorithm for network flow inhibition. In D. L. Woodruff, editor, Network Interdiction and Stochastic Integer Programming, volume 26, pages 51–68. Springer, 2003.
•  G. Călinescu, H. Karloff, and Y. Rabani. An improved approximation algorithm for multiway cut. Journal of Computer and System Sciences, 60(3):564–574, 2000.
•  C. Chekuri and V. Madan. Simple and fast rounding algorithms for directed and node-weighted multiway cut. In Proceedings of the twenty-seventh annual ACM-SIAM symposium on Discrete algorithms, pages 797–807. SIAM, 2016.
•  S. R. Chestnut and R. Zenklusen. Interdicting structured combinatorial optimization problems with {0, 1}-objectives. Mathematics of Operations Research, 42(1):144–166, 2016.
•  S. R. Chestnut and R. Zenklusen. Hardness and approximation for network flow interdiction. Networks, 69(4):378–387, 2017.
•  J. Chuzhoy, Y. Makarychev, A. Vijayaraghavan, and Y. Zhou. Approximation algorithms and hardness of the k-route cut problem. ACM Transactions on Algorithms (TALG), 12(1):2, 2016.
•  J. Chuzhoy. Flows, cuts and integral routing in graphs - an approximation algorithmist’s perspective. In Proc. of the International Congress of Mathematicians, pages 585–607, 2014.
•  U. Feige. Relations between average case complexity and approximation complexity. In Proceedings of the thiry-fourth annual ACM symposium on Theory of computing, pages 534–543. ACM, 2002.
•  N. Garg, V. V. Vazirani, and M. Yannakakis. Approximate max-flow min-(multi) cut theorems and their applications. SIAM Journal on Computing, 25(2):235–251, 1996.
•  N. Garg, V. V. Vazirani, and M. Yannakakis. Multiway cuts in node weighted graphs. Journal of Algorithms, 50(1):49–61, 2004.
•  B. Golden. A problem in network interdiction. Naval Research Logistics Quarterly, 25(4):711–713, 1978.
•  B. Guenin, J. Könemann, and L. Tuncel. A gentle introduction to optimization. Cambridge University Press, 2014.
•  A. Gupta and R. O’Donnell. Lecture 18 on multicuts. Lecture Notes for 15-854(B): Advanced Approximation Algorithms, Spring 2008.
•  G. Guruganesh, L. Sanita, and C. Swamy. Improved region-growing and combinatorial algorithms for k-route cut problems. In Proceedings of the twenty-sixth annual ACM-SIAM symposium on Discrete algorithms, pages 676–695. Society for Industrial and Applied Mathematics, 2015.
•  T. E. Harris and F. S. Ross. Fundamentals of a method for evaluating rail net capacities. Technical report, Santa Monica, California, 1955.
•  E. Israeli and R. K. Wood. Shortest-path network interdiction. Networks: An International Journal, 40(2):97–111, 2002.
•  S. Khot. Ruling out ptas for graph min-bisection, dense k-subgraph, and bipartite clique. SIAM Journal on Computing, 36(4):1025–1071, 2006.
•  A. Linhares and C. Swamy. Improved algorithms for mst and metric-tsp interdiction. Proceedings of 44th International Colloquium on Automata, Languages, and Programming, 32:1–14, 2017.
•  J. Naor and L. Zosin. A 2-approximation algorithm for the directed multiway cut problem. SIAM Journal on Computing, 31(2):477–482, 2001.
•  C. H. Papadimitriou and M. Yannakakis. On the approximability of trade-offs and optimal access of web sources. In Proceedings 41st Annual Symposium on Foundations of Computer Science, pages 86–92. IEEE, 2000.
•  C. A. Phillips. The network inhibition problem. In Proceedings of the Twenty-fifth Annual ACM Symposium on Theory of Computing, STOC ’93, pages 776–785, New York, NY, USA, 1993. ACM.
•  A. Schrijver. On the history of the transportation and maximum flow problems. Mathematical Programming, 91(3):437–445, 2002.
•  A. Sharma and J. Vondrák. Multiway cut, pairwise realizable distributions, and descending thresholds. In STOC, 2014.
•  R. Wood. Deterministic network interdiction. Mathematical and Computer Modelling, 17(2):1–18, 1993.
•  R. Zenklusen. Matching interdiction. Discrete Applied Mathematics, 145(15), 2010.
•  R. Zenklusen. Network flow interdiction on planar graphs. Discrete Applied Mathematics, 158(13), 2010.
•  R. Zenklusen. Connectivity interdiction. Operations Research Letters, 42(67):450–454, 2014.
•  R. Zenklusen. An approximation for minimum spanning tree interdiction. Proceedings of 56th Annual IEEE Symposium on Foundations of Computer Science, pages 709–728, 2015.

## Appendix 0.A Validity of Algorithm 1

###### Proof of Lemma 1.

Let d'_p denote the distance from the source to any vertex p in H', viewing the x* variables as distances. Note that d'_{(tt)} ≥ 1/2, since the original distance is at least 1 and the contraction reduces distances by at most one half. Note also that the triangle inequality holds under this distance metric, so d'_q − d'_p is at most the distance between p and q.

#### Defining the Random Variables

Let α be chosen uniformly at random from the interval [0, d'_{(tt)}]. Consider the cut at distance α in H'. Let F_α be the set of original arcs corresponding to those in this cut. Let Y_α be the set of vertices we should downgrade so that the final cost of the arcs in F_α matches the cost associated to the cut; more precisely, we want c^{Y_α}(F_α) to be at most the total c-cost of the cut arcs in H'. Note that by construction F_α is an s-t cut in G. Let E = c^{Y_α}(F_α) and V = r(Y_α). Our goal is to show that these two random variables have low expectations and obtain our approximation guarantee using Markov's inequality. In particular, we will prove that E[E] ≤ 2c* and E[V] ≤ 2b, where c* is the optimal value of DLP.

To understand E, for every arc e ∈ A(G), we introduce the indicator variable E_e which is 1 if e ∈ F_α and 0 otherwise. Then E = Σ_{e∈A(G)} E_e c^{Y_α}(e). To study the value of c^{Y_α}(e), we can break into several cases depending on which arc [uv]_i is cut. If [uv]_i is cut, then the endpoints it charges are placed in Y_α, and one can check that c^{Y_α}(e) ≤ c([uv]_i). The proof of this claim is similar to the proof of Claim 0.B.2, where we check the different cases depending on which endpoints of uv lie in Y_α.

Slightly abusing notation, define the indicator variable E_{[uv]_i} for arc [uv]_i to be 1 if [uv]_i is in the cut at distance α and 0 otherwise. Then we can upper-bound the expectation of E using conditional expectations of these events as follows.

$$\mathbb{E}[E] = \sum_{e \in A(G)} \mathbb{E}\big[E_e\, c^{Y_\alpha}(e)\big] = \sum_{e \in A(G)} \sum_{i=0}^{3} \mathbb{E}\big[c^{Y_\alpha}(e) \mid E_{[uv]_i} = 1\big] \cdot \Pr\big[E_{[uv]_i} = 1\big] \le \sum_{e \in A(G)} \sum_{i=0}^{3} c([uv]_i)\, \Pr\big[E_{[uv]_i} = 1\big]$$

To understand the probability that E_{[uv]_i} = 1, note that the arc [uv]_i is in the cut at distance α if and only if d'_{(uv)_i} ≤ α < d'_{(uv)_{i+1}}. Then Pr[E_{[uv]_i} = 1] ≤ x*_{[uv]_i} / d'_{(tt)} ≤ 2x*_{[uv]_i}, since d'_{(tt)} ≥ 1/2. Combining with the previous inequalities, we see that

$$\mathbb{E}[E] \le \sum_{uv \in A(G)} \sum_{i=0}^{3} c([uv]_i)\, \Pr\big[E_{[uv]_i} = 1\big] \le \sum_{uv \in A(G)} \sum_{i=0}^{3} c([uv]_i) \cdot 2x^*_{[uv]_i} = 2c^*.$$

Next, we show a similar result for V. Note that V = Σ_{v∈V(G)} r(v)·Pr[v ∈ Y_α] in expectation. Recall that v ∈ Y_α if and only if at least one arc of the form [uv]_2, [uv]_3, [vw]_1 or [vw]_2 is cut. Note that if the corresponding original arc is unaided, then these arcs would have been contracted in H' and would never be chosen in the cut. Therefore, we only need to consider aided arcs. In order to upper-bound the probability of choosing v into Y_α, we thus need to find the range of possible α that might affect v, examining only aided arcs incident to the vertex v. Let p be an endpoint of such an arc with d'_p minimum, and let q be such an endpoint with d'_q maximum. By definition, all arcs charging a downgrade of v lie within the range [d'_p, d'_q], so v is chosen only if α is between d'_p and d'_q. The difference d'_q − d'_p is upper-bounded by the length of a shortest p-q path in H'. Since the relevant arcs are aided, their [uv]_0 arcs are contracted in H', so such a path stays on arcs of the form [uv]_2, [uv]_3, [vw]_1, [vw]_2 and its length is at most y*_v, where the last inequality follows from Constraint (5). (This is the main reason why we distinguish between aided and unaided arcs and contract the appropriate arcs to construct H'. Without the contraction, the path between p and q would include the arc [uv]_0 and thus could be arbitrarily longer than y*_v.) Then Pr[v ∈ Y_α] ≤ (d'_q − d'_p)/d'_{(tt)} ≤ 2y*_v. Therefore

$$\mathbb{E}[V] = \sum_{v \in V(G)} r(v) \cdot \Pr[v \in Y_\alpha] \le \sum_{v \in V(G)} r(v) \cdot 2y^*_v \le 2b.$$

Lastly, by Markov’s inequality, . Then it follows there exists such that and , proving our lemma.

###### Proof of Lemma 2.

Let α1 < α2 and consider the cuts at distances α1 and α2 respectively. Let B1, B2 be the vertex sets of the components containing the source in the respective balls. By definition, all vertices in B1 (resp. B2) are at distance at most α1 (resp. α2) from the source. Thus B1 ⊆ B2. The two cuts are distinct if and only if B1 is a proper subset of B2. This implies that the cuts at distance at most α can be properly ordered and there are only polynomially many such cuts. We will use induction to prove our claim that Algorithm 1 checks all cuts in this order.

The base case at distance 0 is obviously true. Now suppose the cuts induced by B1 and B2 are two consecutive cuts in the ordering and the algorithm has just checked the first. Consider the set of all arcs between B1 and B2 ∖ B1. First, note that all these arcs have the same length; otherwise there exists another cut at distance strictly between α1 and α2 (by adding the other end of the shortest such arc into B1 and looking at the cut it induces). Then the algorithm first picks up any one of these arcs and extends the set a bit. It then continues to pick up all the other such arcs before choosing any other arcs, due to Step 4 of Algorithm 1. Thus it will eventually also reach and check the cut induced by B2. ∎

## Appendix 0.B Appendix for NVDP

### 0.b.1 Detecting Zero in NVDP in Polynomial Time

In this subsection, we provide an algorithm to detect, for a given instance of NVDP, whether there exist nodes to downgrade such that the downgrading cost is within the budget and the minimum cut after downgrading is zero.

In order to demonstrate the main idea of the proof, we first work on a special case of NVDP. Suppose for every arc e = uv, c_e = c_e^u = c_e^v = 1 and c_e^{uv} = 0. In other words, every arc has unit cost and requires the downgrading of both of its ends in order to reduce its cost to zero. For every vertex v, we assume the interdiction cost r(v) = 1. We call this the Double-Downgrading Network Problem (DDNP). We first prove the following.

###### Lemma 3.

Given an instance of DDNP on graph G with budget b, there exists a polynomial time algorithm to determine if there exist Y ⊆ V ∖ {s, t} and an s-t cut F such that r(Y) ≤ b and c^Y(F) = 0.

###### Proof.

Let Y be a minimum set of vertices to downgrade such that the resulting graph contains a cut of zero cost. Let F be the set of arcs in the graph induced by Y (i.e., with both ends in Y). Note that the arcs of F are the only arcs with cost zero and hence F is an arc cut in G. Furthermore, since Y is optimal, Y is exactly the set of vertices incident to F (there are no isolated vertices in the graph induced by Y). Let S, T be the sets of vertices in the components of G ∖ F that contain s and t respectively.

Consider the graph G' where we add an arc uv to G whenever there exists w such that uw, wv ∈ A(G). First we claim that Y is an s-t vertex cut in G'. Suppose there is an s-t path in G' ∖ Y where the first arc crossing over from S to T is uv. Any such u and v are at distance at least 3 apart in G, since every u-v path must use an arc of F, both of whose ends lie in Y; hence u and v do not have an arc between them in G', a contradiction.

Given any s-t vertex cut Y' in G', we claim that downgrading Y' in G creates an s-t cut of zero cost, obtained by deleting the arcs induced by Y' from G. Suppose for a contradiction that there is an s-t path after downgrading Y' and deleting the zero-cost arcs induced by Y'. Then the path cannot have two consecutive nodes in Y'. Let w be a single node of Y' along the path with neighbors u, v on the path. Note that uw, wv ∈ A(G), so uv ∈ A(G'), and shortcutting over all such single-node occurrences of Y' along the path gives us an s-t path in G' ∖ Y', a contradiction.

This proves that a minimum-size downgrading vertex set in G whose downgrading produces a zero-cost s-t cut is also a minimum s-t vertex cut in G'. One can then check if a zero-cut solution exists with budget b for DDNP by simply checking whether the minimum s-t vertex cut in G' has size at most b. ∎
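The reduction in this proof can be sketched as follows: build the shortcut graph G' and compute a minimum s-t vertex cut. The node-splitting max-flow construction used for the vertex cut is our choice of standard tooling; the paper only asserts that a minimum vertex cut can be found in polynomial time.

```python
from collections import defaultdict, deque

def square_graph(arcs):
    """G': the original arcs plus a shortcut uv whenever some w has uw and wv."""
    out = defaultdict(set)
    for u, v in arcs:
        out[u].add(v)
    extra = set()
    for u in list(out):
        for w in out[u]:
            for v in out.get(w, ()):
                if v != u:
                    extra.add((u, v))
    return set(arcs) | extra

def min_vertex_cut(arcs, s, t):
    """Minimum s-t vertex cut via node splitting + unit-capacity max-flow.
    Assumes s and t are non-adjacent (otherwise no vertex cut exists)."""
    INF = 10 ** 9
    if (s, t) in set(arcs):
        return INF
    cap = defaultdict(int)
    adj = defaultdict(set)
    def add(a, b, c):
        cap[(a, b)] += c
        adj[a].add(b); adj[b].add(a)
    nodes = {s, t} | {u for u, _ in arcs} | {v for _, v in arcs}
    for v in nodes:                        # capacity 1 on every cuttable vertex
        add((v, "in"), (v, "out"), INF if v in (s, t) else 1)
    for u, v in arcs:
        add((u, "out"), (v, "in"), INF)
    src, snk = (s, "out"), (t, "in")
    flow = 0
    while True:                            # Edmonds-Karp, unit augmentations
        par = {src: None}
        q = deque([src])
        while q and snk not in par:
            a = q.popleft()
            for b in adj[a]:
                if b not in par and cap[(a, b)] > 0:
                    par[b] = a
                    q.append(b)
        if snk not in par:
            return flow
        b = snk
        while par[b] is not None:
            a = par[b]
            cap[(a, b)] -= 1
            cap[(b, a)] += 1
            b = a
        flow += 1
```

On the path s → a → b → t, downgrading {a, b} zeroes the arc ab, and indeed the minimum vertex cut of the shortcut graph has size 2.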

Now, to prove Theorem 1.1, we slightly modify the graph and the construction of G' in order to adapt to the various costs. Our goal is still to look for a minimum vertex cut in an auxiliary graph, using r as the vertex cost.

###### Proof.

Given an instance of NVDP on G with a budget b, vertex downgrading costs r and arc costs c, consider the following auxiliary graph. First, we delete any arc e with c_e = 0, since such arcs are free to cut anyway. For every arc e = uv with c_e^{uv} > 0, subdivide e with a vertex x_e. In some sense, since c_e^{uv} > 0, downgrading cannot reduce the cost of e to zero, so we should never be allowed to touch the vertex x_e. Let X be the set of all newly added subdivision vertices. To finish constructing the auxiliary graph, our next step is to properly simulate G'.

We classify the remaining arcs into five types based on which of their costs are zero. Note that we no longer have any arcs whose cost cannot be reduced to zero. One type consists of arcs where downgrading either end reduces the cost to zero. Three further types respectively consist of arcs that require the downgrading of the tail, of the head, or of both ends in order to reduce their cost to zero. The last type consists of all remaining arcs, those incident to a newly subdivided vertex in X. Now, for every path of length two, we consider adding a shortcut arc based on the following rules (see Figure 2 for examples of newly added arcs):

If