1 Introduction
Let $G = (V, E)$ be an undirected graph with vertex set $V$ and edge set $E$, where $|V| = n$ and every $e \in E$ has a nonnegative cost $w_e$. In addition, we are given a designated “source” vertex $s \in V$. We are concerned with attempting to mitigate some sort of “disaster” that begins at $s$ and infectiously spreads through the network via the edges. This means that vertices that are connected to $s$ (i.e., vertices $v$ for which there exists an undirected $s$–$v$ path in $G$) are at some sort of risk or disadvantage.
A natural approach to mitigating the aforementioned spread is to remove edges from $G$, in an attempt to disconnect as many vertices of the graph from $s$ as possible. Specifically, if we remove a cutset or simply cut $F \subseteq E$ from the graph, we denote by $V_F$ the set of vertices in $V$ that are no longer connected to $s$ in $G' = (V, E \setminus F)$, and hence are protected from the infectious process. At a high level, the edge-removal strategy contains the disastrous event within the set $V \setminus V_F$. Observe now that there is a clear tradeoff between the cost $w(F)$ of the cut (for a cost vector $w$ and a subset $F \subseteq E$, we use $w(F)$ to denote $\sum_{e \in F} w_e$) and $|V_F|$, with existing literature having addressed the problem of either minimizing $w(F)$ subject to a lower-bound constraint on $|V_F|$, or maximizing $|V_F|$ (equivalently minimizing $|V \setminus V_F|$) subject to an upper-bound constraint on $w(F)$ [Hayrapetyan et al., 2005, Svitkina and Tardos, 2004].

Inspired by the recent interest revolving around algorithmic fairness, our goal in this paper is to incorporate such ideas in the previously described problem scenario, and initiate the discussion of fairness requirements for cuts in graphs. To the best of our knowledge, our work here is the first to combine fairness with this family of problems.
The first notion of fairness that we consider is called Demographic Fairness, and was first introduced in the seminal work of Dwork et al. [2012]. In this definition, the set of elements that require “service” consists of various subsets—say demographic groups—and the solution should equally and fairly treat and represent each of these groups. In our case, if the vertices of the graph belong to different groups, we would like our solution to fairly separate points of each of them from the designated node $s$. In this way, we will avoid outcomes that completely ignore certain groups for the sake of minimizing the objective function. Hence, we define the following problem.
DemFairCut: In addition to a graph $G = (V, E)$ with weights $w$ and source $s$, for some given integer $\gamma \ge 1$ we are given sets $V_1, \ldots, V_\gamma$ and values $\alpha_1, \ldots, \alpha_\gamma$, such that for every $h \in [\gamma]$ (we use $[k]$ to denote $\{1, 2, \ldots, k\}$ for any integer $k$) we have $V_h \subseteq V \setminus \{s\}$ and $\alpha_h \in (0, 1]$. Note that each $v \in V$ may actually belong to multiple sets $V_h$. Letting $n_h = |V_h|$, the goal is to find a cut $F \subseteq E$ with the minimum possible $w(F)$, subject to the constraint that $|V_F \cap V_h| \ge \alpha_h n_h$ for all $h \in [\gamma]$. In words, if each $V_h$ is interpreted as a demographic, we want the minimum-cost cut under the condition that at least an $\alpha_h$ fraction of the points in $V_h$ are disconnected from $s$ (for all $h$).
Instantiating this definition with different values of $\alpha_h$ allows us to model a variety of fairness scenarios. For example, setting $\alpha_h = 1/2$ for all $h$ would let us guarantee a solution that protects at least half the vertices of each $V_h$. Alternatively, we can set $\alpha_h$ to be a decreasing function of $n_h$, and thus yield a solution that focuses more on protecting smaller demographics.
The second notion of fairness we consider is called Probabilistic Individual Fairness, and was first introduced in the context of robust clustering [Harris et al., 2019, Anegg et al., 2020]. In this definition the solution should not be a single cut, but rather a distribution $\mathcal{D}$ over cuts. The fairness constraint is that for each input element (vertex), the probability that it will get “good service” (be disconnected from the source) in a randomly drawn cut from this distribution is at least some given parameter. Obviously, sampling from this constructed distribution $\mathcal{D}$ must be possible in polynomial time (we call such distributions efficiently-sampleable). Under this notion of fairness we avoid outcomes that deterministically prevent satisfactory outcomes for certain individuals. In our case, this implies providing solutions that ensure each vertex is disconnected from $s$ with a certain probability:

IndFairCut: In addition to a graph $G = (V, E)$ with weights $w$ and source $s$, for each $v \in V \setminus \{s\}$ we are also given a value $p_v \in [0, 1]$. The goal is to find an efficiently-sampleable distribution $\mathcal{D}$ over the cuts of $G$, such that $\Pr_{F \sim \mathcal{D}}[v \in V_F] \ge p_v$ for each $v$, and the expected cost $\mathbb{E}_{F \sim \mathcal{D}}[w(F)]$ is the minimum possible.
Finally, in the context of demographic fairness, we also study an additional setting. In this case, we assume a standard SIR model of spread [Eubank et al., 2006], where each vertex is in one of the states S (susceptible), I (infectious) or R (recovered). An infectious vertex infects each susceptible neighbor once, independently with some known probability $p$, where $p \in (0, 1]$ is assumed to be a constant. This model is equivalent to the following percolation process [Pastor-Satorras et al., 2015, Marathe and Vullikanti, 2013]. Consider a random subgraph $G(p)$ of $G$, obtained by retaining each edge independently with probability $p$ (and thus removing each edge with probability $1 - p$). We sometimes abuse notation and let $G(p)$ also represent the distribution over subgraphs thus obtained. Then, the probability that a set of vertices is not reachable from $s$ in a subgraph drawn from $G(p)$ is precisely equal to the probability that the set does not become infected during the SIR process. As a result, for the removal of an edge-set $F$, the expected number of vertices it protects in the SIR process is precisely $\mathbb{E}_{H \sim G(p)}[\,|V_F(H)|\,]$, where $V_F(H)$ is the set of vertices not reachable from $s$ in $H$ after the removal of $F$. Therefore, we can focus on the following case:
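The percolation view above lends itself to a simple Monte Carlo check. The sketch below (a toy illustration, not code from the paper; the edge-list representation and function name are our own) estimates the expected number of vertices protected by a removed edge set $F$: sample a subgraph by keeping each surviving edge with probability $p$, then count the vertices a BFS from the source fails to reach.

```python
import random
from collections import deque

def sample_protected(n, edges, s, F, p, trials=2000, seed=0):
    """Monte Carlo estimate of E[# vertices unreachable from s] after
    removing edge set F, when each surviving edge is retained
    independently with probability p (the percolation view of SIR)."""
    rng = random.Random(seed)
    kept = [e for e in edges if e not in F]
    total = 0
    for _ in range(trials):
        # Sample a random subgraph: keep each edge independently w.p. p.
        adj = [[] for _ in range(n)]
        for (u, v) in kept:
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
        # BFS from s; protected vertices are those never reached.
        seen = [False] * n
        seen[s] = True
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    q.append(v)
        total += n - sum(seen)
    return total / trials

# Toy path 0-1-2-3 with source 0: cutting edge (1,2) always protects
# vertices 2 and 3, and vertex 1 is also protected whenever percolation
# drops edge (0,1), so the estimate lies between 2 and 3.
edges = [(0, 1), (1, 2), (2, 3)]
est = sample_protected(4, edges, s=0, F={(1, 2)}, p=0.5)
```

With $p = 1$ the percolation is trivial and the count is deterministic, which makes the equivalence with ordinary graph cuts easy to sanity-check.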
RandDemFairCut: For a given instance of DemFairCut in this stochastic setting, we want to compute $F \subseteq E$ with the minimum possible $w(F)$, such that $\mathbb{E}[\,|V_F(H) \cap V_h|\,] \ge \alpha_h n_h$ for every $h \in [\gamma]$. Thus, we want the expected fraction of protected vertices from each $V_h$ to be at least $\alpha_h$, where the expectation is over the randomness of the model $G(p)$ and any randomization in our algorithm.
Observation 1.
In all our problems, we can assume that the disastrous event simultaneously starts from a set of vertices $S \subseteq V$, instead of just a single designated vertex. This assumption is without loss of generality, since $S$ can be merged into a single vertex (by retaining all edges between $S$ and $V \setminus S$), thus giving an equivalent formulation that matches ours, where we have a single start-vertex $s$.
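The merge in Observation 1 can be sketched concretely. The helper below (illustrative only; the representation and function name are our own) contracts a source set $S$ into a new source with id 0, keeps every edge between $S$ and the rest as a parallel edge with its own cost, and drops edges internal to $S$.

```python
def merge_sources(n, edges, w, S):
    """Contract the source set S (vertex ids) into a single new source.
    Parallel edges created by the contraction are kept, since each
    retains its own cost; edges inside S disappear. Returns
    (new_n, new_edges, new_w, s') with the new source s' = 0 and the
    remaining vertices renumbered 1..new_n-1."""
    S = set(S)
    rest = sorted(v for v in range(n) if v not in S)
    rename = {v: i + 1 for i, v in enumerate(rest)}  # s' gets id 0
    new_edges, new_w = [], []
    for (u, v), cost in zip(edges, w):
        a = 0 if u in S else rename[u]
        b = 0 if v in S else rename[v]
        if a != b:  # drop edges with both endpoints inside S
            new_edges.append((a, b))
            new_w.append(cost)
    return len(rest) + 1, new_edges, new_w, 0

# Merging S = {0, 2} in a 4-vertex graph: the edge (0, 2) inside S
# disappears, and every other edge keeps its original cost.
n2, e2, w2, s2 = merge_sources(4, [(0, 1), (1, 2), (2, 3), (0, 2)],
                               [1, 2, 3, 4], {0, 2})
```

Any cut separating a vertex from the merged source then corresponds exactly to a cut separating it from all of $S$ in the original graph.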
1.1 Contribution and outline
Our main contribution is introducing the first fair variants of graph-cut problems, together with approximation algorithms with provable guarantees for them.
In Section 2 we present a technique that is required in our approach for solving DemFairCut and IndFairCut. The key insight is that we can reduce these problems on general graphs to the same problems on trees by using a tree embedding result of Räcke [2008].
Definition 2.
We call an algorithm for DemFairCut a $(\rho, \beta)$-bicriteria algorithm if, for any given problem instance with optimal value $OPT$, it returns a solution $F$ such that 1) $w(F) \le \rho \cdot OPT$, and 2) $|V_F \cap V_h| \ge \beta \alpha_h n_h$ for all $h \in [\gamma]$.
In Section 3 we address the demographic fairness case. We first give an $(O(\log n), 1)$-bicriteria algorithm for DemFairCut based on dynamic programming, when the number of groups $\gamma$ is a constant. Then, for any constant $\epsilon \in (0, 1)$, we give an $(O(\log n \log \gamma), 1 - \epsilon)$-bicriteria algorithm for DemFairCut for arbitrary (not necessarily constant) $\gamma$. The latter algorithm is based on a linear-programming relaxation together with a new dependent randomized-rounding technique. Finally, we present a sampling scheme that, given any bicriteria algorithm for DemFairCut, constructs a bicriteria algorithm for RandDemFairCut when the input graph is sufficiently dense. Specifically, if $\mu$ is the size of the minimum cut of $G$, our sampling scheme works when $\mu$ is sufficiently large. We later present examples of real-life networks where this cut condition is realistic.

1.2 Motivating examples
Regarding demographic fairness, consider the following potential application. The vertices of the graph would correspond to geographic areas across the globe, and an edge $(u, v)$ would denote whether or not there is underlying infrastructure, e.g., highways or airplane routes, that can transport people between areas $u$ and $v$. The disastrous event in this scenario is the spread of a disease in a global health crisis. If an area $u$ is “infected”, then it is natural to assume that neighboring areas (i.e., areas $v$ with $(u, v) \in E$) can also get infected if we allow people to travel between $u$ and $v$. A central planner will now naturally try to break a set of connections $F$ from the infrastructure graph, such that the total cost of these actions will be as small as possible, while some guarantee on the number of protected areas is also satisfied. The value $w(F)$ can be interpreted as the economic cost of the proposed strategy $F$, e.g., the lost revenue of airline companies resulting from cancelling flights.
In terms of fairness, we can think of the areas as coming from different countries, with $V_h$ being the areas associated with country $h$. Then, a fair solution would not tolerate a discrepancy in how many areas are protected across different countries. For example, a fair approach would be to ensure that each country has at least half of its areas protected, since the fewer “infected” areas each country has, the more easily it can keep its local crisis under control. Finally, in this application the minimum degree of the infrastructure graph must be significantly large, because we can assume that each geographic area contains a large airport, and the total number of flights departing from and arriving at such an airport is sufficiently high. The latter observation immediately implies that $\mu$ must be large, and hence even our results for RandDemFairCut are applicable here.
As far as individual fairness is concerned, consider a computer network facing the spread of a computer virus. In this scenario, we want to minimize the cost of the connections removed, such that the infectious process is kept under control. However, each individual user of the network would arguably prefer to be in the set of protected vertices. Our notion of individual fairness, as studied in IndFairCut, will ensure exactly that in a stochastic sense.
1.3 Related work
Unfair variants of DemFairCut and IndFairCut are studied by Hayrapetyan et al. [2005] and Svitkina and Tardos [2004], where the goal is to either minimize $w(F)$ subject to a lower-bound constraint on $|V_F|$, or maximize $|V_F|$ (equivalently minimize $|V \setminus V_F|$) subject to an upper-bound constraint on $w(F)$.
Regarding the SIR model, it has been extensively studied in computational epidemiology, with plenty of different variants considered [Bollobás and Riordan, 2004, Marathe and Vullikanti, 2013, Aspnes et al., 2005, Anil Kumar et al., 2010, Eubank et al., 2006, Yang et al., 2019, Eames et al., 2009, Cohen et al., 2003, Miller and Hyman, 2007, Barabási and Albert, 1999, Sambaturu et al., 2020]. Examples of such variants include, among others, a random initial source, vertex removal instead of edge removal, and different stochastic regimes for the random graph $G(p)$. However, none of these models has considered fair versions of the problem.
The study of fairness in machine learning was initiated by the realization that datasets include implicit biases, and hence when algorithms are trained on them, they learn to perpetuate the underlying biases. Examples of this include racial bias in Airbnb rentals [Badger, 2016], gender bias in Google’s Ad Settings [Datta et al., 2015] and discrimination in housing ads on Facebook [Benner et al., 2019].
The seminal work of Dwork et al. [2012] first addressed the issue of fairness in algorithmic design. Examples of fair algorithms related to our work are abundant in the clustering literature, with Chierichetti et al. [2017], Bercea et al. [2019], Bera et al. [2019], Huang et al. [2019], Backurs et al. [2019], Ahmadian et al. [2019] considering notions of demographic fairness, while Brubach et al. [2020], Anegg et al. [2020], Harris et al. [2019] focus on notions of individual fairness.
2 Reduction to tree instances
In this section we show how both DemFairCut and IndFairCut can be effectively reduced to solving the corresponding problem on tree instances. To do this, we use the following lemma, where for any $S \subseteq V$ we denote by $\delta(S)$ the set of edges in $E$ with exactly one endpoint in $S$.
Lemma 3.
[Räcke, 2008] For any undirected $G = (V, E)$ with edge costs $w$, we can efficiently construct a collection of trees $T_1, \ldots, T_k$ on the vertex set $V$, with $k = \mathrm{poly}(n)$ (throughout, “poly” will denote an arbitrary univariate polynomial: its usage in different places could connote different polynomials), with each tree $T_i$ having an edge-cost function $w^i$, and find nonnegative multipliers $\lambda_1, \ldots, \lambda_k$, such that $\sum_{i=1}^{k} \lambda_i = 1$. Moreover, for any $S \subseteq V$ let $\delta_i(S)$ denote the set of edges of $T_i$ with exactly one endpoint in $S$. Then:

1. $w(\delta(S)) \le w^i(\delta_i(S))$ for every $i \in [k]$ and every $S \subseteq V$

2. $\sum_{i=1}^{k} \lambda_i w^i(\delta_i(S)) \le O(\log n) \cdot w(\delta(S))$ for every $S \subseteq V$
Lemma 4.
If we have a $(\rho, \beta)$-bicriteria algorithm for DemFairCut on trees, we can get an $(O(\rho \log n), \beta)$-bicriteria algorithm for DemFairCut on general graphs.
Proof.
If $(G, w, s, \{V_h\}_{h \in [\gamma]}, \{\alpha_h\}_{h \in [\gamma]})$ is the general instance, we first apply the result of Lemma 3 in order to get a collection of trees $T_1, \ldots, T_k$, where each tree $T_i$ has an associated edge-weight function $w^i$. We then use the given algorithm and solve DemFairCut on each tree instance $(T_i, w^i, s, \{V_h\}, \{\alpha_h\})$, and get a solution $F_i$ in return. For the solution $F_i$ we compute for $T_i$, let $S_i$ be the set of vertices disconnected from $s$ in $T_i$ after the removal of $F_i$, and note that the properties of the algorithm ensure $|S_i \cap V_h| \ge \beta \alpha_h n_h$ for all $h \in [\gamma]$.

After running the algorithm on each tree instance, we find the tree $T_{i^*}$ with $i^* = \arg\min_i w^i(F_i)$, and we set our solution for the general graph to be $F = \delta(S_{i^*})$. This means that in our general solution $V_F \supseteq S_{i^*}$. Combining this observation with the fact that $|S_{i^*} \cap V_h| \ge \beta \alpha_h n_h$ for all $h$, implies that we satisfy a $\beta$ fraction of all demographic constraints in the general solution as well. We now only have to reason about the cost of $F$.

Let $S^*$ be the set of vertices not connected to $s$ in the optimal solution of $(G, w, s, \{V_h\}, \{\alpha_h\})$. If $OPT$ is the value of the latter, then $w(\delta(S^*)) \le OPT$. Also, since $S^*$ satisfies all demographic constraints exactly, the set $\delta_i(S^*)$ is a feasible solution for $(T_i, w^i, s, \{V_h\}, \{\alpha_h\})$, and hence $w^i(F_i) \le \rho \cdot w^i(\delta_i(S^*))$. Hence:

(1) $w^{i^*}(F_{i^*}) \le \sum_{i=1}^{k} \lambda_i w^i(F_i) \le \rho \sum_{i=1}^{k} \lambda_i w^i(\delta_i(S^*))$

Using the definition of $F$ and the first property of the trees from Lemma 3 gives

(2) $w(F) = w(\delta(S_{i^*})) \le w^{i^*}(\delta_{i^*}(S_{i^*})) \le w^{i^*}(F_{i^*})$

Combining (1), (2) and the second property of Lemma 3 yields

(3) $w(F) \le \rho \sum_{i=1}^{k} \lambda_i w^i(\delta_i(S^*)) \le O(\rho \log n) \cdot w(\delta(S^*)) \le O(\rho \log n) \cdot OPT$ ∎
Our approach for tackling IndFairCut uses as a black box an algorithm for a problem introduced in [Svitkina and Tardos, 2004], called Maximum Size Bounded Capacity Cut (MaxSBCC).
MaxSBCC: We are given an undirected graph $G = (V, E)$, a designated vertex $s$, and a budget $B \ge 0$. In addition, each $e \in E$ has a weight $w_e \ge 0$, and each vertex $v$ has a value $\pi_v \ge 0$. The goal is to find a cut $F \subseteq E$ with $w(F) \le B$, that maximizes $\pi(V_F) = \sum_{v \in V_F} \pi_v$.
Definition 5.
We say that an algorithm is a $(\rho, \beta)$-bicriteria algorithm for MaxSBCC if, for any given instance of the problem with optimal value $OPT$, it returns a set of edges $F$, such that 1) $w(F) \le \rho B$ and 2) $\pi(V_F) \ge \beta \cdot OPT$.
Even though we could use the bicriteria algorithm for MaxSBCC presented in [Svitkina and Tardos, 2004], we develop a more efficient bicriteria algorithm for this problem, using again a reduction to tree instances. Our improved algorithm for MaxSBCC will eventually yield better guarantees for IndFairCut.
Lemma 6.
If we have a $(\rho, \beta)$-bicriteria algorithm for MaxSBCC on tree instances, we can devise an $(O(\rho \log n), \beta)$-bicriteria algorithm for MaxSBCC on general graphs.
Proof.
Let $(G, w, s, B, \pi)$ be an instance of MaxSBCC for a general graph. We first apply the result of Lemma 3 in order to get a collection of trees $T_1, \ldots, T_k$ with edge-weight functions $w^1, \ldots, w^k$. Then, for each such tree we create an instance $(T_i, w^i, s, O(\log n) \cdot B, \pi)$, and we use the given algorithm to solve MaxSBCC on it. Let $F_i$ be the solution we get for $T_i$, and for notational convenience let again $S_i$ be the set of vertices disconnected from $s$ in $T_i$ after the removal of $F_i$. After that, we find the tree $T_{i^*}$ with $i^* = \arg\max_i \pi(S_i)$, and we set our solution for the general graph to be $F = \delta(S_{i^*})$. This means that in our general solution we again get $V_F \supseteq S_{i^*}$.

Because the budget of each tree instance is $O(\log n) \cdot B$, the properties of the bicriteria algorithm give $w^i(F_i) \le O(\rho \log n) \cdot B$ for every $i$. From the first property in Lemma 3 we thus get

$w(F) = w(\delta(S_{i^*})) \le w^{i^*}(\delta_{i^*}(S_{i^*})) \le w^{i^*}(F_{i^*}) \le O(\rho \log n) \cdot B$

To conclude, we need to show that $\pi(S_{i^*}) \ge \beta \cdot OPT$, where $OPT$ is the value of the optimal solution of $(G, w, s, B, \pi)$. Let also $S^*$ be the set of vertices disconnected from $s$ in the optimal solution of $(G, w, s, B, \pi)$. Since $S^*$ is the optimal such set of vertices, we have $\pi(S^*) = OPT$. Moreover, let $i' = \arg\min_i w^{i}(\delta_i(S^*))$. Using the definition of $i'$ and the second property from Lemma 3 gives

$w^{i'}(\delta_{i'}(S^*)) \le \sum_{i=1}^{k} \lambda_i w^i(\delta_i(S^*)) \le O(\log n) \cdot w(\delta(S^*)) \le O(\log n) \cdot B$

Hence $\delta_{i'}(S^*)$ is feasible for $(T_{i'}, w^{i'}, s, O(\log n) \cdot B, \pi)$, and since the given algorithm is a $(\rho, \beta)$-bicriteria algorithm we get

$\pi(S_{i^*}) \ge \pi(S_{i'}) \ge \beta \cdot \pi(S^*) = \beta \cdot OPT$ ∎
3 Addressing demographic fairness
In this section we tackle our cut problems of interest that involve demographic fairness constraints. We begin by presenting two algorithms for DemFairCut. The first works for a constant $\gamma$, and is an $O(\log n)$-approximation (i.e., an $(O(\log n), 1)$-bicriteria algorithm). The second addresses the case of an arbitrary $\gamma$, and for any constant $\epsilon \in (0, 1)$ it is an $(O(\log n \log \gamma), 1 - \epsilon)$-bicriteria algorithm. Finally, we demonstrate a sampling scheme which, given any bicriteria algorithm for DemFairCut, produces a bicriteria algorithm for RandDemFairCut.
3.1 Solving DemFairCut for constant $\gamma$
Given Lemma 4, we can focus on only solving the problem on tree instances. Specifically, we show that when $\gamma$ is a constant, the problem on trees can be solved optimally via dynamic programming. Without loss of generality, we can also assume that the given tree is rooted at $s$ and is binary. For details on why this assumption is safe to use, we refer the reader to the relevant lemma from [Williamson and Shmoys, 2011]. Before we describe our approach we need some additional notation. For a vertex $u$ and $h \in [\gamma]$, let $\mathbb{1}_h(u) = 1$ if $u \in V_h$ and $\mathbb{1}_h(u) = 0$ otherwise.
Our dynamic programming algorithm is based on a table $B$, where $B[u, \vec{k}]$ with $\vec{k} = (k_1, \ldots, k_\gamma)$ represents the minimum cost of a cut in the subtree rooted at $u$, so that there are exactly $k_h$ nodes from $V_h$ that are connected to $u$, for every $h \in [\gamma]$. Let $r$ be the right child of $u$, and let $l$ be the left child of $u$. Observe that the optimal solution either cuts neither of the edges from $u$ to its children, just the left edge, just the right edge, or both of the edges. So, writing $\vec{\mathbb{1}}(u) = (\mathbb{1}_1(u), \ldots, \mathbb{1}_\gamma(u))$, we set $B[u, \vec{k}]$ to the minimum of the following:

1. $\min_{\vec{a} + \vec{b} = \vec{k} - \vec{\mathbb{1}}(u)} \left( B[l, \vec{a}] + B[r, \vec{b}] \right)$

2. $w_{(u,l)} + B[r, \vec{k} - \vec{\mathbb{1}}(u)]$

3. $w_{(u,r)} + B[l, \vec{k} - \vec{\mathbb{1}}(u)]$

4. $w_{(u,l)} + w_{(u,r)}$, if $k_h = \mathbb{1}_h(u)$ for all $h \in [\gamma]$.

The first case above corresponds to cutting neither of the edges $(u, l)$, $(u, r)$, the second to cutting only $(u, l)$, the third to cutting only $(u, r)$, and the fourth to cutting both.
To fill in $B$, we begin by initializing $B[v, \vec{\mathbb{1}}(v)] = 0$ for all leaves $v$ of the tree, and all other leaf-related entries to $+\infty$. Then we proceed by filling the table bottom-up. There are at most $n(n+1)^\gamma$ table entries, and to compute each one we need to access at most $O((n+1)^\gamma)$ other ones. Thus, the total runtime is $n^{O(\gamma)}$. Finally, in order to find the optimal cut, we look for the minimum entry $B[s, \vec{k}]$ such that $k_h \le (1 - \alpha_h) n_h$ for all $h \in [\gamma]$.
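The table-filling procedure can be sketched in code for the special case of a single demographic ($\gamma = 1$), which keeps the table one-dimensional per vertex. This is an illustrative reading of the recursion, with our own (hypothetical) representation: `children` maps a vertex to its children, `weight` gives child-edge costs, and `in_group` flags group membership.

```python
import itertools
import math

def demfair_tree_dp(root, children, weight, in_group, alpha):
    """Bottom-up DP for DemFairCut on a tree rooted at the source, for a
    single demographic (gamma = 1).  The table entry B[j] for a subtree
    is the minimum cost of a cut leaving exactly j group vertices
    connected to the subtree's root."""
    INF = math.inf

    def solve(v):
        kids = children.get(v, [])
        if not kids:
            return {in_group[v]: 0.0}
        tables = [solve(c) for c in kids]
        B = {}
        # For each child edge, decide independently: cut it (pay its
        # weight, contribute 0 connected group vertices, no further cuts
        # needed below) or keep it (combine the child's table).  With
        # two children this covers the text's four cases.
        for cuts in itertools.product([False, True], repeat=len(kids)):
            cost0 = sum(weight[(v, c)] for c, cut in zip(kids, cuts) if cut)
            combos = [(in_group[v], cost0)]
            for tab, cut in zip(tables, cuts):
                if not cut:
                    combos = [(j + jj, c + tab[jj])
                              for (j, c) in combos for jj in tab]
            for j, c in combos:
                if c < B.get(j, INF):
                    B[j] = c
        return B

    B_root = solve(root)
    group_size = sum(in_group.values())
    budget = math.floor((1 - alpha) * group_size)  # max allowed connected
    return min((c for j, c in B_root.items() if j <= budget), default=INF)

# Path 0-1-2 rooted at the source 0; vertices 1 and 2 form the group.
children = {0: [1], 1: [2]}
weight = {(0, 1): 5.0, (1, 2): 1.0}
in_group = {0: 0, 1: 1, 2: 1}
```

For $\alpha = 1/2$ the cheap cut $(1, 2)$ suffices (vertex 1 may stay connected), while $\alpha = 1$ forces the expensive cut $(0, 1)$.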
Theorem 7.
When $\gamma$ is a constant, we have an optimal dynamic programming algorithm for DemFairCut on trees, running in time $n^{O(\gamma)}$.
Theorem 8.
When $\gamma = O(1)$, we provide an $O(\log n)$-approximation algorithm for DemFairCut.
3.2 Solving DemFairCut for an arbitrary $\gamma$
Given Lemma 4, we again focus on instances $(T, w, s, \{V_h\}, \{\alpha_h\})$, where the underlying graph $T$ is a tree. Moreover, we can assume without loss of generality that the tree is rooted at $s$. Before we proceed with the description of our algorithm, we need some more notation. For every $v \in V$ let $P_v$ be the (edge set of the) unique path from $s$ to $v$ in the tree. In addition, for every edge $e = (u, v) \in E$, with $u$ the endpoint closest to $s$, let $P_e = P_u$. In words, $P_e$ contains the edges of the path that starts from $s$ and finishes just before reaching $e$. The following linear program (LP) is then a valid relaxation of our problem.
(4) $\min \sum_{e \in E} w_e x_e$

(5) s.t. $\sum_{e \in P_v} x_e \le 1 \quad \forall v \in V$

(6) $\sum_{v \in V_h} \sum_{e \in P_v} x_e \ge \alpha_h n_h \quad \forall h \in [\gamma]$

(7) $x_e \ge 0 \quad \forall e \in E$
In the integral version of LP (4)–(7), $x_e = 1$ iff edge $e$ is included in the cut. Now notice that because the underlying graph is a tree and the edge weights are nonnegative, for any $v$ the optimal solution would not choose more than one edge from $P_v$. Therefore, by constraints (5) and (7) we see that $\sum_{e \in P_v} x_e = 1$ iff $v$ is separated from $s$ in the optimal outcome. Consequently, constraint (6) naturally captures the demographic covering requirements.
Our approach begins by solving LP (4)–(7) in order to get a fractional solution $x$. We then apply the following dependent randomized-rounding scheme. We consider the edges of the tree in non-decreasing order of their depth $|P_e|$, and for an edge $e$ for which no other edge in $P_e$ is already chosen for the cut, we remove it with probability $x_e / (1 - \sum_{e' \in P_e} x_{e'})$ if $\sum_{e' \in P_e} x_{e'} < 1$. The latter action makes sense because for every $e' \in P_e$ we have $|P_{e'}| < |P_e|$, and hence $e'$ is considered before $e$ in the given ordering. Further, if an edge $e$ is chosen to be placed in the cut, then all $v$ with $e \in P_v$ are now disconnected from $s$. In addition, observe that due to the dependent nature of this process, no path $P_v$ will have more than one of its edges in the solution.
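The dependent rounding just described can be sketched as follows. This is our own reading of the scheme (the paper's Algorithm 1 is not reproduced here), with a hypothetical parent-pointer representation: `x[v]` stands for the LP value of the edge between `v` and its parent. The empirical check at the end illustrates the key property that each edge ends up in the cut with probability exactly equal to its LP value.

```python
import random

def round_tree_cut(parent, x, s, rng):
    """One run of the dependent rounding on a tree rooted at s.
    parent[v] is v's parent and x[v] the LP value of edge (parent[v], v).
    Vertices are processed in order of increasing depth, so every edge
    on P_e is decided before e; each root-to-node path receives at most
    one cut edge."""
    depth = {s: 0}
    def d(v):
        if v not in depth:
            depth[v] = d(parent[v]) + 1
        return depth[v]
    order = sorted(parent, key=d)
    cut = set()
    blocked = {s: False}   # blocked[v]: an edge above v was already cut
    mass = {s: 0.0}        # mass[v]: sum of x over the path from s to v
    for v in order:
        u = parent[v]
        blocked[v] = blocked[u]
        mass[v] = mass[u] + x[v]
        if blocked[u] or mass[u] >= 1.0:
            continue       # no decision: ancestor cut, or no mass left
        if rng.random() < x[v] / (1.0 - mass[u]):
            cut.add(v)     # cut the edge (parent[v], v)
            blocked[v] = True
    return cut

# On the path 0-1-2 with x-values 0.3 and 0.4, the deeper edge is cut
# with probability (1 - 0.3) * 0.4 / (1 - 0.3) = 0.4, matching its LP value.
rng = random.Random(1)
trials = 20000
c1 = c2 = 0
for _ in range(trials):
    cut = round_tree_cut({1: 0, 2: 1}, {1: 0.3, 2: 0.4}, 0, rng)
    assert len(cut) <= 1   # at most one cut edge per root-to-node path
    c1 += 1 in cut
    c2 += 2 in cut
```

The conditional probability $x_e / (1 - \sum_{e' \in P_e} x_{e'})$ exactly compensates for the probability that an ancestor edge was already cut, which is what makes the marginals come out to the LP values.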
Algorithm 1 demonstrates all necessary details of the rounding, with $Y_e$ being an indicator random variable denoting whether or not $e$ is included in the solution, and $Z_v$ an indicator random variable that is $1$ iff $v$ is disconnected from $s$ in the final outcome.

Lemma 9.

When we decide to include $e$ in the cut, we do so with a valid probability.
Proof.

Consider an edge $e = (u, v)$, with $u$ the endpoint closest to $s$, for which a random decision is made. Since $P_v = P_e \cup \{e\}$, constraints (5) and (7) applied to $v$ give $x_e + \sum_{e' \in P_e} x_{e'} \le 1$, and hence $x_e / (1 - \sum_{e' \in P_e} x_{e'}) \le 1$. ∎
Lemma 10.
For every $e \in E$ and $v \in V$, we have $\Pr[Y_e = 1] = x_e$ and $\Pr[Z_v = 1] = \sum_{e \in P_v} x_e$.
Proof.
Let us begin with an $e$ for which we never made a random decision because $\sum_{e' \in P_e} x_{e'} = 1$, and hence $\Pr[Y_e = 1] = 0$. If $e = (u, v)$ with $u$ the endpoint closest to $s$, then $P_e = P_u$ and $P_v = P_e \cup \{e\}$. Because of constraints (5) and (7) for $v$ we get $x_e + \sum_{e' \in P_e} x_{e'} \le 1$, and therefore $x_e \le 0$. Constraint (7) then yields $x_e = 0$, which indeed gives $\Pr[Y_e = 1] = 0 = x_e$.

Now let us consider an edge $e$ with $\sum_{e' \in P_e} x_{e'} < 1$. Because for each $e' \in P_e$ we have $P_{e'} \subset P_e$, we also get $\sum_{e'' \in P_{e'}} x_{e''} < 1$. The latter means that for all edges in $P_e$ a random decision can potentially take place. Furthermore, analysis of the algorithm’s actions shows that $\Pr[Y_e = 1]$ is equal to

(8) $\dfrac{x_e}{1 - \sum_{e' \in P_e} x_{e'}} \cdot \prod_{e' \in P_e} \left( 1 - \dfrac{x_{e'}}{1 - \sum_{e'' \in P_{e'}} x_{e''}} \right)$

Let $e_1, \ldots, e_m$ be the edges of $P_e$ in increasing order of $|P_{e_i}|$. Then, because $P_{e_i} = \{e_1, \ldots, e_{i-1}\}$, expression (8) can be rewritten as a telescoping product of fractions:

$\Pr[Y_e = 1] = \dfrac{x_e}{1 - \sum_{i=1}^{m} x_{e_i}} \cdot \prod_{i=1}^{m} \dfrac{1 - \sum_{j \le i} x_{e_j}}{1 - \sum_{j < i} x_{e_j}} = \dfrac{x_e}{1 - \sum_{i=1}^{m} x_{e_i}} \cdot \left( 1 - \sum_{i=1}^{m} x_{e_i} \right) = x_e$

As for a vertex $v$, we have $Z_v = \max_{e \in P_v} Y_e$, because there is a unique path from $s$ to it. Moreover, since our rounding will never put more than one edge of $P_v$ in the cut, for all $e, e' \in P_v$ with $e \ne e'$ we get $\Pr[Y_e = 1 \wedge Y_{e'} = 1] = 0$. Hence, by the inclusion–exclusion principle $\Pr[Z_v = 1] = \sum_{e \in P_v} \Pr[Y_e = 1] = \sum_{e \in P_v} x_e$, which is a valid probability by constraint (5). ∎
We will now analyze the satisfaction of the coverage constraints for the different demographics. If $W_h = \sum_{v \in V_h} Z_v$ is the number of vertices from $V_h$ that are not connected to $s$ in the solution, we see that $\mathbb{E}[W_h] = \sum_{v \in V_h} \Pr[Z_v = 1]$. Using Lemma 10 and constraint (6) gives $\mathbb{E}[W_h] \ge \alpha_h n_h$. We thus need to calculate how much $W_h$ can deviate from its expectation. For that we will need the following two lemmas.
Lemma 11.
Janson [1998] Let $X_1, \ldots, X_N$ be Bernoulli random variables, where $\Pr[X_i = 1] = p_i$ for all $i$. Let $\Gamma$ be the dependency graph on the $X_i$. For $i \ne j$, $X_i$ and $X_j$ are dependent if there exists an edge between them in $\Gamma$, and we denote that as $i \sim j$. Let also $X = \sum_{i=1}^{N} X_i$, $\mu = \mathbb{E}[X]$, $\Delta = \frac{1}{2} \sum_{i \sim j} \mathbb{E}[X_i X_j]$, and $\delta = \max_i \sum_{j \sim i} p_j$. Then for any $\epsilon \in (0, 1)$:

$\Pr[X \le (1 - \epsilon)\mu] \le e^{- \min \left( \frac{\epsilon^2 \mu^2}{8\Delta + 2\mu}, \; \frac{\epsilon \mu}{6\delta} \right)}$
Lemma 12.
For every $m \in \mathbb{N}$ and any non-decreasing sequence of nonnegative numbers $b_1 \le b_2 \le \ldots \le b_m$ we have:

$\sum_{i=1}^{m} (m - i) \, b_i \le \frac{m}{2} \sum_{i=1}^{m} b_i$
Proof.
We prove the statement via induction on $m$. For $m = 1$ it is trivial. Suppose that the lemma holds up to some $m \ge 1$. We then prove it for $m + 1$:

$\sum_{i=1}^{m+1} (m + 1 - i) \, b_i = \sum_{i=1}^{m} (m - i) \, b_i + \sum_{i=1}^{m} b_i \le \frac{m + 2}{2} \sum_{i=1}^{m} b_i \le \frac{m + 1}{2} \sum_{i=1}^{m+1} b_i$

The first inequality uses the inductive hypothesis, while the last one uses the fact that $b_i \le b_{m+1}$ for every $i \le m$. ∎
Lemma 13.
For all $h \in [\gamma]$ and any $\epsilon \in (0, 1)$, we have $\Pr[W_h \le (1 - \epsilon) \alpha_h n_h] \le e^{-\Omega(\epsilon^2 \alpha_h)}$.
Proof.
Due to Lemma 10, the random variables $Z_v$ for $v \in V_h$ are Bernoulli with $\Pr[Z_v = 1] = \sum_{e \in P_v} x_e$. Because of the tree structure they are also to some extent dependent. Our goal here is to apply Lemma 11 for $W_h = \sum_{v \in V_h} Z_v$, and towards that end we need to upper-bound the dependency factors. Since we do not know exactly the underlying dependency graph $\Gamma$, in what follows we assume that all pairs are dependent. We thus begin by upper-bounding the parameter $\Delta$ of Lemma 11, using $\mathbb{E}[Z_u Z_v] \le \min(\Pr[Z_u = 1], \Pr[Z_v = 1])$.

Now let $b_1 \le b_2 \le \ldots \le b_{n_h}$ be the values $\Pr[Z_v = 1]$ for all $v \in V_h$ in non-decreasing order. Then we have:

$\Delta \le \sum_{i < j} \min(b_i, b_j) = \sum_{i=1}^{n_h} (n_h - i) \, b_i \le \frac{n_h}{2} \sum_{i=1}^{n_h} b_i = \frac{n_h}{2} \, \mathbb{E}[W_h]$

To get the last inequality we used Lemma 12. Therefore, we get $\Delta \le \frac{n_h}{2} \mathbb{E}[W_h]$. Moreover, a straightforward upper bound for each $\Pr[Z_v = 1]$ is $1$. Thus, $\delta \le \sum_{v \in V_h} \Pr[Z_v = 1] = \mathbb{E}[W_h]$. Finally, we also need bounds for the following two quantities, where $\mu = \mathbb{E}[W_h]$:

$\frac{\epsilon^2 \mu^2}{8\Delta + 2\mu} \ge \frac{\epsilon^2 \mu^2}{4 n_h \mu + 2\mu} = \Omega\left( \frac{\epsilon^2 \mu}{n_h} \right), \qquad \frac{\epsilon \mu}{6\delta} \ge \frac{\epsilon}{6}$

Since $\mathbb{E}[W_h] \ge \alpha_h n_h$ for any $h$, Lemma 11 immediately gives the desired bound. ∎
To conclude, suppose that for some $t \in \mathbb{N}$, we repeat Algorithm 1 independently $t$ times, and in each run $j$ of it (with $j \in [t]$) we compute a set of edges $F_j$ that are chosen to be removed. Our final solution is set to be $F = \bigcup_{j=1}^{t} F_j$. Then we have the following.
Theorem 14.
For DemFairCut on trees and any constant $\epsilon \in (0, 1)$, we give an $(O(\log \gamma), 1 - \epsilon)$-bicriteria algorithm, which runs in expected polynomial time.
Proof.
Focus on a specific demographic $V_h$, and let $W_h^j$ be the random variable denoting the number of nodes of $V_h$ separated from $s$ in run $j$. By Lemma 13 and the independent nature of the runs:

$\Pr\left[ \bigcap_{j=1}^{t} \left\{ W_h^j \le (1 - \epsilon) \alpha_h n_h \right\} \right] \le e^{-\Omega(t \epsilon^2 \alpha_h)}$

Thus, because the number of nodes of $V_h$ separated from $s$ by $F$ is at least $\max_j W_h^j$, for a suitable $t = O(\log \gamma)$ we have

$\Pr[\,|V_F \cap V_h| \le (1 - \epsilon) \alpha_h n_h\,] \le \frac{1}{3\gamma}$

A union bound over all demographics would finally give

$\Pr[\,\exists h \in [\gamma]: |V_F \cap V_h| \le (1 - \epsilon) \alpha_h n_h\,] \le \frac{1}{3}$

By Lemma 10, in each run an edge $e$ gets removed with probability $x_e$. Hence, with a union bound over all runs, the probability that $e$ gets removed is at most $t x_e$. Therefore, the total expected cost of our algorithm is at most $t \sum_{e \in E} w_e x_e$, and since LP (4)–(7) is a valid relaxation of the problem and $t = O(\log \gamma)$, we immediately get the desired approximation ratio in expectation. By Markov’s inequality we can further prove that with probability at most $1/c$ we get a cost above $c \cdot t \sum_{e \in E} w_e x_e$, for some constant $c$.

Thus, with constant probability our algorithm satisfies both the approximation ratio of $O(\log \gamma)$ and the approximate satisfaction of the demographic constraints. This means that repeating the whole process an expected logarithmic number of times guarantees we will hit both targets. ∎
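The repetition scheme used in this proof is a generic amplification step, and can be sketched on its own. The snippet below is an illustration with hypothetical parameters: if one run of the rounding misses some demographic's target with probability at most a constant $q < 1$, taking the union of $t = \lceil \log(\gamma / \text{fail\_prob}) / \log(1/q) \rceil$ independent runs drives the union-bound failure probability over all $\gamma$ demographics down to `fail_prob`, at a factor-$t$ blowup in expected cost.

```python
import math

def boosted_cut(run_once, gamma, fail_prob=1/3, q=0.5):
    """Union of t independent runs of a randomized cut procedure.
    run_once() returns the edge set produced by one run; q is an assumed
    upper bound on the probability that a single run misses one
    demographic's coverage target.  The choice of t makes a union bound
    over all gamma demographics fail with probability <= fail_prob."""
    t = max(1, math.ceil(math.log(gamma / fail_prob) / math.log(1.0 / q)))
    F = set()
    for _ in range(t):
        F |= run_once()
    return F, t
```

Because a vertex is protected as soon as any single run protects it, the per-demographic failure events across runs are independent and multiply, which is exactly the calculation in the proof above.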
Theorem 15.
For any given constant $\epsilon \in (0, 1)$, we provide an $(O(\log n \log \gamma), 1 - \epsilon)$-bicriteria algorithm for DemFairCut, which also runs in expected polynomial time.
3.3 A sampling scheme for RandDemFairCut
Here we are going to show how one can exploit our algorithms for DemFairCut in order to provide results in the $G(p)$ model, with our ultimate goal being solving RandDemFairCut. In addition, we assume that we have the decision version of a $(\rho, \beta)$-bicriteria algorithm for DemFairCut. Specifically, let $\mathcal{A}$ be an algorithm that, given a target value $T$ and an instance $I$ of DemFairCut with optimal value $OPT(I)$, either returns a set $F$ with $w(F) \le \rho T$ and $|V_F \cap V_h| \ge \beta \alpha_h n_h$ for all $h \in [\gamma]$, or returns “INFEASIBLE”. In the latter case there is indeed no solution to $I$ of cost at most $T$, and hence if we get this answer we know with certainty that $OPT(I) > T$. Going from the optimization version (as presented in Sections 3.1 and 3.2) to the decision version as described above is straightforward.
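The optimization-to-decision conversion mentioned above can be sketched in a few lines. The interface names here are hypothetical: `approx_opt(instance)` stands for any algorithm returning a pair `(cost, F)` with `cost <= rho * OPT(instance)`.

```python
def make_decision_algo(approx_opt, rho):
    """Wrap a rho-approximate optimizer into a decision oracle: given a
    target T, either return a cut of cost at most rho * T, or certify
    that OPT > T.  The certificate is sound because
    cost <= rho * OPT together with cost > rho * T forces OPT > T."""
    def decide(instance, T):
        cost, F = approx_opt(instance)
        if cost <= rho * T:
            return F
        return "INFEASIBLE"
    return decide
```

Note that the wrapper never needs to know $OPT$ itself; the approximation guarantee alone makes the "INFEASIBLE" answer trustworthy.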
Now we proceed with the description of our framework, which crucially depends on a sampling scheme. Therefore, let $(G, w, s, \{V_h\}, \{\alpha_h\})$ be the instance of RandDemFairCut we are trying to solve, and without loss of generality we assume that $w_e \ge 1$ for all $e \in E$ (this can easily be achieved by a rescaling step, where we set the weights to be $w_e / \min_{e' \in E} w_{e'}$). At first, we modify the edge-weight function in the following manner. For some input parameter, we set