Constant Factor Approximation Algorithm for Weighted Flow Time on a Single Machine in Pseudo-polynomial time

02/21/2018
by Amit Kumar, et al.

In the weighted flow-time problem on a single machine, we are given a set of n jobs, where each job has a processing requirement p_j, release date r_j and weight w_j. The goal is to find a preemptive schedule which minimizes the sum of weighted flow-time of jobs, where the flow-time of a job is the difference between its completion time and its release date. We give the first pseudo-polynomial time constant approximation algorithm for this problem. The running time of our algorithm is polynomial in n, the number of jobs, and P, which is the ratio of the largest to the smallest processing requirement of a job. Our algorithm relies on a novel reduction of this problem to a generalization of the multi-cut problem on trees, which we call the Demand Multi-Cut problem. Even though we do not give a constant factor approximation algorithm for the Demand Multi-Cut problem on trees, we show that the specific instances of Demand Multi-Cut obtained by reduction from weighted flow-time problem instances have more structure, and we are able to employ techniques based on dynamic programming. Our dynamic programming algorithm relies on showing that there are near optimal solutions which have nice smoothness properties, and we exploit these properties to reduce the size of the DP table.


1 Introduction

Scheduling jobs to minimize the average waiting time is one of the most fundamental problems in scheduling theory, with numerous applications. We consider the setting where jobs arrive over time (i.e., have release dates), and need to be processed such that the average flow-time is minimized. The flow-time of a job $j$ is defined as the difference between its completion time $C_j$ and its release date $r_j$. It is well known that for the case of a single machine, the SRPT policy (Shortest Remaining Processing Time) gives an optimal algorithm for this objective.
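For intuition, here is a minimal slot-by-slot SRPT simulator in Python. This is our own illustrative sketch, not code from the paper; it assumes integer release dates and processing times, matching the unit-slot model used in Section 2.

```python
import heapq

def srpt_total_flow_time(jobs):
    """jobs: list of (r_j, p_j) with integer release dates and processing
    times. Simulates SRPT one unit slot at a time and returns sum_j (C_j - r_j)."""
    jobs = sorted(jobs)                       # by release date
    heap, total, i, t = [], 0, 0, 0
    while i < len(jobs) or heap:
        if not heap and t < jobs[i][0]:
            t = jobs[i][0]                    # machine idles until next release
        while i < len(jobs) and jobs[i][0] <= t:
            r, p = jobs[i]
            heapq.heappush(heap, (p, r))      # keyed by remaining processing time
            i += 1
        rem, r = heapq.heappop(heap)          # shortest remaining processing time
        t += 1                                # run it for one slot
        if rem == 1:
            total += t - r                    # job completes; add its flow-time
        else:
            heapq.heappush(heap, (rem - 1, r))
    return total

print(srpt_total_flow_time([(0, 3), (1, 1), (2, 2)]))  # 9
```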

In the weighted version of this problem, jobs have weights and we would like to minimize the weighted sum of flow-times of jobs. However, the problem of minimizing weighted flow-time (WtdFlowTime) turns out to be NP-hard, and it has been widely conjectured that there should be a constant factor approximation algorithm (or even a PTAS) for it. In this paper, we make substantial progress on this question by giving the first constant factor approximation algorithm for the problem that runs in pseudo-polynomial time. More formally, we prove the following result.

Theorem 1.1.

There is a constant factor approximation algorithm for WtdFlowTime where the running time of the algorithm is polynomial in $n$ and $P$. Here, $n$ denotes the number of jobs in the instance, and $P$ denotes the ratio of the largest to the smallest processing time of a job in the instance.

We obtain this result by reducing WtdFlowTime to a generalization of the multi-cut problem on trees, which we call Demand MultiCut. The Demand MultiCut problem is a natural generalization of the multi-cut problem in which edges have sizes and costs, and the input paths (between terminal pairs) have demands. We would like to select a minimum cost subset of edges such that for every path in the input, the total size of the selected edges on the path is at least the demand of the path. When all demands and sizes are 1, this is the usual multi-cut problem. The natural integer program for this problem has the property that all non-zero entries in any column of the constraint matrix are the same. Such integer programs, called column restricted covering integer programs, were studied by Chakrabarty et al. [7]. They showed that one can get a constant factor approximation algorithm for Demand MultiCut provided one can prove that the integrality gap of the natural LP relaxations for the following two special cases is constant – (i) the version where the constraint matrix has 0-1 entries only, and (ii) the priority version, where paths and edges in the tree have priorities (instead of demands and sizes respectively), and we want to pick a minimum cost subset of edges such that for each path, we pick at least one edge on it whose priority is at least the priority of this path. Although the first problem turns out to be easy, we do not know how to round the LP relaxation of the priority version. This is similar to the situation faced by Bansal and Pruhs [4], who need to round the priority version of a geometric set cover problem. They appeal to the notion of shallow cell complexity [8] to get an $O(\log \log nP)$-approximation for this problem. It turns out that the shallow cell complexity of the priority version of Demand MultiCut is also unbounded (it depends on the number of distinct priorities) [8], and so it is unlikely that this approach will yield a constant factor approximation.

However, the specific instances of Demand MultiCut produced by our reduction have more structure: each node has at most 2 children, each path goes from an ancestor to a descendant, and the tree has $O(\log(nP))$ depth if we shortcut all degree 2 vertices. We show that one can effectively use dynamic programming techniques for such instances. We show that there is a near optimal solution which has nice "smoothness" properties, so that the dynamic programming table can get away with storing only a small amount of information.

1.1 Related Work

There has been a lot of work on the WtdFlowTime problem on a single machine, though a polynomial time constant factor approximation algorithm has remained elusive. Bansal and Dhamdhere [1] gave an $O(\log W)$-competitive on-line algorithm for this problem, where $W$ is the ratio of the maximum to the minimum weight of a job. They also gave a semi-online (where the algorithm needs to know the parameters $P$ and $W$ in advance) $O(\log nP)$-competitive algorithm for WtdFlowTime, where $P$ is the ratio of the largest to the smallest processing time of a job. Chekuri et al. [10] gave a semi-online $O(\log^2 P)$-competitive algorithm.

Recently, Bansal and Pruhs [4] made significant progress on this problem by giving an $O(\log \log nP)$-approximation algorithm. In fact, their result applies to a more general setting where the objective function is $\sum_j g_j(C_j)$, where $g_j$ is any monotone function of the completion time $C_j$ of job $j$. Their work, along with a constant factor approximation for the generalized caching problem [5], implies a constant factor approximation algorithm for this setting when all release dates are 0. Chekuri and Khanna [9] gave a quasi-PTAS for this problem, with quasi-polynomial running time. In the special case of the stretch metric, where $w_j = 1/p_j$, a PTAS is known [6, 9]. The problem of minimizing the (unweighted) $\ell_p$ norm of flow-times was studied by Im and Moseley [12], who gave a constant factor approximation in polynomial time.

In the speed augmentation model introduced by Kalyanasundaram and Pruhs [13], the algorithm is given $(1+\varepsilon)$ times the speed of the optimal algorithm. Bansal and Pruhs [3] showed that Highest Density First (HDF) is $O(1)$-competitive in this model for weighted $\ell_p$ norms of flow-time, for all values of $p$.

The multi-cut problem on trees is known to be NP-hard, and a 2-approximation algorithm was given by Garg et al. [11]. As mentioned earlier, Chakrabarty et al. [7] gave a systematic study of column restricted covering integer programs (see also [2] for follow-up results). The notion of shallow cell complexity for 0-1 covering integer programs was formalized by Chan et al. [8], who relied on and generalized the techniques of Varadarajan [14].

2 Preliminaries

An instance of the WtdFlowTime problem is specified by a set $J$ of $n$ jobs. Each job $j$ has a processing requirement $p_j$, weight $w_j$ and release date $r_j$. We assume wlog that all of these quantities are integers, and let $P$ denote the ratio of the largest to the smallest processing requirement of a job. We divide the time line into unit length slots – we shall often refer to the time slot $[t-1, t]$ as slot $t$. A feasible schedule needs to process a job $j$ for $p_j$ units after its release date $r_j$. Note that we allow a job to be preempted. The weighted flow-time of a job $j$ is defined as $w_j (C_j - r_j)$, where $C_j$ is the slot in which the job finishes processing. The objective is to find a schedule which minimizes the sum over all jobs of their weighted flow-time.

Note that any schedule would occupy exactly $\sum_j p_j$ slots. We say that a schedule is busy if it does not leave any slot vacant even though there are jobs waiting to be finished. We can assume that the optimal schedule is a busy schedule (otherwise, we can always shift some processing back and improve the objective function). We also assume that any busy schedule fills the slots in $[1, \sum_j p_j]$ (otherwise, we can break it into independent instances satisfying this property).

We shall also consider a generalization of the multi-cut problem on trees, which we call the Demand MultiCut problem. Here, edges have cost and size, and demands are specified by ancestor-descendant paths. Each such path has a demand, and the goal is to select a minimum cost subset of edges such that for each path, the total size of selected edges in the path is at least the demand of this path.

In Section 2.1, we describe a well-known integer program for WtdFlowTime. This IP has a variable $x_{j,t}$ for every job $j$ and time $t$, which is supposed to be 1 if $j$ completes processing after time $t$. The constraints in the IP consist of several covering constraints. However, there is an additional complicating factor: $x_{j,t} \geq x_{j,t+1}$ must hold for all $t$. To get around this problem, we propose a different IP in Section 3. In this IP, we define variables of the form $y_{j,I}$, where the $I$ are exponentially increasing intervals starting from the release date of $j$. This variable indicates whether $j$ is alive during the entire duration of $I$. The idea is that if the flow-time of $j$ lies between $2^k$ and $2^{k+1}$, we can count a flow-time of $2^{k+1}$ for it, and say that $j$ is alive during the entire period $[r_j, r_j + 2^{k+1}]$. Conversely, if the variable is 1 for an interval of the form $[r_j + 2^k, r_j + 2^{k+1}]$, we can assume (at a factor 2 loss) that $j$ is also alive during $[r_j, r_j + 2^k]$. This allows us to decouple the variables $y_{j,I}$ for different $I$. By an additional trick, we can ensure that these intervals are laminar across different jobs. From here, the reduction to the Demand MultiCut problem is immediate (see Section 4 for details). In Section 5, we show that the specific instances of Demand MultiCut obtained by such reductions have additional properties. We use the property that the tree obtained by shortcutting all degree two vertices is binary and has $O(\log(nP))$ depth. We shall use the term segment to denote a maximal degree 2 (ancestor-descendant) path in the tree. So the property can be restated as – any root to leaf path has at most $O(\log(nP))$ segments. We give a dynamic programming algorithm for such instances. In the DP table for a vertex in the tree, we look at a sub-instance defined by the sub-tree below this vertex. However, we also need to maintain the "state" of the edges above it, where the state means the ancestor edges selected by the algorithm. This would require too much book-keeping. We use two ideas to reduce the size of this state – (i) we first show that the optimum can be assumed to have certain smoothness properties, which cuts down on the number of possible configurations; the smoothness property essentially says that the cost spent by the optimum on a segment does not vary by more than a constant factor as we go to neighbouring segments; (ii) if we can spend twice the amount spent by the algorithm on a segment $S$, and select low density edges, we can ignore the edges in segments lying above $S$ in the tree.

2.1 An integer program

We describe an integer program for the WtdFlowTime problem. This is well known (see e.g. [4]), but we give details for the sake of completeness. We will have binary variables $x_{j,t}$ for every job $j$ and time $t$, where $t \geq r_j$. This variable is meant to be 1 iff $j$ is alive at time $t$, i.e., its completion time is at least $t$. Clearly, the objective function is $\sum_j w_j \sum_{t \geq r_j} x_{j,t}$. We now specify the constraints of the integer program. Consider a time interval $I = [t_1, t_2]$, where $t_1 \leq t_2$, and $t_1$ and $t_2$ are integers. Let $|I|$ denote the length of this time interval, i.e., $|I| = t_2 - t_1$. Let $J(I)$ denote the set of jobs released during $I$, i.e., $J(I) = \{j : r_j \in I\}$, and $p(J(I))$ denote the total processing time of jobs in $J(I)$. Clearly, the total volume occupied by jobs in $J(I)$ beyond $t_2$ must be at least $p(J(I)) - |I|$. Thus, we get the following integer program: (IP1)

$$\min \; \sum_j w_j \sum_{t \geq r_j} x_{j,t} \qquad (1)$$
$$\sum_{j \in J(I)} p_j \, x_{j,t_2} \;\geq\; p(J(I)) - |I| \qquad \text{for all intervals } I = [t_1, t_2] \qquad (2)$$
$$x_{j,t} \;\geq\; x_{j,t+1} \qquad \text{for all } j \text{ and } t \geq r_j, \qquad x_{j,t} \in \{0,1\} \qquad (3)$$

It is easy to see that this is a relaxation – given any schedule, the corresponding variables $x_{j,t}$ satisfy the constraints mentioned above, and the objective function captures the total weighted flow-time of this schedule. The converse is also true – given any solution to the above integer program, there is a corresponding schedule whose weighted flow-time is at most the cost of the solution.
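To make the constraints concrete, the following Python sketch (ours; it assumes the reconstruction of (IP1) given above) checks a candidate 0-1 solution against constraints (2) and (3) by brute force over all intervals, and evaluates the objective (1).

```python
def ip1_feasible(jobs, x, T):
    """jobs: dict j -> (r_j, p_j, w_j); x: dict (j, t) -> 0/1, default 0.
    Brute-force check of constraints (2) and (3) of (IP1) over horizon [0, T]."""
    # constraint (3): x_{j,t} >= x_{j,t+1} for all j and t >= r_j
    for j, (r, p, w) in jobs.items():
        for t in range(r, T):
            if x.get((j, t), 0) < x.get((j, t + 1), 0):
                return False
    # constraint (2): for every interval I = [t1, t2],
    # sum_{j in J(I)} p_j * x_{j, t2} >= p(J(I)) - |I|
    for t1 in range(T + 1):
        for t2 in range(t1, T + 1):
            JI = [j for j, (r, p, w) in jobs.items() if t1 <= r <= t2]
            lhs = sum(jobs[j][1] * x.get((j, t2), 0) for j in JI)
            if lhs < sum(jobs[j][1] for j in JI) - (t2 - t1):
                return False
    return True

def ip1_cost(jobs, x, T):
    """Objective (1): sum_j w_j * sum_{t >= r_j} x_{j,t}."""
    return sum(w * sum(x.get((j, t), 0) for t in range(r, T + 1))
               for j, (r, p, w) in jobs.items())

jobs = {1: (0, 2, 1), 2: (1, 1, 3)}                # j -> (r_j, p_j, w_j)
x = {(1, 0): 1, (1, 1): 1, (1, 2): 1, (2, 1): 1}   # job 1 alive on [0,2], job 2 at 1
print(ip1_feasible(jobs, x, 4), ip1_cost(jobs, x, 4))  # True 6
```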

Theorem 2.1.

Suppose $x$ is a feasible solution to (IP1). Then there is a schedule for which the total weighted flow-time is at most the cost of the solution $x$.

Proof.

We show how to build such a schedule. The integral solution gives us deadlines for each job. For a job $j$, define $d_j$ as one plus the last time $t$ such that $x_{j,t} = 1$. Note that $d_j \geq r_j + p_j$ for every $j$ (apply Claim 2.2 below to the interval $[r_j, r_j + p_j - 1]$ and $J' = \{j\}$). We would like to find a schedule which completes each job $j$ by time $d_j$: if such a schedule exists, then the weighted flow-time of a job $j$ will be at most $w_j (d_j - r_j)$, which is what we want.

We begin by observing a simple property of a feasible solution to the integer program.

Claim 2.2.

Consider an interval $I = [t_1, t_2]$. Let $J'$ be a subset of $J(I)$ such that $p(J') > |I|$. If $x$ is a feasible solution to (IP1), then there must exist a job $j \in J'$ such that $x_{j,t_2} = 1$.

Proof.

Suppose not. Then the LHS of constraint (2) for $I$ would be at most $p(J(I)) - p(J')$, whereas the RHS would be $p(J(I)) - |I| > p(J(I)) - p(J')$, a contradiction. ∎

It is natural to use the Earliest Deadline First (EDF) rule to find the required schedule. We build the schedule from time 0 onwards. At any time $t$, we say that a job $j$ is alive if $r_j \leq t$ and $j$ has not been completely processed by time $t$. Starting from time $t$, we process the alive job with the earliest deadline during the slot $[t, t+1]$. We need to show that every job completes before its deadline. Suppose not. Let $j^\star$ be the job with the earliest deadline which is not able to finish by $d_{j^\star}$. Let $t'$ be the last time before $d_{j^\star}$ such that the algorithm processes a job whose deadline is more than $d_{j^\star}$ during the slot ending at $t'$, or is idle during this slot (if there is no such time slot, the algorithm must have been busy from time 0 onwards, and we set $t'$ to 0). Thus the algorithm processes only jobs whose deadline is at most $d_{j^\star}$ during $(t', d_{j^\star}]$ – call these jobs $J'$. We claim that the jobs in $J'$ were released after $t'$ – indeed, if such a job were released at or before time $t'$, it would have been alive at time $t'$ (since it gets processed after time $t'$); further its deadline is at most $d_{j^\star}$, and so the algorithm should not be processing a job whose deadline is more than $d_{j^\star}$ during the slot ending at $t'$ (or be idle). But now, consider the interval $I = [t'+1, d_{j^\star}]$. Observe that $p(J') > |I|$ – indeed, $j^\star \in J'$ and it is not completely processed during $I$, but the algorithm processes jobs from $J'$ in every slot of $(t', d_{j^\star}]$. Claim 2.2 now implies that there must be a job $j \in J'$ for which $x_{j, d_{j^\star}} = 1$ – but then the deadline of $j$ is more than $d_{j^\star}$, a contradiction. ∎
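The EDF construction used in this proof is easy to state in code. Below is an illustrative Python sketch (names and representation are ours): given the deadlines $d_j$ extracted from an (IP1) solution, it processes in every slot the alive job with the earliest deadline and reports completion times, or None if a deadline is missed (which Theorem 2.1 rules out for feasible solutions).

```python
import heapq

def edf_schedule(jobs, deadline):
    """jobs: dict j -> (r_j, p_j); deadline: dict j -> d_j.
    In each unit slot, process the alive job with the earliest deadline.
    Returns j -> completion slot, or None if some job misses its deadline."""
    order = sorted(jobs, key=lambda j: jobs[j][0])     # jobs by release date
    heap, done, i, t = [], {}, 0, 0
    while i < len(order) or heap:
        if not heap and t < jobs[order[i]][0]:
            t = jobs[order[i]][0]                      # jump to the next release
        while i < len(order) and jobs[order[i]][0] <= t:
            j = order[i]
            heapq.heappush(heap, (deadline[j], j, jobs[j][1]))
            i += 1
        d, j, rem = heapq.heappop(heap)                # earliest-deadline alive job
        t += 1                                         # process it during slot t
        if rem > 1:
            heapq.heappush(heap, (d, j, rem - 1))
        elif t <= d:
            done[j] = t
        else:
            return None                                # deadline miss
    return done

print(edf_schedule({1: (0, 2), 2: (1, 1)}, {1: 4, 2: 2}))  # {2: 2, 1: 3}
```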

3 A Different Integer Program

We now write a weaker integer program, but one with more structure in it. We first assume that $\sum_j p_j$ is a power of 2 – if not, we can pad the instance with a job of zero weight (this will increase the ratio $P$ by at most a factor of $O(n)$ only). Let $T$ be $\sum_j p_j$. We now divide the time line into nested dyadic segments. A dyadic segment is an interval of the form $[i \cdot 2^k, (i+1) \cdot 2^k]$ for some non-negative integers $i$ and $k$ (we shall use segments to denote such intervals to avoid any confusion with the intervals used in the integer program). For $k \geq 0$, we define $A_k$ as the set of dyadic segments of length $2^k$ starting from 0, i.e., $A_k = \{[i \cdot 2^k, (i+1) \cdot 2^k] : i \geq 0\}$. Clearly, any segment of $A_k$ is contained inside a unique segment of $A_{k+1}$. Now, for every job $j$ we shall define a sequence $S_j$ of dyadic segments. The segments in $S_j$ partition the interval $[r_j, T]$. The construction of $S_j$ is described in Figure 1 (also see the example in Figure 2). It is easy to show by induction on $k$ that the parameter $t$ at the beginning of iteration $k$ in Step 2 of the algorithm is a multiple of $2^k$. Therefore, the segments added during the iteration for $k$ belong to $A_k$. Although we do not specify for how long we run the for loop in Step 2, we stop when $t$ reaches $T$ (this will always happen because $t$ only takes values from the set of end-points of segments in the sets $A_k$, and $T$, being a power of 2, is such an end-point at every relevant level). Therefore the set of segments in $S_j$ are disjoint and cover $[r_j, T]$.

Algorithm FormSegments($j$):

1. Initialize $t := r_j$, $S_j := \emptyset$.

2. For $k = 0, 1, 2, \ldots$

(i) If $t$ is a multiple of $2^{k+1}$,

add the segments $[t, t + 2^k]$ and $[t + 2^k, t + 2^{k+1}]$ (from the set $A_k$) to $S_j$,

update $t := t + 2^{k+1}$.

(ii) Else add the segment $[t, t + 2^k]$ (from the set $A_k$) to $S_j$,

update $t := t + 2^k$.

Figure 1: Forming $S_j$.
Figure 2: The dyadic segments and the corresponding sets $S_j$ for two jobs.
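Figure 1 translates directly into code. Below is a Python sketch of our reconstruction (ours, for illustration; a segment $[a, b]$ is represented as the pair (a, b), and $T$ is assumed to be a power of 2 as above).

```python
def form_segments(r_j, T):
    """Reconstruction of FormSegments (Figure 1): partition [r_j, T] into
    dyadic segments, adding at most two segments from each level A_k."""
    t, k, S = r_j, 0, []
    while t < T:
        if t % (2 ** (k + 1)) == 0:
            S.append((t, t + 2 ** k))          # two segments from A_k
            S.append((t + 2 ** k, t + 2 ** (k + 1)))
            t += 2 ** (k + 1)
        else:
            S.append((t, t + 2 ** k))          # one segment from A_k
            t += 2 ** k
        k += 1
    return S

print(form_segments(3, 16))  # [(3, 4), (4, 6), (6, 8), (8, 12), (12, 16)]
```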

For a job $j$ and segment $I \in S_j$, we shall refer to the tuple $(j, I)$ as a job-segment. For a time $t$, we say that $t \in (j, I)$ (or $(j, I)$ contains $t$) if $t \in I$. We now show a crucial nesting property of these segments.

Lemma 3.1.

Suppose $(j_1, I_1)$ and $(j_2, I_2)$ are two job-segments such that there is a time $t$ for which $t \in I_1$ and $t \in I_2$. Suppose $r_{j_1} \leq r_{j_2}$, and $I_1 \in A_{k_1}$, $I_2 \in A_{k_2}$. Then $k_1 \geq k_2$, and hence $I_2 \subseteq I_1$.

Proof.

We prove this by induction on $t$. When $t = r_{j_2}$, this is trivially true because $k_2$ would be 0. Suppose it is true for some $t$; we prove it for $t+1$. Let $(j_1, I'_1)$ and $(j_2, I'_2)$ be the job segments containing $t$, with $I'_1 \in A_{k'_1}$ and $I'_2 \in A_{k'_2}$. By the induction hypothesis, we know that $k'_1 \geq k'_2$. Let $(j_1, I_1)$ be the job-segment of $j_1$ containing $t+1$, with $I_1 \in A_{k_1}$ ($I_1$ could be the same as $I'_1$), and define $(j_2, I_2)$ and $k_2$ similarly. We know that $k_1 \in \{k'_1, k'_1 + 1\}$ and $k_2 \in \{k'_2, k'_2 + 1\}$ (consecutive segments of $S_j$ come from the same or consecutive sets $A_k$). Therefore, the only interesting case is $k'_1 = k'_2 = k$ (say), with $k_1 = k$ and $k_2 = k + 1$. Since $k'_1 = k'_2$, the two segments $I'_1$ and $I'_2$ must be the same (because all segments in $A_k$ are mutually disjoint, and both contain $t$). Since $k_2 = k+1$, it must be that $I_2$ starts at $t+1$; as $I'_1 = I'_2$, the segment $I_1$ starts at $t+1$ as well. The algorithm for constructing $S_{j_2}$ adds a segment from $A_{k+1}$ right after adding $I'_2$ to $S_{j_2}$. Therefore $t+1$ must be a multiple of $2^{k+1}$. What does the algorithm for constructing $S_{j_1}$ do after adding $I'_1$ to $S_{j_1}$? If it adds a segment from $A_{k+1}$, then we are done again. Suppose it adds a segment from $A_k$. The right end-point of this segment would be $t + 1 + 2^k$. After adding this segment, the algorithm would add a segment from $A_{k+1}$ (as it cannot add more than 2 segments from $A_k$ to $S_{j_1}$). But this can only happen if $t + 1 + 2^k$ is a multiple of $2^{k+1}$ – this is not true, because $t+1$ is a multiple of $2^{k+1}$. Thus we get a contradiction, and so the next segment (after $I'_1$) in $S_{j_1}$ must come from $A_{k+1}$ as well. ∎

We now write a new IP. The idea is that if a job $j$ is alive at some time $t$, then we will keep it alive during the entire duration of the segment in $S_j$ containing $t$. Since the lengths of the segments in $S_j$ increase exponentially (except possibly for two consecutive segments of the same length), this will not increase the weighted flow-time by more than a constant factor. For each job segment $(j, I)$ we have a binary variable $y_{j,I}$, which is meant to be 1 iff the job $j$ is alive during the entire duration of $I$. For each job segment $(j, I)$, define its weight $w_{j,I}$ as $w_j \cdot |I|$ – this is the contribution towards the weighted flow-time of $j$ if $j$ remains alive during the entire segment $I$. We get the following integer program (IP2):

$$\min \; \sum_{(j, I)} w_{j,I} \, y_{j,I} \qquad (4)$$
$$\sum_{j \in J(I')} p_j \, y_{j, \mathrm{seg}_j(t_2)} \;\geq\; p(J(I')) - |I'| \qquad \text{for all intervals } I' = [t_1, t_2] \qquad (5)$$

where $\mathrm{seg}_j(t_2)$ denotes the unique segment in $S_j$ containing $t_2$, and $y_{j,I} \in \{0,1\}$ for all job segments $(j, I)$.

Observe that for any interval $I'$, the constraint (5) for $I'$ has precisely one job segment for every job which gets released in $I'$. Another interesting feature of this IP is that we do not have constraints corresponding to (3), and so it is possible that $y_{j,I_2} = 1$ and $y_{j,I_1} = 0$ for two job segments $(j, I_1)$ and $(j, I_2)$ even though $I_1$ appears before $I_2$ in $S_j$. We now relate the two integer programs.

Lemma 3.2.

Given a solution $x$ for (IP1), we can construct a solution $y$ for (IP2) of cost at most 8 times the cost of $x$. Similarly, given a solution $y$ for (IP2), we can construct a solution $x$ for (IP1) of cost at most 4 times the cost of $y$.

Proof.

Suppose we are given a solution $x$ for (IP1). For every job $j$, let $t_j$ be the highest $t$ for which $x_{j,t} = 1$. Let the segments in $S_j$ (in the order they were added) be $I_1, I_2, \ldots$. Let $I_s$ be the segment in $S_j$ which contains $t_j$. Then we set $y_{j,I_l}$ to 1 for all $l \leq s$, and to 0 for all $l > s$. This defines the solution $y$. First we observe that $y$ is feasible for (IP2). Indeed, consider an interval $I = [t_1, t_2]$. If $j \in J(I)$ and $x_{j,t_2} = 1$, then we also have $y_{j,I'} = 1$ for the job segment $(j, I')$ containing $t_2$. Therefore, the LHS of constraint (5) for $I$ is at least the LHS of constraint (2) for $I$. Also, observe that

$$\sum_{l \leq s} w_{j, I_l} \;=\; w_j \sum_{l \leq s} |I_l| \;\leq\; 4 \, w_j \, |I_s|,$$

where the inequality follows from the fact that there are at most two segments from any particular set $A_k$ in $S_j$, and so the length of every alternate segment in $S_j$ increases by a factor of at least 2. Finally, observe that $|I_s| \leq 2 (t_j - r_j + 1)$. Indeed, the length of $I_{s-1}$ is at least half of that of $I_s$, and $I_1, \ldots, I_{s-1}$ lie between $r_j$ and $t_j$, so $t_j - r_j \geq |I_{s-1}| \geq |I_s|/2$ (when $s = 1$, $|I_1| = 1$ and the bound is trivial).

Thus, the total contribution to the cost of $y$ from the job segments corresponding to $j$ is at most $8 \, w_j (t_j - r_j + 1)$, which is at most 8 times the contribution of $j$ to the cost of $x$ (by constraint (3), $x_{j,t} = 1$ for all $r_j \leq t \leq t_j$). This proves the first statement in the lemma.

Now we prove the second statement. Let $y$ be a solution to (IP2). For each job $j$, let $(j, I_s)$ be the last job segment in $S_j$ for which $y_{j,I_s}$ is 1. We set $x_{j,t}$ to 1 for every $t \leq e_s$, where $e_s$ is the right end-point of $I_s$, and to 0 for $t > e_s$. It is again easy to check that $x$ is a feasible solution to (IP1). For a job $j$, the contribution of $j$ towards the cost of $x$ is at most

$$w_j (e_s - r_j) \;=\; w_j \sum_{l \leq s} |I_l| \;\leq\; 4 \, w_j \, |I_s| \;=\; 4 \, w_{j, I_s},$$

which is at most 4 times the contribution of the job segments of $j$ to the cost of $y$. ∎

The above lemma states that it is sufficient to find a solution for (IP2). Note that (IP2) is a covering problem. It is also worth noting that the constraints (5) need to be written only for those intervals $[t_1, t_2]$ for which a job segment starts or ends at $t_1$ or $t_2$. Since the number of job segments is $O(n \log T)$, it follows that (IP2) can be turned into a polynomial size integer program.
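As an illustration of this observation, the following sketch (ours; it reuses form_segments from the sketch after Figure 1) enumerates the polynomially many candidate intervals needed for constraint (5).

```python
def ip2_candidate_intervals(jobs, T):
    """jobs: list of (r_j, p_j, w_j). Enumerates the intervals [t1, t2] that
    suffice for constraint (5): t1 and t2 range over end-points of job
    segments (a sketch of the observation above, not code from the paper)."""
    pts = set()
    for r, p, w in jobs:
        for a, b in form_segments(r, T):   # segments S_j of each job
            pts.update((a, b))
    pts = sorted(pts)
    return [(t1, t2) for i, t1 in enumerate(pts) for t2 in pts[i:]]

print(len(ip2_candidate_intervals([(0, 2, 1), (3, 1, 2)], 16)))  # 45
```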

4 Reduction to Demand MultiCut on Trees

We now show that (IP2) can be viewed as a covering problem on trees. We define the covering problem, which we call Demand MultiCut, on trees. An instance of this problem consists of a tuple $(\mathcal{T}, \mathcal{P})$, where $\mathcal{T}$ is a rooted tree, and $\mathcal{P}$ consists of a set of ancestor-descendant paths. Each edge $e$ in $\mathcal{T}$ has a cost $c_e$ and size $s_e$. Each path $p \in \mathcal{P}$ has a demand $d_p$. Our goal is to pick a minimum cost subset $E'$ of edges such that for every path $p \in \mathcal{P}$, the edges of $E'$ lying on $p$ have total size at least $d_p$.
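Concretely, an instance and a feasibility check for a candidate edge set can be sketched as follows (representation ours: the tree is a child-to-parent map, and each edge is identified with its lower endpoint).

```python
def multicut_check(parent, cost, size, paths, picked):
    """parent: child vertex -> parent vertex; cost, size: vertex -> c_e, s_e
    of the edge above it; paths: list of (ancestor, descendant, demand);
    picked: set of chosen edges. Returns (total cost, all demands covered?)."""
    def edges_on(anc, desc):
        v, out = desc, []
        while v != anc:            # walk up from the descendant to the ancestor
            out.append(v)
            v = parent[v]
        return out
    covered = all(sum(size[e] for e in edges_on(a, d) if e in picked) >= dem
                  for a, d, dem in paths)
    return sum(cost[e] for e in picked), covered

# Toy instance: root -> u -> v, one path from root to v with demand 3.
parent = {"u": "root", "v": "u"}
cost = {"u": 2, "v": 1}
size = {"u": 2, "v": 1}
print(multicut_check(parent, cost, size, [("root", "v", 3)], {"u", "v"}))  # (3, True)
```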

We now reduce WtdFlowTime to Demand MultiCut on trees. Consider an instance $\mathcal{I}$ of WtdFlowTime consisting of a set of jobs $J$. We reduce it to an instance $(\mathcal{T}, \mathcal{P})$ of Demand MultiCut. In our reduction, $\mathcal{T}$ will be a forest instead of a tree, but we can then consider each tree in the forest as an independent problem instance of Demand MultiCut.

We order the jobs in $J$ according to their release dates (breaking ties arbitrarily) – let $\prec$ be this total ordering (so $j' \prec j$ implies that $r_{j'} \leq r_j$). We now define the forest $\mathcal{T}$. The vertex set of $\mathcal{T}$ consists of all job segments $(j, I)$. For such a vertex $v = (j, I)$, let $j'$ be the job immediately preceding $j$ in the total order $\prec$. Since the job segments in $S_{j'}$ partition $[r_{j'}, T]$, and $r_{j'} \leq r_j$, there is a pair $(j', I')$ such that $I'$ intersects $I$, and so contains $I$, by Lemma 3.1. We define $(j', I')$ as the parent of $(j, I)$. It is easy to see that this defines a forest structure, where the root vertices correspond to the job segments of the first job in $\prec$. Indeed, if $v_1, v_2, \ldots$ is a sequence of nodes with $v_{i+1}$ being the parent of $v_i$, then the associated jobs strictly decrease in the order $\prec$, and so no node in this sequence can be repeated.

For each tree in this forest with root vertex $v$, we add a new root vertex and make it the parent of $v$. We now define the cost and size of each edge. Let $e = (u, v)$ be an edge in the tree, where $u$ is the parent of $v$, and let $v$ correspond to the job segment $(j, I)$. Then $c_e = w_{j,I}$ and $s_e = p_j$. In other words, picking the edge $e$ corresponds to selecting the job segment $(j, I)$, i.e., setting $y_{j,I} = 1$.

Now we define the set of paths $\mathcal{P}$. For each constraint (5) in (IP2), we will add one path to $\mathcal{P}$. We first observe the following property. Fix an interval $I^\star = [t_1, t_2]$ and consider the constraint (5) corresponding to it. Let $V(I^\star)$ be the vertices in $\mathcal{T}$ corresponding to the job segments appearing in the LHS of this constraint.

Lemma 4.1.

The vertices in $V(I^\star)$ form a path in $\mathcal{T}$ from an ancestor to a descendant.

Proof.

Let $j_1 \prec j_2 \prec \cdots \prec j_k$ be the jobs which are released in $I^\star$, arranged according to $\prec$. Note that these form a consecutive subsequence of the sequence obtained by arranging all jobs according to $\prec$. Each of these jobs has exactly one job segment appearing on the LHS of this constraint (because for any such job $j$, the segments in $S_j$ partition $[r_j, T]$, and $t_2 \in [r_j, T]$). All these job segments contain $t_2$, and so these segments pairwise intersect. Now, by the construction of $\mathcal{T}$, it follows that the parent of the vertex corresponding to $j_{i+1}$'s segment in the tree is the vertex corresponding to $j_i$'s segment. This proves the claim. ∎

Let the vertices in $V(I^\star)$, arranged from ancestor to descendant, be $v_1, \ldots, v_k$. Let $v_0$ be the parent of $v_1$ (this is the reason why we added an extra root to each tree – just in case $v_1$ corresponds to the first job in $\prec$, it will still have a parent). We add the path from $v_0$ to $v_k$ to $\mathcal{P}$ – Lemma 4.1 guarantees that this will be an ancestor-descendant path, and its edges are exactly the edges corresponding to $v_1, \ldots, v_k$. The demand of this path is the quantity $p(J(I^\star)) - |I^\star|$ on the RHS of the corresponding constraint (5) for the interval $I^\star$. The following claim is now easy to check.

Claim 4.2.

Given a solution $E'$ to the Demand MultiCut instance $(\mathcal{T}, \mathcal{P})$, there is a solution $y$ to (IP2) for the instance $\mathcal{I}$ of the same objective function value as that of $E'$.

Proof.

Consider a solution to $(\mathcal{T}, \mathcal{P})$ consisting of a set of edges $E'$. For each edge $(u, v) \in E'$, where $v$ is the child of $u$ and corresponds to the job segment $(j, I)$, we set $y_{j,I} = 1$. For the rest of the job segments $(j, I)$, define $y_{j,I}$ to be 0. Since the cost of such an edge is equal to $w_{j,I}$, it is easy to see that the two solutions have the same cost. Feasibility of (IP2) also follows directly from the manner in which the paths in $\mathcal{P}$ are defined. ∎

This completes the reduction from WtdFlowTime to Demand MultiCut. The reduction runs in polynomial time because the number of vertices in $\mathcal{T}$ equals the number of job segments, which is $O(n \log T)$. Each path in $\mathcal{P}$ goes between two vertices of $\mathcal{T}$, and there is no need to keep two paths between the same pair of vertices (we keep only the one with the larger demand). Therefore the size of the instance $(\mathcal{T}, \mathcal{P})$ is polynomial in the size of the instance $\mathcal{I}$ of WtdFlowTime.
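The parent rule of the reduction (each job segment hangs below the intersecting, hence containing, segment of the immediately preceding job) is easy to state in code. A sketch (ours), reusing form_segments from the sketch after Figure 1:

```python
def build_forest(jobs, T):
    """jobs: list of (r_j, p_j, w_j) sorted by release date (the order <).
    Vertices are job segments (j, I); the parent of (j, I) is the unique
    segment of job j-1 whose interval contains I, which exists by Lemma 3.1.
    The segments of job 0 are the roots of the forest."""
    segs = [form_segments(r, T) for r, p, w in jobs]
    parent = {}
    for j in range(1, len(jobs)):
        for (a, b) in segs[j]:
            # the containing segment of job j-1 is unique, since S_{j-1}
            # partitions [r_{j-1}, T] and some segment contains all of [a, b]
            (I_par,) = [J for J in segs[j - 1] if J[0] <= a and b <= J[1]]
            parent[(j, (a, b))] = (j - 1, I_par)
    return segs, parent
```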

5 Approximation Algorithm for the Demand MultiCut problem

In this section we give a constant factor approximation algorithm for the special class of Demand MultiCut instances which arise from the reduction from WtdFlowTime. To understand the special structure of such instances, we begin with some definitions. Let $(\mathcal{T}, \mathcal{P})$ be an instance of Demand MultiCut. The density of an edge $e$ is defined as the ratio $c_e / s_e$. Let $\mathcal{T}'$ denote the tree obtained from $\mathcal{T}$ by short-cutting all non-root degree 2 vertices (see Figure 3 for an example). There is a clear correspondence between the vertices of $\mathcal{T}'$ and the non-root vertices in $\mathcal{T}$ which do not have degree 2. In fact, we shall use $V(\mathcal{T}')$ to denote the latter set of vertices. The reduced height of $\mathcal{T}$ is defined as the height of $\mathcal{T}'$. In this section, we prove the following result. We say that a (rooted) tree is binary if every node has at most 2 children.

Figure 3: Tree $\mathcal{T}$ and the corresponding tree $\mathcal{T}'$. Note that the vertices in $\mathcal{T}'$ are also present in $\mathcal{T}$, and the segments in $\mathcal{T}$ correspond to edges in $\mathcal{T}'$. The tree has 4 segments, e.g., the path between two consecutive vertices of $V(\mathcal{T}')$.
Theorem 5.1.

There is a constant factor approximation algorithm for instances $(\mathcal{T}, \mathcal{P})$ of Demand MultiCut where $\mathcal{T}'$ is a binary tree. The running time of this algorithm is polynomial in $m$ and $\log(\rho_{\max}/\rho_{\min})$, times a factor $2^{O(h)}$, where $m$ denotes the number of nodes in $\mathcal{T}$, $h$ denotes the reduced height of $\mathcal{T}$, and $\rho_{\max}$ and $\rho_{\min}$ denote the maximum and the minimum density of an edge in $\mathcal{T}$ respectively.

Remark: In the instances above, some edges may have 0 size. These edges are not considered while defining $\rho_{\max}$ and $\rho_{\min}$.

Before we prove this theorem, let us see why it implies the main result in Theorem 1.1.

Proof of Theorem 1.1: Consider an instance $(\mathcal{T}, \mathcal{P})$ of Demand MultiCut obtained via the reduction from an instance $\mathcal{I}$ of WtdFlowTime. Let $n$ denote the number of jobs in $\mathcal{I}$ and $P$ denote the ratio of the largest to the smallest job size in this instance. We argued in the previous section that $m$, the number of nodes in $\mathcal{T}$, is $O(n \log T)$. We first perform some pre-processing on $\mathcal{T}$ so that the quantities $h$ and $\rho_{\max}/\rho_{\min}$ do not become too large.

  • Let $p_{\max}$ and $p_{\min}$ denote the maximum and the minimum size of a job in the instance $\mathcal{I}$. Each edge in $\mathcal{T}$ corresponds to a job interval in the instance $\mathcal{I}$. We select all edges for which the corresponding job interval has length at most $p_{\min}$. Note that after selecting these edges, we will contract them in $\mathcal{T}$ and adjust the demands of the paths in $\mathcal{P}$ accordingly. For a fixed job $j$, the total cost of such selected edges would be at most $4 \, w_j \, p_{\min}$ (as in the proof of Lemma 3.2, the corresponding job intervals have lengths which are powers of 2, and there are at most two intervals of the same length). Note that the cost of any optimal solution for $\mathcal{I}$ is at least $\sum_j w_j p_{\min}$ (the flow-time of a job is at least its processing time), and so we are incurring an extra cost of at most 4 times the cost of the optimal solution.

    So we can assume that any edge in $\mathcal{T}$ corresponds to a job interval in $\mathcal{I}$ whose length lies in the range $[p_{\min}, \sum_j p_j]$, because the length of the schedule is at most $\sum_j p_j \leq n \cdot p_{\max}$ (recall that we are assuming that there are no gaps in the schedule).

  • Let $c^\star$ be the maximum cost of an edge selected by the optimal solution (we can cycle over all possibilities for $c^\star$, and select the best solution obtained over all such choices). We remove (i.e., contract) all edges of cost more than $c^\star$, and select all edges of cost at most $c^\star/m$ (i.e., contract them and adjust the demands of paths going through them) – the cost of these selected edges will be at most $m \cdot c^\star/m = c^\star$, i.e., at most a constant times the optimal cost. Therefore, we can assume that the costs of the edges lie in the range $[c^\star/m, c^\star]$. Consequently, since the sizes lie in $[p_{\min}, p_{\max}]$, the densities of the edges in $\mathcal{T}$ lie in a range of the form $[\rho, \rho \cdot mP]$ for some value $\rho$.

Having performed the above steps, we now modify the tree $\mathcal{T}$ so that it becomes a binary tree. Recall that each vertex in $\mathcal{T}$ corresponds to a dyadic interval, and if $v'$ is a child of $v$ then the interval of $v'$ is contained in that of $v$ (to the root vertex, we can assign the dyadic interval $[0, T]$). Now, consider a vertex $v$ whose interval $I_v$ has size $2^k$, and suppose it has more than 2 children. Since the dyadic intervals of the children are mutually disjoint and contained in $I_v$, each of them has size at most $2^{k-1}$. Let $I_1$ and $I_2$ be the two dyadic intervals of length $2^{k-1}$ contained in $I_v$. Consider $I_1$. Let $v_1, \ldots, v_l$ be the children of $v$ for which the corresponding interval is contained in $I_1$. If $l \geq 2$, we create a new node $v'$ below $v$ (with corresponding interval $I_1$) and make $v_1, \ldots, v_l$ children of $v'$. The cost and size of the edge $(v, v')$ are 0. We proceed similarly for $I_2$, and repeat this step on the new nodes if necessary. Thus, each node will now have at most 2 children. Note that we blow up the number of vertices by a factor of 2 only.

We can now estimate the reduced height $h$ of $\mathcal{T}$. Consider a root to leaf path in $\mathcal{T}'$, and let the vertices in this path be $v_1, v_2, \ldots$, where $v_i$ is the parent of $v_{i+1}$. Since each $v_i$ has two children in $\mathcal{T}$, the job interval corresponding to $v_i$ will be at least twice as long as that for $v_{i+1}$. From the first preprocessing step above, it follows that the length of this path is bounded by $O(\log(nP))$, where $P$ denotes $p_{\max}/p_{\min}$. Thus, $h$ is $O(\log(nP))$. It now follows from Theorem 5.1 that we can get a constant factor approximation algorithm for the instance $\mathcal{I}$ in time polynomial in $n$ and $P$. ∎

We now prove Theorem 5.1 in the rest of the paper.

5.1 Some Special Cases

To motivate our algorithm, we consider some special cases first. Again, fix an instance $(\mathcal{T}, \mathcal{P})$ of Demand MultiCut. Recall that the tree $\mathcal{T}'$ is obtained by short-cutting all degree 2 vertices in $\mathcal{T}$. Each edge in $\mathcal{T}'$ corresponds to a path in $\mathcal{T}$ – in fact, these are the maximal paths in $\mathcal{T}$ whose internal nodes all have degree 2. We call such paths segments (to avoid confusion with the paths in $\mathcal{P}$). See Figure 3 for an example. Thus, there is a 1-1 correspondence between edges in $\mathcal{T}'$ and segments in $\mathcal{T}$. Recall that every vertex in $\mathcal{T}'$ corresponds to a vertex in $\mathcal{T}$ as well, and we will use the same notation for both vertices.

Figure 4: The left instance represents a segment confined instance whereas the right one is a segment spanning instance.

5.1.1 Segment Confined Instances

The instance is said to be segment confined if all paths in $\mathcal{P}$ are confined to one segment, i.e., for every path $p \in \mathcal{P}$, there is a segment $S$ in $\mathcal{T}$ such that the edges of $p$ are contained in $S$. An example is shown in Figure 4. In this section, we show that one can obtain a constant factor polynomial time approximation algorithm for such instances. In fact, this result follows from prior work on column restricted covering integer programs [7]. Since each path in $\mathcal{P}$ is confined to one segment, we can think of this instance as several independent instances, one for each segment. For a segment $S$, let $(S, \mathcal{P}_S)$ be the instance obtained from $(\mathcal{T}, \mathcal{P})$ by considering the edges in $S$ only and the subset $\mathcal{P}_S$ of paths which are contained in $S$. We show how to obtain a constant factor approximation algorithm for $(S, \mathcal{P}_S)$ for a fixed segment $S$.

Let the edges in $S$ (in top to bottom order) be $e_1, e_2, \ldots, e_k$. The following integer program (IP3) captures the Demand MultiCut problem for $(S, \mathcal{P}_S)$:

$$\min \; \sum_{i=1}^{k} c_{e_i} x_{e_i} \qquad (6)$$
$$\sum_{e_i \in p} s_{e_i} x_{e_i} \;\geq\; d_p \qquad \text{for all } p \in \mathcal{P}_S \qquad (7)$$
$$x_{e_i} \in \{0, 1\} \qquad \text{for } i = 1, \ldots, k \qquad (8)$$

Note that this is a covering integer program (IP) where the coefficient of $x_{e_i}$ in each constraint is either 0 or $s_{e_i}$. Such an IP falls in the class of Column Restricted Covering IPs described in [7]. Chakrabarty et al. [7] show that one can obtain a constant factor approximation algorithm for this problem provided one can prove that the integrality gaps of the corresponding LP relaxations for the following two special classes of problems are constant: (i) 0-1 instances, where the sizes $s_{e_i}$ are either 0 or 1, (ii) priority versions, where paths in $\mathcal{P}_S$ and edges have priorities (which can be thought of as positive integers), and the selected edges must satisfy the property that for each path $p$, we select at least one edge in it of priority at least that of $p$ (it is easy to check that this is a special case of the Demand MultiCut problem, by assigning exponentially increasing demands to paths of increasing priority, and similarly for edges).

Consider the class of 0-1 instances first. We need to consider only those edges $e_i$ for which $s_{e_i}$ is 1 (we contract the edges for which $s_{e_i}$ is 0). Now observe that the constraint matrix on the LHS in (IP3) has the consecutive ones property: since each path is a contiguous set of edges on the segment, each row has 1's in a consecutive block of columns (order the paths in $\mathcal{P}_S$ in increasing order of their left end-point and write the constraints in this order). Therefore, the LP relaxation has an integrality gap of 1.

Rounding the Priority Version. We now consider the priority version of this problem. Each edge $e_i$ now has an associated priority $\pi(e_i)$ (instead of a size), and each path $p \in \mathcal{P}_S$ has a priority demand $\pi(p)$, instead of its demand. We need to argue about the integrality gap of the following LP relaxation:

$$\min \; \sum_{i=1}^{k} c_{e_i} x_{e_i} \qquad (9)$$
$$\sum_{e_i \in p \,:\, \pi(e_i) \geq \pi(p)} x_{e_i} \;\geq\; 1 \qquad \text{for all } p \in \mathcal{P}_S \qquad (10)$$
$$x_{e_i} \;\geq\; 0 \qquad \text{for } i = 1, \ldots, k \qquad (11)$$

We shall use the notion of shallow cell complexity from [8]. Let $A$ be the constraint matrix on the LHS above. We first notice the following property of $A$.

Claim 5.2.

Let $A'$ be the sub-matrix of $A$ given by a subset of $m'$ columns of $A$. For a parameter $\ell$, there are at most $m' \ell^2$ distinct rows in $A'$ with $\ell$ or fewer 1's (two rows of $A'$ are distinct iff they are not the same as row vectors).

Proof.

Columns of $A$ correspond to edges in $S$. Contract all edges which are not in the chosen subset of columns, and let $E'$ be the remaining (i.e., uncontracted) edges in $S$, so $|E'| = m'$. Each path in $\mathcal{P}_S$ now maps to a new path obtained by contracting these edges. Let $\mathcal{Q}$ denote the set of resulting paths. For a path $p \in \mathcal{Q}$, let $E'_p$ be the edges of $E'$ on $p$ whose priority is at least that of $p$. In the constraint matrix $A'$, the row for the path $p$ has 1's in exactly the columns of $E'_p$. We can assume that the set $E'_p$ is distinct for every path $p$ (because we are interested in counting the number of paths with distinct sets $E'_p$).

Let $\mathcal{Q}_\ell$ be the paths $p$ in $\mathcal{Q}$ for which $|E'_p| \leq \ell$. We need to count the cardinality of this set. Fix an edge $e \in E'$, and let $F_e$ be the edges in $E'$ of priority at least that of $e$. Let $p$ be a path in $\mathcal{Q}_\ell$ which has $e$ as the least priority edge in $E'_p$ (breaking ties arbitrarily). Let $e_1$ and $e_2$ be the leftmost and the rightmost edges of $E'_p$ respectively. Note that $E'_p$ is exactly the set of edges in $F_e$ which lie between $e_1$ and $e_2$. Since there are at most $\ell$ choices for each of $e_1$ and $e_2$ (look at the $\ell$ edges of $F_e$ to the left and to the right of $e$), it follows that there are at most $\ell^2$ paths in $\mathcal{Q}_\ell$ which have $e$ as the least priority edge in $E'_p$. For every path in $\mathcal{Q}_\ell$, there are at most $m'$ choices for the least priority edge. Therefore the size of $\mathcal{Q}_\ell$ is at most $m' \ell^2$. ∎

In the notation of [8], the shallow cell complexity of this LP relaxation is $f(m', \ell) = m' \ell^2$. It now follows from Theorem 1.1 in [8] that the integrality gap of the LP relaxation for the priority version is a constant. Thus we obtain a constant factor approximation algorithm for segment confined instances.

5.1.2 Segment Spanning Instances on Binary Trees

We now consider instances for which each path starts and ends at the end-points of segments, i.e., both the starting and the ending vertex of each path $p$ belong to the set $V(\mathcal{T}')$. An example is shown in Figure 4. Although we will not use this result in the algorithm for the general case, many of the ideas extend to the general case. We will use dynamic programming. For a vertex $v$, let $\mathcal{T}_v$ be the sub-tree of $\mathcal{T}$ rooted at (and including) $v$. Let $\mathcal{P}_v$ denote the subset of $\mathcal{P}$ consisting of those paths which contain at least one edge of $\mathcal{T}_v$. By scaling the costs of edges, we will assume that the cost of the optimal solution lies in the range $[1, m]$ – if $c^\star$ is the maximum cost of an edge selected by the optimal solution, then its cost lies in the range $[c^\star, m c^\star]$.

Before stating the dynamic programming algorithm, we give some intuition for the DP table. We will consider sub-problems which correspond to covering the paths in $\mathcal{P}_v$ by edges in $\mathcal{T}_v$, for every vertex $v$. However, to solve this sub-problem, we also need to store the edges which are ancestors of $v$ and are selected by our algorithm. Storing all such subsets would lead to too many DP table entries. Instead, we work with the following idea – for each segment $S$, let $b_S$ be the total cost of the edges in $S$ which get selected by an optimal solution. If we know $b_S$, then we can decide which edges in $S$ to pick. Indeed, the optimal algorithm solves a knapsack cover problem – for the segment $S$, it picks edges of maximum total size subject to the constraint that their total cost is at most $b_S$ (note that we are using the fact that every path in $\mathcal{P}$ which includes an edge of $S$ must include all the edges of $S$). Although knapsack cover is NP-hard, there is a simple greedy algorithm which exceeds the budget by a factor of at most 2 and does as well as the optimal solution (in terms of total size of selected edges) – order the edges in $S$ whose cost is at most $b_S$ in order of increasing density, and keep selecting them in this order till we exceed the budget $b_S$. Note that we pay at most twice $b_S$, because the last edge selected has cost at most $b_S$. The fact that the total size of the selected edges is at least that of the corresponding optimal selection follows from standard greedy arguments.

Therefore, if $S_1, \ldots, S_l$ denote the segments which lie above $v$ (in order from the root to $v$), it suffices to store the budgets $(b_{S_1}, \ldots, b_{S_l})$ with the DP table entry for $v$. We can further cut down the search space by assuming that each of the quantities $b_{S_i}$ is a power of 2 (we lose only a multiplicative factor of 2 in the cost of the solution). Thus, the total number of possibilities for $(b_{S_1}, \ldots, b_{S_l})$ is $O(\log m)^l$, because each of the quantities lies in the range $[1, 2m]$ (recall that we assumed that the optimal value lies in the range $[1, m]$, and we are now rounding budgets up to powers of 2). This is at most $O(\log m)^h$, which is still not polynomial in $m$. We can reduce this further by assuming that for any two consecutive segments $S_i, S_{i+1}$, the quantities $b_{S_i}$ and $b_{S_{i+1}}$ differ by a factor of at most 8 – it is not clear a priori why we can make this assumption, but we will show later that it leads to only a constant factor loss. We now state the algorithm formally.

Dynamic Programming Algorithm

We first describe the greedy algorithm outlined above. The algorithm GreedySelect is given in Figure 5.

Algorithm GreedySelect($S$, $b$):

Input: A segment $S$ in $\mathcal{T}$ and a budget $b$.

1. Initialize a set $Z$ to the empty set.

2. Arrange the edges in $S$ of cost at most $b$ in ascending order of density.

3. Keep adding these edges to $Z$ till their total cost exceeds $b$ (or no edges remain).

4. Output $Z$.

Figure 5: Algorithm GreedySelect for selecting edges in a segment $S$ with a budget $b$.
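In code, the greedy selection reads as follows (our sketch; density is taken to be $c_e/s_e$ as defined at the start of Section 5, and zero-size edges are skipped since they add cost but no coverage).

```python
def greedy_select(edges, budget):
    """edges: list of (c_e, s_e) pairs for one segment; budget: b.
    Sketch of GreedySelect (Figure 5): consider edges of cost at most b in
    ascending order of density c_e/s_e, and keep adding until the total cost
    exceeds b. The total cost paid is at most 2b (the last edge costs at most
    b), and the selected total size is at least that of any cost-b selection."""
    Z, spent = [], 0
    for c, s in sorted((e for e in edges if e[0] <= budget and e[1] > 0),
                       key=lambda e: e[0] / e[1]):
        if spent > budget:
            break
        Z.append((c, s))
        spent += c
    return Z

print(greedy_select([(3, 1), (2, 4), (2, 1), (1, 3)], 4))  # [(1, 3), (2, 4), (2, 1)]
```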

For a vertex $v$, define the reduced depth of $v$ as its depth in $\mathcal{T}'$ (the root has reduced depth 0). We say that a sequence $(b_1, \ldots, b_l)$ is a valid state sequence at a vertex $v$ in $\mathcal{T}'$ with reduced depth $l$ if it satisfies the following conditions:

  • For all $i$, $b_i$ is a power of 2 and lies in the range $[1, 2m]$.

  • For any $i$, $b_{i+1}/b_i$ lies in the range $[1/8, 8]$.

If $S_1, \ldots, S_l$ is the sequence of segments visited while going from the root to $v$, then $b_i$ will correspond to the budget for the segment $S_i$.
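The point of the two conditions is to shrink the state space. The following sketch (ours, under the reconstruction of the definition above) enumerates the valid state sequences and illustrates the counting behind the running time bound.

```python
def valid_state_sequences(depth, m):
    """Enumerate valid state sequences (b_1, ..., b_depth): each b_i a power
    of 2 in [1, 2m], consecutive entries within a factor of 8 of each other."""
    powers, b = [], 1
    while b <= 2 * m:
        powers.append(b)
        b *= 2
    def extend(seq):
        if len(seq) == depth:
            yield tuple(seq)
            return
        for nxt in powers:
            if not seq or seq[-1] / 8 <= nxt <= 8 * seq[-1]:
                yield from extend(seq + [nxt])
    yield from extend([])

# Given b_i, there are at most 7 choices for b_{i+1} (exponent within +-3),
# so the count grows like O(log m) * 7^depth rather than (log m)^depth.
print(sum(1 for _ in valid_state_sequences(3, 64)))
```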

Consider a vertex $v$ at reduced depth $l$, and a child $u$ of $v$ in $\mathcal{T}'$ (at reduced depth $l+1$). Let $B = (b_1, \ldots, b_l)$ and $B' = (b'_1, \ldots, b'_{l+1})$ be valid state sequences at these two vertices respectively. We say that $B'$ is an extension of $B$ if $b'_i = b_i$ for all $i \leq l$. In the dynamic program, we maintain a table entry $D(v, B)$ for each vertex $v$ in $\mathcal{T}'$ and valid state sequence $B$ at $v$. Informally, this table entry stores the following quantity. Let $S_1, \ldots, S_l$ be the segments from the root to the vertex $v$. The entry $D(v, B)$ stores the minimum cost of a subset $E_v$ of edges in $\mathcal{T}_v$ such that $E_v \cup Z$ is a feasible solution for the paths in $\mathcal{P}_v$, where $Z$ is the union of the sets of edges selected by GreedySelect in the segments $S_1, \ldots, S_l$ with budgets $b_1, \ldots, b_l$ respectively.

The algorithm is described in Figure 6. We first compute the set $Z$ as outlined above. Let the children of $v$ in the tree $\mathcal{T}'$ be $u_1$ and $u_2$, and let the segments corresponding to the edges $(v, u_1)$ and $(v, u_2)$ of $\mathcal{T}'$ be $S'_1$ and $S'_2$ respectively. For both these children, we find the best extension of $B$. For the node $u_1$, we try all possibilities for the budget $b$ of the segment $S'_1$. For each of these choices, we select a set of edges in $S'_1$ as given by GreedySelect and look up the table entry for $u_1$ and the corresponding state sequence. We pick the choice of $b$ for which the combined cost is smallest (see line 7(i)(c)).

Fill DP Table $D(v, B)$:

Input: A node $v$ at reduced depth $l$, and a state sequence $B = (b_1, \ldots, b_l)$.

0. If $v$ is a leaf node, set $D(v, B)$ to 0, and exit.

1. Let $S_1, \ldots, S_l$ be the segments visited while going from the root to $v$ in $\mathcal{T}$.

2. Initialize $Z := \emptyset$.

3. For $i = 1, \ldots, l$

(i) Let $Z_i$ be the edges returned by GreedySelect($S_i$, $b_i$).

(ii) $Z := Z \cup Z_i$.

4. Let $u_1, u_2$ be the two children of $v$ in $\mathcal{T}'$ and

the corresponding segments be $S'_1, S'_2$.

5. Initialize $D(v, B)$ to $\infty$.

6. For $r = 1, 2$ (go to each of the two children and solve the subproblems)

(i) For each extension $B' = (b_1, \ldots, b_l, b)$ of $B$ do

(a) Let $Z'$ be the edges returned by GreedySelect(