Detecting Feedback Vertex Sets of Size k in O*(2.7^k) Time

06/28/2019 ∙ by Jason Li, et al. ∙ TU Eindhoven

In the Feedback Vertex Set problem, one is given an undirected graph G and an integer k, and one needs to determine whether there exists a set of k vertices that intersects all cycles of G (a so-called feedback vertex set). Feedback Vertex Set is one of the most central problems in parameterized complexity: It served as an excellent test bed for many important algorithmic techniques in the field such as Iterative Compression [Guo et al. (JCSS'06)], Randomized Branching [Becker et al. (J. Artif. Intell. Res'00)] and Cut&Count [Cygan et al. (FOCS'11)]. In particular, there has been a long race for the smallest dependence f(k) in run times of the type O*(f(k)), where the O* notation omits factors polynomial in n. This race seemed to have reached its end in 2011, when a randomized O*(3^k) time algorithm based on Cut&Count was introduced. In this work, we show the contrary and give an O*(2.7^k) time randomized algorithm. Our algorithm combines all mentioned techniques with substantial new ideas: First, we show that, given a feedback vertex set of size k of bounded average degree, a tree decomposition of width (1 − Ω(1))k can be found in polynomial time. Second, we give a randomized branching strategy inspired by the one from [Becker et al. (J. Artif. Intell. Res'00)] to reduce to the aforementioned bounded average degree setting. Third, we obtain significant run time improvements by employing fast matrix multiplication.


1 Introduction

Feedback Vertex Set (FVS) is one of the most fundamental NP-complete problems; for example, it was among Karp’s original 21 problems [Kar72]. In FVS we are given an undirected graph G and an integer k, and are asked whether there exists a set S of at most k vertices such that G − S is a forest (i.e. S intersects all cycles of G). In the realm of parameterized complexity, where we aim for algorithms with running times of the type O*(f(k)) (the O* notation omits factors polynomial in n) with f as small as possible (albeit exponential), FVS is clearly one of the most central problems: To quote [Cao18], to date the number of parameterized algorithms for FVS published in the literature exceeds the number of parameterized algorithms for any other single problem.

There are several reasons why FVS is one of the most central problems in parameterized complexity: First and foremost, the main premise of parameterized complexity—that in many instances the parameter is small—is very applicable to FVS: In the instances arising from, e.g., resolving deadlocks in systems of processors [BGNR98], or from Bayesian inference or constraint satisfaction, one is only interested in whether small FVS’s exist [BBG00, Dec90, WLS85]. Second, FVS is a very natural graph modification problem (remove/add few vertices/edges to make the graph satisfy a certain property) that serves as an excellent starting point for many other graph modification problems such as planarization or treewidth-deletion (see e.g. [GLL18] for a recent overview). Third, FVS and many of its variants (see e.g. [KK18]) admit elegant duality theorems such as the Erdős–Pósa property; understanding their use in designing algorithms can be instrumental in solving many problems different from FVS faster. The popularity of FVS also led to work on a broad spectrum of its variations such as Subset, Group, Connected, Simultaneous, or Independent FVS (see for example [AGSS16] and the references therein).

In this paper we study the most basic setting concerning the parameterized complexity of FVS, and aim to design an algorithm with running time O*(c^k) with c as small as possible.

One motivation for this study is that we want to get a better insight into the fine-grained complexity of computational problems: How hard is FVS really to solve in the worst-case setting? Can the current algorithms still be improved significantly or are they close to some computational barrier implied by some hypothesis or conjecture such as, for example, the Strong Exponential Time Hypothesis?

A second motivation is that lowering the exponential factor of the running time is a logical first step towards more practical algorithms. For example, the Vertex Cover problem (given a graph G and integer k, find k vertices of G that intersect every edge of G) can be solved in O*(1.2738^k) time [CKX10], and a similar running time for FVS would be entirely consistent with our current knowledge. Algorithms with such run times likely outperform other algorithms for a wide variety of instances from practice. Note that there already has been considerable interest in practical algorithms for FVS, as it was the subject of the first Parameterized Algorithms and Computational Experiments Challenge (PACE, see e.g. [DHJ16]).

For a third motivation of such a study, experience shows that improvements in the running times of algorithms for well-studied benchmark problems such as FVS naturally go hand in hand with important new algorithmic tools: The ‘race’ for the fastest algorithm for FVS and its variants gave rise to important techniques in parameterized complexity such as Iterative Compression [DFL07, GGH06, RSV04], Randomized Branching [BBG00] and Cut&Count [CNP11].

The race for the fastest FVS algorithm.

The aforementioned ‘race’ (see Figure 1) started in the early days of parameterized complexity (see e.g. [AEFM89]) with an O*((2k+1)^k) time deterministic algorithm by Downey and Fellows [DF92]. We briefly discuss four relevant results from this race. A substantial improvement of the algorithm from [DF92] to an O*(4^k) time randomized algorithm was obtained by Becker et al. [BBG00]. Their simple but powerful idea is to argue that, if some simple reduction rules do not apply, a random ‘probabilistic branching’ procedure works well. A few years later, in [DFL07, GGH06] it was shown how to obtain O*(c^k) time, for a constant c, in the deterministic regime using Iterative Compression. This technique allows the algorithm to assume a feedback vertex set of size k + 1 is given, which turns out to be useful for detecting feedback vertex sets of size k. The race however stagnated with the paper that introduced the Cut&Count technique [CNP11] and gave an O*(3^k) time randomized algorithm. In particular, the Cut&Count technique gave an O*(3^tw) time algorithm for FVS if a tree decomposition (see Section 2 for definitions) of width tw is given, and this assumption can be made due to the iterative compression technique. After this result, no progress on randomized algorithms for FVS was made, as it seemed that improvements over the O*(3^k) running time were not within reach: In [CNP11] it was also proven that any O*((3 − ε)^tw) time algorithm, for some ε > 0, would violate the Strong Exponential Time Hypothesis (SETH). It was therefore natural to expect that the base 3 is also optimal for the parameterization by the solution size k. Moreover, the very similar O*(2^k) time algorithm from [CNP11] for the Connected Vertex Cover problem was shown to be optimal under the Set Cover Conjecture [CDL16].

Reference | Running Time | Deterministic? | Year
Downey and Fellows [DF92] | O*((2k+1)^k) | YES | 1992
Bodlaender [Bod94] | O*(17(k^4)!) | YES | 1994
Becker et al. [BBG00] | O*(4^k) | NO | 2000
Raman et al. [RSS02] | O*(max{12^k, (4 log k)^k}) | YES | 2002
Kanj et al. [KPS04] | O*((2 log k + 2 log log k + 18)^k) | YES | 2004
Raman et al. [RSS06] | O*((12 log k / log log k + 6)^k) | YES | 2006
Guo et al. [GGH06] | O*(37.7^k) | YES | 2006
Dehne et al. [DFL07] | O*(10.6^k) | YES | 2007
Chen et al. [CFL08] | O*(5^k) | YES | 2008
Cao et al. [CCL15] | O*(3.83^k) | YES | 2010
Cygan et al. [CNP11] | O*(3^k) | NO | 2011
Kociumaka and Pilipczuk [KP14] | O*(3.619^k) | YES | 2014
this paper | O*(2.7^k), or O*(2.6252^k) if ω = 2 | NO | 2019
Figure 1: The ‘race’ for the fastest parameterized algorithm for Feedback Vertex Set.

Our contributions.

We show that, somewhat surprisingly, the O*(3^k) time Cut&Count algorithm for FVS can be improved:

Theorem 1.

There is a randomized algorithm that solves FVS in time O*(2.7^k). If ω = 2, then the algorithm takes time O*(2.6252^k).

Here ω denotes the smallest number such that two n × n matrices can be multiplied in O(n^{ω + o(1)}) time [Gal14]. Theorem 1 solves a natural open problem stated explicitly in previous literature [CFJ14].

Using the method from [FGLS16] that transforms O*(c^k) time algorithms for FVS into O*((2 − 1/c)^n) time algorithms, we directly obtain the following improvement over the previously fastest randomized algorithm, which took O*((2 − 1/3)^n) = O*(1.6667^n) time:

Corollary 1.

There is a randomized algorithm that solves FVS on an n-vertex graph in time O*(1.6297^n).

The above algorithms require space exponential in k, but we also provide an algorithm using polynomial space at the cost of a slightly higher running time:

Theorem 2.

There is a randomized algorithm that solves FVS in time O*(2.8446^k) and polynomial space.

Our Techniques.

We build upon the O*(3^k) time algorithm from [CNP11]. The standard starting observation is that a feedback vertex set F of size k (which we can assume to be known to us by the iterative compression technique) gives a tree decomposition of width k + 1 with very special properties. We show how to leverage these properties using the additional assumption that the average degree of all vertices in the feedback vertex set is constant:

Lemma 1.

Let G be a graph and F be a feedback vertex set of G of size at most k, and define d := deg(F)/k. There is an algorithm that, given G and F, computes a tree decomposition of G of width at most (1 − 2^{−O(d)})k + 2^{O(d)}·log n, and runs in polynomial time in expectation.

To the best of our knowledge, Lemma 1 is new even for the special case where F is a vertex cover of G. We expect this result to be useful for other problems parameterized by the feedback vertex set or vertex cover size (such parameterizations are studied in, for example, [JJ17]). Lemma 1 is proven via an application of the probabilistic method, analyzed via proper colorings in a dependency graph of low average degree. It is presented in more detail in Section 3.

Lemma 1, combined with the O*(3^tw) time algorithm from [CNP11], implies that we only need to ensure that the feedback vertex set has constant average degree in order to get an O*((3 − ε)^k) time algorithm for some ε > 0. To ensure this property, we extend the randomized O*(4^k) time algorithm of Becker et al. [BBG00]. The algorithm from [BBG00] first applies a set of reduction rules exhaustively, and then selects a vertex with probability proportional to its degree. (The sampling is usually described as choosing a random edge and then a random endpoint of this edge, which gives the same distribution.) They show that this chosen vertex appears in an optimal feedback vertex set with probability at least 1/4. To modify this algorithm, we observe that after applying the reduction rules in [BBG00], every vertex has degree at least 3, so one idea is to select vertices with probability proportional to deg(v) − 3 instead. (Let us assume that the graph is not 3-regular, since if it were, then the feedback vertex set would have constant average degree and we could proceed as before.) It turns out that if deg(F) is sufficiently large compared to k, then this biases us more towards selecting a vertex in an optimal feedback vertex set F. Indeed, we will show that if deg(F) ≥ 10k, then we succeed in selecting a vertex of F with probability at least 0.35. This beats the success probability 1/3, which is what we need to exceed to improve on the O*(3^k) running time.

Closer analysis of this process shows that even if deg(F) < 10k, as long as the graph itself has large enough average degree, we also get success probability greater than 1/3. It follows that if the sampling does not give success probability greater than 1/3, then the graph has deg(F) < 10k and constant average degree. Therefore, the graph has only O(k) edges, and even if all of them are incident to the feedback vertex set of size k, the feedback vertex set still has constant average degree. Therefore, we can apply Lemma 1, which gives us a modest improvement of the running time to O*((3 − ε)^k) time for some ε > 0.

To obtain the improvement to an O*(2.8446^k) time and polynomial space algorithm, we introduce the new case n ≤ αk (for a constant α), in which we simply add a random vertex to the FVS; this clearly succeeds with probability at least k/n ≥ 1/α. We then refine our analysis and apply the Cut&Count method from the O*(3^tw) algorithm in a way similar to [CNP11, Theorem B.1].

To obtain Theorem 1 and further improve the above running times, we extend the proof behind Lemma 1 to decompose the graph using a “three-way separation” (see Definition 3) and leverage such a decomposition by combining the Cut&Count method with fast matrix multiplication. This idea to improve the running time is loosely inspired by previous approaches for MAX-SAT [CS15] and connectivity problems parameterized by branch-width [PBvR16].

Paper Organization.

This paper is organized as follows: We first define notation and list preliminaries in Section 2. We present the proof of Lemma 1 in Section 3. In Section 4, we introduce a probabilistic reduction rule and its analysis. Subsequently, we focus on improving the O*(3^k) time algorithm for FVS in Section 5. The algorithm presented there obtains only a modest improvement, but it illustrates our main ideas and uses previous results as a black box.

In the second half of the paper we show how to further improve our algorithms and prove our main theorems: Section 6 proves Theorem 2, and in Section 7 we prove Theorem 1. Both these sections rely on rather technical extensions of the Cut&Count method that we postpone to Section 8 to improve readability.

2 Preliminaries

Let G = (V, E) be an undirected graph; we denote n := |V| and m := |E|. For a vertex v in G, deg(v) is the degree of v in G, and for a set of vertices X we define deg(X) := Σ_{v∈X} deg(v). If X, Y ⊆ V, we denote by E(X, Y) the set of all edges intersecting both X and Y, and we denote e(X, Y) := |E(X, Y)|. For a set X, P₃(X) denotes the set of all partitions of X into three subsets. As we only briefly use tree decompositions, we refer to [CFK15, Chapter 7] for their definition and standard terminology.

Randomized Algorithms.

All algorithms in this paper will be randomized algorithms for search problems with one-sided error probability. The (success) probability of such an algorithm is the probability that it will output the asked solution, if it exists. In this paper we define with high probability to be probability at least 1 − 1/T^c, where T is the running time bound of the algorithm in question and c is some large constant, instead of the usual 1 − 1/n^c. This is because FPT algorithms take more than simply poly(n) time, so such a bound is more convenient when using a union bound over all executions of the algorithm to bound the probability that any of them fails.

Note that if the algorithm has constant success probability, we can always boost it to high probability using independent trials. For convenience, we record the folklore observation that this even works for algorithms with expected running time:

Lemma 2 (Folklore).

If a problem can be solved with success probability p and in expected time T, and its solutions can be verified for correctness in polynomial time, then it can also be solved in time O*(T/p) with high probability.

Proof.

Consider c/p independent runs of the algorithm for some large constant c, and if a run outputs a solution, we verify that solution and output it if the verification succeeds. Given that a solution exists, it is not found and verified in any of the c/p rounds with probability at most (1 − p)^{c/p} ≤ e^{−c}. The expected running time of the independent runs is cT/p, and by Markov’s inequality these jointly run in at most 2cT/p time with probability at least 1/2. Therefore, we can terminate our algorithm after 2cT/p time, and by a union bound this gives an algorithm that solves the problem with constant positive success probability. To boost this success probability to high probability, simply use sufficiently many independent runs of the algorithm that reaches constant success probability. ∎
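This repetition is simple enough to express in code. The sketch below is ours, not the paper's: solve stands for one run of the hypothetical randomized algorithm (returning a candidate solution or None) and verify for the polynomial-time checker; the Markov-style time cutoff from the proof is omitted for brevity.

def boost(solve, verify, p: float):
    """Lemma 2's repetition: about 1/p runs of a success-probability-p solver."""
    for _ in range(int(100 / p) + 1):
        candidate = solve()                  # one randomized attempt
        if candidate is not None and verify(candidate):
            return candidate
    # (1 - p)^(100/p) is roughly e^-100, so failure here is overwhelmingly
    # likely to mean that no solution exists.
    return None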

Using this lemma, we assume that all randomized algorithms with constant positive success probability actually solve their respective problems with high probability.

Separations.

The following notion will be instrumental in our algorithms.

Definition 1 (Separation).

Given a graph G, a partition (A, S, B) of V(G) is a separation if there are no edges between A and B.

Reduction Rules.

In the context of parameterized complexity, a reduction rule (for FVS) is a polynomial-time transformation of an input instance (G, k) into a different instance (G′, k′) such that G has a FVS of size k iff G′ has a FVS of size k′. We state below the standard reduction rules for FVS, as described in [CFK15], Section 3.3. For simplicity, we group all four of their reduction rules FVS.1 to FVS.4 into a single one.

Reduction 1 ([CFK15], folklore).

Apply the following rules exhaustively, until the remaining graph has no loops, only edges of multiplicity at most 2, and minimum vertex degree at least 3 (a code sketch follows the list):

  1. If there is a loop at a vertex v, delete v from the graph and decrease k by 1; add v to the output FVS.

  2. If there is an edge of multiplicity larger than 2, reduce its multiplicity to 2.

  3. If there is a vertex v of degree at most 1, delete v.

  4. If there is a vertex v of degree 2, delete v and connect its two neighbors by a new edge.
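For concreteness, here is a minimal sketch of Reduction 1 in Python, assuming the networkx library and a multigraph representation (rule 2 needs parallel edges and rule 1 needs self-loops). It illustrates the four rules above and is not the implementation referenced by the paper.

import networkx as nx

def apply_reduction_1(G: nx.MultiGraph, k: int):
    """Exhaustively apply rules 1-4; returns (G', k', forced), where forced
    collects the vertices that rule 1 places in the output FVS."""
    G = nx.MultiGraph(G)  # work on a copy
    forced = []
    changed = True
    while changed:
        changed = False
        loops = {u for u, v in nx.selfloop_edges(G)}
        if loops:  # Rule 1: a loop at v forces v into the FVS.
            v = loops.pop()
            G.remove_node(v)
            forced.append(v)
            k -= 1
            changed = True
            continue
        for u, v in list(G.edges()):  # Rule 2: cap multiplicities at 2.
            while G.number_of_edges(u, v) > 2:
                G.remove_edge(u, v)
                changed = True
        low = [v for v in G if G.degree(v) <= 1]
        if low:  # Rule 3: degree <= 1 vertices lie on no cycle.
            G.remove_nodes_from(low)
            changed = True
            continue
        deg2 = [v for v in G if G.degree(v) == 2]
        if deg2:  # Rule 4: bypass a degree-2 vertex.
            v = deg2[0]
            a, b = [u for _, u in G.edges(v)]
            G.remove_node(v)
            G.add_edge(a, b)  # may create a parallel edge or a loop
            changed = True
    return G, k, forced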

3 Treewidth and Separators

In this section, we show how to convert an FVS with small average degree into a good tree decomposition. In particular, suppose a graph G has a FVS F of size k with deg(F) = dk, where d = O(1). We show how to construct a tree decomposition of width (1 − Ω(1))k. Note that a tree decomposition of width k + 1 is trivial: since G − F is a forest, we can take a tree decomposition of G − F of width 1 and add F to each bag. To achieve width (1 − Ω(1))k, we will crucially use the fact that d = O(1).

We make the assumption that the algorithm already knows the small-average-degree FVS F. This reasoning may seem circular at first glance: after all, the whole task is finding such an FVS in the first place. Nevertheless, we later show how to remove this assumption using the standard technique of Iterative Compression.

We now present a high level outline of our approach. Our goal is to compute a small set S of vertices—one of size at most (1 − Ω(1))k—whose deletion leaves a graph of small enough treewidth. Then, taking a tree decomposition of G − S and adding S to each bag gives the desired tree decomposition. Of course, settling for |S| = k and treewidth 1 is easy: simply set S := F, so that the remaining graph is a forest, which has treewidth 1. Therefore, it is important that |S| ≤ (1 − Ω(1))k.

We now proceed with our method of constructing S. First, temporarily remove the FVS F from the graph, leaving a forest G − F. We first select a set P of at most 2^{O(d)}·log n vertices to remove from the forest, to break it into connected components such that the edges between F and G − F are evenly split among the components. More precisely, we want every connected component of G − F − P to share at most a 2^{−Θ(d)}/log n fraction of all edges between F and G − F; we show in Lemma 3 below that this is always possible. The vertices in P will eventually go into every bag in the decomposition; this only increases the width by |P| = 2^{O(d)}·log n, which is negligible. Hence, we can safely ignore the set P.

Next, we perform a random coloring procedure as follows: randomly color every connected component of G − F − P red or blue, uniformly and independently. Let A be the union of all components colored red, and B be the union of all components colored blue. For simplicity of exposition, we will assume here that F is an independent set: that is, there are no edges between vertices in the FVS. Then, if a vertex v in F has all its neighbors in G − F − P belonging to red components, then we can safely add v to A. Similarly, if all neighbors belong to blue components, then we can safely add v to B. Observe that the new sets A and B have no edges between them.

What is the probability that a vertex in F joins A or B? Recall that deg(F) = dk, and since F is an independent set, all of these dk edges go between F and the forest G − F. If a vertex in F has exactly d edges to G − F, then it has probability at least 2^{−d} of joining A, with equality when all of these edges go to different connected components in G − F − P. Of course, we only have that vertices in F have at most d neighbors on average, but a convexity argument shows that in expectation, at least a 2^{−d} fraction of the vertices in F join A; that is, E[|A ∩ F|] ≥ 2^{−d}·k. We can make a symmetric argument for vertices joining B. Of course, we need both events—enough vertices joining each of A and B—to hold simultaneously, which we handle with a concentration argument. From here, it is straightforward to finish the construction of the tree decomposition. We now present the formal proofs.

We begin with the following standard fact on balanced separators of forests:

Lemma 3.

Given a forest on n vertices with nonnegative vertex weights w(·), for any x > 0, we can delete a set P of at most w(V)/x vertices so that every connected component of the remaining forest has total weight at most x.

Proof.

Root every component of the forest at an arbitrary vertex. Iteratively select a vertex v of maximal depth whose subtree has total weight more than x, and then remove v and its subtree. The subtrees rooted at the children of v have total weight at most x, since otherwise v would not satisfy the maximal depth condition. Moreover, by removing the subtree rooted at v, we remove at least x total weight, and this can only happen w(V)/x times. ∎
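The procedure in this proof translates into a few lines of code. The sketch below assumes a networkx forest and a weight dictionary; the helper name forest_separator is ours. Called with vertex weights w(v) = e({v}, F) and threshold ε·deg(F), it produces exactly the kind of set P used in the next proof.

import networkx as nx

def forest_separator(F: nx.Graph, weight: dict, threshold: float):
    """Delete <= total_weight/threshold vertices so every remaining
    component of the forest F has weight <= threshold (Lemma 3)."""
    P, seen = [], set()
    for root in F.nodes:
        if root in seen:
            continue
        order = list(nx.dfs_preorder_nodes(F, root))  # parents before children
        seen.update(order)
        parent = {root: None}
        for u in order:
            for w in F.neighbors(u):
                if w not in parent:
                    parent[w] = u
        subtree = {u: weight.get(u, 0) for u in order}
        for u in reversed(order):  # process deepest vertices first
            if subtree[u] > threshold:
                P.append(u)  # delete u; its children's subtrees detach
            elif parent[u] is not None:
                subtree[parent[u]] += subtree[u]
    return P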

Lemma 4 (Small Separator).

Given an instance (G, k) and a FVS F of G of size at most k (by padding, of size exactly k), define d := deg(F)/k, and suppose that k ≥ 2^{9d}·log n. There is a polynomial time algorithm that computes a separation (A, S, B) of G that, with probability at least 1/2, satisfies:

(1) |A ∩ F| ≥ 2^{−2d−3}·k and |B ∩ F| ≥ 2^{−2d−3}·k, and

(2) |S| ≤ k − |A ∩ F| − |B ∩ F| + 2^{O(d)}·log n.

Proof.

Fix the parameter ε := 2^{−6d−10}/(d²·log n) throughout the proof. Apply Lemma 3 to the forest G − F with threshold x := ε·deg(F), with each vertex v weighted by the number of edges between v and F, and let P be the output. Observe that

|P| ≤ deg(F)/x = 1/ε = 2^{O(d)}·log n,

and every connected component C of G − F − P satisfies

e(C, F) ≤ ε·deg(F) = ε·dk.

Now form a bipartite graph H on vertex bipartition F ∪ U, where F is the FVS, and there are two types of vertices in U, the component vertices and the subdivision vertices. For every connected component C in G − F − P, there is a component vertex u_C in U that represents that component, and it is connected to all vertices in F adjacent to at least one vertex in C. For every edge e in G[F], there is a subdivision vertex u_e in U with the two endpoints of e as its neighbors. Observe that (1) deg_H(F) ≤ deg_G(F) = dk, (2) every component vertex in U has degree at most ε·dk and every subdivision vertex has degree 2, and (3) the degree of a vertex of F in H is at most its degree in G.

The algorithm that finds a separator works as follows. For each vertex in U, color it red or blue uniformly and independently at random. Every component of G − F − P whose component vertex is colored red is added to A in the separation (A, S, B), and every component whose vertex is colored blue is added to B. Every vertex in F whose H-neighbors are all colored red joins A, and every vertex in F whose H-neighbors are all colored blue joins B. The remaining vertices in F, along with the vertices in P, comprise S.
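The coloring step just described is easy to express in code. The following sketch (ours; again assuming networkx) returns one candidate separation and leaves the retry loop for Conditions (1) and (2) to the caller; subdivision vertices are represented implicitly by flipping one shared coin per edge inside F.

import random
import networkx as nx

def random_separation(G: nx.Graph, F: set, P: set):
    """One coloring round of Lemma 4; returns a candidate (A, S, B)."""
    rest = set(G) - F - P
    comps = list(nx.connected_components(G.subgraph(rest)))
    comp_of = {v: i for i, comp in enumerate(comps) for v in comp}
    red_comp = {i: random.random() < 0.5 for i in range(len(comps))}
    red_edge = {frozenset(e): random.random() < 0.5
                for e in G.subgraph(F).edges()}  # subdivision-vertex coins

    A = {v for v in rest if red_comp[comp_of[v]]}
    B = {v for v in rest if not red_comp[comp_of[v]]}
    for v in F:
        seen = set()
        for u in G.neighbors(v):
            if u in comp_of:
                seen.add(red_comp[comp_of[u]])
            elif u in F:
                seen.add(red_edge[frozenset((u, v))])
            # neighbors in P are ignored: P ends up in S anyway
        if seen == {True}:
            A.add(v)
        elif seen == {False}:
            B.add(v)
    S = set(G) - A - B  # leftover F-vertices plus P
    return A, S, B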

Subclaim 1.

(A, S, B) is a separation.

Proof.

Suppose for contradiction that there is an edge connecting A and B. The edge cannot connect two distinct components of G − F − P, so it must have an endpoint in F. The edge cannot connect a vertex in F to a vertex in a component, since a vertex in F only joins A or B if all of its neighboring components are colored the corresponding color. Therefore, the edge must connect two vertices in F. But then, the corresponding subdivision vertex in U connects to both endpoints and is colored either red or blue, so it is impossible for one endpoint to have all its H-neighbors colored red, and the other endpoint to have all its H-neighbors colored blue, a contradiction. ∎

We now show that with good probability both Conditions (1) and (2) hold. The algorithm can then repeat the process until both conditions hold.

Subclaim 2.

With probability at least 3/4, Condition (1) holds for A.

Proof.

Since deg_H(F) ≤ dk, there are at most k/2 vertices in F with H-degree at least 2d. Since ignoring them only costs us an additive factor in condition (1), we can simply ignore them; let F′ be the vertices of F with H-degree at most 2d, so that |F′| ≥ k/2. Consider the intersection graph I on the vertices of F′, formed by connecting two vertices in F′ iff they share a common neighbor (in H). Since every vertex in F′ has H-degree at most 2d and every vertex in U has degree at most max{2, ε·dk}, the maximum degree of I is at most Δ := 2d·ε·dk = 2εd²k. Using the standard greedy algorithm, we color F′ with Δ + 1 colors so that every color class forms an independent set in I. In particular, within each color class, the outcome of each vertex—namely, whether it joins A or B or S—is independent across vertices.

Let F′_j be the vertices colored j. If |F′_j| ≤ t := 2^{4d+3}·log n, then ignore it; since there are at most Δ + 1 color classes, the sum of all such |F′_j| is at most (Δ + 1)·t, which by our choice of ε and the supposition k ≥ 2^{9d}·log n is at most 2^{−2d}·k/8, so these classes only affect condition (1) by an additive factor. Henceforth, assume that |F′_j| > t. Each vertex in F′_j has at most 2d neighbors in U, so it has independent probability at least 2^{−2d} of joining A. Let X_j be the number of vertices in F′_j that join A; by Hoeffding’s inequality (if X_1, …, X_m are independent Bernoulli random variables and X := Σ_i X_i, then Pr[|X − E[X]| ≥ a] ≤ 2·exp(−2a²/m)),

Pr[X_j ≤ E[X_j]/2] ≤ 2·exp(−2^{−4d}·|F′_j|/2) ≤ 2·n^{−4}

for large enough n.

By a union bound over all color classes with |F′_j| > t, the probability that X_j ≥ E[X_j]/2 for each such j is at least 1 − 2n^{−3} ≥ 3/4. In this case, writing F″ for the union of the color classes that were not ignored,

|A ∩ F| ≥ Σ_j X_j ≥ (1/2)·Σ_{v∈F″} 2^{−deg_H(v)} ≥ (|F″|/2)·2^{−deg_H(F″)/|F″|},

where the last inequality follows from convexity of the function x ↦ 2^{−x}. Recall that |F″| ≥ k/2 − 2^{−2d}·k/8 ≥ 3k/8, and observe that deg_H(F″)/|F″| ≤ 2d, since the vertices in F ∖ F′ are precisely those with degree exceeding the threshold 2d. It follows that

|A ∩ F| ≥ (3k/16)·2^{−2d} ≥ 2^{−2d−3}·k,

proving condition (1) for A. Of course, the argument for B is symmetric. ∎

Subclaim 3.

With probability 1, Condition (2) holds for (A, S, B).

Proof.

At most k − |A ∩ F| − |B ∩ F| vertices in S can come from F, and the other vertices in S must be precisely P, which has size at most 1/ε = 2^{O(d)}·log n. ∎

Lemma 1 (restated). Let G be a graph and F be a feedback vertex set of G of size at most k, and define d := deg(F)/k. There is an algorithm that, given G and F, computes a tree decomposition of G of width at most (1 − 2^{−O(d)})k + 2^{O(d)}·log n, and runs in polynomial time in expectation.

Proof.

If k < 2^{9d}·log n, then the trivial tree decomposition of width k + 1 ≤ 2^{O(d)}·log n already satisfies the claimed bound, so assume otherwise. Compute a separation (A, S, B) following Lemma 4. Conditions (1) and (2) are easily checked in polynomial time, so if one of them fails to hold, then repeatedly compute a separation until they both hold; by Subclaims 2 and 3 this takes an expected O(1) number of attempts, so the algorithm runs in polynomial time in expectation. Since (F ∩ (A ∪ S)) ∪ P is a FVS of G[A ∪ S], we can compute a tree decomposition of G[A ∪ S] of width at most |F ∩ A| + |S| + 1 as follows: start with a tree decomposition of width 1 of the forest G[A ∪ S] − F − P, and then add all vertices in (F ∩ A) ∪ S to each bag. Similarly, compute a tree decomposition of G[B ∪ S] in the same way. Finally, merge the two tree decompositions by adding an edge between an arbitrary node from each decomposition; since there is no edge connecting A to B, and since S appears in every bag, the result is a valid tree decomposition of G. By Conditions (1) and (2), its width is at most

max{|F ∩ A|, |F ∩ B|} + |S| + 1 ≤ k − min{|F ∩ A|, |F ∩ B|} + 2^{O(d)}·log n ≤ (1 − 2^{−2d−3})k + 2^{O(d)}·log n,

as claimed. ∎
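Putting the pieces together, a single attempt of Lemma 1's construction can be sketched as follows, reusing forest_separator and random_separation from above; forest_tree_decomposition builds the trivial width-1 decomposition of a forest, and the checks of Conditions (1) and (2) (with retries) are omitted. All helper names and the default eps are ours, for illustration only.

import networkx as nx

def forest_tree_decomposition(F: nx.Graph):
    """Width-1 decomposition of a forest: bag(v) = {v, parent(v)}."""
    bags, T = {}, nx.Graph()
    for comp in nx.connected_components(F):
        root = next(iter(comp))
        parent = {root: None}
        for u, v in nx.bfs_edges(F, root):
            parent[v] = u
        for v, p in parent.items():
            bags[v] = {v} if p is None else {v, p}
            T.add_node(v)
            if p is not None:
                T.add_edge(v, p)
    return bags, T

def tree_decomposition_from_fvs(G: nx.Graph, F: set, eps: float = 0.01):
    """One attempt at Lemma 1's decomposition (no condition checks)."""
    rest = set(G) - F
    w = {v: sum(1 for u in G.neighbors(v) if u in F) for v in rest}
    P = set(forest_separator(G.subgraph(rest), w, eps * max(sum(w.values()), 1)))
    A, S, B = random_separation(G, F, P)
    bags, T = {}, nx.Graph()
    for side in (A, B):
        side_bags, side_T = forest_tree_decomposition(G.subgraph(side - F))
        if not side_bags:                       # side may have no forest part
            node = ("root", len(bags))
            side_bags, side_T = {node: set()}, nx.Graph()
            side_T.add_node(node)
        extra = (F & side) | S                  # these go into every bag
        prev = next(iter(T.nodes)) if T.number_of_nodes() else None
        bags.update({v: b | extra for v, b in side_bags.items()})
        T = nx.union(T, side_T)
        if prev is not None:                    # glue the two trees together
            T.add_edge(prev, next(iter(side_T.nodes)))
    return bags, T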

4 Probabilistic Reduction

Whenever a reduction fails with a certain probability, we call it a probabilistic reduction. Our probabilistic reduction is inspired by the randomized FVS algorithm of [BBG00]. Whenever we introduce a probabilistic reduction, we include (P) in the header, such as in the reduction below.

Reduction 2 (P).

Assume that Reduction 1 does not apply and that G has a vertex of degree at least 4. Randomly sample a vertex v proportional to deg(v) − 3; that is, each vertex v is selected with probability (deg(v) − 3) / Σ_{u∈V} (deg(u) − 3). Delete v and decrease k by 1.

We say a probabilistic reduction succeeds if it selects a vertex in an optimal feedback vertex set.
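In code, the sampling step is a one-liner around random.choices. This sketch (ours) also falls back to uniform sampling in the 3-regular corner case, where all weights deg(v) − 3 vanish; the text instead handles that case via the average-degree argument.

import random
import networkx as nx

def sample_reduction_vertex(G: nx.Graph):
    """Sample v with probability proportional to deg(v) - 3 (Reduction 2)."""
    vertices = list(G.nodes)
    weights = [G.degree(v) - 3 for v in vertices]  # min degree 3 => all >= 0
    if not any(weights):          # 3-regular corner case: sample uniformly
        weights = None
    return random.choices(vertices, weights=weights, k=1)[0]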

Observation 1.

Let G be a graph and F a FVS of G. Denoting deg(X) := Σ_{v∈X} deg(v), we have that

deg(V(G) ∖ F) ≤ 2n + deg(F).   (1)
Proof.

Since G − F is a forest, there can be at most n − 1 edges in G − F, each of which contributes 2 to the summation deg(V(G) ∖ F). The only other edges contributing to deg(V(G) ∖ F) are those in E(F, V(G) ∖ F), which contribute to both deg(F) and deg(V(G) ∖ F). Therefore, deg(V(G) ∖ F) ≤ 2(n − 1) + e(F, V(G) ∖ F) ≤ 2n + deg(F). ∎

Lemma 5.

If deg(F) ≥ 10k and the instance is feasible, then Reduction 2 succeeds with probability at least 0.35.

Proof.

Let F be a FVS of size exactly k (from any FVS of size less than k, we can arbitrarily add vertices until it has size exactly k). We show that the probability of selecting a vertex in F is at least 0.35. Define P := Σ_{v∈F} (deg(v) − 3) / Σ_{v∈V} (deg(v) − 3), so that our goal is equivalent to showing that P ≥ 0.35.

The value of P can be rewritten as

P = (deg(F) − 3k) / (deg(V) − 3n).   (2)

By Observation 1,

deg(V) − 3n = deg(F) + deg(V ∖ F) − 3n ≤ 2·deg(F) + 2n − 3n ≤ 2·deg(F) − k,   (3)

using n ≥ k in the last step. Therefore,

P ≥ (deg(F) − 3k) / (2·deg(F) − k) ≥ (10k − 3k) / (20k − k) = 7/19 > 0.35,

where the second inequality uses that the middle expression is nondecreasing in deg(F), together with deg(F) ≥ 10k. ∎

Therefore, as long as deg(F) ≥ 10k, we can repeatedly apply Reductions 2 and 1 until either k = 0, which means we have succeeded with probability at least 0.35^k, or we have an instance with deg(F) < 10k.

Later on, we will need the following bound based on the number of edges m. Informally, it says that as long as the average degree of the graph is large enough, Reduction 2 will still succeed with probability close to 1/2 (even better than the bound of Lemma 5).

Lemma 6.

Assume that m ≥ 9k. If the instance is feasible, then Reduction 2 succeeds with probability at least 1/2 − 8k/m.

Proof.

There are at most n edges not contributing to deg(F), namely the edges of the forest G − F, so deg(F) ≥ m − n. Since Reduction 1 does not apply, G has minimum degree at least 3, so n ≤ 2m/3 and hence deg(F) ≥ m/3. Following the proof of Lemma 5, we have

P ≥ (deg(F) − 3k) / (2·deg(F) − k) ≥ (m/3 − 3k) / (2m/3 − k) = (m − 9k) / (2m − 3k) ≥ 1/2 − 8k/m,

where the last inequality holds because m ≥ 9k. Finally, as the Lemma statement is vacuous when m < 16k (the claimed probability is then negative), the Lemma follows. ∎

5 An O*((3 − ε)^k) Time Algorithm

In this section we present our simplest algorithm, which achieves a running time of O*((3 − ε)^k) for some constant ε > 0. The improvement over O*(3^k) is very small, but we found this to be the simplest exposition that achieves the bound O*((3 − ε)^k) for any ε > 0. We build on the following result:

Lemma 7 (Cygan et al. [CNP11]).

There is an algorithm treewidthDP that, given a tree decomposition of the input graph G of width tw and a parameter k, outputs a FVS of G of size at most k with high probability, if one exists. Moreover, the algorithm runs in O*(3^{tw}) time.
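We use treewidthDP strictly as a black box. The Cut&Count dynamic program behind it is not reproduced here; the stub below merely fixes the interface that the later sketches assume.

def treewidth_dp(G, decomposition, k):
    """Black box from [CNP11]: given a (bags, tree) decomposition of width tw,
    return a FVS of G of size <= k (with high probability) or None, in time
    3^tw * poly(n). Not implemented here; see the Cut&Count paper."""
    raise NotImplementedError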

First, we combine the tree decomposition from the previous section with the standard technique of Iterative Compression to build an algorithm that runs in O*(λ^k) time for some constant λ < 3, assuming that m ≤ 60k (recall that m denotes the number of edges of the input graph). Then, we argue that by applying Reduction 2 whenever m > 60k, we can essentially “reduce” to the case m ≤ 60k. Combining these two ideas gives us the algorithm.

The algorithm is introduced below in pseudocode. The iterative compression framework proceeds as follows. We start with the empty graph, and add the vertices of G one by one, while always maintaining a FVS of size at most k of the current graph. Maintaining a FVS of the current graph allows us to use the small tree decomposition procedure of Section 3. We add the next vertex in the ordering to each bag in the tree decomposition, and then solve for a new FVS in O*(λ^k) time using Lemma 7. Of course, if there is no FVS of size at most k in the current graph, then there is no such FVS in G either, so the algorithm can terminate early.

Input: Graph G and parameter k, with m ≤ 60k.
Output: A FVS of G of size at most k, or Infeasible if none exists.

1: Order the vertices arbitrarily as v_1, …, v_n, and denote G_i := G[{v_1, …, v_i}]
2: F_0 ← ∅
3: for i = 1, …, n do ▷ Invariant: F_{i−1} is a FVS of G_{i−1} of size at most k
4:     Compute a tree decomposition of G_{i−1} by applying Lemma 1 on input (G_{i−1}, F_{i−1})
5:     Add v_i to each bag in the tree decomposition
6:     F_i ← a FVS of G_i with parameter k, computed using treewidthDP from Lemma 7
7:     if F_i is Infeasible then
8:         return Infeasible
9: return F_n
Algorithm 1: Iterative compression for FVS
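In Python, the loop of Algorithm 1 reads as follows, reusing tree_decomposition_from_fvs from Section 3 and the treewidth_dp stub above; it runs end-to-end only once a real treewidthDP implementation is supplied. The eps default is an arbitrary illustrative value.

import networkx as nx

def iterative_compression(G: nx.Graph, k: int, eps: float = 0.01):
    """Sketch of Algorithm 1: grow the graph vertex by vertex, maintaining
    a FVS of size <= k of the current induced subgraph."""
    order = list(G.nodes)
    F = set()  # FVS of the empty graph
    for i, v in enumerate(order):
        G_prev = G.subgraph(order[:i])
        bags, tree = tree_decomposition_from_fvs(G_prev, F, eps)  # Lemma 1
        bags = {u: bag | {v} for u, bag in bags.items()}          # Line 5
        F = treewidth_dp(G.subgraph(order[:i + 1]), (bags, tree), k)  # Lemma 7
        if F is None:
            return None  # Infeasible: no FVS of size <= k exists
    return F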
Lemma 8.

On an input instance (G, k) with m ≤ 60k, Algorithm 1 runs in time O*(λ^k) for some constant λ < 3. Moreover, if there exists a FVS of size at most k, then Algorithm 1 will return a FVS of size at most k with high probability.

Proof.

Suppose that there exists a FVS F* of size at most k. Let v_1, …, v_n be the ordering from Line 1, and define F*_i := F* ∩ {v_1, …, v_i}. Observe that F*_i is a FVS of G_i, so the FVS problem on Line 6 is feasible. By Lemma 7, Line 6 correctly computes a FVS with high probability on any given iteration. Therefore, by a union bound over the n iterations, with high probability a FVS is returned successfully.

We now bound the running time. On Line 4, the current set F_{i−1} is a FVS of G_{i−1}. To bound the value of d used in Lemma 1, we use the (rather crude) bound

deg(F_{i−1}) ≤ deg(V) = 2m ≤ 120k,

so that d ≤ 120, where the last inequality uses m ≤ 60k by assumption. Therefore, Lemma 1 guarantees a tree decomposition of width at most (1 − 2^{−O(1)})k + O(log n) = (1 − Ω(1))k + O(log n), and adding v_i to each bag on Line 5 increases the width by at most 1. By Lemma 7, Line 6 runs in O*(3^{(1−Ω(1))k + O(log n)}) = O*(λ^k) time for some constant λ < 3, as desired. ∎

We now claim below that if m ≥ 60k, then Reduction 2 succeeds with good probability (in particular, with probability greater than 1/3).

Lemma 9.

If G has a FVS of size k and m ≥ 60k, then Reduction 2 succeeds with probability at least 0.35.

Proof.

We consider two cases. If deg(F) ≥ 10k, then the success probability is at least 0.35 by Lemma 5. Otherwise, if deg(F) < 10k, then Lemma 6 still applies (since m ≥ 60k ≥ 9k), and together with the trivial bound 8k/m ≤ 8/60 it gives a success probability of at least

1/2 − 8k/m ≥ 1/2 − 8/60 > 0.35.

Hence, regardless of whether or not deg(F) ≥ 10k, Reduction 2 succeeds with probability at least 0.35. ∎

Below is the full randomized algorithm in pseudocode, which combines Reductions 2 and 1 with the iterative compression routine of Algorithm 1. After a trivial check and the reduction rule, Line 3 flips a coin that needs to come up Heads in order to proceed to the iterative compression step.

The motivation for this is that we want each iteration of Algorithm 2 to run quickly in expectation—in particular, in polynomial time—for simplicity of analysis. This way, if the algorithm has success probability α^{−k} for some constant α, then we can repeat it α^k times, succeeding with high probability and taking O*(α^k) time in expectation. Since Algorithm 1 takes O*(λ^k) time by Lemma 8, we should call Algorithm 1 with probability at most λ^{−k}, which is exactly the probability of the coin flipping Heads.

Input: Graph G and parameter k.
Output: A FVS of size at most k with probability at least α^{−k} if one exists; Infeasible otherwise.

1: if k = 0 then return ∅ if G is acyclic, and return Infeasible otherwise
2: Exhaustively apply Reduction 1 to (G, k) to get a vertex set F_0 and an instance (G′, k′) with m′ edges
3: Flip a coin with Heads probability λ^{−k′}
4: if m′ ≤ 60k′ and coin flipped Heads then
5:     F′ ← Algorithm 1(G′, k′)
6: else
7:     Apply Reduction 2 to (G′, k′) to get a vertex v and an instance (G″, k′ − 1)
8:     F′ ← Algorithm 2(G″, k′ − 1) ∪ {v}, where Infeasible ∪ X := Infeasible for any set X
9: return F_0 ∪ F′
Algorithm 2: The full randomized algorithm for FVS
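Finally, a compact sketch (ours) of Algorithm 2's control flow, combining the earlier sketches; lam stands in for the constant λ < 3 of Lemma 8, whose concrete value is not fixed here, and 2.9 is an arbitrary placeholder.

import random
import networkx as nx

def fvs_branching(G: nx.MultiGraph, k: int, lam: float = 2.9):
    """Sketch of Algorithm 2; returns a FVS of size <= k or None."""
    if k == 0:
        return set() if G.number_of_nodes() == 0 or nx.is_forest(G) else None
    G, k, forced = apply_reduction_1(G, k)                # Reduction 1
    if k < 0:
        return None
    if G.number_of_nodes() == 0:
        return set(forced)
    if G.number_of_edges() <= 60 * k and random.random() < lam ** (-k):
        # N.B. parallel edges (2-cycles) would need extra care here.
        rest = iterative_compression(nx.Graph(G), k)      # Algorithm 1
    else:
        v = sample_reduction_vertex(G)                    # Reduction 2
        rest = fvs_branching(G.subgraph(set(G) - {v}).copy(), k - 1, lam)
        rest = None if rest is None else rest | {v}
    return None if rest is None else rest | set(forced)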
Lemma 10.

Algorithm 2 runs in expected polynomial time and has success probability at least α^{−k}, where α := max{λ, 1/0.35} < 3.

Proof.

For the running time, the computation outside of Line 5 clearly takes polynomial time. For each value of k′, Line 5 is executed with probability at most λ^{−k′} and takes O*(λ^{k′}) time, so in expectation, the total computation cost of Line 5 is O*(1) per value of k′, and also O*(1) overall.

It remains to lower bound the success probability. Define α := max{λ, 1/0.35} < 3. We will prove by induction on k that Algorithm 2 succeeds with probability at least α^{−k}. This statement is trivial for k = 0, since no probabilistic reductions are used and Algorithm 2 succeeds with probability 1. For the inductive step, consider the instance (G′, k′) obtained on Line 2. First, suppose that m′ ≤ 60k′. In this case, if Algorithm 1 on Line 5 is executed, then it will run in O*(λ^{k′}) time by Lemma 8, and correctly output a FVS of size at most k′, with high probability. This happens with probability at least

λ^{−k′} ≥ α^{−k′}

(ignoring the negligible failure probability of Lemma 8), as desired. If Algorithm 1 is not executed, then Algorithm 2 can still succeed, but this only increases our overall success probability, so we disregard it.

Otherwise, suppose that m′ > 60k′. Then, by Lemma 9, applying Reduction 2 succeeds with probability at least 0.35 ≥ 1/α. By induction, the recursive call on Line 8 succeeds with probability at least α^{−(k′−1)}, so the overall probability of success is at least

(1/α) · α^{−(k′−1)} = α^{−k′},

as desired. ∎

The claimed O*((3 − ε)^k) time algorithm follows from Lemma 10 by boosting the success probability of Algorithm 2 according to Lemma 2 (with p = α^{−k} and α < 3).

6 Improved Algorithm and Polynomial Space

In this section, we present the O*(2.8446^k) time, polynomial space algorithm promised by Theorem 2. At a high level, our goal is to obtain a tighter bound on deg(F), which we only bounded loosely by 120k in Section 5. Recall that the treewidth bound of (1 − 2^{−O(d)})k from Lemma 1 has an exponential dependence on d, so every constant factor savings in d is crucial.

First, we introduce another simple reduction step, which works well when n ≤ αk for a constant α.

Reduction 3 (P).

Sample a uniformly random vertex v ∈ V(G). Delete v and decrease k by 1.
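In code, Reduction 3 is a one-liner; it succeeds with probability |F|/n ≥ k/n whenever a FVS F of size exactly k exists.

import random

def sample_uniform_vertex(G):
    """Reduction 3: pick a uniformly random vertex to delete."""
    return random.choice(list(G.nodes))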

For the entire section, we will fix a constant α > 1 and obtain a running time that depends on α. At the very end, we will optimize for α and achieve the claimed running time of O*(2.8446^k). For formality, we define the following assumption (A1) and state the corresponding direct claim.

(A1) n ≤ αk.
Claim 1.

If (A1) is true, then Reduction 3 succeeds with probability at least 1/α.

Now suppose that (A1) is false. Observe that Reduction 2 succeeds with probability at least 1/α precisely when

α · (deg(F) − 3k) ≥ Σ_{v∈V} (deg(v) − 3) = 2m − 3n.

By Observation 1, we have

2m = deg(V) ≤ 2·deg(F) + 2n,

and since (A1) is false, n > αk. We are interested in whether or not

deg(F) ≥ 3k + (2m − 3n)/α,

which, if true, would imply that Reduction 2 succeeds with probability at least 1/α. Again, we present the assumption and corresponding claim:

(A2) deg(F) ≥ 3k + (2m − 3n)/α.
Claim 2.

If (A1) is false and (A2) is true, then Reduction 2 succeeds with probability at least 1/α.

An immediate issue with this assumption is that the algorithm does not know deg(F), so it cannot determine whether (A2) is true or not. We circumvent this issue by designing an algorithm that finds feedback vertex sets with an additional bounded total degree property, defined as follows:

Definition 2 (Bounded Total Degree FVS).

In the Bounded Total Degree FVS (BFVS) problem, the input is an unweighted, undirected graph G on n vertices, and parameters k and s. The goal is to either output a FVS F of size at most k satisfying deg(F) ≤ s, or correctly conclude that no such FVS exists.

Input: Graph G and parameters k and s.
Output: A FVS F of size at most k satisfying deg(F) ≤ s, or Infeasible if none exists.