New Tools and Connections for Exponential-time Approximation

08/11/2017 · by Nikhil Bansal et al. · Aalto University · Weizmann Institute of Science · KTH Royal Institute of Technology · TU Eindhoven

In this paper, we develop new tools and connections for exponential-time approximation. In this setting, we are given a problem instance and a parameter α > 1, and the goal is to design an α-approximation algorithm with the fastest possible running time. We show the following results:

- An r-approximation for maximum independent set in O*(exp(Õ(n/(r log² r) + r log² r))) time,
- an r-approximation for chromatic number in O*(exp(Õ(n/(r log r) + r log² r))) time,
- a (2 − 1/r)-approximation for minimum vertex cover in O*(exp(n/r^Ω(r))) time, and
- a (k − 1/r)-approximation for minimum k-hypergraph vertex cover in O*(exp(n/(kr)^Ω(kr))) time.

(Throughout, Õ and O* omit polyloglog(r) factors and factors polynomial in the input size, respectively.) The best known time bounds for all these problems were O*(2^(n/r)) [Bourgeois et al. 2009, 2011; Cygan et al. 2008]. For maximum independent set and chromatic number, these bounds were complemented by exp(n^(1−o(1))/r^(1+o(1))) lower bounds under the Exponential Time Hypothesis (ETH) [Chalermsook et al. 2013; Laekhanukit 2014 (Ph.D. thesis)]. Our results show that the natural-looking O*(2^(n/r)) bounds are not tight for all these problems. The key to these algorithmic results is a sparsification procedure that allows the use of better approximation algorithms for bounded-degree graphs. For the first two results, we introduce a new randomized branching rule. Finally, we show a connection between PCP parameters and exponential-time approximation algorithms. This connection, together with our independent set algorithm, refutes the possibility of overly reducing the size of Chan's PCP [Chan 2016]. It also implies that a (significant) improvement over our result would refute the Gap-ETH conjecture [Dinur 2016; Manurangsi and Raghavendra 2016].


1 Introduction

The Independent Set, Vertex Cover, and Coloring problems are central problems in combinatorial optimization and have been studied extensively. Most classical results concern either approximation algorithms that run in polynomial time or exact algorithms that run in (sub)exponential time. While these algorithms are useful in many scenarios, they lack flexibility: sometimes we wish for a better approximation ratio at the cost of a worse running time (e.g., on computationally powerful devices), and sometimes for faster algorithms with less accuracy. In such settings, trade-offs between running time and approximation ratio are needed.

Algorithmic results on trade-offs between approximation ratio and running time have already been studied in the literature in several settings, most notably in the context of Polynomial-Time Approximation Schemes (PTAS). For instance, in planar graphs, Baker's celebrated approximation scheme for several NP-hard problems [1] gives a (1 − 1/r)-approximation for e.g. Independent Set in time 2^{O(r)} n^{O(1)}. In graphs of small treewidth, Czumaj et al. [14] give a 2^{O(t/r)} n^{O(1)}-time algorithm that, given a graph along with a tree decomposition of it of width at most t, finds an r-approximation for Independent Set. For general graphs, approximation results for several problems have been studied in several works (see e.g. [5, 6, 7, 13, 12, 11]). A basic building block behind many of these results is to partition the input instance into smaller parts in which the optimal (sub)solution can be computed quickly (or at least faster than in fully exponential time). For example, to obtain an r-approximation for Independent Set one may arbitrarily partition the vertex set into r blocks of n/r vertices each and restrict attention to independent sets that are subsets of these blocks, yielding an O*(2^{n/r})-time r-approximation algorithm.
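The partition argument above can be sketched in a few lines (an illustrative Python sketch, not from the paper; the helper names are ours): split the vertex set into r blocks, brute-force the best independent set inside each block, and return the largest one found. Some block contains at least a 1/r fraction of any maximum independent set, so this is an r-approximation, at the cost of enumerating roughly 2^{n/r} subsets per block.

```python
from itertools import combinations

def partition_approx_mis(vertices, edges, r):
    """r-approximate maximum independent set via the partition argument.

    Split the vertex set into r blocks of size ~n/r and brute-force the
    best independent set inside each block. A maximum independent set
    puts >= OPT/r vertices in some block, so the best block-restricted
    solution is an r-approximation.
    """
    vertices = list(vertices)
    n = len(vertices)
    size = -(-n // r)  # ceil(n / r)
    blocks = [vertices[i:i + size] for i in range(0, n, size)]

    def independent(s):
        return not any({u, v} <= s for u, v in edges)

    best = set()
    for block in blocks:
        # Enumerate subsets from largest to smallest; stop at the first
        # (hence largest) independent subset of this block.
        for k in range(len(block), 0, -1):
            found = False
            for cand in combinations(block, k):
                if independent(set(cand)):
                    if k > len(best):
                        best = set(cand)
                    found = True
                    break
            if found:
                break
    return best
```

On a 5-cycle with r = 2 this inspects the blocks {0, 1, 2} and {3, 4} and returns an optimal solution of size 2.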

While at first sight one might think that such a naïve algorithm should be easily improvable via more advanced techniques, it was shown in [9, 5] that almost-linear-size PCPs [15, 30] imply that r-approximating Independent Set [9] and Coloring [27] requires at least exp(n^{1−o(1)}/r^{1+o(1)}) time assuming the popular Exponential Time Hypothesis (ETH). In the setting of the more sophisticated Baker-style approximation schemes for planar graphs, Marx [29] showed that, assuming ETH, no (1 − 1/r)-approximation algorithm for planar Independent Set can run in time 2^{O(r^{1−δ})} n^{O(1)} for any δ > 0, which implies that the algorithm of Czumaj et al. cannot be improved to run in time 2^{O(t/r^{1+δ})} n^{O(1)}.

These lower bounds, despite being interesting, are far from tight and by no means settle whether the known approximation trade-offs can be improved significantly; in fact, in many settings we are far from understanding the full power of exponential-time approximation. For example, we cannot exclude algorithms that r-approximate k-Independent Set (that is, given a graph and an integer k, answer YES if it has an independent set of size at least k and NO if it has no independent set of size at least k/r) in time f(r) · n^{O(1)} for some function f (see e.g. [26]), nor do we know algorithms that run asymptotically faster than the fastest exact algorithm, which runs in time n^{ωk/3 + O(1)} [31].

In this paper we aim to advance this understanding and study the question of designing approximation algorithms that are as fast as possible for Independent Set, Coloring, and Vertex Cover in general (hyper)graphs.

Our Results.

For Independent Set our result is the following. Here we use Õ to omit polyloglog factors in r.

There is a randomized algorithm that, given an n-vertex graph G and an integer r, outputs an independent set that, with constant probability, has size at least α(G)/r, where α(G) denotes the maximum independent set size of G. The algorithm runs in time O*(exp(Õ(n/(r log² r) + r log² r))).

To prove this result we introduce a new randomized branching rule, which we now describe and put in context with respect to previous results. It builds on a sparsification technique that reduces the maximum degree to a given number d. This technique was already studied in the setting of exponential-time approximation algorithms for Independent Set by Cygan et al. (see [11, paragraph 'Search Tree Techniques']) and Bourgeois et al. (see [7, Section 2.1]), but the authors did not obtain running times sub-exponential in n/r. Specifically, the sparsification technique is to branch on vertices of sufficiently high degree: select a vertex v, and recurse on both the possibility that v is included in the independent set and the possibility that it is discarded. The key property is that if we decide to include a vertex v in the independent set, we may discard all neighbors of v. If we generate instances by repeatedly branching on vertices of degree at least d until the maximum degree is smaller than d, then at most 2^{O((n/d) log d)} instances are created. In each such instance, the maximum independent set can easily be d-approximated by a greedy argument. Cygan et al. [11] note that this gives running times worse than O*(2^{n/r}).
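The deterministic branch-and-greedy sparsification described above can be sketched as follows (an illustrative Python sketch under our own naming; the paper's actual analysis is more careful):

```python
def branch_and_greedy(adj, d):
    """Sparsify by branching on vertices of degree >= d, then run the
    greedy d-approximation on each bounded-degree leaf instance.

    adj maps vertex -> set of neighbours. In every leaf the maximum
    degree is < d, so greedy picks >= n/d vertices, a d-approximation
    (OPT is at most n).
    """
    def high_degree_vertex(g):
        return next((v for v in g if len(g[v]) >= d), None)

    def delete(g, removed):
        return {v: g[v] - removed for v in g if v not in removed}

    def greedy(g):
        # Max degree < d here: each pick removes <= d vertices,
        # so at least |g|/d vertices are picked.
        g, picked = dict(g), set()
        while g:
            v = min(g, key=lambda u: len(g[u]))
            picked.add(v)
            g = delete(g, {v} | g[v])
        return picked

    def solve(g):
        v = high_degree_vertex(g)
        if v is None:
            return greedy(g)
        exclude = solve(delete(g, {v}))                # discard v
        include = {v} | solve(delete(g, {v} | g[v]))   # take v, drop N(v)
        return max(exclude, include, key=len)

    return solve(adj)
```

On a star K_{1,4} with d = 3, the recursion branches on the center and correctly prefers the four leaves.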

Our algorithm follows this line but incorporates two (simple) ideas. Our first observation is that instead of solving each leaf instance by the greedy d-approximation algorithm, one can use a recent approximation algorithm by Bansal et al. [2] for Independent Set on bounded-degree graphs. Choosing d appropriately, this immediately gives an improvement over the O*(2^{n/r}) bound. To improve this further we present an additional (more innovative) idea that introduces randomization. This idea relies on the fact that in the sparsification step we have (unexploited) slack, as we only aim for an approximation. (This observation was already made by Bourgeois et al. [7], but we exploit it in a new way.) Specifically, whenever we branch, we only consider the 'include' branch with a suitably chosen probability p. This lowers the expected number of produced leaf instances in the sparsification step and preserves the approximation factor with good probability.
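The randomized rule can be grafted onto the same recursion; the sketch below (ours, with an illustrative probability parameter p and a pluggable leaf solver standing in for the bounded-degree algorithm of Bansal et al.) shows the only change: the 'include' branch is explored only with probability p.

```python
import random

def randomized_branch(adj, d, p, leaf_solver):
    """Randomized variant of the branching rule: the 'exclude' branch is
    always explored, but the 'include' branch only with probability p.
    This cuts the expected number of leaf instances while preserving the
    approximation guarantee with good probability, exploiting the slack
    available when only an approximate solution is required.

    leaf_solver: any routine for graphs of maximum degree < d (e.g. the
    greedy d-approximation). p is an illustrative parameter; the paper's
    choice is tuned to the target approximation ratio.
    """
    def delete(g, removed):
        return {v: g[v] - removed for v in g if v not in removed}

    def solve(g):
        v = next((u for u in g if len(g[u]) >= d), None)
        if v is None:
            return leaf_solver(g)
        best = solve(delete(g, {v}))                  # always: discard v
        if random.random() < p:                       # sometimes: take v
            cand = {v} | solve(delete(g, {v} | g[v]))
            if len(cand) > len(best):
                best = cand
        return best

    return solve(adj)
```

With p = 1 the recursion degenerates to the full deterministic branching of the previous sketch.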

Via fairly standard methods (see e.g. [4]) we show that this also gives a faster algorithm for coloring, in the following sense:

There is a randomized algorithm that, given an n-vertex graph G and an integer r, outputs with constant probability a proper coloring of G using at most r · χ(G) colors. The algorithm runs in time O*(exp(Õ(n/(r log r) + r log² r))).

As a final indication that sparsification is a powerful tool for obtaining fast exponential-time approximation algorithms, we show that a combination of a result of Halperin [20] and the sparsification lemma [22] gives the following result for the Vertex Cover problem in hypergraphs with edges of size at most k (equivalently, the Set Cover problem with frequency at most k).

For every k ≥ 2 there is a constant c > 0 such that, for every r ≥ 1, there is an O*(exp(n/(kr)^{ckr}))-time (k − 1/r)-approximation algorithm for the Vertex Cover problem in hypergraphs with edges of size at most k.

Note that for k = 2 (i.e., vertex cover in graphs) this gives an O*(exp(n/r^{Ω(r)})) running time, an exponential improvement (in the denominator of the exponent) upon the (2 − 1/r)-approximation of Bourgeois et al. [7] that runs in time O*(2^{n/r}). It was recently brought to our attention that Williams and Yu [32] independently have unpublished results for (hypergraph) vertex cover and independent set using sparsification techniques similar to ours.

Connections to PCP parameters

The question of approximating the maximum independent set problem in sub-exponential time has close connections to the trade-off between three important parameters of PCPs: size, gap and free-bit. We discuss the implications of our algorithmic results in terms of these PCP parameters.

Roughly speaking, the gap parameter is the ratio between completeness and soundness, while the freeness parameter is the number of distinct local proofs that would cause the verifier to accept; the free-bit complexity is simply the logarithm of the freeness. For convenience, we will continue our discussion in terms of freeness instead of free-bits.

  • Free-bits vs. gap: The dependency between freeness and gap has played an important role in hardness of approximation. Most notably, the existence of PCPs whose freeness grows much more slowly than their gap is "equivalent" to strong hardness of approximating maximum independent set [21, 3]; this result is a building block for proving hardness of approximation for many other combinatorial problems, e.g., coloring [19], disjoint paths, induced matching, cycle packing, and pricing. So it is fair to say that this PCP parameter trade-off captures the approximability of many natural combinatorial problems.

    Better parameter trade-offs imply stronger hardness results. The existence of a PCP with arbitrarily large gap and freeness 2 (the lowest possible) is in fact equivalent to (2 − ε)-inapproximability for Vertex Cover. The best known trade-off is due to Chan [10]: for any gap g, there is a polynomial-sized PCP with gap g and freeness polylogarithmic in g, yielding the best known NP-hardness of approximating maximum independent set in sparse graphs, i.e., NP-hardness of approximating maximum independent set in degree-d graphs to within a d/polylog(d) factor. (Roughly speaking, the existence of a PCP with gap g and freeness F implies hardness of approximating independent set in graphs of degree roughly g · F.)

  • Size, free-bits, and gap: When polynomial-time approximation is the main concern, polynomial-size PCPs are the only ones that matter. But when it comes to exponential-time approximability, another important parameter, the size of the PCP, comes into play. The trade-off between size, free-bits, and gap tightly captures the (sub-)exponential-time approximability of many combinatorial problems. For instance, Moshkovitz and Raz [30] construct PCPs of nearly linear size n^{1+o(1)}; this implies that r-approximating Independent Set requires exp(n^{1−o(1)}/r^{1+o(1)}) time [9].

Our exponential-time approximation result for Independent Set implies the following tradeoff results.

Unless ETH breaks, a free-bit PCP with gap g, freeness F, and size s must satisfy a trade-off lower-bounding the size s in terms of g and F (made precise in Section 4).

In particular, this implies that (i) Chan's PCP cannot be made significantly smaller unless ETH breaks, and (ii) in light of the equivalence between gap-amplifying free-bit PCPs with freeness 2 and (2 − ε)-approximation hardness for Vertex Cover, our result shows that such a PCP must obey the same size lower bound. We remark that no such trade-off results were known for polynomial-sized PCPs. To our knowledge, this is the first result of its kind.

Further related results

The best known results for Independent Set in the polynomial-time regime are an O(n (log log n)²/log³ n)-approximation [17] and n^{1−o(1)}-hardness of approximation (which also holds for Coloring) [24]. For Vertex Cover, the best known hardness of approximation is (√2 − ε) NP-hardness [23] and (2 − ε)-hardness assuming the Unique Games Conjecture [25]. None of the three problems (Independent Set, Coloring, and Vertex Cover) admits an exact algorithm running in time 2^{o(n)}, unless ETH fails. Besides the aforementioned works [7, 11], sparsification techniques for exponential-time approximation were studied by Bonnet and Paschos in [6], but mainly hardness results were obtained.

2 Preliminaries

We first formally define the three problems that we consider in this paper.

Independent Set: Given a graph G = (V, E), we say that S ⊆ V is an independent set if there is no edge with both endpoints in S. The goal of Independent Set is to output an independent set of maximum cardinality. We denote by α(G) the cardinality of a maximum independent set of G.

Vertex Cover: Given a graph G = (V, E), we say that C ⊆ V is a vertex cover of G if every edge is incident to at least one vertex in C. The goal of Vertex Cover is to output a vertex cover of minimum size. A generalization of vertex cover, called k-Hypergraph Vertex Cover (k-Vertex Cover), is defined as follows: given a hypergraph H = (V, E) where each hyperedge has cardinality at most k, the goal is to find a collection of vertices C ⊆ V such that each hyperedge is incident to at least one vertex in C, while minimizing |C|. The degree of a hypergraph is the maximum frequency of an element.

Coloring: Given a graph G = (V, E), a proper q-coloring of G is a function c : V → {1, …, q} such that c(u) ≠ c(v) for all {u, v} ∈ E. The goal of Coloring is to compute the minimum integer q such that G admits a proper q-coloring; this number is referred to as the chromatic number, denoted χ(G).
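The complementarity between the first two definitions (C is a vertex cover exactly when V \ C is an independent set) can be checked mechanically; a small Python sketch (ours, for illustration):

```python
def is_independent(edges, s):
    """s is independent iff no edge has both endpoints in s."""
    return not any({u, v} <= s for u, v in edges)

def is_vertex_cover(edges, c):
    """c is a vertex cover iff every edge meets c."""
    return all({u, v} & c for u, v in edges)

def complement_check(vertices, edges, c):
    """C covers every edge exactly when V \\ C spans no edge."""
    return is_vertex_cover(edges, c) == \
           is_independent(edges, set(vertices) - c)
```

The equivalence holds for every vertex subset, which is why minimizing a vertex cover and maximizing an independent set are the same problem exactly, but behave very differently under approximation.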

For a graph G = (V, E), N(v) denotes the set of neighbors of v, and N[v] denotes N(v) ∪ {v}. If X ⊆ V, we let G − X denote the graph G[V \ X], i.e., the subgraph of G induced by V \ X. We use exp(x) to denote e^x in order to avoid superscripts. We use the O*-notation to suppress factors polynomial in the input size. We use Õ and Ω̃ to suppress factors polyloglog in r in, respectively, upper and lower bounds, and write Θ̃ for all functions that are in both Õ and Ω̃.

3 Faster Approximation via Randomized Branching and Sparsification

3.1 Maximum Independent Set

In this section we prove Theorem 1. Below is our key lemma.

Suppose there is an approximation algorithm that runs in time and outputs an Independent Set of of size if has maximum degree , (where ). Then there is an algorithm running in expected time that outputs an independent set of expected size .

Proof.

Consider the following algorithm.

0:  ALG(G)
1:  if Δ(G) ≥ d then
2:     Select a vertex v of degree at least d and draw a random Boolean variable B with Pr[B = 1] = p.
3:     if B = 1 then
4:        return the largest of ALG(G − v) and {v} ∪ ALG(G − N[v]).
5:     else
6:        return ALG(G − v).
7:  else
8:     return A(G).
Figure 1: Approximation algorithm ALG for Independent Set using an approximation algorithm A that works in bounded-degree graphs.

For convenience, let us fix and . We start by analyzing the expected running time of this algorithm. Per recursive call the algorithm clearly uses time. It remains to bound the number of recursive calls made by when has vertices. We will bound for by induction on . Note that here is chosen such that

(1)

where we use for the inequality. For the base case of the induction, note that if the condition at Line 1 does not hold, the algorithm does not use any recursive calls and the statement is trivial as is clearly positive. For the inductive step, we see that

We continue by analyzing the output of the algorithm. It clearly returns a valid independent set as all neighbors of are discarded when is included in Line 4 and an independent set is returned at Line 8. It remains to show which we do by induction on . In the base case in which no recursive call is made, note that on Line 8 we indeed obtain an -approximation as has maximum degree . For the inductive case, let be a maximum independent set of and let be the vertex as picked on Line 1. We distinguish two cases based on whether . If , then and the inductive step follows as by the induction hypothesis. Otherwise, if , then is at least

as required. Here the first inequality uses the induction hypothesis twice. ∎

We will invoke the above lemma by using the algorithm by Bansal et al. [2] implied by the following theorem:

[[2], Theorem 1.3] There is a polynomial-time Õ(d/log^{3/2} d)-approximation algorithm for Independent Set on graphs of maximum degree d.

Proof of Theorem 1.

We may apply Lemma 3.1 with and, by virtue of Theorem 3.1, with , and . We obtain an expected time algorithm that outputs an independent set of expected size .

Since the size of the output is upper bounded by we obtain an independent set of size at least with probability at least , and we may boost this probability to by repetitions.

By Markov’s inequality these repetitions together run in time with probability . The theorem statement follows by a union bound as these repetitions run in the claimed running time and simultaneously some repetition finds an independent set of size at least , with probability at least . ∎

A deterministic algorithm:

Additionally, we show a deterministic approximation algorithm with a comparable guarantee. The algorithm uses Feige's algorithm [17] as a black box, and is deferred to Appendix A.

3.2 Graph Coloring

Now we use the approximation algorithm for Independent Set as a subroutine for an approximation algorithm for Coloring to prove Theorem 1 as follows:

Proof of Theorem 1.

The algorithm combines the approximation algorithm from Section 3.1 for Independent Set with an exact algorithm for Coloring (see, e.g., [4]) as follows:

0:  ALG-COLOR(G)
1:  Let i ← 1, G₁ ← G.
2:  while |V(G_i)| is larger than a suitable threshold do
3:     I_i ← the independent set returned by the algorithm of Section 3.1 on G_i.
4:     G_{i+1} ← G_i − I_i.
5:     i ← i + 1.
6:  Let c be some optimum coloring of the remaining graph G_i.
7:  return the color classes I_1, …, I_{i−1} together with c.
Figure 2: Approximation algorithm for the chromatic number.

We claim that ALG-COLOR returns with high probability a proper coloring of G using the claimed number of colors; to prove the theorem, we invoke it with a suitably rescaled approximation parameter, which does not change the asymptotic running time. First, note that in each iteration of the while loop (Line 2 of Algorithm 2), |V(G_i)| is decreased by a multiplicative factor of at most 1 − 1/(r · χ(G)), because G_i must have an independent set of size at least |V(G_i)|/χ(G) and therefore |I_i| ≥ |V(G_i)|/(r · χ(G)). Before the last iteration, |V(G_i)| still exceeds the stopping threshold. Thus, the number of iterations is bounded accordingly.

This bounds the number of colors used in the first phase of the algorithm (Lines 1 to 5), since each iteration introduces one new color class. The claimed upper bound on the total number of colors follows because the number of colors used for G_i in the second phase (Line 6) is clearly upper bounded by χ(G).

To upper bound the running time, note that the invocation of the Independent Set algorithm on Line 3 runs within the time bound of Theorem 1, and that, implementing Line 6 with the O*(2^{|V(G_i)|})-time exact coloring algorithm from [4], Line 6 also stays within the claimed time bound; the overall running time follows. ∎
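The first phase of ALG-COLOR, stripped of the base-case cutoff and with the independent-set routine left abstract, can be sketched as follows (an illustrative Python sketch with our own names; the paper's version switches to exact coloring on a small remainder):

```python
def color_via_independent_sets(adj, find_is):
    """Repeatedly ask an independent-set routine for an independent set
    in the remaining graph, assign it a fresh color, and remove it.

    With an r-approximate routine, each round removes at least a
    1/(r * chi(G)) fraction of the remaining vertices, which is what
    bounds the number of colors used.

    find_is: any routine returning a non-empty independent set of a
    non-empty graph; it stands in for the algorithm of Section 3.1.
    """
    colors, c = {}, 0
    g = {v: set(adj[v]) for v in adj}
    while g:
        s = find_is(g)
        for v in s:
            colors[v] = c       # one fresh color per round
        c += 1
        g = {v: g[v] - s for v in g if v not in s}
    return colors
```

Plugging in even a simple maximal-independent-set heuristic already yields a proper coloring; the quality of `find_is` only affects the number of colors.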

3.3 Vertex Cover and Hypergraph Vertex Cover

In this section, we show an application of the sparsification technique to Vertex Cover to obtain Theorem 1. Here the sparsification step is not applied explicitly. Instead, we use the sparsification lemma of Impagliazzo et al. [22] as a black box, and subsequently solve each low-degree instance using an algorithm of Halperin [20]. The sparsification lemma shows that an instance of the k-Hypergraph Vertex Cover problem can be reduced to a (sub-)exponential number of low-degree instances. (The original formulation is for the Set Cover problem, and the most popular formulation is for the CNF-SAT problem, but they are all equivalent via direct transformations.)

[Sparsification Lemma, [22, 8]] There is an algorithm that, given a hypergraph H with edges of size at most k and a real number ε > 0, produces set systems H_1, …, H_t with edges of size at most k in time O*(2^{εn}) such that

  1. every subset C ⊆ V is a vertex cover of H if and only if C is a vertex cover of H_i for some i ∈ {1, …, t},

  2. for every i, the degree of H_i is at most (k/ε)^{O(k)},

  3. t is at most 2^{εn}.

The next tool is an approximation algorithm, due to Halperin [20], for the k-Hypergraph Vertex Cover problem when the input hypergraph has low degree.

[[20]] There is a polynomial-time (k − (1 − o(1)) · k(k − 1) ln ln Δ / ln Δ)-approximation algorithm for the vertex cover problem in hypergraphs with edges of size at most k in which every element has degree at most Δ, for large enough Δ.

Now we complete the proof of the theorem by applying Lemma 3.3 with parameter ε = (kr)^{−Θ(kr)}. The number of low-degree instances produced by Lemma 3.3 is at most 2^{εn}. Each hypergraph has degree at most Δ = (k/ε)^{O(k)}. Note that for this choice of ε we have ln ln Δ / ln Δ = Ω(1/(k²r)).

Plugging this value of Δ into Halperin's algorithm gives an approximation factor of k − Ω(1/r); rescaling r by a constant factor yields the claimed k − 1/r.

Thus this gives a (k − 1/r)-approximation running in time O*(2^{εn}), which translates to a (k − 1/r)-approximation running in time O*(exp(n/(kr)^{Ω(kr)})).

4 PCP Parameters and Exponential-time approximation hardness

Exponential-time approximation has connections to trade-off questions between three parameters of PCPs: size, free-bits, and gap. To formally quantify this connection, we define new terms, formalizing ideas that have already been around in the literature. We define a class of languages FGPCP, which stands for Freebit and Gap-amplifiable PCP. Let s and F be non-decreasing functions. A language L is in FGPCP(s, F) if there is a constant c₀ > 0 such that, for all constants g, there is a verifier V_g that, on input x of length n, has access to a proof π and satisfies the following properties:

  • The verifier runs in time polynomial in s(n).

  • If x ∈ L, then there is a proof π such that V_g accepts with probability at least c₀.

  • If x ∉ L, then for any proof π, V_g accepts with probability at most c₀/g.

  • For each input x and each random string R, the verifier has at most F(g) accepting configurations.

The parameters g, s, and log₂ F are referred to as the gap, size, and free-bit complexity of the PCP, respectively. For convenience, we call F the freeness of the PCP. An intuitive way to view this PCP is as a class of PCPs parameterized by the gap g. An interesting question in the PCP and hardness-of-approximation literature has been to find the smallest possible functions s and F.

If for some function that is at least linearly growing in , then for any constant , -approximating Independent Set, in input graph , cannot be done in time unless ETH fails. (We think of g as a fixed number, and therefore s should be seen as a function of the single variable n.)

We prove the theorem later in this section.

Assuming that SAT has no -time randomized algorithm and that , then it must be the case that .

Proof.

Otherwise, Theorem 4 would imply that no such algorithm exists, contradicting the existence of our Independent Set approximation algorithm. ∎

Now let us phrase the known PCPs in our FGPCP framework. Chan's PCPs [10] can be restated as a membership of SAT in FGPCP with the corresponding size and freeness functions. Applying our results, this means that if one wants to keep the free-bit parameters given by Chan's PCPs, then the size cannot be significantly reduced. Another interesting consequence is a connection between Vertex Cover and free-bit PCPs in the polynomial-time setting [3].

[[3]] Vertex Cover is hard to approximate within a factor of 2 − ε if and only if SAT admits gap-amplifiable PCPs with freeness 2.

The intended PCPs in Theorem 4 have arbitrarily small soundness while the freeness remains 2. Our Corollary 4 implies that such a PCP must obey our size lower bound.

4.1 Proof of Theorem 4

Step 1: Creating a hard CSP

We will need the following lemma that creates a “hard” CSP from FGPCP. This CSP will be used later to construct a hard instance of Independent Set.

If , then, for any , there is a randomized reduction from an -variable SAT to a CSP having the following properties (w.h.p.):

  • The number of variables of is .

  • The number of clauses of is .

  • The freeness of is .

  • If is satisfiable, then . Otherwise, .

Proof.

Let g be any number and V_g be the corresponding verifier. On input φ, we create a CSP Φ as follows. For each proof bit i, we have a variable x_i; the set of variables is {x_i}_i. We perform m iterations. In iteration j, the verifier picks a random string R_j and creates a predicate P_j(x_{i_1}, …, x_{i_q}), where i_1, …, i_q are the proof bits read by the verifier on random string R_j. This predicate is true on an assignment σ if and only if the verifier accepts the corresponding local assignment, i.e., when proof bit i_ℓ is set to σ(x_{i_ℓ}) for all ℓ.

First, assume that φ is satisfiable. Then there is a proof π such that the verifier accepts with probability at least c₀ (the completeness). Let σ be the assignment that agrees with the proof π. Then σ satisfies each sampled predicate with probability at least c₀, and therefore the expected number of satisfied predicates is at least c₀m. By a Chernoff bound, the probability that σ satisfies fewer than c₀m/2 predicates is at most e^{−Ω(m)}.

Next, assume that φ is not satisfiable. For each assignment σ, the fraction of random strings accepted under the corresponding proof is at most c₀/g. When we pick a random string, the probability that the verifier accepts under σ is then at most c₀/g. So, over all choices of the m strings, the expected number of satisfied predicates is at most c₀m/g. By a Chernoff bound, the probability that σ satisfies more than 2c₀m/g predicates is at most e^{−Ω(c₀m/g)}. By a union bound over all possible proofs of length ℓ (there are 2^ℓ such proofs), the probability that such a σ exists is at most 2^ℓ · e^{−Ω(c₀m/g)}. ∎
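Both concentration steps use the standard multiplicative Chernoff bound; a form sufficient for the argument (stated here for reference, with X the number of satisfied predicates among the m sampled ones) is:

```latex
% X = X_1 + \dots + X_m with independent indicator variables X_i
% and \mathbb{E}[X] = \mu m:
\Pr\bigl[X \le (1-\delta)\mu m\bigr] \le e^{-\delta^2 \mu m / 2},
\qquad
\Pr\bigl[X \ge (1+\delta)\mu m\bigr] \le e^{-\delta^2 \mu m / 3},
\qquad 0 < \delta \le 1.
```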

Step 2: FGLSS reduction

The FGLSS reduction is a standard reduction from CSPs to Independent Set, introduced by Feige et al. [18]. The reduction simply lists all possible configurations (partial assignments) of each clause as vertices, and adds edges whenever two configurations conflict. In more detail, for each predicate P and each partial assignment σ of its variables such that P(σ) is true, we have a vertex (P, σ). For each pair of vertices (P, σ), (P′, σ′) such that some variable appears in both P and P′ but σ and σ′ assign it differently, we have an edge between the two vertices. [FGLSS Reduction [18]] There is an algorithm that, given an input CSP Φ with m clauses, n variables, and freeness F, produces a graph G such that (i) |V(G)| ≤ m · F and (ii) α(G) = val(Φ), where val(Φ) denotes the maximum number of predicates of Φ that can be satisfied by a single assignment.
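A literal implementation of the reduction is short (an illustrative Python sketch with our own data representation: each predicate is a scope of variable names plus a boolean check on bit tuples):

```python
from itertools import product

def fglss(predicates):
    """FGLSS reduction: one vertex per (predicate, satisfying partial
    assignment) pair; an edge whenever two partial assignments disagree
    on a shared variable. An independent set of size t then corresponds
    to an assignment satisfying t predicates.

    predicates: list of (scope, check) where scope is a tuple of
    variable names and check maps a tuple of bits to True/False.
    """
    vertices = []
    for i, (scope, check) in enumerate(predicates):
        for bits in product((0, 1), repeat=len(scope)):
            if check(bits):
                vertices.append((i, dict(zip(scope, bits))))
    edges = []
    for a in range(len(vertices)):
        for b in range(a + 1, len(vertices)):
            pa, pb = vertices[a][1], vertices[b][1]
            # Conflict edge: some shared variable assigned differently.
            if any(pa[x] != pb[x] for x in pa.keys() & pb.keys()):
                edges.append((a, b))
    return vertices, edges
```

Note that two distinct satisfying configurations of the same predicate automatically conflict (they differ on some variable of the shared scope), so configurations of one predicate form a clique, as the reduction requires.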

Combining everything

Assume that SAT ∈ FGPCP(s, F). Let g be a constant and V_g be the verifier for SAT that gives a gap of g. By invoking Lemma 4.1, we obtain a CSP Φ whose number of variables, number of clauses, freeness, and gap are as stated in the lemma. Applying the FGLSS reduction, we obtain a graph G whose size is the number of clauses times the freeness. Now assume that we have an algorithm that achieves the corresponding approximation factor within the forbidden running time. Then this algorithm distinguishes between Yes- and No-instances of SAT in sub-exponential time, a contradiction.

Hardness under Gap-ETH:

Dinur [16] and Manurangsi and Raghavendra [28] conjectured (Gap-ETH) that approximating Max 3SAT within some constant factor does not admit an algorithm running in time 2^{o(n)}. We observe a Gap-ETH-hardness of r-approximating Independent Set in time exp(n/r^c) for some constant c. The proof uses a standard amplification technique and is deferred to Appendix B.

5 Further Research

Our work leaves ample opportunity for exciting research. An obvious open question is whether our branching can be derandomized, e.g., whether Theorem 1 can be proved without randomization. While the probabilistic approximation guarantee can easily be derandomized by using a random partition of the vertex set into parts, or by using splitters, it seems harder to strengthen the expected running time bound to a worst-case running time bound.

Can we improve the running times of the other algorithms mentioned in the introduction that use the partition argument, possibly using the randomized branching strategy? Specifically, can we approximate Independent Set on planar graphs, or on graphs of small treewidth, correspondingly faster? As mentioned in the introduction, the result of Marx [29] still leaves room for such lower-order improvements. Another open question in this category is how fast we can r-approximate k-Independent Set, where the goal is to find an independent set of size k/r in a graph that has one of size k. For example, no n^{k/f(r)}-time algorithm is known, where f is a non-trivial super-linear function of r, that distinguishes graphs with α(G) ≥ k from graphs with α(G) < k/r. The partition argument gives only a running time of n^{O(k/r)}, and no strong lower bounds are known for this problem. Finally, a big open question in the area is to find or exclude a (2 − ε)-approximation for Vertex Cover in graphs in subexponential time for some fixed constant ε > 0.

Acknowledgment

NB is supported by a NWO Vidi grant 639.022.211 and ERC consolidator grant 617951. BL is supported by ISF Grant No. 621/12 and I-CORE Grant No. 4/11. DN is supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme under grant agreement No 715672 and the Swedish Research Council (Reg. No. 2015-04659). JN is supported by NWO Veni grant 639.021.438.

References

  • [1] Brenda S. Baker. Approximation algorithms for NP-complete problems on planar graphs. J. ACM, 41(1):153–180, 1994.
  • [2] Nikhil Bansal, Anupam Gupta, and Guru Guruganesh. On the Lovász theta function for independent sets in sparse graphs. In Symposium on Theory of Computing, STOC, pages 193–200, 2015.
  • [3] Mihir Bellare, Oded Goldreich, and Madhu Sudan. Free bits, PCPs, and nonapproximability: towards tight results. SIAM J. Comput., 27(3):804–915, 1998.
  • [4] Andreas Björklund, Thore Husfeldt, and Mikko Koivisto. Set partitioning via inclusion-exclusion. SIAM J. Comput., 39(2):546–563, 2009.
  • [5] Édouard Bonnet, Michael Lampis, and Vangelis Th. Paschos. Time-approximation trade-offs for inapproximable problems. In Symposium on Theoretical Aspects of Computer Science, STACS, pages 22:1–22:14, 2016.
  • [6] Édouard Bonnet and Vangelis Th. Paschos. Sparsification and subexponential approximation. Acta Informatica, pages 1–15, 2016.
  • [7] Nicolas Bourgeois, Bruno Escoffier, and Vangelis Th. Paschos. Approximation of max independent set, min vertex cover and related problems by moderately exponential algorithms. Discrete Applied Mathematics, 159(17):1954 – 1970, 2011.
  • [8] Chris Calabro, Russell Impagliazzo, and Ramamohan Paturi. A duality between clause width and clause density for SAT. In Conference on Computational Complexity (CCC), pages 252–260, 2006.
  • [9] Parinya Chalermsook, Bundit Laekhanukit, and Danupon Nanongkai. Independent set, induced matching, and pricing: Connections and tight (subexponential time) approximation hardnesses. In Foundations of Computer Science, FOCS, pages 370–379, 2013.
  • [10] Siu On Chan. Approximation resistance from pairwise-independent subgroups. J. ACM, 63(3):27:1–27:32, 2016.
  • [11] Marek Cygan, Lukasz Kowalik, Marcin Pilipczuk, and Mateusz Wykurz. Exponential-time approximation of hard problems. CoRR, abs/0810.4934, 2008.
  • [12] Marek Cygan, Lukasz Kowalik, and Mateusz Wykurz. Exponential-time approximation of weighted set cover. Inf. Process. Lett., 109(16):957–961, 2009.
  • [13] Marek Cygan and Marcin Pilipczuk. Exact and approximate bandwidth. Theor. Comput. Sci., 411(40-42):3701–3713, 2010.
  • [14] Artur Czumaj, Magnús M. Halldórsson, Andrzej Lingas, and Johan Nilsson. Approximation algorithms for optimization problems in graphs with superlogarithmic treewidth. Inf. Process. Lett., 94(2):49–53, 2005.
  • [15] Irit Dinur. The PCP theorem by gap amplification. J. ACM, 54(3):12, 2007.
  • [16] Irit Dinur. Mildly exponential reduction from Gap 3SAT to polynomial-gap Label Cover. Electronic Colloquium on Computational Complexity (ECCC), 23:128, 2016.
  • [17] Uriel Feige. Approximating maximum clique by removing subgraphs. SIAM J. Discrete Math., 18(2):219–225, 2004.
  • [18] Uriel Feige, Shafi Goldwasser, László Lovász, Shmuel Safra, and Mario Szegedy. Interactive proofs and the hardness of approximating cliques. J. ACM, 43(2):268–292, 1996.
  • [19] Uriel Feige and Joe Kilian. Zero knowledge and the chromatic number. J. Comput. Syst. Sci., 57(2):187–199, 1998.
  • [20] Eran Halperin. Improved approximation algorithms for the vertex cover problem in graphs and hypergraphs. SIAM J. Comput., 31(5):1608–1623, 2002.
  • [21] Johan Håstad. Clique is hard to approximate within n^{1−ε}. In 37th Annual Symposium on Foundations of Computer Science, FOCS, pages 627–636, 1996.
  • [22] Russell Impagliazzo, Ramamohan Paturi, and Francis Zane. Which problems have strongly exponential complexity? J. Comput. Syst. Sci., 63(4):512–530, 2001.
  • [23] Subhash Khot, Dor Minzer, and Muli Safra. On independent sets, 2-to-2 games and Grassmann graphs. Electronic Colloquium on Computational Complexity (ECCC), 23:124, 2016.
  • [24] Subhash Khot and Ashok Kumar Ponnuswami. Better inapproximability results for MaxClique, chromatic number and Min-3Lin-Deletion. In Automata, Languages and Programming, International Colloquium, (ICALP), pages 226–237, 2006.
  • [25] Subhash Khot and Oded Regev. Vertex cover might be hard to approximate to within 2-epsilon. J. Comput. Syst. Sci., 74(3):335–349, 2008.
  • [26] Subhash Khot and Igor Shinkar. On hardness of approximating the parameterized clique problem. In Innovations in Theoretical Computer Science (ITCS), pages 37–45, New York, NY, USA, 2016. ACM. doi:10.1145/2840728.2840733.
  • [27] Bundit Laekhanukit. Inapproximability of Combinatorial Problems in Subexponential-Time. PhD thesis, McGill University, 2014.
  • [28] Pasin Manurangsi and Prasad Raghavendra. A birthday repetition theorem and complexity of approximating dense CSPs. CoRR, abs/1607.02986, 2016.
  • [29] Dániel Marx. On the optimality of planar and geometric approximation schemes. In Foundations of Computer Science (FOCS), pages 338–348, 2007.
  • [30] Dana Moshkovitz and Ran Raz. Two-query PCP with subconstant error. J. ACM, 57(5):29:1–29:29, 2010.
  • [31] Jaroslav Nešetřil and Svatopluk Poljak. On the complexity of the subgraph problem. Commentationes Mathematicae Universitatis Carolinae, 026(2):415–419, 1985.
  • [32] Ryan Williams and Huacheng Yu. Personal communication.

Appendix A A Deterministic Algorithm for Independent Set

In this section, we give the deterministic approximation algorithm promised in Section 3.1. It is a simple consequence of Feige's algorithm [17], which we restate below in a slightly different form.

[[17]] Let be a graph with independence ratio . Then, for any parameter , one can find an independent set of size in time .

Now, our algorithm proceeds as follows.

  • If , we can enumerate all independent sets of size (this is an -approximation) in time .

  • Otherwise, the independence ratio is at least where . We choose , so Feige’s algorithm finds an independent set of size at least

    The running time is

    If we redefine , then the algorithm is an -approximation algorithm that runs in time .

Appendix B Gap-ETH hardness of Independent Set (sketch)

We now sketch the proof. We are given an n-variable 3-CNF-SAT formula with perfect completeness and soundness 1 − ε for some ε > 0. We first perform standard amplification and sparsification to obtain an instance whose gap parameter, number of clauses, and freeness are as required. Now, we perform the FGLSS reduction to get a graph G such that α(G) reflects the maximum number of satisfiable clauses. Therefore, an r-approximation for Independent Set in the stated time would yield an algorithm that decides, in sub-exponential time, whether more than the soundness fraction of the clauses can be satisfied. In other words, any such algorithm that r-approximates Independent Set can be turned into an approximation algorithm for 3-CNF-SAT running in sub-exponential time, contradicting Gap-ETH.