1 Introduction
A fixed-parameter approximation algorithm (in short, FPT-approximation algorithm) is a new concept emerging from a cross-fertilization between two trends in coping with NP-hard problems: approximation algorithms and fixed-parameter tractable (FPT) algorithms. Roughly speaking, an FPT-approximation algorithm is similar to an FPT algorithm in that its running time can be of the form t(OPT)·poly(n) (called FPT time), where t is any function (possibly super-exponentially growing), n is the input size, and OPT is the value of the optimal solution. (There are many ways to parameterize a problem; in this paper we focus on the standard parameterization, which parameterizes by the optimal solution.) It is similar to an approximation algorithm in that its output is an approximation of the optimal solution; however, the approximation factor is analyzed in terms of the optimal solution (OPT) and not the input size (n). Thus, an algorithm is said to be an f(OPT)-FPT-approximation algorithm for some function f if it runs in FPT time and outputs a solution of size at least OPT/f(OPT) for a maximization problem (respectively, at most OPT·f(OPT) for a minimization problem). For a maximization problem, such an algorithm is nontrivial when OPT/f(OPT) = ω(1), while for a minimization problem, it is nontrivial for any computable function f.
The notion of FPT-approximation is useful when we are interested in a small optimal solution, and in particular its existence connects to a fundamental question: is there a nontrivial approximation algorithm when the optimal solution is small? Consider, for example, the Maximum Clique (Clique) problem, where the goal is to find a clique (complete subgraph) with the maximum number of vertices in an n-vertex graph G. By outputting any single vertex, we get a trivial polynomial-time O(n)-approximation algorithm. The bound can be improved to O(n/log n) and even to O(n(log log n)²/log³ n) with clever ideas [Feige04]. Observe, however, that these bounds are quite meaningless when OPT is small (say, OPT = O(log n)), since outputting a single vertex already guarantees such bounds. In this case, a bound depending only on OPT would be more meaningful. Unfortunately, no approximation ratio of the form o(OPT) is known, even when FPT time is allowed. (Note that outputting a single vertex gives a trivial OPT-approximation guarantee.) In fact, for maximization problems, it can be shown that a problem admits an FPT-approximation algorithm with ratio o(OPT) if and only if it admits a polynomial-time algorithm with approximation ratio o(n) [GroheG07, Marx08] (also see [Marx13]); so, it does not matter whether the running time is polynomial in the input size or depends on OPT.
Similar questions can be asked for minimization problems. Consider, for instance, Minimum Dominating Set (DomSet): find the smallest set S of vertices such that every vertex of an n-vertex input graph G is in S or has a neighbor in S. DomSet admits an O(log n)-approximation algorithm via a basic greedy method. However, if we want the approximation ratio to depend on OPT and not n, no g(OPT)-approximation ratio is known for any computable function g, no matter how fast-growing.
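To make the greedy method concrete, here is a minimal sketch in the equivalent set-cover formulation (the function name and input conventions are ours, not from the paper): repeatedly pick the set covering the most still-uncovered elements, which yields the classical (ln n + 1)-approximation guarantee.

```python
def greedy_set_cover(universe, subsets):
    """Classic greedy (ln n + 1)-approximation for Set Cover:
    repeatedly pick the subset covering the most uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # index of the subset with the largest marginal coverage
        best = max(range(len(subsets)), key=lambda i: len(subsets[i] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("universe is not coverable by the given subsets")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen
```

The same algorithm, run on the closed neighborhoods of a graph, gives the O(log n)-approximation for DomSet mentioned above.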
In fact, the existence of nontrivial FPT-approximation algorithms for Clique and DomSet has been raised several times in the literature (e.g., [Marx08, FellowGMS12, DowneyF13]). So far, progress towards these questions has only ruled out constant-factor FPT-approximation algorithms for Clique. This was shown independently by Hajiaghayi et al. [HajiaghayiKK13] and Bonnet et al. [BonnetE0P15], assuming the Exponential Time Hypothesis (ETH) and that a linear-size PCP exists. Alternatively, Khot and Shinkar [KhotS16] proved this under a rather nonstandard assumption that solving quadratic equations over a finite field under a certain regime of parameters is not in FPT; unfortunately, this assumption was later shown to be false [Kayal2014]. For DomSet, Chen and Lin [ChenL16] ruled out constant-factor FPT-approximation algorithms assuming W[1] ≠ FPT. Moreover, they improved the inapproximability ratio to log^{1/4−ε}(OPT) for any constant ε > 0 under the Exponential Time Hypothesis (ETH), which asserts that no subexponential-time algorithm can decide whether a given 3-SAT formula is satisfiable. Remark that ETH implies W[1] ≠ FPT.
Our Results and Techniques.
We show that there is no nontrivial FPT-approximation algorithm for either Clique or DomSet. That is, there is no o(OPT)-FPT-approximation algorithm for Clique, and no g(OPT)-FPT-approximation algorithm for DomSet for any computable function g. Our results hold under the Gap Exponential Time Hypothesis (Gap-ETH), which states that distinguishing between a satisfiable 3-SAT formula and one which is not even (1 − ε)-satisfiable requires exponential time, for some constant ε > 0 (see Section 2).
Gap-ETH, first formalized in [Dinur16, ManR16], is a stronger version of the aforementioned ETH. It has recently been shown to be useful in proving fine-grained hardness of approximation for problems such as dense CSPs with large alphabets [ManR16] and Densest k-Subgraph with perfect completeness [Man17].
Note that Gap-ETH is implied by ETH if we additionally assume that a linear-size PCP exists. So, our result for Clique significantly improves the results in [HajiaghayiKK13, BonnetLP16] under the same (in fact, a weaker) assumption. Our result for DomSet also significantly improves the results in [ChenL16], but our assumption is stronger.
In fact, we can show even stronger results: the best way to solve Clique and DomSet, even approximately, is to enumerate all possibilities, in the following sense. Finding a clique of size k can trivially be done in n^{O(k)} time by checking whether any among all (n choose k) possible sets of k vertices forms a clique. It was known under ETH that this is essentially the best one can do [ChenHKX06, ChenHKX06b]. We show further that this running time is still needed, even when we know that a clique of size much larger than k exists in the graph, assuming Gap-ETH. Similarly, for DomSet, we can always find a dominating set of size k (if one exists) in n^{O(k)} time. Under Gap-ETH, we show that there is no better way, even when we just want to find a dominating set whose size is an arbitrarily large function of the optimum.
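For concreteness, the trivial n^{O(k)}-time enumeration just described can be sketched as follows (a self-contained brute-force checker; the adjacency-set representation is ours):

```python
from itertools import combinations

def has_clique(adj, k):
    """Decide whether the graph (adjacency-set dict: vertex -> set of
    neighbors) contains a clique of size k, by enumerating all k-subsets
    of vertices -- (n choose k) candidates, each checked in O(k^2) time."""
    vertices = list(adj)
    for subset in combinations(vertices, k):
        if all(v in adj[u] for u, v in combinations(subset, 2)):
            return True
    return False
```

The hardness results below say that, assuming Gap-ETH, no algorithm can beat this exhaustive search by more than lower-order factors, even given the promise of a much larger clique.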
We now give an overview of our techniques. The main challenge in showing our results is that we want them to hold for the case where the optimal solution is arbitrarily smaller than the input size. (This is important to get the FPT-inapproximability results.) To this end, (i) reductions cannot blow up the optimal solution by a function of the input size, and (ii) our reductions must start from problems with a large hardness gap while having a small optimal solution. Fortunately, Property (i) holds for the known reductions we employ.
The challenge of (ii) is that existing gap-amplifying techniques (e.g., the parallel repetition theorem [Raz98] or the randomized graph product [BermanS92]), while amplifying the gap to be arbitrarily large, cause the input size to become so large that existing reduction techniques (e.g., [ChenHKX06, PatrascuW10]) cannot be applied efficiently (in particular, in subexponential time). We circumvent this by a step that amplifies the gap and reduces the instance size at the same time. In more detail, this step takes a 3-SAT formula φ as an input and produces a "label cover" instance Γ (roughly, a bipartite graph with constraints on the edges) such that, for any δ > 0: (i) if φ is satisfiable, then Γ is satisfiable, and (ii) if φ is at most (1 − ε)-satisfiable, then less than a δ fraction of the constraints of Γ can be simultaneously satisfied. (Our problem is an optimization problem on label cover instances, with a slightly different objective from the standard Label Cover problem; please refer to Section 4 for more detail.) Moreover, our reduction allows us to "compress" either the left-hand-side or the right-hand-side vertex set to be arbitrarily small. This label cover instance is the starting point for all our problems. To derive our result for Clique, we need the left-hand side to be arbitrarily small, while for DomSet, we need the right-hand side to be small.
The left-hand-side vertex compression is similar to the randomized graph product [BermanS92] and, in fact, the reduction itself has been studied before [Zuck96, Zuck96unapprox], but in a very different regime of parameters. For a more detailed discussion, please refer to Subsection 4.2.
Once the inapproximability results for label cover problems with small left-hand-side and right-hand-side vertex sets are established, we can simply reduce them to Clique and DomSet using the standard reductions from [FGLSS96] and [Feige98], respectively.
Besides the above results for Clique and DomSet, we also show that no nontrivial FPT-approximation algorithm exists for a few other problems, including Maximum Balanced Biclique, the problem of finding maximum induced subgraphs with hereditary properties (e.g., maximum planar induced subgraph), and Maximum Induced Matching in bipartite graphs. Previously, only the exact versions of these problems were known to be hard [Lin15, KhotR00, MoserS09]. Additionally, we rule out certain FPT-approximation algorithms for Densest k-Subgraph, although the inapproximability ratio we obtain does not yet match that of the trivial approximation algorithm. Finally, we remark that, while our result for maximum induced subgraphs with hereditary properties follows from a reduction from Clique, the FPT-inapproximability of the other problems is shown not through the label cover problems, but instead through a modification of the hardness of approximation of Densest k-Subgraph in [Man17].
Previous Works.
Our results are based on the method of compressing (or reducing the size of) the optimal solution, which was first introduced by Chen, Huang, Kanj and Xia in [ChenHKX04] (the journal version appears in [ChenHKX06]). Assuming ETH, they showed that neither Clique nor DomSet can be solved in n^{o(k)} time, where n is the number of vertices in the input graph and k is the size of the solution. Later, Pătraşcu and Williams [PatrascuW10] applied similar techniques to sharpen the running-time lower bound of DomSet to n^{k−ε}, for any constant ε > 0, assuming the Strong Exponential Time Hypothesis (SETH). The technique of compressing the optimal solution was also used in hardness of approximation by Hajiaghayi, Khandekar and Kortsarz in [HajiaghayiKK13] and by Bonnet, Lampis and Paschos in [BonnetE0P15]. Our techniques can be seen as introducing gap amplification to the reductions in [ChenHKX06]. We emphasize that, while [ChenHKX06], [PatrascuW10], [HajiaghayiKK13] and [BonnetE0P15] (and also the reductions in this paper) are all based on the technique of compressing the optimal solution, Hajiaghayi et al. [HajiaghayiKK13] compress the optimal solution after reducing SAT to the designated problems, i.e., Clique and DomSet. [ChenHKX06], [PatrascuW10], [BonnetE0P15] and our reductions, on the other hand, compress the optimal solution of SAT prior to feeding it to the standard reductions (with small adjustments). While this difference does not affect the reduction for Clique, it has a huge effect on DomSet. Specifically, compressing the optimal solution after the reduction results in a huge blowup, because the blowup in the first step (i.e., from SAT to DomSet) becomes exponential after compressing the optimal solution. Our proof for Clique and the one in [HajiaghayiKK13] bear a similarity in that both apply graph products to amplify approximation hardness. The key difference is that we use the randomized graph product instead of the deterministic graph product used in [HajiaghayiKK13].
Very recently, Chen and Lin [ChenL16] showed that DomSet admits no constant-factor FPT-approximation algorithm unless W[1] = FPT. Their hardness result was derived from the seminal result of Lin [Lin15], which shows that the Maximum Intersection problem (a.k.a. One-sided Gap-Biclique) has no FPT-approximation algorithm. Furthermore, they showed that, assuming ETH, their result can be strengthened to rule out log^{1/4−ε}(OPT)-FPT-approximation algorithms, for any constant ε > 0. The result of Chen and Lin follows from the W[1]-hardness of Biclique [Lin15] and the proof of the ETH-hardness of Clique [ChenHKX04]. Note that, while Chen and Lin did not discuss the size of the optimal solution in their paper, the method of compressing the optimal solution was implicitly used there. This is due to the running-time lower bound of Clique that they quoted from [ChenHKX04].
Our method for proving the FPT-inapproximability of DomSet is similar to that in [PatrascuW10]. However, the original construction in [PatrascuW10] does not require a "partition system". This is because Pătraşcu and Williams's reduction starts from SAT, which can be cast as DomSet. In our construction, the reduction starts from an instance of the Constraint Satisfaction problem (CSP) that is more general than SAT (because of the gap-amplification step) and hence requires the construction of a partition system. (Note that partition systems have been used in standard hardness reductions for DomSet [LundY94, Feige98].)
We remark that our proof does not imply FPT-inapproximability for DomSet under ETH, whereas Chen and Lin were able to prove their inapproximability result under ETH because their reduction can be applied directly to SAT via the result in [ChenHKX06]. If one introduced Gap-ETH to the previous works, then the proofs in [ChenHKX06, HajiaghayiKK13, BonnetE0P15] would yield constant-factor FPT-inapproximability of Clique, and the proof in [ChenHKX06] would yield constant-factor FPT-inapproximability of DomSet.
Summaries of previous works on Clique and DomSet are presented in Table 1.
Summary of works on Clique

Inapprox. factor | Running-time lower bound | Assumption | Reference
any constant | | ETH + LPCP | [BonnetE0P15]
 | | ETH | [ChitnisHK13]
any constant (*) | | ETH | [HajiaghayiKK13]
any o(OPT) | | Gap-ETH | This paper

(*) Constant FPT-inapproximability of Clique under ETH is claimed in [HajiaghayiKK13] (arXiv version). However, as we investigated, Gap-ETH is assumed there.

Summary of works on DomSet

Inapprox. factor | Running-time lower bound | Assumption | Reference
 | | ETH | [ChitnisHK13]
 | | ETH + PGC | [HajiaghayiKK13]
any constant | (i.e., no FPT algorithm) | W[1] ≠ FPT | [ChenL16]
log^{1/4−ε}(OPT) | | ETH | [ChenL16]+[ChenHKX06]
any f(OPT) | | Gap-ETH | This paper
Other Related Works.
All problems considered in this work are also well-studied in terms of hardness of approximation beyond the aforementioned parameterized regimes; indeed, many techniques used here are borrowed from or inspired by the non-parameterized settings.
Maximum Clique.
Maximum Clique is arguably the first natural combinatorial optimization problem studied in the context of hardness of approximation; in a seminal work of Feige, Goldwasser, Lovász, Safra and Szegedy (henceforth FGLSS) [FGLSS96], a connection was made between interactive proofs and the hardness of approximating Clique. This connection paved the way for later works on Clique and other developments in the field of hardness of approximation; indeed, the FGLSS reduction will serve as part of our proof as well. The FGLSS reduction, together with the PCP theorem [AroraS98, AroraLMSS98] and gap amplification via randomized graph products [BermanS92], immediately implies n^δ-ratio inapproximability of Clique for some constant δ > 0, under the assumption that NP ⊄ BPP. Following Feige et al.'s work, there was a long line of research on the approximability of Clique [BellareGLR93, FeigeK00, BellareGS98, BellareS94], which culminated in Håstad's work [Hastad96]. In [Hastad96], it was shown that Clique cannot be approximated to within a factor of n^{1−ε}, for any constant ε > 0, in polynomial time unless NP = ZPP; this was later derandomized by Zuckerman, who showed a similar hardness under the assumption NP ≠ P [Zuckerman07]. Since then, better inapproximability ratios have been shown [EngebretsenH00, Khot01, KhotP06], with the best known ratio due to Khot and Ponnuswami [KhotP06], under the assumption that NP is not contained in bounded-error quasi-polynomial time. We note here that the best known polynomial-time algorithm for Clique achieves an O(n(log log n)²/log³ n)-approximation for the problem [Feige04].

Minimum Set Cover. Minimum Set Cover (SetCov), which is equivalent to Minimum Dominating Set, is also among the first problems studied in hardness of approximation. Lund and Yannakakis proved that, unless NP ⊆ DTIME(n^{polylog n}), SetCov cannot be efficiently approximated to within a c·log n factor of the optimum for some constant c > 0 [LundY94]. Not long after, Feige [Feige98] both improved the approximation ratio and weakened the assumption by showing a (1 − ε)·ln n ratio inapproximability for every ε > 0, assuming only that NP ⊄ DTIME(n^{O(log log n)}). Recently, a similar inapproximability has been achieved under the weaker NP ≠ P assumption [Moshkovitz15, DinurS14].
Since a simple greedy algorithm is known to yield a (ln n + 1)-approximation for SetCov [Chvatal79], the aforementioned hardness results are essentially tight. A common feature in all previous works on the hardness of SetCov [LundY94, Feige98, Moshkovitz15] is that the constructions involve composing certain variants of CSPs with partition systems. As touched upon briefly earlier, our construction will also follow this approach; for the exact definitions of the CSP and the partition system used in our work, please refer to Subsection 5.2.2.
Maximum Subgraph with Hereditary Properties. The complexity of finding and approximating maximum subgraphs with hereditary properties has also been studied since the 1980s [LewisY80, LundY93, feige2005hardness]; specifically, Feige and Kogan showed that, for every nontrivial property Π (i.e., such that infinitely many graphs satisfy Π and infinitely many graphs do not satisfy Π), the problem is hard to approximate to within an n^{1−ε} factor for every ε > 0 unless NP = ZPP [feige2005hardness]. We also note that nontrivial approximation algorithms for the problem are known; for instance, when the property Π fails for some clique or some independent set, a polynomial-time algorithm with a sublinear approximation ratio is known [Halldorsson00].
Maximum Balanced Biclique. While the Maximum Balanced Biclique problem bears a strong resemblance to the Maximum Clique problem, inapproximability of the latter cannot be directly translated to the former; in fact, despite numerous attempts, not even constant-factor NP-hardness of approximation is known for the Maximum Balanced Biclique problem. Fortunately, under stronger assumptions, hardness of approximation for the problem is known: n^δ-factor hardness, for some constant δ > 0, is known under Feige's random 3-SAT hypothesis [Feige02] or the assumption that NP does not admit subexponential-time randomized algorithms [Khot06], and n^{1−ε}-factor hardness, for every ε > 0, is known under strengthenings of the Unique Games Conjecture [BhangaleGHKK16, Man17ICALP]. To the best of our knowledge, no nontrivial approximation algorithm for the problem is known.
Densest k-Subgraph. The Densest k-Subgraph problem has received considerable attention from the approximation algorithms community [KP93, FPK01, BCCFV10]; the best known polynomial-time algorithm, due to Bhaskara et al. [BCCFV10], achieves an O(n^{1/4+ε})-approximation for every ε > 0. On the other hand, similar to Biclique, NP-hardness of approximating Densest k-Subgraph, even to within some constant ratio, has so far eluded researchers. Nevertheless, in the same works that provide hardness results for Biclique [Feige02, Khot06], DkS is shown to be hard to approximate to within some constant factor under the random 3-SAT hypothesis or the assumption that NP does not admit subexponential-time randomized algorithms. Furthermore, constant-factor inapproximability is known under the planted clique hypothesis [AAMMW11] and, under ETH and Gap-ETH, almost-polynomial-factor inapproximabilities are known [Man17]. (See also [BravermanKRW17], in which a constant-ratio ETH-hardness of approximating DkS was shown.) In addition to these hardness results, polynomial-ratio integrality gaps for strong LP and SDP relaxations of the problem are also known [BCVGZ12, Mthesis, ChlamtacMMV17].
Maximum Induced Matching on Bipartite Graphs. The problem was proved to be NP-hard independently by Stockmeyer and Vazirani [StockmeyerV82] and Cameron [Cameron89]. The approximability of the problem was first studied by Duckworth et al. [DuckworthMZ05], who showed that the problem is APX-hard, even on bipartite graphs of maximum degree three. Elbassioni et al. [ElbassioniRRS09] then showed that the problem is hard to approximate to within an n^{1/3−ε} factor for every ε > 0, unless NP = ZPP. Chalermsook et al. [ChalermsookLN13] later improved the ratio to n^{1−ε} for every ε > 0.
Organization.
We define basic notation in Section 2. In Section 3, we define the notion of inherently enumerative problems, which captures the fact that nothing better than enumerating all possibilities can be done. We show that a problem admits no nontrivial FPT-approximation algorithm by showing that it is inherently enumerative. In Section 4, we define and prove results about our intermediate problems on label cover instances. Finally, in Section 5, we derive results for Clique, DomSet, and other problems.
2 Preliminaries
We use standard terminology. For any graph G, we denote by V(G) and E(G) the vertex and edge sets of G, respectively. For each vertex v, we denote the set of its neighbors by N_G(v); when the graph G is clear from the context, we sometimes drop it from the notation. A clique of G is a complete subgraph of G. Sometimes we refer to a clique as a subset S ⊆ V(G) such that there is an edge joining every pair of vertices in S. A biclique of G is a balanced complete bipartite subgraph of G. By a k-biclique, we mean the graph K_{k,k} (i.e., the number of vertices in each partition is k). An independent set of G is a subset of vertices such that there is no edge joining any pair of vertices in the subset. A dominating set of G is a subset S of vertices such that every vertex of G is either in S or has a neighbor in S. The clique number (resp., independence number) of G is the size of the largest clique (resp., independent set) in G. The biclique number of G is the largest integer k such that G contains K_{k,k} as a subgraph. The domination number of G is defined similarly as the size of the smallest dominating set in G. The clique, independence and domination numbers of G are usually denoted by ω(G), α(G) and γ(G), respectively. However, in this paper, we will refer to these numbers by Clique(G), MIS(G) and DomSet(G), respectively. Additionally, we denote the biclique number of G by Biclique(G).
2.1 FPT Approximation
Let us start by formalizing the notion of optimization problems; here we follow the notation of Chen et al. [ChenGG06]. An optimization problem Π is defined by three components: (1) for each input instance x of Π, a set of valid solutions of x, denoted by SOL(x); (2) for each instance x of Π and each y ∈ SOL(x), the cost of y with respect to x, denoted by cost(x, y); and (3) the goal of the problem, which specifies whether Π is a minimization or maximization problem. Throughout this work, we will assume that cost(x, y) can be computed in time polynomial in |x|. Finally, we denote by OPT(x) the optimal value of each instance x, i.e., OPT(x) is the minimum (resp., maximum) of cost(x, y), where the minimum (resp., maximum) is taken over all y ∈ SOL(x), according to the goal of Π.
We now continue on to define parameterized approximation algorithms. While our discussion so far has been about optimization problems, we will instead work with "gap versions" of these problems. Roughly speaking, for a maximization problem Π and a function f, the gap version of Π takes in an additional input k, and the goal is to distinguish between the case OPT(x) ≥ k and the case OPT(x) < k/f(k). As we will elaborate below, the gap versions are weaker (i.e., easier) than the optimization versions and, hence, our impossibility results for the gap versions translate to those for the optimization versions as well.
Definition 2.1 (FPT gap approximation).
For any optimization problem Π and any computable function f, an algorithm A, which takes as input an instance x of Π and a positive integer k, is said to be an f-FPT gap approximation algorithm for Π if the following conditions hold on every input (x, k):

A runs in t(k)·|x|^{O(1)} time for some computable function t.

If Π is a maximization problem, A must output 1 if OPT(x) ≥ k and output 0 if OPT(x) < k/f(k).
If Π is a minimization problem, A must output 1 if OPT(x) ≤ k and output 0 if OPT(x) > k·f(k).

Π is said to be f-FPT gap approximable if there is an f-FPT gap approximation algorithm for Π.
Next, we formalize the concept of totally FPT inapproximable, which encapsulates the nonexistence of nontrivial FPT approximations discussed earlier in the introduction.
Definition 2.2.
A minimization problem Π is said to be totally FPT inapproximable if, for every computable function f, Π is not f-FPT gap approximable.
A maximization problem Π is said to be totally FPT inapproximable if, for every computable function f such that k/f(k) is unbounded (i.e., k/f(k) = ω(1)), Π is not f-FPT gap approximable.
With the exception of Densest k-Subgraph, every problem considered in this work will be shown to be totally FPT inapproximable. To this end, we remark that total FPT inapproximability, as defined above through gap problems, implies the non-existence of the nontrivial FPT-approximation algorithms discussed in the introduction. These implications are stated more precisely in the two propositions below; their proofs are given in Appendix A.
Proposition 2.3.
Let Π be any minimization problem. Then, (1) implies (2), where (1) and (2) are as defined below.

(1) Π is totally FPT inapproximable.

(2) For all computable functions f and t, there is no algorithm that, on every instance x of Π, runs in t(OPT(x))·|x|^{O(1)} time and outputs a solution y ∈ SOL(x) such that cost(x, y) ≤ f(OPT(x))·OPT(x).
Proposition 2.4.
Let Π be any maximization problem. Then, (1) implies (2), where (1) and (2) are as defined below.

(1) Π is totally FPT inapproximable.

(2) For all computable functions f and t such that k/f(k) is unbounded and nondecreasing, there is no algorithm that, on every instance x of Π, runs in t(OPT(x))·|x|^{O(1)} time and outputs a solution y ∈ SOL(x) such that cost(x, y) ≥ OPT(x)/f(OPT(x)).
2.2 List of Problems
We will now list the problems studied in this work. While all the problems here can be defined as optimization problems in the sense of the previous subsection, we will omit the formal descriptions of SOL and cost whenever they are clear from the context.
The Maximum Clique Problem (Clique). In Clique, we are given a graph G together with an integer k, and the goal is to decide whether G has a clique of size k. The maximization version of Clique, called Max-Clique, asks to compute the maximum size of a clique in G. We will abuse Clique to also mean Max-Clique, and we will denote by Clique(G) the clique number of G, which is the value of the optimal solution to Clique.
A problem that is (computationally) equivalent to Clique is the Maximum Independent Set problem (MIS), which asks to compute the size of the maximum independent set in G. The two problems are equivalent since any clique in G is an independent set in the complement graph of G, and vice versa.
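This equivalence is immediate to implement: complement the graph, then run any clique subroutine. A small self-contained sketch (the brute-force inner check is illustrative, not an efficient algorithm):

```python
from itertools import combinations

def complement(adj):
    """Complement graph: u, v adjacent iff they were not adjacent in adj."""
    vs = list(adj)
    return {u: {v for v in vs if v != u and v not in adj[u]} for u in vs}

def has_independent_set(adj, k):
    """G has an independent set of size k iff its complement has a k-clique."""
    comp = complement(adj)
    for subset in combinations(list(comp), k):
        if all(v in comp[u] for u, v in combinations(subset, 2)):
            return True
    return False
```

In particular, any hardness result for Clique transfers verbatim to MIS, since the complement is computable in polynomial time and preserves the optimum exactly.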
The Minimum Dominating Set Problem (DomSet). In DomSet, we are given a graph G together with an integer k, and the goal is to decide whether G has a dominating set of size k. The minimization version, which we also refer to as DomSet, asks to compute the size of the minimum dominating set in G.
A problem that is equivalent to DomSet is the Minimum Set Cover problem (SetCov): given a universe U of elements and a collection S of subsets of U, the goal is to find the minimum number of subsets in S whose union equals U. It is a standard fact that DomSet is equivalent to SetCov. See Appendix D for more detail.
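One direction of this standard equivalence, casting DomSet as SetCov, can be sketched as follows (a sketch under our own input conventions): each vertex v contributes the set N[v] = {v} ∪ N(v), and a set of vertices dominates G exactly when their closed neighborhoods cover V(G).

```python
def domset_to_setcov(adj):
    """Reduce Dominating Set to Set Cover: the universe is V(G), and
    each vertex v contributes its closed neighborhood N[v] = {v} | N(v).
    S is a dominating set of G iff {N[v] : v in S} covers V(G)."""
    universe = set(adj)
    sets = {v: {v} | set(adj[v]) for v in adj}
    return universe, sets
```

This reduction preserves the optimum exactly, which is why inapproximability transfers between the two problems without any loss in the ratio.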
Maximum Induced Subgraph with Hereditary Properties: A property Π is simply a subset of all graphs. We say that Π is a hereditary property if, whenever G ∈ Π, all induced subgraphs of G are in Π. The Maximum Induced Subgraph problem with property Π asks for a maximum-cardinality set S ⊆ V(G) such that G[S] ∈ Π. Here G[S] denotes the subgraph of G induced by S. Notice that both Clique and MIS belong to this class of problems. For more discussion of problems that belong to this class, see Appendix D.
Maximum Induced Matching on Bipartite Graphs: An induced matching of a graph G is a set of edges M ⊆ E(G) such that no two edges in M share an endpoint and there is no cross edge between them, i.e., for all distinct e, e' ∈ M, no edge of G joins an endpoint of e to an endpoint of e'. The induced matching number of G is simply the maximum value of |M| among all induced matchings M of G. In this work, we will be interested in the problem of approximating the induced matching number in bipartite graphs; this is because, for general graphs, the problem is as hard to approximate as Clique. (See Appendix D for more details.)
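A checker for this definition can be sketched as follows (edges given as pairs, the graph as an adjacency-set dict; conventions are ours):

```python
def is_induced_matching(adj, edges):
    """Check that `edges` is an induced matching of the graph `adj`:
    all pairs are actual edges, no two edges share an endpoint, and no
    edge of the graph joins endpoints of two distinct matching edges."""
    for a, b in edges:
        if b not in adj[a]:
            return False  # not an edge of the graph
    endpoints = [v for e in edges for v in e]
    if len(endpoints) != len(set(endpoints)):
        return False  # two matching edges share an endpoint
    for i, (a, b) in enumerate(edges):
        for c, d in edges[i + 1:]:
            # any graph edge between {a, b} and {c, d} is a cross edge
            if {c, d} & (adj[a] | adj[b]):
                return False
    return True
```

Equivalently, M is an induced matching when the subgraph induced by the endpoints of M contains exactly the edges of M.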
Maximum Balanced Biclique (Biclique). In Biclique, we are given a bipartite graph G together with an integer k. The goal is to decide whether G contains a complete bipartite subgraph (biclique) with k vertices on each side. In other words, we are asked to decide whether G contains K_{k,k} as a subgraph. The maximization version of Biclique, called Maximum Balanced Biclique, asks to compute the maximum size of a balanced biclique in G.
Densest k-Subgraph (DkS). In the Densest k-Subgraph problem, we are given an integer k and a graph G = (V, E). The goal is to find a subset S of k vertices that induces the maximum number of edges. For convenience, we define the density of an induced subgraph G[S] to be the number of edges in G[S] divided by the number of pairs of vertices in S, and we define the optimal density of DkS on (G, k) to be the maximum density over all S of size k.
2.3 Gap Exponential Time Hypothesis
Our results are based on the Gap Exponential Time Hypothesis (Gap-ETH). Before we state the hypothesis, let us recall the definition of 3-SAT. In 3-SAT, we are given a CNF formula φ in which each clause consists of at most 3 literals, and the goal is to decide whether φ is satisfiable.
Max 3-SAT is the maximization version of 3-SAT, which asks to compute the maximum number of clauses in φ that can be simultaneously satisfied. We will abuse 3-SAT to also mean Max 3-SAT and, for a formula φ, we use SAT(φ) to denote the maximum number of clauses satisfied by any assignment.
The Gap Exponential Time Hypothesis can now be stated in terms of SAT as follows.
Conjecture 2.5 ((randomized) Gap Exponential Time Hypothesis (Gap-ETH) [Dinur16, ManR16]).
For some constants δ, ε > 0, no (randomized) algorithm can, given a 3-SAT formula φ on n variables and m = O(n) clauses, distinguish between the following cases correctly with probability at least 2/3 in 2^{δn} time:

SAT(φ) = m, and

SAT(φ) < (1 − ε)m.
Note that the case where ε = 0 (that is, the algorithm only needs to distinguish between the cases SAT(φ) = m and SAT(φ) < m) is known as ETH [IPZ01]. Another related conjecture, a strengthened version of ETH, is the Strong Exponential Time Hypothesis (SETH) [IP01ETH]: for any ε > 0, there is an integer k such that there is no 2^{(1−ε)n}-time algorithm for k-SAT. Gap-ETH of course implies ETH but, to the best of our knowledge, no formal relationship is known between Gap-ETH and SETH. While Gap-ETH may seem strong due to the gap between the two cases, there is evidence suggesting that it may indeed be true or, at the very least, that refuting it is beyond the reach of current techniques. We discuss some of this evidence in Appendix F.
While Gap-ETH as stated above rules out not only deterministic but also randomized algorithms, the deterministic version of Gap-ETH suffices for some of our results, including the inapproximability of Clique and DomSet. The reduction for DomSet as stated below will already be deterministic; the reduction for Clique will be randomized, but it can be easily derandomized, and we sketch the idea behind this in Subsection 4.2.1. On the other hand, we do not know how to derandomize some of our other results, including those for Biclique and DkS.
3 FPT Inapproximability via Inherently Enumerative Concept
Throughout the paper, we will prove FPT inapproximability through the concept of inherently enumerative problems, which will be formalized shortly.
To motivate the concept, note that all problems considered in this paper admit an exact algorithm that runs in O*(n^{O(k)}) time, where k is the size of the solution. (Recall that O* hides terms that are polynomial in the input size.) For instance, to find a clique of size k in an n-vertex graph G, one can enumerate all (n choose k) possible vertex subsets. (A faster algorithm, running in roughly n^{ωk/3} time where ω is the matrix multiplication exponent, can be obtained by a reduction to matrix multiplication.) For many W[1]-hard problems (e.g., Clique), this running time is nearly the best possible assuming ETH: any algorithm that finds a clique of size k in f(k)·n^{o(k)} time would break ETH. In the light of such a result, it is natural to ask the following question.
Assuming that Clique(G) is much larger than k, can we find a clique of size k in n^{o(k)} time?
In other words, can we exploit the prior knowledge that there is a clique of size much larger than k to help us find a k-clique faster? Roughly speaking, we will show later that, assuming Gap-ETH, the answer to this question is also negative, even when the promised clique size is arbitrarily larger than k. This is encapsulated in the inherently enumerative concept defined below.
Definition 3.1 (Inherently Enumerative).
A problem Π is said to be inherently enumerative if there exist constants δ, r₀ > 0 such that, for any integers q ≥ r ≥ r₀, no algorithm can decide, on every input instance x of Π, whether (i) OPT(x) < r or (ii) OPT(x) ≥ q in O*(|x|^{δr}) time. (Here and in Definition 3.2, O* hides any multiplicative term that is a function of r and q.)
While we will show that Clique and DomSet are inherently enumerative, we cannot do the same for some other problems, such as Biclique. Even for the exact version of Biclique, the best known running-time lower bound is only n^{Ω(√k)} [Lin15], assuming ETH. In order to succinctly categorize such lower bounds, we define a similar but weaker notion of weakly inherently enumerative:
Definition 3.2 (Weakly Inherently Enumerative).
For any function (i.e. ), a problem is said to be weakly inherently enumerative if there exists a constant such that, for any integers , no algorithm can decide, on every input instance of , whether (i) or (ii) in time .
$\Pi$ is said to be weakly inherently enumerative if it is $\beta$-weakly inherently enumerative for some $\beta = \omega(1)$.
It follows from the definitions that any inherently enumerative problem is also weakly inherently enumerative. As stated earlier, we will prove total FPT inapproximability through the inherently enumerative notion; the proposition below formally establishes a connection between the two.
Proposition 3.3.
If $\Pi$ is weakly inherently enumerative, then $\Pi$ is totally FPT inapproximable.
Proof.
We first consider maximization problems. We will prove the contrapositive of the statement. Assume that a maximization problem is not totally FPT inapproximable, i.e., it admits an FPT gap approximation algorithm for some computable function . Suppose that the running time of this algorithm on every input is for some constant and some function . We will show that the problem is not weakly inherently enumerative.
Let be any constant and let be any function such that . Let be the smallest integer such that and and let be the smallest integer such that . Note that and exists since and .
Now consider any input instance of the problem. From the definition of FPT gap approximation algorithms (Definition 2.1) and from the fact that , the algorithm can, on this input, distinguish between the two cases within the claimed time. Hence, the problem is not weakly inherently enumerative, concluding our proof for maximization problems.
For any minimization problem, assume again that it is not totally FPT inapproximable, i.e., it admits an FPT gap approximation algorithm for some computable function . Suppose that the running time of this algorithm on every input is for some constant .
Let be any constant and let be any function such that . Let be the smallest integer such that and and let .
Now consider any input instance of the problem. From the definition of FPT gap approximation algorithms and from , the algorithm can, on this input, distinguish between the two cases within the claimed time. Hence, the problem is not weakly inherently enumerative. ∎
An important tool in almost any branch of complexity theory, including parameterized complexity, is a notion of reduction. For the purpose of facilitating proofs of total FPT inapproximability, we define the following reduction, which we call an FPT gap reduction.
Definition 3.4 (FPT gap reduction).
For any functions $f$ and $g$, a problem $\Pi_0$ is said to be $(f, g)$-FPT gap reducible to a problem $\Pi_1$ if there exists an algorithm which takes in an instance $G_0$ of $\Pi_0$ together with integers $q$ and $r$, and produces an instance $G_1$ of $\Pi_1$ such that the following conditions hold.

runs in time for some computable function .

For every positive integer , if , then .

For every positive integer , if , then .
It is not hard to see that FPT gap reductions indeed preserve total FPT inapproximability, as formalized in Proposition 3.5 below. The proof of the proposition can be found in Appendix B.
Proposition 3.5.
If a problem $\Pi_0$ is (i) FPT gap reducible to a problem $\Pi_1$ for some computable nondecreasing functions $f$ and $g$, and (ii) totally FPT inapproximable, then $\Pi_1$ is also totally FPT inapproximable.
As stated earlier, we mainly work with the inherently enumerative concepts instead of working directly with total FPT inapproximability; indeed, we will never use the above proposition, and we instead use FPT gap reductions to prove that problems are weakly inherently enumerative. For this purpose, we will need the following proposition.
Proposition 3.6.
If a problem $\Pi_0$ is (i) FPT gap reducible to a problem $\Pi_1$ and (ii) $\beta$-weakly inherently enumerative for some $\beta = \omega(1)$, then $\Pi_1$ is weakly inherently enumerative.
Proof.
We assume that (i) holds, and will show that if the "then" part does not hold, then (ii) also does not hold. Recall from Definition 3.4 that (i) implies that there exists such that the reduction from (with parameters and ) to takes time and always outputs an instance of size at most on every input instance . Now assume that the "then" part does not hold, in particular is not weakly inherently enumerative. We will show the following claim which says that (ii) does not hold (by Definition 3.2).
Claim 3.7.
For every , there exists and an time algorithm that can, on every input instance of , distinguish between and
We now prove the claim. Consider any . Since , there exists such that and , for all . From the assumption that is not weakly inherently enumerative, there exist such that there is an time algorithm that can, on every input instance of , distinguish between and .
Let , and let be the smallest integer such that and ; Note that exists since , and that . We use and the reduction to build an algorithm as follows. On input , algorithm runs the reduction on and the previously defined . Let us call the output of the reduction . then runs on input and outputs accordingly; i.e. if says that , then outputs , and, otherwise, if says that , then outputs .
Now we show that can distinguish whether or as desired by the claim: From our choice of , if , then . Similarly, from our choice of , if , then . Since can distinguish between the two cases, can distinguish between the two cases as well.
The total running time of is (the first term is for running the reduction). Since of size at most , , and and depend only on and , the running time can be bounded by as desired. ∎
4 Covering Problems on Label Cover Instances
In this section, we give intermediate results for the lower bounds on the running time of approximating variants of the label cover problem, which will be the source of our inapproximability results for Clique and DomSet.
4.1 Problems and Results
Label cover instance:
A label cover instance $\mathcal{L}$ consists of a tuple $(G, \Sigma_U, \Sigma_V, \Pi)$, where

$G = (U, V, E)$ is a bipartite graph between vertex sets $U$ and $V$ with an edge set $E$,

$\Sigma_U$ and $\Sigma_V$ are the sets of alphabet symbols to be assigned to vertices in $U$ and in $V$, respectively, and

$\Pi = \{\Pi_e \subseteq \Sigma_U \times \Sigma_V : e \in E\}$ is a set of constraints.
We say that $\Pi$ (or $\mathcal{L}$) has the projection property if, for every edge $e = (u, v) \in E$ (where $u \in U$ and $v \in V$) and every $\alpha \in \Sigma_U$, there is exactly one $\beta \in \Sigma_V$ such that $(\alpha, \beta) \in \Pi_e$.
We will define two combinatorial optimization problems on an instance of the label cover problem. These two problems are defined on the same instance as the standard label cover problem. We will briefly discuss how our problems differ from the standard one.
MaxCover Problem:
A labeling of the graph is a pair of mappings $\sigma_U : U \to \Sigma_U$ and $\sigma_V : V \to \Sigma_V$. We say that a labeling covers an edge $e = (u, v)$ if $(\sigma_U(u), \sigma_V(v)) \in \Pi_e$. We say that a labeling covers a vertex $u \in U$ if it covers every edge incident to $u$. For any label cover instance $\mathcal{L}$, let $\mathrm{MaxCov}(\mathcal{L})$ denote the maximum number of vertices in $U$ that can be covered by a labeling; i.e.
$$\mathrm{MaxCov}(\mathcal{L}) = \max_{\sigma_U, \sigma_V} \big|\{u \in U : (\sigma_U, \sigma_V) \text{ covers } u\}\big|.$$
The goal of the MaxCover problem is to compute $\mathrm{MaxCov}(\mathcal{L})$. We remark that the standard label cover problem (e.g. [WilliamsonShmoyBook]) asks to maximize the number of covered edges, as opposed to our MaxCover problem, which seeks to maximize the number of covered vertices.
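For concreteness, the MaxCover objective can be turned directly into a (very inefficient) exhaustive-search procedure. The following Python sketch uses a hypothetical instance representation of ours, not the paper's: vertex lists, an edge list, and a dictionary mapping each edge to its set of allowed label pairs.

```python
from itertools import product

def max_cov(U, V, edges, sigma_U, sigma_V, constraints):
    """Exhaustive-search MaxCov: try every labeling of U and V and count
    how many left vertices have all their incident edges satisfied.

    edges: list of (u, v) pairs with u in U and v in V (names disjoint).
    constraints: dict mapping each edge (u, v) to its set of allowed
    label pairs (a, b).  Runs in |sigma_U|^|U| * |sigma_V|^|V| times a
    polynomial, so it is only meant for tiny instances.
    """
    best = 0
    for labels_U in product(sigma_U, repeat=len(U)):
        for labels_V in product(sigma_V, repeat=len(V)):
            lab = {**dict(zip(U, labels_U)), **dict(zip(V, labels_V))}
            covered = sum(
                1 for u in U
                if all((lab[u], lab[v]) in constraints[(u, v)]
                       for (w, v) in edges if w == u)
            )
            best = max(best, covered)
    return best
```

For example, on an instance with two left vertices attached to one right vertex whose constraints demand conflicting right labels, at most one left vertex can be covered.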
MinLabel Problem:
A multilabeling of the graph is a pair of mappings $\sigma_U : U \to \Sigma_U$ and $\hat{\sigma}_V : V \to 2^{\Sigma_V} \setminus \{\emptyset\}$. We say that $(\sigma_U, \hat{\sigma}_V)$ covers an edge $e = (u, v)$ if there exists $\beta \in \hat{\sigma}_V(v)$ such that $(\sigma_U(u), \beta) \in \Pi_e$. For any label cover instance $\mathcal{L}$, let $\mathrm{MinLab}(\mathcal{L})$ denote the minimum number of labels needed to be assigned to vertices in $V$ in order to cover all vertices in $U$; i.e.
$$\mathrm{MinLab}(\mathcal{L}) = \min \sum_{v \in V} |\hat{\sigma}_V(v)|,$$
where the minimization is over multilabelings that cover every edge in $E$.
We emphasize that we can assign multiple labels to each node in $V$, while each node in $U$ must be assigned a single label. Note that MinLab is different from the problem known in the literature as MinRep (e.g. [CharikarHK11]); in particular, in MinRep we can assign multiple labels to the nodes on both sides.
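Assuming the convention that each vertex of $U$ receives a single label while each vertex of $V$ receives a nonempty set of labels, MinLab can likewise be computed by brute force on tiny instances. The representation below (vertex lists, edge list, per-edge allowed pairs) is our own illustration, not the paper's:

```python
from itertools import combinations, product

def nonempty_subsets(sigma):
    """All nonempty subsets of the alphabet sigma."""
    return [set(c) for r in range(1, len(sigma) + 1)
            for c in combinations(sigma, r)]

def min_lab(U, V, edges, sigma_U, sigma_V, constraints):
    """Exhaustive-search MinLab: each u in U gets one label, each v in V
    gets a nonempty set of labels; a choice is feasible iff every edge
    (u, v) is satisfied by some label b assigned to v.  Returns the
    minimum total number of labels placed on V, or None if infeasible."""
    best = None
    for labels_U in product(sigma_U, repeat=len(U)):
        lab_U = dict(zip(U, labels_U))
        for sets_V in product(nonempty_subsets(sigma_V), repeat=len(V)):
            lab_V = dict(zip(V, sets_V))
            if all(any((lab_U[u], b) in constraints[(u, v)]
                       for b in lab_V[v])
                   for (u, v) in edges):
                total = sum(len(s) for s in lab_V.values())
                best = total if best is None else min(best, total)
    return best
```

On the two-left-vertex example from before, the single right vertex must carry both of its labels, so the optimum is 2.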
Results.
First, note that checking whether or not $\mathrm{MaxCov}(\mathcal{L}) \geq k$, for any $k$, can be done by the following algorithms.

It can be done in $O^{*}\big((|\Sigma_U| + 1)^{|U|}\big)$ time (recall that we use $O^{*}(\cdot)$ to hide factors polynomial in the input size): first, enumerate all possible subsets $S \subseteq U$ and all possible labelings of the vertices in $S$. Once the labeling of $S$ is fixed, we only need polynomial time to check whether we can label the remaining vertices so that all vertices in $S$ are covered.

It can be done in $O^{*}\big(|\Sigma_V|^{|V|}\big)$ time: enumerate all possible labelings $\sigma_V$ of $V$. After $\sigma_V$ is fixed, we can find in polynomial time a labeling $\sigma_U$ of $U$ that maximizes the number of covered vertices in $U$.
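The second algorithm can be sketched as follows (our own hypothetical instance representation: vertex lists, an edge list, and a dictionary mapping each edge to its set of allowed label pairs). The point is that once the labeling of $V$ is fixed, each left vertex can be handled independently in polynomial time:

```python
from itertools import product

def max_cov_enumerate_right(U, V, edges, sigma_U, sigma_V, constraints):
    """MaxCov via enumeration over labelings of V only.

    For each of the |sigma_V|^|V| labelings of V, a left vertex u is
    coverable iff a single label a in sigma_U satisfies *all* edges
    incident to u; this check takes only polynomial time per labeling.
    """
    best = 0
    for labels_V in product(sigma_V, repeat=len(V)):
        lab_V = dict(zip(V, labels_V))
        covered = 0
        for u in U:
            incident = [(w, v) for (w, v) in edges if w == u]
            # u is covered iff one left label works for every incident edge.
            if any(all((a, lab_V[v]) in constraints[(u, v)]
                       for (_, v) in incident)
                   for a in sigma_U):
                covered += 1
        best = max(best, covered)
    return best
```

The loop over labelings of $V$ dominates the running time; everything inside it is polynomial.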
ETH can be restated as saying that these algorithms are the best possible when $|\Sigma_U|, |\Sigma_V| = O(1)$ and $\Pi$ has the projection property. Gap-ETH asserts further that this is the case even when the goal is only to distinguish between $\mathrm{MaxCov}(\mathcal{L}) = |U|$ and $\mathrm{MaxCov}(\mathcal{L}) < (1 - \varepsilon)|U|$.
Theorem 4.1.
Gap-ETH (Conjecture 2.5) is equivalent to the following statement. There exist constants $\delta, \varepsilon > 0$ such that no algorithm can take a label cover instance $\mathcal{L}$ and distinguish between the following cases in $O(2^{\delta |U|})$ time:

$\mathrm{MaxCov}(\mathcal{L}) = |U|$, and

$\mathrm{MaxCov}(\mathcal{L}) < (1 - \varepsilon)|U|$.

This holds even when $|\Sigma_U|, |\Sigma_V| = O(1)$ and $\Pi$ has the projection property.
The proof of Theorem 4.1 is standard. To avoid distracting the reader, we provide a sketch of the proof in Appendix E.
We will show that Theorem 4.1 can be extended in several directions, which will be useful later. First, consider the case when the first (exponential in $|U|$) algorithm is faster than the second. We show that in this case this algorithm is essentially the best possible even for much smaller soundness, and that this holds even when we know that $|U|$ is small.
For convenience, in the statements of Theorems 4.2, 4.3 and 4.4 below, we will use $|\mathcal{L}|$ to denote the size of the label cover instance $\mathcal{L}$. Furthermore, recall that the notation $O_{q,r}(\cdot)$ denotes any multiplicative factor that depends only on $q$ and $r$.
Theorem 4.2 (MaxCov with Small $|U|$).
Assuming Gap-ETH, there exist constants $\delta, r_0 > 0$ such that, for any positive integers $q \geq r \geq r_0$, no algorithm can take a label cover instance $\mathcal{L}$ with $|U| = q$ and distinguish between the following cases in $O_{q,r}(|\mathcal{L}|^{\delta r})$ time:

$\mathrm{MaxCov}(\mathcal{L}) = q$, and

$\mathrm{MaxCov}(\mathcal{L}) < r$.

This holds even when $\Pi$ has the projection property.
We emphasize that it is important for the applications in later sections that the soundness parameter $r$ is independent of $q$. In fact, the main challenge in proving the theorem above is to prove it true for $r$ that is arbitrarily small compared to $q$.
Secondly, consider the case when the second (exponential in $|V|$) algorithm is faster; in particular, when $|V|$ is much smaller than $|U|$. In this case, we cannot make the soundness (i.e. the parameter $r$ in Theorem 4.2) arbitrarily small. (Roughly speaking, the first algorithm could become faster otherwise.) Instead, we will show that the second algorithm is essentially the best possible for soundness as small as $\varepsilon |U|$, for any constant $\varepsilon > 0$. More importantly, this holds for $|V|$ that is a constant (thus independent from the input size). This is the key property of this theorem that we need later.
Theorem 4.3 (MaxCov with Small $|V|$).
Assuming Gap-ETH, there exist constants $\delta, r_0 > 0$ such that, for any positive integer $q \geq r_0$ and any constant $\varepsilon > 0$, no algorithm can take a label cover instance $\mathcal{L}$ with $|V| = q$ and distinguish between the following cases in $O_{q,\varepsilon}(|\mathcal{L}|^{\delta q})$ time:

$\mathrm{MaxCov}(\mathcal{L}) = |U|$, and

$\mathrm{MaxCov}(\mathcal{L}) < \varepsilon |U|$.
This holds even when .
We remark that the above label cover instance does not have the projection property.
In our final result, we turn to computing $\mathrm{MinLab}(\mathcal{L})$. Since $\mathrm{MinLab}(\mathcal{L}) = |V|$ if and only if $\mathrm{MaxCov}(\mathcal{L}) = |U|$, a statement similar to Theorem 4.1 intuitively holds for distinguishing between small and large values of $\mathrm{MinLab}(\mathcal{L})$; i.e. such a distinction also requires essentially enumerative time. In the following theorem, we show that this gap can be substantially amplified, while maintaining the property that $|V|$ is a constant (thus independent from the input size).
Theorem 4.4 (MinLab Hardness).
Assuming Gap-ETH, there exist constants $\delta, r_0 > 0$ such that, for any positive integers $r \geq q \geq r_0$, no algorithm can take a label cover instance $\mathcal{L}$ with $|V| = q$ and distinguish between the following cases in $O_{q,r}(|\mathcal{L}|^{\delta q})$ time:

$\mathrm{MinLab}(\mathcal{L}) = q$, and

$\mathrm{MinLab}(\mathcal{L}) > r$.
This holds even when .
The rest of this section is devoted to proving Theorems 4.2, 4.3 and 4.4.
4.2 Proof of Theorem 4.2
The proof proceeds by compressing the left vertex set of a label cover instance from Theorem 4.1. More specifically, each new left vertex will be a subset of left vertices in the original instance. In the construction below, these subsets will simply be random subsets of the original left vertex set of a certain size; however, the only property of random subsets we will need is that they form a disperser. To clarify our proof, let us start by stating the definition of a disperser. Note that, even though dispersers are often described in graph or distribution terminology in the literature (e.g. [Vadhanbook]), it is more convenient for us to describe them in terms of subsets.
Definition 4.5.
For any positive integers $m, \ell, k$ and any constant $\varepsilon > 0$, an $(m, \ell, k, \varepsilon)$-disperser is a collection $\mathcal{I}$ of $\ell$-element subsets $I_1, \ldots, I_t \subseteq [m]$ such that the union of any $k$ different subsets from the collection has size at least $\varepsilon m$. In other words, for any $1 \leq i_1 < i_2 < \cdots < i_k \leq t$, we have $|I_{i_1} \cup \cdots \cup I_{i_k}| \geq \varepsilon m$.
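The disperser condition is straightforward to check by brute force for small parameters. The following sketch (our own helper, not from the paper) tests whether a given collection of $\ell$-element subsets of $\{0, \ldots, m-1\}$ is an $(m, \ell, k, \varepsilon)$-disperser:

```python
from itertools import combinations

def is_disperser(subsets, m, ell, k, eps):
    """Check the (m, ell, k, eps)-disperser property by brute force:
    every union of k distinct subsets must have size >= eps * m."""
    assert all(len(s) == ell for s in subsets)
    return all(len(set().union(*group)) >= eps * m
               for group in combinations(subsets, k))
```

For instance, two disjoint 3-element subsets of a 6-element universe form a $(6, 3, 2, 1/2)$-disperser, while two heavily overlapping subsets fail for larger $\varepsilon$.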
The idea of using dispersers to amplify gaps in hardness of approximation bears a strong resemblance to the classical randomized graph product technique [BermanS92]. Indeed, similar approaches have been used before, both implicitly (e.g. [BellareGS98]) and explicitly (e.g. [Zuck96, Zuck96unapprox, Zuckerman07]). In fact, even the reduction we use below has been studied before by Zuckerman [Zuck96, Zuck96unapprox]!
What differentiates our proof from previous works is the setting of parameters. Since the reduction size (specifically, the left alphabet size) blows up exponentially in $\ell$, and previous results aim to prove NP-hardness of approximating Clique, the subset size is chosen to be small there (i.e. $\ell = O(\log m)$). On the other hand, we can afford to choose a much larger $\ell$, since we only aim at a running time lower bound rather than NP-hardness. Interestingly, dispersers for our regime of parameters are easier to construct deterministically, and we will sketch the construction in Subsection 4.2.1. Note that this construction immediately implies a derandomization of our reduction.
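For intuition about the probabilistic construction, one can sample random $\ell$-element subsets and measure the smallest $k$-wise union directly (a sketch of ours; the parameter values used below are arbitrary illustrations):

```python
import random
from itertools import combinations

def random_collection(m, ell, t, seed=0):
    """Draw t subsets of {0, ..., m-1}, each of size ell,
    independently and uniformly at random."""
    rng = random.Random(seed)
    return [frozenset(rng.sample(range(m), ell)) for _ in range(t)]

def min_union_size(subsets, k):
    """Smallest size of a union of k distinct subsets from the
    collection; the collection is an (m, ell, k, eps)-disperser
    iff this value is at least eps * m."""
    return min(len(set().union(*g)) for g in combinations(subsets, k))
```

Comparing `min_union_size` against $\varepsilon m$ for sampled collections gives a quick empirical sanity check of the claim below on small parameters.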
The exact dependency between the parameters can be found in the claim below, which also states that random subsets form a disperser for such a choice of parameters with high probability. Here and throughout the proof, $q$ and $r$ should be thought of as constants with $q \geq r$; these are the same as the ones in the statement of Theorem 4.2.
Claim 4.6.
For any positive integers and any constant , let and let be element subsets of drawn independently and uniformly at random. If , then is an disperser with probability at least .
Proof.
When , the statement is obviously true; thus, we assume w.l.o.g. that . Consider any indices such that . We will first bound the probability that the corresponding union is too small and then take a union bound over all such choices of indices.
Observe that if and only if there exists a set of size less than