1 Introduction
The study of error-correcting codes gives rise to many interesting computational problems. One of the most fundamental among these is the problem of computing the distance of a linear code. In this problem, which is commonly referred to as the Minimum Distance Problem (MDP), we are given as input a generator matrix A ∈ F_2^{n×m} of a binary linear code and an integer k. (MDP can be defined over larger fields as well; we discuss this further in Section 7.) The goal is to determine whether the code has distance at most k. Recall that the distance of a linear code C is min_{0≠c∈C} ||c||_0, where ||·||_0 denotes the 0-norm (aka the Hamming norm).
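As a concrete toy illustration of this definition (our own sketch, not part of the paper's formalism), the distance of a small code can be computed by brute force over all nonzero messages; the generator matrix below is the standard [7,4,3] Hamming code:

```python
from itertools import product

def min_distance(G):
    """Minimum Hamming weight over all nonzero codewords of the binary
    linear code generated (as the row space) by G.
    Brute force over all 2^k - 1 nonzero messages."""
    k, n = len(G), len(G[0])
    best = None
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue
        # codeword = msg * G over F_2
        cw = [sum(msg[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
        w = sum(cw)
        best = w if best is None else min(best, w)
    return best

# Generator matrix of the [7,4,3] Hamming code (standard form).
G_hamming = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]
```

Of course, this enumeration takes 2^k time in the message length; the whole point of MDP's intractability is that nothing dramatically better is believed to exist.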
The study of this problem dates back to at least 1978, when Berlekamp et al. [BMvT78] conjectured that it is NP-hard. This conjecture remained open for almost two decades until it was positively resolved by Vardy [Var97a, Var97b]. Later, Dumer et al. [DMS03] strengthened this intractability result by showing that even approximately computing the minimum distance of the code is hard. Specifically, they showed that, unless NP = RP, no polynomial-time algorithm can distinguish between a code with distance at most d and one whose distance is greater than γ·d, for any constant γ ≥ 1. Furthermore, under stronger assumptions, the ratio can be improved to super-constant and even almost-polynomial factors. Dumer et al.'s result has subsequently been derandomized by Cheng and Wan [CW12] and further simplified by Austrin and Khot [AK14] and Micciancio [Mic14].
While the aforementioned intractability results rule out not only efficient exact algorithms but also efficient approximation algorithms for MDP, there is another popular technique for coping with hardness of problems which is not yet ruled out by the known results: parameterization.
In parameterized problems, part of the input is an integer that is designated as the parameter of the problem, and the goal is now not to find a polynomial-time algorithm but a fixed-parameter tractable (FPT) algorithm. This is an algorithm whose running time can be upper-bounded by some (computable) function of the parameter times some polynomial in the input length. Specifically, for MDP, its parameterized variant k-MDP has k as the parameter, and the question is whether there exists an algorithm that can decide if the code generated by A has distance at most k in time f(k)·poly(nm), where f can be any computable function that depends only on k. (Throughout Sections 1 and 2, for a computational problem Π, we denote its parameterized variant by k-Π, where k is the parameter of the problem.)
The parameterized complexity of k-MDP was first questioned by Downey et al. [DFVW99], who showed that parameterized variants of several other coding-theoretic problems, including the Nearest Codeword Problem and the Nearest Vector Problem (also referred to in the literature as the Closest Vector Problem), which we will discuss in more detail in Section 1.1.1, are W[1]-hard. Thereby, assuming the widely believed W[1] ≠ FPT hypothesis, these problems are rendered intractable from the parameterized perspective. (k-MDP is formulated slightly differently in [DFVW99]: there, the input contains a parity-check matrix instead of the generator matrix, but, since we can efficiently compute one from the other, the two formulations are equivalent. The problem is also commonly referred to in the area of parameterized complexity as the Even Set problem, due to its graph-theoretic interpretation; see [DFVW99].) Unfortunately, Downey et al. fell short of proving such hardness for k-MDP and left it as an open problem:
Open Question 1. Is k-MDP fixed-parameter tractable?
Although almost two decades have passed, the above question remains unresolved to this day, despite receiving significant attention from the community. In particular, the problem was listed as an open question in the seminal book of Downey and Fellows [DF99] and has been reiterated numerous times over the years [DGMS07, FGMS12, GKS12, DF13, CFJ14, CFK15, BGGS16, CFHW17, Maj17]. In fact, in their second book [DF13], Downey and Fellows even include this problem as one of the six "most infamous" open questions in the area of parameterized complexity. (So far, two of the six problems have been resolved: the parameterized complexity of Biclique [Lin15] and the parameterized approximability of Dominating Set [KLM18].)
Another question posed in Downey et al.'s work [DFVW99] that remains open is the parameterized Shortest Vector Problem (k-SVP) in lattices. The input of k-SVP (in the ℓ_p norm) is an integer k ∈ N and a matrix B ∈ Z^{n×m} representing the basis of a lattice, and we want to determine whether the shortest (nonzero) vector in the lattice has length at most k, i.e., whether min_{0≠x∈Z^m} ||Bx||_p ≤ k. Again, k is the parameter of the problem. It should also be noted here that, as in [DFVW99], we require the basis of the lattice to be integer-valued, which is sometimes not enforced in the literature (e.g. [vEB81, Ajt98]). This is because, if B were allowed to be any matrix in Q^{n×m}, then the parameterization would be meaningless, as we could simply scale B down by a large multiplicative factor.
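To make the k-SVP definition concrete, the following toy enumerator (our own illustration) searches for a short nonzero lattice vector over a bounded box of integer coefficients; real algorithms (LLL and its descendants) are far more sophisticated, and the coefficient bound below is a heuristic, since a shortest vector may in general require larger coefficients:

```python
from itertools import product

def shortest_vector_norm(B, p=2, coeff_bound=3):
    """Minimum ||Bx||_p over nonzero integer vectors x with |x_i| <= coeff_bound.
    Columns of B are the basis vectors. Exhaustive search: illustration only."""
    n, m = len(B), len(B[0])
    best = None
    for x in product(range(-coeff_bound, coeff_bound + 1), repeat=m):
        if not any(x):
            continue
        v = [sum(B[i][j] * x[j] for j in range(m)) for i in range(n)]
        norm = sum(abs(c) ** p for c in v) ** (1.0 / p)
        best = norm if best is None else min(best, norm)
    return best
```

For the basis with columns (2, 0) and (1, 2), the shortest lattice vector is (2, 0) itself, of ℓ_2 norm 2.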
The (non-parameterized) Shortest Vector Problem (SVP) has been intensively studied, motivated partly by the fact that both algorithms and hardness results for the problem have numerous applications. Specifically, the celebrated LLL algorithm for SVP [LLL82] can be used to factor rational polynomials and to solve integer programming (parameterized by the number of unknowns) [Len83], along with many other computational number-theoretic problems (see e.g. [NV10]). Furthermore, the hardness of (approximating) SVP has been used as the basis of several cryptographic constructions [Ajt98, AD97, Reg03, Reg05]. Since these topics are outside the scope of our paper, we refer the interested reader to the following surveys for more details: [Reg06, MR09, NV10, Reg10].
On the computational hardness side of the problem, van Emde Boas [vEB81] was the first to show that SVP is NP-hard for the ℓ_∞ norm, but left open the question of whether SVP in the ℓ_p norm for 1 ≤ p < ∞ is hard. It was not until a decade and a half later that Ajtai [Ajt96] showed, under a randomized reduction, that SVP for the ℓ_2 norm is also NP-hard; in fact, Ajtai's hardness result holds not only for exact algorithms but for approximation algorithms as well, albeit with a ratio only slightly larger than one. The inapproximability ratio was then improved in a subsequent work of Cai and Nerurkar [CN99]. Finally, Micciancio [Mic00] managed to achieve a factor that is bounded away from one. Specifically, Micciancio [Mic00] showed (again under randomized reductions) that SVP in the ℓ_p norm is NP-hard to approximate to within a factor of 2^{1/p} − ε for every ε > 0. Khot [Kho05] later improved the ratio to any constant, and even to 2^{(log n)^{1/2−ε}} under a stronger assumption. Haviv and Regev [HR07] subsequently simplified the gap amplification step of Khot and, in the process, improved the ratio to almost polynomial. We note that both Khot's and Haviv–Regev's reductions are also randomized, and it is still open to find a deterministic NP-hardness reduction for SVP in the ℓ_p norms for 1 ≤ p < ∞ (see [Mic12]); we emphasize here that such a reduction is not known even for the exact (not approximate) version of the problem. For the ℓ_∞ norm, the following stronger result due to Dinur is known [Din02]: SVP in the ℓ_∞ norm is NP-hard to approximate to within an almost-polynomial factor (under a deterministic reduction).
Very recently, fine-grained studies of SVP have been initiated [BGS17, AS18]. The authors of [BGS17, AS18] showed that SVP in any ℓ_p norm cannot be solved (or even approximated to some constant strictly greater than one) in subexponential time, assuming the existence of a certain family of lattices (this additional assumption is needed only for some values of p; for the remaining values, their hardness is conditional only on Gap-ETH) and the (randomized) Gap Exponential Time Hypothesis (Gap-ETH) [Din16, MR16], which states that no randomized subexponential-time algorithm can distinguish between a satisfiable 3CNF formula and one which is only 0.99-satisfiable. (See Hypothesis 3.4.1.)
As with k-MDP, Downey et al. [DFVW99] were the first to question the parameterized tractability of k-SVP (for the ℓ_2 norm). Once again, Downey and Fellows included k-SVP as one of the open problems in both of their books [DF99, DF13], albeit, in their second book, k-SVP was in the "tough customers" list instead of the "most infamous" list that k-MDP belonged to. And again, as with Open Question 1, this question remains unresolved to this day:
Open Question 2. Is k-SVP fixed-parameter tractable?
1.1 Our Results
The main result of this paper is a resolution of the previously mentioned Open Questions 1 and 2: more specifically, we prove that k-MDP and k-SVP (in the ℓ_p norm for any p > 1) do not admit any FPT algorithm, assuming the aforementioned (randomized) Gap-ETH (Hypothesis 3.4.1). In fact, our result is slightly stronger than stated here in a couple of ways:

First, our result rules out not only exact FPT algorithms but also FPT approximation algorithms.

Second, our result holds even under the so-called Parameterized Inapproximability Hypothesis (PIH) [LRSZ17], which asserts that no (randomized) FPT algorithm can distinguish between a satisfiable 2CSP instance and one which is only 0.99-satisfiable, where the parameter is the number of variables (see Hypothesis 3.4). It is known (and simple to see) that Gap-ETH implies PIH; please refer to Section 3.4 for more details regarding the two assumptions.
With this in mind, we can state our results, starting with the parameterized intractability of k-MDP, more concretely (but still informally) as follows:
[Informal; see Theorem 5] Assuming PIH, for any constant γ ≥ 1 and any computable function f, no f(k)·poly(nm)-time algorithm, on input (A, k), can distinguish between

the case where the distance of the code generated by A is at most k, and,

the case where the distance of the code generated by A is more than γ·k.
Notice that our above result rules out FPT approximation algorithms with any constant approximation ratio for k-MDP. In contrast, we can only prove FPT inapproximability with some constant ratio for k-SVP in the ℓ_p norm for p > 1, with the exception of p = 2, for which the inapproximability factor in our result can be amplified to any constant. These are stated more precisely below.
[Informal; see Theorem 6] For any p > 1, there exists a constant γ_p > 1 such that, assuming PIH, for any computable function f, no f(k)·poly(nm)-time algorithm, on input (B, k), can distinguish between

the case where the ℓ_p norm of the shortest vector of the lattice generated by B is at most k, and,

the case where the ℓ_p norm of the shortest vector of the lattice generated by B is more than γ_p·k.
[Informal; see Theorem 6] Assuming PIH, for any computable function f and any constant γ ≥ 1, no f(k)·poly(nm)-time algorithm, on input (B, k), can distinguish between

the case where the ℓ_2 norm of the shortest vector of the lattice generated by B is at most k, and,

the case where the ℓ_2 norm of the shortest vector of the lattice generated by B is more than γ·k.
We remark that our results do not yield hardness for k-SVP in the ℓ_1 norm, and this remains an interesting open question. Section 7 contains a discussion of this problem. We also note that, for Theorem 6 and onwards, we are only concerned with p < ∞; this is because, for p = ∞, the problem is hard to approximate even when k = 1 [vEB81]!
1.1.1 Nearest Codeword Problem and Nearest Vector Problem
As we shall see in Section 2, our proof proceeds by first showing FPT hardness of approximation of the non-homogeneous variants of k-MDP and k-SVP, called the Nearest Codeword Problem (k-NCP) and the Nearest Vector Problem (k-NVP) respectively. For both k-NCP and k-NVP, we are given a target vector y (in F_2^n and Z^n, respectively) in addition to (A, k), and the goal is to determine whether there is any x (in F_2^m and Z^m, respectively) such that the (Hamming and ℓ_p, respectively) norm of Ax − y is at most k.
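To make the k-NCP definition concrete, here is a naive decision procedure (our own sketch, with illustrative names) that checks whether any message brings the codeword within Hamming distance k of the target:

```python
from itertools import product

def ncp_yes(A, y, k):
    """Is there x in F_2^m with ||Ax - y||_0 <= k?
    Brute force over all 2^m messages; illustration only."""
    n, m = len(A), len(A[0])
    for x in product([0, 1], repeat=m):
        # over F_2, the Hamming distance is the weight of Ax + y
        dist = sum((sum(A[i][j] * x[j] for j in range(m)) + y[i]) % 2
                   for i in range(n))
        if dist <= k:
            return True
    return False
```

Setting y = 0 and requiring x ≠ 0 would recover the homogeneous problem (MDP), which is exactly the sense in which k-NCP is the non-homogeneous variant.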
As an intermediate step of our proof, we show that the k-NCP and k-NVP problems are hard to approximate to any constant factor (see Theorem 4 and Theorem 6.1 respectively). (While our k-MDP result only applies to F_2, it is not hard to see that our intermediate reduction for k-NCP actually applies to any finite field too.) This should be compared to [DFVW99], in which the authors show that both problems are W[1]-hard. The distinction here is that our result rules out not only exact algorithms but also approximation algorithms, at the expense of an assumption stronger than that of [DFVW99]. Indeed, if one could somehow show that k-NCP and k-NVP are W[1]-hard to approximate (to some constant strictly greater than one), then our reductions would imply W[1]-hardness of k-MDP and k-SVP (under randomized reductions). Unfortunately, no such hardness of approximation of k-NCP and k-NVP is known yet.
We end this section by remarking that the computational complexity of the (non-parameterized) NCP and NVP has also been thoroughly studied (see e.g. [Mic01, DKRS03, Ste93, ABSS97, GMSS99], in addition to the references for MDP and SVP); indeed, the inapproximability results for these two problems form the basis of the hardness of approximation for MDP and SVP.
1.2 Organization of the paper
In the next section, we give an overview of our reductions and proofs. After that, in Section 3, we define additional notation and preliminaries needed to fully formalize our proofs. In Section 4 we show the constant inapproximability of k-NCP. Next, in Section 5, we establish the constant inapproximability of k-MDP. Section 6 provides the constant inapproximability of k-NVP and k-SVP. Finally, in Section 7, we conclude with a few open questions and research directions.
2 Proof Overview
In the non-parameterized setting, all the aforementioned inapproximability results for both MDP and SVP are shown in two steps: first, one proves the inapproximability of their inhomogeneous counterparts (i.e. NCP and NVP), and then reduces these to MDP and SVP. We follow this general outline. That is, we first show, via relatively simple reductions from PIH, that both k-NCP and k-NVP are hard to approximate. Then, we reduce k-NCP and k-NVP to k-MDP and k-SVP respectively. In this second step, we employ Dumer et al.'s reduction [DMS03] for k-MDP and Khot's reduction [Kho05] for k-SVP. While the latter works almost immediately in the parameterized regime, there are several technical challenges in adapting Dumer et al.'s reduction to our setting. The remainder of this section is devoted to presenting all of our reductions and to highlighting these technical challenges, along with the changes needed in comparison with the non-parameterized setting.
The starting point of all the hardness results in this paper is Gap-ETH (Hypothesis 3.4.1). As mentioned earlier, it is well known that Gap-ETH implies PIH (Hypothesis 3.4), i.e., PIH is weaker than Gap-ETH. Hence, for the rest of this section, we may start from PIH instead of Gap-ETH.
2.1 Parameterized Intractability of k-MDP from PIH
We start this subsection by describing Dumer et al.'s (henceforth DMS) reduction [DMS03]. The starting point of the DMS reduction is the NP-hardness of approximating NCP to any constant factor [ABSS97]. Let us recall that in NCP we are given a matrix A ∈ F_2^{n×m}, an integer k, and a target vector y ∈ F_2^n, and the goal is to determine whether there is any x ∈ F_2^m such that ||Ax − y||_0 is at most k. Arora et al. [ABSS97] show that, for any constant γ ≥ 1, it is NP-hard to distinguish the case when there exists x with ||Ax − y||_0 ≤ k from the case when ||Ax − y||_0 > γ·k for all x. Dumer et al. introduce the notion of "locally dense codes" to enable a gadget reduction from NCP to MDP. Informally, a locally dense code is a linear code C (with generator matrix G) of minimum distance d admitting a ball B(s, r) centered at s of radius r (note that for the ball to contain more than a single codeword, we must have r ≥ d/2) and containing a large (exponential in the dimension) number of codewords. Moreover, for the gadget reduction to MDP to go through, we require not only knowledge of the code, but also of the center s and of a linear transformation T used to index the codewords in B(s, r), i.e., T maps C ∩ B(s, r) onto a smaller subspace. Given an instance (A, y, k) of NCP, and a locally dense code whose parameters (such as dimension and distance) we will fix later, Dumer et al. build a matrix A' whose codewords, indexed by pairs (z, c) with c ∈ {0, 1}, take the form

A'·(z ∘ c) = (a copies of Gz + c·s) stacked on top of (b copies of ATz + c·y),   (1)

where a, b are appropriately chosen positive integers. If there exists x such that ||Ax − y||_0 ≤ k, then consider z such that Tz = x and Gz ∈ B(s, r) (we choose the parameters of the code, in particular the dimensions of G and T, such that all these computations are valid). Let w = z ∘ 1, and note that ||A'w||_0 ≤ a·r + b·k. In other words, if (A, y, k) is a YES instance of NCP, then A' generates a code of distance at most a·r + b·k, i.e., a YES instance of MDP. On the other hand, suppose that ||Ax − y||_0 > γ·k for all x, for some constant γ > 2. (Note that in the described reduction, we need the inapproximability of NCP to a factor greater than two, even to just reduce to the exact version of MDP.) Then it is possible to show that ||A'w||_0 > a·r + b·k for all nonzero w. The proof is based on a case analysis of the last coordinate c of w. If that coordinate is 0, then, since C is a code of distance d, we have ||A'w||_0 ≥ a·d; if that coordinate is 1, then the assumption that (A, y, k) is a NO instance of NCP implies that ||A'w||_0 > b·γ·k. Note that this gives an inapproximability for MDP of some ratio bounded away from one; this gap is then further amplified by a simple tensoring procedure.
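The tensoring procedure rests on a classical fact: the tensor product of two binary linear codes has generator matrix equal to the Kronecker product of the individual generators, and its distance is the product of the two distances, so distances (and hence gaps) multiply. A minimal sketch (our own illustration):

```python
from itertools import product

def min_distance(G):
    """Minimum weight of a nonzero codeword of the code generated by the rows of G."""
    k, n = len(G), len(G[0])
    weights = [sum(sum(m[i] * G[i][j] for i in range(k)) % 2 for j in range(n))
               for m in product([0, 1], repeat=k) if any(m)]
    return min(weights)

def kron(G1, G2):
    """Kronecker product of two 0/1 matrices: a generator of the tensor product code."""
    return [[a * b for a in r1 for b in r2] for r1 in G1 for r2 in G2]
```

Tensoring a distance-2 code with the distance-3 repetition code yields distance 6, and tensoring a code with itself squares its distance, which is exactly how a constant gap is boosted.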
We note that Dumer et al. were not able to find a deterministic construction of locally dense codes with all of the above described properties. Specifically, they gave an efficient deterministic construction of a code C, but only a randomized algorithm that finds a linear transformation T and a center s w.h.p. Therefore, their hardness result relies on the assumption NP ≠ RP, instead of the more standard P ≠ NP assumption. Later, Cheng and Wan [CW12] and Micciancio [Mic14] provided constructions of such (families of) locally dense codes with an explicit center, and thus showed the constant-ratio inapproximability of MDP under the assumption P ≠ NP.
Trying to follow the DMS reduction in order to show the parameterized intractability of k-MDP, we face the following three immediate obstacles. First, no inapproximability result is known for k-NCP, for any constant factor greater than 1. Note that to use the DMS reduction, we need the parameterized inapproximability of k-NCP for an approximation factor greater than two. Second, the construction of locally dense codes of Dumer et al. only works when the distance is linear in the block length (which is a function of the size of the input). However, we need codes whose distance is bounded above by a function of the parameter of the problem (and does not depend on the input size). This is because the DMS reduction converts an instance (A, y, k) of k-NCP to an instance (A', k') of k'-MDP, and for this reduction to be an FPT reduction, we need k' to be a function depending only on k; i.e., d, the distance of the code (which is at most a constant times k'), must be a function only of k. Third, recall that the DMS reduction uses T to identify the vectors in the ball B(s, r) with all the potential solutions of NCP. Notice that the number of vectors in the ball of radius r in F_2^h is at most h^{O(r)}, but the number of potential solutions of k-NCP is exponential in m (i.e. all x ∈ F_2^m). Such an identification is therefore impossible, since r is bounded above by a function of k!
We overcome the first obstacle by proving the constant inapproximability of k-NCP under PIH. Specifically, assuming PIH, we first show the parameterized inapproximability of k-NCP for some constant factor greater than 1, and then boost the gap using a composition operator (applied self-recursively). Note that in order to follow the DMS reduction, we need the inapproximability of k-NCP for some constant factor greater than 2; in other words, the gap amplification for k-NCP is necessary, even if we are not interested in showing the inapproximability of k-NCP for all constant factors.
We overcome the third obstacle by introducing an intermediate problem in the DMS reduction, which we call the Sparse Nearest Codeword Problem. This promise problem differs from k-NCP in only one way: in the YES case, we want to find x with ||x||_0 ≤ k (instead of x ranging over the entire space F_2^m) such that ||Ax − y||_0 ≤ k. In other words, we only allow sparse x as solutions. We show that k-NCP can be reduced to the Sparse Nearest Codeword Problem.
Finally, we overcome the second obstacle by introducing a variant of locally dense codes, which we call sparse covering codes. Roughly speaking, we show that any code which nears the sphere-packing bound (aka Hamming bound) in the high-rate regime is a sparse covering code. We then follow the DMS reduction with the new ingredient of sparse covering codes (replacing locally dense codes) to reduce the Sparse Nearest Codeword Problem to k-MDP.
We remark here that overcoming the second and third obstacles constitutes our main technical contribution. In particular, our result on sparse covering codes might be of independent interest.
The full reduction goes through several intermediate steps, which we will describe in more detail in the coming paragraphs. A high-level summary of these steps is also provided in Figure 1. Throughout this section, for any gap problem, if we do not specify the gap in the subscript, then the gap can be any arbitrary constant. For every ε > 0, we denote by Gap2CSP_ε the gap problem where we have to determine whether a given 2CSP instance Γ (i.e., a graph G = (V, E) and a set of constraints {C_e}_{e∈E} over an alphabet Σ) has an assignment to its vertices that satisfies all the constraints, or whether every assignment violates more than an ε fraction of the constraints. Here each C_e ⊆ Σ × Σ is simply the set of all pairs of endpoint labels that satisfy the constraint. The parameter of the problem is |V|. PIH asserts that there exists some constant ε > 0 such that no randomized FPT algorithm can solve Gap2CSP_ε. (See Hypothesis 3.4 for a more formal statement.)
Reducing Gap2CSP to GapMLD. We start by showing the parameterized inapproximability of k-NCP for some constant ratio. Instead of working with k-NCP directly, we work with its equivalent formulation (obtained by converting the generator matrix given as input into a parity-check matrix), which in the literature is referred to as the Maximum Likelihood Decoding problem. (The two formulations are equivalent, but we use different names for them to avoid confusion when we use the Sparse Nearest Codeword Problem later on.) We define the gap version of this problem (i.e., a promise problem), denoted by GapMLD_γ (for some constant γ ≥ 1), as follows: on input (A, y, k), distinguish between the YES case where there exists x with ||x||_0 ≤ k such that Ax = y, and the NO case where Ax ≠ y for all x with ||x||_0 ≤ γ·k. It is good to keep in mind that this is equivalent to asking whether there exist k columns of A whose sum is equal to y, or whether no γ·k columns of A sum up to y.
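The "columns summing to the target" view can be checked directly on toy instances (our own sketch); the brute force below runs in roughly m^k time, which is exactly the trivial XP-type behavior that an FPT lower bound would show to be essentially unavoidable:

```python
from itertools import combinations

def exists_k_columns_summing_to(A, y, k):
    """Does some set of at most k columns of A (over F_2) sum to y?
    Exhaustive search over column subsets; illustration only."""
    n, m = len(A), len(A[0])
    cols = [tuple(A[i][j] for i in range(n)) for j in range(m)]
    for size in range(1, k + 1):
        for subset in combinations(range(m), size):
            acc = [0] * n
            for j in subset:
                acc = [(a + b) % 2 for a, b in zip(acc, cols[j])]
            if acc == list(y):
                return True
    return False
```

Equivalently, this decides whether the code with parity-check matrix A contains a weight-≤k word whose syndrome is y.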
Now, we will sketch the reduction from an instance Γ of Gap2CSP to an instance (A, y, k) of GapMLD. The first |V|·|Σ| columns of A are labelled with pairs (v, a) ∈ V × Σ, and the remaining columns of A are labeled by triples (e, a, b) where e = (u, v) ∈ E and (a, b) ∈ C_e.

Before we continue with our description of A, let us note that, in the YES case where there is an assignment σ that satisfies every constraint, our intended solution for our GapMLD instance is to pick the column (v, σ(v)) for every v ∈ V and the column (e, σ(u), σ(v)) for every e = (u, v) ∈ E. Notice that |V| + |E| columns are picked, and indeed we set k = |V| + |E|. Moreover, we set the first |V| + |E| coordinates of y to be one and the rest to be zero.
We also identify the first |V| rows of A with the vertices v ∈ V, the next |E| rows of A with the edges e ∈ E, and the remaining rows of A with triples (e, u, a), where e ∈ E, u is an endpoint of e, and a ∈ Σ. Figure 2 provides an illustration of the matrix A. The rows of A are designed to serve the following purposes: the first |V| rows will ensure that, for each v ∈ V, at least one column of the form (v, ·) is picked; the next |E| rows will ensure that, for each e ∈ E, at least one column of the form (e, ·, ·) is picked; and finally the remaining rows will "check" that the constraints are indeed satisfied.

Specifically, each row v has only |Σ| nonzero entries: those in the columns (v, a) for all a ∈ Σ. Since our target vector y has a one in the coordinate of row v, at least one column of the form (v, ·) must indeed be selected for every v ∈ V. Similarly, each row e has |C_e| nonzero entries: those in the columns (e, a, b) for all (a, b) ∈ C_e. Again, these make sure that at least one column of the form (e, ·, ·) must be picked for every e ∈ E.

Finally, we define the entries of the last rows. To do so, let us recall that, in the YES case, we pick the columns (v, σ(v)) for all v ∈ V and (e, σ(u), σ(v)) for all e = (u, v) ∈ E. The goal of these remaining rows is not only to accept such a solution but also to prevent any solution that picks columns (u, a) and (e, a', b') with a ≠ a', or (v, b) and (e, a', b') with b ≠ b'. In other words, these rows serve as a "consistency checker" for the solution. Specifically, the rows of the form (e, u, a) force the first coordinate of the pair picked for e to equal the label picked for u, whereas the rows of the form (e, v, b) force the second coordinate of the pair picked for e to equal the label picked for v. For convenience, we only define the entries for the (e, u, a) rows; the (e, v, b) rows can be defined similarly. Each row (e, u, a) has only one nonzero entry within the first |V|·|Σ| columns: the one in the column (u, a). Among the remaining columns, the entry in the row (e, u, a) and the column (e', a', b') is nonzero if and only if e' = e and a' = a.
It should be clear from the definition that our intended solution for the YES case is indeed a valid solution because, for each consistency row, the two nonzero entries from the columns (u, σ(u)) and (e, σ(u), σ(v)) cancel each other out. On the other hand, for the NO case, the main observation is that, for each edge e = (u, v), if exactly one column of the form (u, ·), one of the form (v, ·), and one of the form (e, ·, ·) are picked, then the assignment corresponding to the picked columns satisfies the constraint C_e. In particular, it is easy to argue that, if we can pick at most γ·k columns that sum up to y, then all but a small (depending on γ) fraction of the constraints fulfill the previous condition, meaning that we can find an assignment that satisfies a correspondingly large fraction of the constraints. Thus, we have also proved the soundness of the reduction.
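The construction just described can be instantiated on a tiny 2CSP instance; the code below is our own reading of it (the row and column naming conventions are illustrative), with edge columns created only for satisfying pairs so that constraint checking is built into the column set:

```python
def build_mld_instance(V, E, Sigma, constraints):
    """Sketch of the 2CSP -> MLD reduction: returns (rows, cols, A, y),
    where A is a 0/1 matrix over F_2. 'constraints' maps each edge e
    to its list of satisfying label pairs C_e."""
    cols = [('v', v, a) for v in V for a in Sigma] + \
           [('e', e, a, b) for e in E for (a, b) in constraints[e]]
    rows = [('vtx', v) for v in V] + [('edge', e) for e in E] + \
           [('cu', e, a) for e in E for a in Sigma] + \
           [('cv', e, b) for e in E for b in Sigma]

    def entry(row, col):
        kind = row[0]
        if kind == 'vtx':            # forces one column (v, *) to be picked
            return int(col[0] == 'v' and col[1] == row[1])
        if kind == 'edge':           # forces one column (e, *, *) to be picked
            return int(col[0] == 'e' and col[1] == row[1])
        if kind == 'cu':             # consistency with endpoint u of e
            _, e, a = row
            return int(col == ('v', e[0], a) or
                       (col[0] == 'e' and col[1] == e and col[2] == a))
        _, e, b = row                # 'cv': consistency with endpoint v of e
        return int(col == ('v', e[1], b) or
                   (col[0] == 'e' and col[1] == e and col[3] == b))

    A = [[entry(r, c) for c in cols] for r in rows]
    y = [1 if r[0] in ('vtx', 'edge') else 0 for r in rows]
    return rows, cols, A, y
```

On a single-edge instance with an equality constraint, the intended solution (one vertex column per vertex plus one edge column) indeed sums to the target y, with exactly |V| + |E| columns.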
Gap Amplification for GapMLD. We have sketched the proof of the hardness of GapMLD_γ for some constant γ > 1, assuming PIH. The next step is to amplify the gap and arrive at the hardness of GapMLD_γ for every constant γ ≥ 1. To do so, we define an operator ⊕ over pairs of instances of GapMLD with the following property: if two instances I_1 and I_2 are both YES instances, then I_1 ⊕ I_2 is a YES instance of GapMLD_{γ'} for some γ' > γ. On the other hand, if both I_1 and I_2 are NO instances, then I_1 ⊕ I_2 is a NO instance of GapMLD_{γ'}. Hence, we can apply ⊕ repeatedly to the instance from the previous step (with itself) and amplify the gap to any arbitrarily large constant. The definition of ⊕, while simple, is slightly tedious to formalize, and we defer it to Section 4.2.
Reducing GapMLD to GapSNC. Now we introduce the Sparse Nearest Codeword Problem that we briefly talked about. We define the gap version of this problem, denoted by GapSNC_γ (for some constant γ ≥ 1), as follows: on input (A', y', k), distinguish between the YES case where there exists x with ||x||_0 ≤ k such that ||A'x − y'||_0 ≤ k, and the NO case where ||A'x − y'||_0 > γ·k for all x (in the entire space). We highlight that the difference between GapSNC and k-NCP is that, in the YES case of the former, we are additionally promised that ||x||_0 ≤ k. We sketch below the reduction from an instance (A, y, k) of GapMLD to an instance (A', y', k) of GapSNC. Given (A, y, k), let A' consist of γ·k + 1 copies of A stacked on top of the m × m identity matrix, and let y' consist of γ·k + 1 copies of y followed by m zeroes. Notice that for any x (in the entire space), we have

||A'x − y'||_0 = (γ·k + 1)·||Ax − y||_0 + ||x||_0,

and thus both the completeness and soundness of the reduction easily follow.
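This step admits a quick sanity check on a toy instance (a sketch under our reading of the construction: c = γ·k + 1 copies of (A, y) plus an identity block with target zero, and x ranging over F_2^m):

```python
def sparse_ncp_instance(A, y, c):
    """Stack c copies of (A, y) and append an m x m identity block (target 0),
    so that over F_2: ||A'x - y'||_0 = c * ||Ax - y||_0 + ||x||_0."""
    n, m = len(A), len(A[0])
    A_prime = [row[:] for _ in range(c) for row in A] + \
              [[1 if j == i else 0 for j in range(m)] for i in range(m)]
    y_prime = [v for _ in range(c) for v in y] + [0] * m
    return A_prime, y_prime

def hamming_cost(A, y, x):
    """||Ax - y||_0 over F_2 (difference equals sum in characteristic 2)."""
    m = len(x)
    return sum((sum(A[i][j] * x[j] for j in range(m)) + y[i]) % 2
               for i in range(len(A)))
```

Intuitively, the copies make any x with Ax ≠ y pay more than γ·k, while the identity block charges exactly the sparsity ||x||_0, so a cheap solution must be both a sparse vector and an exact preimage of y.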
Sparse Covering Codes. Before reducing GapSNC to k-MDP, we need to introduce in more detail (but still informally) the notion of sparse covering codes that we previously mentioned.

A sparse covering code (SCC) is a linear code C of block length h with minimum distance d, admitting a ball B(s, r) centered at s of radius r and containing a large number of codewords. Moreover, for the reduction to k-MDP to go through, we require not only knowledge of the code, but also of the center s and of a linear transformation T used to index the codewords in B(s, r); i.e., T(C ∩ B(s, r)) needs to contain the ball of radius k around the origin (the set of all sparse solutions allowed in the YES case of GapSNC). Similar to how Dumer et al. only managed to show the probabilistic existence of the center, we too cannot find an explicit s for the SCCs that we construct, but instead provide an efficiently samplable distribution such that, for any x with ||x||_0 ≤ k, the probability (over s sampled from the distribution) that x lies in T(C ∩ B(s, r)) is non-negligible. This is what makes our reduction from GapSNC to k-MDP randomized. We will not elaborate more on this issue here, but focus on the (probabilistic) construction of such codes. For convenience, we will assume throughout this overview that d is much smaller than h.

Recall that the sphere-packing bound (aka Hamming bound) states that a binary code of block length h and distance d can have at most 2^h / Vol(h, ⌊(d−1)/2⌋) codewords, where Vol(h, ρ) denotes the number of points in a Hamming ball of radius ρ in F_2^h; this is simply because the balls of radius ⌊(d−1)/2⌋ centered at the codewords do not intersect. Our main theorem regarding the existence of sparse covering codes is that any code that is "near" the sphere-packing bound is a sparse covering code with radius r roughly d/2. Here "near" means that the number of codewords must be at least the sphere-packing bound divided by a factor bounded by some function that depends only on d. (Equivalently, this means that the message length can fall short of that allowed by the sphere-packing bound by at most an additive term depending only on d.) The BCH code over the binary alphabet is an example of a code satisfying such a condition.
While we will not sketch the proof of the existence theorem here, we note that the general idea is to set r and the distribution over s in such a way that the probability that x lies in T(C ∩ B(s, r)) is at least the probability that a random point in F_2^h is within distance ⌊(d−1)/2⌋ of some codeword. The latter is non-negligible by our assumption that C nears the sphere-packing bound.
Finally, we remark that our proof here is completely different from the DMS proof of the existence of locally dense codes. Specifically, DMS use a group-theoretic argument to show that, when a code exceeds the Gilbert–Varshamov bound, there must be a center s such that B(s, r) contains many codewords. Then, they pick a random linear map T and show that w.h.p. T(C ∩ B(s, r)) is the entire space. Note that this second step does not use any structure of C ∩ B(s, r); their argument is simply that, for any sufficiently large subset S, a random linear map maps S onto the entire (smaller) target space w.h.p. However, such an argument fails for us, due to the fact that, for an SCC, we want T(S) to cover the ball B(0, k) rather than the whole space, in a target space whose dimension is much larger than log|S|, and it is not hard to see that there are very large subsets S for which no linear map T satisfies B(0, k) ⊆ T(S). A simple example is when S is a subspace: then T(S) is a subspace as well, and any subspace containing B(0, k) must be the entire target space; in this case, even when S is quite large, no desired linear map exists.
Reducing GapSNC to k-MDP. Next, we prove the hardness of GapMDP_γ for some constant γ > 1, assuming PIH, using a gadget constructed from sparse covering codes.
Given an instance (A', y', k) of GapSNC_γ for some γ > 2, and a sparse covering code (with generator G, center s, and linear map T), we build an instance of GapMDP by following the DMS reduction (which was previously described; in particular, see (1)). If there exists x with ||x||_0 ≤ k such that ||A'x − y'||_0 ≤ k, then consider z such that Tz = x and Gz ∈ B(s, r). Note that the existence of such a z is guaranteed by the definition of an SCC (with non-negligible probability over the choice of s). Consider w = z ∘ 1, and note that the resulting codeword has weight at most a·r + b·k. In other words, as in the DMS reduction, if (A', y', k) is a YES instance of GapSNC, then the output is a YES instance of GapMDP. On the other hand, similar to the DMS reduction, if ||A'x − y'||_0 > γ·k for all x, then the output is a NO instance of GapMDP. The parameterized intractability of GapMDP is obtained by choosing the parameters of the sparse covering code, in particular its distance d, as a suitable function of k in the above reduction.
Gap Amplification for GapMDP. It is well known that the distance of the tensor product of two linear codes is the product of the distances of the individual codes (see Proposition 5.3 for a formal statement). We can use this proposition to reduce GapMDP_γ to GapMDP_{γ²} for any γ ≥ 1. In particular, we can obtain, for any constant γ, the intractability of GapMDP_γ starting from GapMDP_{γ₀} for some fixed constant γ₀ > 1 by just recursively tensoring the input code a constant number of times.
2.2 Parameterized Intractability of k-SVP from PIH
We begin this subsection by briefly describing Khot's reduction. The starting point of Khot's reduction is the NP-hardness of approximating NVP in every ℓ_p norm to any constant factor [ABSS97]. Let us recall that in NVP in the ℓ_p norm, we are given a matrix B ∈ Z^{n×m}, an integer k, and a target vector y ∈ Z^n, and the goal is to determine whether there is any x ∈ Z^m such that ||Bx − y||_p^p is at most k. (Previously, we used ||Bx − y||_p instead of ||Bx − y||_p^p. However, from the fixed-parameter perspective, these two versions are equivalent, since the parameter is only raised to the p-th power, and p is a constant in our setting.) The result of Arora et al. [ABSS97] states that, for any constant γ ≥ 1, it is NP-hard to distinguish the case when there exists x such that ||Bx − y||_p^p ≤ k from the case when ||Bx − y||_p^p > γ·k for all (integral) x. Khot's reduction proceeds in four steps. First, he constructs a gadget lattice called the "BCH lattice" using BCH codes. Next, he reduces NVP in the ℓ_p norm (where 1 < p < ∞) to an instance of SVP on an intermediate lattice by using the BCH lattice. This intermediate lattice has the following property: for any YES instance of NVP, the intermediate lattice contains multiple copies of the witness of the YES instance; for any NO instance of NVP, there are also many "annoying vectors" (but far fewer than the total number of YES-instance witnesses) which look like witnesses of a YES instance. However, since the annoying vectors are outnumbered, Khot reduces this intermediate lattice to a proper SVP instance by randomly picking a sublattice via a random homogeneous linear constraint on the coordinates of the lattice vectors (this annihilates all the annoying vectors while retaining at least one witness for the YES instance). Thus he obtains some constant-factor hardness for SVP. Finally, the gap is amplified via the "Augmented Tensor Product". It is important to note that Khot's reduction is randomized, and thus his inapproximability result for SVP is based on NP ≠ RP.
Trying to follow Khot's reduction in order to show the parameterized intractability of k-SVP, we face only one obstacle: there is no known parameterized inapproximability of k-NVP for any constant factor greater than 1. Let us denote by GapNVP_{p,η} (for any constant η ≥ 1) the gap version of k-NVP in the ℓ_p norm. Recall that in GapNVP_{p,η} we are given a matrix B, a target vector y, and a parameter k, and we would like to distinguish the case when there exists x such that ||Bx − y||_p^p ≤ k from the case when ||Bx − y||_p^p > η·k for all x. As it turns out, our reduction from Gap2CSP to GapSNC (with arbitrary constant gap), with GapMLD and its gap amplification as intermediate steps, can be translated to show the constant inapproximability of GapNVP_{p,η} (under PIH) in a straightforward manner. We will not elaborate on this part of the proof any further here and defer the detailed proof to Appendix B.
Once we have established the constant parameterized inapproximability of k-NVP, we follow Khot's reduction, and everything goes through as-is to establish the inapproximability, for some constant factor, of the gap version of k-SVP in the ℓ_p norm (where 1 < p < ∞). We denote by GapSVP_{p,γ} (for some constant γ ≥ 1) the gap version of k-SVP (in the ℓ_p norm) where we are given a matrix B and a parameter k, and we would like to distinguish the case when there exists a nonzero x such that ||Bx||_p^p ≤ k from the case when ||Bx||_p^p > γ·k for all nonzero x. Following Khot's reduction, we obtain the inapproximability of GapSVP_{p,γ_p} for some constant γ_p > 1 (under PIH). To obtain the inapproximability of GapSVP_{2,γ} for all constant ratios γ, we use the tensor product of lattices; the argument needed here is slightly more subtle than the analogous step for GapMDP because, unlike distances of codes, the norm of the shortest vector of the tensor product of two lattices is not necessarily equal to the product of the norms of the shortest vectors of the individual lattices. Fortunately, Khot's construction is tailored so that the resulting lattice is "well-behaved" under tensoring [Kho05, HR07], and gap amplification is indeed possible for such instances.
We remark here that, for the (non-parameterized) inapproximability of SVP, the techniques of [Kho05, HR07] allow one to successfully amplify gaps for ℓ_p norms other than ℓ_2 as well. Unfortunately, this does not work in our setting, as it requires the distance k to depend on the input size, which is not possible for us since k is the parameter of the problem.
Summarizing, in Figure 3 we provide the proof outline of our reduction from Gap-ETH to GapSVP_{p,γ_p} with some constant gap γ_p, for every p > 1 (with the additional gap amplification to inapproximability for all constants in the case p = 2).