Fast Witness Counting

We study the witness-counting problem: given a set of vectors V in the d-dimensional vector space over F_2, a target vector t, and an integer k, count all ways to sum up exactly k different vectors from V to reach t. The problem is well-known in coding theory and received considerable attention in complexity theory. Recently, it appeared in the context of hardware monitoring. Our contribution is an algorithm for witness counting that is optimal in the sense of fine-grained complexity. It runs in time O^*(2^d) with only a logarithmic dependence on m = |V|. The algorithm makes use of the Walsh-Hadamard transform to compute convolutions over F_2^d. The transform, however, overcounts the solutions. Inspired by the inclusion-exclusion principle, we introduce correction terms. The correction leads to a recurrence that we show how to solve efficiently. The correction terms are obtained from equivalence relations over F_2^d. We complement our upper bound with two lower bounds on the problem. The first relies on #ETH and prohibits a 2^{o(d)}-time algorithm. The second bound states the non-existence of a polynomial kernel for the decision version of the problem.

1 Introduction

We address the witness-counting problem (Witness Counting) defined as follows. Given a finite set V of d-dimensional vectors over F_2, the field of characteristic two, a target vector t in F_2^d, and a number k, determine |Wit(V, t, k)| with

Wit(V, t, k) = { (v_1, ..., v_k) in V^k : v_i ≠ v_j for i ≠ j and v_1 + ... + v_k = t }.

The set Wit(V, t, k) consists of so-called witnesses, k-tuples of pairwise-distinct vectors in V that sum up to t.
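
For illustration, the following Python sketch counts witnesses directly from the definition by brute force. The representation of vectors as d-bit integers (so that addition over F_2^d becomes bitwise XOR) and the function name are our own illustrative choices; the enumeration is only feasible for toy instances.

```python
from itertools import permutations

def wit_brute_force(vectors, t, k):
    """Count k-tuples of pairwise-distinct vectors from `vectors` that sum to t.

    Vectors are d-bit integers; addition over F_2^d is XOR.  The list `vectors`
    is assumed to be duplicate-free.  Runs in O(m^k) time; reference code only.
    """
    count = 0
    for tup in permutations(vectors, k):   # ordered tuples of distinct vectors
        acc = 0
        for v in tup:
            acc ^= v
        if acc == t:
            count += 1
    return count

# Example: V = {001, 010, 011, 100}, t = 111, k = 3 (d = 3).
V = [0b001, 0b010, 0b011, 0b100]
print(wit_brute_force(V, 0b111, 3))   # 6: the set {001, 010, 100} in all 3! orders
```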

Witness Counting generalizes well-known algorithmic problems from coding theory. These problems arise from a decoding task. Consider a word received from a noisy channel. Due to the noise, the word may contain errors and differ from the codeword that was actually sent. As a receiver, we are interested in the actual codeword, and it is our task to reconstruct it. Usually, the number of errors in the received word is bounded by a measure called the Hamming weight. Hence, we need to decide the existence of a codeword close enough to the received word with respect to the given Hamming weight.

The decoding problems just described are variants of Witness Counting in which the target vector is zero or solutions are allowed to contain at most k vectors. The Hamming weight is always modeled by the parameter k. These problems are algorithmically hard. A series of papers [3, 18, 45, 27, 32] studies their complexity and shows results ranging from NP-completeness to W[1]-hardness. Vardy provides a survey [44].

From an algorithmic perspective, much effort was invested into finding randomized procedures for decoding. One of the first decoding algorithms was introduced by Prange in [41]. The author proposes a technique called Information Set Decoding (ISD). It uses linear algebra (random permutations) to reduce the search space of potential codewords. Since then, ISD was combined with other search techniques, prominently the representation technique for hard knapsacks from [33]. This led to a sequence of improved runtimes, obtained by Prange [41], Stern [42], May et al. [40], and Becker et al. [2]. To be precise, these works refer to the decision version of Witness Counting, checking whether Wit(V, t, k) is non-empty. Due to the randomization, these algorithms are not suitable for witness counting. Moreover, all runtimes depend exponentially on m = |V|. This means they are intractable on instances where the set of vectors, and hence m, tends to be large.

Such instances arise in the context of a new logging procedure in hardware monitoring [39]. There, a signal is traced over an interval of clock cycles. Each clock cycle is assigned a bitvector in F_2^d that uniquely identifies it. The vectors get collected in the set V. Roughly, the logging procedure adds up the vectors of all clock cycles where the traced signal flips from 0 to 1 or vice versa. The result is the target vector t. Moreover, the procedure records the precise number k of changes. In this setting, a witness is a reconstruction of the traced signal. The characteristic of the problem is that the size of V is typically large while k is small. This is due to the fact that the interval of clock cycles is comparably long, while a change in the signal happens only rarely.

The logging mechanism is used in failure analysis. Once a failure has occurred, the value of the logged target vector gets stored. Subsequently, all traces leading to the vector have to be reconstructed. This is achieved by a precise satisfiability-modulo-theories analysis or a simulation in hardware. The expensive step in this analysis is to ensure completeness: finding (roughly) all witnesses without knowing them. Providing the number of witnesses (in an approximate model) allows us to judge the degree of completeness of the reconstruction.

Hence, there is a need for an algorithm that determines the number of witnesses and has a low dependence on m. A simple approach is a dynamic programming that assumes an order on the set V = {v_1, ..., v_m}. We compute a table with entries T(x, i, j) for x in F_2^d, i ≤ k, and j ≤ m. An entry T(x, i, j) is the number of ways to write x as a sum of i different vectors from V, where the vectors are increasingly ordered and the last one chosen is v_j. We have |Wit(V, t, k)| = k! · Σ_{j ≤ m} T(t, k, j). The entries can be computed by the recurrence T(x, i, j) = Σ_{j' < j} T(x + v_j, i − 1, j'). In total, the table has 2^d · k · m many entries. Computing an entry takes at most m additions. Hence, the arithmetic operations needed to fill the table are O(2^d · k · m^2). Suppose the set of vectors is large, say m = 2^d. Then the dynamic programming roughly takes 2^{3d} · k operations. Hence, even the quadratic dependence on m in the number of operations is prohibitive.
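
A sketch of this simple dynamic programming follows, under the same integer-bitmask representation as above. It is one possible reading of the recurrence and only meant to make the O(2^d · k · m^2) operation count concrete.

```python
from math import factorial

def wit_simple_dp(vectors, t, k, d):
    """Naive dynamic programming over an ordered vector set.

    T[i][j][x] counts increasing i-tuples of distinct vectors that end in
    vectors[j] and sum (XOR) to x; filling it needs O(2^d * k * m^2) additions.
    """
    m, n = len(vectors), 1 << d
    T = [[[0] * n for _ in range(m)] for _ in range(k + 1)]
    for j, v in enumerate(vectors):
        T[1][j][v] = 1
    for i in range(2, k + 1):
        for j in range(m):
            for jp in range(j):               # previous vector has a smaller index
                for x in range(n):
                    T[i][j][x] += T[i - 1][jp][x ^ vectors[j]]
    # unordered selections are counted once each; witnesses are ordered tuples
    return factorial(k) * sum(T[k][j][t] for j in range(m))

V = [0b001, 0b010, 0b011, 0b100]
print(wit_simple_dp(V, 0b111, 3, 3))   # 6, matching the brute-force count above
```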

Our main result is an algorithm solving Witness Counting in O*(2^d) arithmetic operations. Surprisingly, the size m of the underlying set of vectors does not contribute to the number of arithmetic operations at all. It only appears as a logarithmic factor in the runtime. A similar phenomenon appeared before in the context of counting covers and partitions via inclusion-exclusion [13]. Our algorithm can also be applied to solve the decoding problems from coding theory mentioned above. In particular, when solutions are allowed to contain fewer than k vectors, it is sufficient to slightly adapt the algorithm (see below).

The idea behind our algorithm is to divide the task of counting witnesses in a way that resembles inclusion-exclusion. First, we count witness candidates, k-tuples of vectors that add up to the target vector but may repeat entries:

Cand(V, t, k) = { (v_1, ..., v_k) in V^k : v_1 + ... + v_k = t }.

Then, we determine the number of failures, candidates that indeed repeat an entry:

Fail(V, t, k) = Cand(V, t, k) \ Wit(V, t, k).

We abbreviate wit_j = |Wit(V, t, j)|, cand_j = |Cand(V, t, j)|, and fail_j = |Fail(V, t, j)| for j ≤ k. Counting witnesses now amounts to counting candidates and counting failures:

wit_k = cand_k − fail_k.   (1)
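
For a toy instance, the decomposition can be checked by brute force; this is an illustrative sketch with vectors as d-bit integers and addition over F_2^d realized as bitwise XOR.

```python
from functools import reduce
from itertools import product
from operator import xor

def cand_fail_wit(vectors, t, k):
    """Return (cand_k, fail_k, wit_k) by exhaustive enumeration; toy sizes only."""
    cand = fail = 0
    for tup in product(vectors, repeat=k):    # candidates may repeat entries
        if reduce(xor, tup, 0) == t:
            cand += 1
            if len(set(tup)) < k:             # a repeated entry makes it a failure
                fail += 1
    return cand, fail, cand - fail            # Equation (1): wit_k = cand_k - fail_k

V = [0b001, 0b010, 0b011, 0b100, 0b111]
print(cand_fail_wit(V, 0b111, 3))             # (19, 13, 6)
```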

We address the problem of counting candidates by computing a convolution operation over functions F_2^d → Z. Doing this efficiently requires a trick known as the Walsh-Hadamard transform [1, 38]. It turns the convolution operation into a component-wise vector product.

We address the problem of counting failures by means of two factorizations. We first factorize the set of failures into usage patterns. A usage pattern is a partition of the set of positions {1, ..., k} that indicates where vectors in the tuple repeat. The precise vectors are abstracted away. In a second step, we exploit the fact that we compute over F_2. We further factorize the usage patterns according to the parity of their equivalence classes. Two patterns are considered equivalent if they have the same number of classes of odd and the same number of classes of even cardinality. With these factorizations, the number of failures admits a recurrence to the number of witnesses for a smaller parameter k' < k. Altogether, we arrive at a dynamic programming over the parameter k to which the candidate count contributes only an additive factor.

We complement our algorithm by two lower bounds. The first one shows that Witness Counting cannot be solved in time 2^{o(d)} unless #ETH fails. Here, #ETH is a counting version of the exponential-time hypothesis, a standard lower-bound assumption in fine-grained complexity [34, 24]. The result shows that our algorithm is optimal. The second lower bound shows that the decision version of Witness Counting does not admit a polynomial kernel. Both lower bounds are obtained by reductions from perfect matching in hypergraphs.

Related Work.

We already discussed the related work in coding theory. A key tool in our algorithm is the convolution over F_2^d. Convolution-based methods [29, 21] have seen great success in parameterized complexity. Björklund et al. were the first to see the potential of the subset convolution in this context [8]. They gave an O*(2^n)-time algorithm and applied it to partitioning problems and to a variant of Steiner Tree. Their algorithm is based on fast Möbius and zeta transforms. The computation of the latter goes back to Yates [47]. In [9], subset convolution was applied to compute the Tutte polynomial of a graph. Different variants of Dominating Set were solved in [43] and [35] via subset convolution. Moreover, the technique was used as a subroutine in Cut & Count [22]. The same paper presents an algorithm for the XOR product, a convolution operation generalizing the one over F_2^d. By applying that algorithm, the techniques presented in this paper can be lifted to larger groups. However, one cannot generalize to arbitrary groups since the algorithm for the XOR product would suffer from rounding errors [22]. An application of subset convolution in automata theory is given in [19].

Also other transform-based methods [38, 1] can be found in algorithms. In [23], the fast Fourier transform was instantiated to derive an algorithm for the Bandwidth problem. Based on Yates' algorithm and a space-efficient zeta transform, the domatic and the chromatic number of graphs were computed [10, 12]. In [46], the fast Walsh-Hadamard transform was applied to find paths of length k. Polynomial-space algorithms for a variety of problems were constructed in [37] using transforms. The authors of [14] consider efficient multiplication in Möbius algebras in general.

The methods in this paper are furthermore inspired by the inclusion-exclusion principle. It was first used by Björklund and Husfeldt in [7], and independently by Koivisto in [36], for counting covers and partitions. This technique, too, was used in various algorithms and is particularly helpful when counting solutions [13, 6, 11, 5, 4].

2 Parameterized Complexity

Our goal is to identify the influence of the parameters d and k on the problem Witness Counting. Parameterized complexity provides a framework to do so. We introduce the basics following [28, 26]. A parameterized problem is a language L ⊆ Σ* × N, where Σ is a finite alphabet. The problem is called fixed-parameter tractable if there is a deterministic algorithm that decides whether (x, k) belongs to L in time f(k) · |x|^{O(1)}. Here, f is any computable function that only depends on the parameter k. It is common to denote the runtime of such an algorithm by O*(f(k)) to emphasize the dominant factor. The class of all fixed-parameter-tractable problems is denoted by FPT. The precise value of f is crucial. Finding upper and lower bounds for f is usually referred to as fine-grained complexity.

There is also a hardness theory. Not all NP-complete problems are likely to be fixed-parameter tractable, for instance the problem of finding a clique of size k in a graph. Despite extensive effort, no algorithm was found that solves this problem in time f(k) · n^{O(1)}. In fact, the problem is known to be hard for the complexity class W[1]. All problems with this property are unlikely to be in FPT.

3 Witness Counting

Our main result is an optimal algorithm for Witness Counting with only a logarithmic dependence on m = |V|.

Theorem 1.

Let m = |V|. Witness Counting can be solved with O(2^d · d · k + k^4) arithmetic operations and in time O((2^d · d · k + k^4) · M(k · log m)), if M(b) is the time to multiply two b-bit numbers.

The best known runtime for multiplication is M(b) = O(b · log b · 2^{O(log* b)}) [31]. In the theorem, we assume that the set V is given in terms of its characteristic function χ_V : F_2^d → {0, 1}, where χ_V(x) = 1 if and only if x is in V. If V should have a different representation, we can compute χ_V with m look-ups without changing the statement.

Actually, given V, t, and k, our algorithm determines the solution to all instances with the same set of vectors and the same target but smaller parameter k' ≤ k. As explained in the introduction, it relies on Equation (1) to decompose the witness-counting problem into counting candidates and failures.

3.1 Candidate Counting

We express the number of candidates in terms of a convolution of functions. The definition is as follows [1, 22]. The convolution of two functions f, g : F_2^d → Z is the function f * g : F_2^d → Z which maps x to the sum (f * g)(x) = Σ_{y in F_2^d} f(y) · g(x + y). Note that the sum of x and y is computed over F_2^d. The convolution operator is associative, and for functions f_1, ..., f_k : F_2^d → Z we have

(f_1 * ... * f_k)(x) = Σ_{x_1 + ... + x_k = x} f_1(x_1) ... f_k(x_k).   (2)

For counting the candidates, we take all functions f_1, ..., f_k to be the characteristic function χ_V of the given set of vectors V. With Equation (2), correctness is immediate.

Fact 2.

cand_j = (χ_V * ... * χ_V)(t) for all j ≤ k, where the convolution has j factors χ_V.

Note that the convolution cannot check whether the summed-up vectors are different. This is why we need the correction term fail_k.
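
The following sketch implements the convolution directly from the definition and compares the k-fold convolution of χ_V at t with the brute-force candidate count. Functions F_2^d → Z are stored as arrays of length 2^d indexed by vectors; this is an illustrative rendering, not an efficient one.

```python
from functools import reduce
from itertools import product
from operator import xor

def convolve(f, g):
    """(f * g)(x) = sum over y of f(y) * g(x + y); addition over F_2^d is XOR."""
    n = len(f)                                 # n = 2^d
    h = [0] * n
    for x in range(n):
        for y in range(n):
            h[x] += f[y] * g[x ^ y]
    return h

d, V, t, k = 3, [0b001, 0b010, 0b011, 0b100, 0b111], 0b111, 3
Vset = set(V)
chi = [1 if x in Vset else 0 for x in range(1 << d)]   # characteristic function of V

conv = chi
for _ in range(k - 1):
    conv = convolve(conv, chi)                 # k-fold convolution of chi with itself

brute = sum(1 for tup in product(V, repeat=k) if reduce(xor, tup, 0) == t)
print(conv[t], brute)                          # both print 19 = cand_3 for this instance
```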

It remains to compute the convolution. To do so efficiently, we cannot just iterate over all summands in Equation (2). Instead, we apply the Walsh-Hadamard transform [1]. We defer the definition for the moment and stick with the properties. The transform turns a convolution into a pointwise product of integer vectors of size 2^d. Moreover, it is self-inverse up to a factor.

Theorem 3 ([1]).

WH(f * g) = WH(f) ⊙ WH(g) and WH(WH(f)) = 2^d · f, where WH(f) denotes the Walsh-Hadamard transform of f and ⊙ the component-wise product of vectors.

Given the theorem, we can compute the convolution in Equation (2) via the Walsh-Hadamard transform of χ_V, followed by component-wise multiplications of integer vectors of size 2^d, followed by another transform and a division by 2^d. An algorithm called the fast Walsh-Hadamard transform computes the transform in O(2^d · d) arithmetic operations [1, 38]. Together with the multiplications, we arrive at the following complexity estimate. Note that it refers to cand_j for all j ≤ k.

Proposition 4.

The values cand_j for all j ≤ k can be computed in O(2^d · d · k) arithmetic operations.

The reason we determine the intermediary values cand_j for j < k is that the overall algorithm for computing wit_k is a dynamic programming which may access them. To compute only the value cand_k, two transforms and an iterated multiplication with O(2^d · log k) operations would suffice. Hence, the parameter k would contribute only logarithmically to the complexity. Instead, we need to apply k multiplications and k transforms.

The Walsh-Hadamard transform [1] is essential in the above algorithm. It is based on the Hadamard matrix H_d, defined recursively by H_0 = (1) and, for d > 0, by

H_d = [ H_{d−1}   H_{d−1} ]
      [ H_{d−1}  −H_{d−1} ].

The Walsh-Hadamard transform of f : F_2^d → Z is defined by WH(f) = H_d · f. In this product, f is seen as a vector in Z^{2^d}. The fast Walsh-Hadamard transform is a dynamic programming algorithm to compute WH(f). It is based on the recursive structure of H_d and takes O(2^d · d) arithmetic operations. That the transform is self-inverse up to a factor is based on the fact that the Hadamard matrix has a simple inverse, H_d^{−1} = 2^{−d} · H_d.

3.2 Failure Counting

We divide the task of counting failures along the usage patterns implemented by the failures. Usage patterns are defined via partitions. An unordered partition of a set S is a collection P = {S_1, ..., S_l} of non-empty and pairwise-disjoint subsets that together cover S in that S_1 ∪ ... ∪ S_l = S. We use Part(S) for the set of all unordered partitions of S. We call a partition trivial if l = |S|, which means the classes consist of single elements. In our development, we also use ordered partitions of S, tuples (S_1, ..., S_l) satisfying the same non-emptiness, disjointness, and covering constraints.

Recall that a failure is a tuple w = (v_1, ..., v_k) in Cand(V, t, k) where v_i = v_j for some i ≠ j. A failure induces an equivalence ≡_w on the set of positions [k] = {1, ..., k} that tracks the usage of vectors: i ≡_w j if v_i = v_j. The usage pattern of the failure is the unordered partition of [k] induced by ≡_w. We use the following function to extract the usage pattern of a failure:

pat : Fail(V, t, k) → Part([k]),   pat(w) = [k] / ≡_w.

Note that by definition no failure maps to a trivial partition. There is at least one non-trivial class in pat(w). This explains the index in the following disjoint union:

Fact 5.

Fail(V, t, k) = ⋃_{P in Part([k]), P non-trivial} pat^{-1}(P), and the union is disjoint.

The number of failures is thus the sum of |pat^{-1}(P)| over all non-trivial partitions P. There are, however, too many unordered partitions to iterate over them efficiently. We factorize the set of unordered partitions, exploiting that we compute over F_2. The field of characteristic 2 has the property that v + v = 0. As a consequence, the vectors from w whose indices belong to classes of even size do not contribute to the target vector. Indeed, consider a class with an even number of indices, {i_1, ..., i_{2l}}. By definition of the equivalence, v_{i_1} = ... = v_{i_{2l}}, and hence the vectors cancel out. Similarly, the vectors whose indices belong to classes of odd size contribute only a single vector to the target.

The discussion motivates the following definitions. Given a partition P, we use even(P) to denote the number of classes in P that have an even cardinality. Similarly, we let odd(P) denote the number of classes of odd cardinality in P. The parity-counting function maps a partition to this pair of values:

pc(P) = (even(P), odd(P)).

Note that pc factorizes the set of partitions. The following lemma is key to our development. It shows that, as far as counting is concerned, pat^{-1} is insensitive to this factorization: the number of failures with pattern P depends only on pc(P).

Lemma 6.

Let P in Part([k]) with pc(P) = (e, o). Recall that wit_o = |Wit(V, t, o)| and m = |V|. We have

|pat^{-1}(P)| = wit_o · (m − o) · (m − o − 1) ... (m − o − e + 1).

Proof.

Consider P in Part([k]) with pc(P) = (e, o). In a first step, we characterize pat^{-1}(P) in terms of a set of functions that is easier to count. The observation is this. Every failure in pat^{-1}(P) is uniquely determined by choosing a vector for each class in P. Formally, an instantiation of the partition P is an injective function

inst : P → V   with   Σ_{S in P, |S| odd} inst(S) = t.

Injectivity follows from the definition of pat, which requires the vectors associated to different classes to be different. That the sum of the vectors has to be the target vector t is by the definition of failures. As we compute over F_2, only the vectors associated to classes of odd cardinality contribute to the target vector. We use Inst(P) to refer to the set of all instantiation functions.

In turn, every failure w in pat^{-1}(P) induces an instantiation inst_w. To a class S in P with i in S, we associate the vector inst_w(S) = v_i. The definition of pat ensures independence of the representative. Combined with the previous paragraph, we have a bijection between pat^{-1}(P) and Inst(P).

By sorting the classes of the partition, instantiations can be seen as pairs (w_odd, w_even). The first component w_odd is an o-tuple of different vectors from V that sum up to t. Phrased differently, w_odd is a solution to Wit(V, t, o), the witness-counting problem where the number of vectors to select is reduced to o. The second component w_even is an e-tuple of distinct vectors that do not occur in w_odd. For the first component, there are wit_o many choices. For each first component, there are (m − o) · (m − o − 1) ... (m − o − e + 1) many choices left for the second component. ∎

Combining Fact 5 and Lemma 6, we arrive at a formula for counting failed candidates. The inequality e + o < k is again due to the fact that failures only induce non-trivial partitions.

Proposition 7.

fail_k = Σ_{(e, o) : e + o < k} p(k, e, o) · wit_o · (m − o) · (m − o − 1) ... (m − o − e + 1).

The proposition yields a recurrence to determine the number of witnesses wit_k: the failure count refers to witness counts wit_o for strictly smaller parameters o < k. We return to this in a moment when we discuss the overall algorithm. A factor that is local to the failure count is p(k, e, o), the number of partitions of [k] with parity count (e, o). It can be determined with a dynamic programming that runs in polynomial time.

Lemma 8.

Computing all p(j, e, o) with j, e, o ≤ k needs O(k^4) arithmetic operations.

Proof.

Given a j-element set, we show how to compute the number of ordered partitions with e-many classes of even and o-many classes of odd cardinality. Let P(j, e, o) represent this number. The corresponding number of unordered partitions can simply be obtained by a division:

p(j, e, o) = P(j, e, o) / (e + o)!.

The number P(j, e, o) satisfies the following recurrence, where [cond] is a function that evaluates to 1 if the given condition holds and to 0 otherwise, and C(j, s) denotes the binomial coefficient:

P(j, e, o) = Σ_{s = 1}^{j} C(j, s) · ( [s even] · P(j − s, e − 1, o) + [s odd] · P(j − s, e, o − 1) ).

The equation can be understood as a recursion that explicitly builds up a partition. It adds to the current partition a new class with s elements. Depending on whether s is even or odd, the construction continues with the parameters adjusted accordingly. The base cases are immediate.

The algorithm first tabulates the binomial coefficients C(j, s) for j, s ≤ k using O(k^3) operations: there are quadratically many pairs, each of which requires at most k multiplications. The computation of P is then a dynamic programming that fills a table of size k^3. For each entry, we sum over at most k numbers, where we can look up earlier computed entries and the binomials. Hence, we need O(k) operations for a single entry. This results in O(k^4) arithmetic operations to fill the whole table. ∎
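
A sketch of this dynamic programming, following the recurrence as reconstructed above (math.comb replaces the explicit tabulation of binomials):

```python
from math import comb, factorial

def parity_partition_counts(k):
    """p[j][e][o]: unordered partitions of a j-element set with e even-sized and
    o odd-sized classes, for all j, e, o <= k; P counts the ordered partitions."""
    P = [[[0] * (k + 1) for _ in range(k + 1)] for _ in range(k + 1)]
    P[0][0][0] = 1
    for j in range(1, k + 1):
        for e in range(k + 1):
            for o in range(k + 1):
                total = 0
                for s in range(1, j + 1):          # size of the first class
                    if s % 2 == 0 and e > 0:
                        total += comb(j, s) * P[j - s][e - 1][o]
                    elif s % 2 == 1 and o > 0:
                        total += comb(j, s) * P[j - s][e][o - 1]
                P[j][e][o] = total
    # ordered -> unordered: every unordered partition has (e + o)! orderings
    return [[[P[j][e][o] // factorial(e + o) for o in range(k + 1)]
             for e in range(k + 1)] for j in range(k + 1)]

p = parity_partition_counts(4)
print(p[4][1][0], p[4][2][0], p[4][0][2], p[4][1][2], p[4][0][4])
# 1 partition with one even class, 3 with two even classes, 4 with two odd classes,
# 6 with one even and two odd classes, 1 trivial partition into four singletons
```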

3.3 Overall Algorithm

With Equation (1) and Proposition 7, we obtain

wit_j = cand_j − Σ_{(e, o) : e + o < j} p(j, e, o) · wit_o · (m − o) · (m − o − 1) ... (m − o − e + 1)   for all j ≤ k.

The overall algorithm for computing wit_k is thus a dynamic programming over the parameter j ≤ k. It accesses powerful look-up tables that we initialize in a first phase. We compute cand_j for all j ≤ k using the Walsh-Hadamard transform. As stated in Proposition 4, this means O(2^d · d · k) arithmetic operations. Moreover, we tabulate p(j, e, o) for all j, e, o ≤ k. According to Lemma 8, this costs O(k^4) arithmetic operations. We also precompute the falling factorials (m − o) ... (m − o − e + 1) = (m − o)! / (m − o − e)! for all e, o ≤ k. The factorials cancel out, so for given e and o we have at most k multiplications. Altogether, also that table can be filled with O(k^3) operations.

The dynamic programming takes k iterations. In each iteration, we have a sum over at most k^2 numbers. Since we have tabulated, for each pair (e, o), all the values needed, we can evaluate the sum in O(k^2) operations. Hence, the overall number of operations in the dynamic programming is O(k^3). Together with the initialization, this yields the announced O(2^d · d · k + k^4) operations.

The numbers over which we compute, both in the initialization and in the dynamic programming, are bounded in absolute value by cand_k, the number of candidates. This number is at most m^k. Hence, the arithmetic operations are executed on (k · log m)-bit numbers. We estimate the cost of an operation as that of a multiplication. Actually, the Walsh-Hadamard transform only uses additions and subtractions and is therefore slightly cheaper.
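
Assembling the pieces, the following self-contained sketch runs the overall dynamic programming on a toy instance. It follows the recurrence wit_j = cand_j − fail_j as developed above; it is an illustrative reference implementation of that recurrence, not code tuned for the stated operation bounds.

```python
from math import comb, factorial

def fwht(a):
    """In-place fast Walsh-Hadamard transform (len(a) must be a power of two)."""
    h, n = 1, len(a)
    while h < n:
        for start in range(0, n, 2 * h):
            for i in range(start, start + h):
                x, y = a[i], a[i + h]
                a[i], a[i + h] = x + y, x - y
        h *= 2
    return a

def count_witnesses(V, t, d, k):
    """Return [wit_0, ..., wit_k] following wit_j = cand_j - fail_j."""
    n, m = 1 << d, len(V)
    # Phase 1a: cand_j for all j <= k via the Walsh-Hadamard transform.
    Vset = set(V)
    chi = [1 if x in Vset else 0 for x in range(n)]
    hat = fwht(chi[:])
    cand, power = [1 if t == 0 else 0], [1] * n          # cand_0 = [t = 0]
    for _ in range(k):
        power = [p * h for p, h in zip(power, hat)]
        cand.append(fwht(power[:])[t] // n)
    # Phase 1b: P[j][e][o] = ordered partitions of [j] with e even and o odd classes.
    P = [[[0] * (k + 1) for _ in range(k + 1)] for _ in range(k + 1)]
    P[0][0][0] = 1
    for j in range(1, k + 1):
        for e in range(k + 1):
            for o in range(k + 1):
                P[j][e][o] = sum(
                    comb(j, s) * (P[j - s][e - 1][o] if s % 2 == 0 and e > 0 else
                                  P[j - s][e][o - 1] if s % 2 == 1 and o > 0 else 0)
                    for s in range(1, j + 1))
    p = lambda j, e, o: P[j][e][o] // factorial(e + o)   # unordered partitions
    # Phase 1c: falling factorials ff[o][e] = (m - o)(m - o - 1)...(m - o - e + 1).
    ff = [[1] * (k + 1) for _ in range(k + 1)]
    for o in range(k + 1):
        for e in range(1, k + 1):
            ff[o][e] = ff[o][e - 1] * max(m - o - e + 1, 0)
    # Phase 2: dynamic programming over j, using Proposition 7 for fail_j.
    wit = [cand[0]]
    for j in range(1, k + 1):
        fail = sum(p(j, e, o) * wit[o] * ff[o][e]
                   for e in range(k + 1) for o in range(k + 1) if e + o < j)
        wit.append(cand[j] - fail)
    return wit

V, t, d, k = [0b001, 0b010, 0b011, 0b100, 0b111], 0b111, 3, 3
print(count_witnesses(V, t, d, k))   # [0, 1, 2, 6]: wit_0, ..., wit_3
```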

4 Lower Bounds

We present two lower bounds for Witness Counting. The first lower bound shows that the existence of a 2^{o(d)}-time algorithm would contradict #ETH, a version of the exponential-time hypothesis for counting problems. The second lower bound shows that the decision version of Witness Counting does not admit a polynomial kernel unless NP ⊆ coNP/poly. Both bounds are based on a reduction from r-Perfect Matching, the problem of determining whether an r-uniform hypergraph admits a perfect matching. We introduce the needed notions.

A hypergraph is a pair H = (U, E), where U is a finite set of vertices and E is a set of edges. Edges in a hypergraph connect a number of vertices. Formally, the set of edges is a collection of subsets of vertices, E ⊆ P(U). A hypergraph is r-uniform if every edge connects exactly r vertices, |e| = r for all e in E. Note that a 2-uniform hypergraph is just an undirected graph.

A perfect matching of an r-uniform hypergraph H = (U, E) is an independent set of edges that covers U. To be precise, it is a subset M ⊆ E such that every vertex is contained in an edge in M and no two edges in M share a vertex. Note that a perfect matching consists of exactly |U|/r many edges and thus only exists if |U| is divisible by r. We use PM(H) to denote the set of perfect matchings of H. For fixed r, the problem r-Perfect Matching is the following: given an r-uniform hypergraph H, decide whether there exists a perfect matching of H.

We show a polynomial-time reduction from this problem to the decision variant of Witness Counting. It works for any r ≥ 2. The reduction implies a complexity and a kernel lower bound. For the complexity lower bound, we establish a relationship between the number of perfect matchings in an undirected graph and the number of witnesses. Since perfect matchings cannot be counted in time 2^{o(|U|)} assuming #ETH, we obtain a lower bound for Witness Counting. For the kernel lower bound, we use the fact that r-Perfect Matching does not admit a polynomial compression of a certain size. For both results, it is important that the reduction yields a parameter d, the dimension in Witness Counting, linear in the size of the given vertex set U.

Lemma 9.

For any r ≥ 2, there is a polynomial-time reduction from r-Perfect Matching to the decision version of Witness Counting. Moreover, the parameter d is linear in |U|.

For the construction, let H = (U, E) be an r-uniform hypergraph. Assume the vertices in U = {u_1, ..., u_n} are ordered. We construct an instance of Witness Counting over F_2^d, where d = n. To this end, let e be an edge in E. We define vec(e) in F_2^d to be the bitvector representation of e, with vec(e)[i] = 1 if and only if u_i is in e. All these vectors are collected in the set V = { vec(e) : e in E }. Since we need to cover all vertices in U, we set the target to the all-ones vector, t = 1^d. The trick is to define k, the number of vectors to select, such that the corresponding edges are forced to be pairwise disjoint. We set k = n/r. If this is not a natural number, we clearly have a no-instance. With r one-entries per vector, the only way to reach 1^d with k = n/r vectors is to avoid overlapping entries. The following lemma states correctness of the reduction. It actually shows that the reduction is parsimonious up to a factorial. The factorial appears since witnesses are ordered while perfect matchings are not.
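
A small sketch of this construction, with vertices numbered 0, ..., n−1 and edges given as tuples of vertex indices (names and representation are illustrative):

```python
def matching_to_witness_counting(n, edges, r):
    """Build a Witness Counting instance (V, t, k, d) from an r-uniform hypergraph.

    Each edge becomes the bitvector of its vertices, the target is the all-ones
    vector, and k = n/r forces the chosen edges to be pairwise disjoint.
    Returns None if n is not divisible by r (a trivial no-instance).
    """
    assert all(len(e) == r for e in edges), "hypergraph must be r-uniform"
    if n % r != 0:
        return None
    V = [sum(1 << u for u in e) for e in edges]   # vec(e): bit i set iff vertex i in e
    t = (1 << n) - 1                              # cover every vertex exactly once
    return V, t, n // r, n

# The 4-cycle (2-uniform) has the two perfect matchings {01, 23} and {12, 30}.
V, t, k, d = matching_to_witness_counting(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 2)
# By the correctness lemma below (Lemma 10), the instance has 2 * k! = 4 witnesses.
```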

Lemma 10.

|Wit(V, t, k)| = k! · |PM(H)|.

Proof.

We will reuse the map vec from above. Note that it is a bijection between E and V. Let M = {e_1, ..., e_k} be a perfect matching of H. Since e_i ∩ e_j = ∅ for i ≠ j, every vertex is covered by exactly one edge of M. Hence, vec(e_1) + ... + vec(e_k) = 1^d. This vector is the target t, which means (vec(e_1), ..., vec(e_k)) is a witness. Of course, any reordering of this tuple is a witness as well. Hence, any perfect matching yields the existence of k! witnesses.

Now let (w_1, ..., w_k) be a witness. Since w_i is in V, each preimage vec^{-1}(w_i) is an edge in E. We define M = { vec^{-1}(w_1), ..., vec^{-1}(w_k) }. This is a perfect matching. That all vertices are covered is by the choice of t. Disjointness of the edges follows from the choice of k. ∎

4.1 Lower Bound on the Runtime

We prove the announced lower bound on the runtime for Witness Counting. It shows that the algorithm presented in this paper is optimal. The bound is based on the counting exponential-time hypothesis #ETH, introduced in [24]. This variant of the exponential-time hypothesis assumes that the number of satisfying assignments of a 3-CNF formula over n variables cannot be counted in time 2^{o(n)}.

Theorem 11.

Witness Counting does not admit a 2^{o(d)}-time algorithm, unless #ETH fails.

Our theorem is based on a result shown by Curticapean in [20]. There, #ETH was used to prove the existence of a 2^{o(|U|)}-time algorithm for counting perfect matchings in undirected graphs highly unlikely. Since undirected graphs are 2-uniform hypergraphs, the problem corresponds to the counting variant of 2-Perfect Matching, which we will denote by #2-Perfect Matching.

Theorem 12 ([20]).

#2-Perfect Matching cannot be solved in time 2^{o(|U|)}, unless #ETH fails.

Suppose we had an algorithm for Witness Counting with runtime 2^{o(d)}. Given an undirected graph H = (U, E), we could apply the reduction from Lemma 9 to get, in polynomial time, an instance of Witness Counting with d = |U|. Then we could apply our algorithm for counting witnesses. With Lemma 10, this yields a 2^{o(|U|)}-time algorithm for #2-Perfect Matching.

4.2 Lower Bound on the Kernel

We present the lower bound on the kernel size for the decision variant of Witness Counting. From an algorithmic point of view, this is interesting since kernels characterize the hard instances of a problem. In fact, it can be shown that a kernel for a problem exists if and only if the problem is in FPT [21]. For many problems, the search for small kernels is ongoing research, but not all problems admit small kernels. This led to an approach that tries to disprove the existence of kernels of a certain size, see [15, 17, 16, 30]. For the next theorem, we apply techniques developed in that line of work.

Theorem 13.

Deciding Witness Counting does not admit a polynomial kernel unless NP ⊆ coNP/poly.

A key tool in the search for kernel lower bounds are polynomial compressions. Let L be a parameterized problem and R be any (unparameterized) problem. A polynomial compression of L into R is an algorithm that takes an instance (x, k) of L, runs in polynomial time in |x| and k, and returns an instance y of R such that: (1) (x, k) is in L if and only if y is in R, and (2) |y| ≤ p(k) for a polynomial p. The polynomial p is referred to as the size of the compression.

A kernelization is similar to a compression with two differences: it maps L to itself and is more relaxed on the size. The former means that instances of L always get mapped to instances of L. The latter means that the size is not restricted to be a polynomial. We allow for any computable function f that depends only on the parameter k. In the special case where f is a polynomial, we call the kernelization a polynomial kernel. If a polynomial compression or a kernelization exists, we say that L admits a polynomial compression or kernelization, respectively.

The proof of Theorem 13 uses a lower bound on polynomial compressions for r-Perfect Matching. We combine the result with the reduction from Lemma 9 and derive the wanted kernel lower bound for the decision variant of Witness Counting.

Theorem 14 ([25, 21]).

Let r ≥ 3. For any ε > 0, r-Perfect Matching parameterized by the number of vertices |U| does not admit a polynomial compression of size O(|U|^{r − ε}), unless NP ⊆ coNP/poly.

For the proof of Theorem 13, assume there is a polynomial kernel for deciding Witness Counting. This means there is an algorithm K that takes an instance (V, t, k) over F_2^d and maps it to an instance (V', t', k'). The algorithm runs in polynomial time in the size of the instance, and the size of (V', t', k') is bounded by a polynomial in the parameter d: |(V', t', k')| ≤ c · d^c for a constant c.

To derive a contradiction, consider the problem r-Perfect Matching for some fixed r > c with r ≥ 3. Let H = (U, E) be an input to the problem. By Lemma 9, we get, in polynomial time, an instance of the decision variant of Witness Counting with parameter d = |U|. If we apply algorithm K to this instance, we get an instance (V', t', k'). As mentioned above, the size of that instance is bounded by c · d^c = c · |U|^c.

Putting things together, we get a polynomial compression for r-Perfect Matching of size O(|U|^c) with c < r. But this contradicts Theorem 14 and concludes the proof.

5 Conclusion

We studied the witness-counting problem: given a set of vectors V ⊆ F_2^d, a target vector t, and an integer k, the task is to count all ways in which k different vectors from V can be summed up to reach the target vector t. The problem generalizes fundamental questions from coding theory and has applications in hardware monitoring.

Our contribution is an efficient algorithm that runs in time O*(2^d). Crucially, it only has a logarithmic dependence on the number of vectors in V. On a high level, the algorithm can be understood as a convolution the precision of which is improved by means of inclusion-exclusion-like correction terms, an approach that may have applications beyond this paper. The algorithm as it is will generalize to vectors over larger groups supported by the XOR product, but beyond that it will face rounding errors.

We also showed optimality: there is no algorithm solving the problem in time 2^{o(d)} unless #ETH fails. Furthermore, the problem of checking the existence of a witness does not admit a polynomial kernel, unless NP ⊆ coNP/poly. Both lower bounds rely on a reduction from r-Perfect Matching, the problem of finding a perfect matching in an r-uniform hypergraph.

References

  • [1] N. Ahmed and K. R. Rao. Orthogonal Transforms for Digital Signal Processing. Springer, 1975.
  • [2] A. Becker, A. Joux, A. May, and A. Meurer. Decoding random binary linear codes in 2^{n/20}: How 1 + 1 = 0 improves information set decoding. In EUROCRYPT, volume 7237 of LNCS, pages 520–536. Springer, 2012.
  • [3] E. R. Berlekamp, R. J. McEliece, and H. C. A. van Tilborg. On the inherent intractability of certain coding problems. IEEE Trans. Information Theory, 24(3):384–386, 1978.
  • [4] A. Björklund. Determinant sums for undirected hamiltonicity. In FOCS, pages 173–182. IEEE, 2010.
  • [5] A. Björklund. Exact covers via determinants. In STACS, volume 5 of LIPIcs, pages 95–106. Schloss Dagstuhl, 2010.
  • [6] A. Björklund and T. Husfeldt. Exact algorithms for exact satisfiability and number of perfect matchings. In ICALP, volume 4051 of LNCS, pages 548–559. Springer, 2006.
  • [7] A. Björklund and T. Husfeldt. Inclusion–exclusion algorithms for counting set partitions. In FOCS, pages 575–582. IEEE, 2006.
  • [8] A. Björklund, T. Husfeldt, P. Kaski, and M. Koivisto. Fourier meets Möbius: Fast subset convolution. In STOC, pages 67–74. ACM, 2007.
  • [9] A. Björklund, T. Husfeldt, P. Kaski, and M. Koivisto. Computing the Tutte polynomial in vertex-exponential time. In FOCS, pages 677–686. IEEE, 2008.
  • [10] A. Björklund, T. Husfeldt, P. Kaski, and M. Koivisto. Trimmed moebius inversion and graphs of bounded degree. In STACS, volume 1 of LIPIcs, pages 85–96. Schloss Dagstuhl, 2008.
  • [11] A. Björklund, T. Husfeldt, P. Kaski, and M. Koivisto. Counting paths and packings in halves. In ESA, volume 5757 of LNCS, pages 578–586. Springer, 2009.
  • [12] A. Björklund, T. Husfeldt, P. Kaski, and M. Koivisto. Covering and packing in linear space. In ICALP, volume 6198 of LNCS, pages 727–737. Springer, 2010.
  • [13] A. Björklund, T. Husfeldt, and M. Koivisto. Set partitioning via inclusion-exclusion. SIAM J. Comput., 39(2):546–563, 2009.
  • [14] A. Björklund, M. Koivisto, T. Husfeldt, J. Nederlof, P. Kaski, and P. Parviainen. Fast zeta transforms for lattices with few irreducibles. In SODA, pages 1436–1444. SIAM, 2012.
  • [15] H. L. Bodlaender. Lower bounds for kernelization. In IPEC, volume 8894 of LNCS. Springer, 2014.
  • [16] H. L. Bodlaender, R. G. Downey, M. R. Fellows, and D. Hermelin. On problems without polynomial kernels. J. Comput. Syst. Sci., 75(8):423–434, 2009.
  • [17] H. L. Bodlaender, B. M. P. Jansen, and S. Kratsch. Kernelization lower bounds by cross-composition. SIAM J. Discrete Math., 28(1):277–305, 2014.
  • [18] J. Bruck and M. Naor. The hardness of decoding linear codes with preprocessing. IEEE Trans. Information Theory, 36(2):381–385, 1990.
  • [19] P. Chini, J. Kolberg, A. Krebs, R. Meyer, and P. Saivasan. On the complexity of bounded context switching. In ESA, volume 87 of LIPIcs, pages 27:1–27:15. Schloss Dagstuhl, 2017.
  • [20] R. Curticapean. Block interpolation: A framework for tight exponential-time counting complexity. In ICALP, volume 9134 of LNCS, pages 380–392. Springer, 2015.
  • [21] M. Cygan, F. V. Fomin, Ł. Kowalik, D. Lokshtanov, D. Marx, M. Pilipczuk, M. Pilipczuk, and S. Saurabh. Parameterized Algorithms. Springer, 2015.
  • [22] M. Cygan, J. Nederlof, M. Pilipczuk, M. Pilipczuk, J. M. M. van Rooij, and J. O. Wojtaszczyk. Solving connectivity problems parameterized by treewidth in single exponential time. In FOCS, pages 150–159. IEEE, 2011.
  • [23] M. Cygan and M. Pilipczuk. Exact and approximate bandwidth. In ICALP, volume 5555 of LNCS, pages 304–315. Springer, 2009.
  • [24] H. Dell, T. Husfeldt, D. Marx, N. Taslaman, and M. Wahlen. Exponential time complexity of the permanent and the Tutte polynomial. ACM Trans. Algorithms, 10(4):21:1–21:32, 2014.
  • [25] H. Dell and D. Marx. Kernelization of packing problems. In SODA, pages 68–81. SIAM, 2012.
  • [26] R. G. Downey and M. R. Fellows. Fundamentals of Parameterized Complexity. Springer, 2013.
  • [27] R. G. Downey, M. R. Fellows, A. Vardy, and G. Whittle. The parametrized complexity of some fundamental problems in coding theory. SIAM J. Comput., 29(2):545–570, 1999.
  • [28] J. Flum and M. Grohe. Parameterized Complexity Theory. Springer, 2006.
  • [29] F. V. Fomin and D. Kratsch. Exact Exponential Algorithms. Springer, 2010.
  • [30] L. Fortnow and R. Santhanam. Infeasibility of instance compression and succinct PCPs for NP. J. Comput. Syst. Sci., 77(1):91–106, 2011.
  • [31] M. Fürer. Faster integer multiplication. In STOC, pages 57–66. ACM, 2007.
  • [32] V. Guruswami and A. Vardy. Maximum-likelihood decoding of Reed-Solomon codes is NP-hard. IEEE Trans. Information Theory, 51(7):2249–2256, 2005.
  • [33] N. Howgrave-Graham and A. Joux. New generic algorithms for hard knapsacks. In EUROCRYPT, volume 6110 of LNCS, pages 235–256. Springer, 2010.
  • [34] R. Impagliazzo and R. Paturi. On the complexity of k-SAT. J. Comput. Syst. Sci., 62(2):367–375, 2001.
  • [35] J. Jeong, S. H. Sæther, and J. A. Telle. Maximum matching width: New characterizations and a fast algorithm for dominating set. In IPEC, volume 43 of LIPIcs, pages 212–223. Schloss Dagstuhl, 2015.
  • [36] M. Koivisto. An O*(2^n) algorithm for graph coloring and other partitioning problems via inclusion–exclusion. In FOCS, pages 583–590. IEEE, 2006.
  • [37] D. Lokshtanov and J. Nederlof. Saving space by algebraization. In STOC, pages 321–330. ACM, 2010.
  • [38] D. K. Maslen and D. N. Rockmore. Generalized FFTs: A survey of some recent results. In Groups and Computation, volume 28 of DIMACS, pages 183–238. DIMACS/AMS, 1995.
  • [39] R. Massoud, J. Stoppe, D. Große, and R. Drechsler. Semi-formal cycle-accurate temporal execution traces reconstruction. In FORMATS, volume 10419 of LNCS, pages 335–351. Springer, 2017.
  • [40] A. May, A. Meurer, and E. Thomae. Decoding random linear codes in O(2^{0.054n}). In ASIACRYPT, volume 7073 of LNCS, pages 107–124. Springer, 2011.
  • [41] E. Prange. The use of information sets in decoding cyclic codes. IRE Trans. Information Theory, 8(5):5–9, 1962.
  • [42] J. Stern. A method for finding codewords of small weight. In Coding Theory and Applications, volume 388 of LNCS, pages 106–113. Springer, 1988.
  • [43] J. M. M. van Rooij, H. L. Bodlaender, and P. Rossmanith. Dynamic programming on tree decompositions using generalised fast subset convolution. In ESA, volume 5757 of LNCS, pages 566–577. Springer, 2009.
  • [44] A. Vardy. Algorithmic complexity in coding theory and the minimum distance problem. In STOC, pages 92–109. ACM, 1997.
  • [45] A. Vardy. The intractability of computing the minimum distance of a code. IEEE Trans. Information Theory, 43(6):1757–1766, 1997.
  • [46] R. Williams. Finding paths of length k in O*(2^k) time. Inf. Process. Lett., 109(6):315–318, 2009.
  • [47] F. Yates. The design and analysis of factorial experiments. Imperial Bureau of Soil Science, 1937.