# Top-k-Convolution and the Quest for Near-Linear Output-Sensitive Subset Sum

In the classical Subset Sum problem we are given a set X and a target t, and the task is to decide whether some subset of X sums to t. A recent line of research has resulted in Õ(t)-time algorithms, which are (near-)optimal under popular complexity-theoretic assumptions. On the other hand, the standard dynamic programming algorithm runs in time O(n · |𝒮(X,t)|), where 𝒮(X,t) is the set of all subset sums of X that are smaller than t. Moreover, all known pseudopolynomial algorithms solve a stronger task: they compute the whole set 𝒮(X,t). Since these two running times are incomparable, in this paper we ask whether one can achieve the best of both worlds: running time Õ(|𝒮(X,t)|). In particular, we ask whether 𝒮(X,t) can be computed in time near-linear in the output size. Using a diverse toolkit containing techniques such as color coding, sparse recovery, and sumset estimates, we make considerable progress towards this question and design an algorithm running in time Õ(|𝒮(X,t)|^{4/3}).

Central to our approach is the study of top-k-convolution, a natural problem of independent interest: given sparse polynomials with non-negative coefficients, compute the lowest k non-zero monomials of their product. We design an algorithm running in time Õ(k^{4/3}) by combining sparse convolution with sumset estimates from Additive Combinatorics. Moreover, we provide evidence that going beyond some of the barriers we have faced requires either an algorithmic breakthrough or possibly new techniques from Additive Combinatorics on how to pass from information on restricted sumsets to information on unrestricted sumsets.
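To make the two objects in the abstract concrete, here is a minimal Python sketch: the textbook dynamic program that computes 𝒮(X,t) in O(n · |𝒮(X,t)|) time, and a naive quadratic baseline for top-k-convolution. Both are illustrative reference implementations only; the paper's Õ(|𝒮(X,t)|^{4/3}) and Õ(k^{4/3}) algorithms are far more involved.

```python
def subset_sums_below(X, t):
    """Textbook DP: return S(X, t), the set of all subset sums of X
    that are smaller than t. Each of the n rounds touches at most
    |S(X, t)| sums, giving O(n * |S(X, t)|) time overall."""
    sums = {0}
    for x in X:
        sums |= {s + x for s in sums if s + x < t}
    return sums


def top_k_convolution(f, g, k):
    """Naive quadratic baseline for top-k-convolution: f and g map
    exponents to non-negative coefficients; return the k lowest-degree
    non-zero monomials of the product f * g."""
    prod = {}
    for ef, cf in f.items():
        for eg, cg in g.items():
            prod[ef + eg] = prod.get(ef + eg, 0) + cf * cg
    return dict(sorted(prod.items())[:k])
```

For example, `subset_sums_below([1, 2, 3], 5)` returns `{0, 1, 2, 3, 4}`, and `top_k_convolution({0: 1, 2: 1}, {0: 1, 1: 1}, 3)` returns the three lowest monomials of (1 + x²)(1 + x).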
