1 Introduction
The classical Schwartz-Zippel lemma (due to Ore [Ore22], Schwartz [Sch80], Zippel [Zip79] and DeMillo & Lipton [DL78]) states that if $\mathbb{F}$ is a field, $P \in \mathbb{F}[x_1, \ldots, x_n]$ is a nonzero polynomial of degree $d$, and $S$ is an arbitrary finite subset of $\mathbb{F}$, then the number of points on the grid $S^n$ (we use "grids" and "product sets" interchangeably; see also Remark 1.2) where $P$ is zero is upper bounded by $d \cdot |S|^{n-1}$. A higher-order multiplicity version of this lemma (due to Dvir, Kopparty, Saraf and Sudan [DKSS13]) states that the number of points on the grid $S^n$ where $P$ is zero with multiplicity at least $s$ (meaning that all the partial derivatives of $P$ of order less than $s$ vanish at the point; see Section 3 for a formal definition) is upper bounded by $\frac{d}{s} \cdot |S|^{n-1}$. This latter bound is only interesting when $d/s < |S|$, so that it is smaller than the trivial bound of $|S|^n$.
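As a quick sanity check of the classical bound, one can count the zeros of a toy polynomial on a small grid; the polynomial and grid below are our own illustrative choices, not from the paper.

```python
from itertools import product

# Toy check of the Schwartz-Zippel bound: a nonzero bivariate
# polynomial of total degree d vanishes on at most d * |S|^(n-1)
# points of the grid S^n.

def f(x, y):
    # (x - 1)(x - 2)(y - 3): total degree d = 3.
    return (x - 1) * (x - 2) * (y - 3)

S = list(range(10))          # |S| = 10
d, n = 3, 2

zeros = sum(1 for p in product(S, repeat=n) if f(*p) == 0)
bound = d * len(S) ** (n - 1)
print(zeros, bound)          # 28 30
assert zeros <= bound
```

The zero set here is the union of the lines $x = 1$, $x = 2$ and $y = 3$, giving $28$ points, safely below the bound of $30$.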
This innately basic statement about low degree polynomials has had innumerable applications in both theoretical computer science and discrete mathematics and has by now become a part of the standard toolkit when working with low degree polynomials [Sar11, Gut16]. Despite this, the following natural algorithmic version of this problem remains open.
Algorithmic SZ question.
Let $\mathbb{F}$, $n$, $d$, $s$, and $S$ be as above. Design an efficient algorithm that takes as input an arbitrary function $f$ on the grid $S^n$ (giving, at each point, a purported evaluation of a polynomial together with all its partial derivatives of order less than $s$) and finds a polynomial $P$ of degree at most $d$ (if one exists) such that the encoding of $P$, i.e., the function on $S^n$ obtained by evaluating $P$ and all its partial derivatives of order less than $s$,
differs from $f$ on less than a $\frac{d}{2s|S|}$ fraction of the points of $S^n$.
The aforementioned multiplicity Schwartz-Zippel lemma (henceforth referred to as the multiplicity SZ lemma for brevity) assures us that if there is a polynomial $P$ whose encoding differs from $f$ on less than a $\frac{d}{2s|S|}$ fraction of points, then it must be unique! Thus, in some sense, the above question is essentially asking for an algorithmic version of the multiplicity SZ lemma.
Although a seemingly natural problem, especially given the ubiquitous presence of the SZ lemma in computer science, this question continues to remain open even for bivariate polynomials! In fact, even the multiplicity-one case $s = 1$, which corresponds to an algorithmic version of the classical SZ lemma (without multiplicities), was only very recently resolved in a beautiful work of Kim and Kopparty [KK17]. Unfortunately, their algorithm does not seem to extend to the case of higher multiplicities $s > 1$, and they mention this as one of the open problems.
In this work, we make some progress towards answering the algorithmic SZ question. In particular, we design an efficient deterministic algorithm for this problem when the field has characteristic zero or larger than the degree $d$, the dimension $n$ is an arbitrary constant and the multiplicity parameter $s$ is a sufficiently large constant. In fact, in this setting we prove a stronger result, which we now informally state (see Theorem 1.1 for a formal statement).
Main result.
Let $n$ be an arbitrary constant, $\epsilon$ be a positive constant and $s$ be a large enough positive integer. Over fields of characteristic zero or characteristic larger than the degree $d$, there is a deterministic polynomial-time algorithm that on input $f$ outputs all degree-$d$ polynomials $P$ whose encoding differs from the input function $f$ on less than a $(\delta - \epsilon)$ fraction of points on the grid $S^n$, where $\delta$ is the relative distance of the corresponding multiplicity code.
We note that the fraction of errors that can be tolerated in the above result is $(\delta - \epsilon)$, which is significantly larger than the error parameter $\frac{d}{2s|S|}$ in the algorithmic SZ question. Therefore, we no longer have the guarantee of a unique solution $P$ whose encoding is close to $f$. In fact, for this error regime, it is not even clear that the number of candidate solutions is polynomially bounded. The algorithm stated in the main result outputs all such candidate solutions, and in particular shows that their number is polynomially bounded (for constant $n$). This fraction of errors is the best one can hope for, since there are functions (for instance, the all-zeros function) for which superpolynomially many degree-$d$ polynomials have encodings at relative distance exactly $\delta$ (see Appendix A).
In the language of error correcting codes, the algorithmic SZ question is the question of designing efficient unique decoding algorithms for multivariate multiplicity codes over arbitrary product sets when the error is at most half the minimum distance, and our main result gives an efficient algorithm for the possibly harder problem of list decoding these codes from a $(\delta - \epsilon)$ relative error, where $\delta$ is the distance of the code, provided that the field has characteristic larger than $d$ or equal to zero, $n$ is a constant and $s$ is large enough. In the next section, we define some of these notions, and state and discuss the results and the prior work in this language.
1.1 Multiplicity codes
Polynomial based error correcting codes, such as Reed-Solomon codes and Reed-Muller codes, are a very important family of codes in coding theory, both in theory and in practice. Multiplicity codes are a natural generalization of Reed-Muller codes wherein at each evaluation point, one not only gives the evaluation of the polynomial, but also all its derivatives up to a certain order.
Formally, let $\mathbb{F}$ be a field, $s$ a positive integer, $S$ an arbitrary finite subset of the field $\mathbb{F}$, $d$ the degree parameter and $n$ the ambient dimension. The codewords of the $n$-variate order-$s$ multiplicity code of degree-$d$ polynomials over $\mathbb{F}$ on the grid $S^n$ are obtained by evaluating an $n$-variate polynomial of total degree at most $d$, along with all its derivatives of order less than $s$, at all points in the grid $S^n$. Thus, the codeword corresponding to a polynomial $P$ of total degree at most $d$ can be viewed as a function $\mathrm{Enc}(P)$ on $S^n$ given by
$$\mathrm{Enc}(P)(\mathbf{a}) = \left( \overline{\partial}^{(\mathbf{e})} P(\mathbf{a}) \right)_{|\mathbf{e}| < s},$$
where $\overline{\partial}^{(\mathbf{e})} P$ is the Hasse derivative of the polynomial $P$ with respect to the monomial $\mathbf{x}^{\mathbf{e}}$. The $s = 1$ version of these multiplicity codes corresponds to the classical Reed-Solomon codes (univariate case, $n = 1$) and Reed-Muller codes (multivariate setting, $n > 1$). The distance of these codes is $\delta = 1 - \frac{d}{s|S|}$, which follows from the multiplicity SZ lemma mentioned earlier in the introduction.
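As a concrete illustration, here is a minimal sketch of the encoding map in the univariate case $n = 1$, $s = 2$, with integers standing in for a large field; the polynomial and evaluation set are our own toy choices.

```python
from math import comb

def hasse_derivatives(coeffs, a, s):
    """Evaluate the order-0..s-1 Hasse derivatives of
    P(x) = sum_j coeffs[j] x^j at the point a.
    The i-th Hasse derivative of x^j is C(j, i) x^(j-i)."""
    return tuple(
        sum(comb(j, i) * c * a ** (j - i)
            for j, c in enumerate(coeffs) if j >= i)
        for i in range(s)
    )

def encode(coeffs, S, s):
    # Codeword of the univariate order-s multiplicity code: one
    # alphabet symbol (a tuple of s field elements) per point of S.
    return [hasse_derivatives(coeffs, a, s) for a in S]

# P(x) = x^2 + 1, order s = 2, over the "points" 0..4.
word = encode([1, 0, 1], range(5), 2)
print(word[3])  # (P(3), P'(3)) = (10, 6)
```

Each coordinate of the codeword carries $s$ field elements, which is why the alphabet grows with the multiplicity parameter.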
Univariate multiplicity codes were first studied by Rosenbloom & Tsfasman [RT97] and Nielsen [Nie01]. Multiplicity codes for general $n$ and $s$ were introduced by Kopparty, Saraf and Yekhanin [KSY14] in the context of local decoding. Subsequently, Kopparty [Kop15] and Guruswami & Wang [GW13] independently proved that univariate multiplicity codes over prime fields (or more generally over fields whose characteristic is larger than the degree of the underlying polynomials) achieve "list-decoding capacity". In the same work, Kopparty [Kop15] proved that multivariate multiplicity codes are list decodable up to the Johnson bound.
We remark that in the case of univariate codes (both Reed-Solomon and larger order multiplicity codes), the decoding algorithms work for all choices of the evaluation set $S$. However, all decoding algorithms for the multivariate setting (both Reed-Muller and larger order multiplicity codes) work only when the underlying set has a nice algebraic structure or when the degree is very small (cf. the Reed-Muller list-decoding algorithm of Sudan [Sud97] and its multiplicity variant due to Guruswami & Sudan [GS99]). The only exception to this is the unique decoding algorithm of Kim and Kopparty [KK17] for Reed-Muller codes over product sets.
1.2 Our results
Below we state and contrast our results on the problem of decoding multivariate multiplicity codes (over grids) from a $(\delta - \epsilon)$ fraction of errors for any constant $\epsilon > 0$, where $\delta$ is the distance of the code. Our first result is as follows.
Theorem 1.1 (List decoding of multivariate multiplicity codes with polynomial list size).
For every $\epsilon > 0$ and integer $n \geq 1$, there exists an integer $s_0$ such that for all multiplicity parameters $s \geq s_0$, degree parameter $d$, fields $\mathbb{F}$ of large enough size and of characteristic either zero or larger than $d$, and any finite set $S \subseteq \mathbb{F}$ of large enough size, the following holds.
For the $n$-variate order-$s$ multiplicity code of degree-$d$ polynomials over $\mathbb{F}$ on the grid $S^n$, there is an efficient algorithm which, when given a received word $w$, outputs all codewords with agreement at least $(1 - \delta + \epsilon)$ with $w$, where $\delta$ is the relative distance of this code.
Remark 1.2.
A general product set in $\mathbb{F}^n$ is of the form $S_1 \times S_2 \times \cdots \times S_n$, where each $S_i$ is a subset of $\mathbb{F}$. For ease of notation, we always work with product sets which are grids, i.e., of the form $S^n$ for some $S \subseteq \mathbb{F}$, even though all of our results hold for general product sets.
As indicated before, this error fraction is the best one can hope for with respect to polynomial-time list-decoding algorithms for multiplicity codes, since there are superpolynomially many codewords at relative distance exactly the minimum distance from certain received words (see Appendix A). Until recently, it was not known whether multivariate multiplicity codes were list decodable beyond the Johnson bound. For the case of grids $S^n$, where $S$ is an arbitrary set, even unique decoding algorithms were not known. We note that the above result does not yield a list-decoding algorithm for all multiplicities, but only for large enough multiplicities (based on the dimension $n$ and the error parameter $\epsilon$).
Kopparty, Ron-Zewi, Saraf and Wootters [KRSW18] showed how to reduce the size of the list for univariate multiplicity codes from polynomial to constant (dependent only on the error parameter $\epsilon$). We use similar ideas, albeit in the multivariate setting, to reduce the list size in Theorem 1.1 to a constant (dependent only on the error parameter $\epsilon$ and the dimension $n$).
Theorem 1.3 (List decoding of multivariate multiplicity codes with constant list size).
For every $\epsilon > 0$ and integer $n \geq 1$, there exists an integer $s_0$ such that for all multiplicity parameters $s \geq s_0$, degree parameter $d$, fields $\mathbb{F}$ of large enough size and of characteristic either zero or larger than $d$, and any finite set $S \subseteq \mathbb{F}$ of large enough size, the following holds.
For the $n$-variate order-$s$ multiplicity code of degree-$d$ polynomials over $\mathbb{F}$ on the grid $S^n$, there is a randomized algorithm, using polynomially many operations over the field, which, when given a received word $w$, outputs all codewords with agreement at least $(1 - \delta + \epsilon)$ with $w$, where $\delta$ is the relative distance of this code.
Moreover, the number of such codewords is at most a constant depending only on $\epsilon$ and $n$.
Remark 1.4.
We remark that by taking a slightly different view of the list decoding algorithms of Theorem 1.1 and Theorem 1.3, the upper bound on the number of field operations needed in both theorems can be improved. We sketch this view in Section 4.7 and note the runtime analysis in Section 4.8.
The above two results generalize (and imply) the corresponding theorems for the univariate setting due to Kopparty [Kop15], Guruswami & Wang [GW13] and Kopparty, Ron-Zewi, Saraf & Wootters [KRSW18]. We remark that Kopparty, Ron-Zewi, Saraf and Wootters [KRSW18], in a recent improvement to their earlier work, prove a list-decoding result for multivariate multiplicity codes similar to Theorem 1.3 for the case when the evaluation set is the entire space. Though their list-decoding algorithm does not extend to product sets, it has the added advantage that it is local.
As noted earlier, the only previous algorithmic method for decoding polynomial-based codes over product sets was that of Kim and Kopparty [KK17]. We describe the ideas in our algorithm shortly (in Section 2), but stress here that our approach is very different from that of Kim and Kopparty. Their work may be viewed as an algorithmic version of the inductive proof of the SZ lemma, and indeed recovers the SZ lemma as a consequence. Their work uses algorithmic aspects of algebraic decoding as a black box (to solve univariate cases). Our work, in contrast, only relies on the multiplicity SZ lemma as a black box. Instead, we open up the "algebraic decoding" black box and make significant changes there, thus adding to the toolkit available to deal with polynomial evaluations over product sets.
1.3 Further discussion and open problems
Our result falls short of completely resolving the algorithmic SZ question in two respects: though it works for all dimensions, it only works when the multiplicity parameter is large enough and when the characteristic of the field is either zero or larger than the degree parameter. Making improvements on any of these fronts is an interesting open problem.
 All multiplicities:

The algorithms presented in this paper decode all the way up to a $(\delta - \epsilon)$ fraction of errors if the multiplicity parameter $s$ is large enough. However, for small multiplicities, even the unique decoding problem is open. For $s = 1$, the result due to Kim and Kopparty [KK17] addresses the unique decoding question, but the list-decoding question for product sets is open.
 Fields of small characteristic:

All known proofs of list-decodability of multiplicity codes beyond the Johnson bound (both algorithmic and combinatorial) require the field to be of characteristic zero or of large enough characteristic. The problem of list-decoding multiplicity codes over small characteristic beyond the Johnson bound is open even in the univariate setting. As pointed out to us by Swastik Kopparty, this problem of list-decoding univariate multiplicity codes over fields of small characteristic beyond the Johnson bound is intimately related to list-decoding Reed-Solomon codes beyond the Johnson bound.
For a more detailed discussion of multiplicity codes and related open problems, we refer the reader to the excellent survey by Kopparty [Kop14].
Organization
The rest of this paper is organized as follows. We begin with an overview of our proofs in Section 2, followed by some preliminaries (involving Hasse derivatives, their properties, and multiplicity codes) in Section 3. We then describe and analyze the list-decoding algorithm for multivariate multiplicity codes in Section 4, thus proving Theorem 1.1. In Section 5, we show how to further reduce the list size to a constant, thus proving Theorem 1.3. In Section 6, we prove some properties of subspace restrictions of multivariate multiplicity codes needed in Section 5. In Appendix A, we show that there are superpolynomially many minimum-weight codewords, thus proving the tightness of Theorems 1.1 and 1.3 with respect to the list-decoding radius.
2 Proof overview
In this section, we first describe some of the hurdles in extending the univariate algorithms of Kopparty [Kop15] and Guruswami & Wang [GW13] to the multivariate setting, especially for product sets, and then give a detailed overview of the proofs of Theorem 1.1 and Theorem 1.3.
2.1 Background and motivation for our algorithm
To explain our algorithm, it will be convenient to recall the general polynomial method framework underlying the list-decoding algorithms in the univariate setting due to Kopparty [Kop15] and Guruswami & Wang [GW13]. Let $w$ be the received word. The framework proceeds in the following three steps.
 Step 1: Algebraic Explanation.

Find a nonzero polynomial $Q$, satisfying appropriate degree constraints, that "explains" the received word $w$.
 Step 2: contains the close codewords.

Show that every low-degree polynomial $P$ whose encoding agrees with $w$ on a large enough fraction of points satisfies the algebraic condition imposed by $Q$, namely that $Q$ vanishes identically when $P$ and its derivatives are substituted for the corresponding variables.
 Step 3: Reconstruction step.

Recover every polynomial $P$ that satisfies the above condition.
The main (and only) difference between the list-decoding algorithms of Kopparty [Kop15] and Guruswami & Wang [GW13] is that Guruswami and Wang show that it suffices to work with a polynomial $Q$ which is linear in the variables standing for the polynomial and its derivatives, while Kopparty allows for larger degrees in these variables. As a result, Kopparty performs the recovery step by solving a differential equation, while Guruswami and Wang observe that due to the simple structure of $Q$, the solution can be obtained by solving a linear system of equations.
How is multivariate list-decoding performed? There are by now two standard approaches. Inspired by the Pellikaan-Wu [PW04] observation that Reed-Muller codes are a subcode of Reed-Solomon codes over an extension field, Kopparty performs a similar reduction of the multivariate multiplicity code to a univariate multiplicity code over an extension field. Another approach is to solve the multivariate case by solving the univariate subproblem on various lines in the space. However, both these approaches work only if the evaluation set has some special algebraic structure.
For our proof, we take an alternate approach and always work in the multivariate setting without resorting to a reduction to the univariate setting. As we shall see, our approach has some advantages over that of Kopparty [Kop15], both in quantitative terms, since the algorithm can tolerate a larger number of errors, and in qualitative terms, since the underlying set of evaluation points does not have to be an algebraically nice subset as in [Kop15]; evaluations on an arbitrary grid $S^n$ suffice for the algorithm to work.
To extend the univariate list-decoding algorithm outlined above to the multivariate setting, we adopt the following approach. We consider a new set of formal variables $\mathbf{z} = (z_1, \ldots, z_n)$ and, instead of directly working with the information about partial derivatives in the received word, we think of the partial derivatives of the same order as being glued together using monomials in $\mathbf{z}$. With this reorganized (and somewhat mysterious) view of the partial derivatives, we follow the outline of the univariate setting as described above. In the interpolation step, we find a polynomial $Q$ with coefficients from the field of rational functions $\mathbb{F}(\mathbf{z})$, instead of just $\mathbb{F}$, to explain the received word. Thus, in this instance, the linear system in the interpolation step is over the field $\mathbb{F}(\mathbf{z})$. We then argue that $Q$ contains information about all the codewords that are close to the received word, and eventually solve the resulting equation to recover all of these codewords. This might seem rather strange to begin with, but these ideas of gluing together the partial derivatives and working over the field $\mathbb{F}(\mathbf{z})$ immediately generalize the univariate list decoding algorithm to the multivariate setting. Working with this field of fractions comes with its costs; it makes some of the steps costly and, in particular, the recovery step far more elaborate than that in the Guruswami-Wang setting. However, this recovery step happens to be a special case of a similar step in the recent work of Guo, Kumar, Saptharishi and Solomon [GKSS19], and we adapt their algorithm to our setting.

As a first attempt, a more standard way to generalize the algorithms of Kopparty [Kop15] and Guruswami & Wang [GW13] to the multivariate setting would have been to work with the partial derivatives directly. And, while this approach seems alright for the interpolation step, it seems hard to work with when we try to solve the resulting equation to recover all the close enough codewords. In particular, it is not even clear in this setup that the number of solutions of the algebraic explanation (and hence, the number of close enough codewords) is polynomially bounded. This mysterious step of gluing together derivatives of the same order in a reversible manner (in the sense that we can read off the individual derivatives from the glued term) gets around this problem, and makes it viable to prove a polynomial upper bound on the number of solutions, and eventually to solve the equation to recover all the close enough codewords.
Given this background, we now give a more detailed outline of our algorithm below.
2.2 Theorem 1.1 : Multivariate listdecoding algorithm with polynomialsized lists
Viewing the encoding as a formal power series
Multiplicity codes are described by saying that the encoding of a polynomial $f$ consists of the evaluation of all partial derivatives of $f$ of order less than $s$ at every point in the appropriate evaluation set, e.g. the grid $S^n$. For our algorithm, we think of these partial derivatives of $f$ as being rearranged on the basis of the order of the derivatives as follows. We take a fresh set of formal variables $\mathbf{z} = (z_1, \ldots, z_n)$ and define the differential operators
$$\mathcal{P}_i(f)(\mathbf{x}, \mathbf{z}) := \sum_{\mathbf{e} : |\mathbf{e}| = i} \overline{\partial}_{\mathbf{x}}^{(\mathbf{e})}(f) \cdot \mathbf{z}^{\mathbf{e}},$$
where $\overline{\partial}_{\mathbf{x}}^{(\mathbf{e})}$ denotes the Hasse derivative of the polynomial $f$ with respect to the monomial $\mathbf{x}^{\mathbf{e}}$ (since we have both $\mathbf{x}$ and $\mathbf{z}$ variables, the subscript makes explicit which variables the derivative is taken with respect to).
Let $\mathcal{P}(f) = (\mathcal{P}_0(f), \mathcal{P}_1(f), \ldots, \mathcal{P}_{s-1}(f))$ be the tuple of polynomials defined as above.
We view the encoding for $f$ as giving us the evaluation of the tuple $\mathcal{P}(f)$ as $\mathbf{x}$ varies in $S^n$. Note that for every fixing of $\mathbf{x}$ to some $\mathbf{a} \in S^n$, $\mathcal{P}(f)(\mathbf{a}, \mathbf{z})$ is a tuple of polynomials in the $\mathbf{z}$ variables. Thus, the alphabet size is still large. Clearly, this is just a change of viewpoint, as we can go from the original encoding to this and back efficiently, and at this point it is unclear that this change of perspective would be useful.
Finding an equation satisfied by all close enough codewords
Let $w$ be a received word. We view $w$ as a function from $S^n$ to tuples of polynomials in the $\mathbf{z}$ variables, as discussed in the previous step. The goal of the decoding step is to find all the polynomials $P$ of degree at most $d$ whose encoding is close enough to $w$.
As a first step towards this, we find a nonzero polynomial $Q$, linear in a new variable standing for the message polynomial, which explains the received word $w$, i.e., agrees with the information in $w$ at every point of $S^n$, and satisfies some appropriate degree constraints. For technical reasons, we also end up imposing some more constraints on $Q$ in terms of its partial derivatives, the details of which can be found in Section 4.3. Each of these constraints can be viewed as a homogeneous linear equation in the coefficients of $Q$ over the field $\mathbb{F}(\mathbf{z})$. We choose the degree of $Q$ to be large enough to ensure that this system has more variables than constraints, and therefore has a nonzero solution.
This step is the interpolation step which shows up in any standard application of the polynomial method, and our setup is closest to, and a natural generalization of, the setup in the list decoding algorithm of Guruswami and Wang [GW13] for univariate multiplicity codes.
The key property of the polynomial $Q$ thus obtained is that for every degree-$d$ polynomial $P$ whose encoding is close enough to $w$, $Q$ vanishes identically upon substituting (the glued derivatives of) $P$.
To see this, we note that from the upper bound on the degree of $Q$ and the fact that $P$ has degree at most $d$, the polynomial obtained by this substitution is of not too high degree in $\mathbf{x}$. Moreover, from the constraints imposed on $Q$ during interpolation, it follows that at every point where the encodings of $P$ and $w$ agree, this polynomial vanishes with high multiplicity. Thus, if the parameters are favorably set, it follows that it has too many zeroes of high multiplicity on a grid, and hence, by the multiplicity Schwartz-Zippel lemma (see Lemma 3.4), it must be identically zero.
We note that this is the only place in the proof where we use anything about the structure of the set of evaluation points, i.e., that the set of evaluation points is a grid.
Solving the equation to recover all close enough codewords
As the final step of our algorithm, we try to recover all polynomials $P$ of degree at most $d$ satisfying the identity obtained in the previous step. This identity
can be viewed as a partial differential equation of order $s - 1$
and degree one, and we construct all candidate solutions via the method of power series. We start by trying all possible choices of field elements for the coefficients of the low degree monomials of $P$, and iteratively recover the remaining coefficients of $P$ by reconstructing one homogeneous component at a time. Moreover, we observe that for each choice of the initial coefficients, there is a unique lift to a degree-$d$ polynomial. Thus, the number of solutions is upper bounded by the number of initial choices, which is polynomially bounded. We note that this is one place where working with the glued operators $\mathcal{P}_i$, as opposed to having an equation in the individual partial derivatives of $P$, is of crucial help. Even though the equation is a partial differential equation of high order, the fact that these derivatives appear in a structured form via the operators $\mathcal{P}_i$ helps us prove a polynomial upper bound on the number of such solutions and solve for $P$. Without this additional structure, it is unclear if one can prove a polynomial upper bound on the number of solutions of the corresponding equation.
This reconstruction step is a multivariate generalization of similar reconstruction steps in the list decoding algorithms of Kopparty [Kop15] and Guruswami & Wang [GW13] for univariate multiplicity codes. Interestingly, this is also a special case of a similar reconstruction procedure in the work of Guo, Kumar, Saptharishi and Solomon [GKSS19], where the polynomial $Q$ could potentially be of higher degree in the substituted variables, and is given via an arithmetic circuit of small size and degree, and the goal is to show that all (low degree) polynomial solutions have small circuits. In contrast, we are working with a $Q$ which is linear in the substituted variable, we have access to the coefficient representation of this polynomial, and we construct the solutions in the monomial representation. As a consequence, the details of this step are much simpler here when compared to [GKSS19].
In this step of our algorithm, viewing the encoding in terms of the differential operators $\mathcal{P}_i$ turns out to be useful. The iterative reconstruction outlined above crucially uses the fact that for any homogeneous polynomial $f$, $\mathcal{P}_i(f)$ is again homogeneous, of degree determined by that of $f$. The other property that we use is that given $\mathcal{P}_i(f)$ for any homogeneous polynomial $f$, we can uniquely read off all the partial derivatives of order $i$ of $f$, and, via a folklore observation of Euler, uniquely reconstruct the polynomial $f$ itself (see Lemma 4.4).
Finally, we note that the precise way of gluing together the partial derivatives of order $i$ in the definition of the operator $\mathcal{P}_i$ is not absolutely crucial here and, as is evident in Lemma 4.4, many other candidates would have satisfied the necessary properties.
The details of this step are in Section 4.5, and they essentially complete the proof of Theorem 1.1.
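To give a flavor of the power-series reconstruction in the simplest possible setting, the following toy sketch solves a Guruswami-Wang-style equation $A(x) + B(x) \cdot P(x) \equiv 0$ for a univariate $P$, one coefficient at a time. This is our own simplification; the paper's step handles the multivariate, higher-order case over $\mathbb{F}(\mathbf{z})$.

```python
from fractions import Fraction

def recover(A, B, deg):
    """Solve A(x) + B(x) * P(x) = 0 for P of degree <= deg by
    power series: match coefficients of x^0, x^1, ... in turn.
    Coefficient lists are low-to-high; requires B(0) != 0."""
    a = A + [0] * (deg + 1)
    b = B + [0] * (deg + 1)
    p = []
    for k in range(deg + 1):
        # Coefficient of x^k in B*P involving only known p[j], j < k.
        conv = sum(b[k - j] * p[j] for j in range(k))
        p.append(Fraction(-a[k] - conv, b[0]))
    return p

# Plant P(x) = 2 + 3x + x^2 and B(x) = 1 + x, then set A = -B*P.
P = [2, 3, 1]
B = [1, 1]
A = [-(sum(B[i] * P[k - i] for i in range(len(B)) if 0 <= k - i < len(P)))
     for k in range(len(B) + len(P) - 1)]
print(recover(A, B, 2))  # recovers [2, 3, 1]
```

The point mirrored here is that once the low-order coefficients are fixed, each subsequent coefficient is forced, which is what bounds the number of solutions.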
2.3 Theorem 1.3: Reducing the list size to a constant
In Section 5, we combine our proof of Theorem 1.1 with the techniques in the recent work of Kopparty, Ron-Zewi, Saraf and Wootters [KRSW18] to show that the list size in the decoding algorithm in Theorem 1.1 can be reduced to a constant.
The key to this step is the observation that since $Q$ is linear in the substituted variable, the solutions of the equation form an affine subspace of polynomials. The reconstruction algorithm in Section 4.5 in fact gives us an affine subspace of polynomials of degree at most $d$ which consists of all the solutions of the equation.
This is precisely the setting in the work of Kopparty, Ron-Zewi, Saraf and Wootters [KRSW18] in the context of folded Reed-Solomon codes and univariate multiplicity codes, and we essentially apply their ideas off the shelf, combining them with our proof of Theorem 1.1 to reduce the list size to a constant.
In general, this idea of solving for a subspace, and then using the ideas in [KRSW18] to recover codewords in the subspace which are close to the received word, has the added advantage that it can be applied over all fields. As an immediate consequence, we get an analog of Theorem 1.1 over infinite fields like the rationals as well.
3 Preliminaries
3.1 Notation
We use the following notation.

$\mathbb{F}$ is the field we work over, and we assume the characteristic of $\mathbb{F}$ to be either zero or larger than the degree parameter of the message space.

We use bold letters to denote tuples of variables (e.g., $\mathbf{x} = (x_1, \ldots, x_n)$ and $\mathbf{z} = (z_1, \ldots, z_n)$).

We work with polynomials which are in general members of $\mathbb{F}[\mathbf{x}, \mathbf{z}]$. We denote monomials in $\mathbf{x}$ and $\mathbf{z}$ by $\mathbf{x}^{\mathbf{e}}$ and $\mathbf{z}^{\mathbf{e}}$ respectively, where $\mathbf{e} \in \mathbb{Z}_{\geq 0}^n$. The degree of the monomial $\mathbf{x}^{\mathbf{e}}$ is $|\mathbf{e}| = \sum_i e_i$.

For $\mathbf{a}, \mathbf{b} \in \mathbb{Z}_{\geq 0}^n$ we say $\mathbf{a} \leq \mathbf{b}$ iff for all $i$ we have $a_i \leq b_i$. Also, we use $\binom{\mathbf{b}}{\mathbf{a}}$ to denote $\prod_i \binom{b_i}{a_i}$.

For a natural number $m$, $[m]$ denotes the set $\{1, 2, \ldots, m\}$.
3.2 Hasse derivatives
Throughout the paper we work with Hasse derivatives; we use the terms "Hasse derivative" and "partial derivative" interchangeably.
Definition 3.1 (Hasse Derivative).
For a polynomial $P \in \mathbb{F}[\mathbf{x}]$, the Hasse derivative of type $\mathbf{e} \in \mathbb{Z}_{\geq 0}^n$ is the coefficient of $\mathbf{z}^{\mathbf{e}}$ in the polynomial $P(\mathbf{x} + \mathbf{z})$. We denote it by $\overline{\partial}^{(\mathbf{e})}(P)$.
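The defining property, namely that the Hasse derivatives are exactly the coefficients of the shifted polynomial, can be checked numerically in the univariate case (a toy example of our own with integer coefficients):

```python
from math import comb

def hasse(coeffs, i):
    """Coefficient list of the i-th Hasse derivative of
    P(x) = sum_j coeffs[j] x^j:  the i-th Hasse derivative
    of x^j is C(j, i) x^(j-i)."""
    return [comb(j, i) * coeffs[j] for j in range(i, len(coeffs))]

def evaluate(coeffs, x):
    return sum(c * x ** j for j, c in enumerate(coeffs))

# Defining property: P(x + z) = sum_i (i-th Hasse derivative of P)(x) * z^i.
P = [5, -2, 0, 7]            # 5 - 2x + 7x^3
x, z = 3, 4
lhs = evaluate(P, x + z)
rhs = sum(evaluate(hasse(P, i), x) * z ** i for i in range(len(P)))
print(lhs == rhs)  # True
```

Over fields of small characteristic the Hasse derivative differs from the usual iterated derivative by the factorial factor, which is why it is the right notion here.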
We state some basic properties of Hasse Derivatives below. Some of these are taken from [DKSS13, Proposition 4].
Proposition 3.2 (Basic Properties of Hasse Derivatives).
Let $P, Q \in \mathbb{F}[\mathbf{x}]$ and consider types $\mathbf{a}, \mathbf{b} \in \mathbb{Z}_{\geq 0}^n$.

$\overline{\partial}^{(\mathbf{a})}(P + Q) = \overline{\partial}^{(\mathbf{a})}(P) + \overline{\partial}^{(\mathbf{a})}(Q)$.

If $P$ is a homogeneous polynomial of degree $d$, then $\overline{\partial}^{(\mathbf{a})}(P)$ is a homogeneous polynomial of degree $d - |\mathbf{a}|$.

$\overline{\partial}^{(\mathbf{a})}(\mathbf{x}^{\mathbf{b}}) = \binom{\mathbf{b}}{\mathbf{a}} \mathbf{x}^{\mathbf{b} - \mathbf{a}}$ if $\mathbf{a} \leq \mathbf{b}$, and $0$ otherwise.

Hasse derivatives compose in the following manner: $\overline{\partial}^{(\mathbf{a})}\left(\overline{\partial}^{(\mathbf{b})}(P)\right) = \binom{\mathbf{a} + \mathbf{b}}{\mathbf{a}} \overline{\partial}^{(\mathbf{a} + \mathbf{b})}(P)$.

Product rule for Hasse derivatives: $\overline{\partial}^{(\mathbf{a})}(P \cdot Q) = \sum_{\mathbf{b} \leq \mathbf{a}} \overline{\partial}^{(\mathbf{b})}(P) \cdot \overline{\partial}^{(\mathbf{a} - \mathbf{b})}(Q)$.
3.3 Multiplicity code
We now define the notion of multiplicity of a polynomial $P$ at a point $\mathbf{a}$. The multiplicity of $P$ at the origin is $m$ iff $m$ is the highest integer such that no monomial of total degree less than $m$ appears in the coefficient representation of $P$. We formalize the notion at a general point below using Hasse derivatives.
Definition 3.3 (multiplicity).
A polynomial $P$ is said to have multiplicity $m$ at a point $\mathbf{a}$, denoted by $\mathrm{mult}(P, \mathbf{a}) = m$, iff $m$ is the largest integer such that for all $\mathbf{e}$ with $|\mathbf{e}| < m$ we have $\overline{\partial}^{(\mathbf{e})}(P)(\mathbf{a}) = 0$. If no such largest $m$ exists (i.e., $P$ is identically zero), then $\mathrm{mult}(P, \mathbf{a}) = \infty$.
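For intuition, the univariate case of this definition can be computed directly (a toy example of our own, over the integers):

```python
from math import comb

def hasse_at(coeffs, i, a):
    # i-th Hasse derivative of P(x) = sum_j coeffs[j] x^j, at x = a.
    return sum(comb(j, i) * coeffs[j] * a ** (j - i)
               for j in range(i, len(coeffs)))

def mult(coeffs, a):
    """mult(P, a): largest m such that the Hasse derivatives of all
    orders < m vanish at a (univariate case of Definition 3.3)."""
    m = 0
    while m < len(coeffs) and hasse_at(coeffs, m, a) == 0:
        m += 1
    return m

# P(x) = (x - 2)^3 * (x + 1) = x^4 - 5x^3 + 6x^2 + 4x - 8.
P = [-8, 4, 6, -5, 1]
print(mult(P, 2), mult(P, -1), mult(P, 0))  # 3 1 0
```

As expected, $P$ vanishes to order $3$ at the triple root $x = 2$, to order $1$ at the simple root $x = -1$, and does not vanish at $x = 0$.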
Dvir, Kopparty, Saraf and Sudan proved the following higher order multiplicity version of the classical SchwartzZippel lemma.
Lemma 3.4 (multiplicity SZ lemma [DKSS13, Lemma 2.7]).
Let $\mathbb{F}$ be any field and let $S$ be an arbitrary finite subset of $\mathbb{F}$. Then, for any nonzero $n$-variate polynomial $P \in \mathbb{F}[\mathbf{x}]$ of degree at most $d$,
$$\sum_{\mathbf{a} \in S^n} \mathrm{mult}(P, \mathbf{a}) \leq d \cdot |S|^{n-1}.$$
The above lemma implies the classical SZ lemma, which states that two distinct $n$-variate polynomials of degree at most $d$ cannot agree everywhere on a grid $S^n$ for any set $S$ of size larger than $d$. This in particular tells us that the grid $S^n$ serves as a hitting set for polynomials of degree at most $d$ provided $|S| > d$.
As mentioned before, a multiplicity code over a grid consists of evaluations of the message polynomial, along with its derivatives of all orders less than $s$, at the points of the grid.
Definition 3.5 (multiplicity code).
Let $s, d$ be positive integers, $\mathbb{F}$ a field and $S \subseteq \mathbb{F}$ a nonempty finite subset. The $n$-variate order-$s$ multiplicity code of degree-$d$ polynomials over $\mathbb{F}$ on the grid $S^n$ is defined as follows.
Let $E = \{\mathbf{e} \in \mathbb{Z}_{\geq 0}^n : |\mathbf{e}| < s\}$ and $\Sigma = \mathbb{F}^{E}$. Note that $|E| = \binom{n+s-1}{n}$. The code is over the alphabet $\Sigma$ and has length $|S|^n$ (where the coordinates are indexed by elements of $S^n$).
The code is an $\mathbb{F}$-linear map from the space of degree-$d$ polynomials in $\mathbb{F}[\mathbf{x}]$ to $\Sigma^{S^n}$. The encoding of a polynomial $P$ at a point $\mathbf{a} \in S^n$ is given by
$$\mathrm{Enc}(P)(\mathbf{a}) = \left( \overline{\partial}^{(\mathbf{e})}(P)(\mathbf{a}) \right)_{\mathbf{e} \in E}.$$
Remark 3.6.

The distance of the code is exactly $\delta = 1 - \frac{d}{s|S|}$ and the rate of the code is $\binom{n+d}{n} \big/ \left( \binom{n+s-1}{n} \cdot |S|^n \right)$.

As mentioned in the introduction, we can also view the encoding by clubbing together partial derivatives of the same order. Thus, the encoding of $P$ at a point $\mathbf{a}$ is the tuple $(\mathcal{P}_0(P), \ldots, \mathcal{P}_{s-1}(P))$ evaluated at $\mathbf{x} = \mathbf{a}$, where $\mathcal{P}_i(P) = \sum_{|\mathbf{e}| = i} \overline{\partial}^{(\mathbf{e})}(P) \cdot \mathbf{z}^{\mathbf{e}}$.

We think of the dimension $n$ and the multiplicity parameter $s$ as constants, with $s$ much larger than $n$, and the degree $d$ much larger than $s$. The precise tradeoffs will be alluded to when we need to set parameters in our proofs.
3.4 Computing over polynomial rings
In this section, we state a few basic results that show how to perform algebraic operations over polynomial rings.
The following lemma, proved via an easy application of polynomial interpolation, lets us construct the coefficient representation of a polynomial given an arithmetic circuit for it.
Lemma 3.7.
There exists a deterministic algorithm that takes as input an arithmetic circuit computing an $n$-variate polynomial of degree at most $d$ and outputs the coefficient vector of the polynomial, using a number of field operations that is polynomial in the circuit size and the number of coefficients.
Proof.
From Lemma 3.4, we know that no two distinct degree-$d$ polynomials can agree everywhere on a grid $T^n$ when $|T| > d$. So, we pick an arbitrary subset $T$ of $\mathbb{F}$ of size $d + 1$ and evaluate the circuit at all $(d+1)^n$ points of the grid $T^n$. Now, given these evaluations, we set up a linear system in the coefficients of the polynomial where, for every point $\mathbf{a}$ in the grid, we have a constraint saying that the polynomial evaluates at $\mathbf{a}$ to the value computed by the circuit. We know that this system has a solution. Furthermore, from Lemma 3.4, we know that this system has a unique solution.
Solving this system gives us the coefficient vector of the polynomial and requires a polynomially bounded number of additional field operations. ∎
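A univariate toy version of this interpolation argument can be sketched as follows; the Gaussian elimination and the example polynomial are our own illustrative choices.

```python
from fractions import Fraction

def coeffs_from_evaluations(f, d):
    """Recover the coefficient vector of a degree-<=d univariate
    polynomial from black-box evaluations at 0..d, by solving the
    (Vandermonde) linear system: a univariate toy of Lemma 3.7."""
    pts = list(range(d + 1))
    # Augmented matrix for sum_j c_j * x^j = f(x) at each point x.
    M = [[Fraction(x ** j) for j in range(d + 1)] + [Fraction(f(x))]
         for x in pts]
    n = d + 1
    # Gauss-Jordan elimination; the Vandermonde matrix is invertible.
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# "Circuit" computing P(x) = (x + 1)^2 = 1 + 2x + x^2.
print(coeffs_from_evaluations(lambda x: (x + 1) ** 2, 2))
```

The uniqueness of the recovered coefficients is exactly the $s = 1$, $n = 1$ case of Lemma 3.4: two distinct degree-$d$ polynomials cannot agree on $d + 1$ points.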
The next lemma tells us how to perform linear algebra over the polynomial ring $\mathbb{F}[\mathbf{z}]$.
Lemma 3.8 (linear algebra over polynomial rings).
Let $M$ be a matrix with more columns than rows, each of whose entries is a polynomial of bounded degree in the variables $\mathbf{z}$. Then, there is an efficient deterministic algorithm which takes as input the coefficient vectors of the entries of $M$ and outputs a nonzero vector $\mathbf{v}$ over $\mathbb{F}[\mathbf{z}]$ such that $M \mathbf{v} = 0$. Moreover, every entry of $\mathbf{v}$ is a polynomial of bounded degree in $\mathbf{z}$.
Proof.
As a first step, we reduce this to the problem of solving a linear system of the form , where and have entries in of degree at most , and is a square matrix of dimension at most , which is nonsingular. At this point, we can just apply Cramer’s rule to find a solution of this system.
Since , the rank of over is at most . Thus, there is a square submatrix of such that is a nonzero polynomial of degree at most in . For a hitting set of polynomials of degree at most on variables over , we consider the set of matrices . From the guarantees of the hitting set, we know that there is a such that is of rank equal to . Let be such that the rank of over is maximum among all matrices in the set . Moreover, let be a submatrix of such that equals . From Lemma 3.4, there is an explicit hitting set of size at most . Thus, we can find of rank equal to the rank of with at most field operations over . Without loss of generality, let us assume that is the top left submatrix of of size . Clearly, the st column of is linearly dependent on the first columns of over the field . In other words, the linear system given by
where , has a solution in . Moreover, for every solution of this system, where , the dimensional vector is in the kernel of . Also, since is a homogeneous linear system, for any nonzero polynomial , continues to be a nonzero vector in the kernel of .
Since is nonsingular, is a solution to this system. Moreover, by Cramer’s rule, , where is the adjugate matrix of and is its determinant. Since every entry of is a polynomial in of degree at most , we get a solution of the form where each is a polynomial in of degree at most . Clearing the denominators by scaling by , we get that the nonzero dimensional vector is in the kernel of .
Moreover, using the fact that the determinant polynomial has a polynomial size efficiently constructible circuit, and Lemma 3.7, we can output this vector, with each entry being a list of coefficients in in time via an efficient deterministic algorithm. ∎
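The Cramer’s-rule computation in this proof can be illustrated on a toy instance. The sketch below uses a hypothetical 2×3 matrix over Q[z] (not the matrix of the lemma): it takes a nonsingular 2×2 submatrix, solves against the remaining column via the adjugate, and clears denominators by scaling with the determinant, yielding a kernel vector with polynomial entries. Polynomials in z are represented as coefficient lists:

```python
# Polynomials in z represented as coefficient lists (index = degree).
def pmul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def pneg(p):
    return [-a for a in p]

def is_zero(p):
    return all(a == 0 for a in p)

# Hypothetical E, 2 x 3 with entries in Q[z]: E = [[1, z, z^2], [z, 1, 0]].
E = [[[1], [0, 1], [0, 0, 1]],
     [[0, 1], [1], [0]]]
# A = first two columns (nonsingular over Q(z)), b = last column.
a11, a12 = E[0][0], E[0][1]
a21, a22 = E[1][0], E[1][1]
b1, b2 = E[0][2], E[1][2]
# Cramer's rule: A u = b has u = adj(A) b / det(A); scaling by det(A)
# gives the polynomial kernel vector v = (adj(A) b, -det(A)) of E.
det = padd(pmul(a11, a22), pneg(pmul(a12, a21)))   # det(A) = 1 - z^2
u1 = padd(pmul(a22, b1), pneg(pmul(a12, b2)))      # (adj(A) b)_1
u2 = padd(pmul(a11, b2), pneg(pmul(a21, b1)))      # (adj(A) b)_2
v = [u1, u2, pneg(det)]
# Verify E v = 0 in Q[z]: each row dot product is the zero polynomial.
residuals = [padd(padd(pmul(row[0], v[0]), pmul(row[1], v[1])),
                  pmul(row[2], v[2]))
             for row in E]
```

Note that the entries of v have degree bounded by the degree of the determinant, matching the degree bound claimed in the lemma.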
4 List decoding the multivariate multiplicity code
In this section, we prove Theorem 1.1. We follow the outline of the proof described in Section 2. We start with the interpolation step.
4.1 Viewing the encoding as a formal power series
The message space is the space of variate polynomials of degree at most over . In the standard encoding, we have access to evaluations of the polynomial and all its derivatives of order up to on all points on a grid .
For our proof, it will be helpful to group the derivatives of the same order together.
Definition 4.1.
Let be a polynomial. Then, for any , is defined as
So, we have a distinct monomial in attached to each of the derivatives. The precise form of the monomial in is not important; all that we will use is that these monomials are linearly independent over the underlying field, that they do not have very high degree, and that there are not too many variables in .
Now, we think of the encoding of as giving us the evaluation of the tuple of polynomials as takes values in .
Note that is a homogeneous polynomial of degree equal to in .
4.2 The operator
We will need to compute the Hasse derivative of with respect to , i.e., . From the definition of , we have
The key point to note is that the Hasse derivative of with respect to can be read off the coefficients of .
This motivates the following definition. Consider a tuple , where for each , is a homogeneous polynomial of degree in . For any , and such that , we define
Thus, for , we have
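For intuition, here is a minimal illustrative implementation of the univariate Hasse derivative, the notion of derivative used throughout: the i-th Hasse derivative maps x^j to C(j, i) x^{j-i}, i.e., it is the usual i-th derivative divided by i!, which keeps it well behaved over fields of small characteristic. Everything below is a toy sketch over the integers:

```python
# Toy sketch of the univariate Hasse derivative. A polynomial
# p(x) = sum_j c_j x^j is a coefficient list (index = degree); the i-th
# Hasse derivative has coefficient C(j, i) * c_j for x^{j-i}.
from math import comb

def hasse_derivative(p, i):
    """i-th Hasse derivative of p, with p given as a coefficient list."""
    return [comb(j, i) * c for j, c in enumerate(p)][i:]

# Example: for p(x) = x^3, the 2nd Hasse derivative is C(3, 2) x = 3x,
# whereas the usual 2nd derivative would be 6x = 2! * 3x.
p = [0, 0, 0, 1]
d2 = hasse_derivative(p, 2)
```

Over a field of characteristic 2, for instance, the usual 2nd derivative of x^3 vanishes (6x = 0) while the Hasse derivative 3x does not, which is why multiplicity codes are defined via Hasse derivatives.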
4.3 Interpolation step
Let be the received word. Thus, we are given a collection of tuples of polynomials for every , where each is a homogeneous polynomial of degree in . From the earlier definition of , given such a , we have for every and with .
Lemma 4.2.
Let and be constants. For every natural number , and , there is a nonzero polynomial such that

For every , the degree of each is at most .

For every and every such that , , where
Here, means that dominates coordinate-wise.
Moreover, the coefficients of are polynomials in of degree at most , and such a can be deterministically constructed by using at most operations over the field .
Proof.
We start by showing the existence of a polynomial with the appropriate degree constraints, followed by an analysis of the running time.
Existence of .
We view the above constraints as a system of linear equations over the field , where the variables are the coefficients of . The number of homogeneous linear constraints is and the number of variables is .
By using the fact that is much smaller than , and a crude approximation of the binomial coefficients, we have and . Plugging in the value of , we get , which is clearly greater than the number of constraints. Hence, there is a nonzero solution, where the coefficients of the polynomial are from the field , i.e., are rational functions in .
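The existence argument above is an instance of the standard fact that a homogeneous linear system with more variables than constraints has a nonzero solution. A toy numerical check of this fact, with hypothetical dimensions unrelated to the lemma's parameters:

```python
# Toy check: a homogeneous system M y = 0 with fewer constraints than
# variables always has a nonzero solution. (Dimensions are illustrative.)
import numpy as np

rng = np.random.default_rng(0)
constraints, variables = 4, 7          # fewer constraints than variables
M = rng.integers(-5, 6, size=(constraints, variables)).astype(float)

# dim ker(M) >= variables - rank(M) >= variables - constraints >= 1, so
# the right-singular vectors beyond index rank(M) span the kernel; any of
# them is a nonzero solution of M y = 0.
_, s, Vt = np.linalg.svd(M)
null_space = Vt[constraints:]          # rank(M) <= constraints
y = null_space[0]                      # a unit-norm kernel vector
residual = np.linalg.norm(M @ y)
```

In the lemma, the same counting is done over the field of rational functions, with the coefficients of the unknown polynomial playing the role of y.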
Next we analyze the degree of these coefficients and show that we can recover such a efficiently, with the appropriate degree bounds.
The running time.
For the running time, we recall that each is a polynomial of degree at most in the variables. As a consequence, observe that the linear system we have for the coefficients of is of the form , where is a matrix with dimension at most over the ring , and every entry of is a polynomial in of degree at most . From Lemma 3.8, we get that we can find a nonzero solution in using at most field operations over . Moreover, each of the coordinates of this output vector is a polynomial of degree at most in . ∎
Going forward, we work with the polynomial and the degree parameter as set in Lemma 4.2.
4.4 Close enough codewords satisfy the equation
We now show that every polynomial of degree at most whose encoding is close enough to the received word satisfies the equation in some sense.
Lemma 4.3.
If is a degree polynomial such that the number of which satisfy
is at least , then is identically zero as a polynomial in .
Proof.
Define the polynomial as follows
is a polynomial in of degree at most over the field . Whenever satisfies that