Codes from symmetric polynomials

We define and study a class of Reed-Muller type error-correcting codes obtained from elementary symmetric functions in finitely many variables. We determine the code parameters and higher weight spectra in the simplest cases.




1. Introduction

Over the last decades, good examples of error-correcting codes have been constructed using algebraic-geometric techniques. The codes constructed this way are linear codes over a finite field, obtained by evaluating each member of a finite-dimensional vector space of functions at a finite set of points, all lying in an affine or a projective space over the same field. Examples are simplex codes, Reed-Muller codes ([9]), algebraic-geometric codes from a curve (Goppa codes) ([4]) or a higher-dimensional variety ([2]), Grassmann codes ([8]), and codes where the points in question represent symmetric or skew-symmetric matrices.


Having defined such codes, it is imperative to look for their parameters, such as the dimension, minimum distance, weight distribution, and generalized Hamming weights. These questions are often related to questions that are interesting from the perspective of algebraic geometry, number theory, and various branches of discrete mathematics. For instance, one checks easily that determining the minimum distance of a code defined using the methods described above is equivalent to determining the maximum possible number of zeroes that a function (not vanishing identically on the evaluation set) may have among the evaluation points.

In this paper we study a class of codes motivated by the Reed-Muller codes. While defining a Reed-Muller code, one evaluates the set of all reduced polynomials of degree bounded above by a given quantity on the whole affine space. Instead, here we consider a subspace of the space of symmetric polynomials and evaluate its members at points of the affine space that have pairwise distinct coordinates. As it turns out, the relative minimum distance of our codes is the same as that of the Reed-Muller codes. However, the relative dimension of the code is not as good. To remedy this, we introduce a modified family of codes that has the same relative minimum distance but a better rate. We also show that, like the Reed-Muller codes, the new codes are generated by minimum weight codewords. This property, in particular, makes the duals of the new codes useful.

We remark that this study has been motivated by the study of properties of rational normal curves as arcs and the possibility of extending Reed-Solomon codes to MDS codes of greater length. However, we do not address these issues, as they are beyond the scope of this paper.

This article is organized as follows: In Section 2, we study the number of points with pairwise distinct coordinates over a finite field that are zeroes of symmetric polynomials given by linear combinations of elementary symmetric polynomials. In Section 3, we introduce the new family of codes and study their properties, such as their dimension, minimum weight and minimum weight codewords. In Section 4, we derive upper bounds on the generalized Hamming weights of the codes. In Section 5, we work with the codes that arise from symmetric polynomials in two variables and prove several results, including their generalized Hamming weights, weight distributions and higher weight spectra. In Section 6, we concentrate on trivariate symmetric polynomials over a finite field for the sake of illustrating the difficulties in obtaining the parameters in higher dimensions.

2. Symmetric polynomials and their distinguished zeroes

Let be a field. In most cases, we shall restrict our attention to the case when , i.e. is a finite field with elements where is a prime power. For a positive integer and a nonnegative integer , we denote by the -th elementary symmetric polynomial in variables . It is well known that any symmetric polynomial can be written as an algebraic expression in . However, in this article we are interested in symmetric polynomials that are -linear combinations of elementary symmetric polynomials. We denote by the -linear subspace generated by the elementary symmetric polynomials . Note that .
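For illustration, the elementary symmetric polynomials can be evaluated directly from their definition as sums of products over subsets of the coordinates. The following is a sketch in Python over a prime field (so that arithmetic is reduction mod q); the function name elem_sym is ours:

```python
from itertools import combinations
from math import prod

def elem_sym(point, i, q):
    """Evaluate the i-th elementary symmetric polynomial e_i at `point`
    over the prime field F_q (arithmetic is reduction mod q)."""
    if i == 0:
        return 1 % q  # e_0 is the constant polynomial 1
    return sum(prod(s) for s in combinations(point, i)) % q
```

One can use this, for instance, to check numerically the classical generating identity expressing the product of the linear factors t + x_i in terms of the elementary symmetric polynomials.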

For a given polynomial , we denote by the set of zeroes of in , the -dimensional affine space over . A point is said to be distinguished if whenever . In this paper, we are interested in the distinguished zeroes of symmetric polynomials described in the last paragraph. For ease of reference, we shall denote by the set of all distinguished points of . Also, given a polynomial , we denote by the set of distinguished zeroes of in .
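For small parameters the distinguished points can simply be enumerated. The following brute-force sketch assumes a prime q, so that the field elements are represented by the residues 0, ..., q-1:

```python
from itertools import product

def distinguished_points(q, n):
    # points of the affine n-space over F_q whose coordinates are pairwise distinct
    return [p for p in product(range(q), repeat=n) if len(set(p)) == n]
```

There are q(q-1)...(q-n+1) such points, in agreement with the arrangement count introduced below.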

Next we introduce a combinatorial notation for ease of reading. For positive integers we denote by the number of possible arrangements of objects taken from distinct objects. More precisely,
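As a small computational aid (a sketch; the function name P is ours), the falling-factorial count and the recursion behind the induction on the number of variables used below can be written as:

```python
from math import factorial

def P(m, n):
    # number of arrangements (ordered selections) of n objects from m distinct objects
    return factorial(m) // factorial(m - n)
```

Fixing the first coordinate and counting the remaining ones gives the identity P(m, n) = m * P(m - 1, n - 1).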

It follows trivially that . We are interested in analyzing the number of distinguished zeroes of a symmetric polynomial that is a linear combination of the elementary symmetric polynomials on certain finite grids in . Before we state our main result in this direction, let us make a few remarks on such polynomials. Let be given by


where . It can be verified readily that


For simplicity, we shall write



We may readily observe that a polynomial as in equation (1) can be classified into two types:

Type I: and are linearly dependent. In this case, there exists such that

If , then is a constant polynomial. On the other hand, if , then

As a consequence, if is of Type I, then .

Type II: and are linearly independent. It is not hard to verify that in this case is absolutely irreducible, i.e. is irreducible in an algebraic closure of .

Note that, if we identify a nonzero polynomial as in (1) with a point in a projective space of dimension over the field , then the polynomials of Type I correspond (up to multiplication by a nonzero element of ) to the -rational points of the rational normal curve in . We are now ready to state the first main result of this article.

Theorem 2.1.

Let be a positive integer and be a finite subset of with . If is a nonzero symmetric polynomial as in (1), then


This bound is attained if and only if is a nonconstant Type I polynomial given by

for some and . Moreover, if is non-zero and not of the above type, then


We prove the inequality (4) by induction on . Suppose that . Then and the assertion follows trivially. Suppose that the assertion is true for all where and . We distinguish two cases:

Case I: is of Type I. In this case, we may write

for some . Note that if and only if and for some . Consequently,

Case II: is of Type II. Write as in equation (3). Since and are linearly independent, for every , the polynomial is a nonzero symmetric polynomial that is a linear combination of the elementary symmetric polynomials in variables, and are linearly independent elements of . Using the induction hypothesis, we obtain

This completes the proof. ∎

We now apply the result to the particular case when to get the following corollary.

Corollary 2.2.

Let be as in (1). If and , then . Moreover, the equality holds if and only if is of Type I.


Follows trivially from Theorem 2.1. ∎

Having determined the maximum number of distinguished zeroes of a polynomial as in equation (1), it is important to address the following question.

Question 2.3.

Given as in (1), what are the possible numbers of distinguished zeroes in that may admit?

One can readily note that is always divisible by . Furthermore, if is a nonzero constant polynomial, then it has no zeroes, while if is the zero polynomial, then it has distinguished zeroes. Moreover, thanks to Corollary 2.2, if is nonzero and of Type I, then it has distinguished zeroes. We remark that the above question is equivalent to determining the weight distribution of the code defined in Section 3. In general, this is a hard question to answer. Here we completely work out the case when and leave the general question open for further research.

Theorem 2.4.


Let be odd, , and be given by . If , then .


If , then . Conversely, it is clear from Corollary 2.2 that if , then . So we may assume that , i.e. . We divide the proof into several cases:

  1. Suppose . If , then is a nonzero constant polynomial which does not have any zeroes. So we may assume that . Then the polynomial has distinguished zeroes.

  2. Suppose . We may write

    By using the change of coordinates and , and we get a new polynomial

    It is clear that there is a one-to-one correspondence between the sets of distinguished zeroes of and . This leads us to analyze the distinguished zeroes of the polynomial . Note that the number of distinguished zeroes of depends on the quantity .

    1. Suppose . Then the polynomial has exactly many distinguished zeroes.

    2. Suppose and is a square in . Note that the polynomial has zeroes, and out of them two are nondistinguished. Consequently, such a polynomial has distinguished zeroes.

    3. Suppose and is not a square in . In this case, all the zeroes of are distinguished. As a consequence, the number of distinguished zeroes of such a polynomial is .

This completes the proof. ∎
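The case analysis above can be checked by brute force for a small odd prime, say q = 5 (a sketch; the helper name dist_zeroes is ours). The cases in the proof suggest that the possible counts of distinguished zeroes are 0, q-3, q-1 and 2(q-1):

```python
from itertools import product

def dist_zeroes(q, a0, a1, a2):
    # distinguished zeroes of a0 + a1*(x + y) + a2*x*y over the prime field F_q
    return sum(1 for x, y in product(range(q), repeat=2)
               if x != y and (a0 + a1 * (x + y) + a2 * x * y) % q == 0)

q = 5
# set of counts attained by the nonzero polynomials
counts = {dist_zeroes(q, *a) for a in product(range(q), repeat=3) if any(a)}
```

For q = 5 this yields the four values 0, 2, 4 and 8.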

Remark 2.5.

It is not very difficult to count the number of polynomials that have and distinguished zeroes. It is trivial to see that there are nonzero constant polynomials admitting no zeroes and exactly one polynomial, namely the zero polynomial, admitting distinguished zeroes. In order to count the number of polynomials , or equivalently, the tuples satisfying the conditions that and that is a nonzero square in , we note that there are possible values for , and for each of these choices, the choices of (namely choices for a nonzero value of and choices for ) determine uniquely. This results in a total of polynomials admitting zeroes. The computation of the numbers of polynomials with the other possible numbers of distinguished zeroes is left to the reader. The complete picture is depicted in Table 1.

Number of distinguished zeroes | Number of polynomials
Table 1. Number of polynomials with given number of distinguished zeroes when is odd

We remark that, in the particular case when , the nonzero constant polynomials, as well as the polynomials satisfying the conditions that and that is a nonzero square in , admit no distinguished zeroes. We now study the case when is even. The proof is essentially similar, but the difference lies in the fact that every element of is a square in . We include the complete proof for the reader's convenience.

Theorem 2.6.

Let be even, , and be given by . If , then .


If , then . As in Theorem 2.4, it is clear from Corollary 2.2 that if , then . So we may assume that , i.e. . We again divide the proof into several cases:

  1. Suppose . If , then is a nonzero constant polynomial which does not have any zeroes. So we may assume that .

    1. If , then all the zeroes of are distinguished. Consequently, .

    2. Otherwise, the zeroes of the polynomial are not distinguished. Thus .

  2. Suppose . As in Theorem 2.4, after a suitable change of coordinates, we get a polynomial

    with .

    1. Suppose . Then the polynomial has exactly many distinguished zeroes.

    2. Suppose . Since is even, is a square in . Note that the polynomial has zeroes, and out of them only one is nondistinguished. Consequently, such a polynomial has distinguished zeroes.

This completes the proof. ∎
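The even case can be checked similarly in the smallest nontrivial instance q = 4. The sketch below hard-codes a multiplication table for GF(4), with elements encoded as 0, 1, 2 = a and 3 = a + 1, where a^2 = a + 1 and addition is bitwise XOR; the helper name dist_zeroes_gf4 is ours. The cases in the proof suggest the possible counts are 0, q-2, q and 2(q-1):

```python
from itertools import product

# multiplication table of GF(4) = F_2[a]/(a^2 + a + 1), elements encoded as 0..3
MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]

def dist_zeroes_gf4(a0, a1, a2):
    # distinguished zeroes of a0 + a1*(x + y) + a2*x*y over GF(4); addition is XOR
    return sum(1 for x, y in product(range(4), repeat=2)
               if x != y and (a0 ^ MUL[a1][x ^ y] ^ MUL[a2][MUL[x][y]]) == 0)

counts = {dist_zeroes_gf4(*a) for a in product(range(4), repeat=3) if any(a)}
```

For q = 4 this yields the four values 0, 2, 4 and 6.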

Number of distinguished zeroes | Number of polynomials
Table 2. Number of polynomials with given number of distinguished zeroes when is even

Again, it is not very difficult to compute the number of polynomials that admit a given number of distinguished zeroes in the case when is even. We leave the explicit computations to the reader, but present the data in Table 2.

3. Reed-Muller type codes from symmetric polynomials

Throughout this section, we will denote by a finite field with elements where is a power of a prime number. As in Section 2, we denote by the vector space consisting of all symmetric polynomials as in (1). As noted before, is a vector space of dimension over . Let .

Definition 3.1.

We fix an ordering of elements in . Define an evaluation map

It is readily seen that is a linear map, and consequently the image of is a code.

We discuss some properties of this code in the following proposition:

Proposition 3.2.

If , then the code is a nondegenerate code, where , and . Furthermore, the code is generated by minimum weight codewords.


The statement on the length of the code is trivial, while the fact that the code is nondegenerate follows readily by observing that . To show that is of dimension , it is enough to show that the map is injective. To this end, let with . Then . But from Corollary 2.2, we see that, if , then . Since , we have . This implies . Consequently, the map is injective. The assertion on the minimum distance follows from Corollary 2.2. Moreover, it is clear from the last assertion of Corollary 2.2 that the minimum weight codewords of are given by where is a Type I polynomial. Thus, to show that is generated by minimum weight codewords, it is now enough to prove that is spanned by a set of Type I polynomials. Since , we may choose that are distinct. For each , we define

Since are distinct, it follows from the Vandermonde determinant formula that are linearly independent. Since , they span the vector space . This completes the proof. ∎
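The parameters asserted in Proposition 3.2 can be verified by brute force in a small instance, say q = 5 and n = 2 (a sketch over a prime field; the names pts, G and weight are ours). Here the length is 5 * 4 = 20, the dimension is n + 1 = 3, and the minimum distance is 20 - 2(q - 1) = 12:

```python
from itertools import combinations, product
from math import prod

q, n = 5, 2
# distinguished points of the affine plane over F_5
pts = [p for p in product(range(q), repeat=n) if len(set(p)) == n]

def e(p, i):
    # i-th elementary symmetric polynomial evaluated at p over F_q
    return 1 if i == 0 else sum(prod(s) for s in combinations(p, i)) % q

# generator matrix: row i is the evaluation of e_i at all distinguished points
G = [[e(p, i) for p in pts] for i in range(n + 1)]

def weight(a):
    # Hamming weight of the codeword with message (a_0, ..., a_n)
    return sum(sum(ai * gi for ai, gi in zip(a, col)) % q != 0 for col in zip(*G))

# minimum distance by exhausting all nonzero messages
d = min(weight(a) for a in product(range(q), repeat=n + 1) if any(a))
```

Since the minimum weight is positive, the evaluation map is injective and the dimension is indeed n + 1.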

Remark 3.3.

We note that the relative minimum distance of is the same as that of the generalized Reed-Muller codes of order .

The code is obtained by evaluating each of the functions in at the points of . But the points of constitute a disjoint union of -orbits, each of cardinality , where the symmetric group in letters acts freely by permuting the coordinates. This motivates us to define a code of smaller length by constructing a smaller evaluation set, say , consisting of one point from each of the orbits mentioned above. Again we fix an ordering of the elements of , say , where .
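A convenient choice of orbit representatives is the set of strictly increasing tuples (a sketch for q = 5, n = 2; the name reps is ours). Since a symmetric polynomial is constant on each orbit, every codeword weight simply drops by the factor n!, so the relative minimum distance is unchanged:

```python
from itertools import combinations

q, n = 5, 2
# one representative per S_n-orbit of distinguished points: strictly increasing tuples
reps = list(combinations(range(q), n))
```

There are C(q, n) = P(q, n)/n! representatives, so the shortened code here has length 10 instead of 20.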

We now consider the restriction of the evaluation map, still denoted by :

Let denote the image of under the map . The following proposition follows readily from Proposition 3.2.

Proposition 3.4.

If , then is a nondegenerate linear code where , and .


The assertions on the length and dimension are obtained as in the proof of Proposition 3.2. The assertion on the minimum distance is deduced from Corollary 2.2 and the fact that the weight of any codeword is given by . ∎

4. Generalized Hamming weights

Ever since their introduction by V. Wei in [10], the computation of the generalized Hamming weights of various codes has been at the center of interest of many mathematicians and coding theorists. The study of the generalized Hamming weights of several evaluation codes has paved the way for a number of research articles, such as [1, 5, 6], among others.

In this section, we derive some natural upper bounds on the generalized Hamming weights of the codes and . At the outset, we remark that it is enough to derive any parameter related to the Hamming weights of codewords for one of the two codes. Since the codes are somewhat more natural to work with, we choose to restrict our attention to them.

Proposition 4.1.

Fix positive integers and denote by the -th generalized Hamming weight of . We have


Since , there exist distinct elements . For , we consider the polynomials

Note that are linearly independent and consequently span an -dimensional subspace, say , of . It follows that

where, as usual, for any subspace ,

Now, an element if and only if for each , there exists such that . A simple counting argument now completes the proof. ∎
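For the small example q = 5, n = 2, the first two generalized Hamming weights of the orbit code can be computed by brute force (a sketch; the helper names cw and independent are ours). The results are consistent with the upper bounds of Proposition 4.1 and with Proposition 4.4 below, which pins down the two largest weights:

```python
from itertools import combinations, product

q = 5
reps = list(combinations(range(q), 2))  # orbit representatives (x < y), length 10

def cw(a):
    # codeword of the message (a0, a1, a2): evaluate a0 + a1*(x + y) + a2*x*y
    return [(a[0] + a[1] * (x + y) + a[2] * x * y) % q for x, y in reps]

msgs = [a for a in product(range(q), repeat=3) if any(a)]

# first generalized Hamming weight = minimum distance
d1 = min(sum(c != 0 for c in cw(a)) for a in msgs)

def independent(a, b):
    # linear independence over F_5, tested via the 2x2 minors
    return any((a[i] * b[j] - a[j] * b[i]) % q for i in range(3) for j in range(i + 1, 3))

# second generalized Hamming weight: minimum support weight of a 2-dim subcode,
# i.e. the number of positions where not both basis codewords vanish
d2 = min(sum(u != 0 or v != 0 for u, v in zip(cw(a), cw(b)))
         for a in msgs for b in msgs if independent(a, b))
```

Here d1 = 6, d2 = 9 = length - 1, and the third weight is trivially the length 10 since the code is nondegenerate.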

Remark 4.2.

We note that the determination of the -th generalized Hamming weight of (resp. ) is equivalent to computing the maximum number of common zeroes of linearly independent elements of in (resp. ). It follows trivially that . The following corollary is now immediate:

Corollary 4.3.

The following proposition shows that the bounds obtained in Proposition 4.1 are exact for the largest two values of .

Proposition 4.4.

We have

  1. .


Part (a) follows trivially, since and hence is a nondegenerate code. We prove part (b) for the code .

A generator matrix for is a parity-check matrix for its dual code. Such a matrix can be formed by taking as entry the value of at point number in , for some fixed order of the points in . Another way to put it is that entry is the value of at a chosen point in orbit number of in , for some fixed order of the orbits. Any two columns of this matrix are equal if and only if they are equal up to a nonzero multiplicative constant, because their first entries are the evaluations of the constant polynomial and hence all equal and nonzero. This observation immediately shows that no column of is zero. Moreover, any two columns are different, because the elementary symmetric functions separate the orbits of on . (If

then the are unique up to order, since is a UFD.) Hence no two columns are parallel vectors either (i.e. no two columns are equal up to a nonzero multiplicative constant). Hence the minimum distance of the dual code of is at least . By Wei duality

This completes the proof. ∎

Propositions 3.4 and 4.4 give all the generalized Hamming weights of and in the case . If , then fills the whole ambient space , and everything is trivial. It remains a challenge, though, to give good results in the intermediate cases .

5. The case .

As it is clear from the work done in previous sections, we are interested in computing the basic parameters such as length, dimension, minimum distance, generalized Hamming weights and the weight distributions for the codes and . In this section, we completely determine these parameters for the codes when . To begin with, we derive from Proposition 3.2 that is an code, where

Furthermore, it follows from Propositions 3.4 and 4.4 that

where denote the first, second and third generalized Hamming weights for the code . We now proceed to determine the weight distribution for the code . To this end we introduce the following notation:

Definition 5.1.

Let and be integers satisfying and . Define

  1. the number of codewords of of Hamming weight .

  2. the number of -dimensional subcodes of of support weight .

Let be a codeword. Then for some . It follows that is a codeword of Hamming weight if and only if . One can now readily compute the values of from Tables 1 and 2 for all values of . We have the following results:

Proposition 5.2.

If is odd, and , then we have

We remark that for , we have and .
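The weight distribution for q = 5 can be recovered by brute force over all q^3 coefficient tuples (a sketch; the zero codeword is included, and by Theorem 2.1 the minimum-weight codewords are exactly the evaluations of the q(q - 1) = 20 nonconstant Type I polynomials):

```python
from collections import Counter
from itertools import combinations, product

q = 5
reps = list(combinations(range(q), 2))  # orbit representatives, length C(5, 2) = 10

def weight(a):
    # Hamming weight of the codeword of the message (a0, a1, a2)
    return sum((a[0] + a[1] * (x + y) + a[2] * x * y) % q != 0 for x, y in reps)

# weight enumerator: A[w] = number of codewords of weight w (zero codeword included)
A = Counter(weight(a) for a in product(range(q), repeat=3))
```

The counts sum to q^3 = 125 codewords, with a unique codeword of weight 0 and exactly 20 codewords of minimum weight 6.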

Proposition 5.3.

If is even, and , then we have

We now turn our attention to computing for all values of and for the code . To this end, we have the following result:

Proposition 5.4.

For and we have


The assertions concerning the cases and are clear. To prove the claims concerning the case , we must analyze the possible numbers of distinguished points in the intersection of two curves given by , where and are linearly independent. Suppose that

We claim that and have no common factors. To see this, first note that is not a nonzero constant multiple of , since they are linearly independent. However, if has a factor of degree one, then for some . The fact that and have a common factor now readily implies that for some . This is a contradiction. Now the projective closures of the zero sets and are given by homogeneous polynomials and of degree , namely

By Bezout’s theorem, the projective curves given by and intersect in exactly points over the algebraic closure, counting multiplicities. We also observe that they have two points in common on the line , namely and . Hence they have at most points in common in the affine space . To this end, we observe that if