Squares of Matrix-product Codes

The component-wise or Schur product C*C' of two linear error-correcting codes C and C' over a finite field is the linear code spanned by all component-wise products of a codeword in C with a codeword in C'. When C=C', we call the product the square of C and denote it C^*2. Motivated by several applications of squares of linear codes in the area of cryptography, in this paper we study squares of so-called matrix-product codes, a general construction that allows one to obtain new longer codes from several "constituent" codes. We show that in many cases we can relate the square of a matrix-product code to the squares and products of its constituent codes, which allows us to give bounds on, or even determine, its minimum distance. We consider the well-known (u,u+v)-construction, or Plotkin sum (which is a special case of a matrix-product code), and determine which parameters we can obtain when the constituent codes are certain cyclic codes. In addition, we use the same techniques to study the squares of other matrix-product codes, for example when the defining matrix is Vandermonde (where the minimum distance is, in a certain sense, maximal with respect to matrix-product codes).

1 Introduction

Component-wise or Schur products of linear error-correcting codes have been studied for different purposes during the last decades, from efficient decoding to applications in several different areas within cryptography. Given two linear codes C, C' ⊆ F_q^n of the same length over a finite field F_q, we define the component-wise product of the codes to be the span over F_q of all component-wise products c * c', where c ∈ C and c' ∈ C'.

One of the first applications where component-wise products of codes became relevant concerned error decoding via the notion of error-locating pairs [DK94, P92]. An error-locating pair for a code C is a pair of codes (A, B) with A * B ⊆ C^⊥, and the number of errors the pair is able to correct depends on the dimensions and minimum distances of the codes A and B and their duals. More precisely, it is required that dim(A) > t and d(B^⊥) > t if we should be able to locate t errors.

Later on, the use of component-wise products found several applications in the area of cryptography. For example, some attacks on variants of the McEliece cryptosystem (which relies on the assumption that it is hard to decode a general linear code) use the fact that the dimension of the product C * C tends to be much larger when C is a random code than when C has certain algebraic structure; this can be used to identify algebraic patterns in certain subcodes of the code defining the cryptosystem, see for instance [COT17, PM17]. A different cryptographic problem where products of codes are useful is private information retrieval, where a client can retrieve data from a set of servers holding a coded database in such a way that the servers do not learn what data the client has accessed. In [FGHK17] a private information retrieval protocol based on error-correcting codes was presented, where it is desirable to use two linear codes C and D such that the dimensions of C and D and the minimum distance of the product C * D are simultaneously high.

In this work, however, we are more interested in the application of products of codes to the area of secure multiparty computation. The goal of secure multiparty computation is to design protocols which can be used in the situation where a number of parties, each holding some private input, want to jointly compute the output of a function on those inputs, without at any point any party having to reveal his/her input to anybody. A central component in secure computation protocols is secure multiplication, which different protocols realize in different ways. Several of these protocols require an error-correcting code C whose square C^*2 has large minimum distance, while the additional conditions on C vary across the different protocols.

For example, a well-known class of secure computation protocols [BGW88, CCD88, CDM00] relies on the concept of a strongly multiplicative secret-sharing scheme, formalized in [CDM00]. Such secret-sharing schemes can be constructed from linear codes C, where the number of colluding cheating parties that the protocol can tolerate is governed by the minimum distances d(C^⊥) and d(C^*2), where C^⊥ is the dual code of C. These two minimum distances are therefore desired to be simultaneously high. For more information about secret sharing and multiparty computation, see for instance [CDN15].

Other more recent protocols have the less stringent requirement that dim(C) and d(C^*2) are simultaneously large. This is the case of MiniMac [DZ13], a secure computation protocol to evaluate boolean circuits, and its successor TinyTables [DNNR16]. In those protocols, the cheating parties have a certain probability of being able to disrupt the computation, but this probability is bounded in terms of d(C^*2), meaning that a higher distance of the square gives higher security. On the other hand, a large relative dimension, or rate, of C will reduce the communication cost, so it is desirable to optimize both parameters. A very similar phenomenon occurs in recent work about commitment schemes, which are a building block of many multiparty computation protocols; in fact, when these schemes have a number of additional homomorphic properties and in addition can be composed securely, we can base the entire secure computation protocol on them [FPY18]. Efficient commitment schemes with such properties were constructed in [CDD18] based on binary linear codes, where the multiplicative homomorphic properties again require d(C^*2) to be relatively large (see [CDD18, Section 4]), and the rate of the code is also desired to be large to reduce the communication overhead.

These applications show the importance of finding linear codes C where the minimum distance of the square C^*2 is large relative to the length of the code and where some other parameter (in some cases the dimension, in others the dual distance) is also relatively large. Moreover, it is especially interesting for the applications that the codes are binary, or at least defined over small fields.

Powers of codes, and more generally products, have been studied in several works such as [C17, CCMZ15, MZ15, R13b, R13a, R15] from different perspectives. In [R13b] an analogue of the Singleton bound relating dim(C) and d(C^*2) was established, and in [MZ15] it is shown that Reed-Solomon codes are essentially the only codes which attain this bound, unless some of the parameters are very restricted. However, Reed-Solomon codes come with the drawback that the field size must be larger than or equal to the length of the codes. Therefore, finding asymptotically good codes over a fixed small field has also been studied, where in this case asymptotically good means that both dim(C) and d(C^*2) grow linearly with the length of the code C. In [R13a] the existence of such a family over the binary field was shown, based on recent results on algebraic function fields. However, it seems that most families of codes do not have this property: in fact, despite the well-known fact that random linear codes will, with high probability, be above the Gilbert-Varshamov bound, and hence are asymptotically good in the classical sense, this is not the case when we impose the additional restriction that d(C^*2) is linear in the length, as is shown in [CCMZ15]. The main result in [CCMZ15] implies that for a family of random linear codes, either the codes or their squares will be asymptotically bad.

The asymptotic construction from [R13a], despite being very interesting from the theoretical point of view, has the drawbacks that the asymptotics kick in relatively late and, moreover, that the construction relies on algebraic geometry, which makes it computationally expensive to construct such codes. Motivated by the aforementioned applications to cryptography, [C17] focuses on codes of fixed lengths (but still considerably larger than the size of the field) and constructs cyclic codes with relatively large dimension and minimum distance of their squares. In particular, the parameters of some of these codes are explicitly computed in the binary case.

This provides a limited constellation of parameters that we know are achievable for the triple consisting of the length of C, dim(C), and d(C^*2). It is then interesting to study what other parameters can be attained, and a natural way to do so is to study how the square operation behaves under known procedures in coding theory that allow one to construct new codes from existing codes.

One such construction is matrix-product codes, where several codes can be combined into a new longer code. Matrix-product codes, formalized in [BN01], are a generalization of some previously known code constructions, such as the (u,u+v)-construction, also known as the Plotkin sum. Matrix-product codes have been studied in several works, including [BN01, HLR09, HR10, OS02].

1.1 Results and outline

In this work, we study squares of matrix-product codes. We show that in several cases, the square of a matrix-product code can also be written as a matrix-product code. This allows us to determine new achievable parameters for the squares of codes.

More concretely, we start by introducing matrix-product codes and products of codes in Section 2. Afterwards, in Section 3 we determine the product of two codes when both codes are constructed using the (u,u+v)-construction. In Section 4, we restrict ourselves to squares of codes and exemplify what parameters we can achieve using cyclic codes in the (u,u+v)-construction, in order to compare the parameters with the codes from [C17].

Finally, in Section 5, we consider other constructions of matrix-product codes. In particular, we consider the case where the defining matrix is Vandermonde, which is especially relevant because such matrix-product codes achieve the best possible minimum distance that one can hope for with this matrix-product strategy. We show that the squares of these codes are again matrix-product codes, and if the constituent codes of the original matrix-product code are denoted C_i, then the constituent codes of the square are all of the form ∑ C_i * C_j for some index set. This is especially helpful for determining the parameters if the C_i's are, for example, algebraic geometric codes. We remark that this property also holds for the other constructions we study in this paper, but only when the C_i's are nested. Finally, we also study the squares of a matrix-product construction from [BN01] where we can apply the same proof techniques as in the other constructions.

2 Preliminaries

Let F_q be the finite field with q elements. A linear code C is a subspace of F_q^n. When C has dimension k, we will call it an [n,k] code. A generator matrix for a code C is a k × n matrix whose rows form a basis of C. The Hamming weight of x ∈ F_q^n, denoted w_H(x), is the number of nonzero entries in x, and the Hamming distance between x and y is given by d_H(x,y) = w_H(x − y). By linearity, the minimum Hamming distance taken over all pairs of distinct elements in C is the same as the minimum Hamming weight taken over all nonzero elements in C, and therefore we define the minimum distance of C to be d(C) = min{w_H(c) ∣ c ∈ C, c ≠ 0}. If it is known that d(C) = d (respectively, if we know that d(C) ≥ d), then we call C an [n,k,d] code (resp. an [n,k,≥d] code). We denote by C^⊥ the dual code of C, i.e., the vector space given by all elements x ∈ F_q^n such that x and c are orthogonal with respect to the standard inner product in F_q^n for every c ∈ C. If C is an [n,k] code then C^⊥ is an [n,n−k] code.

We recall the definition and basic properties of matrix-product codes (following [BN01]) and squares of codes.

Definition 2.1 (Matrix-product code):

Let C_1, …, C_s ⊆ F_q^n be linear codes and let A = (a_{ij}) ∈ M_{s×l}(F_q) be a matrix with rank s (implying s ≤ l). Then we define the matrix-product code C = [C_1, …, C_s] · A as the set of all matrix products [c_1, …, c_s] · A, where c_i ∈ C_i are considered as column vectors.

We call A the defining matrix and the C_i's the constituent codes.

We can consider a codeword c = [c_1, …, c_s] · A, in a matrix-product code, as an n × l matrix of the form

\[
c=\begin{bmatrix}
c_{11}a_{11}+c_{12}a_{21}+\cdots+c_{1s}a_{s1} & \cdots & c_{11}a_{1l}+c_{12}a_{2l}+\cdots+c_{1s}a_{sl}\\
\vdots & \ddots & \vdots\\
c_{n1}a_{11}+c_{n2}a_{21}+\cdots+c_{ns}a_{s1} & \cdots & c_{n1}a_{1l}+c_{n2}a_{2l}+\cdots+c_{ns}a_{sl}
\end{bmatrix},
\tag{1}
\]

where we write c_i = (c_{1i}, …, c_{ni}) for the c_i's from the definition. Reading the entries of this matrix in column-major order, we can also consider c as a vector of the form

\[
c=\Bigl(\sum_{i=1}^{s}a_{i1}c_i,\;\ldots,\;\sum_{i=1}^{s}a_{il}c_i\Bigr)\in\mathbb{F}_q^{nl}.
\tag{2}
\]

We sum up some known facts about matrix-product codes in the following proposition.

Proposition 2.2:

Let C_1, …, C_s ⊆ F_q^n be linear codes of dimensions k_1, …, k_s with generator matrices G_1, …, G_s, respectively. Furthermore, let A ∈ M_{s×l}(F_q) be a matrix with rank s and let C = [C_1, …, C_s] · A. Then C is an [nl, k_1 + ⋯ + k_s] linear code, and a generator matrix of C is given by

\[
G=\begin{bmatrix}
a_{11}G_1 & \cdots & a_{1l}G_1\\
\vdots & \ddots & \vdots\\
a_{s1}G_s & \cdots & a_{sl}G_s
\end{bmatrix}.
\]
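As a small illustration (not an example from the paper), the block structure of the generator matrix in Proposition 2.2 is easy to build in the binary case; the constituent codes below are made-up examples.

```python
# A sketch of the generator matrix from Proposition 2.2 over F_2,
# using made-up constituent codes; arithmetic is done modulo 2.
def matrix_product_generator(gens, A):
    rows = []
    for i, G in enumerate(gens):
        for g in G:  # each row of G_i yields one row (a_i1 g | ... | a_il g)
            rows.append([(a_ij * x) % 2 for a_ij in A[i] for x in g])
    return rows

A = [[1, 1], [0, 1]]            # the (u, u+v) defining matrix
G1 = [[1, 0, 1], [0, 1, 1]]     # generator of a [3,2] code
G2 = [[1, 1, 1]]                # generator of the [3,1] repetition code
G = matrix_product_generator([G1, G2], A)
print(len(G), len(G[0]))        # 3 6: k1 + k2 = 3 rows, nl = 6 columns
```

The resulting matrix has the predicted shape [G1 | G1] stacked over [0 | G2].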

We now turn our attention to the minimum distance of C. Denote by A_i the matrix consisting of the first i rows of A and let C_{A_i} be the linear code spanned by the rows of A_i. From [OS02], we have the following result on the minimum distance.

Proposition 2.3:

We make the same assumptions as in Proposition 2.2, and write d_i = d(C_i) and D_i = d(C_{A_i}). Then the minimum distance of the matrix-product code C satisfies

 d(C)≥min{D1d1,D2d2,…,Dsds}. (3)

The following corollary is from [HLR09].

Corollary 2.4:

If we additionally assume that C_s ⊆ C_{s−1} ⊆ ⋯ ⊆ C_1, equality holds in the bound in (3).

The dual of a matrix-product code is also a matrix-product code if we make some assumptions on the matrix A, as was noted in [BN01].

Proposition 2.5:

Let C = [C_1, …, C_s] · A be a matrix-product code. If A is an invertible square matrix, then

 C⊥=[C⊥1,C⊥2,…,C⊥s](A−1)T

Additionally, if J is the s × s matrix given by

\[
J=\begin{bmatrix}
0 & \cdots & 0 & 1\\
0 & \cdots & 1 & 0\\
\vdots & \iddots & \vdots & \vdots\\
1 & \cdots & 0 & 0
\end{bmatrix},
\]

the dual can be described as

 C⊥=[C⊥s,C⊥s−1,…,C⊥1](J(A−1)T).

Notice that, with regard to Proposition 2.3, the last expression is often more useful, since it lists the dual codes in reversed order: when the C_i are nested, the dual codes with smaller minimum distance then appear first, where the factors D_i in the bound (3) are largest.

Now, we turn our attention to products and squares of codes. We denote by x * y the component-wise product of two vectors. That is, if x = (x_1, …, x_n) and y = (y_1, …, y_n), then x * y = (x_1 y_1, …, x_n y_n). With this definition in mind, we define the product of two linear codes.

Definition 2.6 (Component-wise (Schur) products and squares of codes):

Given two linear codes C, C' ⊆ F_q^n, we define their component-wise product, denoted by C * C', as

 C∗C′=⟨{c∗c′∣c∈C,c′∈C′}⟩.

The square of a code C is C^∗2 = C * C.

First note that the length of the product is the same as the length of the original codes. Regarding the other parameters (dimension and minimum distance) we enumerate some known results only in the case of the squares since this will be our primary focus.

If

\[
G=\begin{bmatrix}g_1\\ g_2\\ \vdots\\ g_k\end{bmatrix}
\]

is a generator matrix for C, then {g_i ∗ g_j ∣ 1 ≤ i ≤ j ≤ k} is a generating set for C^∗2. However, it might not be a basis, since some of the vectors might be linearly dependent. If, additionally, a submatrix consisting of k columns of G is the identity, then the set {g_i ∗ g_i ∣ i = 1, …, k} consists of k linearly independent vectors. Since there is always a generator matrix satisfying this, this implies k ≤ dim(C^∗2) ≤ min{n, k(k+1)/2}, where k = dim(C) and n is the length of C.

In most cases, however, d(C^∗2) is much smaller than d(C). For example, the Singleton bound for squares [R13b] states that d(C^∗2) ≤ max{1, n − 2k + 2} (which is much more restrictive than the Singleton bound for C, which states that d(C) ≤ n − k + 1). Additionally, the codes attaining this bound have been characterized in [MZ15], where it was shown that essentially only Reed-Solomon codes, certain direct sums of self-dual codes, and some degenerate codes have this property. Furthermore, it is shown in [CCMZ15] that, taking a random code C of dimension k, the dimension of C^∗2 will with high probability be min{n, k(k+1)/2}. Therefore, C^∗2 is often the whole space, and hence typically d(C^∗2) = 1.
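The bounds k ≤ dim(C^∗2) ≤ min{n, k(k+1)/2} above can be checked numerically; the sketch below (not code from the paper) uses a made-up random binary code in systematic form.

```python
# A quick numeric illustration of k <= dim(C^*2) <= min{n, k(k+1)/2} over F_2.
import itertools, random

def rank_gf2(rows):
    """Gaussian elimination over F_2, rows encoded as bitmask ints."""
    pivots, r = {}, 0
    for row in rows:
        x = sum(b << i for i, b in enumerate(row))
        while x:
            h = x.bit_length() - 1
            if h in pivots:
                x ^= pivots[h]     # cancel the leading bit
            else:
                pivots[h] = x
                r += 1
                break
    return r

def square_dim(G):
    """Dimension of the span of all products g_i * g_j, i <= j."""
    prods = [[a & b for a, b in zip(gi, gj)]
             for gi, gj in itertools.combinations_with_replacement(G, 2)]
    return rank_gf2(prods)

random.seed(1)
k, n = 4, 10
# systematic generator matrix: identity block followed by random columns
G = [[1 if i == j else 0 for j in range(k)] +
     [random.randint(0, 1) for _ in range(n - k)] for i in range(k)]
d2 = square_dim(G)
assert k <= d2 <= min(n, k * (k + 1) // 2)
print("dim of the square:", d2)
```

The systematic form guarantees the lower bound k, exactly as argued above.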

3 The (u,u+v)-Construction

In this section, we will consider one of the most well-known matrix-product codes, namely the (u,u+v)-construction. We obtain this construction when we let

\[
A=\begin{bmatrix}1 & 1\\ 0 & 1\end{bmatrix}\in\mathbb{F}_q^{2\times 2}
\tag{4}
\]

be the defining matrix. Note that if C = [C_1, C_2] · A, then dim(C) = dim(C_1) + dim(C_2) and d(C) = min{2 d(C_1), d(C_2)}. This can easily be deduced from Propositions 2.3 and 2.5 by constructing C^⊥.
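These two facts are easy to confirm by brute force on tiny codes; the example below (an illustration with made-up codes, not from the paper) enumerates all codewords over F_2.

```python
# Brute-force check of dim C = dim C1 + dim C2 and d(C) = min{2 d(C1), d(C2)}
# for the (u, u+v)-construction over F_2, with made-up small codes.
def span(gens, n):
    """All F_2-linear combinations of the generators, as tuples."""
    words = {(0,) * n}
    for g in gens:
        words |= {tuple((a + b) % 2 for a, b in zip(w, g)) for w in words}
    return words

def min_dist(words):
    return min(sum(w) for w in words if any(w))

C1 = span([(1, 0, 1), (0, 1, 1)], 3)   # [3,2,2] code
C2 = span([(1, 1, 1)], 3)              # [3,1,3] repetition code
# (u, u+v): concatenate u with u + v
C = {u + tuple((a + b) % 2 for a, b in zip(u, v)) for u in C1 for v in C2}
print(len(C), min_dist(C))   # 8 3: dimension 2 + 1 = 3, distance min{2*2, 3} = 3
```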

In the following theorem, we will determine the product of two codes C and C' when both codes come from the (u,u+v)-construction. We will use the notation C_1 + C_2 to denote the smallest linear code containing both C_1 and C_2.

Theorem 3.1:

Let C_1, C_2, C'_1, C'_2 ⊆ F_q^n be linear codes. Furthermore, let A be as in (4) and denote by C = [C_1, C_2] · A and by C' = [C'_1, C'_2] · A. Then

 C∗C′=[C1∗C′1,C1∗C′2+C2∗C′1+C2∗C′2]A.

Proof:

Let G_1, G_2, G'_1, G'_2 be generator matrices for C_1, C_2, C'_1, C'_2, respectively. By Proposition 2.2, we have that

\[
G=\begin{bmatrix}G_1 & G_1\\ 0 & G_2\end{bmatrix},\qquad
G'=\begin{bmatrix}G'_1 & G'_1\\ 0 & G'_2\end{bmatrix}
\]

are generator matrices for C and C', respectively. A generator matrix for C * C' can be obtained by taking the component-wise products of all the rows in G with all the rows in G' and afterwards removing linearly dependent rows. We denote by G * G' the matrix consisting of all component-wise products of rows in G with rows in G'. Then

\[
G*G'=\begin{bmatrix}
G_1*G'_1 & G_1*G'_1\\
0 & G_1*G'_2\\
0 & G_2*G'_1\\
0 & G_2*G'_2
\end{bmatrix}.
\]

The set of rows in G * G' is a generating set for C * C'. Hence, by removing linearly dependent rows we obtain a generator matrix of the form

\[
\widetilde{G}=\begin{bmatrix}\widetilde{G}_1 & \widetilde{G}_1\\ 0 & \widetilde{G}_2\end{bmatrix},
\]

where G̃_1 is a generator matrix for C_1 * C'_1 and G̃_2 is a generator matrix for C_1 * C'_2 + C_2 * C'_1 + C_2 * C'_2. By using Proposition 2.2 once again, we see that G̃ is a generator matrix for the code [C_1 * C'_1, C_1 * C'_2 + C_2 * C'_1 + C_2 * C'_2] · A, proving the theorem.

The following corollary considers the square of a code from the (u,u+v)-construction; in the remainder of the paper, the focus will be on squares.

Corollary 3.2:

Let C_1, C_2 ⊆ F_q^n be linear codes. Furthermore, let A be as in (4) and denote by C = [C_1, C_2] · A. Then

 C∗2=[C∗21,(C1+C2)∗C2]A,

and we have that

 d(C∗2)≥min{2d(C∗21),d((C1+C2)∗C2)}. (5)

If additionally C_2 ⊆ C_1, then

 C∗2=[C∗21,C1∗C2]A,

and we have that

 d(C∗2)=min{2d(C∗21),d(C1∗C2)}. (6)

Proof:

The result follows by setting C' = C in Theorem 3.1, implying C'_i = C_i, and we obtain that

 C∗2=[C1∗C1,C1∗C2+C2∗C1+C2∗C2]A=[C∗21,(C1+C2)∗C2]A.

If C_2 ⊆ C_1, we have C_1 + C_2 = C_1. The bound in (5) follows directly from Proposition 2.3, and the equality (6) follows from Corollary 2.4, since the constituent codes are nested in this case.
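Corollary 3.2 can also be verified exhaustively on tiny codes. The sketch below (made-up codes, not from the paper) encodes binary codewords as bitmask integers, so the component-wise product is a bitwise AND.

```python
# A finite sanity check that C^*2 = [C1^*2, (C1+C2)*C2] A for the
# (u,u+v)-construction over F_2, on made-up codes of length 4.
def closure(gens):
    """F_2-span of a set of bitmask-encoded vectors."""
    words = {0}
    for g in gens:
        words |= {w ^ g for w in words}
    return words

def star(U, V):
    """Component-wise product code of two codeword sets (AND = product mod 2)."""
    return closure({u & v for u in U for v in V})

def plotkin(U, V, n):
    """[U, V] A with A = [[1,1],[0,1]]: all words (u, u+v)."""
    return {u | ((u ^ v) << n) for u in U for v in V}

n = 4
C1 = closure({0b0011, 0b0101})   # a [4,2] code
C2 = closure({0b1111})           # the [4,1] repetition code
C = plotkin(C1, C2, n)

lhs = star(C, C)                 # C^*2, computed directly
rhs = plotkin(star(C1, C1), star(closure(C1 | C2), C2), n)
assert lhs == rhs
print("Corollary 3.2 holds on this example")
```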

4 Constructions from Binary Cyclic Codes

In this section, we exemplify what parameters we can achieve for dim(C) and d(C^∗2) when we use the (u,u+v)-construction together with cyclic codes as constituent codes. We start by presenting some basics of cyclic codes.

Cyclic codes are linear codes which are invariant under cyclic shifts. That is, if (c_0, c_1, …, c_{n−1}) ∈ C is a codeword, then (c_{n−1}, c_0, …, c_{n−2}) ∈ C is as well. We will assume that gcd(n, q) = 1. A cyclic code of length n over F_q is isomorphic to an ideal in F_q[x]/⟨x^n − 1⟩ generated by a polynomial g(x), where g(x) divides x^n − 1. The isomorphism is given by

 c0+c1x+…+cn−1xn−1↦(c0,c1,…,cn−1),

and we notice that a cyclic shift corresponds to multiplication by x. The cyclic code generated by g(x) has dimension k = n − deg g(x). To bound the minimum distance of the code, we introduce the q-cyclotomic cosets modulo n.

Definition 4.1 (q-cyclotomic coset modulo n):

Let a ∈ Z/nZ. Then the q-cyclotomic coset modulo n of a is given by

 [a] = {a q^j mod n ∣ j ≥ 0}.
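Cyclotomic cosets are straightforward to enumerate; the helper below is an illustration (not code from the paper), shown for the classical case q = 2, n = 7.

```python
# Computing q-cyclotomic cosets modulo n, as in Definition 4.1.
def coset(a, q, n):
    out, x = [], a % n
    while x not in out:        # iterate a, aq, aq^2, ... until it repeats
        out.append(x)
        x = (x * q) % n
    return sorted(out)

def all_cosets(q, n):
    return sorted({tuple(coset(a, q, n)) for a in range(n)})

print(all_cosets(2, 7))   # [(0,), (1, 2, 4), (3, 5, 6)]
```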

Now let α be a primitive n-th root of unity in an algebraic closure of F_q; such an α exists since gcd(n, q) = 1. Since g(x) divides x^n − 1, every root of g(x) must be of the form α^a for some a ∈ Z/nZ. This leads to the following definition, which turns out to be useful in describing the parameters of a cyclic code.

Definition 4.2 (Defining and generating set):

Denote by J = {a ∈ Z/nZ ∣ g(α^a) = 0} and by I = (Z/nZ) ∖ J. Then we call J the defining set and I the generating set of the cyclic code generated by g(x).

We remark that |J| = deg g(x), implying that |I| = n − deg g(x) is the dimension of the cyclic code generated by g(x). We note that I and J must be unions of q-cyclotomic cosets modulo n. Now we define the amplitude of I as

 Amp(I)=min{i∈Z∣∃c∈Z/nZ such that I⊆{c,c+1,…,c+i−1}}.

As a consequence of the BCH bound, see for example [C17], we have that the minimum distance of the code generated by g(x) is greater than or equal to n − Amp(I) + 1.

Hence, we see that both the dimension and the minimum distance depend on I, and since g(x) is uniquely determined by I, we will use the notation C(I) to describe the cyclic code generated by g(x). To summarize, we have that C(I) is a cyclic linear code with parameters

 [n,|I|,≥n−Amp(I)+1]q. (7)
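The amplitude can be computed by brute force over all cyclic starting points; the sketch below (illustrative code, not from the paper) evaluates the parameter bound (7) for a made-up example with q = 2 and n = 7.

```python
# Amp(I) by brute force, and the resulting bound (7) for C(I).
def amplitude(I, n):
    # width of the smallest cyclic interval {c, c+1, ..., c+i-1} covering I
    return min(max((x - c) % n for x in I) + 1 for c in range(n))

n, I = 7, {1, 2, 4}                    # a 2-cyclotomic coset modulo 7
k, d_bound = len(I), n - amplitude(I, n) + 1
print(k, d_bound)   # 3 4: C(I) is a [7, 3, >=4] binary cyclic code
```

(The window {1, 2, 3, 4} of width 4 is the smallest cyclic interval covering I here.)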

We consider cyclic codes for the (u,u+v)-construction, and therefore we will need the following proposition.

Proposition 4.3:

Let I_1 and I_2 be unions of q-cyclotomic cosets, and let C(I_1) and C(I_2) be the corresponding cyclic codes. Then

 C(I_1) + C(I_2) = C(I_1 ∪ I_2),  C(I_1) ∗ C(I_2) = C(I_1 + I_2),

where I_1 + I_2 = {a + b mod n ∣ a ∈ I_1, b ∈ I_2}.

We obtain this result by describing the cyclic codes as subfield subcodes of evaluation codes and generalizing Theorem 3.3 in [C17]. The proof of this proposition is very similar to the one in [C17] and can be found in Appendix A. The proposition implies the following corollary.

Corollary 4.4:

Let I_1 and I_2 be unions of q-cyclotomic cosets, and let C(I_1) and C(I_2) be the corresponding cyclic codes. Then C(I_1) ∗ C(I_2) is an

 [n,|I1+I2|,≥n−Amp(I1+I2)+1]q

cyclic code.
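The sumset I_1 + I_2 of Proposition 4.3 is a one-line computation; the example below (made-up generating sets with q = 2, n = 7, not from the paper) shows how quickly the sumset, and hence the dimension of the product, can grow.

```python
# The sumset I1 + I2 from Proposition 4.3.
def sumset(I1, I2, n):
    return {(a + b) % n for a in I1 for b in I2}

n = 7
I1 = {1, 2, 4}
print(sorted(sumset(I1, {0}, n)))   # [1, 2, 4]: adding the coset {0} changes nothing
print(sorted(sumset(I1, I1, n)))    # [1, 2, 3, 4, 5, 6]: the square grows quickly
```

By Corollary 4.4, the square of the [7, 3, ≥4] code C(I_1) is then a [7, 6, ≥2] code: the square already fills almost the whole space.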

Now, let C(I_1) and C(I_2) be two cyclic codes of length n, and let

\[
C=[C(I_1),C(I_2)]\begin{bmatrix}1 & 1\\ 0 & 1\end{bmatrix}.
\tag{8}
\]

Then C is a

 [2n,|I1|+|I2|,≥min{2(n−Amp(I1)+1),n−Amp(I2)+1}]

linear code. This is in fact a quasi-cyclic code of index 2, see for instance [HLR09, LF01]. By combining Corollary 3.2 with Proposition 4.3, we obtain that

\[
C^{*2}=[C(I_1+I_1),\,C(I_2+(I_1\cup I_2))]\begin{bmatrix}1 & 1\\ 0 & 1\end{bmatrix}.
\tag{9}
\]

From Propositions 2.2 and 2.3, and Corollary 4.4, we obtain that

 dim(C∗2)=|I1+I1|+|I2+(I1∪I2)|,

and

 d(C∗2)≥min{2(n−Amp(I1+I1)+1), n−Amp(I2+(I1∪I2))+1}. (10)

Therefore, it is of interest to find I_1 and I_2 such that the cardinalities of these sets are relatively large, implying a large dimension of C, while at the same time Amp(I_1 + I_1) and Amp(I_2 + (I_1 ∪ I_2)) are relatively small, implying a large minimum distance of the square.

To exemplify what parameters we can obtain, we will use some specific cyclic codes from [C17], based on the notion of s-restricted weights of cyclotomic cosets introduced in the same article. Let n = q^r − 1 for some r, and for a number t ∈ Z/nZ let t = t_0 + t_1 q + ⋯ + t_{r−1} q^{r−1} be its q-ary representation, where 0 ≤ t_i ≤ q − 1. Then, for an s ≤ r, the s-restricted weight w_q^{(s)}(t) is defined as

\[
w_q^{(s)}(t)=\max_{i\in\{0,1,\ldots,r-1\}}\;\sum_{j=0}^{s-1}t_{(i+j)\bmod r}.
\]

We will not go into detail about these s-restricted weights, but we refer the reader to [C17] for more information. However, we remark that [C17] proves that this weight notion satisfies w_q^{(s)}(t + t') ≤ w_q^{(s)}(t) + w_q^{(s)}(t'), and that w_q^{(s)}(t) = w_q^{(s)}(t') whenever t and t' are in the same cyclotomic coset. The latter implies that we can talk about the s-restricted weight of a cyclotomic coset.
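The s-restricted weight can be computed directly from the q-ary digits of t; the sketch below is an illustration (not code from the paper), and assumes, as in the definition above, that the window indices i + j wrap modulo r.

```python
# A sketch of the s-restricted weight, computed from the q-ary digits of t;
# the window of s consecutive digit positions wraps cyclically modulo r.
def restricted_weight(t, q, r, s):
    digits = [(t // q**i) % q for i in range(r)]   # q-ary representation of t
    return max(sum(digits[(i + j) % r] for j in range(s))
               for i in range(r))

# q = 2, r = 4 (so n = 15): t = 5 has binary digits [1, 0, 1, 0]
print(restricted_weight(5, 2, 4, 2))   # 1: no two cyclically adjacent digits are both 1
print(restricted_weight(5, 2, 4, 4))   # 2: with s = r this is the full digit sum
```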

Let W_{r,s,m} denote the union of all q-cyclotomic cosets modulo n = q^r − 1 with s-restricted weight lower than or equal to m. Then we can define the code

\[
C=[C(W_{r,s,m_1}),C(W_{r,s,m_2})]\begin{bmatrix}1 & 1\\ 0 & 1\end{bmatrix},
\]

where we let m_2 ≤ m_1. From (9) we conclude that

\[
C^{*2}=[C(W_{r,s,m_1}+W_{r,s,m_1}),\,C(W_{r,s,m_1}+W_{r,s,m_2})]\begin{bmatrix}1 & 1\\ 0 & 1\end{bmatrix}
\tag{11}
\]

since W_{r,s,m_2} ⊆ W_{r,s,m_1}. It is noted in [C17] that the equality W_{r,s,m_1} + W_{r,s,m_2} = W_{r,s,m_1+m_2} does not hold in general, but the inclusion W_{r,s,m_1} + W_{r,s,m_2} ⊆ W_{r,s,m_1+m_2} does. However, we are able to determine the exact dimension of C^∗2 in (11) by computing the cardinalities of W_{r,s,m_1} + W_{r,s,m_1} and W_{r,s,m_1} + W_{r,s,m_2}. Additionally, once we have computed these, we can bound the minimum distance directly from (10). This is what we do in Table 1, where we present the parameters for C and C^∗2 for specific choices of q, r, s, m_1, and m_2.

We make a comparison to the cyclic codes from [C17], which presents codes constructed using the s-restricted weight for two different parameter choices (Table 1 and Table 2 in [C17]). Let any one of our new codes from Table 1 have parameters

 (n,dim(C),d(C∗2))=(n,k,d∗).

First we compare to Table 1 from [C17], where there always exists a code of comparable length with smaller dimension and larger minimum distance of the square. Hence, our new codes have larger dimension but lower minimum distance of the square compared to these codes, for comparable lengths. On the other hand, in Table 2 from [C17] there is a code C' of comparable length and larger dimension (i.e., the dimensions of the codes from [C17] are larger than those in our table). However, the minimum distances of the squares for the codes in [C17] satisfy

\[
d((C')^{*2})=\begin{cases}d^*+1 & \text{for } r=5,6,8,10,11,\\ d^*-5 & \text{for } r=7,9.\end{cases}
\]

Thus, even though the dimensions of our codes are lower than the ones from Table 2 in [C17], for r = 7 and r = 9 we obtain that d(C^∗2) > d((C')^∗2).

Therefore, our results on matrix-product codes allow us to obtain codes with a different trade-off between dim(C) and d(C^∗2) than those from [C17]: we can obtain a larger distance of the square at the expense of reducing the dimension with respect to one of the tables there, and vice versa with respect to the other.

5 Other Matrix-Product Codes

In this section, we consider squares of some other families of matrix-product codes. We start by determining the square of C when C is a matrix-product code whose defining matrix is Vandermonde.

Theorem 5.1:

Let C_0, C_1, …, C_{s−1} ⊆ F_q^n be linear codes. Furthermore, let

\[
V_q(s)=\begin{bmatrix}
1 & 1 & \cdots & 1\\
\alpha_1 & \alpha_2 & \cdots & \alpha_{q-1}\\
\vdots & \vdots & & \vdots\\
\alpha_1^{s-1} & \alpha_2^{s-1} & \cdots & \alpha_{q-1}^{s-1}
\end{bmatrix},
\]

where the α_i's are the distinct nonzero elements of F_q and s ≤ q − 1 is some positive integer. Denote by C = [C_0, C_1, …, C_{s−1}] · V_q(s). Then

 C∗2=[∑i+j=0Ci∗Cj,∑i+j=1Ci∗Cj,…,∑i+j=~s−1Ci∗Cj]Vq(~s)

where s̃ = min{2s − 1, q − 1} and the sums i + j are taken modulo q − 1.

Proof:

Let G_0, …, G_{s−1} be generator matrices of C_0, …, C_{s−1}, respectively, and let G be a generator matrix for C. Using the same notation as in the proof of Theorem 3.1, G * G contains all rows of the form

\[
\bigl(\alpha_1^{\,i+j}\,G_i*G_j,\;\ldots,\;\alpha_{q-1}^{\,i+j}\,G_i*G_j\bigr)
\]

for 0 ≤ i, j ≤ s − 1. Note that if α ∈ F_q ∖ {0}, then α^{q−1} = 1, and hence we can consider the exponents i + j modulo q − 1. Thus, if i + j ≥ q − 1, we can write the coefficients in front of G_i * G_j as α_k^{(i+j) mod (q−1)} for k = 1, …, q − 1. Removing linearly dependent rows, this results in a generator matrix for a matrix-product code of the form

 C∗2=[∑i+j=0Ci∗Cj,∑i+j=1Ci∗Cj,…,∑i+j=~s−1Ci∗Cj]Vq(~s), (12)

where again i + j is considered modulo q − 1.
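The key step in the proof above is that over F_q every nonzero α satisfies α^{q−1} = 1, so Vandermonde exponents fold modulo q − 1. A quick check in F_7 (a made-up instance, using modular exponentiation):

```python
# Exponent folding modulo q - 1 in F_q, checked for q = 7:
# alpha^(q-1) = 1 for every nonzero alpha, hence alpha^e = alpha^(e mod (q-1)).
q = 7
for a in range(1, q):                               # all nonzero elements of F_7
    assert pow(a, q - 1, q) == 1                    # Fermat: alpha^(q-1) = 1
    assert pow(a, 9, q) == pow(a, 9 % (q - 1), q)   # fold exponent 9 -> 3
print("exponent folding holds in F_7")
```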

As we will show below, the fact that we obtain constituent codes of the form ∑_{i+j=l} C_i * C_j is especially helpful for determining the parameters of C^∗2 in some cases. We remark that the same phenomenon occurs in the case of the (u,u+v)-construction, but only if the codes are nested.

Note also that the linear code spanned by the first t rows of V_q(s) is a Reed-Solomon code of dimension t. (A Reed-Solomon code is an MDS code, meaning that it achieves the highest possible minimum distance for a given length and dimension; thus the D_i's are maximal, and hence we obtain the best possible bound for the minimum distance that one can hope for using the matrix-product construction.) It follows that D_i = q − i − 1 for i = 0, 1, …, s − 1 (we remark that we have renumbered the C_i's and D_i's starting from 0 so that they fit the powers appearing in the Vandermonde matrix). Hence, if C_i has length n and dimension k_i, then C is a

 [(q−1)n,k0+k1+⋯+ks−1,≥mini∈{0,1,⋯,s−1}{(q−i−1)d(Ci)}]q

linear code, and C^∗2 has minimum distance greater than or equal to

 minl∈{0,1,⋯,~s−1}{(q−l−1)d(∑i+j=lCi∗Cj)}. (13)

Even though the expression in (12) may at first sight seem hard to work with, this is not the case if the C_i's come from some specific families of codes. For example, Proposition 4.3 tells us that ∑_{i+j=l} C_i * C_j will again be a cyclic code if the C_i's are cyclic, and we will be able to determine its generating set from the generating sets of the C_i's.

Additionally, one could consider the case where the C_i's are Reed-Solomon codes or, more generally, algebraic geometric codes. Let D = P_1 + ⋯ + P_n be a formal sum of n distinct rational places in a function field over F_q and let G_1, G_2 be divisors whose supports are disjoint from the P_i's. An algebraic geometric code C_L(D, G) is given by the evaluation of the elements of the Riemann-Roch space L(G) at the places in D. It is then known that C_L(D, G_1) * C_L(D, G_2) is contained in C_L(D, G_1 + G_2), and that d(C_L(D, G)) ≥ n − deg(G). Hence, we can find a lower bound for d(C^∗2) from (13), using the fact that by these observations we can find algebraic geometric codes containing ∑_{i+j=l} C_i * C_j whose minimum distances we can control. We exemplify some specific constructions with algebraic geometric codes, more specifically Hermitian codes, in the following example.

Example 5.2:

We will not go into detail about the Hermitian function field and its codes, but we do mention that the Hermitian function field over F_{q^2} has q^3 + 1 rational places, where one of these places is the place at infinity. Denote the place at infinity by P_∞ and the remaining rational places by P_i, for i = 1, …, q^3, and let D = P_1 + ⋯ + P_{q^3}. Then a Hermitian code is given by the algebraic geometric code C_m = C_L(D, m P_∞). This is a code of length q^3 whose dimension and minimum distance are controlled by m, as long as m lies in a suitable range, see for instance [YK92]. Denote by

 C(r,s)=[Cr+s−1,Cr+s−2,…,Cr]Vq2(s),

where the C_m are the Hermitian codes just defined and s ≤ q^2 − 1. With such a construction we have that

 (C(r,s))∗2⊆[C2r+2s−2,C2r+2s−3,…,C2r]Vq2(2s−1)=C(2r,2s−1) (14)

from the observations about algebraic geometric codes made before the example. Note that the largest degree occurring in (14) is 2r + 2s − 2, implying that all the Hermitian codes in (14) satisfy that their degree is lower than q^3 whenever 2r + 2s − 2 < q^3. Hence,

 d((C(r,s))∗2) ≥ min_{i=0,1,…,2s−2} {(q^2 − i − 1) d(C_{2r+2s−2−i})}.