Squares of Matrix-product Codes

03/13/2019
by   Ignacio Cascudo, et al.
Universidad de Valladolid

The component-wise or Schur product C*C' of two linear error-correcting codes C and C' over a finite field is the linear code spanned by all component-wise products of a codeword in C with a codeword in C'. When C=C', we call the product the square of C and denote it C^*2. Motivated by several applications of squares of linear codes in the area of cryptography, in this paper we study squares of so-called matrix-product codes, a general construction that allows one to obtain new longer codes from several "constituent" codes. We show that in many cases we can relate the square of a matrix-product code to the squares and products of its constituent codes, which allows us to give bounds on, or even determine, its minimum distance. We consider the well-known (u,u+v)-construction, or Plotkin sum (which is a special case of a matrix-product code) and determine which parameters we can obtain when the constituent codes are certain cyclic codes. In addition, we use the same techniques to study the squares of other matrix-product codes, for example when the defining matrix is Vandermonde (where the minimum distance is, in a certain sense, maximal with respect to matrix-product codes).


1 Introduction

Component-wise or Schur products of linear error-correcting codes have been studied for different purposes during the last decades, from efficient decoding to applications in several different areas within cryptography. Given two linear codes $C, C' \subseteq \mathbb{F}_q^n$ (over some finite field $\mathbb{F}_q$) of the same length $n$, we define the component-wise product of the codes to be the span over $\mathbb{F}_q$ of all component-wise products $c * c'$, where $c \in C$ and $c' \in C'$.

One of the first applications where component-wise products of codes became relevant concerned error decoding via the notion of error-locating pairs [DK94, P92]. An error-locating pair for a code $C$ is a pair of codes $(A, B)$ with $A * B \subseteq C^\perp$, and the number of errors the pair is able to correct depends on the dimensions and minimum distances of the codes and their duals. More precisely, it is required that $\dim A > t$ and $d(B^\perp) > t$ if we should be able to locate $t$ errors.

Later on, the use of component-wise products found several applications in the area of cryptography. For example, some attacks on variants of the McEliece cryptosystem (which relies on the assumption that it is hard to decode a general linear code) use the fact that the dimension of the product $C * C$ tends to be much larger when $C$ is a random code than when $C$ has certain algebraic structure, which can be used to identify algebraic patterns in certain subcodes of the code defining the cryptosystem, see for instance [COT17, PM17]. A different cryptographic problem where products of codes are useful is private information retrieval, where a client can retrieve data from a set of servers storing a coded database in such a way that the servers do not learn what data the client has accessed. In [FGHK17] a private information retrieval protocol based on error-correcting codes was shown, where it is desirable to use two linear codes $C$ and $D$ such that their dimensions and the minimum distance of $C * D$ are simultaneously high.

In this work, however, we are more interested in the application of products of codes to the area of secure multiparty computation. The goal of secure multiparty computation is to design protocols which can be used in the situation where a number of parties, each holding some private input, want to jointly compute the output of a function on those inputs, without any party ever having to reveal their input to anybody. A central component in secure computation protocols is secure multiplication, which different protocols realize in different ways. Several of these protocols require the use of an error-correcting code $C$ whose square $C^{*2}$ has large minimum distance, while there are additional conditions on $C$ which vary across the different protocols.

For example, a well-known class of secure computation protocols [BGW88, CCD88, CDM00] relies on the concept of strongly multiplicative secret-sharing scheme formalized in [CDM00]. Such secret-sharing schemes can be constructed from linear codes $C$, where the number of colluding cheating parties that the protocol can tolerate is $\min\{d(C^\perp), d(C^{*2})\} - 2$, where $C^\perp$ is the dual code to $C$. These two minimum distances are therefore desired to be simultaneously high. For more information about secret sharing and multiparty computation, see for instance [CDN15].

Other more recent protocols have the less stringent requirement that $\dim C$ and $d(C^{*2})$ are simultaneously large. This is the case of the MiniMac protocol [DZ13], a secure computation protocol to evaluate boolean circuits, and its successor TinyTables [DNNR16]. In those protocols, the cheating parties have a certain probability of being able to disrupt the computation, but this probability is bounded in terms of $d(C^{*2})$, meaning that a higher distance of the square gives higher security. On the other hand, a large relative dimension, or rate, of $C$ will reduce the communication cost, so it is desirable to optimize both parameters. A very similar phenomenon occurs in recent work about commitment schemes, which are a building block of many multiparty computation protocols; in fact, when these schemes have a number of additional homomorphic properties and in addition can be composed securely, we can base the entire secure computation protocol on them [FPY18]. Efficient commitment schemes with such properties were constructed in [CDD18] based on binary linear codes, where multiplicative homomorphic properties again require $C^{*2}$ to have a relatively large minimum distance (see [CDD18, Section 4]), and the rate of the code is also desired to be large to reduce the communication overhead.

These applications show the importance of finding linear codes $C$ where the minimum distance of the square $C^{*2}$ is large relative to the length of the code and where some other parameter (in some cases the dimension of $C$, in others the minimum distance of the dual $C^\perp$) is also relatively large. Moreover, it is especially interesting for the applications that the codes are binary, or at least defined over small fields.

Powers of codes, and more generally products, have been studied in several works such as [C17, CCMZ15, MZ15, R13b, R13a, R15] from different perspectives. In [R13b] an analogue of the Singleton bound relating the dimension of $C$ and the minimum distance of $C^{*2}$ was established, and in [MZ15] it is shown that Reed-Solomon codes are essentially the only codes which attain this bound, unless some of the parameters are very restricted. However, Reed-Solomon codes come with the drawback that the field size must be larger than or equal to the length of the codes. Therefore, finding asymptotically good codes over a fixed small field has also been studied, where in this case asymptotically good means that both $\dim C$ and $d(C^{*2})$ grow linearly with the length of the code $C$. In [R13a] the existence of such a family over the binary field was shown, based on recent results on algebraic function fields. However, it seems that most families of codes do not have this property: in fact, despite the well-known fact that random linear codes will, with high probability, be above the Gilbert-Varshamov bound, and hence are asymptotically good in the classical sense, this is not the case when we impose the additional restriction that $d(C^{*2})$ is linear in the length, as shown in [CCMZ15]. The main result in [CCMZ15] implies that for a family of random linear codes either the codes or their squares will be asymptotically bad.

The asymptotic construction from [R13a], despite being very interesting from the theoretical point of view, has the drawbacks that the asymptotics kick in relatively late and, moreover, that the construction relies on algebraic geometry, which makes it computationally expensive to construct such codes. Motivated by the aforementioned applications to cryptography, [C17] focuses on codes with fixed lengths (but still considerably larger than the size of the field), and constructs cyclic codes with relatively large dimension and relatively large minimum distance of their squares. In particular, the parameters of some of these codes are explicitly computed in the binary case.

This provides a limited constellation of parameters that we know are achievable for the tuple consisting of the length of $C$, $\dim C$, and $d(C^{*2})$. It is then interesting to study what other parameters can be attained, and a natural way to do so is to study how the square operation behaves under known procedures in coding theory that allow one to construct new codes from existing codes.

One such construction is matrix-product codes, where several codes can be combined into a new longer code. Matrix-product codes, formalized in [BN01], are a generalization of some previously known code constructions, such as the $(u, u+v)$-construction, also known as the Plotkin sum. Matrix-product codes have been studied in several works, including [BN01, HLR09, HR10, OS02].

1.1 Results and outline

In this work, we study squares of matrix-product codes. We show that in several cases, the square of a matrix-product code can also be written as a matrix-product code. This allows us to determine new achievable parameters for the squares of codes.

More concretely, we start by introducing matrix-product codes and products of codes in Section 2. Afterwards, in Section 3, we determine the product of two codes when both codes are constructed using the $(u, u+v)$-construction. In Section 4, we restrict ourselves to squares of codes and exemplify what parameters we can achieve using cyclic codes in the $(u, u+v)$-construction, in order to compare the parameters with the codes from [C17].

At last, in Section 5, we consider other constructions of matrix-product codes. In particular, we consider the case where the defining matrix is Vandermonde, which is especially relevant because such matrix-product codes achieve the best possible minimum distance that one can hope for with this matrix-product strategy. We show that the squares of these codes are again matrix-product codes, and if the constituent codes of the original matrix-product code are denoted $C_1, \dots, C_s$, then the constituent codes for the square are all of the form $\sum C_i * C_j$, where the sum runs over certain pairs of indices. This is especially helpful for determining the parameters if the $C_i$'s are for example algebraic geometric codes. We remark that this property also holds for the other constructions we study in this paper, but only when the $C_i$'s are nested. Finally, we also study the squares of a matrix-product construction from [BN01] where we can apply the same proof techniques as we have used in the other constructions.

2 Preliminaries

Let $\mathbb{F}_q$ be the finite field with $q$ elements. A linear code $C$ is a subspace of $\mathbb{F}_q^n$. When $C$ has dimension $k$, we will call it an $[n, k]$ code. A generator matrix for a code $C$ is a $k \times n$ matrix consisting of $k$ basis vectors for $C$ as the rows. The Hamming weight of $c \in \mathbb{F}_q^n$, denoted $w_H(c)$, is the number of nonzero entries in $c$, and the Hamming distance between $c_1, c_2 \in \mathbb{F}_q^n$ is given by $d_H(c_1, c_2) = w_H(c_1 - c_2)$. By linearity, the minimum Hamming distance taken over all pairs of distinct elements in $C$ is the same as the minimum Hamming weight taken over all non-zero elements in $C$, and therefore we define the minimum distance of $C$ to be $d(C) = \min\{ w_H(c) : c \in C \setminus \{0\} \}$. If it is known that $d(C) = d$ (respectively, if we know that $d(C) \ge d$) then we call $C$ an $[n, k, d]$ code (resp. an $[n, k, \ge d]$ code). We denote by $C^\perp$ the dual code to $C$, i.e., the vector space given by all elements $x \in \mathbb{F}_q^n$ such that for every $c \in C$, $x$ and $c$ are orthogonal with respect to the standard inner product in $\mathbb{F}_q^n$. If $C$ is an $[n, k]$ code then $C^\perp$ is an $[n, n-k]$ code.
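
As a small illustration of these definitions, the following Python sketch computes the minimum distance of a binary code by brute force over all codewords generated by a given generator matrix. The [7,4] Hamming code used here is only an assumed example and does not appear in the paper.

```python
from itertools import product

# Generator matrix of the [7,4] binary Hamming code (an assumed example).
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

def codewords(G):
    """Yield all codewords of the binary code generated by the rows of G."""
    k, n = len(G), len(G[0])
    for coeffs in product([0, 1], repeat=k):
        yield tuple(sum(c * G[i][j] for i, c in enumerate(coeffs)) % 2
                    for j in range(n))

def minimum_distance(G):
    """Minimum Hamming weight over all non-zero codewords (by brute force)."""
    return min(sum(c) for c in codewords(G) if any(c))

print(minimum_distance(G))  # prints 3 for the Hamming code above
```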

We recall the definition and basic properties of matrix-product codes (following [BN01]) and squares of codes.

Definition 2.1 (Matrix-product code):

Let $C_1, \dots, C_s \subseteq \mathbb{F}_q^n$ be linear codes and let $A \in \mathbb{F}_q^{s \times l}$ be a matrix with rank $s$ (implying $s \le l$). Then we define the matrix-product code $C = [C_1, \dots, C_s] \cdot A$ as the set of all matrix products $[c_1, \dots, c_s] \cdot A$, where $c_i \in C_i$ is considered as a column vector of length $n$.

We call $A$ the defining matrix and the $C_i$'s the constituent codes.

We can consider a codeword $c = [c_1, \dots, c_s] \cdot A$, in a matrix-product code, as an $n \times l$ matrix of the form

$$c = \begin{pmatrix} \sum_{i=1}^{s} a_{i1} c_{i1} & \sum_{i=1}^{s} a_{i2} c_{i1} & \cdots & \sum_{i=1}^{s} a_{il} c_{i1} \\ \vdots & \vdots & & \vdots \\ \sum_{i=1}^{s} a_{i1} c_{in} & \sum_{i=1}^{s} a_{i2} c_{in} & \cdots & \sum_{i=1}^{s} a_{il} c_{in} \end{pmatrix}, \qquad (1)$$

using the same notation for the $c_i = (c_{i1}, \dots, c_{in})$'s as in the definition and writing $A = (a_{ih})$. Reading the entries in this matrix in a column-major order, we can also consider $c$ as a vector of the form

$$c = \Big( \sum_{i=1}^{s} a_{i1} c_i, \; \sum_{i=1}^{s} a_{i2} c_i, \; \dots, \; \sum_{i=1}^{s} a_{il} c_i \Big) \in \mathbb{F}_q^{nl}. \qquad (2)$$

We sum up some known facts about matrix-product codes in the following proposition.

Proposition 2.2:

Let $C_1, \dots, C_s \subseteq \mathbb{F}_q^n$ be linear codes with generator matrices $G_1, \dots, G_s$, respectively. Furthermore, let $A = (a_{ih}) \in \mathbb{F}_q^{s \times l}$ be a matrix with rank $s$ and let $C = [C_1, \dots, C_s] \cdot A$. Then $C$ is an $[\, nl, \; \sum_{i=1}^{s} \dim C_i \,]$ linear code and a generator matrix of $C$ is given by

$$G = \begin{pmatrix} a_{11} G_1 & a_{12} G_1 & \cdots & a_{1l} G_1 \\ a_{21} G_2 & a_{22} G_2 & \cdots & a_{2l} G_2 \\ \vdots & \vdots & & \vdots \\ a_{s1} G_s & a_{s2} G_s & \cdots & a_{sl} G_s \end{pmatrix}.$$
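
The block structure of this generator matrix is easy to assemble in code. The following Python sketch is only an illustration with assumed toy binary constituent codes (not codes from the paper): it builds the matrix $G$ above over $\mathbb{F}_2$ and checks that its rank equals $\sum_i \dim C_i$, as stated in Proposition 2.2.

```python
def gf2_rank(rows):
    """Rank over GF(2) of a list of 0/1 rows, by Gaussian elimination."""
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def matrix_product_generator(Gs, A):
    """Generator rows of [C_1,...,C_s]·A over GF(2): block row i is (a_i1*G_i | ... | a_il*G_i)."""
    return [[(A[i][h] * x) % 2 for h in range(len(A[0])) for x in g]
            for i, Gi in enumerate(Gs) for g in Gi]

# Assumed toy constituent codes of length 3.
G1 = [[1, 0, 1],
      [0, 1, 1]]
G2 = [[1, 1, 1]]
A = [[1, 1],
     [0, 1]]          # the (u, u+v) defining matrix, used as a small example

G = matrix_product_generator([G1, G2], A)
print(gf2_rank(G), "==", len(G1) + len(G2))   # 3 == 3
```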

We now turn our attention to the minimum distance of $C = [C_1, \dots, C_s] \cdot A$. Denote by $A_t$ the matrix consisting of the first $t$ rows of $A$ and let $D_t \subseteq \mathbb{F}_q^l$ be the linear code spanned by the rows in $A_t$. From [OS02], we have the following result on the minimum distance.

Proposition 2.3:

We make the same assumptions as in Proposition 2.2, and write $d_i = d(C_i)$ and $\delta_t = d(D_t)$. Then the minimum distance of the matrix-product code $C = [C_1, \dots, C_s] \cdot A$ satisfies

$$d(C) \ge \min\{\, d_1 \delta_1, \; d_2 \delta_2, \; \dots, \; d_s \delta_s \,\}. \qquad (3)$$

The following corollary is from [HLR09].

Corollary 2.4:

If we additionally assume that $C_s \subseteq C_{s-1} \subseteq \cdots \subseteq C_1$, equality occurs in the bound in (3).

The dual of a matrix-product code is also a matrix-product code, if we make some assumptions on the matrix $A$, as was noted in [BN01].

Proposition 2.5:

Let $C = [C_1, \dots, C_s] \cdot A$ be a matrix-product code. If $A$ is an invertible square matrix then

$$C^\perp = [\, C_1^\perp, \dots, C_s^\perp \,] \cdot (A^{-1})^T.$$

Additionally, if $B$ is the matrix given by the rows of $(A^{-1})^T$ in reverse order, the dual can be described as

$$C^\perp = [\, C_s^\perp, \dots, C_1^\perp \,] \cdot B.$$

Notice that, with regard to Proposition 2.3, the last expression is often more useful since $d(C_i^\perp)$ will often decrease when $i$ increases.

Now, we turn our attention to products and squares of codes. We denote by $x * y$ the component-wise product of two vectors. That is, if $x = (x_1, \dots, x_n)$ and $y = (y_1, \dots, y_n)$, then $x * y = (x_1 y_1, \dots, x_n y_n)$. With this definition in mind, we define the product of two linear codes.

Definition 2.6 (Component-wise (Schur) products and squares of codes):

Given two linear codes $C, C' \subseteq \mathbb{F}_q^n$ we define their component-wise product, denoted by $C * C'$, as

$$C * C' = \operatorname{span}_{\mathbb{F}_q}\{\, c * c' : c \in C, \; c' \in C' \,\}.$$

The square of a code $C$ is $C^{*2} = C * C$.

First note that the length of the product is the same as the length of the original codes. Regarding the other parameters (dimension and minimum distance) we enumerate some known results only in the case of the squares since this will be our primary focus.

If $G$ is a generator matrix for $C$ with rows $g_1, \dots, g_k$, then $\{\, g_i * g_j : 1 \le i \le j \le k \,\}$ is a generating set for $C^{*2}$. However, it might not be a basis, since some of the vectors might be linearly dependent. If additionally a submatrix consisting of $k$ columns of $G$ is the identity, the set $\{\, g_i * g_i : 1 \le i \le k \,\}$ consists of linearly independent vectors. Since there is always a generator matrix satisfying this, this implies $k \le \dim C^{*2} \le \min\{\, n, \; \binom{k+1}{2} \,\}$, where $k = \dim C$ and $n$ is the length of $C$.

In most cases, however, $d(C^{*2})$ is much smaller than $d(C)$. For example, the Singleton bound for squares [R13b] states that $d(C^{*2}) \le \max\{\, 1, \; n - 2\dim C + 2 \,\}$ (which is much more restrictive than the Singleton bound for $C$, which states that $d(C) \le n - \dim C + 1$). Additionally, the codes attaining this bound have been characterized in [MZ15], where it was shown that essentially only Reed-Solomon codes, certain direct sums of self-dual codes, and some degenerate codes have this property. Furthermore, it is shown in [CCMZ15] that taking a random code $C$ with dimension $k$, the dimension of $C^{*2}$ will with high probability be $\min\{\, n, \; \binom{k+1}{2} \,\}$. Therefore, often $C^{*2} = \mathbb{F}_q^n$ and hence typically $d(C^{*2}) = 1$.
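
The typical behavior described above can be observed experimentally. The following Python sketch is an illustrative experiment with a randomly chosen binary code (an assumption, not an example from the paper): it takes the pairwise products of the rows of a random generator matrix and compares the dimension of the square with $\min\{n, k(k+1)/2\}$.

```python
import random

def gf2_rank(rows):
    """Rank over GF(2) of a list of 0/1 rows, by Gaussian elimination."""
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def square_generators(G):
    """Generating set {g_i * g_j : i <= j} of C^{*2} (componentwise products)."""
    return [[a & b for a, b in zip(G[i], G[j])]
            for i in range(len(G)) for j in range(i, len(G))]

n, k = 50, 9
G = [[random.randint(0, 1) for _ in range(n)] for _ in range(k)]
if gf2_rank(G) == k:                                    # keep only full-rank samples
    dim_square = gf2_rank(square_generators(G))
    print(dim_square, min(n, k * (k + 1) // 2))         # typically 45 and 45
```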

3 The $(u, u+v)$-Construction

In this section, we will consider one of the most well-known matrix-product codes, namely the $(u, u+v)$-construction. We obtain this construction when we let

$$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \qquad (4)$$

be the defining matrix. Note that if $C_i$ is an $[n, k_i, d_i]$ code for $i = 1, 2$ and $C = [C_1, C_2] \cdot A$, then $C$ is a $[2n, k_1 + k_2]$ code with $d(C) \ge \min\{2 d_1, d_2\}$, and $C^\perp$ is again a matrix-product code built from $C_1^\perp$ and $C_2^\perp$. This can easily be deduced from Propositions 2.3 and 2.5 by constructing the codes $D_1$ and $D_2$.

In the following theorem, we will determine the product of two codes $C$ and $C'$ when both codes come from the $(u, u+v)$-construction. We will use the notation $C + C'$ to denote the smallest linear code containing both $C$ and $C'$.

Theorem 3.1:

Let $C_1, C_2, C_1', C_2' \subseteq \mathbb{F}_q^n$ be linear codes. Furthermore, let $A$ be as in (4) and denote by $C = [C_1, C_2] \cdot A$ and by $C' = [C_1', C_2'] \cdot A$. Then

$$C * C' = [\, C_1 * C_1', \; C_1 * C_2' + C_2 * C_1' + C_2 * C_2' \,] \cdot A.$$

Proof:

Let $G_1, G_2, G_1', G_2'$ be generator matrices for $C_1, C_2, C_1', C_2'$, respectively. By Proposition 2.2, we have that

$$\begin{pmatrix} G_1 & G_1 \\ 0 & G_2 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} G_1' & G_1' \\ 0 & G_2' \end{pmatrix}$$

are generator matrices for $C$ and $C'$, respectively. A generator matrix for $C * C'$ can be obtained by taking the componentwise products of all the rows in the first matrix with all the rows in the second and afterwards removing all linearly dependent rows. We denote by $\widetilde{G}$ the matrix consisting of all componentwise products of rows in the first matrix with rows in the second. Then the rows of $\widetilde{G}$ are of the forms

$$(\, g_1 * g_1', \; g_1 * g_1' \,), \quad (\, 0, \; g_1 * g_2' \,), \quad (\, 0, \; g_2 * g_1' \,), \quad (\, 0, \; g_2 * g_2' \,),$$

where $g_i$ denotes a row of $G_i$ and $g_i'$ a row of $G_i'$. The set of rows in $\widetilde{G}$ is a generating set for $C * C'$. Hence, by removing linearly dependent rows we obtain a generator matrix of the form

$$\begin{pmatrix} H_1 & H_1 \\ 0 & H_2 \end{pmatrix},$$

where $H_1$ is a generator matrix for $C_1 * C_1'$, and $H_2$ for $C_1 * C_2' + C_2 * C_1' + C_2 * C_2'$. By using Proposition 2.2 once again, we see that this is a generator matrix for the code $[\, C_1 * C_1', \; C_1 * C_2' + C_2 * C_1' + C_2 * C_2' \,] \cdot A$, proving the theorem.

The following corollary considers the square of a code from the $(u, u+v)$-construction; in the remainder of the paper the focus will be on squares.

Corollary 3.2:

Let $C_1, C_2 \subseteq \mathbb{F}_q^n$ be linear codes. Furthermore, let $A$ be as in (4) and denote by $C = [C_1, C_2] \cdot A$. Then

$$C^{*2} = [\, C_1^{*2}, \; C_1 * C_2 + C_2^{*2} \,] \cdot A,$$

and we have that

$$d(C^{*2}) \ge \min\{\, 2\, d(C_1^{*2}), \; d(C_1 * C_2 + C_2^{*2}) \,\}. \qquad (5)$$

Additionally, if $C_2 \subseteq C_1$ we obtain

$$C^{*2} = [\, C_1^{*2}, \; C_1 * C_2 \,] \cdot A,$$

and we have that

$$d(C^{*2}) = \min\{\, 2\, d(C_1^{*2}), \; d(C_1 * C_2) \,\}. \qquad (6)$$

Proof:

The results follow by setting $C_1' = C_1$ and $C_2' = C_2$ in Theorem 3.1, implying $C' = C$, and we obtain that

$$C^{*2} = [\, C_1^{*2}, \; C_1 * C_2 + C_2^{*2} \,] \cdot A.$$

If $C_2 \subseteq C_1$ we have $C_2^{*2} \subseteq C_1 * C_2$, and hence $C_1 * C_2 + C_2^{*2} = C_1 * C_2$. The bound in (5) follows directly from Proposition 2.3, and (6) follows from Corollary 2.4.
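
As a computational sanity check of Corollary 3.2 (a sketch with assumed toy binary codes, not taken from the paper), one can verify over $\mathbb{F}_2$ that the generators of $C^{*2}$ span the same space as those of $[\, C_1^{*2}, \; C_1 * C_2 + C_2^{*2} \,] \cdot A$.

```python
def gf2_rank(rows):
    """Rank over GF(2) of a list of 0/1 rows."""
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def schur(u, v):
    return [a & b for a, b in zip(u, v)]

def pairwise_products(G, H):
    """Generating set of the product code from generating sets G and H."""
    return [schur(g, h) for g in G for h in H]

def uuv(G1, G2):
    """Generator rows of [C1, C2]·A for A = ((1,1),(0,1)): rows (u, u) and (0, v)."""
    zero = [0] * len(G2[0])
    return [g + g for g in G1] + [zero + g for g in G2]

def same_code(R1, R2):
    """Two sets of GF(2) rows span the same code iff all three ranks agree."""
    return gf2_rank(R1) == gf2_rank(R2) == gf2_rank(R1 + R2)

# Assumed toy constituent codes of length 5.
G1 = [[1, 0, 0, 1, 1],
      [0, 1, 0, 1, 0],
      [0, 0, 1, 0, 1]]
G2 = [[1, 1, 1, 1, 1]]

C = uuv(G1, G2)
lhs = pairwise_products(C, C)                                   # generators of C^{*2}
sq1 = pairwise_products(G1, G1)                                 # C1^{*2}
mix = pairwise_products(G1, G2) + pairwise_products(G2, G2)     # C1*C2 + C2^{*2}
rhs = uuv(sq1, mix)                                             # [C1^{*2}, C1*C2 + C2^{*2}]·A
print(same_code(lhs, rhs))                                      # True
```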

4 Constructions from Binary Cyclic Codes

In this section, we exemplify what parameters we can achieve for $\dim C$ and $d(C^{*2})$ when we use the $(u, u+v)$-construction together with cyclic codes as constituent codes. We start by presenting some basics of cyclic codes.

Cyclic codes are linear codes which are invariant under cyclic shifts. That is, if $(c_0, c_1, \dots, c_{n-1})$ is a codeword then $(c_{n-1}, c_0, \dots, c_{n-2})$ is as well. We will assume that $\gcd(n, q) = 1$. A cyclic code of length $n$ over $\mathbb{F}_q$ is isomorphic to an ideal in $\mathbb{F}_q[x]/\langle x^n - 1 \rangle$ generated by a polynomial $g(x)$, where $g(x) \mid x^n - 1$. The isomorphism is given by

$$(c_0, c_1, \dots, c_{n-1}) \mapsto c_0 + c_1 x + \cdots + c_{n-1} x^{n-1},$$

and we notice that a cyclic shift is represented by multiplying by $x$. The cyclic code generated by $g(x)$ has dimension $n - \deg g$. To bound the minimum distance of the code, we introduce the $q$-cyclotomic cosets modulo $n$.

Definition 4.1 (-cyclotomic coset modulo ):

Let $a \in \mathbb{Z}/n\mathbb{Z}$. Then the $q$-cyclotomic coset modulo $n$ of $a$ is given by

$$\{\, a q^j \bmod n : j = 0, 1, 2, \dots \,\}.$$

Now let $\beta$ be a primitive $n$-th root of unity in an algebraic closure of $\mathbb{F}_q$. Since $g(x) \mid x^n - 1$, every root of $g(x)$ must be of the form $\beta^a$ for some $a \in \mathbb{Z}/n\mathbb{Z}$. This leads to the following definition, which turns out to be useful in describing the parameters of a cyclic code.

Definition 4.2 (Defining and generating set):

Denote by $D = \{\, a \in \mathbb{Z}/n\mathbb{Z} : g(\beta^a) = 0 \,\}$ and $J = (\mathbb{Z}/n\mathbb{Z}) \setminus D$. Then we call $D$ the defining set and $J$ the generating set of the cyclic code generated by $g(x)$.

We remark that $|D| = \deg g$, implying that $|J| = n - \deg g$ is the dimension of the cyclic code generated by $g(x)$. We note that $D$ and $J$ must be unions of $q$-cyclotomic cosets modulo $n$. Now we define the amplitude of $J$ as

$$\operatorname{amp}(J) = \min\{\, |I| : J \subseteq I \text{ and } I \text{ consists of consecutive elements of } \mathbb{Z}/n\mathbb{Z} \,\}.$$

As a consequence of the BCH bound, see for example [C17], we have that the minimum distance of the code generated by $g(x)$ is greater than or equal to $n - \operatorname{amp}(J) + 1$.

Hence, we see that both the dimension and minimum distance depend on $J$, and since $g(x)$ is uniquely determined by $J$, we will use the notation $C(J)$ to describe the cyclic code generated by $g(x)$. To summarize, we have that $C(J)$ is a cyclic linear code with parameters

$$[\, n, \; |J|, \; \ge n - \operatorname{amp}(J) + 1 \,]. \qquad (7)$$
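
The quantities in (7) are straightforward to compute. The following Python sketch is an illustration with an assumed small example ($q = 2$, $n = 15$ and a union of a few cosets, not a code from Table 1): it computes $2$-cyclotomic cosets modulo $n$, the resulting generating set $J$, its cardinality, and the bound $n - \operatorname{amp}(J) + 1$.

```python
def cyclotomic_coset(a, q, n):
    """q-cyclotomic coset of a modulo n."""
    coset, x = set(), a % n
    while x not in coset:
        coset.add(x)
        x = (x * q) % n
    return coset

def amplitude(J, n):
    """Smallest number of consecutive elements of Z/nZ containing J."""
    # Equivalently: n minus the longest circular run of elements outside J.
    best_gap = 0
    for start in range(n):
        gap = 0
        while gap < n and (start + gap) % n not in J:
            gap += 1
        best_gap = max(best_gap, gap)
    return n - best_gap

q, n = 2, 15
# Assumed choice: union of the cosets of 0, 1 and 3.
J = set().union(*(cyclotomic_coset(a, q, n) for a in [0, 1, 3]))
print(sorted(J))                          # the generating set J
print("dim =", len(J))                    # dimension |J|
print("d >=", n - amplitude(J, n) + 1)    # BCH-type bound from (7)
```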

We consider cyclic codes for the $(u, u+v)$-construction, and therefore we will need the following proposition.

Proposition 4.3:

Let $J_1$ and $J_2$ be unions of $q$-cyclotomic cosets, and let $C(J_1)$ and $C(J_2)$ be the corresponding cyclic codes. Then

$$C(J_1) * C(J_2) = C(J_1 + J_2),$$

where $J_1 + J_2 = \{\, a + b \bmod n : a \in J_1, \; b \in J_2 \,\}$.

We obtain this result by describing the cyclic codes as a subfield subcode of an evaluation code and generalizing Theorem 3.3 in [C17]. The proof of this proposition is very similar to the one in [C17] and can be found in Appendix A. The proposition implies the following corollary.

Corollary 4.4:

Let $J_1$ and $J_2$ be unions of $q$-cyclotomic cosets, and let $C(J_1)$ and $C(J_2)$ be the corresponding cyclic codes. Then $C(J_1) * C(J_2)$ is an

$$[\, n, \; |J_1 + J_2|, \; \ge n - \operatorname{amp}(J_1 + J_2) + 1 \,]$$

cyclic code.

Now, let $C(J_1)$ and $C(J_2)$ be two cyclic codes of length $n$, and let

$$C = [\, C(J_1), \; C(J_2) \,] \cdot A, \quad \text{with } A \text{ as in (4)}. \qquad (8)$$

Then $C$ is a

$$[\, 2n, \; |J_1| + |J_2| \,]$$

linear code. This is in fact a quasi-cyclic code of index $2$, see for instance [HLR09, LF01]. By combining Corollary 3.2 with Proposition 4.3, we obtain that

$$C^{*2} = [\, C(J_1 + J_1), \; C\big((J_1 + J_2) \cup (J_2 + J_2)\big) \,] \cdot A. \qquad (9)$$

And from Propositions 2.2 and 2.3, and Corollary 4.4, we obtain that

$$\dim C^{*2} = |J_1 + J_1| + \big| (J_1 + J_2) \cup (J_2 + J_2) \big|$$

and

$$d(C^{*2}) \ge \min\Big\{\, 2\big(n - \operatorname{amp}(J_1 + J_1) + 1\big), \; n - \operatorname{amp}\big((J_1 + J_2) \cup (J_2 + J_2)\big) + 1 \,\Big\}. \qquad (10)$$

Therefore, it is of interest to find $J_1$ and $J_2$ such that $|J_1|$ and $|J_2|$ are relatively large, implying a large dimension of $C$, while at the same time $\operatorname{amp}(J_1 + J_1)$ and $\operatorname{amp}\big((J_1 + J_2) \cup (J_2 + J_2)\big)$ are relatively small, implying a large minimum distance of the square.
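
The sets appearing in (9) and (10) can be computed in the same spirit. The following Python sketch is again only an illustration with assumed sets $J_1$ and $J_2$ for $n = 15$ (not the choices used for Table 1); it evaluates the dimension of $C$ from (8), the dimension of $C^{*2}$ given by (9), and the bound (10).

```python
def sumset(J1, J2, n):
    """{a + b mod n : a in J1, b in J2}."""
    return {(a + b) % n for a in J1 for b in J2}

def amplitude(J, n):
    """Smallest number of consecutive elements of Z/nZ containing J."""
    best_gap = 0
    for start in range(n):
        gap = 0
        while gap < n and (start + gap) % n not in J:
            gap += 1
        best_gap = max(best_gap, gap)
    return n - best_gap

n = 15
J1 = {0, 1, 2, 4, 8}            # assumed: union of the cosets of 0 and 1
J2 = {0}                        # assumed: the coset of 0

S1 = sumset(J1, J1, n)                          # generating set of C(J1)^{*2}
S2 = sumset(J1, J2, n) | sumset(J2, J2, n)      # generating set of the second constituent
dim_C       = len(J1) + len(J2)                 # dimension of C in (8)
dim_square  = len(S1) + len(S2)                 # dimension of C^{*2} from (9)
d_square_lb = min(2 * (n - amplitude(S1, n) + 1),
                  n - amplitude(S2, n) + 1)     # bound (10)
print(dim_C, dim_square, d_square_lb)
```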

To exemplify what parameters we can obtain, we will use some specific cyclic codes from [C17], based on the notion of $r$-restricted weights of cyclotomic cosets introduced in the same article. Let $n = q^m - 1$ for some $m$, and for a number $a \in \{0, 1, \dots, n-1\}$ let $(a_{m-1}, \dots, a_1, a_0)$ be its $q$-ary representation, i.e. $a = \sum_{i=0}^{m-1} a_i q^i$, where $0 \le a_i \le q - 1$. Then for an $r \le m$ the $r$-restricted weight $w_r(a)$ is defined in terms of this representation.

We will not go into details about these $r$-restricted weights, but we refer the reader to [C17] for more information. However, we remark that [C17] proves that this weight notion is subadditive, in the sense that $w_r(a + b) \le w_r(a) + w_r(b)$ under suitable conditions, and that $w_r(a) = w_r(b)$ whenever $a$ and $b$ are in the same cyclotomic coset. The latter implies that we can talk about the $r$-restricted weight of a cyclotomic coset.

Let $J_{\le t}$ denote the union of all cyclotomic cosets modulo $n$ with $r$-restricted weight lower than or equal to $t$. Then we can define the code

$$C = [\, C(J_{\le t_1}), \; C(J_{\le t_2}) \,] \cdot A,$$

where we let $t_2 \le t_1$. From (9) we conclude that

$$C^{*2} \subseteq [\, C(J_{\le 2 t_1}), \; C(J_{\le t_1 + t_2}) \,] \cdot A, \qquad (11)$$

since $J_{\le t} + J_{\le t'} \subseteq J_{\le t + t'}$. It is noted in [C17] that equality does not hold in general in this last relation, but the inclusion always does. However, we are able to determine the exact dimension of $C^{*2}$ by computing the sets $J_1 + J_1$ and $(J_1 + J_2) \cup (J_2 + J_2)$ for $J_1 = J_{\le t_1}$ and $J_2 = J_{\le t_2}$. Additionally, once we have computed these, we can bound the minimum distance directly from (10). This is what we do in Table 1 for several choices of the parameters involved.

Table 1: Parameters for $C$ and $C^{*2}$ using the $r$-restricted weight

We make a comparison to the cyclic codes from [C17]. That article presents codes constructed using the $r$-restricted weight for two different choices of the parameters (Tables 1 and 2 in [C17]). Compared to the codes in Table 1 from [C17], for comparable lengths our new codes from Table 1 have larger dimension but lower minimum distance for the square. On the other hand, the codes in Table 2 from [C17] have larger dimensions than those in our table; however, for comparable lengths, the minimum distances of the squares of our codes are larger than the minimum distances of the squares of the codes in [C17].

Therefore, our results on matrix-product codes allow us to obtain codes with a different trade-off between $\dim C$ and $d(C^{*2})$ than those from [C17]: we can obtain a larger distance of the square at the expense of reducing the dimension with respect to one of the tables there, and vice versa with respect to the other.

5 Other Matrix-Product Codes

In this section, we consider squares of some other families of matrix-product codes. We start by determining the square of $C$ when $C$ is a matrix-product code whose defining matrix is a Vandermonde matrix.

Theorem 5.1:

Let $C_1, \dots, C_s$ be linear codes in $\mathbb{F}_q^n$. Furthermore, let

$$A = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ \alpha_1 & \alpha_2 & \cdots & \alpha_l \\ \vdots & \vdots & & \vdots \\ \alpha_1^{s-1} & \alpha_2^{s-1} & \cdots & \alpha_l^{s-1} \end{pmatrix},$$

where the $\alpha_j$'s are distinct nonzero elements in $\mathbb{F}_q$ and $l \ge s$ is some positive integer. Denote by $C_A = [C_1, \dots, C_s] \cdot A$. Then

$$C_A^{*2} = [\, E_1, E_2, \dots, E_{s'} \,] \cdot A', \quad \text{where } E_h = \sum_{i + j \equiv h + 1} C_i * C_j,$$

$s' = \min\{2s - 1, q - 1\}$, $A'$ is the Vandermonde matrix with $s'$ rows built from the same $\alpha_j$'s (so that $A'$ has rank $s'$ we assume $l \ge s'$), and the sums $i + j$ in the indices are considered modulo $q - 1$.

Proof:

Let $G_1, \dots, G_s$ be generator matrices of $C_1, \dots, C_s$, respectively, and let $G$ be a generator matrix for $C_A$. Using the same notation as in the proof of Theorem 3.1, $\widetilde{G}$ contains all rows of the form

$$\big( \alpha_1^{(i-1)+(i'-1)} \, (g * g'), \; \alpha_2^{(i-1)+(i'-1)} \, (g * g'), \; \dots, \; \alpha_l^{(i-1)+(i'-1)} \, (g * g') \big)$$

for $g$ a row of $G_i$ and $g'$ a row of $G_{i'}$. Note that if $\alpha_j \neq 0$ then $\alpha_j^{q-1} = 1$, and hence we can consider the exponents modulo $q - 1$. Thus if $(i-1) + (i'-1) \ge q - 1$ we could write the coefficients in front of $g * g'$ as $\alpha_j^{(i-1)+(i'-1)-(q-1)}$ for $j = 1, \dots, l$. Removing linearly dependent rows, this results in a generator matrix for a matrix-product code of the form

$$[\, E_1, E_2, \dots, E_{s'} \,] \cdot A', \qquad (12)$$

where again the sum $i + j$ in the indices of the $E_h$'s is considered modulo $q - 1$.
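
The structure appearing in this proof can be checked computationally for small parameters. The following Python sketch is an illustration under assumptions that are not part of the paper: it works over the prime field $\mathbb{F}_7$, takes $s = 2$ and $l = 3$ with $2s - 1 \le q - 1$ (so no reduction of exponents is needed), and uses assumed toy constituent codes; it verifies that the square of the Vandermonde matrix-product code spans the same space as the matrix-product code whose constituents are the products $C_i * C_j$ and whose defining matrix is the extended Vandermonde matrix.

```python
P = 7                                    # we work over the prime field F_7

def rank_mod_p(rows, p=P):
    """Rank over F_p by Gaussian elimination; rows are lists of integers."""
    rows = [[x % p for x in r] for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(a - f * b) % p for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def schur(u, v, p=P):
    return [(a * b) % p for a, b in zip(u, v)]

def products(G, H):
    """Generating set of the product of two codes from their generating sets."""
    return [schur(g, h) for g in G for h in H]

def matrix_product_rows(Gs, A, p=P):
    """Generator rows of [C_1,...,C_s]·A over F_p (column-block layout)."""
    return [[(A[i][h] * x) % p for h in range(len(A[0])) for x in g]
            for i, Gi in enumerate(Gs) for g in Gi]

def same_span(R1, R2):
    return rank_mod_p(R1) == rank_mod_p(R2) == rank_mod_p(R1 + R2)

alphas = [1, 2, 3]                       # distinct nonzero elements of F_7
s, l = 2, 3                              # 2s - 1 = 3 <= q - 1 = 6: no wrap-around
A  = [[a ** i % P for a in alphas] for i in range(s)]          # s x l Vandermonde
A2 = [[a ** i % P for a in alphas] for i in range(2 * s - 1)]  # (2s-1) x l Vandermonde

# Assumed toy constituent codes of length 2 over F_7.
G1 = [[1, 0], [0, 1]]
G2 = [[1, 3]]

C_A = matrix_product_rows([G1, G2], A)
lhs = products(C_A, C_A)                                       # generators of C_A^{*2}
E = [products(G1, G1),                                         # C_1 * C_1
     products(G1, G2) + products(G2, G1),                      # C_1 * C_2
     products(G2, G2)]                                         # C_2 * C_2
rhs = matrix_product_rows(E, A2)
print(same_span(lhs, rhs))                                     # True
```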

As we will show below, the fact that we obtain constituent codes of the form $\sum C_i * C_j$ is especially helpful for determining the parameters of $C_A^{*2}$ in some cases. We remark that the same phenomenon occurs in the case of the $(u, u+v)$-construction, but only if the codes are nested.

Note also that $D_t$ (the linear code spanned by the first $t$ rows of $A'$) is a Reed-Solomon code1 of dimension $t$, and hence we have that $\delta_t = d(D_t) = l - t + 1$ for $t = 1, \dots, s'$ (we remark that we have renumbered the $E_h$'s such that this fits with the properties of the Vandermonde matrix). Hence, if $E_h$ has length $n$ and dimension $k_h$, then $C_A^{*2}$ is a

$$[\, l n, \; k_1 + k_2 + \cdots + k_{s'} \,]$$

linear code, and has minimum distance greater than or equal to

$$\min_{1 \le t \le s'} \big\{\, (l - t + 1) \, d(E_t) \,\big\}. \qquad (13)$$

1 A Reed-Solomon code is an MDS code, meaning that it achieves the highest possible minimum distance for a given length and dimension. Thus the $\delta_t$'s are maximal and hence we obtain the best possible bound for the minimum distance we can hope for using the matrix-product construction.

Even though the expression in (12) may at first sight seem hard to work with, this is not the case if the $C_i$'s come from some specific families of codes. For example, Proposition 4.3 tells us that $E_h$ will again be a cyclic code if the $C_i$'s are cyclic, and we will be able to determine its generating set from the generating sets of the $C_i$'s.

Additionally, one could consider the case where the $C_i$'s are Reed-Solomon codes or, more generally, algebraic geometric codes. Let $G$ be a formal sum of rational places in a function field over $\mathbb{F}_q$ and let $D = P_1 + \cdots + P_n$, where all the $P_j$'s and the places in $G$ are different. An algebraic geometric code $C_{\mathcal{L}}(D, G)$ is the evaluation of the elements in the Riemann-Roch space $\mathcal{L}(G)$ in the places from $D$. It is then known that $C_{\mathcal{L}}(D, G_1) * C_{\mathcal{L}}(D, G_2)$ is contained in $C_{\mathcal{L}}(D, G_1 + G_2)$ and that $d(C_{\mathcal{L}}(D, G)) \ge n - \deg G$, where $\deg G$ denotes the degree of the divisor $G$. Hence, we can find a lower bound for $d(C_A^{*2})$ from (13), using the fact that, by the above observations, we can find algebraic geometric codes containing the $E_h$'s whose minimum distances we can control. We exemplify some specific constructions with algebraic geometric codes, more specifically Hermitian codes, in the following example.

Example 5.2:

We will not go into details about the Hermitian function field and codes, but we do mention that the Hermitian function field over $\mathbb{F}_{q^2}$ has $q^3 + 1$ rational places, where one of these places is the place at infinity. Denote the place at infinity by $P_\infty$ and the remaining rational places by $P_j$, for $j = 1, \dots, q^3$, and let $D = P_1 + \cdots + P_{q^3}$. Then a Hermitian code is given by the algebraic geometric code $C_{\mathcal{L}}(D, m P_\infty)$. This is a $[\, q^3, \; m - \frac{q(q-1)}{2} + 1, \; \ge q^3 - m \,]$ code as long as $q(q-1) - 2 < m < q^3$, see for instance [YK92]. Denote by

$$C_A = [\, C_{\mathcal{L}}(D, m_1 P_\infty), \; \dots, \; C_{\mathcal{L}}(D, m_s P_\infty) \,] \cdot A,$$

where $A$ is as in Theorem 5.1 and the $m_i$'s are chosen in the range above. With such a construction we have that

$$E_h \subseteq C_{\mathcal{L}}(D, M_h P_\infty), \quad \text{where } M_h \text{ is the largest } m_i + m_j \text{ appearing in the sum defining } E_h, \qquad (14)$$

from the observations about algebraic geometric codes above the example. Note that $M_h \le 2 \max_i m_i$, so if the $m_i$'s are chosen such that $2 \max_i m_i < q^3$, all the Hermitian codes in (14) satisfy that their degree is lower than $q^3$. Hence,