Information set decoding of Lee-metric codes over finite rings

01/23/2020 ∙ by Violetta Weger, et al. ∙ Universität Zürich ∙ Florida Atlantic University

Information set decoding (ISD) algorithms are the best known procedures to solve the decoding problem for general linear codes. These algorithms are hence used for codes without a visible structure, or for which efficient decoders exploiting the code structure are not known. Classically, ISD algorithms have been studied for codes in the Hamming metric. In this paper we switch from the Hamming metric to the Lee metric, and study ISD algorithms and their complexity for codes measured with the Lee metric over finite rings.


1. Introduction

The task of decoding a given code, also known as the syndrome decoding problem (SDP), is a fundamental issue in coding theory. In formulas, given a parity-check matrix $\mathbf{H} \in \mathbb{F}_q^{(n-k) \times n}$, a syndrome $\mathbf{s} \in \mathbb{F}_q^{n-k}$ and an integer $t$, solving the SDP consists in finding a vector $\mathbf{e} \in \mathbb{F}_q^{n}$, of weight not larger than $t$, such that $\mathbf{H}\mathbf{e}^{\top} = \mathbf{s}^{\top}$, where $\top$ denotes vector transposition. Note that such a formulation does not depend on the metric with which the code is embedded. A well-studied case is that of the Hamming metric, for which the SDP has been proven to be NP-hard [Berlekamp1978, barg1994some]; recently, the same hardness result has been extended to the rank metric case [gaborit2016hardness].
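To make the problem statement concrete, the following minimal Python sketch checks whether a candidate vector solves a given SDP instance; the function names and the pluggable `weight` parameter are our own illustrative choices, with the metric left generic since the formulation above is metric-agnostic.

```python
import numpy as np

def hamming_weight(e, q):
    """Hamming weight: number of non-zero entries modulo q."""
    return int(np.count_nonzero(np.asarray(e) % q))

def is_sdp_solution(H, s, e, t, q, weight=hamming_weight):
    """Check whether e solves the SDP instance (H, s, t) over Z_q,
    i.e. H e^T = s^T (mod q) and weight(e) <= t."""
    H, s, e = (np.asarray(a) % q for a in (H, s, e))
    return weight(e, q) <= t and np.array_equal(H @ e % q, s)

# toy instance over Z_2: e has Hamming weight 1 and matches the syndrome
H = [[1, 0, 1, 1], [0, 1, 1, 0]]
e = [0, 0, 1, 0]
s = [1, 1]
assert is_sdp_solution(H, s, e, t=1, q=2)
```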

The best known solvers for the Hamming SDP are information set decoding (ISD) algorithms, originally proposed by Prange in 1962 [Prange1962]. Prange's idea consists in iteratively testing randomly chosen information sets, until a set which does not overlap with the support of the unknown vector is found; the expected number of required iterations is given by the reciprocal of the probability that a randomly chosen set is indeed valid. Prange's ISD has been improved through several subsequent works [Lee1988, Leon1988, dumer, chabaud, Stern1994, canteaut1998new, canteautsendrier, sendrier, peters, may, bernstein2011smaller, Becker2012, hirose, niebuhr, klamti, interlando2018generalization]. One of these variants, due to Stern [Stern1994], is widely used in the literature and will be considered in the following, together with the original ISD by Prange. All these approaches increase the cost of one iteration but, on average, require a smaller number of iterations. Note that the best known algorithms for the rank metric SDP [Ourivski2002, Gaborit_rsd] share the same principle, since they are based on an iterative procedure where the number of iterations depends on the probability of making an initial correct guess.

The study of ISD algorithms finds several applications in coding theory. For example, the ISD principle is at the basis of the ordered statistics decoders that are used for soft-decision decoding of linear block codes [Fossorier1995, Baldi2016]. Another important field of application of ISD algorithms is code-based cryptography, since the security of many code-based public-key cryptosystems relies on the hardness of solving the decoding problem for a general linear block code [McEliece1978, Niederreiter1986]. Moreover, code-based signature and identification schemes exploit similar principles, since the adversary's ability to forge signatures or to cheat in proofs of knowledge depends on the difficulty of finding low-weight vectors associated with given syndromes [Stern1994, cfs, Veron97, Cayrel2010]. Code-based cryptosystems are nowadays attracting renewed interest, because of their intrinsic resistance to quantum attacks. In fact, there is no known way to exploit quantum algorithms to efficiently solve the SDP: quantum versions of ISD algorithms still have a complexity that grows exponentially in the weight of the unknown vector [bernstein2011smaller]. This well-assessed security places code-based cryptosystems among the most promising solutions for the post-quantum world [NISTreport2016].

All the above examples, however, rely on codes in the Hamming metric. The rationale of this work is to introduce techniques to solve the SDP for general codes in the Lee metric. To the best of our knowledge, this has not been done yet, except for some preliminary work in [Horleman2019]. In particular, our study can be useful to assess the potential applicability of the Lee metric in the design of new code-based cryptosystems. For such a purpose, starting from [Horleman2019], where Stern's ISD algorithm is converted to the ring $\mathbb{Z}_4$, we extend this work and propose algorithms inspired by Prange's and Stern's ISD to solve the Lee metric variant of the SDP over any integer residue ring $\mathbb{Z}_{p^s}$ whose size is a prime power. A detailed complexity analysis of these algorithms is provided. The most relevant conclusion of our analysis is that, under certain assumptions, the complexity of Prange's ISD algorithm in the Lee metric is lower bounded by that of Prange's ISD algorithm in the Hamming metric, reduced by a relatively small polynomial factor.

The paper is organized as follows. In Section 2 we introduce the notation used throughout the paper and give some preliminary notions on the Lee metric. In Section 3 we formulate some general properties of the Lee metric. In Section 4 we extend ISD algorithms to $\mathbb{Z}_{p^s}$ endowed with the Lee metric, and carry out a complexity analysis of these algorithms. In Section 5 we provide numerical results, and in Section 6 we draw some concluding remarks.

2. Notation and preliminaries

In the rest of the paper we set $q = p^s$, where $p$ is a prime number and $s$ a positive integer. We also denote with $\mathbb{Z}_q$ the ring of integers modulo $q$, and with $\mathbb{F}_q$ the finite field with $q$ elements. The cardinality of a set $A$ is denoted as $|A|$. We use bold lower case (respectively upper case) letters to denote vectors (respectively matrices). The identity matrix of size $m$ is denoted as $\mathrm{Id}_m$. Given a length-$n$ vector $\mathbf{a}$ and a set $S \subseteq \{1, \dots, n\}$, we denote with $\mathbf{a}_S$ the vector formed by the entries of $\mathbf{a}$ indexed by $S$; similarly, given a matrix $\mathbf{A}$ with $n$ columns, $\mathbf{A}_S$ denotes the matrix formed by the columns of $\mathbf{A}$ that are indexed by the elements in $S$. The support of a vector $\mathbf{a}$ is defined as $\mathrm{supp}(\mathbf{a}) = \{i \mid a_i \neq 0\}$. For $S \subseteq \{1, \dots, n\}$, we denote by $\mathbb{Z}_q^n(S)$ the set of vectors in $\mathbb{Z}_q^n$ having support in $S$.

Classically, an $[n, k]$ linear code $\mathcal{C}$ is a linear subspace of $\mathbb{F}_q^n$ of dimension $k$, endowed with the Hamming metric. The size of the code, denoted as $|\mathcal{C}|$, is the number of its codewords, i.e. $|\mathcal{C}| = q^k$. For an $[n, k]$ linear code $\mathcal{C}$ over $\mathbb{F}_q$ and $S \subseteq \{1, \dots, n\}$, we denote $\mathcal{C}_S = \{\mathbf{c}_S \mid \mathbf{c} \in \mathcal{C}\}$; in other words, we take the vectors formed by the entries of each codeword indexed by $S$ and put them in a set $\mathcal{C}_S$. We call $S$ of size $k$ an information set if $|\mathcal{C}_S| = |\mathcal{C}|$. Finally, we say that two codes are permutation equivalent iff their codewords coincide, except for a permutation of their symbols.
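As an illustration, the following sketch (our own, restricted to a prime field so that every non-zero pivot is invertible) tests the information set property by checking that the columns of a generator matrix indexed by $S$ form an invertible $k \times k$ submatrix, which is equivalent to $|\mathcal{C}_S| = |\mathcal{C}|$ in the field case.

```python
def is_information_set(G, S, q):
    """Check whether the size-k set S (0-based column indices) is an
    information set for the code generated by the k x n matrix G over the
    prime field F_q, via Gaussian elimination mod q on the submatrix G[:, S]."""
    k = len(G)
    M = [[G[i][j] % q for j in S] for i in range(k)]
    for c in range(k):
        piv = next((r for r in range(c, k) if M[r][c] != 0), None)
        if piv is None:
            return False  # no pivot in this column: submatrix is singular
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], -1, q)  # q prime, so the pivot is invertible
        M[c] = [(inv * x) % q for x in M[c]]
        for r in range(k):
            if r != c and M[r][c] != 0:
                f = M[r][c]
                M[r] = [(x - f * y) % q for x, y in zip(M[r], M[c])]
    return True

# a [3,2] code over F_5: positions {0,1} work, positions {0,2} do not
G = [[1, 0, 1], [0, 1, 0]]
assert is_information_set(G, [0, 1], 5)
assert not is_information_set(G, [0, 2], 5)
```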

The definitions above can be extended to a finite ring $R$. For such a purpose, let $n$ be a positive integer and let $R$ be a finite ring. $\mathcal{C}$ is called an $R$-linear code of length $n$ and type $|\mathcal{C}|$ if $\mathcal{C}$ is a submodule of $R^n$. Throughout this paper we restrict to the case $R = \mathbb{Z}_{p^s}$ and call $\mathcal{C}$ a ring linear code of length $n$ iff $\mathcal{C}$ is an additive subgroup of $\mathbb{Z}_{p^s}^n$.

For a ring linear code $\mathcal{C}$ over $\mathbb{Z}_{p^s}$ of length $n$ and type $p^{sk_1 + (s-1)k_2 + \cdots + k_s}$, where $(k_1, \dots, k_s)$ is a sequence of non-negative integers such that $K = \sum_{i=1}^{s} k_i \leq n$, we call a set $S$ of size $K$ a (ring linear) information set if $|\mathcal{C}_S| = |\mathcal{C}|$.

Proposition 1.

Let $\mathcal{C}$ be a linear code over $\mathbb{Z}_{p^s}$ of length $n$ and type $p^{sk_1 + (s-1)k_2 + \cdots + k_s}$. Then $\mathcal{C}$ is permutation equivalent to a code having the following systematic parity-check matrix of size $(n - k_1) \times n$:

$$\mathbf{H} = \begin{pmatrix} \mathbf{A}_{1,1} & \mathbf{A}_{1,2} & \cdots & \mathbf{A}_{1,s-1} & \mathbf{A}_{1,s} & \mathrm{Id}_{n-K} \\ p\,\mathbf{A}_{2,1} & p\,\mathbf{A}_{2,2} & \cdots & p\,\mathbf{A}_{2,s-1} & p\,\mathrm{Id}_{k_s} & \mathbf{0} \\ \vdots & \vdots & \iddots & \vdots & \vdots & \vdots \\ p^{s-1}\,\mathbf{A}_{s,1} & p^{s-1}\,\mathrm{Id}_{k_2} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{0} \end{pmatrix},$$

where $\mathbf{A}_{1,j} \in \mathbb{Z}_{p^s}^{(n-K) \times k_j}$ and $\mathbf{A}_{i,j} \in \mathbb{Z}_{p^{s-i+1}}^{k_{s-i+2} \times k_j}$ for $i \in \{2, \dots, s\}$.

3. Properties of the Lee metric

For $a \in \mathbb{Z}_q$, identified with its representative in $\{0, 1, \dots, q-1\}$, we define the Lee value to be

$$\mathrm{wt}_L(a) = \min\{a,\, q - a\}.$$

Then, for $\mathbf{a} \in \mathbb{Z}_q^n$, we define the Lee weight as

$$\mathrm{wt}_L(\mathbf{a}) = \sum_{i=1}^{n} \mathrm{wt}_L(a_i).$$

For $\mathbf{a}, \mathbf{b} \in \mathbb{Z}_q^n$, the Lee distance is defined as

$$d_L(\mathbf{a}, \mathbf{b}) = \mathrm{wt}_L(\mathbf{a} - \mathbf{b}).$$

A code embedded with the Lee distance is called a Lee code. Because of linearity, the minimum distance of a Lee code is the minimum Lee weight of a non-zero codeword, that is

$$d_L(\mathcal{C}) = \min\{\mathrm{wt}_L(\mathbf{c}) \mid \mathbf{c} \in \mathcal{C},\ \mathbf{c} \neq \mathbf{0}\}.$$
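These definitions translate directly into code; a minimal sketch (the function names are ours):

```python
def lee_value(a, q):
    """Lee value of a symbol in Z_q: min(a, q - a) for the representative in {0, ..., q-1}."""
    a %= q
    return min(a, q - a)

def lee_weight(v, q):
    """Lee weight of a vector over Z_q: sum of the Lee values of its entries."""
    return sum(lee_value(a, q) for a in v)

def lee_distance(u, v, q):
    """Lee distance between two vectors over Z_q: Lee weight of their difference."""
    return lee_weight([a - b for a, b in zip(u, v)], q)

# over Z_7 the symbol 6 has Lee value 1, since 6 = -1 (mod 7)
assert lee_value(6, 7) == 1
assert lee_weight([1, 6, 3], 7) == 1 + 1 + 3
assert lee_distance([0, 0], [6, 1], 7) == 2
```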

Lemma 2.

[roth2006introduction, Problem 10.15] Let $X$ be the random variable with uniform distribution over $\mathbb{Z}_q$; its average Lee weight is

$$\mathbb{E}[\mathrm{wt}_L(X)] = \begin{cases} \dfrac{q}{4} & \text{if } q \text{ is even}, \\[4pt] \dfrac{q^2 - 1}{4q} & \text{if } q \text{ is odd}. \end{cases}$$

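Lemma 2 can be double-checked exhaustively for small $q$; a snippet of ours:

```python
from fractions import Fraction

def average_lee_weight(q):
    """Exact expected Lee value of a uniform random element of Z_q."""
    return Fraction(sum(min(a, q - a) for a in range(q)), q)

# matches the closed forms of Lemma 2
assert average_lee_weight(8) == Fraction(8, 4)                # q even: q/4
assert average_lee_weight(9) == Fraction(9 ** 2 - 1, 4 * 9)   # q odd: (q^2 - 1)/(4q)
```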
Let us now compute the multiplicity of vectors in $\mathbb{Z}_q^n$ having Lee weight $w$, which we denote as $F(n, w)$.

We first count the vectors in $\mathbb{Z}_q^n$ with Lee weight $w$ and support of size $\sigma$; this quantity, which we denote as $F(n, w, \sigma)$, is characterized as follows.

Lemma 3.

Let $q$ be a prime power, let $w$ be an integer such that $0 < w \leq n \lfloor q/2 \rfloor$, and let $\sigma$ be an integer such that $0 < \sigma \leq n$. Then $F(n, w, \sigma)$ is equal to

(3.1)

if $q$ is even, and to

(3.2)

if $q$ is odd.

Proof.

If $\sigma > w$ or $w > \sigma \lfloor q/2 \rfloor$, there are no vectors with the required parameters, since a vector having a support of size $\sigma$ has Lee weight at least $\sigma$ and at most $\sigma \lfloor q/2 \rfloor$. In the case where $q$ is even, there exists only one element of $\mathbb{Z}_q$ having Lee value $q/2$; thus, if $w = \sigma q/2$, we can only choose this element in the $\sigma$ non-zero positions, which can be done in $\binom{n}{\sigma}$ different ways. Now we check whether $w < q/2$ or $w \geq q/2$. In the first case the vector cannot have an entry of Lee value $q/2$; thus we can choose the $\sigma$ non-zero positions, compose the wanted Lee weight $w$ into $\sigma$ parts and, for each choice of a part, there exists also the choice of its negative, hence a factor $2^{\sigma}$. In the other case, firstly, an entry of the vector could have Lee value $q/2$, so we cannot simply multiply by $2^{\sigma}$ anymore and, secondly, the compositions of $w$ into $\sigma$ parts may also contain parts greater than $\lfloor q/2 \rfloor$, which, however, is the largest possible Lee value. For this reason, we have to define the count recursively. We start with all possible compositions of the desired Lee weight $w$ into $\sigma$ parts and then take away the compositions that we cannot have, starting from a part equal to $\lfloor q/2 \rfloor + 1$ and proceeding until the largest possible part. For each forbidden part we subtract the corresponding compositions: a factor $2$ is justified by the fact that there are always two choices for an element having Lee value smaller than $q/2$, and a factor $\sigma$ accounts for the position of the entry having the too-large Lee value. The case of a part equal to $q/2$ has to be taken away only once since, when $q$ is even, we only have one element having Lee value $q/2$. The case in which $q$ is odd is simpler, since every non-zero Lee value is attained by exactly two elements and no special case needs to be treated.

Corollary 4.

Let $q$ be a prime power and let $w$ be an integer such that $0 < w \leq n \lfloor q/2 \rfloor$. Then

(3.3)   $F(n, w) = \displaystyle\sum_{\sigma = 1}^{\min\{n, w\}} F(n, w, \sigma).$
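Since (3.1) and (3.2) involve recursively defined counts, a brute-force enumeration is handy for validating implementations on small parameters. The following sketch (our own naming; exponential in $n$, so only usable for tiny instances) counts $F(n, w, \sigma)$ and $F(n, w)$ directly from the definitions.

```python
from itertools import product

def count_weight_support(n, w, sigma, q):
    """F(n, w, sigma): vectors in Z_q^n with Lee weight w and support size sigma."""
    count = 0
    for v in product(range(q), repeat=n):
        if (sum(min(a, q - a) for a in v) == w
                and sum(1 for a in v if a != 0) == sigma):
            count += 1
    return count

def count_weight(n, w, q):
    """F(n, w): all vectors in Z_q^n of Lee weight w, summed over support sizes as in (3.3)."""
    return sum(count_weight_support(n, w, s, q) for s in range(min(n, w) + 1))

# over Z_5, length 2, Lee weight 2: (0, +-2), (+-2, 0) and (+-1, +-1) give 8 vectors
assert count_weight(2, 2, 5) == 8
```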

An upper bound on (3.3), also observed in [roth2006introduction], and a lower bound can easily be derived, as reported next.

Corollary 5.

Let $q$ be a prime power and let $w$ be an integer such that $0 < w \leq n \lfloor q/2 \rfloor$. Then, $F(n, w)$ is not larger than

(3.4)   $\displaystyle\sum_{\sigma = 1}^{\min\{n, w\}} \binom{n}{\sigma} \binom{w - 1}{\sigma - 1} 2^{\sigma}$

and, when $w \leq n$ and $q > 2$, not smaller than

(3.5)   $\displaystyle\binom{n}{w} 2^{w}.$

The proof of the upper bound is given in [roth2006introduction]. Observe that the upper bound is exact for $q > 2w$. For the lower bound, we only consider the vectors in $\mathbb{Z}_q^n$ having Lee weight $w$ and entries in $\{0, 1, -1\}$, i.e., those with maximum support size.

Simple computations show that the addends of the sum in (3.4) are monotonically increasing in $\sigma$ iff, for all considered values of $\sigma$,

(3.6)

Under this assumption, the sum in (3.4) is upper bounded by $\min\{n, w\}$ times its largest addend, i.e., the one corresponding to $\sigma = \min\{n, w\}$.

The above properties are exploited in the following section, in order to compute the complexity of ISD algorithms.

4. Information set decoding over $\mathbb{Z}_{p^s}$

All ISD algorithms are characterized by the same approach: first randomly choosing a set of positions in the code, and then applying some operations that, if the chosen set has a relatively small intersection with the support of the error vector, allow one to retrieve the error vector itself. For each ISD variant, the average computational cost is estimated by multiplying the complexity of each iteration by the expected number of performed iterations; the latter quantity corresponds to the reciprocal of the probability that a random choice of the set leads to a successful iteration. Then, for all ISD algorithms, we have a computational cost that is estimated as $C = C_{\mathrm{iter}} / P_{\mathrm{succ}}$, where $C_{\mathrm{iter}}$ is the expected number of (binary) operations that are performed in each iteration and $P_{\mathrm{succ}}$ is the probability that the choice of the set of positions is indeed successful. We now derive formulas for the complexity of Prange's and Stern's ISD algorithms, when adapted to the Lee metric.

4.1. Adaptation of Prange’s ISD to the Lee metric

The idea of Prange's algorithm is to first find an information set that does not overlap with the support of the searched error vector $\mathbf{e}$; when such a set is found, permuting $\mathbf{H}$ and computing its row echelon form is enough to reveal the error vector. Our proposed adaptation of Prange's ISD is reported in Algorithm 1, and a self-contained sketch is given after the listing. We first find an information set $S$, and then bring the matrix $\mathbf{H}$ into systematic form by multiplying it by an invertible matrix $\mathbf{U}$. For the sake of clarity, we assume that the information set is $S = \{1, \dots, K\}$, so that $\mathbf{U}\mathbf{H}$ takes the block systematic form of Proposition 1. Since we assume that no errors occur in the information set, we have that $\mathbf{e} = (\mathbf{0}, \mathbf{e}')$, with $\mathrm{wt}_L(\mathbf{e}') = t$. Thus, if we also partition the new syndrome $\mathbf{U}\mathbf{s}^{\top}$ into parts of the same sizes as the (row-)parts of $\mathbf{U}\mathbf{H}$, and we multiply $\mathbf{U}\mathbf{H}$ by the unknown $\mathbf{e}$, the identity blocks of the systematic form match the entries of $\mathbf{e}'$ directly with the corresponding parts of the syndrome. It follows that $\mathbf{e}'$ can be read off $\mathbf{U}\mathbf{s}^{\top}$, hence we are only left to check the Lee weight of the transformed syndrome.

Input: the parity-check matrix $\mathbf{H} \in \mathbb{Z}_{p^s}^{(n-k_1) \times n}$, the syndrome $\mathbf{s} \in \mathbb{Z}_{p^s}^{n-k_1}$, the target Lee weight $t \in \mathbb{N}$.

Output: $\mathbf{e} \in \mathbb{Z}_{p^s}^{n}$ with $\mathbf{H}\mathbf{e}^{\top} = \mathbf{s}^{\top}$ and $\mathrm{wt}_L(\mathbf{e}) \leq t$.

1: Choose an information set $S \subset \{1, \dots, n\}$ of size $K$ and define $S^c = \{1, \dots, n\} \setminus S$.
2: Compute an invertible matrix $\mathbf{U}$ such that $\mathbf{U}\mathbf{H}$ is in the systematic form of Proposition 1 (up to the permutation induced by $S$).
3: Compute the transformed syndrome $\mathbf{U}\mathbf{s}^{\top}$.
4: if $\mathrm{wt}_L(\mathbf{U}\mathbf{s}^{\top}) \leq t$ then
5:      Return $\mathbf{e}$ such that $\mathbf{e}_S = \mathbf{0}$ and $\mathbf{e}_{S^c}$ is read off $\mathbf{U}\mathbf{s}^{\top}$ through the identity blocks.
6: Start over with Step 1 and a new selection of $S$.
Algorithm 1 Prange's Algorithm over $\mathbb{Z}_{p^s}$ in the Lee metric
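Below is a minimal, self-contained Python sketch of the logic of Algorithm 1. To keep the linear algebra simple it assumes $q$ is prime, so that Gaussian elimination always finds invertible pivots, whereas Algorithm 1 works over $\mathbb{Z}_{p^s}$ through the block systematic form of Proposition 1; all function names are our own.

```python
import random

def lee_wt(v, q):
    return sum(min(a % q, q - a % q) for a in v)

def systematize(H, s, cols, q):
    """Row-reduce H mod prime q so that the columns indexed by `cols` become
    the identity, applying the same row operations to the syndrome s.
    Returns the transformed syndrome, or None if those columns are singular."""
    H = [row[:] for row in H]
    s = s[:]
    r = len(H)
    for i, c in enumerate(cols):
        piv = next((j for j in range(i, r) if H[j][c] % q != 0), None)
        if piv is None:
            return None
        H[i], H[piv] = H[piv], H[i]
        s[i], s[piv] = s[piv], s[i]
        inv = pow(H[i][c], -1, q)  # q prime, so the pivot is invertible
        H[i] = [inv * x % q for x in H[i]]
        s[i] = inv * s[i] % q
        for j in range(r):
            if j != i and H[j][c] % q != 0:
                f = H[j][c]
                H[j] = [(x - f * y) % q for x, y in zip(H[j], H[i])]
                s[j] = (s[j] - f * s[i]) % q
    return s

def prange_lee(H, s, t, q, max_iters=100_000):
    """Prange-style ISD over a prime field Z_q in the Lee metric: guess an
    error-free information set, systematize on the complementary positions
    and read the candidate error off the transformed syndrome."""
    r, n = len(H), len(H[0])
    for _ in range(max_iters):
        redundancy = random.sample(range(n), r)  # complement of the guessed information set
        s2 = systematize(H, s, redundancy, q)
        if s2 is not None and lee_wt(s2, q) <= t:
            e = [0] * n
            for i, c in enumerate(redundancy):
                e[c] = s2[i]
            return e  # satisfies H e^T = s^T and wt_L(e) <= t by construction
    return None
```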

4.2. Complexity analysis: Prange’s ISD in the Lee metric

In this section we provide a complexity estimate of our adaptation of Prange's ISD to the Lee metric. First of all, we assume that adding two elements of $\mathbb{Z}_{p^s}$ costs $\lceil \log_2(p^s) \rceil$ binary operations, and that multiplying two elements costs $\lceil \log_2(p^s) \rceil^2$ binary operations [Menezes1996, Hankerson2010]. An iteration of Prange's ISD only consists in bringing $\mathbf{H}$ into systematic form and applying the same row operations to the syndrome; thus, the cost of one iteration can be assumed equal to that of computing $\mathbf{U}(\mathbf{H} \mid \mathbf{s}^{\top})$, from which we obtain the broad estimate

(4.1)

The success probability is given by having chosen the correct weight distribution of $\mathbf{e}$; in this case, we require that the support of $\mathbf{e}$ does not overlap with the chosen information set, hence

(4.2)   $P_{\mathrm{succ}} = \dfrac{F(n - K, t)}{F(n, t)}.$

The estimated computational cost of Prange's ISD in the Lee metric is therefore

(4.3)   $C_{\mathrm{Prange}} = \dfrac{C_{\mathrm{iter}}}{P_{\mathrm{succ}}},$

with $C_{\mathrm{iter}}$ as in (4.1) and $P_{\mathrm{succ}}$ as in (4.2).

We now analytically compare the complexity of Prange's ISD in the Lee and in the Hamming metric, exploiting the properties derived in Section 3. Under suitable assumptions on $t$, needed for the bounds of Corollary 5 to apply, we derive the following chain of inequalities

(4.4)

where the final term corresponds to the success probability of an iteration of Prange's ISD in the Hamming metric, searching for an error vector of Hamming weight $t$ in a code with length $n$ and dimension $K$; a crude approximation of this probability, which is particularly tight when $t$ is small with respect to $n$, is given in [bernstein2010].

Since the cost of one iteration does not depend on the considered metric, this simple analysis shows that the complexities of Prange's algorithm over the Lee metric and over the Hamming metric differ at most by a polynomial factor. For all known ISD variants, the complexity grows asymptotically as $2^{ct(1 + o(1))}$, where $c$ is a constant that depends on the code rate [CantoTorres]; different ISD variants essentially differ only in the value of $c$. Our analysis shows that, for the Lee metric, Prange's algorithm leads to an analogous expression. Thus, our results indicate that some SDP instances in the Lee metric are as hard as their corresponding Hamming counterparts, up to a relatively small polynomial factor; we leave further studies (such as, for instance, NP-hardness results) for future works.

4.3. Stern’s ISD adaptation to the Lee metric

As a further contribution of this paper, we improve upon the basic algorithm by Prange by adapting the idea of Stern's ISD to the Lee metric. In this algorithm, we relax the requirements on the weight distribution: we allow the error vector to have a small Lee weight within the information set, and require the existence of a (small) set of size $z$, called the zero-window, within the redundancy set, where no errors occur. Our proposed adaptation of Stern's algorithm to the Lee metric is reported in Algorithm 2.

For the sake of readability, in the following explanation we consider an information set $S = \{1, \dots, K\}$ and a zero-window $Z = \{K + 1, \dots, K + z\}$, so that the unknown vector splits as $\mathbf{e} = (\mathbf{e}_1, \mathbf{0}, \mathbf{e}_2)$, with $\mathbf{e}_1$ supported in $S$ and $\mathbf{e}_2$ supported in the remaining $n - K - z$ positions. The systematic form of $\mathbf{H}$ is obtained as in Section 4.1, i.e., $\mathbf{U}\mathbf{H}$ takes the block form of Proposition 1. Using the same row-partitions for the syndrome $\mathbf{U}\mathbf{s}^{\top}$ and multiplying $\mathbf{U}\mathbf{H}$ by the unknown $\mathbf{e}$, we get the following three conditions

(4.5)
(4.6)
(4.7)

We want to choose $\mathbf{e}_1$ such that it has support in the information set and Lee weight $2v$, whereas $\mathbf{e}_2$ should have a support disjoint from that of $\mathbf{e}_1$, and the remaining Lee weight $t - 2v$. More precisely, we test $\mathbf{e}_1 = \mathbf{x} + \mathbf{y}$, where $\mathbf{x}$ and $\mathbf{y}$ have disjoint supports of respective maximal sizes $m_1$ and $m_2$, and equal Lee weight $v$. In order for (4.5) and (4.7) to be satisfied, we construct two sets $\mathcal{L}_1$ and $\mathcal{L}_2$, where $\mathcal{L}_1$ contains the equations regarding $\mathbf{x}$ and $\mathcal{L}_2$ contains the equations regarding $\mathbf{y}$. For all choices of $\mathbf{x}$ and $\mathbf{y}$, we check whether the corresponding entries of $\mathcal{L}_1$ and $\mathcal{L}_2$ coincide; if they do, we call this a collision. For each collision, we construct $\mathbf{e}_2$ from (4.6) and check whether $\mathbf{e}_2$ has the missing Lee weight $t - 2v$: if this occurs, we have found the error vector $\mathbf{e}$.

All these considerations are incorporated in Algorithm 2, where we allow any choice of the information set, of the zero-window and of the partition.

Input: the parity-check matrix $\mathbf{H} \in \mathbb{Z}_{p^s}^{(n-k_1) \times n}$, the syndrome $\mathbf{s} \in \mathbb{Z}_{p^s}^{n-k_1}$, $t \in \mathbb{N}$, and parameters $v, z, m_1, m_2 \in \mathbb{N}$ such that $2v \leq t$, $z \leq n - K$ and $m_1 + m_2 = K$.

Output: $\mathbf{e} \in \mathbb{Z}_{p^s}^{n}$ with $\mathbf{H}\mathbf{e}^{\top} = \mathbf{s}^{\top}$ and $\mathrm{wt}_L(\mathbf{e}) \leq t$.

1: Choose an information set $S \subset \{1, \dots, n\}$ of size $K$.
2: Choose a zero-window $Z \subset \{1, \dots, n\} \setminus S$ of size $z$ and define $J = \{1, \dots, n\} \setminus (S \cup Z)$.
3: Choose a uniform random partition of $S$ into disjoint sets $S_1$ and $S_2$ of size $m_1$ and $m_2$, respectively.
4: Find an invertible matrix $\mathbf{U}$ such that $\mathbf{U}\mathbf{H}$ is in the systematic form of Proposition 1.
5: Compute $\mathbf{U}\mathbf{s}^{\top}$ and partition it according to (4.5)-(4.7).
6: Compute the set $\mathcal{L}_1$ consisting of all triples formed by $\mathbf{x}$ and its contributions to the left-hand sides of (4.5) and (4.7), where $\mathbf{x}$ has support in $S_1$ and Lee weight $v$.
7: Compute the set $\mathcal{L}_2$ consisting of all triples formed by $\mathbf{y}$ and the corresponding parts of the syndrome minus the contributions of $\mathbf{y}$ in (4.5) and (4.7), where $\mathbf{y}$ has support in $S_2$ and Lee weight $v$.
8: for each $\mathbf{x} \in \mathcal{L}_1$ do
9:      for each $\mathbf{y} \in \mathcal{L}_2$ colliding with $\mathbf{x}$ do
10:          if the vector $\mathbf{e}_2$ obtained from (4.6) has Lee weight $t - 2v$ then
11:              Return $\mathbf{e} = \mathbf{x} + \mathbf{y} + \mathbf{e}_2$.
12: Start over with Step 1 and a new selection of $S$.
Algorithm 2 Stern's Algorithm over $\mathbb{Z}_{p^s}$ in the Lee metric
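The core of Algorithm 2 is the meet-in-the-middle search of Steps 6-11. The following sketch (our own simplification, with hypothetical names) stores $\mathcal{L}_1$ in a hash map keyed by the partial syndrome on the matched equations, so that collisions are found without comparing all pairs; deriving $\mathbf{e}_2$ via (4.6) and testing its Lee weight, including the early-abort trick discussed in the next section, are left to the caller.

```python
from itertools import product

def lee_val(a, q):
    return min(a % q, q - a % q)

def weight_v_vectors(support, n, v, q):
    """All length-n vectors of Lee weight v with support inside `support`
    (plain enumeration; a real implementation would enumerate these smartly)."""
    for vals in product(range(q), repeat=len(support)):
        if sum(lee_val(a, q) for a in vals) == v:
            e = [0] * n
            for pos, a in zip(support, vals):
                e[pos] = a
            yield e

def matmul_mod(A, e, q):
    """Matrix-vector product mod q, returned as a hashable tuple."""
    return tuple(sum(a * x for a, x in zip(row, e)) % q for row in A)

def stern_collisions(A, s_win, part1, part2, n, v, q):
    """Yield pairs (x, y), each of Lee weight v and supported on the two
    halves of the information set, such that A (x + y) = s_win (mod q),
    i.e. the matched zero-window equations are satisfied."""
    lookup = {}
    for x in weight_v_vectors(part1, n, v, q):
        lookup.setdefault(matmul_mod(A, x, q), []).append(x)
    for y in weight_v_vectors(part2, n, v, q):
        # a collision requires A x = s_win - A y (mod q)
        target = tuple((si - wi) % q
                       for si, wi in zip(s_win, matmul_mod(A, y, q)))
        for x in lookup.get(target, []):
            yield x, y  # candidate pair; caller derives e2 and checks its weight
```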

4.4. Complexity analysis: Stern’s ISD in the Lee metric

In this section we derive the computational cost of our adapted Stern’s ISD algorithm in the Lee metric; to this end, we make the following considerations.

  • The cost of bringing $\mathbf{H}$ into systematic form is as in Section 4.2, i.e., it requires the number of binary operations estimated in (4.1).

  • To build the set $\mathcal{L}_1$, we need to compute the partial syndromes appearing in (4.5) and (4.7) for all $\mathbf{x}$ with support in $S_1$ and Lee weight $v$; since $S_1$ is fixed, such vectors have cardinality $F(m_1, v)$, and each of them requires a matrix-vector product.

  • The set $\mathcal{L}_2$ is constructed similarly, but in the first two entries we need to subtract the corresponding part of the syndrome from each resulting vector; thus, constructing $\mathcal{L}_2$ costs slightly more, because of the additional vector subtractions.

  • The average number of collisions between the matched entries of $\mathcal{L}_1$ and $\mathcal{L}_2$ is given by $|\mathcal{L}_1| \cdot |\mathcal{L}_2|$ divided by the number of possible values of those entries. For each collision we need to compute $\mathbf{e}_2$ from (4.6) and check that its Lee weight is not larger than $t - 2v$. We exploit the concept of early abort [bernstein2011smaller], i.e., we stop the computation as soon as the maximum Lee weight is exceeded. Since a random element of $\mathbb{Z}_{p^s}$ has the average Lee weight given in Lemma 2, on average we only need to compute a fraction of the entries of the vector, each one costing one row-by-vector product. This implies a further cost term proportional to the number of collisions.

So, the number of binary operations that, on average, are performed in one iteration of Algorithm 2 is estimated as the sum of the above contributions, which we denote by $C_{\mathrm{iter}}$. The success probability of one iteration corresponds to the probability of correctly guessing the weight distribution of the unknown $\mathbf{e}$, which in this case is given by

$$P_{\mathrm{succ}} = \frac{F(m_1, v)\, F(m_2, v)\, F(n - K - z,\, t - 2v)}{F(n, t)}.$$

The estimate of the overall complexity is given by

(4.8)   $C_{\mathrm{Stern}} = \dfrac{C_{\mathrm{iter}}}{P_{\mathrm{succ}}}.$

5. Numerical Results

In this section we assess the complexity of ISD algorithms over a finite ring endowed with the Lee metric, by using (4.3) and (4.8), and we compare it with that of ISD algorithms over a finite field endowed with the Hamming metric. Notice that we need to fix the type of the code: the cost of the ISD algorithms decreases with increasing $k_1$, thus the lowest cost in the Lee metric is obtained for $k_1 = K$, i.e., for free codes. Some numerical examples are reported in Table 1, where several values of the ring size $q$, code length $n$, dimension and error weight are considered, and the cost is expressed in bits, i.e., as the base-2 logarithm of the work factor of the attack. Notice that, for space reasons, Hamming, Lee, Prange and Stern are denoted as H., L., P. and S., respectively.

  q     n     k     t    H.-P.   L.-P.   H.-S.   L.-S.
  256   1000  500   -    75.08   73.88   64.13   59.83
  256   1000  600   -    87.91   86.10   75.76   70.68
  1024  1000  600   -    88.55   86.74   78.78   70.80
  243   200   100   50   88.90   75.94   78.30   60.01
  256   200   100   50   88.93   75.97   78.45   59.93
  256   1000  700   -    104.70  101.84  91.30   85.14
  343   300   150   75   121.95  102.33  109.34  82.86
  256   1000  500   100  141.85  133.86  125.69  113.53
  2401  2000  1600  -    179.99  174.41  166.23  151.80
  512   500   250   125  186.60  153.67  171.44  128.44
  2401  2000  1600  100  283.35  266.74  267.69  233.79

Table 1. Cost of Stern’s and Prange’s ISD algorithms in the Hamming and Lee metric, for different parameter sets.