# Decoding High-Order Interleaved Rank-Metric Codes

This paper presents an algorithm for decoding homogeneous interleaved codes of high interleaving order in the rank metric. The new decoder is an adaptation of the Hamming-metric decoder by Metzner and Kapturowski (1990) and is guaranteed to correct all rank errors of weight up to $d-2$ whose rank over the large base field of the code equals the number of errors, where $d$ is the minimum rank distance of the underlying code. In contrast to previously known decoding algorithms, the new decoder works for any rank-metric code, not only Gabidulin codes. It is purely based on linear-algebraic computations and has an explicit and easy-to-handle success condition. Furthermore, a lower bound on the decoding success probability for random errors of a given weight is derived. The relation of the new algorithm to existing interleaved decoders in the special case of Gabidulin codes is given.

01/14/2020


## I Introduction

Interleaved codes are direct sums of codes of the same length, where the summands are termed constituent codes and their number is called the interleaving order. By assuming that errors occur in certain patterns, it is possible to correct more errors than half the minimum distance.

In the Hamming metric, interleaved codes have been considered for replicated file disagreement location [1], correcting burst errors in data-storage applications [2], suitable outer codes in concatenated codes [1, 3, 4, 5, 6, 7], an ALOHA-like random-access scheme [4], decoding non-interleaved codes beyond half the minimum distance by power decoding [8, 9, 10, 11], and recently for code-based cryptography [12, 13]. In all these works, the errors are assumed to be matrices with only a few non-zero columns which are added to an interleaved codeword matrix where each row is a codeword of the constituent code. This means that the errors affect the same positions in the constituent codes (burst errors) and the number of errors is given by the number of non-zero columns of the error matrix.

There exist several decoding algorithms for interleaved Reed–Solomon codes that, for interleaving order at least two, decode beyond half the minimum distance and also beyond the Johnson radius [2, 14, 15, 16, 17, 18, 19, 7, 20, 21, 22, 23, 24]. Beyond the unique decoding radius, decoding sometimes fails, but only with a small probability, which can be bounded from above and roughly scales inversely with the field size of the constituent code.

Already in 1990, Metzner and Kapturowski [1] introduced a decoding algorithm for interleaved codes in the Hamming metric, where the constituent codes are all the same (homogeneous interleaved codes) and have minimum distance $d$. The decoder can correct up to $d-2$ errors, given that the interleaving order is high enough (i.e., at least the number of errors) and that the rank of the error matrix equals the number of errors. We want to stress that this decoding algorithm works for interleaved codes with arbitrary constituent codes, is purely based on linear-algebraic operations (i.e., row operations on matrices), and has complexity quadratic in the code length and linear in the interleaving order. This is remarkable since the decoder can correct most error patterns up to almost the minimum distance of the code without assuming any side information about the error (e.g., as for erasures, where the error positions are known). The result by Metzner and Kapturowski was later independently rediscovered in [4] and generalized to dependent errors by the same authors in [25].

Rank-metric codes are sets of vectors over an extension field, whose elements can be interpreted as matrices over a subfield and whose distance is given by the rank of their difference. The codes were independently introduced in [26, 27, 28], together with their most famous code class, Gabidulin codes, which can be seen as the rank-metric analogs of Reed–Solomon codes.

Interleaved codes in the rank metric were introduced in [29] and [30], and have found applications in code-based cryptography [31, 32, 33, 34], network coding [30, 35], and construction and decoding of space-time codes [36, 37, 38, 39, 40, 41].

Similar to the Hamming metric, in the rank metric, the errors occur as additive matrices, but their structure is different: the row spaces of the constituent errors are contained in a relatively small joint row space whose dimension is the number of errors. This (joint) row space is usually seen as the rank-metric analog to the support of an error [27, 28, 42, 43, 44, 45].

There are several algorithms for decoding interleaved Gabidulin codes [29, 35, 46], as well as efficient variants thereof [47, 48, 49, 50], which are able to correct most (but not all) error patterns up to a certain number of errors that is beyond half the minimum distance for interleaving orders at least two.

In this paper, we adapt Metzner and Kapturowski’s algorithm to the rank metric. As a result, we obtain an algorithm that can correct up to $d-2$ rank errors with a homogeneous interleaved code over an arbitrary constituent code of minimum rank distance $d$. The success conditions are the same as in the Hamming metric: the interleaving order must be large enough and the rank of the error matrix (over the extension field) must be equal to the number of errors. The new algorithm also works for arbitrary linear rank-metric codes, including, but not limited to, Gabidulin [26, 27, 28], generalized Gabidulin [51, 52, 53], low-rank-parity-check (LRPC) [54], Loidreau’s Gabidulin-like [55], or twisted Gabidulin codes [56, 57] and their generalizations [58, 59]. The algorithm is again purely based on linear-algebraic operations and has a complexity of

$$\tilde{O}\left(\max\{n^2\ell,\, n^3\}\, m\right)$$

operations over the subfield $\mathbb{F}_q$, where $\tilde{O}$ neglects logarithmic factors, $n$ is the code length, $\ell$ is the interleaving order, and $m$ is the extension degree of the extension field $\mathbb{F}_{q^m}$ over the subfield $\mathbb{F}_q$. We prove that for random errors of a given weight and growing interleaving order, the success probability gets arbitrarily close to $1$. Further, we derive sufficient conditions on the error for which the decoder is able to correct more than $d-2$ errors and present an adaptation to certain heterogeneous codes. In addition, we show that by viewing a homogeneous interleaved code as a linear code over a large extension field, one obtains a (non-interleaved) linear rank-metric code and the proposed decoder corrects almost any error of rank weight up to $d-2$ in this code. Finally, we prove that in the case of Gabidulin codes, the new decoder succeeds under the same conditions as the known decoding algorithms.

The structure of this paper is as follows. In Section II, we introduce notation, give definitions, and recall the Hamming-metric algorithm by Metzner and Kapturowski. In Section III, we propose the new algorithm, prove its correctness, analyze its complexity, compare it to the algorithm in the Hamming metric, and give an example. In Section IV, we show further results, including the success probability of the new decoder for random errors, sufficient conditions to successfully decode more than $d-2$ errors, an adaptation to heterogeneous interleaved codes, and relations to existing decoders. Conclusions and open problems are given in Section V.

## II Preliminaries

### II-A Notation

Let $q$ be a power of a prime and let $\mathbb{F}_q$ denote the finite field of order $q$ and $\mathbb{F}_{q^m}$ its extension field of order $q^m$. Any element of $\mathbb{F}_q$ can be seen as an element of $\mathbb{F}_{q^m}$, and $\mathbb{F}_{q^m}$ is an $m$-dimensional vector space over $\mathbb{F}_q$.

We use $\mathbb{F}^{m\times n}$ to denote the set of all $m\times n$ matrices over a field $\mathbb{F}$ and $\mathbb{F}^n = \mathbb{F}^{1\times n}$ for the set of all row vectors of length $n$ over $\mathbb{F}$. Rows and columns of $m\times n$ matrices are indexed by $1,\dots,m$ and $1,\dots,n$, where $A_{i,j}$ is the element in the $i$-th row and $j$-th column of the matrix $\mathbf{A}$. The transposition of a matrix is indicated by the superscript $\top$, and $\mathrm{ref}(\mathbf{A})$ refers to a reduced row echelon form of $\mathbf{A}$. Further, we define the set of integers $[a:b] := \{a, a+1, \dots, b\}$ and the submatrix notation

$$\mathbf{A}_{[a:b],[c:d]} := \begin{bmatrix} A_{a,c} & \dots & A_{a,d} \\ \vdots & \ddots & \vdots \\ A_{b,c} & \dots & A_{b,d} \end{bmatrix}.$$

Let $\gamma_1, \dots, \gamma_m$ be an ordered basis of $\mathbb{F}_{q^m}$ over $\mathbb{F}_q$. By utilizing the vector space isomorphism $\mathbb{F}_{q^m} \cong \mathbb{F}_q^m$, we can relate each vector $\mathbf{a} \in \mathbb{F}_{q^m}^n$ to a matrix $\mathbf{A} \in \mathbb{F}_q^{m\times n}$ according to

$$\mathrm{ext}: \mathbb{F}_{q^m}^n \to \mathbb{F}_q^{m\times n}, \quad \mathbf{a} = [a_1, \dots, a_n] \mapsto \mathbf{A} = \begin{bmatrix} A_{1,1} & \dots & A_{1,n} \\ \vdots & \ddots & \vdots \\ A_{m,1} & \dots & A_{m,n} \end{bmatrix},$$

where $a_j = \sum_{i=1}^m A_{i,j}\gamma_i$ for all $j \in [1:n]$. Further, we extend the definition of $\mathrm{ext}$ to matrices by extending each row and then vertically concatenating the resulting matrices. A property that will be used in the paper is that if $\mathbf{B}$ is a matrix from the small field $\mathbb{F}_q$ and $\mathbf{a} \in \mathbb{F}_{q^m}^n$, then $\mathrm{ext}(\mathbf{a}\mathbf{B}) = \mathrm{ext}(\mathbf{a})\mathbf{B}$.
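To make the $\mathrm{ext}$ map concrete, here is a minimal sketch for $q=2$, $m=2$, i.e., $\mathbb{F}_4$ with basis $(1, \alpha)$, representing field elements as 2-bit integers (bit $i$ is the coefficient of $\alpha^i$). The function names and toy data are our own illustration, not from the paper.

```python
def ext(a, m=2):
    # Column j of the result holds the F_2 coordinates of a[j] w.r.t. the
    # basis (1, alpha): bit i of the integer a[j] is the coefficient of alpha^i.
    return [[(x >> i) & 1 for x in a] for i in range(m)]

a = [1, 2, 3]                           # the vector (1, alpha, alpha + 1) over F_4
B = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]   # a matrix over the subfield F_2

# a * B over F_4: because B is binary, entry j is an XOR of selected a_i
aB = [0, 0, 0]
for j in range(3):
    for i in range(3):
        if B[i][j]:
            aB[j] ^= a[i]

# The property used in the paper: ext(a B) = ext(a) B (the latter computed over F_2)
extA_B = [[sum(ext(a)[r][i] * B[i][j] for i in range(3)) % 2 for j in range(3)]
          for r in range(2)]
assert ext(aB) == extA_B
```

The check passes precisely because multiplication by subfield elements commutes with the coordinate expansion.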

Let $\mathcal{V} \subseteq \mathbb{F}^n$ be a vector space. By $\mathcal{V}^\perp$, we indicate the dual space of $\mathcal{V}$, i.e.,

$$\mathcal{V}^\perp := \{\mathbf{v}' \in \mathbb{F}^n : \mathbf{v}'\mathbf{v}^\top = 0 \ \forall\, \mathbf{v} \in \mathcal{V}\}.$$

In the following, let $\mathbb{F} \in \{\mathbb{F}_q, \mathbb{F}_{q^m}\}$. We deliberately allow $\mathbb{F}$ to be the extension field or a subfield thereof. Since then always $\mathbb{F} \subseteq \mathbb{F}_{q^m}$, operations between elements of $\mathbb{F}$ and $\mathbb{F}_{q^m}$ are well-defined. This will be used several times throughout the paper. The ($\mathbb{F}$-)span of vectors $\mathbf{v}_1, \dots, \mathbf{v}_l$ is defined by the ($\mathbb{F}$-)vector space

$$\langle \mathbf{v}_1, \dots, \mathbf{v}_l \rangle_{\mathbb{F}} = \left\{ \sum_{i=1}^l a_i \mathbf{v}_i : a_i \in \mathbb{F} \right\}.$$

The ($\mathbb{F}$-)row space $\mathcal{R}_{\mathbb{F}}(\mathbf{A})$ of a matrix $\mathbf{A}$ is the ($\mathbb{F}$-)vector space spanned by its rows. The (right) ($\mathbb{F}$-)kernel of a matrix $\mathbf{A}$ is the ($\mathbb{F}$-)vector space given by

$$\mathcal{K}_{\mathbb{F}}(\mathbf{A}) := \{\mathbf{v} \in \mathbb{F}^n : \mathbf{A}\mathbf{v}^\top = \mathbf{0}\}.$$

Note that in the case of a matrix $\mathbf{A}$ over $\mathbb{F}_{q^m}$, we can write and compute the $\mathbb{F}_q$-kernel as $\mathcal{K}_{\mathbb{F}_q}(\mathbf{A}) = \mathcal{K}_{\mathbb{F}_q}(\mathrm{ext}(\mathbf{A}))$. We define the $\mathbb{F}_{q^m}$-rank of a matrix to be

$$\mathrm{rk}_{q^m}(\mathbf{A}) := \dim_{\mathbb{F}_{q^m}}(\mathcal{R}_{\mathbb{F}_{q^m}}(\mathbf{A}))$$

and its $\mathbb{F}_q$-rank as

$$\mathrm{rk}_q(\mathbf{A}) := \dim_{\mathbb{F}_q}(\mathcal{R}_{\mathbb{F}_q}(\mathrm{ext}(\mathbf{A}))).$$

Note that the latter rank equals the $\mathbb{F}_q$-dimension of the $\mathbb{F}_q$-column span of the matrix (and, obviously, of its extension $\mathrm{ext}(\mathbf{A})$). For the same matrix $\mathbf{A}$, the $\mathbb{F}_{q^m}$- and $\mathbb{F}_q$-rank can be different. In general, we have $\mathrm{rk}_{q^m}(\mathbf{A}) \le \mathrm{rk}_q(\mathbf{A})$, where equality holds if and only if the reduced row echelon form of $\mathbf{A}$ has only entries in $\mathbb{F}_q$.
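As a toy illustration of the gap between the two ranks (our own example, with $\mathbb{F}_4$ elements encoded as 2-bit integers and a hard-coded multiplication table):

```python
MUL4 = [[0, 0, 0, 0],      # multiplication table of F_4 = F_2[x]/(x^2 + x + 1);
        [0, 1, 2, 3],      # elements: 0, 1, 2 (= alpha), 3 (= alpha + 1)
        [0, 2, 3, 1],
        [0, 3, 1, 2]]

def rank_gf2(M):
    """Rank of a binary matrix via Gaussian elimination over F_2."""
    M = [row[:] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col]:
                M[i] = [a ^ b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2], [2, 3]]                        # over F_4: second row = alpha * first row
assert A[1] == [MUL4[2][x] for x in A[0]]   # F_4-dependent rows, so rk_{q^m}(A) = 1

# ext(A): expand each row into its 2 x 2 binary matrix and stack vertically
extA = [[(x >> i) & 1 for x in row] for row in A for i in range(2)]
assert rank_gf2(extA) == 2                  # rk_q(A) = 2 > rk_{q^m}(A) = 1
```

Here the rows are $\mathbb{F}_4$-dependent but $\mathbb{F}_2$-independent, matching the inequality $\mathrm{rk}_{q^m}(\mathbf{A}) \le \mathrm{rk}_q(\mathbf{A})$.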

Further, throughout this paper, we use $[a] := \{1, \dots, a\}$ for any integer $a$.

### II-B Rank-Metric Codes

The rank norm $\mathrm{rk}_q(\mathbf{a})$ of a vector $\mathbf{a} \in \mathbb{F}_{q^m}^n$ is the rank of the matrix representation $\mathbf{A} = \mathrm{ext}(\mathbf{a})$ over $\mathbb{F}_q$, i.e.,

$$\mathrm{rk}_q(\mathbf{a}) := \mathrm{rk}_q(\mathbf{A}).$$

The rank distance between $\mathbf{a}$ and $\mathbf{b}$ (with $\mathbf{A} = \mathrm{ext}(\mathbf{a})$ and $\mathbf{B} = \mathrm{ext}(\mathbf{b})$) is defined by

$$d_R(\mathbf{a}, \mathbf{b}) := \mathrm{rk}_q(\mathbf{a} - \mathbf{b}) = \mathrm{rk}_q(\mathbf{A} - \mathbf{B}).$$

A linear code $\mathcal{C}[n,k,d]$ over $\mathbb{F}_{q^m}$ is a $k$-dimensional subspace of $\mathbb{F}_{q^m}^n$ with minimum rank distance $d$, where

$$d := \min_{\substack{\mathbf{a}, \mathbf{b} \in \mathcal{C} \\ \mathbf{a} \ne \mathbf{b}}} \{\mathrm{rk}_q(\mathbf{a} - \mathbf{b})\} = \min_{\mathbf{a} \in \mathcal{C} \setminus \{\mathbf{0}\}} \{\mathrm{rk}_q(\mathbf{a})\}.$$

Gabidulin codes are the first-known and most-studied class of rank-metric codes. They are defined as follows.

###### Definition 1 (Gabidulin code, [26, 27, 28]).

A Gabidulin code over $\mathbb{F}_{q^m}$ of length $n \le m$ and dimension $k$ is defined by its generator matrix

$$\mathbf{G} = \begin{bmatrix} g_1 & g_2 & \dots & g_n \\ g_1^{[1]} & g_2^{[1]} & \dots & g_n^{[1]} \\ \vdots & \vdots & \ddots & \vdots \\ g_1^{[k-1]} & g_2^{[k-1]} & \dots & g_n^{[k-1]} \end{bmatrix},$$

where $g_1, \dots, g_n \in \mathbb{F}_{q^m}$ are linearly independent over $\mathbb{F}_q$ and $[i] := q^i$ denotes the $i$-th Frobenius power.

Gabidulin codes are MRD codes, i.e., $d = n-k+1$, and can decode uniquely and efficiently any error of rank weight up to $\lfloor \frac{d-1}{2} \rfloor$.
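As a quick sanity check (our own toy construction, not from the paper), one can build the Moore-structured generator matrix for $q=2$, $m=3$, $n=3$, $k=2$ over $\mathbb{F}_8 = \mathbb{F}_2[x]/(x^3+x+1)$ and verify the MRD property $d = n-k+1$ by brute force over all nonzero codewords:

```python
IRRED = 0b1011  # x^3 + x + 1, defining F_8; elements are 3-bit integers

def gf8_mul(a, b):
    """Carry-less multiplication modulo the irreducible polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= IRRED
        b >>= 1
    return r

def rank_gf2(M):
    """Rank of a binary matrix via Gaussian elimination over F_2."""
    M = [row[:] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col]:
                M[i] = [a ^ b for a, b in zip(M[i], M[r])]
        r += 1
    return r

g = [1, 2, 4]                                # (1, alpha, alpha^2): F_2-independent
G = [g, [gf8_mul(x, x) for x in g]]          # Moore matrix: row 2 is g_i^{[1]} = g_i^2

def rank_weight(c):                          # rk_q(c): F_2-rank of the 3x3 expansion
    return rank_gf2([[(x >> i) & 1 for x in c] for i in range(3)])

weights = [rank_weight([gf8_mul(m1, G[0][j]) ^ gf8_mul(m2, G[1][j]) for j in range(3)])
           for m1 in range(8) for m2 in range(8) if (m1, m2) != (0, 0)]
assert min(weights) == 3 - 2 + 1             # MRD: d = n - k + 1 = 2
```

Exhausting all $8^2-1$ nonzero messages confirms that the minimum rank weight meets the Singleton-like bound with equality.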

Besides Gabidulin codes and variants thereof based on different automorphisms [51, 52, 53], there are several other ($\mathbb{F}_{q^m}$-)linear rank-metric code constructions, for instance: low-rank-parity-check (LRPC) codes [54], which have applications in code-based cryptography; Loidreau’s code class, which modifies Gabidulin codes for cryptographic purposes [55]; and twisted Gabidulin codes [56, 57], which were the first general family of non-Gabidulin MRD codes. There are also generalizations of twisted Gabidulin codes [58, 59] and other example codes for some explicit parameters [60].

### II-C Interleaved Codes

In this paper, we propose a new decoding algorithm for homogeneous interleaved codes, which are defined as follows.

###### Definition 2.

Let $\mathcal{C}[n,k,d]$ be a linear (rank- or Hamming-metric) code over $\mathbb{F}_{q^m}$ and $\ell$ be a positive integer. The corresponding ($\ell$-)interleaved code is defined by

$$\mathcal{IC}[\ell; n, k, d] := \left\{ \mathbf{C} = \begin{bmatrix} \mathbf{c}_1 \\ \mathbf{c}_2 \\ \vdots \\ \mathbf{c}_\ell \end{bmatrix} : \mathbf{c}_i \in \mathcal{C} \right\} \subseteq \mathbb{F}_{q^m}^{\ell \times n}.$$

We call $\mathcal{C}$ the constituent code and $\ell$ the interleaving order.

Note that any codeword of an interleaved code can be written as $\mathbf{C} = \mathbf{M}\mathbf{G}$, where $\mathbf{G} \in \mathbb{F}_{q^m}^{k\times n}$ is a generator matrix of the constituent code and $\mathbf{M} \in \mathbb{F}_{q^m}^{\ell\times k}$ is a message. This also directly implies that $\mathbf{H}\mathbf{C}^\top = \mathbf{0}$ for any codeword $\mathbf{C}$, where $\mathbf{H} \in \mathbb{F}_{q^m}^{(n-k)\times n}$ is a parity-check matrix of the constituent code.

### II-D Error Model and Support

As an error model, we consider additive error matrices of a specific structure, depending on the chosen metric. The goal of decoding is to recover a codeword $\mathbf{C}$ from a received word

$$\mathbf{R} = \mathbf{C} + \mathbf{E} \in \mathbb{F}_{q^m}^{\ell\times n}.$$

We outline the error models for both Hamming and rank metric since we will often discuss analogies of the Hamming and rank case throughout the paper. Furthermore, we recall the important notion of support of an error.

#### II-D1 Hamming Metric

In the Hamming metric, an error (of a non-interleaved code) of weight $t_H$ is a vector having exactly $t_H$ non-zero entries. It is natural to define the support of the error as the set of indices of these non-zero positions, and many algebraic decoding algorithms aim at recovering the support of an error since it is easy to retrieve the error values afterwards.

For interleaved codes in the Hamming metric, errors of weight $t_H$ are considered to be matrices that have exactly $t_H$ non-zero columns. This means that errors occur at the same positions in the constituent codewords. A natural generalization of the support of the error is thus the set of indices of non-zero columns, i.e.,

$$\mathrm{supp}_H(\mathbf{E}) := \{j : j\text{-th column of } \mathbf{E} \text{ is non-zero}\}.$$

The number of errors, or Hamming weight $t_H$ of the error $\mathbf{E}$, is then defined as the cardinality of the support. Since $\mathbf{E}$ has only $t_H$ non-zero columns, we can decompose it into two matrices

$$\mathbf{E} = \mathbf{A}\mathbf{B}, \quad (1)$$

where $\mathbf{A}$ consists of the non-zero columns of $\mathbf{E}$ and the rows of $\mathbf{B}$ are the corresponding identity vectors of the error positions.

#### II-D2 Rank Metric

In the rank metric, an error of weight $t$, in the non-interleaved case, is a vector $\mathbf{e} \in \mathbb{F}_{q^m}^n$ whose $\mathbb{F}_q$-rank (i.e., the $\mathbb{F}_q$-rank of its matrix representation $\mathbf{E} = \mathrm{ext}(\mathbf{e})$) is $t$. It has been noted in the literature several times that the row (or column) space of the matrix representation of the error shares many important properties with the support notion in the Hamming metric, see, e.g., [27, 28, 42, 43, 44, 45]. We therefore define the (rank) support of an error to be the row space of its matrix representation. Then, the rank weight equals the dimension of its support.

In the interleaved case, an error of weight $t$ is a matrix $\mathbf{E} \in \mathbb{F}_{q^m}^{\ell\times n}$ with $\mathbb{F}_q$-rank $t$, cf. [29, 46] (the paper [35] considers a different error model, but the algorithm in [35] can be reformulated to work with the model considered here, cf. [61, Section 4.1]). Note that the matrix entries are in general over the large field $\mathbb{F}_{q^m}$, but the rank is taken over $\mathbb{F}_q$. Analogous to the case of a single vector, we define the rank support of a matrix $\mathbf{E}$ to be the $\mathbb{F}_q$-row space of the extended matrix $\mathrm{ext}(\mathbf{E})$, i.e.,

$$\mathrm{supp}_R(\mathbf{E}) := \mathcal{R}_{\mathbb{F}_q}(\mathrm{ext}(\mathbf{E})).$$

Thus, the number of errors, or rank weight of the error, equals the $\mathbb{F}_q$-dimension of the error’s support. Similar to (1), we can decompose the error matrix as follows.

###### Lemma 1 (see, e.g., [62, Theorem 1]).

Let $\mathbf{E} \in \mathbb{F}_{q^m}^{\ell\times n}$ be an error matrix with $\mathrm{rk}_q(\mathbf{E}) = t$. Then, it can be decomposed into

$$\mathbf{E} = \mathbf{A}\mathbf{B},$$

where $\mathbf{A} \in \mathbb{F}_{q^m}^{\ell\times t}$ and $\mathbf{B} \in \mathbb{F}_q^{t\times n}$ both have full $\mathbb{F}_q$-rank $t$, cf. the right part of Figure 2. The matrices $\mathbf{A}$ and $\mathbf{B}$ are unique up to elementary $\mathbb{F}_q$-column and $\mathbb{F}_q$-row operations, respectively, and the rows of $\mathbf{B}$ are a basis of the error support $\mathrm{supp}_R(\mathbf{E})$.

For the two metrics, we will illustrate analogies of the notions of support and the decompositions, (1) and Lemma 1, in Figure 2 (see Section III-E, a few pages ahead).
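A toy instance of the decomposition in Lemma 1 over $\mathbb{F}_4$ (elements as 2-bit integers; the concrete matrices and helper names are our own illustration):

```python
def rank_gf2(M):
    """Rank of a binary matrix via Gaussian elimination over F_2."""
    M = [row[:] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col]:
                M[i] = [a ^ b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# E = A * B with A over F_4 and B binary with full F_2-row rank t = 2.
A = [[1, 2], [3, 1]]            # ell x t over the large field
B = [[1, 0, 1], [0, 1, 1]]      # t x n over F_2: its rows span the rank support

# Because B is binary, each entry of E = A * B is an XOR of entries of A.
E = [[0, 0, 0], [0, 0, 0]]
for i in range(2):
    for j in range(3):
        for k in range(2):
            if B[k][j]:
                E[i][j] ^= A[i][k]

# supp_R(E) is the F_2-row space of ext(E); it coincides with the row space of B:
extE = [[(x >> b) & 1 for x in row] for row in E for b in range(2)]
assert rank_gf2(extE) == 2              # rank weight t = 2
assert rank_gf2(extE + B) == 2          # rows of B lie in (and span) supp_R(E)
```

Appending the rows of $\mathbf{B}$ to $\mathrm{ext}(\mathbf{E})$ without increasing the rank confirms that the rows of $\mathbf{B}$ are a basis of the support.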

### II-E Metzner–Kapturowski Algorithm for Decoding High-Order Interleaved Codes in the Hamming Metric

In [1], Metzner and Kapturowski proposed a Hamming-metric decoding algorithm for interleaved codes with high interleaving order, i.e., $\ell \ge t_H$, where $t_H$ is the number of errors. The algorithm is generic as it works with any code of minimum Hamming distance $d$. It was shown that the proposed algorithm always retrieves the transmitted codeword if $t_H \le d-2$ and if the non-zero columns of the error matrix are linearly independent, i.e., $\mathrm{rk}(\mathbf{E}) = t_H$. The algorithm is given in Algorithm 1. Furthermore, an illustration of the algorithm can be found in the left part of Figure 2 (see Section III-E, some pages ahead), which compares the classical Hamming-metric Metzner–Kapturowski algorithm with the new algorithm for rank-metric codes.

**Algorithm 1: Metzner–Kapturowski Algorithm [1]**

Input: parity-check matrix $\mathbf{H}$, received word $\mathbf{R}$. Output: transmitted codeword $\mathbf{C}$.

1. $\mathbf{S} \leftarrow \mathbf{H}\mathbf{R}^\top \in \mathbb{F}_{q^m}^{(n-k)\times\ell}$.
2. Determine $\mathbf{P} \in \mathbb{F}_{q^m}^{(n-k)\times(n-k)}$ s.t. $\mathbf{P}\mathbf{S} = \mathrm{ref}(\mathbf{S})$.
3. $\mathbf{H}_\mathrm{sub} \leftarrow (\mathbf{P}\mathbf{H})_{[t_H+1:n-k],[1:n]} \in \mathbb{F}_{q^m}^{(n-k-t_H)\times n}$.
4. Determine $\mathbf{B} \in \mathbb{F}_q^{t_H\times n}$ s.t. the columns of $\mathbf{B}$ that correspond to the zero columns of $\mathbf{H}_\mathrm{sub}$ form an identity matrix and the remaining columns of $\mathbf{B}$ are zero.
5. Determine $\mathbf{A} \in \mathbb{F}_{q^m}^{\ell\times t_H}$ s.t. $(\mathbf{H}\mathbf{B}^\top)\mathbf{A}^\top = \mathbf{S}$.
6. $\mathbf{C} \leftarrow \mathbf{R} - \mathbf{A}\mathbf{B} \in \mathbb{F}_{q^m}^{\ell\times n}$.
7. Return $\mathbf{C}$.

We observe that the algorithm first determines the error positions, i.e., $\mathrm{supp}_H(\mathbf{E})$, by bringing the syndrome matrix $\mathbf{S}$ into reduced row echelon form and applying the same transformation to $\mathbf{H}$. The matrix $\mathbf{H}_\mathrm{sub}$, which consists of the last $n-k-t_H$ rows of the transformed matrix $\mathbf{P}\mathbf{H}$, then has zero columns exactly at the error positions. After the error positions are determined, erasure decoding is performed.
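The following sketch runs Algorithm 1 over $\mathbb{F}_2$ for a 2-interleaved $[7,3,4]$ simplex code with $t_H = 2$ errors. The concrete matrices, helper names, and example data are our own illustration, not from [1].

```python
def rref_gf2(M):
    """Reduced row echelon form over F_2; returns (R, P) with P*M = R."""
    M = [row[:] for row in M]
    n = len(M)
    P = [[int(i == j) for j in range(n)] for i in range(n)]
    piv = 0
    for col in range(len(M[0])):
        if piv >= n:
            break
        r = next((i for i in range(piv, n) if M[i][col]), None)
        if r is None:
            continue
        M[piv], M[r] = M[r], M[piv]
        P[piv], P[r] = P[r], P[piv]
        for i in range(n):
            if i != piv and M[i][col]:
                M[i] = [a ^ b for a, b in zip(M[i], M[piv])]
                P[i] = [a ^ b for a, b in zip(P[i], P[piv])]
        piv += 1
    return M, P

def matmul2(A, B):
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*B)] for row in A]

def transpose(M):
    return [list(col) for col in zip(*M)]

def metzner_kapturowski(H, R, t):
    S = matmul2(H, transpose(R))                # step 1: syndrome matrix
    refS, P = rref_gf2(S)                       # step 2: P*S = ref(S)
    PH = matmul2(P, H)
    H_sub = [PH[i] for i in range(len(refS)) if not any(refS[i])]   # step 3
    n = len(H[0])
    err_pos = [j for j in range(n) if all(row[j] == 0 for row in H_sub)]
    B = [[int(j == p) for j in range(n)] for p in err_pos]          # step 4
    # step 5: solve (H B^T) A^T = S; unique since any d-1 columns of H are independent
    HBt = matmul2(H, transpose(B))
    _, T = rref_gf2(HBt)
    At = matmul2(T, S)[:t]                      # pivots occupy the first t rows
    E = matmul2(transpose(At), B)
    return [[r ^ e for r, e in zip(rr, er)] for rr, er in zip(R, E)]  # step 6

# Example: [7,3,4] simplex code (H generates the dual [7,4,3] Hamming code)
G = [[0,0,0,1,1,1,1], [0,1,1,0,0,1,1], [1,0,1,0,1,0,1]]
H = [[1,1,1,0,0,0,0], [1,0,0,1,1,0,0], [0,1,0,1,0,1,0], [1,1,0,1,0,0,1]]
M = [[1,0,1], [0,1,1]]
C = matmul2(M, G)                               # interleaved codeword (ell = 2)
E = [[0,1,0,0,0,0,0], [0,0,0,0,1,0,0]]          # burst error: t_H = 2, rank 2
R = [[c ^ e for c, e in zip(cr, er)] for cr, er in zip(C, E)]
assert metzner_kapturowski(H, R, t=2) == C
```

Note that $d = 4$ here, so $t_H = 2 = d - 2$ is the largest guaranteed-decodable number of errors, and the two non-zero error columns are linearly independent as required.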

## III Decoding High-Order Interleaved Codes in the Rank Metric

In this section, we propose a new decoding algorithm for interleaved codes in the rank metric, which is an adaptation of Metzner and Kapturowski’s decoder to the rank metric and works under similar conditions for up to $t \le d-2$ errors:

1. High-order condition: The interleaving order is at least the number of errors, i.e., $\ell \ge t$.

2. Full-rank condition: The error matrix has full $\mathbb{F}_{q^m}$-rank, i.e., $\mathrm{rk}_{q^m}(\mathbf{E}) = t$.

In fact, the full-rank condition implies the high-order condition since the $\mathbb{F}_{q^m}$-rank of an $\ell\times n$ matrix is at most $\ell$. We will nevertheless mention both conditions for didactic reasons.

Throughout this section, we fix a rank-metric code $\mathcal{C}[n,k,d]$ over the field $\mathbb{F}_{q^m}$ and a parity-check matrix $\mathbf{H} \in \mathbb{F}_{q^m}^{(n-k)\times n}$ of $\mathcal{C}$. We want to retrieve a codeword $\mathbf{C}$ of the homogeneous $\ell$-interleaved code $\mathcal{IC}[\ell; n, k, d]$, given the received word

$$\mathbf{R} = \mathbf{C} + \mathbf{E} \in \mathbb{F}_{q^m}^{\ell\times n},$$

where $\mathbf{E}$ is an error matrix of rank weight $t$.

### III-A The Error Support

Similar to the Metzner–Kapturowski algorithm for the Hamming metric, our new algorithm is centered around retrieving the rank support $\mathrm{supp}_R(\mathbf{E})$ of the error matrix $\mathbf{E}$ from the syndrome matrix $\mathbf{S} = \mathbf{H}\mathbf{R}^\top$. As soon as $\mathrm{supp}_R(\mathbf{E})$ is known, we can recover the error using Lemma 2 below. The method is a form of erasure correction, i.e., the rank-metric analog of computing the error values given the error positions in the Hamming metric. For Gabidulin codes, this fact was already used in [27, 28] and can be efficiently implemented by error-erasure decoders, cf. [30, 63], or their fast variants [64, 65, 66]. In the general case, it has been an important ingredient of generic rank-syndrome decoding algorithms [42, 43, 44, 45], which are mostly based on guessing the error support and then computing the error. Since computing the error from its support is an important step of the new algorithm, we present the formal statement and proof, together with the resulting complexity, below for completeness.

###### Lemma 2 (see, e.g., [27, 28, 42, 43, 44, 45]).

Let $t \le d-1$, $\mathbf{B} \in \mathbb{F}_q^{t\times n}$ be a basis of the rank support $\mathrm{supp}_R(\mathbf{E})$ of an error matrix $\mathbf{E} \in \mathbb{F}_{q^m}^{\ell\times n}$, and $\mathbf{S} = \mathbf{H}\mathbf{E}^\top$ be the corresponding syndrome matrix. Then, the error is given by $\mathbf{E} = \mathbf{A}\mathbf{B}$, where $\mathbf{A} \in \mathbb{F}_{q^m}^{\ell\times t}$ is the unique solution of the linear system of equations

$$\mathbf{S} = (\mathbf{H}\mathbf{B}^\top)\mathbf{A}^\top. \quad (2)$$

Thus, $\mathbf{E}$ can be computed in $O(\max\{n^2\ell, n^3\})$ operations in $\mathbb{F}_{q^m}$ from $\mathbf{S}$ and $\mathbf{B}$.

###### Proof.

Since $\mathbf{B}$ is a basis of $\mathrm{supp}_R(\mathbf{E})$, there must be a matrix $\mathbf{A} \in \mathbb{F}_{q^m}^{\ell\times t}$ such that $\mathbf{E} = \mathbf{A}\mathbf{B}$. Since $\mathbf{S} = \mathbf{H}\mathbf{E}^\top$, $\mathbf{A}$ must fulfill (2). On the other hand, there can only be one matrix fulfilling (2) since, by [27, Theorem 1], the matrix $\mathbf{H}\mathbf{B}^\top$ has $\mathbb{F}_{q^m}$-rank $t$ due to $t \le d-1$ and $\mathrm{rk}_q(\mathbf{B}) = t$. The multiplications $\mathbf{H}\mathbf{B}^\top$ and $\mathbf{A}\mathbf{B}$ cost $O(n^3)$ and $O(n^2\ell)$ field operations, respectively, and solving the system (2) requires $O(\max\{n^2\ell, n^3\})$ operations in $\mathbb{F}_{q^m}$, which implies the complexity statement. ∎

### III-B How to Determine the Error Support

Our new decoding algorithm is based on retrieving the support of the error. The error itself can then be computed using the method implied by Lemma 2. In the following, we show how to obtain the error support from the syndrome and parity-check matrix.

Similar to Metzner and Kapturowski, we compute the syndrome matrix $\mathbf{S} = \mathbf{H}\mathbf{R}^\top$ as the product of the parity-check matrix $\mathbf{H}$ and the transposed received word $\mathbf{R}^\top$. Due to the properties of the parity-check matrix, we obtain

$$\mathbf{S} = \mathbf{H}\mathbf{E}^\top.$$

Then, we transform $\mathbf{S}$ into row echelon form. Since $\mathbf{S}$ has $\mathbb{F}_{q^m}$-rank at most $t$, which is smaller than its number of rows $n-k$, the resulting matrix has zero rows. We apply the same row operations used to obtain the echelon form of $\mathbf{S}$ to the parity-check matrix and consider the matrix $\mathbf{H}_\mathrm{sub}$, which consists of the rows of the resulting matrix $\mathbf{P}\mathbf{H}$ corresponding to the zero rows of the echelon form of $\mathbf{S}$. This process is illustrated in Figure 1. The following sequence of statements derives the main statement of this section, Theorem 6: the error support can be efficiently computed from $\mathbf{H}_\mathrm{sub}$.
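The whole pipeline (syndrome, echelon form, $\mathbf{H}_\mathrm{sub}$, support recovery via Theorem 6, and erasure decoding as in Lemma 2) can be sketched for a tiny instance: $q=2$, $m=3$, the one-dimensional $[3,1,3]$ Gabidulin code generated by $(1, \alpha, \alpha^2)$ over $\mathbb{F}_8$, and a single error ($t=1$, so $\ell=1$ suffices). All concrete matrices and helper names are our own illustration, not from the paper.

```python
IRRED = 0b1011                      # F_8 = F_2[x]/(x^3 + x + 1), elements 0..7

def gf8_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= IRRED
        b >>= 1
    return r

def gf8_inv(a):
    return next(y for y in range(1, 8) if gf8_mul(a, y) == 1)

def rref_gf8(M):
    """Row echelon form over F_8; returns (R, P) with P*M = R."""
    M = [row[:] for row in M]
    n = len(M)
    P = [[int(i == j) for j in range(n)] for i in range(n)]
    piv = 0
    for col in range(len(M[0])):
        if piv >= n:
            break
        r = next((i for i in range(piv, n) if M[i][col]), None)
        if r is None:
            continue
        M[piv], M[r] = M[r], M[piv]
        P[piv], P[r] = P[r], P[piv]
        f = gf8_inv(M[piv][col])
        M[piv] = [gf8_mul(f, x) for x in M[piv]]
        P[piv] = [gf8_mul(f, x) for x in P[piv]]
        for i in range(n):
            if i != piv and M[i][col]:
                f = M[i][col]
                M[i] = [x ^ gf8_mul(f, y) for x, y in zip(M[i], M[piv])]
                P[i] = [x ^ gf8_mul(f, y) for x, y in zip(P[i], P[piv])]
        piv += 1
    return M, P

H = [[2, 1, 0], [4, 0, 1]]          # parity-check matrix: H * (1, alpha, alpha^2)^T = 0
c = [6, 7, 5]                       # codeword 6 * (1, alpha, alpha^2)
E = [5, 5, 0]                       # error 5 * [1, 1, 0]: rk_q(E) = rk_{q^m}(E) = t = 1
R = [x ^ e for x, e in zip(c, E)]   # received word (one interleaved row)

S = [[0], [0]]                      # syndrome S = H R^T = H E^T
for i in range(2):
    for j in range(3):
        S[i][0] ^= gf8_mul(H[i][j], R[j])

refS, P = rref_gf8(S)
t = sum(1 for row in refS if any(row))      # number of errors (here 1)
PH = [[0, 0, 0] for _ in range(2)]
for i in range(2):
    for j in range(3):
        for k in range(2):
            PH[i][j] ^= gf8_mul(P[i][k], H[k][j])
H_sub = [PH[i] for i in range(2) if not any(refS[i])]

# Theorem 6: supp_R(E) is the F_2-kernel of ext(H_sub) (brute force over F_2^3)
ext_H_sub = [[(x >> b) & 1 for x in row] for row in H_sub for b in range(3)]
supp = [v for v in [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
        if any(v) and all(sum(r[j] * v[j] for j in range(3)) % 2 == 0 for r in ext_H_sub)]
B = list(supp[0])                   # basis of the 1-dimensional support

# Erasure decoding (Lemma 2): solve S = (H B^T) a for the scalar a, then E = a * B
HBt = [H[i][0] * B[0] ^ H[i][1] * B[1] ^ H[i][2] * B[2] for i in range(2)]
a = gf8_mul(S[0][0], gf8_inv(HBt[0]))
assert all(gf8_mul(HBt[i], a) == S[i][0] for i in range(2))
E_hat = [gf8_mul(a, b) for b in B]
assert E_hat == E and [r ^ e for r, e in zip(R, E_hat)] == c
```

The recovered support basis is $[1, 1, 0]$ and the error is reconstructed exactly, matching the guarantee for $t \le d-2 = 1$.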

###### Lemma 3.

Let $\mathbf{S} = \mathbf{H}\mathbf{E}^\top$ be the syndrome of an error $\mathbf{E}$ of rank weight $t$ and $\mathbf{P} \in \mathbb{F}_{q^m}^{(n-k)\times(n-k)}$ be a matrix of rank $n-k$ such that $\mathbf{P}\mathbf{S}$ is in row-echelon form. Then, at least $n-k-t$ rows of $\mathbf{P}\mathbf{S}$ are zero. Let $\mathbf{H}_\mathrm{sub}$ be the rows of $\mathbf{P}\mathbf{H}$ corresponding to the zero rows in $\mathbf{P}\mathbf{S}$. Then, $\mathbf{H}_\mathrm{sub}$ is a basis of

$$\mathcal{K}_{\mathbb{F}_{q^m}}(\mathbf{E}) \cap \mathcal{C}^\perp.$$
###### Proof.

Since $\mathbf{E}$ has $\mathbb{F}_q$-rank $t$, its $\mathbb{F}_{q^m}$-rank is at most $t$. Hence, the $\mathbb{F}_{q^m}$-rank of $\mathbf{S}$ is at most $t$, and at least $n-k-t$ of the rows are zero in its echelon form $\mathbf{P}\mathbf{S}$.

The rows of $\mathbf{P}\mathbf{H}$ (and thus of $\mathbf{H}_\mathrm{sub}$) are in the row space of $\mathbf{H}$, which is equal to $\mathcal{C}^\perp$. Furthermore, the rows of $\mathbf{H}_\mathrm{sub}$ are in the kernel of $\mathbf{E}$ since $\mathbf{H}_\mathrm{sub}\mathbf{E}^\top = \mathbf{0}$. It is left to show that the rows span the entire intersection. Write

$$\mathbf{P}\mathbf{S} = \begin{bmatrix}\mathbf{S}'\\ \mathbf{0}\end{bmatrix}, \quad \mathbf{P}\mathbf{H} = \begin{bmatrix}\mathbf{H}'\\ \mathbf{H}_\mathrm{sub}\end{bmatrix},$$

where $\mathbf{S}'$ has full rank and $\mathbf{H}'$ has as many rows as $\mathbf{S}'$. Let $\mathbf{h} = [\mathbf{v}_1, \mathbf{v}_2]\begin{bmatrix}\mathbf{H}'\\ \mathbf{H}_\mathrm{sub}\end{bmatrix}$ be a vector in the row space of $\mathbf{P}\mathbf{H}$ and in the kernel of $\mathbf{E}$. Then, we can write

$$\mathbf{0} = \mathbf{h}\mathbf{E}^\top = [\mathbf{v}_1, \mathbf{v}_2]\begin{bmatrix}\mathbf{H}'\\ \mathbf{H}_\mathrm{sub}\end{bmatrix}\mathbf{E}^\top = \mathbf{v}_1\mathbf{H}'\mathbf{E}^\top = \mathbf{v}_1\mathbf{S}'$$

due to $\mathbf{H}_\mathrm{sub}\mathbf{E}^\top = \mathbf{0}$. This implies that $\mathbf{v}_1 = \mathbf{0}$ since the rows of $\mathbf{S}'$ are linearly independent. Thus, $\mathbf{h}$ is in the row space of $\mathbf{H}_\mathrm{sub}$. ∎

Lemma 3 shows that the matrix $\mathbf{H}_\mathrm{sub}$ is connected to the kernel of the error. The next lemma proves that this kernel is closely connected to the rank support of the error if the $\mathbb{F}_{q^m}$-rank of the error is $t$ (full-rank condition). Note that we only required $\mathrm{rk}_{q^m}(\mathbf{E}) \le t$ in Lemma 3, which is a weaker condition.

###### Lemma 4.

Let $\mathbf{E}$ be an error of $\mathbb{F}_q$-rank $t$. If $\ell \ge t$ (high-order condition) and $\mathrm{rk}_{q^m}(\mathbf{E}) = t$ (full-rank condition), then

$$\mathcal{K}_{\mathbb{F}_{q^m}}(\mathbf{E}) = \mathcal{K}_{\mathbb{F}_{q^m}}(\mathbf{B}),$$

where $\mathbf{B} \in \mathbb{F}_q^{t\times n}$ is any basis of the error support $\mathrm{supp}_R(\mathbf{E})$.

###### Proof.

Let $\mathbf{E}$ have $\mathbb{F}_{q^m}$-rank $t$ (recall that the high-order condition is necessary for this) and $\mathbf{E} = \mathbf{A}\mathbf{B}$ be a decomposition as in Lemma 1. Then, $\mathbf{A}$ must have full $\mathbb{F}_{q^m}$-column rank $t$, so $\mathbf{A}\mathbf{x}^\top = \mathbf{0}$ only for $\mathbf{x} = \mathbf{0}$. Thus, for all $\mathbf{v} \in \mathbb{F}_{q^m}^n$, $\mathbf{E}\mathbf{v}^\top = \mathbf{0}$ if and only if $\mathbf{B}\mathbf{v}^\top = \mathbf{0}$, which is equivalent to

$$\mathcal{K}_{\mathbb{F}_{q^m}}(\mathbf{E}) = \mathcal{K}_{\mathbb{F}_{q^m}}(\mathbf{A}\mathbf{B}) = \mathcal{K}_{\mathbb{F}_{q^m}}(\mathbf{B}). \qquad ∎$$

In order to prove that the error support can be computed from $\mathbf{H}_\mathrm{sub}$, we require the following property of $\mathbf{H}_\mathrm{sub}$.

###### Lemma 5.

Let $\mathbf{H}_\mathrm{sub}$ be defined as in Lemma 3 and $\mathbf{h} \in \mathcal{R}_{\mathbb{F}_{q^m}}(\mathbf{H}_\mathrm{sub})$. Then each row of $\mathrm{ext}(\mathbf{h})$ is in $\mathcal{R}_{\mathbb{F}_q}(\mathrm{ext}(\mathbf{H}_\mathrm{sub}))$.

###### Proof.

Since $\mathbf{h} \in \mathcal{R}_{\mathbb{F}_{q^m}}(\mathbf{H}_\mathrm{sub})$, the vector can be written as

$$\mathbf{h} = \sum_{i=1}^{n-k-t} a_i \mathbf{H}_{\mathrm{sub},i}, \quad (3)$$

where $a_i \in \mathbb{F}_{q^m}$ and $\mathbf{H}_{\mathrm{sub},i}$ denotes the $i$-th row of $\mathbf{H}_\mathrm{sub}$. Using the vector and matrix representation of finite field elements, equation (3) can be mapped to

$$\mathrm{ext}(\mathbf{h}) = \sum_{i=1}^{n-k-t} \mathbf{M}_{a_i}\,\mathrm{ext}(\mathbf{H}_{\mathrm{sub},i}),$$

where $\mathbf{M}_{a_i} \in \mathbb{F}_q^{m\times m}$ is the matrix representation of $a_i$ over $\mathbb{F}_q$ for a given basis, cf. [67]. Since $\mathbf{M}_{a_i}\,\mathrm{ext}(\mathbf{H}_{\mathrm{sub},i})$ for $i \in [1:n-k-t]$ is over $\mathbb{F}_q$, each row of $\mathrm{ext}(\mathbf{h})$ is in $\mathcal{R}_{\mathbb{F}_q}(\mathrm{ext}(\mathbf{H}_\mathrm{sub}))$. ∎

By combining the three lemmas above, the following theorem shows that the rank support of the error can be computed from $\mathbf{H}_\mathrm{sub}$ under the high-order and full-rank conditions. In the Hamming metric, Metzner and Kapturowski required the number of errors to be at most $d-2$ since they used the fact that any $d-1$ columns of the parity-check matrix are linearly independent. Here, we obtain the same condition as we use the rank-metric analog of this statement, [27, Theorem 1]: if the parity-check matrix is multiplied from the right by any matrix over the small field of rank $t \le d-1$, then the resulting matrix has rank $t$.

###### Theorem 6.

Let $\mathbf{E}$ be an error of $\mathbb{F}_q$-rank $t \le d-2$. If $\ell \ge t$ (high-order condition) and $\mathrm{rk}_{q^m}(\mathbf{E}) = t$ (full-rank condition), then

$$\mathrm{supp}_R(\mathbf{E}) = \mathcal{K}_{\mathbb{F}_q}(\mathrm{ext}(\mathbf{H}_\mathrm{sub})),$$

where $\mathbf{H}_\mathrm{sub}$ is defined as in Lemma 3.

###### Proof:

Lemmas 3 and 4 show that the high-order and full-rank conditions imply

$$\mathcal{R}_{\mathbb{F}_{q^m}}(\mathbf{H}_\mathrm{sub}) = \mathcal{K}_{\mathbb{F}_{q^m}}(\mathbf{B}) \cap \mathcal{C}^\perp. \quad (4)$$

In the following, we prove that if we consider the $\mathbb{F}_q$-row space of the extended matrix $\mathrm{ext}(\mathbf{H}_\mathrm{sub})$ instead of the $\mathbb{F}_{q^m}$-row space of $\mathbf{H}_\mathrm{sub}$, the result is directly the $\mathbb{F}_q$-kernel of $\mathbf{B}$, i.e.,

$$\mathcal{R}_{\mathbb{F}_q}(\mathrm{ext}(\mathbf{H}_\mathrm{sub})) = \mathcal{K}_{\mathbb{F}_q}(\mathbf{B}).$$

Together with $\mathcal{K}_{\mathbb{F}_q}(\mathrm{ext}(\mathbf{H}_\mathrm{sub})) = \mathcal{R}_{\mathbb{F}_q}(\mathrm{ext}(\mathbf{H}_\mathrm{sub}))^\perp$ and $\mathcal{K}_{\mathbb{F}_q}(\mathbf{B})^\perp = \mathcal{R}_{\mathbb{F}_q}(\mathbf{B}) = \mathrm{supp}_R(\mathbf{E})$, this implies the claim.

First, we prove that $\mathcal{R}_{\mathbb{F}_q}(\mathrm{ext}(\mathbf{H}_\mathrm{sub})) \subseteq \mathcal{K}_{\mathbb{F}_q}(\mathbf{B})$. It suffices to show that any row of $\mathrm{ext}(\mathbf{H}_\mathrm{sub})$ is in the $\mathbb{F}_q$-kernel of $\mathbf{B}$. Such a row is again a row of $\mathrm{ext}(\mathbf{v})$ for some row $\mathbf{v}$ of $\mathbf{H}_\mathrm{sub}$. Since obviously every row of $\mathrm{ext}(\mathbf{v})$ has entries in $\mathbb{F}_q$, it is left to show that $\mathbf{B}\,\mathrm{ext}(\mathbf{v})^\top = \mathbf{0}$. This follows from (4), which implies that $\mathbf{B}\mathbf{v}^\top = \mathbf{0}$ and thus

$$\mathbf{0} = \mathrm{ext}(\mathbf{B}\mathbf{v}^\top) = \mathbf{B}\,\mathrm{ext}(\mathbf{v})^\top,$$

where the second equality is true since $\mathbf{B}$ has entries in $\mathbb{F}_q$.

Second, we show $\mathcal{K}_{\mathbb{F}_q}(\mathbf{B}) \subseteq \mathcal{R}_{\mathbb{F}_q}(\mathrm{ext}(\mathbf{H}_\mathrm{sub}))$ by proving that