I Introduction
Interleaved codes are direct sums of codes of the same length, where the summands are termed constituent codes and their number is called the interleaving order. By assuming that errors occur in certain patterns, it is possible to correct more errors than half the minimum distance.
In the Hamming metric, interleaved codes have been considered for replicated file disagreement location [1], correcting burst errors in data-storage applications [2], suitable outer codes in concatenated codes [1, 3, 4, 5, 6, 7], an ALOHA-like random-access scheme [4], decoding non-interleaved codes beyond half the minimum distance by power decoding [8, 9, 10, 11], and recently for code-based cryptography [12, 13]. In all these works, the errors are assumed to be matrices with only a few nonzero columns which are added to an interleaved codeword matrix whose rows are codewords of the constituent code. This means that the errors affect the same positions in the constituent codewords (burst errors) and the number of errors is given by the number of nonzero columns of the error matrix.
There exist several decoding algorithms for interleaved Reed–Solomon codes that, for interleaving order at least two, decode beyond half the minimum distance and also beyond the Johnson radius [2, 14, 15, 16, 17, 18, 19, 7, 20, 21, 22, 23, 24]. Beyond the unique decoding radius, decoding sometimes fails, but with a small probability that can be bounded from above and decreases with the field size of the constituent code.

Already in 1990, Metzner and Kapturowski [1] introduced a decoding algorithm for interleaved codes in the Hamming metric, where the constituent codes are the same (homogeneous interleaved codes) and have minimum distance $d$. The decoder can correct up to $d-2$ errors, given that the interleaving order is high enough (i.e., at least the number of errors) and that the rank of the error matrix equals the number of errors. We want to stress that this decoding algorithm works for interleaved codes with arbitrary constituent codes, is purely based on linear-algebraic operations (i.e., row operations on matrices), and has complexity quadratic in the code length and linear in the interleaving order. This is remarkable since the decoder can correct most error patterns of weight up to almost the minimum distance of the code without assuming any side information about the error (e.g., as for erasures, where the error positions are known). The result by Metzner and Kapturowski was later independently rediscovered in [4] and generalized to dependent errors by the same authors in [25].
Rank-metric codes are sets of vectors over an extension field, whose elements can be interpreted as matrices over a subfield and whose distance is given by the rank of their difference. These codes were independently introduced in [26, 27, 28], together with their most famous code class, Gabidulin codes, which can be seen as the rank-metric analogs of Reed–Solomon codes. Interleaved codes in the rank metric were introduced in [29] and [30], and have found applications in code-based cryptography [31, 32, 33, 34], network coding [30, 35], and the construction and decoding of space-time codes [36, 37, 38, 39, 40, 41].
Similar to the Hamming metric, in the rank metric the errors occur as additive matrices, but their structure is different: the row spaces of the constituent errors are contained in a relatively small joint row space whose dimension is the number of errors. This (joint) row space is usually seen as the rank-metric analog of the support of an error [27, 28, 42, 43, 44, 45].
There are several algorithms for decoding interleaved Gabidulin codes [29, 35, 46], as well as efficient variants thereof [47, 48, 49, 50], which are able to correct most (but not all) error patterns up to a certain number of errors that is beyond half the minimum distance for interleaving orders at least two.
In this paper, we adapt Metzner and Kapturowski's algorithm to the rank metric. As a result, we obtain an algorithm that can correct up to $d-2$ rank errors with a homogeneous interleaved code over an arbitrary constituent code of minimum rank distance $d$. The success conditions are the same as in the Hamming metric: the interleaving order must be large enough and the rank of the error matrix (over the extension field) must equal the number of errors. The new algorithm works for arbitrary linear rank-metric constituent codes, including, but not limited to, Gabidulin [26, 27, 28], generalized Gabidulin [51, 52, 53], low-rank parity-check (LRPC) [54], Loidreau's Gabidulin-like [55], or twisted Gabidulin codes [56, 57] and their generalizations [58, 59]. The algorithm is again purely based on linear-algebraic operations, and its complexity, counted in operations over the subfield $\mathbb{F}_q$ and neglecting log factors, is polynomial in the code length $n$, the interleaving order $s$, and the extension degree $m$ of $\mathbb{F}_{q^m}$ over $\mathbb{F}_q$. We prove that for random errors of a given weight and growing interleaving order, the success probability gets arbitrarily close to $1$. Further, we derive sufficient conditions on the error for which the decoder is able to correct more than $d-2$ errors and present an adaptation to certain heterogeneous codes. In addition, we show that by viewing a homogeneous interleaved code as a linear code over a large extension field, one obtains a (non-interleaved) linear rank-metric code in which the proposed decoder corrects almost any error of rank weight up to $d-2$. Finally, we prove that in the case of Gabidulin codes, the new decoder succeeds under the same conditions as the known decoding algorithms.
The structure of this paper is as follows. In Section II, we introduce notation, give definitions, and recall the Hamming-metric algorithm by Metzner and Kapturowski. In Section III, we propose the new algorithm, prove its correctness, analyze its complexity, compare it to the algorithm in the Hamming metric, and give an example. In Section IV, we show further results, including the success probability of the new decoder for random errors, sufficient conditions to successfully decode more than $d-2$ errors, an adaptation to heterogeneous interleaved codes, and relations to existing decoders. Conclusions and open problems are given in Section V.
II Preliminaries
II-A Notation
Let $q$ be a power of a prime and let $\mathbb{F}_q$ denote the finite field of order $q$ and $\mathbb{F}_{q^m}$ its extension field of order $q^m$. Any element of $\mathbb{F}_q$ can be seen as an element of $\mathbb{F}_{q^m}$, and $\mathbb{F}_{q^m}$ is an $m$-dimensional vector space over $\mathbb{F}_q$.
We use $\mathbb{F}_q^{a \times b}$ to denote the set of all $a \times b$ matrices over $\mathbb{F}_q$ and $\mathbb{F}_q^{b} = \mathbb{F}_q^{1 \times b}$ for the set of all row vectors of length $b$ over $\mathbb{F}_q$. Rows and columns of $a \times b$ matrices are indexed by $1, \dots, a$ and $1, \dots, b$, where $A_{i,j}$ is the element in the $i$-th row and $j$-th column of the matrix $\boldsymbol{A}$. Transposition of a matrix is indicated by the superscript $^\top$, and $\mathrm{RREF}(\boldsymbol{A})$ refers to a reduced row echelon form of $\boldsymbol{A}$. Further, we define sets of consecutive integers and a corresponding submatrix notation in the usual way.

Let $\gamma_1, \dots, \gamma_m$ be an ordered basis of $\mathbb{F}_{q^m}$ over $\mathbb{F}_q$. By utilizing the vector space isomorphism $\mathbb{F}_{q^m} \cong \mathbb{F}_q^m$, we can relate each vector $\boldsymbol{a} \in \mathbb{F}_{q^m}^n$ to a matrix $\mathrm{ext}(\boldsymbol{a}) = \boldsymbol{A} \in \mathbb{F}_q^{m \times n}$ according to
$$a_j = \sum_{i=1}^{m} A_{i,j}\,\gamma_i,$$
for all $j \in \{1, \dots, n\}$. Further, we extend the definition of $\mathrm{ext}(\cdot)$ to matrices by extending each row and then vertically concatenating the resulting matrices. A property that will be used in the paper is that if $\boldsymbol{B}$ is a matrix over the small field $\mathbb{F}_q$ and the product $\boldsymbol{a}\boldsymbol{B}$ is defined, then $\mathrm{ext}(\boldsymbol{a}\boldsymbol{B}) = \mathrm{ext}(\boldsymbol{a})\,\boldsymbol{B}$.
Let $\mathcal{V} \subseteq \mathbb{K}^n$ be a vector space. By $\mathcal{V}^\perp$, we indicate the dual space of $\mathcal{V}$, i.e.,
$$\mathcal{V}^\perp = \{\boldsymbol{x} \in \mathbb{K}^n \,:\, \boldsymbol{x}\boldsymbol{v}^\top = 0 \ \text{for all}\ \boldsymbol{v} \in \mathcal{V}\}.$$
In the following, let $\mathbb{K}$ be a field with $\mathbb{F}_q \subseteq \mathbb{K} \subseteq \mathbb{F}_{q^m}$; we deliberately allow $\mathbb{K}$ to be the extension field $\mathbb{F}_{q^m}$ or a subfield thereof. Since then always $\mathbb{F}_q \subseteq \mathbb{K}$, operations between elements of $\mathbb{F}_q$ and $\mathbb{K}$ are well-defined. This will be used several times throughout the paper. The ($\mathbb{K}$-)span of vectors $\boldsymbol{v}_1, \dots, \boldsymbol{v}_a \in \mathbb{K}^n$ is defined by the ($\mathbb{K}$-)vector space
$$\langle \boldsymbol{v}_1, \dots, \boldsymbol{v}_a \rangle_{\mathbb{K}} = \Big\{\textstyle\sum_{i=1}^{a} \lambda_i \boldsymbol{v}_i \,:\, \lambda_i \in \mathbb{K}\Big\}.$$
The ($\mathbb{K}$-)row space $\mathcal{R}_{\mathbb{K}}(\boldsymbol{A})$ of a matrix $\boldsymbol{A}$ is the ($\mathbb{K}$-)vector space spanned by its rows. The (right) ($\mathbb{K}$-)kernel of a matrix $\boldsymbol{A}$ is the ($\mathbb{K}$-)vector space given by
$$\ker_{\mathbb{K}}(\boldsymbol{A}) = \{\boldsymbol{x} \in \mathbb{K}^n \,:\, \boldsymbol{A}\boldsymbol{x}^\top = \boldsymbol{0}\}.$$
Note that in the case $\mathbb{K} = \mathbb{F}_q$, we can write and compute the kernel as $\ker_q(\boldsymbol{A}) = \ker_q(\mathrm{ext}(\boldsymbol{A}))$. We define the $q^m$-rank of a matrix $\boldsymbol{A} \in \mathbb{F}_{q^m}^{a \times n}$ to be
$$\mathrm{rk}_{q^m}(\boldsymbol{A}) = \dim \mathcal{R}_{q^m}(\boldsymbol{A})$$
and its $q$-rank as
$$\mathrm{rk}_{q}(\boldsymbol{A}) = \dim \mathcal{R}_{q}(\mathrm{ext}(\boldsymbol{A})).$$
Note that the latter rank equals the dimension of the $\mathbb{F}_q$-column span of the matrix (and, obviously, of its extension $\mathrm{ext}(\boldsymbol{A})$). For the same matrix $\boldsymbol{A}$, the $q$- and $q^m$-rank can be different. In general, we have $\mathrm{rk}_{q^m}(\boldsymbol{A}) \le \mathrm{rk}_{q}(\boldsymbol{A})$, where equality holds if and only if the reduced row echelon form of $\boldsymbol{A}$ has only entries in $\mathbb{F}_q$.

Further, throughout this paper, we use $[a] := \{1, \dots, a\}$ for any positive integer $a$.
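The gap between the two ranks can be made concrete in a small sketch (all names and the encoding are illustrative assumptions, not taken from the paper: $q = 2$, $m = 2$, i.e., $\mathbb{F}_4 = \mathbb{F}_2(a)$ with $a^2 = a + 1$, and elements of $\mathbb{F}_4$ encoded as integers $0..3$ whose bits are the coordinates with respect to the basis $(1, a)$). It shows a matrix whose $q^m$-rank is $1$ while the $q$-rank of its extension is $2$:

```python
def gf4_mul(x, y):
    # carry-less multiplication in F_4, then reduce a^2 -> a + 1
    r = 0
    for i in range(2):
        if (y >> i) & 1:
            r ^= x << i
    if r & 4:
        r ^= 0b111
    return r & 3

def ext(vec):
    # ext: F_4^n -> F_2^{2 x n}; row i holds the coefficients of basis element a^i
    return [[(x >> i) & 1 for x in vec] for i in range(2)]

def ext_mat(M):
    # extend each row and stack the resulting 2 x n matrices vertically
    return [row for v in M for row in ext(v)]

def rank_gf2(rows):
    # Gaussian elimination over GF(2)
    rows = [r[:] for r in rows]
    rk = 0
    for c in range(len(rows[0])):
        p = next((i for i in range(rk, len(rows)) if rows[i][c]), None)
        if p is None:
            continue
        rows[rk], rows[p] = rows[p], rows[rk]
        for i in range(len(rows)):
            if i != rk and rows[i][c]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[rk])]
        rk += 1
    return rk

a = 2
M = [[1, a],
     [a, gf4_mul(a, a)]]            # second row is a times the first

# q^m-rank is 1 (the rows are F_4-multiples of each other) ...
assert [gf4_mul(a, x) for x in M[0]] == M[1]
# ... but the q-rank, i.e. the F_2-rank of ext(M), is 2
assert rank_gf2(ext_mat(M)) == 2
```

The reduced row echelon form of $M$ over $\mathbb{F}_4$ contains the entry $a \notin \mathbb{F}_2$, which is exactly the equality condition stated above failing.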
II-B Rank-Metric Codes
The rank norm $\mathrm{rk}_q(\boldsymbol{x})$ of a vector $\boldsymbol{x} \in \mathbb{F}_{q^m}^n$ is the rank of the matrix representation over $\mathbb{F}_q$, i.e.,
$$\mathrm{rk}_q(\boldsymbol{x}) = \mathrm{rank}\big(\mathrm{ext}(\boldsymbol{x})\big).$$
The rank distance between $\boldsymbol{x}$ and $\boldsymbol{y}$ (with $\boldsymbol{x}, \boldsymbol{y} \in \mathbb{F}_{q^m}^n$) is defined by
$$d_R(\boldsymbol{x}, \boldsymbol{y}) = \mathrm{rk}_q(\boldsymbol{x} - \boldsymbol{y}).$$
A linear $[n,k,d]$ code $\mathcal{C}$ over $\mathbb{F}_{q^m}$ is a $k$-dimensional subspace of $\mathbb{F}_{q^m}^n$ with minimum rank distance $d$, where
$$d = \min_{\boldsymbol{c} \in \mathcal{C} \setminus \{\boldsymbol{0}\}} \mathrm{rk}_q(\boldsymbol{c}).$$
Gabidulin codes are the first-known and most-studied class of rank-metric codes. They are defined as follows.
Definition 1 (Gabidulin code, [26, 27, 28]).
A Gabidulin code over $\mathbb{F}_{q^m}$ of length $n \le m$ and dimension $k$ is defined by its generator matrix
$$\boldsymbol{G} = \begin{bmatrix} g_1 & g_2 & \cdots & g_n \\ g_1^{q} & g_2^{q} & \cdots & g_n^{q} \\ \vdots & \vdots & & \vdots \\ g_1^{q^{k-1}} & g_2^{q^{k-1}} & \cdots & g_n^{q^{k-1}} \end{bmatrix},$$
where $g_1, \dots, g_n \in \mathbb{F}_{q^m}$ are linearly independent over $\mathbb{F}_q$.

Gabidulin codes are MRD codes, i.e., $d = n-k+1$, and can decode uniquely and efficiently any error of rank weight at most $\lfloor (d-1)/2 \rfloor$.
Besides Gabidulin codes and variants thereof based on different automorphisms [51, 52, 53], there are several other ($\mathbb{F}_{q^m}$-)linear rank-metric code constructions, for instance: low-rank parity-check (LRPC) codes [54], which have applications in code-based cryptography; Loidreau's code class, which modifies Gabidulin codes for cryptographic purposes [55]; and twisted Gabidulin codes [56, 57], which were the first general family of non-Gabidulin MRD codes. There are also generalizations of twisted Gabidulin codes [58, 59] and other example codes for some explicit parameters [60].
II-C Interleaved Codes
In this paper, we propose a new decoding algorithm for homogeneous interleaved codes, which are defined as follows.
Definition 2.
Let $\mathcal{C}$ be a linear (rank- or Hamming-metric) code over $\mathbb{F}_{q^m}$ and let $s$ be a positive integer. The corresponding ($s$-)interleaved code is defined by
$$\mathcal{IC} = \left\{ \boldsymbol{C} \in \mathbb{F}_{q^m}^{s \times n} \,:\, \text{each row of } \boldsymbol{C} \text{ is a codeword of } \mathcal{C} \right\}.$$
We call $\mathcal{C}$ the constituent code and $s$ the interleaving order.
Note that any codeword of an interleaved code can be written as $\boldsymbol{C} = \boldsymbol{M}\boldsymbol{G}$, where $\boldsymbol{G} \in \mathbb{F}_{q^m}^{k \times n}$ is a generator matrix of the constituent code and $\boldsymbol{M} \in \mathbb{F}_{q^m}^{s \times k}$ is a message. This also directly implies that $\boldsymbol{H}\boldsymbol{C}^\top = \boldsymbol{0}$ for any codeword $\boldsymbol{C}$, where $\boldsymbol{H} \in \mathbb{F}_{q^m}^{(n-k) \times n}$ is a parity-check matrix of the constituent code.
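The parity-check property can be checked in a binary toy (an assumption for illustration, not the paper's setup: $m = 1$ and the $[8,4,4]$ extended Hamming code, which is self-dual, so we may take $G = H$):

```python
# Each row of C = M G is a codeword of the constituent code, hence H C^T = 0.
H = [[1,1,1,1,1,1,1,1],
     [0,0,0,0,1,1,1,1],
     [0,0,1,1,0,0,1,1],
     [0,1,0,1,0,1,0,1]]
G = H  # the [8,4,4] extended Hamming code is self-dual

def matmul_gf2(A, B):
    # (a x n) * (n x b) matrix product over GF(2)
    return [[sum(a & b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

M = [[1,0,1,0], [0,1,1,1], [1,1,0,0]]   # s = 3 messages of length k = 4
C = matmul_gf2(M, G)                    # interleaved codeword, one codeword per row
S = matmul_gf2(H, transpose(C))         # syndrome of the error-free word
assert all(x == 0 for row in S for x in row)
print("H C^T = 0 for every interleaved codeword")
```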
II-D Error Model and Support
As an error model, we consider additive error matrices of a specific structure, depending on the chosen metric. The goal of decoding is to recover a codeword $\boldsymbol{C}$ from a received word
$$\boldsymbol{R} = \boldsymbol{C} + \boldsymbol{E}.$$
We outline the error models for both the Hamming and the rank metric since we will often discuss analogies between the two cases throughout the paper. Furthermore, we recall the important notion of the support of an error.
II-D1 Hamming Metric
In the Hamming metric, an error (of a non-interleaved code) of weight $t$ is a vector having exactly $t$ nonzero entries. It is natural to define the support of the error as the set of indices of these nonzero positions, and many algebraic decoding algorithms aim at recovering the support of an error, since it is easy to retrieve the error values afterwards.

For interleaved codes in the Hamming metric, errors of weight $t$ are considered to be matrices $\boldsymbol{E} \in \mathbb{F}_{q^m}^{s \times n}$ that have exactly $t$ nonzero columns. This means that errors occur at the same positions in the constituent codewords. A natural generalization of the support of the error is thus the set of indices of nonzero columns, i.e.,
$$\mathrm{supp}_H(\boldsymbol{E}) = \{ j \,:\, \text{the } j\text{-th column of } \boldsymbol{E} \text{ is nonzero} \}.$$
The number of errors, or Hamming weight of the error $\boldsymbol{E}$, is then defined as the cardinality of the support. Since $\boldsymbol{E}$ has only $t$ nonzero columns, we can decompose it into two matrices,
$$\boldsymbol{E} = \boldsymbol{A} \cdot \boldsymbol{B}, \tag{1}$$
where $\boldsymbol{A} \in \mathbb{F}_{q^m}^{s \times t}$ consists of the nonzero columns of $\boldsymbol{E}$ and the rows of $\boldsymbol{B} \in \mathbb{F}_q^{t \times n}$ are the unit vectors corresponding to the error positions.
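The decomposition (1) can be sketched in a small binary example (a toy assumption, not from the paper: $s = 2$, $n = 8$, $t = 2$ errors at positions $1$ and $5$, 0-indexed):

```python
# An s x n error matrix E with t nonzero columns factors as E = A . B:
# A collects the t nonzero columns, the rows of B are the unit vectors
# of the error positions.
s, n = 2, 8
E = [[0,1,0,0,0,0,0,0],
     [0,1,0,0,0,1,0,0]]

supp = [j for j in range(n) if any(E[i][j] for i in range(s))]   # error positions
A = [[E[i][j] for j in supp] for i in range(s)]                  # s x t, nonzero cols
B = [[1 if j == p else 0 for j in range(n)] for p in supp]       # t x n, unit vectors

# multiply A (s x t) by B (t x n) over GF(2) and compare with E
prod = [[sum(A[i][l] & B[l][j] for l in range(len(supp))) % 2
         for j in range(n)] for i in range(s)]
assert prod == E
print("support:", supp)   # support: [1, 5]
```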
II-D2 Rank Metric
In the rank metric, an error of weight $t$, in the non-interleaved case, is a vector $\boldsymbol{e} \in \mathbb{F}_{q^m}^n$ whose rank (i.e., the rank of its matrix representation $\mathrm{ext}(\boldsymbol{e})$) is $t$. It has been noted in the literature several times that the row (or column) space of the matrix representation of the error shares many important properties with the support notion in the Hamming metric, see, e.g., [27, 28, 42, 43, 44, 45]. We therefore define the (rank) support of an error to be the row space of its matrix representation. Then, the rank weight equals the dimension of its support.

In the interleaved case, an error of weight $t$ is a matrix $\boldsymbol{E} \in \mathbb{F}_{q^m}^{s \times n}$ with $\mathrm{rk}_q(\boldsymbol{E}) = t$, cf. [29, 46].¹ Note that the matrix entries are in general over the large field $\mathbb{F}_{q^m}$, but the rank is taken over $\mathbb{F}_q$. Analogously to the case of a single vector, we define the rank support of a matrix $\boldsymbol{E}$ to be the row space of the extended matrix $\mathrm{ext}(\boldsymbol{E})$, i.e.,
$$\mathrm{supp}_R(\boldsymbol{E}) = \mathcal{R}_q\big(\mathrm{ext}(\boldsymbol{E})\big).$$
Thus, the number of errors, or rank weight of the error, equals the dimension of the error's support. Similar to (1), we can decompose the error matrix as $\boldsymbol{E} = \boldsymbol{A} \cdot \boldsymbol{B}$, where now $\boldsymbol{A} \in \mathbb{F}_{q^m}^{s \times t}$ and the rows of $\boldsymbol{B} \in \mathbb{F}_q^{t \times n}$ form a basis of the rank support of $\boldsymbol{E}$.

¹The paper [35] considers a different error model, but the algorithm in [35] can be reformulated to work with the model considered here, cf. [61, Section 4.1].
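A hedged toy of the rank-metric decomposition (illustrative assumptions: $q = 2$ and $m = 1$, so $\mathrm{ext}$ is the identity and the rank support is simply the $\mathbb{F}_2$-row space of $\boldsymbol{E}$; over a proper extension field the same picture holds with $\mathrm{ext}(\boldsymbol{E})$ in place of $\boldsymbol{E}$):

```python
def rref_gf2(rows):
    # reduced row echelon form over GF(2); returns a basis of the row space
    rows = [r[:] for r in rows]
    rk = 0
    for c in range(len(rows[0])):
        p = next((i for i in range(rk, len(rows)) if rows[i][c]), None)
        if p is None:
            continue
        rows[rk], rows[p] = rows[p], rows[rk]
        for i in range(len(rows)):
            if i != rk and rows[i][c]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[rk])]
        rk += 1
    return rows[:rk]

B = [[1,1,0,0,0,0,0,0],         # basis of the rank support (t = 2)
     [1,0,1,0,0,0,0,0]]
A = [[1,0],                     # s x t coefficient matrix
     [1,1]]
E = [[sum(A[i][l] & B[l][j] for l in range(2)) % 2 for j in range(8)]
     for i in range(2)]         # error E = A . B of rank weight 2

support_basis = rref_gf2(E)     # row space of E = rank support (since m = 1)
assert len(support_basis) == 2          # rank weight t = 2
assert rref_gf2(E) == rref_gf2(B)       # same row space as the chosen basis B
```

Note that, unlike in the Hamming case, the rows of $\boldsymbol{B}$ need not be unit vectors; any $\mathbb{F}_q$-basis of the support works.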
II-E Metzner–Kapturowski Algorithm for Decoding High-Order Interleaved Codes in the Hamming Metric
In [1], Metzner and Kapturowski proposed a Hamming-metric decoding algorithm for interleaved codes with high interleaving order, i.e., $s \ge t$, where $t$ is the number of errors. The algorithm is generic in the sense that it works with any constituent code of minimum Hamming distance $d$. It was shown that the proposed algorithm always retrieves the transmitted codeword if $t \le d-2$ and if the nonzero columns of the error matrix are linearly independent, i.e., $\mathrm{rank}(\boldsymbol{E}) = t$. The algorithm is given in Algorithm 1. Furthermore, an illustration of the algorithm can be found in the left part of Figure 2 (see Section III-E, some pages ahead), which compares the classical Hamming-metric Metzner–Kapturowski algorithm with the new algorithm for rank-metric codes.

We observe that the algorithm first determines the error positions, i.e., $\mathrm{supp}_H(\boldsymbol{E})$, by bringing the syndrome matrix $\boldsymbol{S} = \boldsymbol{H}\boldsymbol{R}^\top$ into reduced row echelon form and applying the same transformation to $\boldsymbol{H}$. The matrix $\boldsymbol{H}_2$, which consists of the last $n-k-t$ rows of the transformed matrix, then has zero columns exactly at the error positions. After the error positions are determined, erasure decoding is performed.
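The steps above can be sketched end to end in a binary toy (all concrete choices are illustrative assumptions, not the paper's example: the $[8,4,4]$ extended Hamming code as constituent code, which is self-dual so $G = H$; $s = 2$ and $t = 2$ errors with independent columns):

```python
# Metzner-Kapturowski sketch: RREF of [S | H] pivoting on the syndrome part,
# read H_2 off the rows whose syndrome part became zero, recover the error
# positions as the all-zero columns of H_2, then erasure-decode.
H = [[1,1,1,1,1,1,1,1],
     [0,0,0,0,1,1,1,1],
     [0,0,1,1,0,0,1,1],
     [0,1,0,1,0,1,0,1]]
G = H

def mul(A, B):   # GF(2) matrix product
    return [[sum(a & b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

C = mul([[1,0,0,0],[0,1,1,0]], G)          # two constituent codewords (s = 2)
E = [[0]*8 for _ in range(2)]
E[0][1] = 1; E[1][5] = 1                   # t = 2 errors, independent columns
R = [[c ^ e for c, e in zip(cr, er)] for cr, er in zip(C, E)]

S = mul(H, [list(col) for col in zip(*R)]) # syndrome matrix H R^T (4 x 2)
aug = [s_row + h_row for s_row, h_row in zip(S, H)]

# RREF pivoting only on the first s = 2 (syndrome) columns
rk = 0
for c in range(2):
    p = next((i for i in range(rk, 4) if aug[i][c]), None)
    if p is None: continue
    aug[rk], aug[p] = aug[p], aug[rk]
    for i in range(4):
        if i != rk and aug[i][c]:
            aug[i] = [x ^ y for x, y in zip(aug[i], aug[rk])]
    rk += 1

H2 = [row[2:] for row in aug[rk:]]          # rows with zero syndrome part
err_pos = [j for j in range(8) if all(row[j] == 0 for row in H2)]
print("error positions:", err_pos)          # error positions: [1, 5]
assert err_pos == [1, 5]

# erasure decoding: per constituent row, solve for the values at err_pos
p0, p1 = err_pos
for l in range(2):
    target = [S[i][l] for i in range(4)]
    sol = next(v for v in [(0,0),(0,1),(1,0),(1,1)]
               if [(v[0] & H[i][p0]) ^ (v[1] & H[i][p1]) for i in range(4)] == target)
    rec = [0]*8
    rec[p0], rec[p1] = sol
    assert rec == E[l]
```

The brute-force erasure step stands in for solving the small linear system over the field; over larger fields one would use Gaussian elimination.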
III Decoding High-Order Interleaved Codes in the Rank Metric
In this section, we propose a new decoding algorithm for interleaved codes in the rank metric, which is an adaptation of Metzner and Kapturowski's decoder to the rank metric and works under similar conditions for up to $d-2$ errors:

High-order condition: the interleaving order is at least the number of errors, i.e., $s \ge t$.

Full-rank condition: the error matrix has full $q^m$-rank, i.e., $\mathrm{rk}_{q^m}(\boldsymbol{E}) = t$.

In fact, the full-rank condition implies the high-order condition, since the $q^m$-rank of a matrix $\boldsymbol{E} \in \mathbb{F}_{q^m}^{s \times n}$ is at most $s$. We will nevertheless mention both conditions for didactic reasons.
Throughout this section, we fix a rank-metric code $\mathcal{C}$ over $\mathbb{F}_{q^m}$ with parameters $[n,k,d]$ and a parity-check matrix $\boldsymbol{H} \in \mathbb{F}_{q^m}^{(n-k) \times n}$ of $\mathcal{C}$. We want to retrieve a codeword $\boldsymbol{C}$ of the homogeneous interleaved code, given the received word
$$\boldsymbol{R} = \boldsymbol{C} + \boldsymbol{E},$$
where $\boldsymbol{E} \in \mathbb{F}_{q^m}^{s \times n}$ is an error matrix of rank weight $t$.
III-A The Error Support
Similar to the Metzner–Kapturowski algorithm for the Hamming metric, our new algorithm is centered around retrieving the rank support of the error matrix $\boldsymbol{E}$ from the syndrome matrix $\boldsymbol{S} = \boldsymbol{H}\boldsymbol{R}^\top$. As soon as the support is known, we can recover the error using Lemma 2 below. The method is a form of erasure correction, i.e., the rank-metric analog of computing the error values given the error positions in the Hamming metric. For Gabidulin codes, this fact was already used in [27, 28] and can be efficiently implemented by error-erasure decoders, cf. [30, 63], or their fast variants [64, 65, 66]. In the general case, it has been an important ingredient of generic rank-syndrome-decoding algorithms [42, 43, 44, 45], which are mostly based on guessing the error support and then computing the error. Since computing the error from its support is an important step of the new algorithm, we present the formal statement and proof, together with the resulting complexity, below for completeness.
Lemma 2 (see, e.g., [27, 28, 42, 43, 44, 45]).
Let $\boldsymbol{H} \in \mathbb{F}_{q^m}^{(n-k) \times n}$ be a parity-check matrix of $\mathcal{C}$, let the rows of $\boldsymbol{B} \in \mathbb{F}_q^{t \times n}$ be a basis of the rank support of an error matrix $\boldsymbol{E} \in \mathbb{F}_{q^m}^{s \times n}$ of rank weight $t \le d-1$, and let $\boldsymbol{S} = \boldsymbol{H}\boldsymbol{R}^\top = \boldsymbol{H}\boldsymbol{E}^\top$ be the corresponding syndrome matrix. Then, the error is given by $\boldsymbol{E} = \boldsymbol{A}\boldsymbol{B}$, where $\boldsymbol{A} \in \mathbb{F}_{q^m}^{s \times t}$ is the unique solution of the linear system of equations
$$\boldsymbol{H}\boldsymbol{B}^\top \boldsymbol{A}^\top = \boldsymbol{S}. \tag{2}$$
Thus, $\boldsymbol{E}$ can be computed from $\boldsymbol{B}$ and $\boldsymbol{S}$ by linear-algebraic operations in $\mathbb{F}_{q^m}$.

Proof.

Since the rows of $\boldsymbol{B}$ are a basis of $\mathrm{supp}_R(\boldsymbol{E})$, there must be a matrix $\boldsymbol{A} \in \mathbb{F}_{q^m}^{s \times t}$ such that $\boldsymbol{E} = \boldsymbol{A}\boldsymbol{B}$. Since $\boldsymbol{S} = \boldsymbol{H}\boldsymbol{E}^\top = \boldsymbol{H}\boldsymbol{B}^\top\boldsymbol{A}^\top$, $\boldsymbol{A}$ must fulfill (2). On the other hand, there can only be one matrix fulfilling (2) since, by [27, Theorem 1], the matrix $\boldsymbol{H}\boldsymbol{B}^\top$ has rank $t$ due to $\mathrm{rank}(\boldsymbol{B}) = t$ and $t \le d-1$. Computing the products $\boldsymbol{H}\boldsymbol{B}^\top$ and $\boldsymbol{A}\boldsymbol{B}$ and solving the linear system (2) require only matrix operations over $\mathbb{F}_{q^m}$, which implies the complexity statement. ∎
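A hedged toy of Lemma 2 (illustrative assumptions: $q = 2$, $m = 1$, so the rank support is just the $\mathbb{F}_2$-row space of $\boldsymbol{E}$, and the $[8,4,4]$ extended Hamming parity-check matrix stands in for $\boldsymbol{H}$; over a real extension field the same system (2) is solved by Gaussian elimination):

```python
# Given a basis B of the error's rank support and the syndrome S = H E^T,
# the coefficient matrix A in E = A . B is the unique solution of
# (H B^T) A^T = S. The tiny GF(2) system is brute-forced column by column.
from itertools import product

H = [[1,1,1,1,1,1,1,1],
     [0,0,0,0,1,1,1,1],
     [0,0,1,1,0,0,1,1],
     [0,1,0,1,0,1,0,1]]

def mul(A, B):   # GF(2) matrix product
    return [[sum(a & b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

B = [[1,1,0,0,0,0,0,0],        # basis of the rank support, t = 2
     [1,0,1,0,0,0,0,0]]
A_true = [[1,0], [1,1]]        # unknown to the decoder
E = mul(A_true, B)             # s x n error of rank weight 2
S = mul(H, [list(c) for c in zip(*E)])     # syndrome H E^T (4 x 2)

HBt = mul(H, [list(c) for c in zip(*B)])   # H B^T (4 x 2), full column rank

# solve (H B^T) A^T = S; the l-th column of A^T is the l-th row of A
A_T = []
for l in range(2):
    col = next(v for v in product([0, 1], repeat=2)
               if [(v[0] & HBt[i][0]) ^ (v[1] & HBt[i][1]) for i in range(4)]
               == [S[i][l] for i in range(4)])
    A_T.append(list(col))
assert A_T == A_true
assert mul(A_T, B) == E
```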
III-B How to Determine the Error Support
Our new decoding algorithm is based on retrieving the support of the error. The error itself can then be computed using the method implied by Lemma 2. In the following, we show how to obtain the error support from the syndrome and the parity-check matrix.

Similar to Metzner and Kapturowski, we compute the syndrome matrix as the product of the parity-check matrix $\boldsymbol{H}$ and the transposed received word $\boldsymbol{R}^\top$. Due to the properties of the parity-check matrix, we obtain
$$\boldsymbol{S} = \boldsymbol{H}\boldsymbol{R}^\top = \boldsymbol{H}(\boldsymbol{C} + \boldsymbol{E})^\top = \boldsymbol{H}\boldsymbol{E}^\top.$$
Then, we transform $\boldsymbol{S}$ into row echelon form. Since $\boldsymbol{S}$ has rank at most $t$, which is smaller than its number of rows $n-k$, the resulting matrix has zero rows. We apply the same row operations used to obtain the echelon form of $\boldsymbol{S}$ to the parity-check matrix $\boldsymbol{H}$ and consider the matrix $\boldsymbol{H}_2$, which consists of the rows of the resulting matrix corresponding to the zero rows of the echelon form of $\boldsymbol{S}$. This process is illustrated in Figure 1. The following sequence of statements derives the main statement of this section, Theorem 6: the error support can be efficiently computed from $\boldsymbol{H}_2$.
Lemma 3.
Let $\boldsymbol{S} = \boldsymbol{H}\boldsymbol{E}^\top$ be the syndrome of an error $\boldsymbol{E}$ of rank weight $t \le n-k$ and let $\boldsymbol{P} \in \mathbb{F}_{q^m}^{(n-k) \times (n-k)}$ be a full-rank matrix such that $\boldsymbol{P}\boldsymbol{S}$ is in row-echelon form. Then, at least $n-k-t$ rows of $\boldsymbol{P}\boldsymbol{S}$ are zero. Let $\boldsymbol{H}_2$ consist of the rows of $\boldsymbol{P}\boldsymbol{H}$ corresponding to the zero rows in $\boldsymbol{P}\boldsymbol{S}$. Then, the rows of $\boldsymbol{H}_2$ are a basis of
$$\mathcal{R}_{q^m}(\boldsymbol{H}) \cap \ker_{q^m}(\boldsymbol{E}).$$

Proof.

Since $\boldsymbol{E}$ has rank weight $t$, its $q^m$-rank is at most $t$. Hence, the rank of $\boldsymbol{S} = \boldsymbol{H}\boldsymbol{E}^\top$ is at most $t$, and at least $n-k-t$ of the rows are zero in its echelon form $\boldsymbol{P}\boldsymbol{S}$.

The rows of $\boldsymbol{P}\boldsymbol{H}$ (and thus of $\boldsymbol{H}_2$) are in the row space of $\boldsymbol{H}$, which is equal to $\mathcal{R}_{q^m}(\boldsymbol{P}\boldsymbol{H})$ since $\boldsymbol{P}$ has full rank. Furthermore, the rows of $\boldsymbol{H}_2$ are in the kernel of $\boldsymbol{E}$ since $\boldsymbol{H}_2\boldsymbol{E}^\top = \boldsymbol{0}$ by the choice of the rows. It is left to show that the rows span the entire intersection. Write
$$\boldsymbol{P}\boldsymbol{S} = \begin{bmatrix} \boldsymbol{S}_1 \\ \boldsymbol{0} \end{bmatrix}, \qquad \boldsymbol{P}\boldsymbol{H} = \begin{bmatrix} \boldsymbol{H}_1 \\ \boldsymbol{H}_2 \end{bmatrix},$$
where $\boldsymbol{S}_1$ has full row rank and $\boldsymbol{H}_1$ has as many rows as $\boldsymbol{S}_1$. Let $\boldsymbol{x}$ be a vector in the row space of $\boldsymbol{H}$ and in the kernel of $\boldsymbol{E}$. Then, we can write $\boldsymbol{x} = \boldsymbol{a}_1\boldsymbol{H}_1 + \boldsymbol{a}_2\boldsymbol{H}_2$ for some vectors $\boldsymbol{a}_1, \boldsymbol{a}_2$, and
$$\boldsymbol{0} = \boldsymbol{x}\boldsymbol{E}^\top = \boldsymbol{a}_1\boldsymbol{H}_1\boldsymbol{E}^\top + \boldsymbol{a}_2\boldsymbol{H}_2\boldsymbol{E}^\top = \boldsymbol{a}_1\boldsymbol{S}_1$$
due to $\boldsymbol{H}_2\boldsymbol{E}^\top = \boldsymbol{0}$. This implies that $\boldsymbol{a}_1 = \boldsymbol{0}$ since the rows of $\boldsymbol{S}_1$ are linearly independent. Thus, $\boldsymbol{x}$ is in the row space of $\boldsymbol{H}_2$. ∎
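Lemma 3 can be sanity-checked numerically in a binary toy (illustrative assumptions: $m = 1$ and the $[8,4,4]$ extended Hamming parity-check matrix):

```python
# Bring S into row echelon form, carry the same row operations over to H,
# and verify that the resulting H_2 annihilates E, i.e., H_2 E^T = 0
# (its rows are combinations of rows of H by construction).
H = [[1,1,1,1,1,1,1,1],
     [0,0,0,0,1,1,1,1],
     [0,0,1,1,0,0,1,1],
     [0,1,0,1,0,1,0,1]]
E = [[1,1,0,0,0,0,0,0],    # rank weight t = 2
     [0,1,1,0,0,0,0,0]]

def mul(A, B):
    return [[sum(a & b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

S = mul(H, [list(c) for c in zip(*E)])      # 4 x 2 syndrome
aug = [s + h for s, h in zip(S, H)]         # row-reduce S, dragging H along

rk = 0
for c in range(2):
    p = next((i for i in range(rk, 4) if aug[i][c]), None)
    if p is None: continue
    aug[rk], aug[p] = aug[p], aug[rk]
    for i in range(4):
        if i != rk and aug[i][c]:
            aug[i] = [x ^ y for x, y in zip(aug[i], aug[rk])]
    rk += 1

H2 = [row[2:] for row in aug[rk:]]          # n - k - t = 2 rows
assert len(H2) == 4 - rk == 2
assert mul(H2, [list(c) for c in zip(*E)]) == [[0, 0], [0, 0]]
```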
Lemma 3 shows that the matrix $\boldsymbol{H}_2$ is connected to the kernel of the error. The next lemma proves that this kernel is closely connected to the rank support of the error if the $q^m$-rank of the error is $t$ (full-rank condition). Note that Lemma 3 only required $t \le n-k$, which is a weaker condition.
Lemma 4.
Let $\boldsymbol{E} \in \mathbb{F}_{q^m}^{s \times n}$ be an error of rank weight $t$. If $s \ge t$ (high-order condition) and $\mathrm{rk}_{q^m}(\boldsymbol{E}) = t$ (full-rank condition), then
$$\ker_{q^m}(\boldsymbol{E}) = \ker_{q^m}(\boldsymbol{B}),$$
where the rows of $\boldsymbol{B} \in \mathbb{F}_q^{t \times n}$ are any basis of the error support $\mathrm{supp}_R(\boldsymbol{E})$.

Proof.

Let $\boldsymbol{E} = \boldsymbol{A}\boldsymbol{B}$ be a decomposition as in Lemma 1 (recall that the high-order condition is necessary for $\mathrm{rk}_{q^m}(\boldsymbol{E}) = t$). Then, $\boldsymbol{A} \in \mathbb{F}_{q^m}^{s \times t}$ must have full rank $t$. Thus, for all $\boldsymbol{x}$, we have $\boldsymbol{x}\boldsymbol{E}^\top = \boldsymbol{x}\boldsymbol{B}^\top\boldsymbol{A}^\top = \boldsymbol{0}$ if and only if $\boldsymbol{x}\boldsymbol{B}^\top = \boldsymbol{0}$, which is equivalent to the claim. ∎
In order to prove that the error support can be computed from $\boldsymbol{H}_2$, we require the following property of kernels of matrices over the small field.

Lemma 5.

Let $\boldsymbol{B} \in \mathbb{F}_q^{t \times n}$ and $\boldsymbol{x} \in \ker_{q^m}(\boldsymbol{B})$. Then each row of $\mathrm{ext}(\boldsymbol{x})$ is in $\ker_q(\boldsymbol{B})$.

Proof.

Let $\boldsymbol{x}_1, \dots, \boldsymbol{x}_m \in \mathbb{F}_q^n$ be the rows of $\mathrm{ext}(\boldsymbol{x})$, i.e., $\boldsymbol{x} = \sum_{i=1}^{m} \gamma_i \boldsymbol{x}_i$. Then $\boldsymbol{0} = \boldsymbol{B}\boldsymbol{x}^\top = \sum_{i=1}^{m} \gamma_i \boldsymbol{B}\boldsymbol{x}_i^\top$. Since the vectors $\boldsymbol{B}\boldsymbol{x}_i^\top$ have entries in $\mathbb{F}_q$ and the $\gamma_i$ are linearly independent over $\mathbb{F}_q$, it follows that $\boldsymbol{B}\boldsymbol{x}_i^\top = \boldsymbol{0}$ for all $i$. ∎
By combining the three lemmas above, the following theorem shows that the rank support of the error can be computed from $\boldsymbol{H}_2$ under the high-order and full-rank conditions. In the Hamming metric, Metzner and Kapturowski required the number of errors to be at most $d-2$ since they used the fact that any $d-1$ columns of the parity-check matrix are linearly independent. Here, we obtain the same condition, as we use the rank-metric analog of this statement, [27, Theorem 1]: if the parity-check matrix is multiplied from the right by any matrix over the small field $\mathbb{F}_q$ of rank $t \le d-1$, then the resulting matrix has rank $t$.
Theorem 6.
Let $\boldsymbol{E}$ be an error of rank weight $t \le d-2$. If $s \ge t$ (high-order condition) and $\mathrm{rk}_{q^m}(\boldsymbol{E}) = t$ (full-rank condition), then
$$\mathrm{supp}_R(\boldsymbol{E}) = \mathcal{R}_q\big(\mathrm{ext}(\boldsymbol{H}_2)\big)^\perp,$$
where $\boldsymbol{H}_2$ is defined as in Lemma 3.
Proof:
Lemmas 3 and 4 show that the high-order and full-rank conditions imply
$$\mathcal{R}_{q^m}(\boldsymbol{H}_2) = \mathcal{R}_{q^m}(\boldsymbol{H}) \cap \ker_{q^m}(\boldsymbol{B}). \tag{4}$$
In the following, we prove that if we consider the row space of the extended matrix $\mathrm{ext}(\boldsymbol{H}_2)$ instead of the row space of $\boldsymbol{H}_2$, the result is directly the kernel of $\boldsymbol{B}$, i.e.,
$$\mathcal{R}_q\big(\mathrm{ext}(\boldsymbol{H}_2)\big) = \ker_q(\boldsymbol{B}).$$
Together with $\mathrm{supp}_R(\boldsymbol{E}) = \mathcal{R}_q(\boldsymbol{B})$ and $\mathcal{R}_q(\boldsymbol{B}) = \ker_q(\boldsymbol{B})^\perp$, this implies the claim.

First, we prove that $\mathcal{R}_q(\mathrm{ext}(\boldsymbol{H}_2)) \subseteq \ker_q(\boldsymbol{B})$. It suffices to show that any row of $\mathrm{ext}(\boldsymbol{H}_2)$ is in the kernel of $\boldsymbol{B}$. Such a row is again a row of $\mathrm{ext}(\boldsymbol{x})$ for some row $\boldsymbol{x}$ of $\boldsymbol{H}_2$. Since obviously $\boldsymbol{x} \in \mathcal{R}_{q^m}(\boldsymbol{H}_2)$, it is left to show that $\boldsymbol{x} \in \ker_{q^m}(\boldsymbol{B})$. This follows from (4), which implies that $\boldsymbol{x}\boldsymbol{B}^\top = \boldsymbol{0}$ and thus
$$\mathrm{ext}(\boldsymbol{x})\,\boldsymbol{B}^\top = \mathrm{ext}(\boldsymbol{x}\boldsymbol{B}^\top) = \boldsymbol{0},$$
where the second equality is true since $\boldsymbol{B}$ has entries in $\mathbb{F}_q$.

Second, we show $\mathcal{R}_q(\mathrm{ext}(\boldsymbol{H}_2)) \supseteq \ker_q(\boldsymbol{B})$ by proving that