A Decoding Algorithm for Rank Metric Codes

by   Tovohery Hajatiana Randrianarisoa, et al.

In this work we present algorithms for decoding rank metric codes. First we give a new decoding algorithm for Gabidulin codes which uses properties of the Dickson matrices associated to linearized polynomials, together with a Berlekamp-Massey-like algorithm. We show how our algorithm differs from the existing ones. Apart from being new, the algorithm is also interesting in that it can be modified to decode general twisted Gabidulin codes.








1 Introduction

Rank metric codes have many applications in network coding and in cryptography. So far, there are two known general constructions of rank metric codes with arbitrary parameters. The first class of rank metric codes is the family of Gabidulin codes [Del78, Gab85, KG05], which was later generalised to the twisted Gabidulin codes [She16, LTZ15]. Several decoding algorithms already exist for Gabidulin codes [Gab85, Loi06, RP04]. For twisted Gabidulin codes, a decoding algorithm exists, but only for particular parameters of the code [RR17]. Most of the algorithms for Gabidulin codes use syndrome computation together with the extended Euclidean algorithm or the Berlekamp-Massey algorithm. In this work we again use a Berlekamp-Massey-like algorithm for a rank metric code, but in a different way. Namely, suppose that

y = c + e

is the received vector, with c a codeword and e the error. We first interpolate the linearized polynomial

r(x) = f(x) + g(x)

from the received vector, where f(x) is the message polynomial and g(x) is the polynomial corresponding to the error vector. Due to the form of f(x) (its q-degree is at most k − 1, where k is the dimension of the code), we already know some coefficients of g(x), and we will show that these coefficients are enough to recover the whole polynomial g(x). This algorithm can be further modified to work in all cases of twisted Gabidulin codes. To do this, we will first give a description of Gabidulin codes and twisted Gabidulin codes in Section 2, together with a brief description of two existing decoding algorithms for Gabidulin codes. Then, in Section 3, we prove a theorem which enables us to build a new decoding algorithm; we present the new decoding algorithm for Gabidulin codes and show how it differs from the two algorithms of Section 2. In Section 4, we modify the algorithm to use it with twisted Gabidulin codes. Finally, we conclude in Section 5.

2 Rank metric codes

Definition 1.

Let F_{q^n} be a finite field extension of degree n over F_q. A linearized polynomial of q-degree k is a polynomial of the form

f(x) = f_0 x + f_1 x^q + … + f_k x^{q^k},

where f_i ∈ F_{q^n} for any integer i with 0 ≤ i ≤ k, and f_k ≠ 0. The set of all these polynomials will be denoted by L. If we fix an integer k, then L_k denotes the set of all linearized polynomials of q-degree at most k.

Example 1.

The trace map Tr: F_{q^n} → F_q is the linearized polynomial Tr(x) = x + x^q + … + x^{q^{n−1}}.

We know that F_{q^n} is a vector space over F_q of dimension n. And since x ↦ x^q is an F_q-automorphism of F_{q^n}, we see that any linearized polynomial induces an F_q-linear map F_{q^n} → F_{q^n}. In fact, L modulo x^{q^n} − x is isomorphic to the set of all n × n matrices over F_q. In this regard, we define the rank of a linearized polynomial to be its rank as an F_q-linear map on F_{q^n}.
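As a concrete illustration, here is a small sketch of this point of view (our own toy example, not taken from the paper): we work in GF(8) = F_2[a]/(a^3 + a + 1), represented as 3-bit integers, check that a linearized polynomial is additive (over F_2 this is the same as F_2-linearity), and compute its rank as an F_2-linear map. The helper names `gf_mul` and `rank_f2` are our own.

```python
# GF(8) = F_2[a]/(a^3 + a + 1); elements are 3-bit ints, a = 0b010.
def gf_mul(x, y):
    r = 0
    while y:
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x & 0b1000:
            x ^= 0b1011  # reduce modulo a^3 + a + 1
    return r

def rank_f2(vectors, width=3):
    """Rank over F_2 of a list of bitmask vectors (Gaussian elimination)."""
    rows, r = list(vectors), 0
    for bit in reversed(range(width)):
        piv = next((i for i in range(r, len(rows)) if rows[i] >> bit & 1), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i] >> bit & 1:
                rows[i] ^= rows[r]
        r += 1
    return r

f = lambda x: gf_mul(x, x)       # f(x) = x^2, the Frobenius (q = 2)
g = lambda x: gf_mul(x, x) ^ x   # g(x) = x^2 + x, kernel {0, 1}

# Additivity of a linearized polynomial (= F_2-linearity here).
assert all(f(x ^ y) == f(x) ^ f(y) for x in range(8) for y in range(8))

# The rank as an F_2-linear map is the rank of the images of a basis
# {1, a, a^2}; in this representation, coordinates are just the bits.
basis = [0b001, 0b010, 0b100]
print(rank_f2([f(b) for b in basis]))  # 3: the Frobenius is invertible
print(rank_f2([g(b) for b in basis]))  # 2: the kernel has dimension 1
```

The second polynomial has q-degree 1 and kernel of dimension 1, matching the bound discussed next.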

Furthermore, if f has q-degree k, then, considering it as a polynomial in x, it can have at most q^k roots. Therefore, as an F_q-linear map on F_{q^n}, f has a kernel of dimension at most k. This property allows us to use these polynomials to construct rank metric codes with good distance properties.

For more on the theory of linearized polynomials, one can have a look at [LN96] Chapter 3.

Definition 2.

Given a finite field extension F_{q^n}/F_q, a rank metric code C is a subset of F_{q^n}^n together with the metric d(x, y) = rank(x − y) for x, y ∈ C, where the rank of a vector in F_{q^n}^n is the dimension of the F_q-span of its entries. We call C linear if it is a vector space over F_{q^n}; if C has dimension k and d is the minimum distance between two distinct codewords of C, then we say that C is a linear [n, k, d] rank metric code.

Remark 1.

Alternative representations of a rank metric code are given by the following:

  1. The code is a subset of F_q^{n×n}, where the rank of a codeword is the usual rank of a matrix, i.e. the maximum number of F_q-linearly independent columns.

  2. The code is a subset of L, where the rank of a codeword is the rank of the corresponding linearized polynomial as an F_q-linear map.

Given the length and the minimum distance of a rank metric code, an upper bound on the size of the code is given by the following theorem.

Theorem 1 ([Del78]).

Let C be a linear code in F_{q^n}^n. If the minimum distance of C is equal to a positive integer d, then dim C ≤ n − d + 1, i.e. |C| ≤ q^{n(n−d+1)}. Codes for which the bound is attained (i.e. d = n − dim C + 1) are called maximum rank distance (MRD) codes.

The first class of linear rank metric codes is the family of Gabidulin codes [Del78, Gab85], given by

G_k = { f_0 x + f_1 x^q + … + f_{k−1} x^{q^{k−1}} : f_i ∈ F_{q^n} }.

To prove that this code is MRD, we just use the fact that a nonzero codeword, being a linearized polynomial of q-degree at most k − 1, has a kernel of dimension at most k − 1, and then use the rank-nullity theorem: every nonzero codeword has rank at least n − k + 1.
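The rank-nullity argument can be checked numerically on a toy instance. The following is a hedged sketch (our own example, not from the paper): the Gabidulin code with q = 2, n = 3, k = 2 over GF(8), i.e. all evaluations of f(x) = f_0 x + f_1 x^2, examined by brute force; the helper names are ours.

```python
# GF(8) = F_2[a]/(a^3 + a + 1); elements are 3-bit ints.
def gf_mul(x, y):
    r = 0
    while y:
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x & 0b1000:
            x ^= 0b1011
    return r

def rank_f2(vectors, width=3):
    """Rank over F_2 of a list of bitmask vectors (Gaussian elimination)."""
    rows, r = list(vectors), 0
    for bit in reversed(range(width)):
        piv = next((i for i in range(r, len(rows)) if rows[i] >> bit & 1), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i] >> bit & 1:
                rows[i] ^= rows[r]
        r += 1
    return r

basis = [0b001, 0b010, 0b100]  # {1, a, a^2}, a basis of GF(8) over F_2

# Gabidulin code with k = 2: evaluations of f(x) = f0*x + f1*x^2.
ranks = []
for f0 in range(8):
    for f1 in range(8):
        if f0 == 0 and f1 == 0:
            continue  # skip the zero codeword
        images = [gf_mul(f0, b) ^ gf_mul(f1, gf_mul(b, b)) for b in basis]
        ranks.append(rank_f2(images))

# For a linear code, the minimum distance is the minimum rank of a
# nonzero codeword.
print(min(ranks))  # 2 = n - k + 1, so this toy code is MRD
```

Every nonzero codeword indeed has rank at least n − k + 1 = 2, and the bound is attained (e.g. by f(x) = x^2 + x), as the rank-nullity argument predicts.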

This construction was generalized by Sheekey in [She16]. Namely, the class of twisted Gabidulin codes is defined as follows:

H_k(η, h) = { f_0 x + f_1 x^q + … + f_{k−1} x^{q^{k−1}} + η f_0^{q^h} x^{q^k} : f_i ∈ F_{q^n} },

where η ∈ F_{q^n} with N_{q^n/q}(η) ≠ (−1)^{nk} and h is a non-negative integer. This is an MRD code because a nonzero codeword cannot have q^k zeroes, by the choice of η [She16].

Remark 2.

The above representations use linearized polynomials, which are F_q-linear maps F_{q^n} → F_{q^n}. To get a representation of the code as a subset of F_{q^n}^n, we evaluate the polynomials on a fixed basis of the extension F_{q^n}/F_q. Using this basis, we can also get a representation of the code in matrix form in F_q^{n×n}.

Remark 3.

The construction can be generalized by replacing the monomials x^{q^i} by x^{q^{si}}, where gcd(s, n) = 1. The resulting codes are called generalized Gabidulin codes, see [KG05].

There are already many decoding algorithms for Gabidulin codes. For twisted Gabidulin codes, there is a decoding algorithm, but only for some specific parameters [RR17]. As mentioned before, we will give another decoding algorithm for Gabidulin codes and show how to modify it into a decoding algorithm for general twisted Gabidulin codes. In order to see the difference between the existing algorithms and ours, we first give a brief description of two decoding algorithms for an [n, k] Gabidulin code.

  1. Compute the syndrome vector from the received word and a parity check matrix of the code. The entries of this vector define a linearized syndrome polynomial.

  2. Determine two linearized polynomials, an error span polynomial and an auxiliary polynomial, satisfying the key equation. Here, there are two methods: the Berlekamp-Massey algorithm [RP04] or the extended Euclidean algorithm [Gab85].

  3. Find a basis of the kernel of the error span polynomial.

  4. Compute the error values from this basis.

  5. Find a matrix expressing the error locations with respect to the chosen basis.

  6. From these, assemble the error vector.

  7. Output the message by subtracting the error vector from the received word.

3 Decoding algorithm for Gabidulin codes

Before we give our new decoding algorithm, we first introduce the needed tool. We know that a linear map of rank t can be decomposed as a sum of t linear maps of rank one, and this can be made explicit in the setting of linearized polynomials. In the remaining part of this paper, we only use linearized polynomials in L, i.e. reduced modulo x^{q^n} − x.

Lemma 1.

Any F_q-linear map φ: F_{q^n} → F_q can be represented by φ(x) = Tr(αx) for a fixed α ∈ F_{q^n}.

Proof.

The maps x ↦ Tr(αx) are obviously F_q-linear maps F_{q^n} → F_q, and distinct choices of α give distinct maps. The equality comes by comparing the dimension of the space of such maps with the dimension of the space of all F_q-linear maps F_{q^n} → F_q. ∎

Corollary 1.

Let v be a nonzero element of F_{q^n} and let ⟨v⟩ be the F_q-subspace of F_{q^n} generated by v. Then any F_q-linear map F_{q^n} → ⟨v⟩ has the form x ↦ Tr(αx)v for some α ∈ F_{q^n}.

The above representation in the corollary is of course not unique. As a consequence of the previous corollary, we have the following theorem.

Theorem 2.

Let g be a linearized polynomial of rank t. Then there are two subsets {α_1, …, α_t} and {v_1, …, v_t} of F_{q^n}, both linearly independent over F_q, such that

g(x) = Σ_{i=1}^{t} Tr(α_i x) v_i.

Proof.

Since g is of rank t, we choose v_1, …, v_t to be a basis of the image of g as an F_q-linear map. By Corollary 1, the projection of g onto the subspace ⟨v_i⟩ has the form Tr(α_i x) v_i. Thus we get the desired form of g. What remains to show is the linear independence of the α_i's. Without loss of generality, say α_t = Σ_{i=1}^{t−1} λ_i α_i, with λ_i ∈ F_q. Then

g(x) = Σ_{i=1}^{t−1} Tr(α_i x)(v_i + λ_i v_t).

Thus the rank of g is at most t − 1, which is a contradiction. ∎

From Theorem 2, we get the following corollary.

Corollary 2.

Let g(x) = Σ_{i=0}^{n−1} g_i x^{q^i} be a linearized polynomial of rank t over the field extension F_{q^n}/F_q. Then there are two subsets {α_1, …, α_t} and {v_1, …, v_t} of F_{q^n}, both linearly independent over F_q, such that for all integers j with 0 ≤ j ≤ n − 1,

g_j = Σ_{i=1}^{t} v_i α_i^{q^j}.

Definition 3.

Let f(x) = Σ_{i=0}^{n−1} f_i x^{q^i} be a linearized polynomial. The Dickson matrix associated to f is the n × n matrix

D(f) = (f_{(i−j) mod n}^{q^j})_{0 ≤ i, j ≤ n−1}.

Another matrix related to linearized polynomials is the Moore matrix.

Definition 4.

Given α_1, …, α_n ∈ F_{q^n}, the Moore matrix associated to the α_i's is the n × n matrix

M(α_1, …, α_n) = (α_j^{q^i})_{0 ≤ i ≤ n−1, 1 ≤ j ≤ n}.

It is well known that the above Moore matrix is invertible if and only if the α_i's are linearly independent over F_q.

As a consequence of Corollary 2, we have the following theorem.

Theorem 3.

Let g(x) = Σ_{i=0}^{n−1} g_i x^{q^i} be a linearized polynomial of rank t over the field extension F_{q^n}/F_q.

Let r_0, …, r_{n−1} be the rows of the Dickson matrix D = D(g) as in Definition 3.

Then we have the following properties:

  1. The matrix D is of rank t.

  2. Any t successive rows of D are linearly independent, and the other rows are linear combinations of them.

  3. All t × t submatrices of D formed by t successive rows and t successive columns are invertible.

Proof.

From Corollary 2, one sees that D = AB, where

A = (α_i^{q^m})_{0 ≤ m ≤ n−1, 1 ≤ i ≤ t} and B = (v_i^{q^j})_{1 ≤ i ≤ t, 0 ≤ j ≤ n−1}.

Since the α_i's are linearly independent over F_q, and the same holds for the v_i's, any t successive rows of A and any t successive columns of B constitute invertible Moore-type matrices. All statements of the theorem follow from these facts. ∎
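The rank statement of Theorem 3 can be verified numerically. The sketch below is our own toy example over GF(8) with q = 2, n = 3 (helper names are ours, not the paper's): we build the Dickson matrix of g(x) = a·x + x^2, which has rank 2 as an F_2-linear map (its kernel is {0, a}), and check that the matrix rank over GF(8) is also 2.

```python
# GF(8) = F_2[a]/(a^3 + a + 1); elements are 3-bit ints, a = 0b010.
def gf_mul(x, y):
    r = 0
    while y:
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x & 0b1000:
            x ^= 0b1011
    return r

def gf_inv(x):
    return next(c for c in range(1, 8) if gf_mul(x, c) == 1)

def frob(x, j):
    for _ in range(j):
        x = gf_mul(x, x)  # Frobenius x -> x^2, applied j times
    return x

def rank_gf8(mat):
    """Rank of a matrix over GF(8) by Gaussian elimination."""
    m, r = [row[:] for row in mat], 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        inv = gf_inv(m[r][col])
        m[r] = [gf_mul(inv, x) for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][col]:
                c = m[i][col]
                m[i] = [x ^ gf_mul(c, y) for x, y in zip(m[i], m[r])]
        r += 1
    return r

n = 3
g = [0b010, 0b001, 0b000]  # g(x) = a*x + x^2: coefficients g_0, g_1, g_2

# Dickson matrix D[i][j] = g_{(i-j) mod n}^(q^j), as in Definition 3.
D = [[frob(g[(i - j) % n], j) for j in range(n)] for i in range(n)]

print(rank_gf8(D))  # 2, matching the rank of g as an F_2-linear map
```

Here the matrix rank over the big field GF(8) coincides with the rank of g as a map over the small field F_2, exactly as statement 1 of Theorem 3 asserts.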

It is this theorem that is important to us. This enables us to build a new decoding algorithm.

We are now ready to explain the decoding algorithm. It consists of two steps. The first part is to interpolate the received word to construct the polynomial r(x) = f(x) + g(x), where f(x) is the message polynomial and g(x) is the error polynomial. Since f(x) has q-degree at most k − 1, we already know the coefficients of x^{q^i} in r(x) for k ≤ i ≤ n − 1: they are the corresponding coefficients of g(x). We will show that these coefficients are actually enough to recover the whole polynomial g(x), under a condition on the rank of g.

3.1 Polynomial interpolation

First of all, depending on the representation of the code, we need to do some interpolation to get a linearized polynomial form. Assume that our encoding is given by

(f_0, …, f_{k−1}) ↦ c = (f(β_1), …, f(β_n)),

where {β_1, …, β_n} is a fixed basis of F_{q^n} over F_q and f(x) = Σ_{i=0}^{k−1} f_i x^{q^i}.

We assume that an error e of rank t was added to the original codeword and suppose that t ≤ ⌊(n−k)/2⌋. Therefore y = c + e was received, with y, e ∈ F_{q^n}^n.

Let M be the Moore matrix

M = (β_j^{q^i})_{0 ≤ i ≤ n−1, 1 ≤ j ≤ n},

so that any linearized polynomial r(x) = Σ_{i=0}^{n−1} r_i x^{q^i} satisfies (r(β_1), …, r(β_n)) = (r_0, …, r_{n−1}) M. Let g(x) be the error polynomial corresponding to e, i.e. g(β_j) = e_j for all j. Obviously, g as an F_q-linear map has rank t.

Thus, we may compute M^{−1} in advance and then compute

(r_0, …, r_{n−1}) = y M^{−1}.

This gives us r(x) = f(x) + g(x). Since f has q-degree at most k − 1, we now know the values r_i = g_i for k ≤ i ≤ n − 1. In the next step, we will use these coefficients to recover the other coefficients of g.
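The interpolation step can be sketched on a toy instance (our own choices: q = 2, n = 3, k = 1 over GF(8), with our own helper names and a hand-picked basis and error): we encode f(x) = a·x, add the rank-1 error polynomial g(x) = x + x^2 + x^4 (the trace map), solve the Moore system for the coefficients r_i, and observe that r_i = g_i for i ≥ k.

```python
# GF(8) = F_2[a]/(a^3 + a + 1); elements are 3-bit ints, a = 0b010.
def gf_mul(x, y):
    r = 0
    while y:
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x & 0b1000:
            x ^= 0b1011
    return r

def gf_inv(x):
    return next(c for c in range(1, 8) if gf_mul(x, c) == 1)

def frob(x, j):
    for _ in range(j):
        x = gf_mul(x, x)  # Frobenius x -> x^2, applied j times
    return x

def solve_gf8(A, y):
    """Solve A r = y over GF(8) by Gauss-Jordan elimination."""
    n = len(A)
    M = [A[i][:] + [y[i]] for i in range(n)]
    for col in range(n):
        piv = next(i for i in range(col, n) if M[i][col])
        M[col], M[piv] = M[piv], M[col]
        inv = gf_inv(M[col][col])
        M[col] = [gf_mul(inv, x) for x in M[col]]
        for i in range(n):
            if i != col and M[i][col]:
                c = M[i][col]
                M[i] = [x ^ gf_mul(c, z) for x, z in zip(M[i], M[col])]
    return [M[i][n] for i in range(n)]

def ev(coeffs, x):
    """Evaluate the linearized polynomial sum c_i x^(2^i)."""
    r = 0
    for i, c in enumerate(coeffs):
        r ^= gf_mul(c, frob(x, i))
    return r

beta = [0b001, 0b010, 0b100]  # basis {1, a, a^2} of GF(8) over F_2
k = 1
f = [0b010]      # message polynomial f(x) = a*x   (q-degree < k)
g = [1, 1, 1]    # error polynomial g(x) = x + x^2 + x^4 = Tr(x), rank 1

y = [ev(f, b) ^ ev(g, b) for b in beta]            # received word
A = [[frob(b, i) for i in range(3)] for b in beta]  # Moore system
r = solve_gf8(A, y)                                 # coefficients of r(x)

print(r[1:] == g[1:])       # True: r_i = g_i for i >= k
print(r[0] == f[0] ^ g[0])  # True: r_0 = f_0 + g_0
```

The coefficients of r(x) above q-degree k − 1 are exactly the high coefficients of the error polynomial, which is what the reconstruction step of the next subsection consumes.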

3.2 Polynomial reconstruction

Let us have a look at the Dickson matrix of Theorem 3 associated to the error polynomial g. We consider the submatrix formed by t + 1 successive rows of it.

By Theorem 3, the last t of these rows are linearly independent, while the first row is a linear combination of the last t rows.

Written out coefficient-wise, this dependence gives an equation of the form

g_m = Σ_{i=1}^{t} λ_i^{q^m} g_{m+i},   (1)

where the λ_i ∈ F_{q^n} do not depend on m and the indices of g are taken modulo n.

Notice that, by the interpolation step, we know the coefficients g_k, …, g_{n−1}. Since we supposed t ≤ ⌊(n−k)/2⌋, we have n − k ≥ 2t. Thus, for m = k, …, k + t − 1, Equation (1) involves only known coefficients of g, and we obtain t equations in the t unknowns λ_1, …, λ_t. By Theorem 3, this system has a unique solution, which we can compute. This can be done for example by using matrix inversion, but that takes O(t^3) operations.

Similarly to the case of Reed-Solomon codes, we can do better. Namely, the system has a Toeplitz-like structure, and it can actually be solved by using a Berlekamp-Massey-like algorithm from [RP04]. To see this, let h_m = g_m^{q^{−m}}. Therefore, Equation (1) becomes

h_m = Σ_{i=1}^{t} λ_i h_{m+i}^{q^i}.   (2)

We want to find the λ_i's from the known sequence h_k, …, h_{n−1}. Equation (2) is exactly the form of recurrence treated in [RP04]. In that paper, an algorithm for solving Equation (2) is given; we reproduce its structure as Algorithm 1.

1: procedure BERLEKAMP-MASSEY(h_k, …, h_{k+2t−1})
2:     initialise the candidate solution (λ_1, …, λ_L) ← (), L ← 0, m ← 1, b ← 1, r ← 0
3:     while r < 2t do
4:         d ← discrepancy of the candidate solution in recurrence (2) at the r-th known coefficient
5:         if d = 0 then
6:             m ← m + 1
7:         else
8:             if 2L ≤ r then
9:                 save the candidate, update it using d, b, m and the previously saved one, then set L ← r + 1 − L, b ← d, m ← 1
10:            else
11:                update the candidate using d, b, m and the saved one, then set m ← m + 1
12:            end if
13:        end if
14:        r ← r + 1
15:    end while
16:    return (λ_1, …, λ_t)
17: end procedure
Algorithm 1 Berlekamp-Massey-like algorithm (structure as in [RP04]; the discrepancy and update steps involve Frobenius powers q^i)

In this algorithm, the saved solution plays the role of the backup polynomial B(x) in the classical Berlekamp-Massey algorithm, and at the end of the algorithm we just collect the coefficients λ_1, …, λ_t of the candidate solution. Notice that as input we take the 2t known values h_k, …, h_{k+2t−1}.
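We cannot reproduce the exact twisted update rule of [RP04] here, but its skeleton is that of the classical Berlekamp-Massey algorithm over a field; the q-power twist only changes the discrepancy and update formulas, and the index direction differs. As a hedged sketch, here is the classical algorithm over the prime field F_7 (our own toy example, not the paper's algorithm), finding the shortest linear recurrence satisfied by a sequence:

```python
def berlekamp_massey(s, p):
    """Shortest linear recurrence over F_p: returns (C, L) with C[0] = 1
    and sum(C[i] * s[n - i] for i in range(L + 1)) == 0 mod p for n >= L."""
    C = [1] + [0] * len(s)   # current connection polynomial
    B = [1] + [0] * len(s)   # copy saved at the last length change
    L, m, b = 0, 1, 1
    for n in range(len(s)):
        # discrepancy: how far C fails to predict s[n]
        d = sum(C[i] * s[n - i] for i in range(L + 1)) % p
        if d == 0:
            m += 1
        else:
            coef = d * pow(b, p - 2, p) % p   # d / b in F_p
            T = C[:]
            for i in range(len(s) + 1 - m):
                C[i + m] = (C[i + m] - coef * B[i]) % p
            if 2 * L <= n:                    # length change
                L, B, b, m = n + 1 - L, T, d, 1
            else:
                m += 1
    return C[:L + 1], L

# Sequence satisfying s_n = s_{n-1} + 2*s_{n-2} (mod 7).
s = [1, 1, 3, 5, 4, 0, 1, 1]
C, L = berlekamp_massey(s, 7)
print(L)  # 2: the shortest recurrence has order 2
print(all(sum(C[i] * s[n - i] for i in range(L + 1)) % 7 == 0
          for n in range(L, len(s))))  # True: C annihilates the sequence
```

The twisted version of [RP04] replaces the products C[i] * s[n − i] by products involving q^i-th Frobenius powers, exactly as in recurrence (2), while keeping this control flow.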

We summarize our decoding algorithm with the following steps in Algorithm 2. Suppose y was received with an error of rank t ≤ ⌊(n−k)/2⌋. We already know the matrix M^{−1} in advance.


  1. Compute (r_0, …, r_{n−1}) = y M^{−1}; this gives the known coefficients g_k = r_k, …, g_{n−1} = r_{n−1}.

  2. Use the Berlekamp-Massey-like Algorithm 1 to get the λ_i's.

  3. Use the fact that the first row of each window of t + 1 successive rows of the Dickson matrix is a linear combination of the remaining rows, i.e. recurrence (1) with the λ_i's, to recursively compute the remaining coefficients g_{k−1}, …, g_0 of g. This is just like recursively computing the elements of a sequence; the difference with a linear-feedback shift register is that each step also involves raising to powers of q.

  4. Output the message as f(x) = r(x) − g(x).

Algorithm 2 Decoding algorithm

3.3 Complexity and comparison with other algorithms

The first three steps of Algorithm 2 have quadratic complexity, i.e. they can be done in O(n^2) operations in F_{q^n}. The last step is a linear operation. Thus, in total, we have an algorithm requiring O(n^2) operations in F_{q^n}.

We already saw two decoding algorithms in Section 2, and the differences appear step by step. Instead of using an (n−k) × n parity check matrix to compute syndromes, we use the n × n matrix M^{−1} to interpolate r(x), so in the first step we have some extra multiplications. The second steps are more or less the same, as they use either the Berlekamp-Massey or the extended Euclidean algorithm. The last steps are where we may get the advantage, as we directly use a linear recurrence to recover the error polynomial. For the other algorithms in Section 2, one first needs to compute the roots of an error locator polynomial before one can reconstruct the error vector using some relations.

4 Extension to twisted Gabidulin codes

In this section, we will explain how our algorithm can be modified into a decoding algorithm for twisted Gabidulin codes and, in contrast to the algorithm in [RR17], one that works for any parameters. We assume that the original message was encoded as

f(x) = f_0 x + f_1 x^q + … + f_{k−1} x^{q^{k−1}} + η f_0^{q^h} x^{q^k}.

After the interpolation step, we get the polynomial r(x) = f(x) + g(x). In contrast to the case of Gabidulin codes, we do not directly know the coefficient g_k from this, since r_k = η f_0^{q^h} + g_k. However, the problem we face remains similar: we want to find a linear relation between t + 1 successive rows of the Dickson matrix of g, where we know the values g_{k+1}, …, g_{n−1} and r_k = η f_0^{q^h} + g_k. We will see that we can still solve this problem. We have an equation of the same form as (1), but with one more unknown coefficient, g_k, appearing in it. Again, by assumption, t ≤ ⌊(n−k)/2⌋, thus n − k ≥ 2t. If n − k > 2t, then the Berlekamp-Massey-like algorithm, applied only to the known coefficients g_{k+1}, …, g_{n−1}, is enough to compute the λ_i's. If n − k = 2t, then the equation becomes,


where two entries, depending on the unknowns f_0 and g_k, are unknown.

If we use the columns of the matrix except the first and the last column, then we get an underdetermined system of linear equations whose solution space is of dimension two. Let λ' and λ'' be two linearly independent solutions; they can be found using the Berlekamp-Massey-like algorithm again. Thus a solution of equation (3) is of the form λ' + aλ'' for some a ∈ F_{q^n}. Using this with the first column and the last column, we get two equations. Furthermore, we also know r_k = η f_0^{q^h} + g_k. So in total we get a system of three equations in three unknowns,


In this system, all other quantities are known, while a, f_0 and g_k are unknown. Notice that any solution of the system of equations (4) is actually a solution of the decoding problem. By the unique decoding property, there can only be one solution of this system.

To solve the system, we use the third equation in the first two equations and get two equations (5) in the unknowns a and f_0, with known coefficients. We can further reduce this to a one-variable polynomial equation of the form, for some integer u,

x^{q^u+1} + bx + c = 0.

We want to point out that this form of equation was also obtained in [RR17]. However, in our case, we are sure that any solution gives us the closest codeword to the received message. We have now reduced the problem to solving the polynomial equation x^{q^u+1} + bx + c = 0.

We distinguish three cases:

  • If c = 0, then we can factor the equation as x(x^{q^u} + b) = 0.

  • If b = 0, then the equation is x^{q^u+1} = −c.

  • If b ≠ 0 and c ≠ 0, then, from [Blu04], by a change of variable x ↦ μx for a suitable μ, we will get a polynomial equation of the normalized form

    x^{q^u+1} + x + c' = 0, with c' ∈ F_{q^n}^*.

First of all, it is easy to show that once we have a root x_0 of this equation, we can use equation (5) to get a and f_0, and then equation (4) to get g_k. These give us the error polynomial via the recurrence relation from equation (3). So, normally, there should be only one valid solution x_0; any of the three cases producing multiple solutions can be resolved using the unique decoding property. The first case, c = 0, is easy to solve. The two last cases reduce to polynomials of the form

x^{q^u+1} + bx + c, with c ≠ 0.

The number of roots of such polynomials was studied in [Blu04]. Here we will give a method to find these roots.

Suppose that x_0 is a nonzero root of x^{q^u+1} + bx + c. Then set x_0 = y^{q^u−1} and choose y ≠ 0 accordingly; thus y^{q^{2u}} + b y^{q^u} + cy = 0. We get that y is a root of the linearized polynomial L(y) = y^{q^{2u}} + b y^{q^u} + cy.

The converse is also true. So, to get the roots of x^{q^u+1} + bx + c, we just need to find the roots of the linearized polynomial L. In case L admits a nonzero root y in F_{q^n}, we just take x_0 = y^{q^u−1}. Otherwise, we will need to use a factorization algorithm like the one in [Gie98].

Once such a root is computed, we can recover f_0 and g_k. Then we continue the decoding algorithm with the same method as for the Gabidulin codes.

Remark 4.

These algorithms can easily be modified to get decoding algorithms for generalized (twisted) Gabidulin codes. Namely, instead of working with the field automorphism x ↦ x^q, we work with automorphisms of the form x ↦ x^{q^s}, where gcd(s, n) = 1.

5 Conclusion

In this work we have given a new decoding algorithm for Gabidulin codes. First, instead of computing syndromes, we do a polynomial interpolation. Our algorithm requires more computation in this first step, but we can compensate for this in the last steps: there is no need to find roots of an "error locator polynomial". We just need to use a recurrence relation to recover the "error polynomial" after running a Berlekamp-Massey-like algorithm. We gave a brief analysis of the complexity and a comparison of our algorithm with some existing decoding algorithms for Gabidulin codes. Furthermore, we showed that our algorithm can be modified into a general decoding algorithm for twisted Gabidulin codes.

Finally, we think that it is possible to obtain a version of our algorithm for Reed-Solomon codes. Namely, we can use an analogue of the Dickson matrix: in the case of Reed-Solomon codes, this is a circulant matrix, and a theorem of König-Rados relates the number of non-zero roots of a polynomial to the rank of a certain circulant matrix; see [LN96], Chapter 6, Section 1. It is known that the most expensive step in the decoding of Reed-Solomon codes is finding the roots of the error locator polynomial. This could be avoided in our algorithm.

We have seen that our algorithm involves finding the roots of a linearized polynomial. It is well known that factoring a quadratic polynomial can be done by computing its discriminant. The algorithm presented in [Gie98] gives a factorization for linearized polynomials of general degree; we could further simplify our algorithm if we had a discriminant-like method to factor the particular linearized polynomials appearing here.


Acknowledgements

I would like to thank Anna-Lena Horlemann-Trautmann and Joachim Rosenthal for their valuable comments and suggestions on this work.


References

  • [Blu04] A. W. Bluher. On x^{q+1} + ax + b. Finite Fields and Their Applications, 10(3):285–305, 2004.
  • [Del78] P. Delsarte. Bilinear forms over a finite field, with applications to coding theory. Journal of Combinatorial Theory, Series A, 25(3):226–241, 1978.
  • [Gab85] E. M. Gabidulin. Theory of codes with maximum rank distance. Problems of Information Transmission, 21:1–12, 1985.
  • [Gie98] M. Giesbrecht. Factoring in skew-polynomial rings over finite fields. Journal of Symbolic Computation, 26(4):463–486, 1998.
  • [KG05] A. Kshevetskiy and E. Gabidulin. The new construction of rank codes. In Proceedings of the International Symposium on Information Theory (ISIT 2005), pages 2105–2108, September 2005.
  • [LN96] R. Lidl and H. Niederreiter. Finite Fields. Cambridge University Press, 2nd edition, 1996.
  • [Loi06] P. Loidreau. A Welch-Berlekamp like algorithm for decoding Gabidulin codes, pages 36–45. Springer Berlin Heidelberg, Berlin, Heidelberg, 2006.
  • [LTZ15] G. Lunardon, R. Trombetti, and Y. Zhou. Generalized twisted Gabidulin codes. ArXiv e-prints, July 2015.
  • [RP04] G. Richter and S. Plass. Error and erasure decoding of rank-codes with a modified Berlekamp-Massey algorithm. In 5th International ITG Conference on Source and Channel Coding, pages 249–256, 2004.
  • [RR17] T. Randrianarisoa and J. Rosenthal. A decoding algorithm for twisted Gabidulin codes. In 2017 IEEE International Symposium on Information Theory (ISIT), pages 2771–2774, June 2017.
  • [She16] J. Sheekey. A new family of linear maximum rank distance codes. Advances in Mathematics of Communications, 10(3):475–488, 2016.