On the List Decodability of Self-orthogonal Rank Metric Codes

by Shu Liu, et al.
Nanyang Technological University

V. Guruswami and N. Resch proved in [17] that the list decodability of F_q-linear rank metric codes is as good as that of random rank metric codes. Motivated by the potential applications of self-orthogonal rank metric codes, we focus on their list decoding. In this paper, we prove that, with high probability, an F_q-linear self-orthogonal rank metric code over F_q^{n×m} of rate R = (1-τ)(1-(n/m)τ) - ϵ is list decodable up to fractional radius τ ∈ (0,1), for small ϵ ∈ (0,1), with list size at most O_{τ,q}(1/ϵ), where the hidden constant depends only on τ and q. In addition, we show that an F_{q^m}-linear self-orthogonal rank metric code of rate up to the Gilbert-Varshamov bound is (τn, O_{τ,q}(1/ϵ))-list decodable.






I Introduction

In the late 50's, P. Elias [10], [11] and J. M. Wozencraft [6] introduced list decoding. Compared with unique decoding, which must output a single codeword, list decoding outputs a list of codewords that contains the correct transmitted codeword. Consequently, the list size in list decoding can be bigger than one.

In coding theory, there is a trade-off between the fraction of errors that can be corrected and the rate. The maximum list size of the decoder's output is an important parameter in list decoding, and we want it to be small, for at least two reasons. The first concerns the usefulness of the list: after the list is output, the next step is to use it to decide what the original transmitted message was, for instance by outputting the codeword corresponding to the smallest error. If the list size is exponential, this decision step needs exponential time. The second reason is that the list size provides a lower bound on the worst-case complexity of the decoding algorithm itself. So, if we require the decoding algorithm to be efficient, we need the list size to be as small as possible.

List Decoding of Rank Metric Codes

A rank metric code is a set of matrices over a finite field F_q. By the ring isomorphism between F_{q^m} and F_q^m, an n×m matrix over F_q can be identified with a vector of length n over the extension field F_{q^m}. Rank metric codes have received much attention because of their applications in network coding [3], [13], storage systems [15], cryptography [4], [16], and space-time coding [12].

Finding good list decodable rank metric codes has attracted more and more researchers. In order to find the limit up to which efficient list decoding is possible, A. Wachter-Zeh provided lower and upper bounds on the list size in [2], [1] and showed that the upper bound on the list size is exponential for any decoding radius beyond half of the minimum distance. In addition, there exists a rank metric code with exponential list size for any decoding radius larger than half of the minimum distance, and no efficient list decoding can be found for Gabidulin codes if the decoding radius is beyond the Johnson bound. Y. Ding [18] revealed that the Singleton bound is the list decoding barrier for any rank metric code: with high probability, the decoding radius and the rate of a random rank metric code satisfy the Gilbert-Varshamov bound with constant list size, and, with high probability, an F_{q^m}-linear rank metric code can be list decoded with decoding radius attaining the Gilbert-Varshamov bound with exponential list size. Since the efficient list decoding radius of Gabidulin codes cannot be larger than the unique decoding radius, S. Liu, C. Xing and C. Yuan showed in [9] that, with high probability, a random subcode of a Gabidulin code is list decodable with decoding radius far beyond the unique decoding radius. However, for F_{q^m}-linear rank metric codes, when the list decoding radius is beyond half of the minimum distance, the list size is exponential. V. Guruswami and N. Resch reduced the list size of F_q-linear rank metric codes and showed in [17] that they are list decodable as well as random rank metric codes.


There have been some interesting findings on the list decodability of random F_q-linear rank metric codes [17], [5]. An interesting direction is to see whether these new results can be applied to improve results on specific F_q-linear rank metric codes. Due to their potential applications in many fields, the specific codes that we are interested in are F_q-linear self-orthogonal rank metric codes. Since, from the list decoding point of view, random F_q-linear rank metric codes perform as well as general rank metric codes, a natural question is whether this performance is maintained when we further restrict the random F_q-linear rank metric codes to be self-orthogonal. Moreover, beyond the F_q-linear case, we investigate how well one can list decode random F_{q^m}-linear self-orthogonal rank metric codes.


Firstly, we give some definitions and notation for self-orthogonal rank metric codes, list decoding and quadratic forms. Then, we describe how to construct F_q-linear and F_{q^m}-linear self-orthogonal rank metric codes based on quadratic forms and analyze their list decodability. Finally, we draw a conclusion.

II Preliminaries

II-A Self-Orthogonal Rank Metric Codes

Rank metric codes can be interpreted in two different representations. The first is to view each codeword as a matrix in M_{n×m}(F_q); alternatively, we can interpret each element of a rank metric code as a vector in F_{q^m}^n. In the matrix representation, linear codes are considered to be linear over F_q. On the other hand, when a rank metric code is seen as a set of vectors, the linearity considered is F_{q^m}-linearity. The two representations provide two different ways of defining self-orthogonal rank metric codes (F_q-linear and F_{q^m}-linear).

F_q-linear self-orthogonal rank metric codes

To properly define an F_q-linear self-orthogonal rank metric code, we first briefly give the definitions and notation of the matrix representation of rank metric codes. A rank metric code C is a set of n×m matrices over F_q, for integers n, m and a prime power q. Through the use of the matrix transpose, we may assume for simplicity that n is at most m. The rank distance between two matrices A and B is then defined by the rank of their difference, d_R(A, B) = rank(A - B), and the minimum rank distance d(C) is the smallest rank distance between distinct codewords. These parameters give two relative parameters, namely the relative minimum rank distance δ = d(C)/n and the rate R = log_q|C|/(nm).
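As a small illustration of these definitions (not part of the original paper), the rank distance can be computed directly for the case q = 2; `rank_f2` and `rank_distance` below are hypothetical helper names:

```python
def rank_f2(M):
    """Rank of a binary matrix via Gaussian elimination over F_2."""
    M = [row[:] for row in M]
    rank, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r][c]), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(rows):
            if r != rank and M[r][c]:
                M[r] = [a ^ b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def rank_distance(A, B):
    """d_R(A, B) = rank(A - B); over F_2, entrywise subtraction is XOR."""
    diff = [[a ^ b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
    return rank_f2(diff)

A = [[1, 0, 1], [0, 1, 1]]
B = [[1, 0, 1], [0, 0, 0]]
print(rank_distance(A, B))  # difference is [[0,0,0],[0,1,1]], rank 1
```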

The (Delsarte) dual of C is then defined to be C^⊥ = {M ∈ M_{n×m}(F_q) : Tr(M N^T) = 0 for all N ∈ C}, where Tr denotes the matrix trace.

Based on this definition of dual, an F_q-linear rank metric code C is said to be self-orthogonal if C ⊆ C^⊥. A readily verified property of an F_q-linear self-orthogonal rank metric code is that its F_q-dimension is at most nm/2, and hence its rate must lie in the range (0, 1/2].

F_{q^m}-linear self-orthogonal rank metric codes

Let F_{q^m} be the extension field of F_q of degree m. The ring isomorphism between F_{q^m} and F_q^m, through the use of a fixed F_q-basis of F_{q^m}, makes it possible to identify a rank metric code over M_{n×m}(F_q) with a collection of vectors over F_{q^m}. For any two vectors u = (u_1, …, u_n) and v = (v_1, …, v_n) in F_{q^m}^n, we say that they are orthogonal to each other if ⟨u, v⟩ = Σ_{i=1}^n u_i v_i = 0. The vector u is called a self-orthogonal vector if it is orthogonal to itself. The definition of self-orthogonality can be naturally extended to a set S ⊆ F_{q^m}^n: the set S is self-orthogonal if ⟨u, v⟩ = 0 for any choices of u and v in S.

Clearly, this suggests an alternative way to define the dual of a rank metric code C, namely the collection of vectors that are orthogonal to all codewords in C. Analogous to the previous definition, we call an F_{q^m}-linear rank metric code C self-orthogonal if C ⊆ C^⊥. Consequently, an F_{q^m}-linear rank metric code can only be self-orthogonal if its F_{q^m}-dimension does not exceed n/2, or equivalently if its rate is at most 1/2. We devote the remainder of this section to investigating when the two definitions of duals coincide.

In order to do that, we first review some basic algebra. Recall that for the field extension F_{q^m}/F_q there are m different F_q-linear automorphisms, namely the Frobenius automorphisms x ↦ x^{q^i} for i = 0, 1, …, m-1. Based on these automorphisms, we can define the field trace Tr: F_{q^m} → F_q, which sends α to Σ_{i=0}^{m-1} α^{q^i}. We also recall that for any given basis {α_1, …, α_m} of F_{q^m} over F_q there always exists a dual basis {β_1, …, β_m} satisfying Tr(α_i β_j) = δ_{ij}, the Kronecker delta. If α_i coincides with β_i for all i, the basis is called a self-dual basis. Note that a self-dual F_q-basis of F_{q^m} exists if and only if q is even, or both q and m are odd [5]. We are now ready to conduct our investigation.

Lemma 1.

Let q and m be chosen such that F_{q^m} has a self-dual basis {α_1, …, α_m}. Furthermore, pick any two F_{q^m}-linear rank metric codes C_1 and C_2. Then Tr(A B^T) = 0 for all A ∈ C_1 and B ∈ C_2 if and only if ⟨u, v⟩ = 0 for all u ∈ C_1 and v ∈ C_2.

Note that we assume the codes to be in matrix representation for the former equation and in vector representation for the latter.


Let u = (u_1, …, u_n) ∈ C_1 and v = (v_1, …, v_n) ∈ C_2. We further write u_i = Σ_{j=1}^m a_{ij} α_j and v_i = Σ_{j=1}^m b_{ij} α_j for all i. Assuming ⟨u, v⟩ = 0, we have Σ_{i=1}^n u_i v_i = 0.

Applying the field trace to both sides, we have 0 = Σ_{i=1}^n Tr(u_i v_i) = Σ_{i=1}^n Σ_{j,k=1}^m a_{ij} b_{ik} Tr(α_j α_k) = Σ_{i=1}^n Σ_{j=1}^m a_{ij} b_{ij}, where the last equality uses the self-duality of the basis, Tr(α_j α_k) = δ_{jk}.

Note that the n×m matrices A = (a_{ij}) and B = (b_{ij}) are the matrix representations of u and v, respectively, and the last sum is exactly Tr(A B^T). Hence Tr(A B^T) = 0. Since u and v are arbitrary elements of C_1 and C_2, respectively, the conclusion follows from the assumption.

The other direction can be proved similarly. Suppose that Tr(A B^T) = 0 for all A ∈ C_1 and B ∈ C_2. Reversing the computation above, we have Tr(⟨u, v⟩) = 0 for all u ∈ C_1 and v ∈ C_2. Since C_2 is F_{q^m}-linear, γv ∈ C_2 for every γ ∈ F_{q^m}, and hence Tr(γ⟨u, v⟩) = 0 for all γ. By the non-degeneracy of the trace map, this ultimately implies that ⟨u, v⟩ = 0.
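The trace identity at the heart of this proof can be checked numerically in a small case. The sketch below (an illustration, not from the paper) works over F_4 = {0, 1, ω, ω²} with the self-dual basis {ω, ω²}, and verifies that Tr(⟨u, v⟩) equals the matrix trace Tr(AB^T) over F_2 for all pairs of vectors of length 2:

```python
from itertools import product

# F_4 = {0, 1, w, w^2} encoded as 0, 1, 2, 3; addition is XOR.
MUL = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]

def tr(x):
    """Field trace Tr(x) = x + x^2, mapping F_4 onto F_2 = {0, 1}."""
    return x ^ MUL[x][x]

def coords(x):
    """Coordinates of x in the self-dual basis {w, w^2}: (Tr(x*w), Tr(x*w^2))."""
    return (tr(MUL[x][2]), tr(MUL[x][3]))

def identity_holds(n=2):
    """Check Tr(<u, v>) == Tr(A B^T) mod 2 for all u, v in F_4^n."""
    for u in product(range(4), repeat=n):
        for v in product(range(4), repeat=n):
            inner = 0
            for a, b in zip(u, v):
                inner ^= MUL[a][b]                  # <u, v> = sum u_i v_i in F_4
            A = [coords(a) for a in u]              # matrix representations
            B = [coords(b) for b in v]
            mat_trace = sum(A[i][j] * B[i][j]
                            for i in range(n) for j in range(2)) % 2
            if tr(inner) != mat_trace:
                return False
    return True

print(identity_holds())  # True
```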

II-B List Decoding

Given a received word, list decoding outputs a list of codewords; if decoding is successful, the list contains the correct transmitted codeword. Analogous to the Hamming ball for classical block codes, for rank metric codes we have the concept of the rank metric ball. The formal definition is given in the following.

Definition 1.

Let x ∈ F_{q^m}^n and 0 ≤ r ≤ n. The rank metric ball with centre x and radius r is defined by B(x, r) = {y ∈ F_{q^m}^n : rank(x - y) ≤ r}, where rank(x - y) denotes the rank of the matrix representation of x - y.

For any n-dimensional vector space V over F_q, we denote by [n choose k]_q the number of subspaces of V with dimension k. This is called the Gaussian binomial coefficient and it has the following explicit formula:

[n choose k]_q = Π_{i=0}^{k-1} (q^n - q^i)/(q^k - q^i).

It can be verified that this formula has the following bounds, which can be used for estimation:

q^{k(n-k)} ≤ [n choose k]_q ≤ 4 q^{k(n-k)}.
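For concreteness, the formula and the bounds can be checked numerically; this small sketch (an illustration, not from the paper) computes the Gaussian binomial via the product formula:

```python
def gaussian_binomial(n, k, q):
    """Number of k-dimensional subspaces of F_q^n:
       prod_{i=0}^{k-1} (q^n - q^i) / (q^k - q^i)."""
    num = den = 1
    for i in range(k):
        num *= q**n - q**i
        den *= q**k - q**i
    return num // den

g = gaussian_binomial(4, 2, 2)
print(g)                                   # 35 two-dimensional subspaces of F_2^4
print(2**(2 * 2) <= g <= 4 * 2**(2 * 2))   # bounds q^{k(n-k)} <= [n k]_q <= 4 q^{k(n-k)}
```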


Definition 2.

For an integer L ≥ 1 and a real τ ∈ (0, 1), a rank metric code C ⊆ F_{q^m}^n is said to be (τn, L)-list decodable if |B(x, τn) ∩ C| ≤ L for every x ∈ F_{q^m}^n.

Analogously, for the matrix representation, C ⊆ M_{n×m}(F_q) is (τn, L)-list decodable if every rank metric ball of radius τn contains at most L codewords of C.

II-C Quadratic Forms

We say Q is an n-variate quadratic form over F_q if it is a degree-2 homogeneous multinomial in n variables with coefficients from F_q, i.e., of the general form Q(x_1, …, x_n) = Σ_{1 ≤ i ≤ j ≤ n} a_{ij} x_i x_j with a_{ij} ∈ F_q.

Note that an n-variate quadratic form over F_q can be expressed as a matrix product. Letting x = (x_1, …, x_n) and A be the matrix with entries a_{ij}, the form Q over F_q can be rewritten as Q(x) = x A x^T.

Two quadratic forms Q_1 and Q_2 in n and s indeterminates, respectively, are called equivalent provided that we can find a full-rank matrix M over F_q satisfying Q_1(x) = Q_2(xM). Note that equivalence implies the same number of roots.

Equivalence enables two quadratic forms in different numbers of indeterminates to be closely related to each other. Given a non-zero quadratic form Q, the smallest number of indeterminates that a quadratic form can have while still being equivalent to Q is a parameter of Q called its rank. By convention, the zero quadratic form has rank 0. A non-zero quadratic form is said to be non-degenerate if its rank equals the number of its indeterminates.

To aid our analysis in this paper, the number of roots of a quadratic form is of interest. We combine several results from [14] into a lemma covering two cases (over F_q and over F_{q^m}).

Lemma 2.

[14] Let Q be a quadratic form of rank r in n indeterminates over F_q (or F_{q^m}), and let N denote the number of roots of Q in F_q^n (or F_{q^m}^n). If r = 0, then N = q^n (respectively q^{mn}). If r ≥ 1, then we have the following results:

if Q is defined over F_q, the number of solutions in F_q^n is N = q^{n-1} when r is odd, and N = q^{n-1} ± (q-1) q^{n - r/2 - 1} when r is even;

alternatively, if Q is defined over F_{q^m}, the same formulas hold for the solutions in F_{q^m}^n with q replaced by q^m.
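Lemma 2 can be verified by brute force for small parameters. The sketch below (an illustration, not from the paper) counts roots of Q(x) = x A x^T over F_3: a rank-3 form in 3 variables has q^{n-1} = 9 roots, while the rank-2 hyperbolic form x_1 x_2 has q^{n-1} + (q-1)q^{n-r/2-1} = 3 + 2 = 5 roots:

```python
from itertools import product

def count_roots(A, q, n):
    """Count x in F_q^n with Q(x) = x A x^T = 0 (mod q)."""
    cnt = 0
    for x in product(range(q), repeat=n):
        val = sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n)) % q
        cnt += (val == 0)
    return cnt

# Q = x1*x2 + x3^2 over F_3: rank 3 (odd), so N = q^{n-1} = 9.
print(count_roots([[0, 1, 0], [0, 0, 0], [0, 0, 1]], 3, 3))  # 9

# Q = x1*x2 over F_3: rank 2 (even, hyperbolic), so N = 3 + 2 = 5.
print(count_roots([[0, 1], [0, 0]], 3, 2))  # 5
```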

III Construction of Random Self-Orthogonal Rank Metric Codes

III-A Constructing F_q-linear Self-Orthogonal Rank Metric Codes

In this part, we construct F_q-linear self-orthogonal rank metric codes based on quadratic forms.

Let X = (x_{ij}) ∈ M_{n×m}(F_q) be a word. If X is self-orthogonal, then Tr(X X^T) = Σ_{i=1}^n Σ_{j=1}^m x_{ij}^2 = 0.

Considering the standard bijection from [n] × [m] to [nm], we can rewrite the double index as a single index to obtain the quadratic equation Σ_{k=1}^{nm} y_k^2 = 0.


  • Step 1: Choose a nonzero random solution X_1 of the quadratic equation Σ_{k=1}^{nm} y_k^2 = 0.

    By Lemma 2, the above equation has at least q^{nm-2} solutions, so a nonzero self-orthogonal word X_1 can be found.

  • Step 2: Obtain the k-th matrix X_k, given a linearly independent set of random self-orthogonal matrices.

    Firstly, we assume that a linearly independent set {X_1, …, X_{k-1}} of random self-orthogonal matrices with Tr(X_i X_j^T) = 0 for all 1 ≤ i, j ≤ k-1 has already been found. Then, to find the k-th matrix X_k, we need to find a solution X of the following system of equations:

    Tr(X_1 X^T) = 0, …, Tr(X_{k-1} X^T) = 0, and Tr(X X^T) = 0.   (1)

Substituting the first k-1 (linear) equations into the last (quadratic) one, we obtain a quadratic equation in nm - (k-1) variables. Let N_k denote the number of solutions of the system (1). The cardinality of span_{F_q}{X_1, …, X_{k-1}} is q^{k-1}; thus, whenever N_k > q^{k-1}, we can randomly choose a solution of (1) that is not contained in span_{F_q}{X_1, …, X_{k-1}}.

So, we can obtain a linearly independent set {X_1, …, X_k} of random self-orthogonal matrices.

Moreover, by Lemma 2, the number of solutions of (1) is at least q^{nm-k-1}. Thus, the set can always be constructed as long as q^{nm-k-1} > q^{k-1}, that is, as long as k < nm/2.
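The two steps above can be carried out by brute force for tiny parameters. The following sketch (an illustration with assumed parameters q = 3, n = m = 2; the helper names are ours, and a deterministic scan stands in for the random choices) greedily grows a linearly independent self-orthogonal set of 2×2 matrices over F_3 up to the nm/2 limit:

```python
from itertools import product

q, n, m = 3, 2, 2

def tr_prod(X, Y):
    """Delsarte inner product Tr(X Y^T) = sum_{i,j} x_ij y_ij mod q (flattened matrices)."""
    return sum(a * b for a, b in zip(X, Y)) % q

def in_span(X, basis):
    """Is X in the F_q-span of basis? Brute force over all coefficient tuples."""
    for coeffs in product(range(q), repeat=len(basis)):
        if all((sum(c * B[k] for c, B in zip(coeffs, basis)) - X[k]) % q == 0
               for k in range(n * m)):
            return True
    return False

# Steps 1-2: grow a linearly independent, pairwise orthogonal, self-orthogonal set.
basis = []
for X in product(range(q), repeat=n * m):        # all n x m matrices, flattened
    if any(X) and tr_prod(X, X) == 0 \
       and all(tr_prod(X, B) == 0 for B in basis) and not in_span(X, basis):
        basis.append(X)
        if len(basis) == n * m // 2:             # dimension limit k <= nm/2
            break

print(basis)  # [(0, 1, 1, 1), (1, 0, 1, 2)]
```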

III-B Constructing F_{q^m}-linear Self-Orthogonal Rank Metric Codes

We now study how to construct F_{q^m}-linear self-orthogonal rank metric codes. The idea is similar to the construction of F_q-linear self-orthogonal rank metric codes: constructing a random F_{q^m}-linear self-orthogonal rank metric code of dimension k is equivalent to finding a linearly independent set {v_1, …, v_k} of random mutually orthogonal vectors in F_{q^m}^n, where k ≤ n/2.

Choose a nonzero random solution v_1 of the quadratic equation Σ_{i=1}^n x_i^2 = 0 over F_{q^m}. By Lemma 2, this equation has at least (q^m)^{n-2} roots, so a self-orthogonal v_1 can be found. The same method as in the construction of F_q-linear self-orthogonal rank metric codes can then be applied. We can thus confirm that there exists a linearly independent set {v_1, …, v_k} of mutually orthogonal vectors. In addition, by a similar calculation, such a set exists as long as k < n/2.
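Analogously to the matrix case, the vector version of the greedy search can be sketched for a tiny example (an illustration with assumed parameters q^m = 4 and n = 4; helper names are ours, and a deterministic scan stands in for the random choices):

```python
from itertools import product

# F_4 = {0, 1, w, w^2} encoded as 0..3; addition is XOR.
MUL = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]
n = 4

def ip(u, v):
    """Inner product <u, v> = sum u_i v_i in F_4."""
    r = 0
    for a, b in zip(u, v):
        r ^= MUL[a][b]
    return r

def in_span(u, basis):
    """Is u in the F_4-span of basis? Brute force over coefficient tuples."""
    for coeffs in product(range(4), repeat=len(basis)):
        vec = [0] * n
        for c, b in zip(coeffs, basis):
            vec = [x ^ MUL[c][y] for x, y in zip(vec, b)]
        if tuple(vec) == u:
            return True
    return False

# Greedily collect linearly independent, mutually orthogonal vectors.
basis = []
for u in product(range(4), repeat=n):
    if any(u) and ip(u, u) == 0 \
       and all(ip(u, v) == 0 for v in basis) and not in_span(u, basis):
        basis.append(u)
        if len(basis) == n // 2:   # self-orthogonality forces dimension <= n/2
            break

print(basis)  # [(0, 0, 1, 1), (1, 1, 0, 0)]
```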

IV List Decoding Self-Orthogonal Rank Metric Codes

IV-A List Decoding F_q-linear Self-Orthogonal Rank Metric Codes

In this part, we investigate the list decodability of F_q-linear self-orthogonal rank metric codes. We show that the rate and decoding radius of F_q-linear self-orthogonal rank metric codes can achieve the Gilbert-Varshamov bound. From now on, the information rate and the ratio of the decoding radius to n are denoted by R and τ, respectively.

Our main result on list decoding of F_q-linear self-orthogonal rank metric codes is Theorem 1. We prove it by studying the weight distribution of certain rank metric codes.

Lemma 3.

[17] For all integers n ≤ m, every τ ∈ (0, 1) and every ϵ ∈ (0, 1), there exists a constant C_{τ,q} such that, if C_{τ,q}/ϵ words are selected independently and uniformly at random from the rank metric ball B(0, τn), then the probability that their F_q-span contains more than O_{τ,q}(1/ϵ) words of the ball is exponentially small.

The above lemma reveals that, when words are randomly picked from a rank metric ball, the event that their span contains more than the prescribed number of words of the ball happens with very small probability, where the number of picked words depends on the list size.

Then, we consider the following result on the probability that a random F_q-linear rank metric code of a given dimension, containing an F_q-linear self-orthogonal subcode of a prescribed dimension, contains a given set of linearly independent vectors. In what follows, the family under consideration is the set of F_q-linear rank metric codes of the given dimension in which every code contains an F_q-linear self-orthogonal subcode of the prescribed dimension.

Lemma 4.

[7] For any F_q-linearly independent words in M_{n×m}(F_q), the probability that a random code from the above family contains all of them can be computed explicitly.

Thus, we obtain an explicit upper bound on this probability.


Based on Lemma 3 and Lemma 4, we prove Theorem 1.

Theorem 1.

Let q be a prime power and τ ∈ (0, 1). There exists a constant c_{τ,q} such that, for all large enough n and m and for small ϵ ∈ (0, 1), an F_q-linear self-orthogonal rank metric code of rate R = (1-τ)(1-(n/m)τ) - ϵ is (τn, c_{τ,q}/ϵ)-list decodable with high probability.


Pick L = c_{τ,q}/ϵ, where c_{τ,q} is the constant in Lemma 3, and set n and m to be large enough.

Let C be an F_q-linear self-orthogonal rank metric code of dimension k in M_{n×m}(F_q), so its size is q^k. We want to show that, with high probability, C is (τn, L)-list decodable. In other words, we bound the probability, over the uniform choice of C from the set of F_q-linear self-orthogonal rank metric codes of dimension k, that C fails to be (τn, L)-list decodable.

Let x be picked uniformly at random, and define the list size at x to be |B(x, τn) ∩ C|. To establish (4), it suffices to prove (5).


The inequality (5) is derived from (4): for every F_q-linear code C that is not (τn, L)-list decodable, there is a "bad" centre x with |B(x, τn) ∩ C| > L, and since C is F_q-linear, each translate of such an x by a codeword is again bad, so there are many such "bad" centres.

Since C is F_q-linear, the bad centres can be handled through a random low-dimensional F_q-subspace V containing the list (if x is not in C, then V is spanned by x together with words randomly picked from B(x, τn) ∩ C; otherwise, V is spanned by such randomly picked words alone).

For each integer ℓ, consider the set of all tuples (x_1, …, x_ℓ) such that x_1, …, x_ℓ are linearly independent words of the rank metric ball whose span contains more than L words of the ball.

We claim that if |B(x, τn) ∩ C| > L, then C must contain such a tuple. Indeed, let S be a maximal linearly independent subset of B(x, τn) ∩ C. If |S| ≤ ℓ, then we take the tuple to be S; otherwise, we take the tuple to be any subset of S of size ℓ. Thus, the probability that C fails to be (τn, L)-list decodable is at most the sum, over all such tuples, of the probability that a random code from the family contains the tuple.

The last inequality follows from Lemma 4. To obtain a good bound on the probability, we need a reasonably good upper bound on the number of such tuples. As in [17], we bound it according to the value of the parameter ℓ.

  • Case of small ℓ:
    In this case, the normalized number of tuples is a lower bound on the probability that ℓ matrices chosen independently and uniformly at random from the rank metric ball are linearly independent with span containing more than L words of the ball.

    By Lemma 3, this probability is exponentially small, and thus so is the contribution of this case.

  • Case of large ℓ:
    We use the simple bound given by the total number of ℓ-tuples of words in the ball.

Finally, substituting the value of L into the inequality above, we conclude that an F_q-linear self-orthogonal rank metric code with rate R = (1-τ)(1-(n/m)τ) - ϵ fails to be (τn, L)-list decodable with only exponentially small probability.

IV-B List Decoding F_{q^m}-linear Self-Orthogonal Rank Metric Codes

We now consider the probability that a random F_{q^m}-linear code of a given dimension, containing an F_{q^m}-linear self-orthogonal subcode of a prescribed dimension, contains a given set of linearly independent vectors. In what follows, the family under consideration is the set of F_{q^m}-linear codes of the given dimension in which every code contains an F_{q^m}-linear self-orthogonal subcode of the prescribed dimension.

Lemma 5.

[7] For any F_{q^m}-linearly independent vectors in F_{q^m}^n, the probability that a random code from the above family contains them is