On Decoding of Generalized Concatenated Codes and Matrix-Product Codes

04/07/2020
by   Ferdinand Blomqvist, et al.

Generalized concatenated codes were introduced in the 1970s by Zinoviev. Many types of codes in the literature, although known by other names, can be viewed as generalized concatenated codes; examples include matrix-product codes, multilevel codes and generalized cascade codes. Decoding algorithms for generalized concatenated codes were developed during the 1970s and 1980s. However, their use does not appear to be as widespread as it should be, especially for codes that are known by other names but can be viewed as generalized concatenated codes. In this paper we review the decoding algorithms for concatenated codes, generalized concatenated codes and matrix-product codes, and clarify the connection between matrix-product codes and generalized concatenated codes. We present a small improvement to the decoding algorithm for concatenated codes. We also extend the decoding algorithms from errors-only decoders to error-and-erasure decoders. Furthermore, we improve the upper bound on the computational complexity of the decoding algorithm in the case of matrix-product codes where the generator matrix for the inner code is non-singular by columns.




I Introduction

Matrix-product codes form a class of generalized concatenated codes (GCC) and were introduced by Blackmore and Norton [blackmore2001matrix]. They can also be seen as generalizations of the Plotkin sum construction, which is also known as the $(u \mid u+v)$ construction. Decoding algorithms for matrix-product codes were considered by Hernando et al. in [hernando2009construction] and [hernando2013decoding]. These decoding algorithms can correct all error patterns up to half the minimum distance of the code, but they place restrictions on the choice of the component codes. In addition, the algorithms are computationally intensive and therefore intractable in many cases.

By utilizing Forney’s generalized minimum distance (GMD) decoding, we devised an efficient decoding algorithm for matrix-product codes that can correct all error patterns of weight less than half the minimum distance, and many error patterns of larger weight. In the final stages of our work we found out that most of it had already been done in the context of GCC in the 1970s and 1980s. Most of the old research on GCC is only published in Russian, and the community working on matrix-product codes seems to be mostly unaware of its existence. This article therefore serves the purpose of raising awareness of the existing research. In addition, this is, to the best of our knowledge, the first comprehensive treatment of this subject available in English. The treatments of the subject available in English include [ericson1986simple, bossert1988decoding], while [zyablov1999introduction] serves as a good introduction to concatenated codes and GCC, but it only briefly mentions the basics of the decoding algorithm. The decoding of GCC up to half the minimum distance is due to Blokh and Zyablov [blokh1974coding]. Improvements are due to Zinoviev [zinov1981generalized] and Bossert [bossert1988decoding].

The paper is organized as follows. Section II establishes the notation and presents the necessary preliminaries. Concatenated codes, GCC and matrix-product codes are presented in Sections III, IV and V, respectively. Sections VI, VII and VIII deal with the decoding of said codes. In Section IX we break down the decoding algorithm for two simple matrix-product codes. Finally, error-and-erasure decoding is discussed in Section X, and in Section XI we discuss different methods for increasing the error correction capabilities of the decoding algorithm.

II Preliminaries and notation

For $n \in \mathbb{Z}_{>0}$, define $[n] := \{1, \dots, n\}$. For a prime power $q$, $\mathbb{F}_q$ denotes the finite field with $q$ elements. The support of $x \in \mathbb{F}_q^n$ is defined as

$\mathrm{supp}(x) := \{ i \in [n] : x_i \neq 0 \},$

and $\mathrm{wt}(x) := |\mathrm{supp}(x)|$ denotes the Hamming weight of $x$. Given $S \subseteq [n]$,

$\mathrm{wt}_S(x) := |\mathrm{supp}(x) \setminus S|$

denotes the Hamming weight of $x$ punctured at the coordinates in $S$. The set of all $m \times n$ matrices with entries in $\mathbb{F}_q$ is denoted by $\mathbb{F}_q^{m \times n}$. The power set of a set $S$ is denoted by $2^S$. Finally, $x_i$ denotes the $i$-th coordinate of $x$.

A linear code over $\mathbb{F}_q$ with length $n$, dimension $k$ and minimum distance $d$ is said to be an $[n, k, d]_q$ code. The field size will be omitted when it is clear from the context.

II-A Generalized minimum distance decoding

For the remainder of this section, let $C \subseteq \mathbb{F}_q^n$ be a code with minimum distance $d$, and let $y \in \mathbb{F}_q^n$ be the received word. The following results are well known.

Theorem 1

Given $y \in \mathbb{F}_q^n$, there is at most one codeword $c \in C$ such that

$2\,\mathrm{wt}(y - c) < d.$   (1)

Theorem 2

Given $y \in \mathbb{F}_q^n$ and $S \subseteq [n]$, there is at most one codeword $c \in C$ such that

$2\,\mathrm{wt}_S(y - c) + |S| < d.$   (2)

Theorem 1 is the familiar statement that the spheres of Hamming radius $(d-1)/2$ centred at the codewords do not intersect. Theorem 2 is the analogous statement for error-and-erasure decoding. An errors-only decoder that only corrects words that satisfy (1) is called a minimum distance decoder.

In errors-only decoding the decoder is only given the received word. In order to ease the task of the decoder, one can also give the decoder information about the reliability of each symbol. If we give the decoder information about which symbols are erased (unreliable) and not erased (reliable), then we are performing error-and-erasure decoding. It follows from Theorem 2 that an error-and-erasure decoder that decodes every received word that satisfies (2), and rejects all other received words, can be described by a map

$\mathcal{D} : \mathbb{F}_q^n \times 2^{[n]} \to C \cup \{\mathrm{Failure}\},$

where $(y, S)$ is mapped to the (unique) closest codeword, or to $\mathrm{Failure}$ if (2) is not satisfied. Hence $\mathcal{D}(y, S) = c$ if and only if $2\,\mathrm{wt}_S(y - c) + |S| < d$.

There is of course no reason to restrict the supplied symbol reliability information to only two reliability classes. In [forney1965concatenated], Forney considered what happens if the received symbols are classified into $J$ reliability classes, and arrived at what he named GMD decoding. For shorter notation, given $y$ and a codeword $c$, define $\mathcal{E} := \mathrm{supp}(y - c)$ and $\bar{\mathcal{E}} := [n] \setminus \mathcal{E}$. Furthermore, let

$\alpha \in [0, 1]^n$

be the supplied reliability weight vector, meaning that $\alpha_i$ is the reliability of $y_i$. A smaller reliability weight means that the symbol is considered less reliable. We have the following theorem.

Theorem 3 (Forney [forney1965concatenated])

Given $y \in \mathbb{F}_q^n$ and $\alpha \in [0,1]^n$, there is at most one codeword $c \in C$ such that

$\sum_{i \in \mathcal{E}} (1 + \alpha_i) + \sum_{i \in \bar{\mathcal{E}}} (1 - \alpha_i) < d.$   (3)

Suppose we have $J$ reliability classes with corresponding reliability weights $\beta_1 < \beta_2 < \dots < \beta_J$, where $\beta_j \in [0,1]$ for all $j \in [J]$. Each symbol is put into one of these reliability classes. Let $B_j \subseteq [n]$ denote the set of coordinates in the $j$-th class, let $S_j := B_1 \cup \dots \cup B_j$, and let $S_0 := \emptyset$. We will omit the parameter $\alpha$ whenever it is clear from the context.

Theorem 4 (Forney [forney1965concatenated])

If (3) holds, then there exists $j \in \{0, 1, \dots, J-1\}$, such that

$2\,\mathrm{wt}_{S_j}(y - c) + |S_j| < d.$   (4)
Remark 1

Forney showed that if (3) holds, then there exists $j \in \{0, 1, \dots, J\}$, such that (4) is satisfied. However, (4) cannot hold for $j = J$, since $|S_J| = n \geq d$.

We say that a given $j$ is correct if (4) holds for this $j$. Theorem 4 shows that GMD decoding can be implemented with an error-and-erasure decoder, and Theorem 3 can be used to check if the chosen $j$ was correct.

It is clear that the trials for $j$ and $j'$ coincide if $S_j = S_{j'}$, and hence we immediately get an upper bound on the maximum number of different $j$ we need to try, namely at most $J$ different ones. It is, however, possible to strengthen this bound.

Theorem 5

Let $S \subseteq S' \subseteq [n]$ be such that $|S'| = |S| + 1$, and suppose $d - |S|$ is even. If $2\,\mathrm{wt}_S(y - c) + |S| < d$, then

$2\,\mathrm{wt}_{S'}(y - c) + |S'| < d.$

Proof:

We prove this by contradiction. Let $e = y - c$, and suppose that $2\,\mathrm{wt}_{S'}(e) + |S'| \geq d$. We have $2\,\mathrm{wt}_S(e) + |S| < d$ by assumption, and thus $c$ is the unique codeword with this property by Theorem 2. Furthermore, $\mathrm{wt}_{S'}(e) \leq \mathrm{wt}_S(e)$, and therefore we must have $2\,\mathrm{wt}_S(e) + |S| \geq 2\,\mathrm{wt}_{S'}(e) + |S'| - 1 \geq d - 1$. It follows that $2\,\mathrm{wt}_S(e) + |S| = d - 1$, which contradicts the assumption that $d - |S|$ is even.

We call the act of running the decoder $\mathcal{D}$ with one erasure set and then checking if Theorem 3 holds for the decoded word a trial.

Corollary 1

At most $\lceil d/2 \rceil$ trials are required to decode any received word that satisfies Theorem 3.

This result was also noted by Forney, but he presented no formal proof. We will end this section with a few helpful observations. These observations do not, unfortunately, affect the worst case complexity of the GMD decoder. They do, however, provide a way to eliminate unnecessary trials after we know the received word.

If there are no symbols with reliability weight $\beta_j$, then $S_j = S_{j-1}$, and hence we do not need to run the trial for this value of $j$. Furthermore, by Theorem 5, if $j$ is such that $d - |S_j|$ is even and $|S_{j+1}| = |S_j| + 1$, then this trial can be omitted, since any received word decoded by trial $j$ is also decoded by trial $j+1$. We call $j$ viable if it does not satisfy either of the two previous conditions. The GMD decoder is described formally as Algorithm 1.

1:procedure gmd-decode($y$, $\alpha$)
2:     for $j \leftarrow 0$ to $J - 1$ do
3:         if $j$ is viable or $j = 0$ then
4:             $c \leftarrow \mathcal{D}(y, S_j)$
5:             if $c \neq \mathrm{Failure}$ and (3) is satisfied then
6:                 return $c$
7:             end if
8:         end if
9:     end for
10:     return Failure
11:end procedure
Algorithm 1 GMD decoding for the component codes
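To make Algorithm 1 concrete, the following Python sketch runs the trials against a generic error-and-erasure decoder. It is a minimal illustration under our own naming conventions: ee_decode(y, S) is an assumed helper behaving like the map $\mathcal{D}$, returning the unique codeword satisfying (2) or None, and the trial-skipping of Theorem 5 is omitted for brevity.

def gmd_decode(y, alpha, d, ee_decode):
    # alpha[i] in [0, 1] is the reliability weight of symbol i.
    n = len(y)
    betas = sorted(set(alpha))              # reliability classes, least reliable first
    erasure_sets = [set()]                  # S_0 = empty set
    for b in betas[:-1]:                    # S_J = [n] never helps (Remark 1)
        erasure_sets.append({i for i in range(n) if alpha[i] <= b})
    for S in erasure_sets:                  # one trial per erasure set S_j
        c = ee_decode(y, S)
        if c is None:
            continue
        E = {i for i in range(n) if y[i] != c[i]}
        # Accept iff Forney's criterion (3) holds for the decoded word.
        lhs = sum(1 + alpha[i] for i in E) + \
              sum(1 - alpha[i] for i in range(n) if i not in E)
        if lhs < d:
            return c
    return None                             # decoding failure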

II-B Non-singular by columns matrices

Definition 1

For $A \in \mathbb{F}_q^{s \times \ell}$ and $t \in [s]$, let $A_t$ denote the first $t$ rows of $A$, and for $I \subseteq [\ell]$, $A(I)$ denotes the matrix consisting of the columns of $A$ indexed by $I$. We call $A$ non-singular by columns (NSC) if $A_t(I)$ is non-singular for all $t \in [s]$ and all $I \subseteq [\ell]$ with $|I| = t$.

We call a matrix triangular if it is a column permutation of an upper-triangular matrix.

Proposition 1

If $A \in \mathbb{F}_q^{s \times \ell}$ is NSC, then, for all $t \in [s]$, the linear code generated by $A_t$ is MDS.

Proof:

Recall that a linear code generated by a full-rank $k \times n$ matrix is MDS if and only if any $k$ columns of the generator matrix are linearly independent. Considering $A_t$, we find that, since $A$ is NSC, the matrix $A_t(I)$ is non-singular for all $I \subseteq [\ell]$ with $|I| = t$. This implies that any $t$ columns of $A_t$ are linearly independent, and thus the code generated by $A_t$ is MDS.
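For small matrices, Definition 1 can be checked directly by brute force. The following Python sketch does exactly that over a prime field $\mathbb{F}_q$; it is illustrative only (exponential in the matrix size), the entries are assumed to be reduced modulo $q$, and all function names are our own.

from itertools import combinations

def det_mod(M, q):
    # Determinant over F_q (q prime) via Gaussian elimination.
    M = [row[:] for row in M]
    n, det = len(M), 1
    for i in range(n):
        pivot = next((r for r in range(i, n) if M[r][i] % q), None)
        if pivot is None:
            return 0
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            det = -det
        det = det * M[i][i] % q
        inv = pow(M[i][i], q - 2, q)        # inverse of the pivot
        for r in range(i + 1, n):
            f = M[r][i] * inv % q
            for c in range(i, n):
                M[r][c] = (M[r][c] - f * M[i][c]) % q
    return det % q

def is_nsc(A, q):
    # Test every prefix A_t against every size-t column subset (Definition 1).
    s, l = len(A), len(A[0])
    return all(
        det_mod([[A[r][c] for c in I] for r in range(t)], q) != 0
        for t in range(1, s + 1)
        for I in combinations(range(l), t)
    )

# Example: the (u|u+v) matrix [[1, 1], [0, 1]] is NSC over F_2.
# print(is_nsc([[1, 1], [0, 1]], 2))  # True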

III Concatenated codes

Let $B$ be a code over $\mathbb{F}_{q^m}$ of length $N$ and minimum distance $D$. Let $\mathcal{M}$ be the message set for $B$, and let $\varepsilon_B : \mathcal{M} \to B$ be an encoding function for $B$. The code $B$ is called the outer code. Fix a basis for $\mathbb{F}_{q^m}$ over $\mathbb{F}_q$. Then elements of $\mathbb{F}_{q^m}$ can be represented as vectors of length $m$ over $\mathbb{F}_q$. Thus elements of $B$ can be represented as $N \times m$ matrices over $\mathbb{F}_q$, and we will use this representation throughout this section.

Let $A_i$, $i \in [N]$, be linear $[n, k, d_i]$ codes over $\mathbb{F}_q$, where $k = rm$ for some integer $r$. Furthermore, let $\varepsilon_{A_i} : \mathbb{F}_q^k \to A_i$, $i \in [N]$, be the encoding functions associated with the $A_i$. The $A_i$ are called the inner codes.

Let , be the message we want to encode. The encoded message is

where

Or in other words, to encode with the concatenated code we first encode each of the with the outer code, and use the result to build the matrix . Then we encode each row of with its corresponding inner code. More precisely, the -th row is encoded with .

We denote this concatenated code by $\mathcal{C}(B; A_1, \dots, A_N)$, or simply $\mathcal{C}(B; A)$ in the case of one inner code. It is clear that the codewords are $N \times n$ matrices, and hence the concatenated code has length $nN$. The size of the concatenated code is $|B|^r$. The minimum Hamming distance of the concatenated code satisfies $d(\mathcal{C}(B; A_1, \dots, A_N)) \geq D \min_t d_t$. To see this, suppose that $c \neq c'$. Then $V$ and $V'$ differ in at least $D$ rows, which implies that $\mathrm{wt}(c - c') \geq D \min_t d_t$.

If $B$ is linear with dimension $K$, then the resulting concatenated code is also linear over $\mathbb{F}_q$, and has dimension $kK$, since, in this case, $|B|^r = q^{mKr} = q^{kK}$.

Since $A_i$ is linear, $\varepsilon_{A_i}$ can be written in terms of a generator matrix of $A_i$. Let $G_i$ be a generator matrix of $A_i$. Then $\varepsilon_{A_i}(x) = x G_i$. If all the $A_i$ are the same, then we let $A = A_1$ and we can simply write $\varepsilon_A(x) = xG$. For most practical purposes it seems reasonable to let all the inner codes be the same.
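As an illustration of the encoding map, here is a minimal Python sketch for the case of a single inner code with generator matrix $G$; the list-of-lists representation and the function name are our own scaffolding, under the notation above.

def concat_encode(outer_words, G, q):
    """Encode a concatenated codeword.

    outer_words: list of r matrices, each N x m over F_q (the matrix
                 representations of the outer codewords).
    G:           k x n generator matrix of the inner code, with k = r * m.
    Returns the N x n codeword matrix whose t-th row is V_t * G.
    """
    N = len(outer_words[0])
    # Build V by placing the outer codeword matrices side by side.
    V = [sum((w[t] for w in outer_words), []) for t in range(N)]
    n = len(G[0])
    return [
        [sum(V[t][i] * G[i][j] for i in range(len(G))) % q for j in range(n)]
        for t in range(N)
    ]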

One example of concatenated coding is so-called product codes (also known as iterative codes) introduced by Elias in 1954 [elias1954error]. We refer to [zyablov1999introduction] for more examples.

IV Generalized concatenated codes

We will now consider GCC. In order to ease the presentation, we will restrict ourselves to the case where we only have one inner code. It is straightforward to extend to the case of several inner codes, but the notation quickly gets tedious. In addition, the use of several inner codes adds complexity for no apparent gain, since the main parameters of the generalized concatenated code do not change when using several inner codes.

Let $B_i$, $i \in [s]$, be codes over $\mathbb{F}_{q^{m_i}}$ of length $N$ and minimum distance $D_i$. Let $\mathcal{M}_i$ be the message set for $B_i$, and let $\varepsilon_{B_i} : \mathcal{M}_i \to B_i$ be an encoding function for $B_i$. The $B_i$ are called the outer codes. Elements of $B_i$ will again be represented as $N \times m_i$ matrices over $\mathbb{F}_q$.

Let $A$ be a linear $[n, k]$ code over $\mathbb{F}_q$, where $k = m_1 + \dots + m_s$. Furthermore, let $\varepsilon_A : \mathbb{F}_q^k \to A$ be the encoding function associated with $A$. The code $A$ is called the inner code.

Let $v_i \in \mathcal{M}_i$, $i \in [s]$, be the message we wish to encode. The encoded message is the $N \times n$ matrix $c$ with rows

$c_t = \varepsilon_A(V_t), \quad t \in [N],$

where

$V = [\,\mathrm{mat}(\varepsilon_{B_1}(v_1)) \mid \cdots \mid \mathrm{mat}(\varepsilon_{B_s}(v_s))\,] \in \mathbb{F}_q^{N \times k}.$

This code is called an $s$-th order GCC. The length of the GCC is again $nN$, and the size is $\prod_{i=1}^s |B_i|$. The true minimum distance of the code is not known in most cases, but it can be lower bounded. We denote this GCC by $\mathcal{C}(B_1, \dots, B_s; A)$.

If the outer codes are linear with dimensions $K_i$, then the generalized concatenated code is also linear over $\mathbb{F}_q$, and has dimension $\sum_{i=1}^s m_i K_i$. To see this, simply note that

$\prod_{i=1}^s |B_i| = \prod_{i=1}^s q^{m_i K_i} = q^{\sum_{i=1}^s m_i K_i}.$

We will now derive the lower bound for the minimum distance. First, however, we need to introduce some notation. Since $A$ is linear, $\varepsilon_A$ can be written in terms of a generator matrix of $A$. For this purpose, let $G$ be a generator matrix of $A$, so that $\varepsilon_A(x) = xG$. For $i \in [s]$, let $A_i$ denote the subcode of $A$ generated by the first $\kappa_i := m_1 + \dots + m_i$ rows of $G$. This gives us a collection of nested codes such that

$A_1 \subseteq A_2 \subseteq \dots \subseteq A_s = A.$

The minimum distance of $A_i$ is denoted by $d_i$.

Suppose $c \neq c'$ are two codewords with corresponding matrices $V$, $V'$ and messages $(v_1, \dots, v_s)$, $(v'_1, \dots, v'_s)$. Then $v_i \neq v'_i$ for at least one $i$, and let $i$ be the largest such $i$. The minimum distance of $B_i$ is $D_i$, and hence $\varepsilon_{B_i}(v_i)$ and $\varepsilon_{B_i}(v'_i)$ differ in at least $D_i$ rows of their matrix representations. Therefore $V - V'$ is non-zero in at least $D_i$ rows, and in addition, the last $k - \kappa_i$ columns of $V - V'$ are zero. Thus, every row of $c - c'$ is a codeword of $A_i$, and it follows that $\mathrm{wt}(c - c') \geq D_i d_i$. This holds for every pair of distinct codewords, and hence

$d(\mathcal{C}(B_1, \dots, B_s; A)) \geq \min_{i \in [s]} D_i d_i =: d^*.$

We see that the minimum distance properties of GCC depend on the chosen encoding function for the inner code. Thus the task of designing a good generalized concatenated code requires more than finding good outer and inner codes. We also need to find a generator matrix for the inner code that generates a system of nested codes with good minimum distance properties. This is, according to [zyablov1999introduction], one of the main problems for generalized concatenated systems.
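As a small worked example of the bound (anticipating the matrices of Section V), take $q = 2$ and

$G = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}.$

Then $A_1$ is generated by $(1 \; 1)$, so $d_1 = 2$, while $A_2 = \mathbb{F}_2^2$, so $d_2 = 1$. For outer codes $B_1$ and $B_2$ with minimum distances $D_1$ and $D_2$, the bound reads $d \geq \min\{2 D_1, D_2\}$, which is the familiar minimum distance of the $(u \mid u+v)$ construction.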

V Matrix-product codes

Matrix-product codes were introduced by Blackmore and Norton [blackmore2001matrix]. They form a subclass of the class of GCC.

Definition 2

Let $C_1, \dots, C_s$ be codes over $\mathbb{F}_q$ of length $n$, and let $A \in \mathbb{F}_q^{s \times \ell}$ with $s \leq \ell$. The matrix-product code $[C_1 \cdots C_s] \cdot A$ is the set of all matrix products $[c_1 \cdots c_s] A$, where $c_i \in C_i$ is an $n \times 1$ column vector.

We see that matrix-product codes are GCC where $B_i = C_i$ and $m_i = 1$ for all $i \in [s]$, and where the inner code is the length-$\ell$ code generated by $A$. The codewords in $[C_1 \cdots C_s] \cdot A$ are of the form

$c = \Big( \sum_{i=1}^s a_{i1} c_i \;\Big|\; \sum_{i=1}^s a_{i2} c_i \;\Big|\; \cdots \;\Big|\; \sum_{i=1}^s a_{i\ell} c_i \Big).$

It is clear that $[C_1 \cdots C_s] \cdot A$ has length $\ell n$, and the following theorem was proven in [blackmore2001matrix]. The length, size, and minimum distance bound of the code follow directly from the corresponding properties of GCC. However, if $A$ is NSC and triangular, then the exact minimum distance of the code is known.

Theorem 6 (Blackmore, Norton [blackmore2001matrix])

If $A$ is NSC and $s \leq \ell$, then

  1. $[C_1 \cdots C_s] \cdot A$ has length $\ell n$ and size $|C_1| \cdots |C_s|$;

  2. $d([C_1 \cdots C_s] \cdot A) \geq \min_{i \in [s]} \{ (\ell - i + 1)\, d(C_i) \}$;

  3. if $A$ is also triangular, then $d([C_1 \cdots C_s] \cdot A) = \min_{i \in [s]} \{ (\ell - i + 1)\, d(C_i) \}$.

Since matrix-product codes are also GCC, we know that if the codes $C_1, \dots, C_s$ are linear, then $[C_1 \cdots C_s] \cdot A$ is also linear. In addition, the dimension of $[C_1 \cdots C_s] \cdot A$ is now simply the sum of the dimensions of the $C_i$.

Well known examples of matrix-product codes include the $(u \mid u+v)$ and $(u+v \mid u-v)$ constructions. If we choose

$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \quad\text{or}\quad A = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},$

then we obtain the $(u \mid u+v)$ or the $(u+v \mid u-v)$ construction, respectively.
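The following Python sketch builds a small matrix-product code from the $(u \mid u+v)$ matrix and verifies the distance statement of Theorem 6 by brute force. The component codes are given as explicit lists of codewords over $\mathbb{F}_2$, and all names are illustrative.

from itertools import product

def matrix_product_code(codes, A, q):
    # All codewords [c_1 ... c_s] * A, flattened block by block.
    s, l = len(A), len(A[0])
    words = []
    for cs in product(*codes):                  # one c_i from each C_i
        n = len(cs[0])
        blocks = [
            [sum(A[i][j] * cs[i][t] for i in range(s)) % q for t in range(n)]
            for j in range(l)
        ]
        words.append([x for b in blocks for x in b])
    return words

wt = lambda v: sum(x != 0 for x in v)

# C_1 = [3,2,2] single parity-check code, C_2 = [3,1,3] repetition code.
C1 = [[0,0,0],[1,1,0],[1,0,1],[0,1,1]]
C2 = [[0,0,0],[1,1,1]]
C = matrix_product_code([C1, C2], [[1,1],[0,1]], 2)
d = min(wt(c) for c in C if any(c))
# Theorem 6 predicts d = min(2*2, 1*3) = 3 for this (u|u+v) code.
assert d == 3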

VI Decoding of concatenated codes

We will now show how to decode concatenated codes. For simplicity we will only consider the case where all the inner codes are the same. It is simple to adapt the decoding algorithm to the case of several inner codes.

It is non-trivial to devise a decoding algorithm for concatenated codes that can correct all error patterns of weight less than half the designed minimum distance. The algorithm itself is quite simple though. Consider the concatenated code $\mathcal{C}(B; A)$ with inner minimum distance $d$, and let $R = c + E$ be the received word, where $c$ is the sent codeword and $E$ the error pattern. The first step is to decode every row of $R$ with a minimum distance decoder for the inner code $A$. Denote the result of this operation with $\hat{c}_1, \dots, \hat{c}_N$. For any matrix $X$, let $X_t$ and $X^{(j)}$ denote the $t$-th row and $j$-th column of $X$, respectively. We will consider the message matrix $M$ defined below as a matrix with elements in $\mathbb{F}_{q^m}$, and thus $M$ has $r$ columns.

Define

$w_t = \begin{cases} d - 2\,\mathrm{wt}(R_t - \hat{c}_t), & \text{if the row decoder succeeded for row } t, \\ 0, & \text{otherwise}, \end{cases}$

and, for all $t \in [N]$, assign the $t$-th row the reliability weight $w_t$.

For every row with a non-zero reliability weight, i.e., the rows where the row decoder succeeded, find the message $m_t$ that corresponds to $\hat{c}_t$. Thus $\varepsilon_A(m_t) = \hat{c}_t$. For rows with reliability weight $0$, set $m_t = 0$ (or some other message). Now, for $j \in [r]$, decode the column $M^{(j)}$ of the matrix $M$ with rows $m_1, \dots, m_N$ with a GMD decoder for the outer code, using $w$ as the reliability weight vector. Let $\Phi$ denote this decoding algorithm.
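A sketch of this row-decoding stage in Python: it produces the message guesses and the reliability weight vector from the received matrix. Here md_decode (a minimum distance decoder for the inner code returning None on failure) and msg_of (mapping an inner codeword back to its message) are assumed helpers of our own naming.

def row_decode(R, d, md_decode, msg_of, k):
    """Returns (M, w): message guesses and reliability weights per row."""
    M, w = [], []
    for r in R:
        c = md_decode(r)
        if c is None:
            M.append([0] * k)        # arbitrary message for failed rows
            w.append(0)              # zero reliability
        else:
            dist = sum(a != b for a, b in zip(r, c))
            M.append(msg_of(c))
            w.append(d - 2 * dist)   # w_t = d - 2 wt(R_t - c_t)
    return M, w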

Theorem 7

The decoder $\Phi$ can correct all error patterns $E$ that satisfy

$\sum_{t=1}^{N} \min\{\mathrm{wt}(E_t),\, d\} < \frac{dD}{2}.$   (5)
Proof:

We only need to prove that the generalized minimum distance criterion (3) holds for every column of $M$ after the row decoding. Let $T$ denote the set of all rows that were correctly decoded, set $\alpha_t = w_t / d \in [0, 1]$, and let $\gamma_t$ denote the contribution of row $t$ to the left-hand side of (3). Suppose $t \in T$. Then

$\gamma_t = 1 - \alpha_t = \frac{2\,\mathrm{wt}(E_t)}{d} = \frac{2 \min\{\mathrm{wt}(E_t), d\}}{d}.$

If $t \notin T$, and we do not have a decoding failure for that row, then $\mathrm{wt}(R_t - \hat{c}_t) \geq d - \mathrm{wt}(E_t)$, and hence

$\gamma_t \leq 1 + \alpha_t = \frac{2d - 2\,\mathrm{wt}(R_t - \hat{c}_t)}{d} \leq \frac{2\,\mathrm{wt}(E_t)}{d}, \quad\text{while also}\quad \gamma_t \leq 2 = \frac{2d}{d}.$

On the other hand, if we have a decoding failure for the $t$-th row, then $2\,\mathrm{wt}(E_t) \geq d$, and hence

$\gamma_t = 1 \leq \frac{2 \min\{\mathrm{wt}(E_t), d\}}{d}.$

Now, assume that (5) holds. Then,

$\sum_{t=1}^{N} \gamma_t \leq \frac{2}{d} \sum_{t=1}^{N} \min\{\mathrm{wt}(E_t), d\} < D,$

which shows that (3) holds for all columns of $M$.

Corollary 2

The decoder $\Phi$ can correct all error patterns that satisfy $2\,\mathrm{wt}(E) < dD$.

We have already seen that the decoder can correct the maximum number of random errors, i.e., all error patterns of weight less than $dD/2$. It can also correct many bursty error patterns of much higher weight. We call a row bursty if it has at least $\lceil d/2 \rceil$ errors.

Corollary 3

Let $\lambda$ denote the number of bursty rows, and $\mu$ the remaining number of errors. The decoder $\Phi$ can correct any error pattern such that $2(\lambda d + \mu) < dD$.

Proof:

If the $t$-th row is not bursty, then $\min\{\mathrm{wt}(E_t), d\} = \mathrm{wt}(E_t)$. On the other hand, if the $t$-th row is bursty, then $\min\{\mathrm{wt}(E_t), d\} \leq d$, and hence the result follows from Theorem 7.

We have at most $\lfloor d/2 \rfloor + 2$ reliability classes. Thus, by Corollary 1, each column of $M$ can be recovered in at most $\lceil D/2 \rceil$ trials. However, due to the design of the reliability weights, we can lower this bound slightly. More precisely, the GMD decoder can always consider any row with zero reliability weight as an erasure.

For $j \in \{0, 1, \dots, J\}$, let $S_j$ be defined as in Section II-A, where the reliability classes are given by the values of the weights $w_t$, ordered from the smallest to the largest. Furthermore, let $V$ denote the matrix built from the sent outer codewords. From the proof of Theorem 7 and Theorem 4 we know that, for every $i \in [r]$, there exists a $j$ such that

$2\,\mathrm{wt}_{S_j}(M^{(i)} - V^{(i)}) + |S_j| < D.$   (6)

Now, suppose that (6) holds for $j = 0$. Then, since a row with zero reliability weight contributes exactly $1$ to the left-hand side of (3) whether it is erased or not, it follows that (6) also holds for some $j \geq 1$. Therefore, we can skip the trial where the erasure set is empty. This means that we only need at most

$\left\lceil \frac{D-1}{2} \right\rceil$

trials to decode any column.

VI-A Improving the algorithm

The obvious way to improve the algorithm is to limit the number of trials that the GMD decoder has to run. As can be seen in the proof of Theorem 7, there is one $j$ such that the GMD decoder will decode any of the columns $M^{(1)}, \dots, M^{(r)}$ correctly with the erasure set $S_j$. This can be leveraged to reduce the number of times the decoder for the outer code has to be run.

Instead of having the GMD decoder start from $j = 0$ for every column, we can do the following: Start from $j = 0$ when decoding $M^{(1)}$, but when decoding $M^{(i)}$, $i > 1$, start from the $j$ that was used to successfully decode $M^{(i-1)}$. This way the upper bound for the total number of times we decode with the outer code is reduced from $r \lceil D/2 \rceil$ to $r + \lceil D/2 \rceil - 1$.

We presented this improvement in the context of product codes in [blomqvist2020], but, to the best of our knowledge, it has not been considered for concatenated codes before. The only downside of this modification is that the inherent parallelism of the algorithm is lost.
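A sketch of the modification in Python; gmd_decode_from(column, w, j0) is an assumed helper that behaves like Algorithm 1 but starts its trials from index j0 and reports the index of the successful trial.

def decode_columns(columns, w, gmd_decode_from):
    # Carry the successful trial index over to the next column instead of
    # restarting from j = 0, as described above.
    results, j = [], 0
    for col in columns:
        c, j = gmd_decode_from(col, w, j)   # start from the previous j
        results.append(c)
    return results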

VII Decoding of generalized concatenated codes

An $s$-th order GCC can be decoded by applying the decoding algorithm for concatenated codes $s$ times.

Consider the GCC $\mathcal{C}(B_1, \dots, B_s; A)$, and let $R = c + E$ be the received word. Furthermore, let $G$ be a generator matrix for $A$, and let $G_i$ denote its first $\kappa_i$ rows, so that $G_i$ is a generator matrix for $A_i$. We can use the decoding algorithm for the concatenated code $\mathcal{C}(B_s; A)$ to recover $v_s$. Then we can cancel out the contribution of $v_s$ to $R$, by letting

$R \leftarrow R - [\,0 \mid \mathrm{mat}(\varepsilon_{B_s}(v_s))\,]\, G.$

Hence, $R = c' + E$, where $c'$ is a codeword in $\mathcal{C}(B_1, \dots, B_{s-1}; A_{s-1})$. Thus, the remaining messages can be decoded with decoders for the concatenated codes $\mathcal{C}(B_i; A_i)$, $i = s-1, \dots, 1$. The decoding algorithm is described more formally as Algorithm 2, and $G_i^{-1}$ denotes a right inverse of $G_i$ in the algorithm description.

1:procedure gccdecode($R$, $i$)
2:     $(\hat{c}_1, \dots, \hat{c}_N) \leftarrow$ rowdecode($R$, $A_i$)
3:     Compute $M$ (with rows $m_t = \hat{c}_t G_i^{-1}$) and $w$, as in Section VI
4:     $\hat{v}_i \leftarrow$ gmd-decode($M^{(i)}$, $w$), where $M^{(i)}$ is the level-$i$ part of $M$
5:     $R \leftarrow R - [\,0 \mid \mathrm{mat}(\varepsilon_{B_i}(\hat{v}_i))\,]\, G_i$
6:     if $i > 1$ then
7:         $(\hat{v}_1, \dots, \hat{v}_{i-1}) \leftarrow$ gccdecode($R$, $i - 1$)
8:         return $(\hat{v}_1, \dots, \hat{v}_{i-1}, \hat{v}_i)$
9:     else
10:         return $\hat{v}_1$
11:     end if
12:end procedure
13:
14:procedure rowdecode($R$, $A_i$)
15:     Decode every row of $R$ with the code $A_i$.
16:end procedure
Algorithm 2 Decoding of generalized concatenated codes
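Algorithm 2 needs a right inverse of $G_i$ to map decoded rows back to messages. Over a prime field this can be computed once per level with Gaussian elimination. The following Python sketch (our own, unoptimized, assuming a prime modulus and a full-row-rank generator matrix) solves for $H$ with $G H = I$.

def right_inverse(G, q):
    """Return H with G * H = I over F_q (q prime), G of full row rank."""
    k, n = len(G), len(G[0])
    # Row reduce [G | I_k]; the right block records the row operations T,
    # so that T * G is in reduced row echelon form.
    M = [list(G[i]) + [int(i == j) for j in range(k)] for i in range(k)]
    pivots, r = [], 0
    for c in range(n):
        p = next((i for i in range(r, k) if M[i][c] % q), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        inv = pow(M[r][c] % q, q - 2, q)
        M[r] = [x * inv % q for x in M[r]]
        for i in range(k):
            if i != r and M[i][c] % q:
                f = M[i][c] % q
                M[i] = [(a - f * b) % q for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
        if r == k:
            break
    # T * G has an identity in the pivot columns, so placing the rows of T
    # at the pivot positions of an n x k zero matrix yields G * H = I.
    T = [row[n:] for row in M]
    H = [[0] * k for _ in range(n)]
    for j, c in enumerate(pivots):
        H[c] = T[j]
    return H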

Let $\Psi$ denote the decoding algorithm outlined as Algorithm 2, invoked as gccdecode($R$, $s$). We have the following results, which are analogous to Theorem 7 and Corollary 2.

Theorem 8

The decoder $\Psi$ can correct all error patterns that satisfy

$\sum_{t=1}^{N} \min\{\mathrm{wt}(E_t),\, d_i\} < \frac{d^*}{2} \quad \text{for all } i \in [s],$   (7)

where $d^* = \min_{i \in [s]} D_i d_i$ is the designed minimum distance of the code.

Proof:

Recall that $d^* \leq D_i d_i$, and hence

$\sum_{t=1}^{N} \min\{\mathrm{wt}(E_t), d_i\} < \frac{d^*}{2} \leq \frac{D_i d_i}{2}$

for all $i \in [s]$. Thus, by Theorem 7, $v_i$ will be correctly decoded with the decoder for $\mathcal{C}(B_i; A_i)$, for every $i \in [s]$.

Corollary 4

The decoder $\Psi$ can correct all error patterns that satisfy $2\,\mathrm{wt}(E) < d^*$, where $d^*$ is the designed minimum distance of the code.

In the context of GCC we call a row bursty if it has at least $\lceil d_1/2 \rceil$ errors.

Corollary 5

Let $\lambda$ denote the number of bursty rows, and $\mu$ the remaining number of errors. The decoder $\Psi$ can decode any error pattern such that $2(\lambda d_1 + \mu) < d^*$.

Proof:

The proof is analogous to the proof of Corollary 3.

The complexity of this algorithm is easy to describe. The decoder for the inner code is run $sN$ times, and the decoders for the outer codes are run at most $\sum_{i=1}^{s} \lceil D_i/2 \rceil$ times in total.

The algorithm can be improved by leveraging the fact that the $A_i$ are nested, and that the error pattern stays the same during all rounds of decoding. We call the process of recovering $v_i$ with the decoder for $\mathcal{C}(B_i; A_i)$ the $i$-th round of decoding, so that the rounds are performed in the order $s, s-1, \dots, 1$. We have chosen this slightly non-intuitive convention since it makes the notation easier. During the $(i+1)$-th round of decoding we decode all the rows with $A_{i+1}$, and during the next round we decode the rows again with $A_i$. However, the error vector for each row remains the same, and combining this with the fact that $A_i \subseteq A_{i+1}$ allows us to omit the decoding of certain rows during round $i$ (and possibly during subsequent rounds). These ideas were first explored by Bossert [bossert1988decoding].

Consider the $i$-th round of decoding ($i < s$). Let $R^{(i)}$ and $R^{(i+1)}$ denote the input to the $i$-th and $(i+1)$-th round of decoding, respectively. Let $w^{(i+1)}_t$ denote the reliability weight of the $t$-th row after the decoding with $A_{i+1}$, and let $\hat{c}^{(i+1)}_t$ denote the guess for the $t$-th row produced during the $(i+1)$-th round. Furthermore, let $\hat{V}^{(i+1)} := [\,0 \mid \mathrm{mat}(\varepsilon_{B_{i+1}}(\hat{v}_{i+1}))\,]\, G_{i+1}$ denote the cancelled contribution, so that $R^{(i)} = R^{(i+1)} - \hat{V}^{(i+1)}$, and define

$\tilde{c}^{(i)}_t := \hat{c}^{(i+1)}_t - \hat{V}^{(i+1)}_t.$

We have the following observations.

Lemma 1

If the level-$(i+1)$ component of the decoded message of row $t$ agrees with the codeword recovered by the GMD decoder, then $\tilde{c}^{(i)}_t \in A_i$ and $2\,\mathrm{wt}(R^{(i)}_t - \tilde{c}^{(i)}_t) < d_{i+1}$.

Proof:

Let $G_{i+1}$ denote the first $\kappa_{i+1}$ rows of $G$. We know that $\hat{c}^{(i+1)}_t = m_t G_{i+1}$, and that, by assumption, the level-$(i+1)$ component of $m_t$ is cancelled exactly by $\hat{V}^{(i+1)}_t$. Thus,

$\tilde{c}^{(i)}_t = \hat{c}^{(i+1)}_t - \hat{V}^{(i+1)}_t \in A_i,$

and it follows that

$\mathrm{wt}(R^{(i)}_t - \tilde{c}^{(i)}_t) = \mathrm{wt}(R^{(i+1)}_t - \hat{c}^{(i+1)}_t).$

We have $2\,\mathrm{wt}(R^{(i+1)}_t - \hat{c}^{(i+1)}_t) < d_{i+1}$ by assumption, since the row decoder is a minimum distance decoder, and therefore $2\,\mathrm{wt}(R^{(i)}_t - \tilde{c}^{(i)}_t) < d_{i+1}$.

The consequences of Lemma 1 are profound; if the level-$(i+1)$ guess of the $t$-th row agrees with the result of the GMD decoder, then the $t$-th row does not need to be decoded during the next round. To see this, note that $\tilde{c}^{(i)}_t$ is the unique codeword of $A_i$ within half the minimum distance of $R^{(i)}_t$, since $d_{i+1} \leq d_i$.

On the other hand, if the level-$(i+1)$ guess of the $t$-th row disagrees with the result of the GMD decoder, then the $t$-th row was incorrectly decoded during the $(i+1)$-th round, and hence

$\mathrm{wt}(E_t) \geq d_{i+1} - \mathrm{wt}(R^{(i+1)}_t - \hat{c}^{(i+1)}_t),$

and hence it is unnecessary to decode the row this round unless

$d_{i+1} - \mathrm{wt}(R^{(i+1)}_t - \hat{c}^{(i+1)}_t) \leq \left\lfloor \frac{d_i - 1}{2} \right\rfloor.$   (8)

Finally, if $w^{(i+1)}_t = 0$, then the row could not be decoded during the previous round. Thus, if the error correction capacity of $A_i$ is not larger than that of $A_{i+1}$, then we can omit the decoding of this row during this round and instead directly set $w^{(i)}_t = 0$. This is the case if $d_i = d_{i+1}$ or, alternatively, if $d_i = d_{i+1} + 1$ and $d_{i+1}$ is odd.
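The three cases translate directly into a small predicate. In the Python sketch below, t_cap(d) = ⌊(d−1)/2⌋ is the error correction capacity, and the argument names (our own) mirror the quantities defined above.

def t_cap(d):
    return (d - 1) // 2

def needs_row_decoding(level_guess_correct, failed, dist_prev, d_i, d_next):
    """Decide whether a row must be decoded during round i.

    level_guess_correct: the level-(i+1) guess agreed with the GMD result.
    failed:              the row decoder failed during round i+1.
    dist_prev:           wt(R_t - c_t) from round i+1 (if it succeeded).
    d_i, d_next:         minimum distances of A_i and A_{i+1}.
    """
    if level_guess_correct:
        return False                        # Lemma 1: result can be reused
    if failed:
        return t_cap(d_i) > t_cap(d_next)   # only if the capacity grew
    # Incorrectly decoded row: decode only if condition (8) holds.
    return d_next - dist_prev <= t_cap(d_i)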

These observations suggest the following algorithm for every round except the first, i.e. the round with index $s$. Again consider the $i$-th round of decoding, and let $T_w$ denote the set of rows whose level-$(i+1)$ guess disagreed with the result of the GMD decoder, and $T_f$ the set of rows with $w^{(i+1)}_t = 0$. Every row not in $T_w \cup T_f$ is not decoded with the inner code $A_i$, and the rows in $T_w \cup T_f$ are only decoded if needed. More precisely, any row in $T_w$ is decoded only if (8) is satisfied, while the rows in $T_f$ are decoded only if the error correction capacity of $A_i$ is larger than that of $A_{i+1}$. This modified algorithm is presented as Algorithm 3.

1:procedure gccdecode1($R$, $i$, $T_w$, $T_f$)
2:     $(\hat{c}_1, \dots, \hat{c}_N) \leftarrow$ rowdecode1($R$, $A_i$, $T_w$, $T_f$)
3:     Compute $M$ and $w$, as in Section VI
4:     $\hat{v}_i \leftarrow$ gmd-decode($M^{(i)}$, $w$)
5:     $R \leftarrow R - [\,0 \mid \mathrm{mat}(\varepsilon_{B_i}(\hat{v}_i))\,]\, G_i$
6:     if $i > 1$ then
7:         $T_w \leftarrow \{ t :$ the level-$i$ guess of row $t$ disagrees with $\mathrm{mat}(\varepsilon_{B_i}(\hat{v}_i))_t \}$
8:         $T_f \leftarrow \{ t : w_t = 0 \}$
9:         $(\hat{v}_1, \dots, \hat{v}_{i-1}) \leftarrow$ gccdecode1($R$, $i - 1$, $T_w$, $T_f$)
10:         return $(\hat{v}_1, \dots, \hat{v}_{i-1}, \hat{v}_i)$
11:     else
12:         return $\hat{v}_1$
13:     end if
14:end procedure
15:
16:procedure rowdecode1($R$, $A_i$, $T_w$, $T_f$)
17:     Set $\hat{c}_t \leftarrow \tilde{c}^{(i)}_t$ for every row $t \notin T_w \cup T_f$.
18:     if $\lfloor (d_i - 1)/2 \rfloor > \lfloor (d_{i+1} - 1)/2 \rfloor$ then
19:         for $t \in T_f$ do
20:             Decode $R_t$ with a decoder for $A_i$.
21:         end for
22:     end if
23:     for $t \in T_w$ do
24:         if (8) is satisfied then
25:             Decode $R_t$ with a decoder for $A_i$.
26:         end if
27:     end for
28:end procedure
Algorithm 3 Improved decoding of generalized concatenated codes

This modification to the algorithm significantly lowers the number of times the rows have to be decoded. During the first round we have to decode all the rows. During the $i$-th round, $i < s$, every row that needs to be decoded satisfies $2\,\mathrm{wt}(E_t) \geq d_{i+1}$, so if (7) holds, fewer than $d^*/d_{i+1}$ rows need to be decoded. Therefore, the decoder for $A_i$, $i < s$, has to be run at most $d^*/d_{i+1}$ times. The total number of row decoder invocations is thus at most

$N + \sum_{i=1}^{s-1} \frac{d^*}{d_{i+1}}.$

It is possible to make another small improvement to the algorithm. Let $m^{(i)}_t$ and $w^{(i)}_t$ denote the variables that correspond to $m_t$ and $w_t$ during round $i$, respectively. Whenever $t \in T_w$ and (8) is not satisfied, then, in Algorithm 3, we do not decode the row this round because we already know that it cannot be decoded correctly. On the other hand, we also know that the reused guess is a wrong codeword, and hence – in order to ease the task of the GMD decoder – we could directly set $w^{(i)}_t = 0$. We should, however, not place the row in $T_f$ for the next round. Instead, we should treat the row as a row without decoding failure, which means that the row is placed in $T_w$ if and only if its level-$i$ guess disagrees with the result of the GMD decoder.

This improvement does not lower the worst case complexity of the algorithm, but it should lower the number of trials the GMD decoder has to run during the $i$-th round.

VIII Decoding of matrix-product codes

Recall that matrix-product codes are GCC, and thus they can be decoded with the same algorithm. A matrix-product code is constructed by specifying the generator matrix $A$ of the inner code. For the rest of this section, suppose $A$ is NSC.

Since $A$ is NSC, it follows that $A_i$ is MDS for all $i \in [s]$, and hence $d_i = \ell - i + 1$. We can use this additional structure to lower the upper bound on the number of times the row decoders need to be run.

Since $d_i = d_{i+1} + 1$ for all $i < s$, it follows that the error correction capacity of the row code increases every other round; rows only need to be decoded during the rounds $i < s$ for which $d_{i+1}$ is even. Suppose that $\ell - s$ is odd, so that $d_s$ is even. Then we only need to decode rows during $\lceil (s-1)/2 \rceil$ of the remaining rounds. Furthermore, the total number of row decoder invocations is at most

$N + \sum_{j=0}^{\lceil (s-1)/2 \rceil - 1} \frac{d^*}{d_s + 2j},$

where $d^* = \min_{i \in [s]} (\ell - i + 1)\, d(C_i)$. On the other hand, if $\ell - s$ is even, then $d_s$ is odd, and we only need to decode rows during $\lfloor (s-1)/2 \rfloor$ of the remaining rounds. Furthermore, the total number of row decoder invocations is at most

$N + \sum_{j=0}^{\lfloor (s-1)/2 \rfloor - 1} \frac{d^*}{d_s + 1 + 2j}.$

IX Decoding examples

We will show how to apply the decoding algorithm in practice by breaking it down for two well known matrix-product codes. The examples are quite simple, but we think they illustrate the algorithm without undue complications. Throughout this section, let $\mathcal{D}_i$ be an error-and-erasure decoder for the code $C_i$, and let $A_i$ denote the first $i$ rows of $A$.

IX-A Decoding of the $(u \mid u+v)$ construction

Recall that choosing

$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$

gives us the $(u \mid u+v)$ construction. Let $d^* = \min\{2\,d(C_1), d(C_2)\}$ denote the designed minimum distance of $[C_1, C_2] \cdot A$. The naive decoding algorithm for this code is presented in [hernando2013decoding], and, using the same notation as in Sections VI and VII, is as follows, where $y = (y_1 \mid y_2)$ is the received word:

  1. Decode $y_2 - y_1$ with the decoder for $C_2$ and denote the result by $\hat{v}$.

  2. Decode $y_1$ with the decoder for $C_1$ and denote the result by $\hat{u}$. If

    $2\,\mathrm{wt}\big(y - (\hat{u} \mid \hat{u} + \hat{v})\big) < d^*,$

    then $(\hat{u} \mid \hat{u} + \hat{v})$ is correct so return $(\hat{u} \mid \hat{u} + \hat{v})$. Otherwise go to step 3.

  3. Decode $y_2 - \hat{v}$ with the decoder for $C_1$ and denote the result by $\hat{u}$. Return $(\hat{u} \mid \hat{u} + \hat{v})$.

This works for any $C_1$ and $C_2$, even if $C_2 \subseteq C_1$ does not hold. The latter requirement is given in [hernando2013decoding], but it is not necessary. The worst case complexity of this algorithm is easy to find. The decoder for $C_2$ is run once, and the decoder for $C_1$ is run at most twice.

Now consider the algorithm for GCC. $A_2 = A$ is of full rank, and thus the inner code of level $2$ cannot correct or detect any errors. Therefore, we can skip the row decoding during round $2$ of decoding. It follows that $\hat{v}$ is recovered exactly as in the naive algorithm.

$A_1$ generates a repetition code of length $2$, and can thus detect one error. This means that we only have two reliability classes during round $1$ of decoding. These correspond to the reliability weights $2$ and $0$, respectively. Thus the GMD decoder only needs to consider one erasure set, namely the set of rows with reliability weight $0$, when decoding with $C_1$. If we choose to erase these rows directly, then the algorithm simplifies to

  1. Let $z = y_2 - y_1$, and decode $z$ with the decoder for $C_2$. Denote the result by $\hat{v}$, and let $e$ be such that $\hat{v} + e = z$.

  2. Consider the $t$-th row. Set $m_t = y_{1,t}$ if $e_t = 0$, and erase the $t$-th position otherwise.

  3. Let $S = \mathrm{supp}(e)$, compute $\hat{u} = \mathcal{D}_1(m, S)$, and return $(\hat{u} \mid \hat{u} + \hat{v})$.
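To close the example, here is a compact, runnable rendition of the simplified decoder over $\mathbb{F}_2$, with $C_1$ the $[3,2,2]$ single parity-check code and $C_2$ the $[3,1,3]$ repetition code, so that $d^* = \min\{2 \cdot 2, 3\} = 3$ and every single error is corrected. The concrete component decoders (majority vote for $C_2$, parity-based erasure filling for $C_1$) and all names are our own illustrative choices, not taken from the paper.

def decode_uuv(y, n=3):
    """Decode the (u|u+v) code built from C1 = [3,2,2] parity, C2 = [3,1,3] rep."""
    y1, y2 = y[:n], y[n:]
    # Step 1: z = y2 - y1; decode z with C2 (majority vote); e = z - v_hat.
    z = [a ^ b for a, b in zip(y1, y2)]
    v = [1] * n if 2 * sum(z) > n else [0] * n
    e = [zi ^ vi for zi, vi in zip(z, v)]
    # Step 2: rows with e_t != 0 are erased; elsewhere m_t = y1_t.
    S = {t for t in range(n) if e[t]}
    # Step 3: error-and-erasure decode m with C1 (single parity check):
    # a single erasure is filled so that the parity of the word is even.
    if len(S) > 1:
        return None                         # decoding failure
    m = list(y1)
    if len(S) == 1:
        t = next(iter(S))
        m[t] = 0
        m[t] = sum(m) % 2                   # fix the parity
    elif sum(m) % 2:
        return None                         # no erasures but odd parity
    u = m
    return u + [a ^ b for a, b in zip(u, v)]

# Sanity check: every single error is corrected, matching Corollary 4.
c = [1, 1, 0] + [0, 0, 1]                   # u = (1,1,0), v = (1,1,1)
for t in range(6):
    y = list(c)
    y[t] ^= 1
    assert decode_uuv(y) == c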