On Hard-Decision Decoding of Product Codes

January 14, 2020, by Ferdinand Blomqvist, et al.

In this paper we review existing hard-decision decoding algorithms for product codes along with different post-processing techniques used in conjunction with the iterative decoder for product codes. We improve the decoder by Reddy and Robinson and use it to create a new post-processing technique. The performance of this new post-processing technique is evaluated through simulations, and these suggest that it outperforms previously known post-processing techniques that are not tailored to specific codes. The cost of using the new post-processing technique is a more complex algorithm. However, the post-processing is applied very rarely unless the channel is very noisy, and hence the increase in computational complexity is negligible for most choices of parameters. Finally, we propose a new algorithm that combines existing techniques in a way that avoids the error floor with short, relatively high-rate codes. The algorithm should also avoid the error floor with long high-rate codes, but further work is needed to confirm this.


I Introduction

Product codes form a class of concatenated codes and they were introduced in 1954 by Elias [1]. Hard-decision decoding of product codes is usually carried out with a so-called iterative decoder, which is efficient and can correct most error patterns up to half the minimum distance, and many error patterns beyond half the minimum distance. The iterative decoder was hinted at by Elias in [1], but it was first properly described in [2].

The performance of the iterative decoder at low frame error rates (FER) is limited by the occurrence of error patterns of weight less than half the minimum distance that the decoder cannot correct. These error patterns are usually called stall patterns. The performance of the iterative decoder can be improved with the use of so-called post-processing techniques, which essentially deal with stall patterns as they are encountered. Different post-processing techniques are considered in [3, 4, 5, 6]. Some of the techniques work for any product code while others are limited to certain codes.

There are decoding algorithms for product codes that can correct all errors up to half the minimum distance [7, 8], but they cannot correct as many error patterns of weight at least half the minimum distance as the iterative decoder mentioned above. Wainberg [9] extended the decoder from [7] to an error-and-erasure decoder.

Decoding algorithms for concatenated codes are presented in [10, 11, 12], and these can be applied to product codes. These algorithms can also correct all error patterns of weight less than half the minimum distance. The novelty of these algorithms is that they work for any concatenated code. However, when applied to product codes they essentially reduce to the algorithm by Reddy and Robinson [7]. (Footnote: references [10, 11, 12] are all in Russian and we have not been able to find translations; our perception of the content in these references is entirely based upon secondhand sources such as [3, 13, 14].)

The main idea of all these maximum-error-correcting algorithms is the same, and it can be traced back to Forney's generalized minimum distance decoding [15].

Soft decision decoding of product codes is considered in [16] and [17].

In this paper we review existing post-processing techniques, present an improvement to the decoding algorithm proposed in [7], and use this improved algorithm as part of a new post-processing technique. We compare the new post-processing technique to other known techniques and evaluate the performance of these different techniques with simulations. We also propose a new algorithm that is targeted towards high rate product codes. This algorithm is a simple combination of existing decoders, and heuristic arguments imply that it should lower the error floor considerably at low enough FER/BER.

The paper is organized as follows. Section II establishes the notation and presents the necessary preliminaries. In Section III we improve the decoder presented in [7], Section IV is devoted to our new post-processing technique, and in Section V we present the simulation results. Section VI briefly deals with the challenges of avoiding the error floor at very low FER/BER.

II Preliminaries and notation

For $n \in \mathbb{Z}_{>0}$, define $[n] = \{1, \dots, n\}$, and let $\mathbb{F}_q$ denote the finite field with $q$ elements. For $x \in \mathbb{F}_q^n$, $\mathrm{wt}(x)$ denotes the Hamming weight of $x$. Given a vector $x$, let $x_i$ denote the $i$-th coordinate of $x$. For two vectors $x, y \in \mathbb{R}^n$, let $\langle x, y \rangle$ denote the scalar product of $x$ and $y$. Let $\mathcal{P}(S)$ denote the power set of a set $S$.

Definition 1

Let $C_1$ and $C_2$ be linear codes over $\mathbb{F}_q$ with parameters $[n_1, k_1, d_1]$ and $[n_2, k_2, d_2]$, and generator matrices $G_1$ and $G_2$, respectively. The product code $C_1 \otimes C_2$ is the image of the map

$$ \mathbb{F}_q^{k_1 \times k_2} \to \mathbb{F}_q^{n_1 \times n_2}, \qquad M \mapsto G_1^T M\, G_2. $$

The following result is well known (see, for instance, [18, pp. 566-567]).

Proposition 1

$C_1 \otimes C_2$ is an $[n_1 n_2,\ k_1 k_2,\ d_1 d_2]$ code.

Remark 1

Product codes are usually defined assuming that the generator matrices of the component codes are in systematic form. Although this assumption is quite common, we do not need to make it here.

II-A Generalized minimum distance decoding

Generalized minimum distance (GMD) decoding was introduced in 1965 by Forney [15]. The idea of GMD decoding is to leverage symbol reliability information, which could be provided, e.g., by the demodulator, when decoding the received word.

For the remaining part of this section, let $C \subseteq \mathbb{F}_q^n$ be a code with minimum distance $d$, and let $y \in \mathbb{F}_q^n$ be the received word. Furthermore, let $\mathrm{supp}(x)$ denote the support of $x$, and for $E \subseteq [n]$, define $\mathrm{wt}_E(x) = |\mathrm{supp}(x) \setminus E|$. Thus $\mathrm{wt}_E(x)$ is the Hamming weight of $x$ punctured in the coordinates of $E$. Define

$$ \beta_c = (\beta_{c,1}, \dots, \beta_{c,n}) \in \{-1, 1\}^n, \qquad \beta_{c,i} = \begin{cases} 1 & \text{if } y_i = c_i,\\ -1 & \text{otherwise}, \end{cases} $$

for $c \in C$, and $\mathrm{wt}(x) = \mathrm{wt}_\emptyset(x)$. Furthermore, let $\alpha \in [0, 1]^n$ be a so-called reliability weight vector. This means that $\alpha_i$ is the reliability of $y_i$. Note that a smaller reliability weight means that the symbol is considered less reliable. We then have the following theorem.

Theorem 1 (Forney [15])

Given $y \in \mathbb{F}_q^n$ and $\alpha \in [0, 1]^n$, there is at most one codeword $c \in C$ such that

$$ \langle \alpha, \beta_c \rangle > n - d. \qquad (1) $$

Suppose we have $J$ reliability classes with corresponding reliability weights $a_1 < a_2 < \cdots < a_J$, $a_j \in [0, 1]$, and $\alpha_i \in \{a_1, \dots, a_J\}$ for all $i \in [n]$. Each symbol is put into one of these reliability classes. Let $E_0(y) = \emptyset$ and $E_j(y) = \{ i \in [n] : \alpha_i \le a_j \}$, $j \in [J]$. We will omit the $y$ when there is no risk of confusion. Given an error-and-erasure decoder for $C$, it will decode $y$ (using $E_j$ as the erasures) correctly if $2\,\mathrm{wt}_{E_j}(c - y) + |E_j| < d$, where $c$ is the transmitted codeword.

Theorem 2 (Forney [15])

If $\langle \alpha, \beta_c \rangle > n - d$, then there exists $j \in \{0, 1, \dots, J\}$ such that $2\,\mathrm{wt}_{E_j}(c - y) + |E_j| < d$.

Theorem 2 shows that GMD decoding can be implemented with an error-and-erasure decoder, and Theorem 1 can be used to check if the chosen erasure set was correct.

By a trial, we denote the act of running the decoder for $C$ with one erasure set $E_j$ and then checking if Theorem 1 holds for the decoded word. Forney noted the following:

Corollary 1 (Forney)

At most $\lceil d/2 \rceil$ trials are required to decode any received word that satisfies (1).

Furthermore, since $E_J = [n]$ for every received word, a trial with the erasure set $E_J$ can never succeed, and hence we do not need to try to decode with the erasure set $E_J$. Therefore, at most $\min\{J, \lceil d/2 \rceil\}$ trials are required to decode any received word that satisfies (1).

The reasoning behind Corollary 1 is that, if we erase unreliable symbols one by one (instead of erasing all symbols in one class simultaneously), then the error correction capability will increase by one for every 2 erasures. This does, however, not directly give us a simple way to skip unnecessary trials once we know the received word. To this end, we finish this section with a theorem that does exactly that. In addition, it gives us an alternative proof of Corollary 1.

Recall that an error-and-erasure decoder that decodes up to the minimum (Elias) distance can be described by a map

$$ \phi : \mathbb{F}_q^n \times \mathcal{P}([n]) \to C \cup \{\mathrm{fail}\}, $$

where $(y, E)$ is mapped to the (unique) closest codeword, or to $\mathrm{fail}$ if a decoding failure occurs. Hence $\phi(y, E) = c$ if and only if $2\,\mathrm{wt}_E(c - y) + |E| < d$.
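The trial-based procedure implied by Theorems 1 and 2 is easy to state in code. The following is a minimal Python sketch, assuming an external bounded-distance error-and-erasure decoder `decode_ee(y, erasures)` for $C$ that returns a codeword or `None`; this helper, and all names below, are illustrative and not from the paper.

```python
def gmd_criterion(y, c, alpha, d):
    """Check Forney's condition (1): <alpha, beta_c> > n - d."""
    n = len(y)
    dot = sum(a if yi == ci else -a for a, yi, ci in zip(alpha, y, c))
    return dot > n - d

def gmd_decode(y, alpha, d, decode_ee):
    """Trial-based GMD decoding: erase the least reliable classes one
    class at a time (Theorem 2) and accept the first trial output that
    satisfies criterion (1) (Theorem 1)."""
    classes = sorted(set(alpha))        # class weights a_1 < ... < a_J
    # Erasure sets E_0 = {} and E_j = {i : alpha_i <= a_j}; E_J = [n]
    # is never useful, so the largest class is skipped.
    erasure_sets = [frozenset()] + [
        frozenset(i for i, a in enumerate(alpha) if a <= aj)
        for aj in classes[:-1]
    ]
    for E in erasure_sets:              # one trial per erasure set
        c = decode_ee(y, E)
        if c is not None and gmd_criterion(y, c, alpha, d):
            return c
    return None                         # decoding failure
```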

Theorem 3

Let $y \in \mathbb{F}_q^n$, $E' \subseteq E \subseteq [n]$, and $|E| = |E'| + 1$. If $d + |E|$ is even and $\phi(y, E) \ne \mathrm{fail}$, then $\phi(y, E) = \phi(y, E')$.

Proof:

We prove this by contradiction. Let $c = \phi(y, E)$, and suppose that $c \ne \phi(y, E')$. We have $2\,\mathrm{wt}_E(c - y) + |E| < d$ and $2\,\mathrm{wt}_{E'}(c - y) + |E'| \ge d$, which, together with $\mathrm{wt}_{E'}(c - y) \le \mathrm{wt}_E(c - y) + 1$ and $|E'| = |E| - 1$, implies $d \le 2\,\mathrm{wt}_{E'}(c - y) + |E'| \le 2\,\mathrm{wt}_E(c - y) + |E| + 1 < d + 1$. It follows that $2\,\mathrm{wt}_{E'}(c - y) + |E'| = d$, so $d + |E| = 2\,\mathrm{wt}_{E'}(c - y) + 2|E'| + 1$ is odd, which contradicts the assumption that $d + |E|$ is even.

II-B Iterative decoding of product codes

Let and be linear codes over with parameters and , and consider the product code . We will use this notation for the rest of the article.

Iterative (hard-decision) decoding of product codes was hinted at in Elias' original paper [1], but it was not properly described. Abramson proposed the iterative decoder in [2]. Our description of the iterative decoder is based on the one in [3].

Let $Y$ denote the received word, viewed as an $n_1 \times n_2$ matrix. The standard iterative decoder works as follows:

  1. Decode all columns of $Y$ with the column code and correct all errors and erasures. Denote the result by $Y'$.

  2. Decode all rows of $Y'$ with the row code and correct all errors and erasures, and denote the result by $Y''$.

  3. If $Y'' \ne Y$, set $Y = Y''$ and repeat from Step 1. Otherwise, go to Step 4.

  4. If there were any decoding failures or errors that were corrected during the last invocations of Steps 1 and 2, return failure. Otherwise, return $Y''$.

This iterative decoder is efficient and performs well. It can correct many error patterns with weight beyond half of the minimum distance, but unfortunately, it cannot correct all error patterns below half of the minimum distance. There are also variations of the iterative decoder that perform at most $m$ iterations, where $m$ is a constant which is usually quite small.
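As an illustration, the following is a minimal Python sketch of the iterative decoder above. The component decoders `dec_col` and `dec_row` are assumed helpers (not from the paper) that return a pair `(vector, success_flag)`, and the iteration cap plays the role of the constant $m$.

```python
import numpy as np

def iterative_decode(Y, dec_col, dec_row, max_iters=50):
    """Sketch of Steps 1-4 of the standard iterative decoder."""
    Y = np.array(Y)
    for _ in range(max_iters):
        failed = False
        Yp = Y.copy()
        for i in range(Y.shape[1]):        # Step 1: decode all columns
            col, ok = dec_col(Y[:, i])
            failed = failed or not ok
            Yp[:, i] = col
        Ypp = Yp.copy()
        for j in range(Y.shape[0]):        # Step 2: decode all rows
            row, ok = dec_row(Yp[j, :])
            failed = failed or not ok
            Ypp[j, :] = row
        if not np.array_equal(Ypp, Y):     # Step 3: progress was made
            Y = Ypp
            continue
        return Y, not failed               # Step 4: fixed point reached
    return Y, False                        # iteration limit reached
```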

The error patterns that cannot be corrected by the iterative decoder are often called stall patterns or stopping sets. We will use the former terminology. These stall patterns limit the performance of the iterative decoder at lower FER. To combat this, so-called post-processing techniques have been developed to improve the error floor.

Kreshchuk et al. noticed that, when the iterative decoder fails, it corrects all errors outside of an error submatrix and then stops. However, if one were to insert erasures, the decoding process could be continued. Therefore, they proposed the following decoder [3].

  1. Run the iterative decoder described above. Return its result if it succeeds, otherwise continue with Step 2.

  2. Denote the set of all rows that changed or were rejected by the row code during the last iteration by $R$. Similarly denote the corresponding set of columns by $S$.

  3. Let $P = R \times S$. Take the last word produced by the iterative decoder and insert erasures at every position found in $P$. Denote this word by $Y_e$.

  4. Run the iterative decoder with $Y_e$ as input.

  5. If the decoder succeeds, return its result. Otherwise, reject this word and return a failure.

This decoder can be seen as a post-processing technique; first the iterative decoder is run and whenever it fails the post-processing is applied to the result of the iterative decoder.

A similar post-processing technique is proposed in [5]. Whenever the iterative decoder fails, mark any rows where a decoding failure occurred as erased. Then decode this word with a slightly modified iterative decoder; whenever there is a row or column decoding failure mark the corresponding row or column as erased.

Another post-processing technique is proposed in [4] by Condo et al. Their technique is only directly applicable to extended-polynomial codes, i.e., polynomial codes with an additional parity symbol added. To describe their method we introduce some notation. Suppose that the iterative decoder cannot correct the given word. Denote the set of rows where we had a decoding failure during the last iteration by $R$, and similarly denote the corresponding set of columns by $S$. Let $P = R \times S$. Their post-processing technique for binary codes is simple: flip the bits in every position found in $P$, and run the iterative decoder on the result.

Whenever the code is non-binary, the symbol at the intersection of a row and a column consists of several bits, so simply flipping these bits is not particularly useful. Instead, Condo et al. use the extra parity symbols of the extended codes to determine which bits should be flipped. The iterative decoder is then run on the word that results from the bit flipping. More precisely, Condo et al. prescribe one iteration of iterative decoding after the bit flipping in both the binary and non-binary case.

Since this post-processing technique needs the extra parity symbols of the extended codes, it cannot be used as such for product codes with general non-binary codes. One can, however, use a slightly simplified version where every symbol at a position found in $P$ is erased. This gives a post-processing technique that is very similar to the one proposed by Kreshchuk et al. In fact, the only difference is in the definition of $P$: Kreshchuk et al. also include rows and columns that changed during the last iteration, while Condo et al. only include rows and columns where the decoding failed.
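The erasure-based variants above share the same skeleton, sketched below in Python under the assumptions that the stalled word and the flagged row and column index sets are available, and that the iterative decoder treats a sentinel value as an erasure; all names are illustrative.

```python
ERASURE = -1   # assumed sentinel the iterative decoder treats as an erasure

def erase_and_retry(Y_stalled, rows_R, cols_S, iterative_decode):
    """Erase every position in P = R x S and rerun the iterative decoder.
    With R and S taken as 'failed or changed' this mimics [3]; with
    'failed only' it mimics the modified version of [4]."""
    Ye = [list(row) for row in Y_stalled]
    for j in rows_R:
        for i in cols_S:
            Ye[j][i] = ERASURE
    return iterative_decode(Ye)
```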

There are other post-processing algorithms for product-code-like codes such as half-product codes and braided codes. We will not consider these techniques, but the interested reader can find further information in, for instance, [6, 19].

II-C Other decoding algorithms for product codes

In [7], Reddy and Robinson presented a decoding algorithm for product codes that can correct all error patterns up to half of the minimum distance. There are also other algorithms that are capable of maximal error correction, most notably [8, 10], but we will not discuss these due to the similarity to the algorithm proposed by Reddy and Robinson.

In order to describe the algorithm, we need to introduce some notation. Let $Y$ be the received word, let $Y'$ denote the result after decoding every column of $Y$ with the column code, and let $E = Y' - Y$ denote the corresponding error matrix. Given a word $X$ (as an $n_1 \times n_2$ matrix), let $X^{(i)}$ and $X_{(j)}$ denote the $i$-th column and $j$-th row of $X$, respectively. Finally, let $c$ denote the transmitted codeword.

The algorithm proceeds as follows:

  1. Decode all the columns with the column code to obtain $Y'$ and $E$.

  2. Assign reliability weights to each column by letting
$$ w_i = \begin{cases} 1 - \dfrac{2\,\mathrm{wt}(E^{(i)})}{d_1} & \text{if the $i$-th column was decoded},\\ 0 & \text{if a decoding failure occurred}. \end{cases} $$

  3. Decode every row of $Y'$ with a generalized minimum distance decoder for the row code (using $w = (w_1, \dots, w_{n_2})$ as the reliability weight vector).

This algorithm can correct every error pattern such that the generalized minimum distance criterion (1) holds for all rows after the column decoding. Therefore, as proved in the original paper, the algorithm can correct all error patterns of weight less than half the minimum distance. It can also correct many error patterns of larger weight, although the iterative decoder can apparently correct more of them.
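For concreteness, the column-weight assignment of Step 2 can be sketched as follows; this is a hedged Python fragment in which `num_corrected` and `failed` are assumed outputs of the column decoding, not part of the original algorithm's interface.

```python
def column_weights(num_corrected, failed, d1):
    """Wainberg-style reliability weights for the columns:
    w_i = 1 - 2*wt(E^(i))/d1 if column i was decoded, and 0 on failure."""
    return [0.0 if f else 1.0 - 2.0 * e / d1
            for e, f in zip(num_corrected, failed)]
```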

The well-informed reader might notice that the reliability weights defined here are slightly different from the ones given by Reddy and Robinson. We have chosen to employ the weights as defined by Wainberg [9], reduced to the case of no erasures, since this way the weighting scheme is more coherent.

Wainberg provides a proof of the correctness of his weighting scheme in [20], which seems hard to come by. Therefore, and for completeness, we include a proof of correctness for the algorithm.

The idea of the proof is simple: suppose that $\mathrm{wt}(X) < d_1 d_2 / 2$, where $X$ is the error pattern, and show that (1) holds for every row after the column decoding. We start by estimating the contribution of every column to the left-hand side of (1). First, suppose that the column decoder decodes the $i$-th column correctly, i.e., $(Y')^{(i)} = c^{(i)}$. Then $\mathrm{wt}(E^{(i)}) = \mathrm{wt}(X^{(i)})$, and since the $i$-th symbol of every row is correct, the $i$-th column contributes

$$ w_i = 1 - \frac{2\,\mathrm{wt}(X^{(i)})}{d_1} \qquad (2) $$

to the left-hand side of (1) for every row. If, on the other hand, the $i$-th column is incorrectly decoded to another word, then $\mathrm{wt}(E^{(i)}) + \mathrm{wt}(X^{(i)}) \ge d_1$, and the contribution of the $i$-th column is at least

$$ -w_i = \frac{2\,\mathrm{wt}(E^{(i)})}{d_1} - 1 \ge 1 - \frac{2\,\mathrm{wt}(X^{(i)})}{d_1}. \qquad (3) $$

Finally, if we have a decoding failure, then $w_i = 0$, and hence the contribution is $0 \ge 1 - 2\,\mathrm{wt}(X^{(i)})/d_1$, since a failure implies $\mathrm{wt}(X^{(i)}) \ge d_1 / 2$.

Let $I_c$ and $I_e$ be the index sets of the columns that were correctly and erroneously decoded respectively, where $I_e$ also contains the indices of columns where a decoding failure occurred. Clearly

$$ \sum_{i \in I_c} \mathrm{wt}(X^{(i)}) + \sum_{i \in I_e} \mathrm{wt}(X^{(i)}) = \mathrm{wt}(X) < \frac{d_1 d_2}{2}, $$

and thus

$$ \sum_{i=1}^{n_2} \left( 1 - \frac{2\,\mathrm{wt}(X^{(i)})}{d_1} \right) = n_2 - \frac{2\,\mathrm{wt}(X)}{d_1} > n_2 - d_2. $$

Letting $w = (w_1, \dots, w_{n_2})$ and combining this with (2) and (3) gives, for every row $c_{(j)}$ of $c$,

$$ \langle w, \beta_{c_{(j)}} \rangle \ge n_2 - \frac{2\,\mathrm{wt}(X)}{d_1} > n_2 - d_2. \qquad (4) $$

This shows that (1) holds for every row of $Y'$. More precisely, the GMD row decoder recovers $c_{(j)}$ for every $j \in [n_1]$. Hence we can conclude that the algorithm decodes all error patterns of weight less than $d_1 d_2 / 2$.

This result is satisfactory, but it can easily be improved. Let

$$ \mathrm{wt}^*(X) = \sum_{i=1}^{n_2} \min\{\mathrm{wt}(X^{(i)}),\ d_1\}. $$

Both (2) and (3) are still valid, as lower bounds on the column contributions, if we replace $\mathrm{wt}(X^{(i)})$ with $\min\{\mathrm{wt}(X^{(i)}), d_1\}$, and hence it follows that the algorithm can correct any error pattern that satisfies $\mathrm{wt}^*(X) < d_1 d_2 / 2$.

The computational complexity of the algorithm is simple to analyze. The decoder for the column code is run $n_2$ times, and the error-and-erasure decoder for the row code is run at most $n_1 N$ times, where $N$ is the maximum number of trials the GMD decoder needs to run to recover a row.

Recall that at most $\min\{J, \lceil d_2/2 \rceil\}$ trials suffice, where $J$ is the number of reliability classes. However, any symbol with reliability weight zero can always be erased. To see this, suppose that $w_i = 0$. Clearly $w_i \beta_{c,i} = 0$ whether or not the $i$-th symbol is correct, and hence

$$ \langle w, \beta_c \rangle = \sum_{i :\, w_i \ne 0} w_i \beta_{c,i}. $$

Since an erased symbol contributes nothing to the left-hand side of (1) either, for all $c \in C$, the claim follows. This means that the GMD decoder needs to run at most $J - 1$ trials. From the definition of the reliability weights we see that we have at most $\lfloor (d_1 - 1)/2 \rfloor + 2$ reliability classes, one of which has weight zero. Therefore a row can be recovered in at most

$$ \left\lfloor \frac{d_1 - 1}{2} \right\rfloor + 1 $$

trials.

We will end this section with a few observations that are useful when implementing the algorithm. These observations do not, unfortunately, affect the worst case complexity of the algorithm. They do, however, provide a way to eliminate unnecessary trials once the received word is known.

If there are no columns with reliability weight $a_j$, then $E_j = E_{j-1}$, and hence we do not need to run the trial for this value of $j$. Furthermore, by Theorem 3, if $j$ is such that $d_2 + |E_j|$ is even and $|E_j| = |E_{j-1}| + 1$, then this trial can be omitted. We say that $j$ is viable if $j = 0$ or if $j$ does not fulfill either of the two previous conditions. The GMD decoder only needs to run trials with viable $j$.
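In code, the two skipping rules amount to a simple filter over the trial indices, sketched below in Python (assuming the reconstructed statement of Theorem 3 above and nested erasure sets $E_0 \subseteq E_1 \subseteq \cdots$).

```python
def viable_trials(erasure_sets, d):
    """Return the indices j of the trials that must actually be run.
    For row decoding, d is the row code's minimum distance d2."""
    viable = [0]                       # j = 0 is always viable
    for j in range(1, len(erasure_sets)):
        prev, cur = erasure_sets[j - 1], erasure_sets[j]
        if len(cur) == len(prev):      # no symbols in class j: E_j = E_{j-1}
            continue
        if len(cur) == len(prev) + 1 and (d + len(cur)) % 2 == 0:
            continue                   # trial skipped by Theorem 3
        viable.append(j)
    return viable
```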

III Improving the Reddy and Robinson algorithm

The Reddy and Robinson algorithm can be optimized in a way that lowers the worst case complexity significantly. As noted in Section II-C, after decoding the columns of a decodable word there exists a $j$ such that every row satisfies (1). This means that there exists one $j$ such that every row will be correctly decoded with the erasure set $E_j$. Hence the GMD decoder does not need to start from the first viable trial for every row. Instead it can start from the same trial that was used when the previous row was correctly decoded. This way the row decoder needs to be run at most $n_1 + N - 1$ times instead of $n_1 N$ times, where $N$ is the maximum number of viable trials. We call this optimized version of the Reddy and Robinson decoder the gmd decoder.

To the best of our knowledge this small improvement has not been presented anywhere in previous papers.
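A sketch of the warm-start optimization in Python, reusing `decode_ee` and `gmd_criterion` from the earlier sketches; the trial list is assumed to be ordered so that a later trial also works whenever some earlier one does, as argued above.

```python
def gmd_rows_warm_start(rows, alpha, d2, erasure_sets, decode_ee):
    """Decode the rows in order, starting each row's GMD trial sequence
    from the trial index at which the previous row succeeded."""
    decoded, start = [], 0
    for y in rows:
        result = None
        for j in range(start, len(erasure_sets)):
            c = decode_ee(y, erasure_sets[j])
            if c is not None and gmd_criterion(y, c, alpha, d2):
                result, start = c, j   # the next row starts from trial j
                break
        decoded.append(result)
    return decoded
```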

The algorithm can also be modified into an algorithm that, according to our simulations, performs significantly better. The change is as simple as modifying the GMD decoder used for the row decoding slightly: instead of only decoding up to the generalized minimum distance, that is, only accepting a word whenever (1) is satisfied, we choose the word that maximizes the left-hand side of (1). More precisely, given a received word $y$ along with a reliability weight vector $\alpha$, the modified decoder operates as follows.

  1. For all viable $j$, decode $y$ with an error-and-erasure decoder with the erasure set $E_j$, and denote the result by $c_j$.

  2. Return the $c_j$ that maximizes $\langle \alpha, \beta_{c_j} \rangle$.

We call this variation of the GMD decoder a generalized distance (GD) decoder. We use the name gd to refer to the improved version of the Reddy and Robinson decoder that uses the GD decoder for the row decoding.
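The GD decoder differs from the GMD decoder only in its acceptance rule, as the following Python sketch (with the same assumed `decode_ee` helper) shows: it scores every trial output by the left-hand side of (1) and keeps the maximizer instead of stopping at the first output that passes the test.

```python
def gd_decode(y, alpha, erasure_sets, decode_ee):
    """Run every viable trial and return the output maximizing <alpha, beta_c>."""
    best, best_score = None, float("-inf")
    for E in erasure_sets:
        c = decode_ee(y, E)
        if c is None:
            continue
        score = sum(a if yi == ci else -a
                    for a, yi, ci in zip(alpha, y, c))   # <alpha, beta_c>
        if score > best_score:
            best, best_score = c, score
    return best
```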

Proposition 2

gd correctly decodes any word that gmd decodes correctly.

Proof:

Suppose that gmd decodes the received word correctly. Then, after the column decoding, every row satisfies (1). Hence, $\langle w, \beta_{c_{(j)}} \rangle > n_2 - d_2$ for every row $j$. Since $c_{(j)}$ is the unique codeword of $C_2$ that satisfies this condition, and Theorem 2 guarantees that $c_{(j)}$ appears among the trial outputs, the GD decoder will also correctly decode the $j$-th row to $c_{(j)}$.

IV A new post-processing technique

To improve the iterative decoder one needs to successfully deal with the stall patterns. There are essentially two options: use a post-processing technique or resort to another decoding algorithm whenever the iterative decoder fails. In this case the other decoder would be run on the received word.

The gd algorithm seems to be a good choice of algorithm to combine with the iterative decoder: gd can correct all stall patterns with sufficiently low error weight, and thus the algorithms complement each other. This would result in the following algorithm. Let $Y$ be the received word. Try to decode $Y$ with the iterative decoder. If it succeeds, return the result. Otherwise, try to decode $Y$ with gd.

While this seems like a good approach, it turns out that if we apply gd as a post-processing technique, then the resulting algorithm performs significantly better than the algorithm outlined above. Thus we propose the following algorithm. Let $Y$ be the received word. Try to decode $Y$ with the iterative decoder. If it succeeds, return the result. Otherwise, let $Y_s$ denote the word where the iterative decoder stalls and try to decode $Y_s$ with gd.
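In pseudocode form the proposed combination is only a few lines; the sketch below assumes a wrapper `iterative_decode` that also returns the stalled word, and a product-code-level `gd_decode_product`, neither of which is fixed by the paper.

```python
def decode_with_proposed_pp(Y, iterative_decode, gd_decode_product):
    """Proposed post-processing: on failure, run gd on the stalled word."""
    result, ok, Y_stalled = iterative_decode(Y)
    if ok:
        return result
    return gd_decode_product(Y_stalled)   # not on the received word Y
```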

Note that, while it is possible that $Y_s = Y$, this seems to be the exception rather than the norm (at least at larger symbol error probabilities). Hence, these two approaches give very different results.

To determine how this new post-processing technique fares, we have chosen to compare it to other known techniques with simulations, namely the techniques proposed by Kreshchuk et al., Emmadi et al., and Condo et al.

V Performance

We have compared four different post-processing techniques:

  1. The technique by Kreshchuk et al. [3];

  2. The technique proposed by Emmadi et al. in [5];

  3. The technique used by Condo et al. in [4] modified for use with non-extended codes as described in Section II-B;

  4. The technique proposed in Section IV.

We also compare each of these techniques to the standard iterative decoder, meaning no post-processing at all. Furthermore, we compare gd to gmd.

All of these post-processing techniques introduce erasures, and thus the column and row decoders are required to be error-and-erasure decoders. Such decoders are widely available for Reed-Solomon codes. Therefore, we have chosen to run the simulations with product codes constructed from Reed-Solomon codes. The simulations were performed with the pcdecode software package [21] over a $q$-ary symmetric channel.

V-A Simulation results

The simulations show that gd performs significantly better than gmd; see Figures 1 and 2. We have obtained similar results for codes of other lengths and error-correcting capabilities. There is also a very clear pattern relating code length, the error-correcting capability of the code, and the gap between gd and gmd: the longer the code and the more errors it can correct, the bigger the gap between the algorithms. These results are not surprising, since the performance achieved with minimum distance decoding is, in general, significantly worse than what can be achieved with maximum likelihood decoding.

The simulation results for the different post-processing techniques are presented in Figures 3 to 8. The results are the same whether we consider the FER or the BER. We have chosen to only show the FER in an effort to make the plots more readable.

Figure 1: Simulations with different codes of length . The component codes are Reed-Solomon codes of length . [FER vs. symbol error probability; curves: gmd and gd.]

Figure 2: Simulations with different codes of length . The component codes are Reed-Solomon codes of length . [FER vs. symbol error probability; curves: gmd and gd.]

Figure 3: Simulation with a code that is the product of and Reed-Solomon codes. [FER vs. symbol error probability; curves: iterative, pp – Kreshchuk, pp – Emmadi, pp – mod. Condo, pp – proposed.]

Figure 4: Simulation with a code that is the product of and Reed-Solomon codes. [Curves as in Figure 3.]

Figure 5: Simulation with a code that is the product of and Reed-Solomon codes. [Curves as in Figure 3.]

Figure 6: Simulation with a code that is the product of and Reed-Solomon codes. [Curves as in Figure 3.]

Figure 7: Simulation with a code that is the product of and Reed-Solomon codes. [Curves as in Figure 3.]

Figure 8: Simulation with a code that is the product of and Reed-Solomon codes. [Curves as in Figure 3.]

We see that the post-processing techniques can be roughly ordered as follows. The technique by Kreshchuk et al. performs better than the iterative decoder, but worse than all the others. The modified version of the technique by Condo et al. performs slightly better, but not by much. Emmadi's method outperforms both of the former techniques, while our proposed technique gives the lowest error rates. We say that this is a rough ordering since there is slight variability depending on the specific code used.

V-B Column-first vs. row-first decoding

All of the decoding algorithms reviewed here start by decoding the columns. Due to the symmetric properties of product codes, we could just as well have chosen to use the row-first versions of the algorithms. This question is only relevant when the row and column codes have different error-correcting capabilities.

The question of column-first versus row-first decoding can equivalently be stated as follows: if the component codes have different minimum distances, which one should we choose as the column code? Here we assume that column-first decoding is used.

Simulations suggest the following: the gmd decoder performs better if one decodes the less powerful code first, while gd and the iterative decoder perform better if the more powerful code is decoded first.

It seems reasonable to use the more powerful code first, and thus the results for the iterative decoder and gd are hardly surprising. A short explanation for gmd behaving differently is as follows.

Suppose that we have codes $C_1$ and $C_2$ as in Section II such that $d_1 < d_2$. If we consider the product code $C_1 \otimes C_2$, then gmd can correct all error patterns $X$ such that

$$ \sum_{i=1}^{n_2} \min\{\mathrm{wt}(X^{(i)}),\ d_1\} < \frac{d_1 d_2}{2}. \qquad (5) $$

On the other hand, if we swap the minimum distances of $C_1$ and $C_2$, then gmd can correct all patterns that satisfy

$$ \sum_{i=1}^{n_2} \min\{\mathrm{wt}(X^{(i)}),\ d_2\} < \frac{d_1 d_2}{2}. \qquad (6) $$

There are clearly more error patterns that satisfy (5) than there are patterns that satisfy (6).

V-C Notes on computational complexities

Applying post-processing techniques to the iterative decoder can significantly lower the error rate at medium to low FER/BER. It does, however, come at the cost of increased computational complexity. To analyze the impact of post-processing on the average computational complexity, we consider the ratio of the number of times the post-processing was invoked to the total number of words processed. Denote this ratio by $\rho$. Figure 9 shows how $\rho$ depends on the FER for a variety of different codes. The ratios are computed using data gathered from the simulations with our proposed post-processing technique. However, $\rho$ only depends on the properties of the iterative decoder, and hence the results are the same for all post-processing techniques.

Figure 9: $\rho$ as a function of the frame error rate for a variety of codes.

We can clearly see that, for any reasonable channel quality, the post-processing is applied very seldom. Therefore, there is no reason not to apply post-processing (at least not for the majority of applications). Furthermore, the computational complexity of the post-processing technique does not matter unless it is orders of magnitude larger than that of the iterative decoder.

The computational complexities of all the post-processing techniques reviewed are similar, and hence one should choose the one that gives the lowest FER/BER.

VI Reaching very low FER/BER

Simulations with short and relatively high rate codes show that the iterative decoder along with post-processing techniques still exhibits an error floor at quite high FER. The underlying problem is that, even with post-processing, the decoder cannot correct all error patterns with weight below half of the minimum distance of the code. Although we cannot verify it with simulations, the same error floor – albeit at a much lower FER – should also be present with longer high rate codes if the symbol error probability is low enough. One potential solution to this problem is to combine the gmd decoder with the iterative decoder plus post-processing in the following way. First run the gmd decoder on the received word. If it succeeds, then stop and return the decoded word. Otherwise, run the iterative decoder with post-processing on the received word.
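The combination is again a simple fallback chain; a hedged Python sketch with assumed product-code-level wrappers follows.

```python
def combined_decode(Y, gmd_decode_product, iterative_decode_with_pp):
    """Run gmd first; it corrects every pattern of weight < d1*d2/2.
    Fall back to the iterative decoder with post-processing otherwise."""
    result = gmd_decode_product(Y)
    if result is not None:
        return result
    return iterative_decode_with_pp(Y)    # applied to the received word
```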

This decoder can correct all error patterns of weight less than half the minimum distance and also all the error patterns that the iterative decoder with post-processing can correct. (Footnote: here we are opportunistically assuming that the gmd decoder does not miscorrect any received word; the probability of this happening is very low when the symbol error probability is low enough.) Hence the error floor will be considerably lower.

Simulations with short codes (length 36, 64 and 100) suggest that this algorithm fares well. More precisely, these simulations suggest that the error rate of the algorithm is upper bounded by the iterative decoder with post processing, and that the gap between the algorithms increases when the symbol error probability is lowered. Any meaningful improvement in FER is only achieved with very low symbol error probabilities.

For long codes the error floor of the iterative decoder with post-processing is only reached at a FER that is so low that it is infeasible to run reliable simulations. Therefore other techniques must be applied to validate the performance of the algorithm at very low FER.

It is also possible to apply this algorithm to half-product codes. Further work in this direction, along with comparisons to techniques used for half-product codes, for instance those in [6], is left for future work.

VII Conclusions

In this paper, we have studied and compared different hard-decision decoding algorithms for product codes. We have improved the Reddy and Robinson decoder [7] by allowing decoding beyond the generalized minimum distance. Furthermore, we have presented a new post-processing technique that utilizes this improved decoder. Our simulations suggest that our new post-processing technique outperforms other known post-processing techniques that are not optimized for specific codes.

In addition, a new algorithm that is targeted towards high rate codes is presented. Heuristic arguments are in favor of the algorithm, but further work is needed to give useful upper bounds for the FER achievable with this method.

References

  • [1] P. Elias, “Error-free coding,” IRE Trans. Inform. Theory, vol. IT-4, pp. 29–37, 1954.
  • [2] N. Abramson, “Cascade decoding of cyclic product codes,” IEEE Transactions on Communication Technology, vol. 16, no. 3, pp. 398–402, 1968.
  • [3] A. Kreshchuk, V. Zyablov, and E. Ryabinkin, “A new iterative decoder for product codes,” in Fourteenth International Workshop on Algebraic and Combinatorial Coding Theory, 2014, pp. 211–214.
  • [4] C. Condo, F. Leduc-Primeau, G. Sarkis, P. Giard, and W. J. Gross, “Stall pattern avoidance in polynomial product codes,” in 2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP).   IEEE, 2016, pp. 699–702.
  • [5] S. Emmadi, K. R. Narayanan, and H. D. Pfister, “Half-product codes for flash memory,” in Proc. Non-Volatile Memories Workshop, vol. 312, 2015.
  • [6] T. Mittelholzer, T. Parnell, N. Papandreou, and H. Pozidis, “Improving the error-floor performance of binary half-product codes,” in 2016 International Symposium on Information Theory and Its Applications (ISITA).   IEEE, 2016, pp. 295–299.
  • [7] S. Reddy and J. Robinson, “Random error and burst correction by iterated codes,” IEEE Transactions on Information Theory, vol. 18, no. 1, pp. 182–185, 1972.
  • [8] E. Weldon, “Decoding binary block codes on q-ary output channels,” IEEE Transactions on Information Theory, vol. 17, no. 6, pp. 713–718, 1971.
  • [9] S. Wainberg, “Error-erasure decoding of product codes (corresp.),” IEEE Transactions on Information Theory, vol. 18, no. 6, pp. 821–823, 1972.
  • [10] E. Blokh and V. V. Zyablov, “Linear concatenated codes,” Moscow, USSR: Nauka, 1982.
  • [11] V. A. Zinoviev and V. V. Zyablov, “Decoding of non-linear generalized concatenated codes,” Probl. Peredachi Inf., vol. 14, no. 2, pp. 46–52, 1978.
  • [12] ——, “Correction of error bursts and independent errors by generalized concatenated codes,” Probl. Peredachi Inf., vol. 15, no. 2, pp. 58–70, 1979.
  • [13] T. Ericson, “A simple analysis of the blokh-zyablov decoding algorithm,” in International Conference on Applied Algebra, Algebraic Algorithms, and Error-Correcting Codes.   Springer, 1986, pp. 43–57.
  • [14] V. Zyablov, S. Shavgulidze, and M. Bossert, “An introduction to generalized concatenated codes,” European Transactions on Telecommunications, vol. 10, no. 6, pp. 609–622, 1999.
  • [15] G. D. Forney, “Concatenated codes.” MIT Technical Report 440, December 1965.
  • [16] R. M. Pyndiah, “Near-optimum decoding of product codes: Block turbo codes,” IEEE Transactions on communications, vol. 46, no. 8, pp. 1003–1010, 1998.
  • [17] R. Le Bidan, C. Leroux, C. Jego, P. Adde, and R. Pyndiah, “Reed-solomon turbo product codes for optical communications: from code optimization to decoder design,” EURASIP Journal on Wireless Communications and Networking, vol. 2008, p. 14, 2008.
  • [18] F. J. MacWilliams and N. J. A. Sloane, The theory of error-correcting codes.   Elsevier, 1977, vol. 16.
  • [19] Y.-Y. Jian, H. D. Pfister, K. R. Narayanan, R. Rao, and R. Mazahreh, “Iterative hard-decision decoding of braided bch codes for high-speed optical communication,” in 2013 IEEE Global Communications Conference (GLOBECOM).   IEEE, 2013, pp. 2376–2381.
  • [20] S. Wainberg, “Burst-error and random-error correction over q-ary input, p-ary output channels,” Ph.D. dissertation, Polytech. Inst. Brooklyn, June 1972.
  • [21] F. Blomqvist, “pcdecode, tools for simulations with product codes,” 2019. [Online]. Available: https://github.com/fblomqvi/pcdecode