Performance Bounds and Estimates for Quantized LDPC Decoders

Homayoon Hatami, et al. ∙ November 7, 2019

The performance of low-density parity-check (LDPC) codes at high signal-to-noise ratios (SNRs) is known to be limited by the presence of certain sub-graphs that exist in the Tanner graph representation of the code, for example trapping sets and absorbing sets. This paper derives a lower bound on the frame error rate (FER) of any LDPC code containing a given problematic sub-graph, assuming a particular message passing decoder and decoder quantization. A crucial aspect of the lower bound is that it is code-independent, in the sense that it can be derived based only on a problematic sub-graph and then applied to any code containing it. Due to the complexity of evaluating the exact bound, assumptions are proposed to approximate it, from which we can estimate decoder performance. Simulated results obtained for both the quantized sum-product algorithm (SPA) and the quantized min-sum algorithm (MSA) are shown to be consistent with the approximate bound and the corresponding performance estimates. Different classes of LDPC codes, including both structured and randomly constructed codes, are used to demonstrate the robustness of the approach.


I Introduction

Low-density parity-check (LDPC) codes [1] are a class of error correcting codes with asymptotic performance approaching the Shannon limit. However, practical LDPC decoders, such as those that implement message-passing algorithms based on belief propagation (BP), can introduce an error floor that limits the reduction in error probability at high signal-to-noise ratios (SNRs). A number of structures in a code’s Tanner graph representation have been identified as significant factors in error floor performance, e.g., near-codewords [2], trapping sets [3], and absorbing sets [4]. Absorbing sets are known to be problematic in a variety of LDPC codes and stable under bit flipping decoding [5, 6]. Other classes of trapping sets, such as elementary trapping sets and leafless elementary trapping sets, have been shown to be the dominant cause of the error floor for certain codes [7, 8, 9, 10].

Several papers have addressed the problem of predicting the error floor performance of LDPC codes on the additive white Gaussian noise (AWGN) channel based on the existence of these problematic structures. In [3], Richardson proposed a variation of importance sampling to estimate the frame error rate (FER) of a code based on trapping sets. In [11], an error floor estimate was introduced based on the dominant absorbing sets (those empirically determined to cause most errors) in structured array-based codes, and the results were compared to those derived from importance sampling. In [12], a method similar to [11] was applied to the min-sum algorithm (MSA). In [13], the contribution of the shortest cycles in a code’s graph was used to estimate its performance. Also, [14] and [15] developed a state-space model for a code’s dominant absorbing sets to estimate its FER. Later, [16] applied this method to the case where the log-likelihood-ratios (LLRs) used for decoding are constrained to some maximum saturation value. Each of these references considered the problematic structures of a particular code. In contrast, the authors of [5] derived a real-valued threshold associated with a particular absorbing set irrespective of the code; the threshold indicates if the absorbing set can be “deactivated” and hence not contribute to the FER at high SNR in any code that contains it.

This paper obtains sub-graph specific, or code-independent, lower bounds on the performance of an LDPC code when a finite precision (quantized) LDPC decoder is used. These bounds are general, in that they apply to any code containing a particular problematic sub-graph; however, calculating the bound is complex, so we introduce assumptions and approximations to simplify its calculation, resulting in what we call an approximate lower bound. Given a description of a dominant problematic sub-graph and its multiplicity in a code, an estimate of the resulting FER performance is obtained. Extensive simulation results justify the validity of the assumptions and approximations used for various decoders, quantizers, problematic sub-graphs, and codes.

We first create a simplified model for the Tanner graph of a code containing a particular problematic sub-graph; this model captures the structure of the code outside the sub-graph with a single edge connected to each check node incident to a variable node inside the sub-graph. We use this model to identify the sets of quantized received channel LLR values observed at the sub-graph’s variable nodes that cannot be corrected even under the most favorable LLR conditions for the variable nodes outside the sub-graph. These sets are deterministic for a given sub-graph, i.e., they cause a decoding error regardless of the channel SNR, and thus they can be used to lower bound the FER performance of any code containing that sub-graph, and it is not necessary to re-derive the sets for every SNR. Furthermore, deriving these sets is typically much faster than performing a Monte-Carlo simulation for a particular SNR. The probabilities of these sets of received values are functions of the SNR and can be derived analytically; the same bound can be used for two different codes with the same absorbing set but different rates, one bound being a simple SNR-derived shift of the other. We refer to these bounds as “code-independent”. To verify the accuracy of the lower bound and the corresponding performance estimates, we have considered a variety of codes, including array-based codes of different rates, Euclidean Geometry codes, Tanner codes, and randomly constructed codes for both sum-product algorithm (SPA) and MSA decoders and uniform and non-uniform quantizers. Our focus is on absorbing sets, since they have been well-studied in the literature.

II Background

II-A LDPC Codes / Quantized Decoders

Assume that a codeword $\mathbf{c} = (c_1, \ldots, c_n)$ is binary phase shift keying (BPSK) modulated such that each zero is mapped to $+1$ and each one is mapped to $-1$. The modulated signal is transmitted over an AWGN channel with mean $0$ and standard deviation $\sigma$. The received samples from the channel are multiplied by $2/\sigma^2$ to form the channel LLR vector $\boldsymbol{\lambda}$ corresponding to $\mathbf{c}$. As a result, for $i = 1, \ldots, n$, the element of $\boldsymbol{\lambda}$ corresponding to $c_i$, denoted $\lambda_i$, has a Gaussian distribution with mean $2/\sigma^2$ or $-2/\sigma^2$, depending on whether the modulated symbol is $+1$ or $-1$, respectively. The standard deviation of each $\lambda_i$ is $2/\sigma$, and since LDPC codes are linear, we can assume the transmission of the all-zero codeword. Therefore,

$\lambda_i \sim \mathcal{N}\!\left(\tfrac{2}{\sigma^2},\, \tfrac{2}{\sigma}\right), \qquad (1)$

where $\mathcal{N}(\mu, \sigma)$ is the Gaussian distribution with mean $\mu$ and standard deviation $\sigma$.
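As a simple illustration of (1), the following Python sketch (the function and variable names are ours, not the paper's) generates channel LLRs for the all-zero codeword over the AWGN channel and checks their mean and standard deviation empirically.

```python
import numpy as np

def channel_llrs(n, sigma, rng=np.random.default_rng(0)):
    """Simulate BPSK (+1 for bit 0) over AWGN and return channel LLRs.

    For the all-zero codeword, each LLR is Gaussian with mean 2/sigma^2
    and standard deviation 2/sigma, as in (1).
    """
    tx = np.ones(n)                      # all-zero codeword -> all +1 symbols
    rx = tx + rng.normal(0.0, sigma, n)  # AWGN with standard deviation sigma
    return (2.0 / sigma**2) * rx         # scale received samples to LLRs

llr = channel_llrs(100000, sigma=0.8)
print(llr.mean(), 2 / 0.8**2)   # empirical vs. theoretical mean
print(llr.std(), 2 / 0.8)       # empirical vs. theoretical standard deviation
```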

Let the sets $V$ and $C$ represent the set of variable nodes and check nodes, respectively, of a bipartite Tanner graph representation of an LDPC code parity-check matrix. In practical decoder implementations, the channel LLRs and variable node and check node LLRs must be quantized, and the calculations at check nodes and variable nodes are implemented with finite precision. At a given iteration, let $m_{v \to c}$ represent the quantized LLR passed from variable node $v$ to check node $c$. Similarly, let $m_{c \to v}$ represent the quantized LLR passed from $c$ to $v$. The set of check nodes that are neighbors (connected) to $v$ is denoted by $N(v)$, and the set of variable nodes that are neighbors to $c$ is denoted by $N(c)$. To initialize decoding, each variable node $v$ passes a quantized version of $\lambda_v$, denoted by $\lambda_v^q$, to the check nodes in $N(v)$. At the check nodes, the LLR passed from $c$ to $v$ is calculated as follows for quantized SPA and MSA decoders:

  • Quantized SPA: The check node operation can be written as

    $m_{c \to v} = Q\!\left( \phi^{-1}\!\Big( \sum_{v' \in N(c) \setminus v} \phi\big(|m_{v' \to c}|\big) \Big) \prod_{v' \in N(c) \setminus v} \operatorname{sgn}\big(m_{v' \to c}\big) \right), \qquad (2)$

    where the two functions $\phi$ and $\phi^{-1}$ are defined as $\phi(x) = -\ln\tanh(x/2)$ and $\phi^{-1}(x) = 2\tanh^{-1}(e^{-x})$, $x > 0$, and the function $Q(\cdot)$ returns the quantized value of its argument. In [11], it is shown that this quantized implementation suffers from a significant error floor, i.e., at high SNRs there is little additional reduction in the FER as the channel quality improves.

  • Quantized MSA: The check node computation simplifies to

    $m_{c \to v} = \prod_{v' \in N(c) \setminus v} \operatorname{sgn}\big(m_{v' \to c}\big) \cdot \min_{v' \in N(c) \setminus v} \big|m_{v' \to c}\big|. \qquad (3)$

    The MSA is an approximation of the SPA that reduces implementation complexity.

For both the SPA and MSA, at the variable nodes, the hard decision estimate $\hat{\mathbf{x}}$ is checked to see if it is a valid codeword, where $\hat{x}_v = 1$ iff

$\lambda_v^q + \sum_{c \in N(v)} m_{c \to v} < 0. \qquad (4)$

If $\hat{\mathbf{x}}$ is not a valid codeword and fewer than $\ell_{\max}$ iterations have been carried out, the next iteration is performed and the LLRs passed from the variable nodes to the check nodes are

$m_{v \to c} = Q\!\left( \lambda_v^q + \sum_{c' \in N(v) \setminus c} m_{c' \to v} \right). \qquad (5)$
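To make the message-passing recursions concrete, the sketch below implements one iteration of a quantized MSA decoder following the structure of (3)–(5). The uniform quantizer `q()`, the array layout, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def q(x, step=0.25, max_level=7.75):
    """Uniform quantizer: round to the nearest multiple of `step`, then clip."""
    return np.clip(np.round(x / step) * step, -max_level, max_level)

def msa_iteration(H, llr, v2c):
    """One quantized MSA iteration. H: (m,n) binary array, llr: channel LLRs,
    v2c: (m,n) array of variable-to-check messages (nonzero only where H == 1)."""
    m, n = H.shape
    c2v = np.zeros_like(v2c)
    for i in range(m):                       # check node update, eq. (3)
        cols = np.flatnonzero(H[i])
        msgs = v2c[i, cols]
        for k, j in enumerate(cols):
            others = np.delete(msgs, k)
            c2v[i, j] = q(np.prod(np.sign(others)) * np.min(np.abs(others)))
    total = llr + c2v.sum(axis=0)            # posterior LLR at each variable node
    x_hat = (total < 0).astype(int)          # hard decision, eq. (4)
    for j in range(n):                       # variable node update, eq. (5)
        rows = np.flatnonzero(H[:, j])
        for i in rows:
            v2c[i, j] = q(llr[j] + c2v[rows, j].sum() - c2v[i, j])
    return x_hat, v2c, c2v
```

In a full decoder loop one would initialize `v2c` as `H * q(llr)` (each variable node first sends its quantized channel LLR) and repeat `msa_iteration` until the hard decision satisfies all parity checks or the maximum number of iterations is reached.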

II-B Trapping Sets & Absorbing Sets

Let $D$ denote a subset of $V$ of cardinality $a$. Let $E(D)$ and $O(D)$ represent the subsets of check nodes connected to variable nodes in $D$ with even and odd degrees, respectively, where $|O(D)| = b$. Let the sub-graph induced by $D$ be $G_D = (D \cup N(D), E_D)$, where $N(D) = E(D) \cup O(D)$ represents the check nodes connected to $D$ and $E_D$ represents the set of edges connecting $D$ to $N(D)$. The sub-graph of the Tanner graph that is induced by $D$ is called an $(a, b)$ trapping set, with graphical representation $G_D$. $D$ is further defined to induce an $(a, b)$ absorbing set if each variable node in $D$ is connected to fewer check nodes in $O(D)$ than in $E(D)$. As an illustration, Fig. 1 shows a $(4, 2)$ absorbing set with $b = 2$ degree-one check nodes in $O(D)$, where each of the variable nodes in $D$ is connected to fewer elements in $O(D)$ than in $E(D)$. This absorbing set is a structure that appears often in variable-degree-3 LDPC codes, for example, and we see that it contains a cycle of length six (the highlighted edges in the figure). The girth of an absorbing set is the length of its shortest cycle, and it can be readily observed that the girth of the absorbing set in Fig. 1 is six.

Fig. 1: An illustration of a $(4, 2)$ absorbing set with girth 6. This sub-graph can also be referred to as an elementary trapping set or a leafless elementary trapping set.

Other classifications of problematic sub-graphs have been referred to as elementary trapping sets (ETS), which contain only degree-1 and degree-2 check nodes [9], and leafless elementary trapping sets (LETS), in which each variable node is connected to at least two even-degree check nodes [7]. As such, the sub-graph in Fig. 1 can also be referred to as a $(4, 2)$ ETS or LETS.

II-C Quantizers

Since quantized decoding may have different performance characteristics than unquantized decoding, considering the effect of quantization on decoder performance is of great importance:

  • Uniform Quantization: Following convention, we let $Q(m.f)$ denote a quantizer that represents each message with $m + f + 1$ bits: $m$ bits to represent the integer part of the message, $f$ bits to represent the fractional part, and one bit to represent the sign. In this case, there are $2^{m+f+1} - 1$ quantization levels, where the levels (i.e., the quantized message values) range from $-(2^m - 2^{-f})$ to $2^m - 2^{-f}$, with step size $2^{-f}$ between levels. The quantizer thresholds are equidistant between the levels, i.e., the threshold between two adjacent levels is located midway between them. (A small code sketch of this quantizer follows the list below.)

  • Quasi-Uniform Quantization: In [17], the authors proposed a non-uniform quantizer, denoted as “quasi-uniform” due to its structure, which uses $q$ bits for uniform quantization, thus maintaining precision, plus an extra bit to increase the range of the quantizer compared to a $q$-bit uniform quantizer. It is shown in [17] that the increased range of this quantizer improves the error-floor performance.
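A minimal sketch of the uniform $Q(m.f)$ quantizer described above (function and parameter names are ours): step size $2^{-f}$, levels over $\pm(2^m - 2^{-f})$, and thresholds midway between adjacent levels.

```python
import numpy as np

def uniform_quantizer(m, f):
    """Return (levels, quantize) for a Q(m.f) uniform quantizer:
    m integer bits, f fractional bits, one sign bit."""
    step = 2.0 ** (-f)
    max_level = 2.0 ** m - step                     # largest representable level
    levels = np.arange(-max_level, max_level + step / 2, step)
    assert len(levels) == 2 ** (m + f + 1) - 1      # number of quantizer levels

    def quantize(x):
        # round to the nearest level (thresholds midway between levels), then clip
        return np.clip(np.round(np.asarray(x) / step) * step, -max_level, max_level)

    return levels, quantize

levels, Q = uniform_quantizer(m=3, f=1)   # e.g., a 5-bit Q(3.1) quantizer, 31 levels
print(levels[0], levels[-1], Q([0.23, -9.0, 4.1]))
```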

III System Model

In this section, we propose a general model for representing a problematic sub-graph in an arbitrary code. We also formulate expressions for the quantized LLR values received at the variable nodes and check nodes in the sub-graph. As mentioned earlier, we focus on absorbing sets as our sub-graph of interest in the development of our system model; however, the system model can be generalized in a straightforward manner to any sub-graph.

III-A Absorbing Set Model

We consider the general case of an $(a, b)$ absorbing set with an unspecified number of edges connected to each of its check nodes. The variable nodes are represented by $v_1, \ldots, v_a$. We partition the edges connected to each check node $c_j \in N(D)$ into two groups depending on whether they connect to a variable node in $D$ or in $V \setminus D$. We denote the neighboring nodes of $c_j$ in $D$ as $N_{\mathrm{in}}(c_j)$ and the neighboring nodes of $c_j$ in $V \setminus D$ as $N_{\mathrm{out}}(c_j)$. If there are $d_j^{\mathrm{in}}$ edges connected to $N_{\mathrm{in}}(c_j)$ and $d_j^{\mathrm{out}}$ edges connected to $N_{\mathrm{out}}(c_j)$, then $|N_{\mathrm{in}}(c_j)| = d_j^{\mathrm{in}}$ and $|N_{\mathrm{out}}(c_j)| = d_j^{\mathrm{out}}$. In Fig. 2, a $(4, 2)$ absorbing set is illustrated in which $d_j^{\mathrm{in}} = 2$ for the check nodes in $E(D)$, $d_j^{\mathrm{in}} = 1$ for the check nodes in $O(D)$, and $d_j^{\mathrm{out}}$ is arbitrary for every $c_j \in N(D)$ (note that $d_j^{\mathrm{out}}$ can be zero).

Fig. 2: An illustration of a $(4, 2)$ absorbing set with an unspecified number of edges connected to each check node.

To simplify the calculation of the LLRs sent from each check node $c_j$ to the variable nodes in $D$ in the case where $d_j^{\mathrm{out}} > 1$, we represent the edges from the variable nodes in $N_{\mathrm{out}}(c_j)$ with a single edge (see Fig. 3). This edge has an LLR $x_j$ that is a function of all the external LLRs coming from the set $N_{\mathrm{out}}(c_j)$ to $c_j$ and can be derived as follows:

  • SPA:

    $x_j = \phi^{-1}\!\Big( \sum_{v' \in N_{\mathrm{out}}(c_j)} \phi\big(|m_{v' \to c_j}|\big) \Big) \prod_{v' \in N_{\mathrm{out}}(c_j)} \operatorname{sgn}\big(m_{v' \to c_j}\big) \qquad (6)$

  • MSA:

    $x_j = \prod_{v' \in N_{\mathrm{out}}(c_j)} \operatorname{sgn}\big(m_{v' \to c_j}\big) \cdot \min_{v' \in N_{\mathrm{out}}(c_j)} \big|m_{v' \to c_j}\big| \qquad (7)$

The LLR $x_j$ can then be used in equations (2) and (3), in conjunction with the internal LLRs coming from the set $N_{\mathrm{in}}(c_j)$, to form the LLRs sent from $c_j$ to the variable nodes in $D$. (Note that if $d_j^{\mathrm{out}} = 0$ for any $c_j$, then the single edge representation described above is not necessary, since outgoing messages from $c_j$ will be a function only of internal messages from $N_{\mathrm{in}}(c_j)$.)

This simplification, where we consider only one external edge connected to each check node in $N(D)$ with $d_j^{\mathrm{out}} \ge 1$ from outside the absorbing set, is depicted in Fig. 3 for a $(4, 2)$ absorbing set. We refer to this graph as the absorbing set decoder graph $G_D^{\mathrm{AS}}$, where $A = \{a_1, \ldots, a_{b'}\}$ is the set of auxiliary variable nodes, $b'$ corresponds to the number of check nodes in $N(D)$ with $d_j^{\mathrm{out}} \ge 1$, and $E_A$ is the set of single edges connecting each $a_j \in A$ to its check node $c_j$. We also refer to a decoder operating on $G_D^{\mathrm{AS}}$ as an absorbing set decoder. (We remind the reader that the concept of an absorbing set decoder can be applied to any sub-graph of interest.) Later, we will use an absorbing set decoder operating on $G_D^{\mathrm{AS}}$ to develop a lower bound on the FER of any code containing $D$. No detailed information about the code containing the absorbing set is required in this approach, except the code rate, which is needed to relate the channel SNR to the noise parameter $\sigma$. In the next two sub-sections, we discuss how the possible inputs to the variable nodes and check nodes of a quantized absorbing set decoder are determined.

Fig. 3: An illustration of a $(4, 2)$ absorbing set decoder graph with single edges connected from auxiliary variable node $a_j$ to each check node $c_j$, where each $x_j$ represents the LLR input to check node $c_j$ from outside $D$.

III-B Variable Node Inputs

Let $\boldsymbol{\lambda}^q$ denote the quantized version of the channel LLR vector $\boldsymbol{\lambda}$ described in (1). The portion of $\boldsymbol{\lambda}$ (respectively $\boldsymbol{\lambda}^q$) corresponding to the variable nodes of $D$ is denoted by $\boldsymbol{\lambda}_D$ (resp. $\boldsymbol{\lambda}_D^q$). Each element of $\boldsymbol{\lambda}_D^q$, denoted by $\lambda_i^q$, $i = 1, \ldots, a$, can take one of $N_q = 2^q - 1$ values for a $q$-bit quantizer. These values are labeled $l_1$ to $l_{N_q}$, from smallest to largest. The quantizer boundaries are represented by $T_0$ to $T_{N_q}$. The probability that $\lambda_i^q$ takes on the value $l_j$, $1 \le j \le N_q$, is equal to the probability that $T_{j-1} < \lambda_i \le T_j$, where $T_0 = -\infty$ and $T_{N_q} = +\infty$. For the AWGN channel, this probability is given by

$p_{l_j} = \frac{1}{2}\!\left[\operatorname{erfc}\!\left(\frac{\big(T_{j-1} - 2/\sigma^2\big)\,\sigma}{2\sqrt{2}}\right) - \operatorname{erfc}\!\left(\frac{\big(T_j - 2/\sigma^2\big)\,\sigma}{2\sqrt{2}}\right)\right], \qquad (8)$

where $\operatorname{erfc}(\cdot)$ represents the complementary error function of Gaussian statistics. The vector $\boldsymbol{\lambda}_D^q$ can take on $N_q^{\,a}$ different values, representing the possible combinations of quantizer levels $l_j$, $j = 1, \ldots, N_q$, at the $a$ variable nodes. The set of all possible vectors is denoted by $\Lambda$, and the probability that $\boldsymbol{\lambda}_D^q$ takes on the value $\boldsymbol{\lambda} = (\lambda_1, \ldots, \lambda_a) \in \Lambda$ is given by

$P(\boldsymbol{\lambda}_D^q = \boldsymbol{\lambda}) = \prod_{i=1}^{a} p_{\lambda_i}. \qquad (9)$
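The probabilities in (8) and (9) are straightforward to evaluate numerically. The sketch below (our own helper names) computes $p_{l_j}$ for every quantizer level from the thresholds and then the probability of a particular input vector.

```python
import numpy as np
from scipy.special import erfc

def level_probabilities(levels, sigma):
    """P(lambda^q = l_j) for a channel LLR ~ N(2/sigma^2, 2/sigma), eq. (8).
    Thresholds sit midway between adjacent levels, with T_0 = -inf, T_Nq = +inf."""
    mu, s = 2.0 / sigma**2, 2.0 / sigma
    T = np.concatenate(([-np.inf], (levels[:-1] + levels[1:]) / 2, [np.inf]))
    cdf = lambda t: 0.5 * erfc(-(t - mu) / (s * np.sqrt(2)))   # Gaussian CDF
    return cdf(T[1:]) - cdf(T[:-1])

def vector_probability(lam, levels, p):
    """P(lambda_D^q = lam) as a product of per-node probabilities, eq. (9)."""
    idx = [int(np.argmin(np.abs(levels - x))) for x in lam]
    return float(np.prod(p[idx]))

levels = np.arange(-7.5, 8.0, 0.5)          # 31 levels of a Q(3.1) quantizer
p = level_probabilities(levels, sigma=0.7)
print(p.sum())                              # should be 1.0 (up to rounding)
print(vector_probability([-7.5, -7.5, 0.5, 1.0], levels, p))
```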

III-C Check Node Inputs

We use a column vector, denoted by $\mathbf{x}(\ell)$, to represent the single edge LLRs input to the check nodes in $N(D)$ from the auxiliary variable nodes at iteration $\ell$. If $\ell_{\max}$ decoding iterations are performed, all the single edge LLRs input to $N(D)$ at all iterations can be represented by the matrix $X = [\mathbf{x}(1)\ \mathbf{x}(2)\ \cdots\ \mathbf{x}(\ell_{\max})]$, where each element of $X$ can take one of $N_q$ values. Therefore, $X$ has $N_q^{\,b'\ell_{\max}}$ possible realizations. We denote a given realization as $X$ and the set of all possible realizations by $\mathcal{X}$, where $|\mathcal{X}|$ can be extremely large for practical values of $q$, $b'$, and $\ell_{\max}$. As an illustration, with a 5-bit quantizer ($N_q = 31$ levels), $b'$ check nodes with external edges connected to $A$, and $\ell_{\max}$ decoder iterations, $|\mathcal{X}| = 31^{\,b'\ell_{\max}}$, which is astronomically large even for small $b'$ and $\ell_{\max}$.
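As a hypothetical numerical illustration (the parameter values below are ours, not taken from the paper), even a small configuration makes exhaustive enumeration of $\mathcal{X}$ infeasible:

```latex
% Illustrative numbers only (assumed, not taken from the paper):
% q = 5 bits (N_q = 31 levels), b' = 2 check nodes with external edges,
% and ell_max = 10 decoder iterations.
\[
  |\mathcal{X}| \;=\; N_q^{\,b'\ell_{\max}} \;=\; 31^{2 \cdot 10} \;=\; 31^{20} \;\approx\; 6.7 \times 10^{29}.
\]
```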

IV Bounding the Error Probability of an Absorbing Set Decoder

For an absorbing set decoder operating on $G_D^{\mathrm{AS}}$ with independently chosen variable node inputs $\boldsymbol{\lambda} \in \Lambda$ (from the channel) and check node inputs $X \in \mathcal{X}$ (from outside $D$), we define $\mathcal{E}$ to be the event that there remains at least one bit error in $D$ after $\ell_{\max}$ decoding iterations. The probability of error for an absorbing set decoder operating on $G_D^{\mathrm{AS}}$ can then be written by conditioning the event $\mathcal{E}$ on all possible $\boldsymbol{\lambda}$ and $X$ as follows:

$P_e^{\mathrm{AS}} = \sum_{\boldsymbol{\lambda} \in \Lambda} \sum_{X \in \mathcal{X}} P\big(\mathcal{E} \mid \boldsymbol{\lambda}, X\big)\, P\big(\boldsymbol{\lambda}, X\big), \qquad (10)$

where $P(\mathcal{E} \mid \boldsymbol{\lambda}, X)$ is either $0$ or $1$, based on whether or not the variable node input vector $\boldsymbol{\lambda}$ is decoded correctly after $\ell_{\max}$ iterations when $X$ is the check node input matrix.

To help visualize (10), we define a decodability array for an absorbing set decoder, with columns corresponding to all possible variable node input vectors $\boldsymbol{\lambda} \in \Lambda$ and rows corresponding to all possible check node input matrices $X \in \mathcal{X}$. The columns are indexed by $a$-tuples over the set of quantizer levels, while the rows are indexed by $b' \times \ell_{\max}$ matrices over the set of quantizer levels. We can then fill out the decodability array with

$d(\boldsymbol{\lambda}, X) \triangleq P\big(\mathcal{E} \mid \boldsymbol{\lambda}, X\big) \in \{0, 1\}. \qquad (11)$

The resulting array is deterministic, i.e., it is not a function of the channel SNR. A pictorial representation of the decodability array is shown below:

$\begin{array}{c|ccc} & \boldsymbol{\lambda}^{(1)} & \cdots & \boldsymbol{\lambda}^{(|\Lambda|)} \\ \hline X^{(1)} & d\big(\boldsymbol{\lambda}^{(1)}, X^{(1)}\big) & \cdots & d\big(\boldsymbol{\lambda}^{(|\Lambda|)}, X^{(1)}\big) \\ \vdots & \vdots & \ddots & \vdots \\ X^{(|\mathcal{X}|)} & d\big(\boldsymbol{\lambda}^{(1)}, X^{(|\mathcal{X}|)}\big) & \cdots & d\big(\boldsymbol{\lambda}^{(|\Lambda|)}, X^{(|\mathcal{X}|)}\big) \end{array} \qquad (12)$
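Conceptually, the decodability array of (11)–(12) is filled by running the absorbing set decoder once per pair. The following sketch (with a placeholder `as_decoder_fails` routine standing in for the absorbing set decoder, and names of our own choosing) shows the bookkeeping, although in practice the full array is far too large to enumerate.

```python
from itertools import product
import numpy as np

def decodability_array(levels, a, rows, as_decoder_fails):
    """d(lambda, X) for every variable node input vector (column) and every
    check node input matrix X in `rows`; an entry of 1 means decoding fails."""
    columns = list(product(levels, repeat=a))          # all N_q^a input vectors
    d = np.zeros((len(rows), len(columns)), dtype=np.uint8)
    for r, X in enumerate(rows):
        for c, lam in enumerate(columns):
            d[r, c] = 1 if as_decoder_fails(lam, X) else 0
    return columns, d
```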

We now define the absorbing region of an absorbing set decoder as the set of all pairs $(\boldsymbol{\lambda}, X)$ with ‘1’ entries in the decodability array. (A related definition of an absorbing region was given in [11].) We note that, generally, the decodability array can be constructed in this way for any problematic sub-graph, and the corresponding “absorbing region” would refer to the portion of the array with ‘1’ entries. Letting $\mathcal{A}$ represent the absorbing region, i.e., $\mathcal{A} = \{(\boldsymbol{\lambda}, X) : d(\boldsymbol{\lambda}, X) = 1\}$, $P_e^{\mathrm{AS}}$ in (10) can be written as

$P_e^{\mathrm{AS}} = \sum_{(\boldsymbol{\lambda}, X) \in \mathcal{A}} P\big(\boldsymbol{\lambda}, X\big), \qquad (13)$

where (8) and (9) indicate the dependence of these probabilities on the SNR. Evaluating (13) is computationally complex, since the size of the decodability array is typically extremely large. In the rest of this section, we propose an approach to simplify the problem of finding the probability of the absorbing region.

We proceed by proposing to lower bound $P_e^{\mathrm{AS}}$. Assuming that $\boldsymbol{\lambda}$ and $X$ are chosen independently, (13) becomes

$P_e^{\mathrm{AS}} = \sum_{(\boldsymbol{\lambda}, X) \in \mathcal{A}} P(\boldsymbol{\lambda})\, P(X), \qquad (14)$

where we note that, in an absorbing set decoder, we independently choose a $\boldsymbol{\lambda}$ and an $X$, run the decoder to see if the pair is decoded incorrectly, which results in a “1” in the decodability array, and then repeat this process for every possible combination in the array. After the process is complete, each entry in the array is either a “1” or a “0”.

We now define the following sets, which can be understood by referring to the decodability array. First, for a given $X$ (row of the decodability array), denote the set of all $\boldsymbol{\lambda}$ (columns of the decodability array) for which the pairs $(\boldsymbol{\lambda}, X)$ cannot be decoded correctly as $S_X$, i.e., $S_X = \{\boldsymbol{\lambda} : d(\boldsymbol{\lambda}, X) = 1\}$. This is equivalent to the set of all columns with entries ‘1’ in a given row of the decodability array. Additionally, we let $S$ denote the set of all columns in the decodability array with ‘1’ entries in every row, i.e., $S = \{\boldsymbol{\lambda} : d(\boldsymbol{\lambda}, X) = 1 \text{ for all } X \in \mathcal{X}\}$, where we note that

$S = \bigcap_{X \in \mathcal{X}} S_X. \qquad (15)$

In (14), the error probability is a function of $P(X)$, which involves computing the probability of a particular set of check node inputs (from outside $D$) for each of the $\ell_{\max}$ iterations. If we are interested only in a lower bound on the probability of belonging to the absorbing region, this term can be eliminated from the calculation by including in the sum only entries whose columns have a ‘1’ in every row, i.e., the set $S$, which results in the following lower bound

$P_e^{\mathrm{AS}} \ge \sum_{\boldsymbol{\lambda} \in S} \sum_{X \in \mathcal{X}} P(\boldsymbol{\lambda})\, P(X) = \sum_{\boldsymbol{\lambda} \in S} P(\boldsymbol{\lambda}). \qquad (16)$

The lower bound in (16) implies that

$P_e^{\mathrm{AS}} \ge P_{\mathcal{L}}, \quad \text{where } P_{\mathcal{L}} \triangleq \sum_{\boldsymbol{\lambda} \in S} P(\boldsymbol{\lambda}), \qquad (17)$

so that instead of including all the pairs in the decodability array with ‘1’ entries, we only need to include the columns with all ‘1’ entries, which leads to the removal of the $P(X)$ term from the expression for $P_e^{\mathrm{AS}}$. This makes the evaluation of the lower bound in (17) dependent only on the absorbing set and not on the structure of the code containing $D$. (If every column of the array has at least one “0” entry, that means that every possible input to the “absorbing set” can be decoded correctly with some combination of check node inputs, and we would obtain the trivial bound $P_{\mathcal{L}} = 0$; however, since such an object isn’t problematic by our definition, a lower bound of zero makes sense.)

V Bounding the FER of an LDPC Code

In this section, we begin by deriving a lower bound on the FER of any LDPC code whose Tanner graph representation contains at least one instance of a given absorbing set in Section V-A. We then provide a series of approximations in Section V-B to reduce the complexity of evaluating the bound. Finally, in Section V-C we provide some remarks concerning the application, evaluation, and merits of a code-independent bound on the FER of an LDPC code.

V-A A Lower Bound on the FER of an LDPC Code

We define $\mathcal{F}$ as the event that there is at least one bit error in the set of variable nodes $V$ after the quantized received vector $\boldsymbol{\lambda}^q$ is decoded using a quantized decoder operating on the full code graph for $\ell_{\max}$ iterations. Then the FER of the LDPC code can be written as

$\mathrm{FER} = \sum_{\boldsymbol{\lambda}^q} P\big(\mathcal{F} \mid \boldsymbol{\lambda}^q\big)\, P\big(\boldsymbol{\lambda}^q\big), \qquad (18)$

since there are $N_q^{\,n}$ possible realizations of $\boldsymbol{\lambda}^q$.

Now let $\mathcal{F}_D$ represent the event that there is at least one bit error in $D$ after $\boldsymbol{\lambda}^q$ is decoded using a quantized decoder operating on the full graph for $\ell_{\max}$ iterations. Then $P(\mathcal{F}_D)$ determines the contribution of $D$ to the FER, and we can therefore write

$\mathrm{FER} \ge P(\mathcal{F}_D). \qquad (19)$

Now we make the important observation that, since $\mathcal{F}_D$ depends only on the input vector $\boldsymbol{\lambda}_D^q$ received by the variable nodes of $D$ and the input matrix $X$ received by the check nodes connected to $D$ during the $\ell_{\max}$ iterations of decoding, it can also be written as

$P(\mathcal{F}_D) = \sum_{\boldsymbol{\lambda} \in \Lambda} \sum_{X \in \mathcal{X}} P\big(\mathcal{F}_D \mid \boldsymbol{\lambda}, X\big)\, P\big(\boldsymbol{\lambda}, X\big), \qquad (20)$

where we note that $P(\mathcal{F}_D \mid \boldsymbol{\lambda}, X)$ represents the probability that $D$ is in error for a full graph decoder, whereas $P(\mathcal{E} \mid \boldsymbol{\lambda}, X)$ in (10) represents the probability that $D$ is in error for an absorbing set decoder. Here, unlike in (14), $\boldsymbol{\lambda}$ and $X$ are dependent variables, since the absorbing set check node input matrix depends on the variable node input vector in a full graph decoder. We can now state the following theorem.

Theorem 1.

For any LDPC code containing the absorbing set $D$, $P_{\mathcal{L}}$ defined in (17) lower bounds $P(\mathcal{F}_D)$, i.e.,

$P(\mathcal{F}_D) \ge P_{\mathcal{L}}. \qquad (21)$

Proof: We begin by defining a decodability array for $D$, similar to (12), but for a full graph decoder. In this case, however, each of the $N_q^{\,a}$ columns, representing a given variable node input vector ($a$-tuple) $\boldsymbol{\lambda} \in \Lambda$, contains entries in at most $N_q^{\,n-a}$ rows, one for each of the possible check node input matrices that result from using the full graph decoder to decode $\boldsymbol{\lambda}$ combined with one of the $N_q^{\,n-a}$ input vectors to the variable nodes in $V \setminus D$, where we note that some of the decoder results may give the same check node input matrix. Also, some of the entries in the decodability array may be blank, corresponding to cases where the full graph decoder never results in a particular combination of $\boldsymbol{\lambda}$ and $X$.

We next fill in the non-blank entries in the decodability array according to whether the pair $(\boldsymbol{\lambda}, X)$ is decoded correctly (a ‘0’) or incorrectly (a ‘1’) by the full graph decoder. We now define the absorbing set region of a full graph decoder as the set of all pairs $(\boldsymbol{\lambda}, X)$ with ‘1’ entries in the decodability array and denote it as $\mathcal{A}_f$. We can then express (20) in terms of this absorbing set region as

$P(\mathcal{F}_D) = \sum_{(\boldsymbol{\lambda}, X) \in \mathcal{A}_f} P\big(\boldsymbol{\lambda}, X\big). \qquad (22)$

Further, let $S_f$ be the set of all columns in the decodability array with either ‘1’ or blank entries in every row, i.e., the set of all variable node input vectors that are not decoded correctly by the full graph decoder. We can now write

$P(\mathcal{F}_D) \ge \sum_{\boldsymbol{\lambda} \in S_f} P(\boldsymbol{\lambda}). \qquad (23)$

An important observation now follows: if a column $\boldsymbol{\lambda}$ in the decodability array for the absorbing set decoder contains all ‘1’ entries, i.e., if $\boldsymbol{\lambda} \in S$, then it must contain either ‘1’ or blank entries in every row of the decodability array for the full graph decoder, i.e., $\boldsymbol{\lambda} \in S_f$. Note, however, that the converse is not true. In other words, if $\boldsymbol{\lambda} \in S_f$, it does not follow that $\boldsymbol{\lambda} \in S$, since blank entries in the decodability array for the full graph decoder (corresponding to check node input matrices that never occur) could be decoded correctly by the absorbing set decoder.

Now defining

$P_{S_f} \triangleq \sum_{\boldsymbol{\lambda} \in S_f} P(\boldsymbol{\lambda}), \qquad (24)$

it follows that

$P_{S_f} \ge P_{\mathcal{L}} \qquad (25)$

and

$P(\mathcal{F}_D) \ge P_{S_f} \ge P_{\mathcal{L}}, \qquad (26)$

which establishes (21).

If there are $N_{\mathrm{AS}}$ occurrences of an absorbing set, denoted by $D_k$, $k = 1, \ldots, N_{\mathrm{AS}}$, in a given code, the contribution of all absorbing sets of this type to the FER is given by $P\big(\bigcup_{k=1}^{N_{\mathrm{AS}}} \mathcal{F}_{D_k}\big)$. Since these absorbing sets may not be the only cause of decoding errors, $P\big(\bigcup_{k=1}^{N_{\mathrm{AS}}} \mathcal{F}_{D_k}\big)$ gives a lower bound on the FER, i.e.,

$\mathrm{FER} \ge P\Big(\bigcup_{k=1}^{N_{\mathrm{AS}}} \mathcal{F}_{D_k}\Big). \qquad (27)$

Assuming that all absorbing sets within a given code have the same $P(\mathcal{F}_{D_k})$, denoted by $P(\mathcal{F}_D)$ (this assumption is based on the symmetry of the channel and is particularly relevant for structured codes, due to their additional symmetry; a similar assumption is made in [11, 15, 16]), an immediate result of (27) is that

$\mathrm{FER} \ge P(\mathcal{F}_D) \ge P_{\mathcal{L}}. \qquad (28)$

Furthermore, since

$P\Big(\bigcup_{k=1}^{N_{\mathrm{AS}}} \mathcal{F}_{D_k}\Big) \ge \sum_{k=1}^{N_{\mathrm{AS}}} P\big(\mathcal{F}_{D_k}\big) - \sum_{i < j} P\big(\mathcal{F}_{D_i} \cap \mathcal{F}_{D_j}\big), \qquad (29)$

(27) and (29) can be combined to give the following lower bound

$\mathrm{FER} \ge \sum_{k=1}^{N_{\mathrm{AS}}} P\big(\mathcal{F}_{D_k}\big) - \sum_{i < j} P\big(\mathcal{F}_{D_i} \cap \mathcal{F}_{D_j}\big). \qquad (30)$

We now assume that any two error events $\mathcal{F}_{D_i}$ and $\mathcal{F}_{D_j}$ associated with the same type of absorbing set are independent, i.e.,

$P\big(\mathcal{F}_{D_i} \cap \mathcal{F}_{D_j}\big) = P\big(\mathcal{F}_{D_i}\big)\, P\big(\mathcal{F}_{D_j}\big). \qquad (31)$

This assumption is made for simplicity and is based on the observation that most pairs of a given absorbing set appearing in a code are disjoint, in the sense that they do not have any nodes in common. Using this assumption, the right hand side of (30) can be written as

$N_{\mathrm{AS}}\, P(\mathcal{F}_D) - \binom{N_{\mathrm{AS}}}{2} P(\mathcal{F}_D)^2. \qquad (32)$

Further, as noted in [11], the fact that the channel LLRs in the error floor region are typically large implies that the chance of more than one absorbing set receiving low channel LLRs, and thus causing decoding errors, is small. This, combined with the fact that the second term in (32) will not have a significant impact in the error floor region (since $P(\mathcal{F}_D)$ will be small and thus $P(\mathcal{F}_D)^2 \ll P(\mathcal{F}_D)$) and can thus be neglected, results in the following approximate lower bound on the FER in the error floor region of an LDPC code containing $N_{\mathrm{AS}}$ instances of the absorbing set $D$:

$\mathrm{FER} \gtrsim N_{\mathrm{AS}}\, P_{\mathcal{L}}, \qquad (33)$

where the accuracy of the approximate bound in (33) depends on the tightness of the bound in (17). Furthermore, if $D$ is the most harmful or dominant absorbing set in a code, $N_{\mathrm{AS}}\, P_{\mathcal{L}}$ represents an estimate of its FER performance in the error floor region. (In the case where more than one absorbing set is believed to be dominant, the maximum of all the lower bounds can be used to form an error estimate.)

Expressions (28) and (33) represent a true lower bound and an approximate lower bound, respectively, valid in the error floor region, in terms of $P_{\mathcal{L}}$, defined in (17). The multiplicities of the different absorbing sets needed to evaluate (33) may be derived using either analytical or semi-analytical methods, such as those given in [4, 18, 19].

V-B Approximating the Lower Bound on FER

In this section, we propose a reduced complexity method to approximate $P_{\mathcal{L}}$. Although the term $P(X)$ was eliminated from the expression for $P_e^{\mathrm{AS}}$ in (16), thus making the lower bound code-independent and simplifying the expression, calculating $P_{\mathcal{L}}$ in (17) still depends on finding $S$, which, in turn, requires examining all $X \in \mathcal{X}$, as shown in (15). In other words, all $|\mathcal{X}|$ rows of the decodability array should be examined for each of the $|\Lambda|$ columns. Therefore, instead of finding $S$, we consider the less computationally complex set

$S_R = \bigcap_{X \in R} S_X, \qquad R \subseteq \mathcal{X}, \qquad (34)$

which involves examining only a subset $R$ of rows of the decodability array. By properly choosing the rows and finding the columns with all ‘1’ entries in these rows, it is possible to obtain a good approximation to the set of columns with ‘1’ entries in every row, allowing us to compute

$\tilde{P}_{\mathcal{L}} = \sum_{\boldsymbol{\lambda} \in S_R} P(\boldsymbol{\lambda}), \qquad (35)$

which results in the approximate lower bound (we use this term to emphasize the fact that approximations are used in calculating $\tilde{P}_{\mathcal{L}}$)

$\mathrm{FER} \gtrsim N_{\mathrm{AS}}\, \tilde{P}_{\mathcal{L}}. \qquad (36)$

In the following, we explain how the approximate lower bound is calculated. We first assume that $|R|$ rows of the decodability array, denoted by $X_r$ for $r = 1, \ldots, |R|$, have been selected. The calculation of (35) then involves two steps (a small code sketch of this procedure follows the list):

  1. Finding the set $S_R$. This is achieved by operating the absorbing set decoder on each $\boldsymbol{\lambda} \in \Lambda$ for each $X_r \in R$. Then, using (34), if the decoder fails to correctly decode $\boldsymbol{\lambda}$ for all the $X_r \in R$, it follows that $\boldsymbol{\lambda} \in S_R$. Otherwise, $\boldsymbol{\lambda}$ is discarded.

  2. Summing the probabilities $P(\boldsymbol{\lambda})$ for all $\boldsymbol{\lambda} \in S_R$, where $P(\boldsymbol{\lambda})$ is obtained using (8) and (9).
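The two steps above can be carried out as in the following sketch. The helpers `level_probabilities` and `vector_probability` are the illustrative routines from the sketch in Section III-B, and `as_decoder_fails` is a placeholder for an absorbing set decoder run; none of these names come from the paper.

```python
from itertools import product

def approximate_lower_bound(levels, a, row_set, sigma, as_decoder_fails):
    """Approximate the code-independent bound: Step 1 keeps the input vectors
    that the absorbing set decoder fails to correct for every X in `row_set`
    (eq. (34)); Step 2 sums their probabilities (eq. (35))."""
    p = level_probabilities(levels, sigma)            # per-level probabilities, eq. (8)
    p_tilde = 0.0
    for lam in product(levels, repeat=a):
        if all(as_decoder_fails(lam, X) for X in row_set):
            p_tilde += vector_probability(lam, levels, p)   # eq. (9)
    return p_tilde    # the FER estimate of eq. (36) is N_AS * p_tilde
```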

In order to obtain a computationally efficient approximation, we should choose rows expected to have a small number of ‘1’s, since they eliminate more columns than rows with a large number of ‘1’s. In other words, the rows should be chosen as a set of check node input matrices that we expect to result in a small number of input vectors to the absorbing set that cannot be decoded correctly. Rows which we expect will lead to incorrect decoding of most input vectors, on the other hand, are not useful. Therefore, we try to avoid such rows. Before proceeding, we review some important facts regarding the dynamics of absorbing sets in the high SNR region (with highly reliable input channel values). For such absorbing sets, after a certain number of iterations, it is common for the LLRs received by the check nodes in $N(D)$ from the variable nodes outside the absorbing set to grow rapidly and reach the maximum quantizer level (or the saturation level) within a few iterations [20]. For example, the analysis in [5] starts from the point where all the LLRs have already converged to the maximum level. This motivates our choice of Row Set I, where we consider only the row $X_{\mathrm{sat}}$, in which every entry equals the maximum quantizer level $l_{N_q}$. Further, in [15, 16], and [21] it is stated that slowing down convergence to the maximum level for the LLRs inside an absorbing set often leads to an increase in the probability of correct decoding. This motivates our choice of Row Set II, where we consider a more gradual increase of the input LLRs to the check nodes in $N(D)$, which corresponds to choosing another row $X_{\mathrm{grad}}$ in the decodability array. We also propose Row Set III, which combines Row Sets I and II using (34). In general, we have found that rows with a high probability of correct decoding have increasing LLRs with iterations, and no negative LLRs (assuming all-zero transmission), a point also noted in [15].

V-B1 Row Set I

The first candidate check node input matrix that we consider is $X_{\mathrm{sat}}$, which is based on the following assumption.

Assumption 1.

For any absorbing set input vector $\boldsymbol{\lambda}$ such that $d(\boldsymbol{\lambda}, X_{\mathrm{sat}}) = 1$, i.e., $(\boldsymbol{\lambda}, X_{\mathrm{sat}})$ cannot be decoded correctly, any other input pair $(\boldsymbol{\lambda}, X)$ also cannot be decoded correctly, i.e.,

$d(\boldsymbol{\lambda}, X_{\mathrm{sat}}) = 1 \implies d(\boldsymbol{\lambda}, X) = 1 \quad \text{for all } X \in \mathcal{X}. \qquad (37)$

In other words, it is assumed that, if the row in the decodability array associated with $X_{\mathrm{sat}}$ has a ‘1’ entry in a column, the remaining rows will also have ‘1’ entries in the same column. This assumption is based on the behavior of maximum likelihood (ML) decoding, where the best performance is achieved if the distance from the received vector to the most reliable channel LLR vector is minimum, i.e., all the variable nodes in $V \setminus D$ receive the maximum quantizer output $l_{N_q}$. Extending this logic to an iterative MP decoder would imply that, if the variable nodes in $V \setminus D$ send $l_{N_q}$ to the check nodes in $N(D)$, the best decoding performance is achieved. Although this is not necessarily the case for iterative decoders, this assumption, along with similar earlier arguments from [5], [13], and [20], motivates choosing

$X_{\mathrm{sat}} = \big[\, l_{N_q} \,\big]_{b' \times \ell_{\max}}, \qquad \text{Row Set I} = \{X_{\mathrm{sat}}\}, \qquad (38)$

i.e., the check node input matrix whose entries are all equal to the maximum quantizer level.

Note that this choice of $X_{\mathrm{sat}}$ yields a $\tilde{P}_{\mathcal{L}}$ that is significantly less complex to calculate than $P_{\mathcal{L}}$, since it requires examining only one row of the decodability array, while all $|\mathcal{X}|$ rows must be examined to calculate $P_{\mathcal{L}}$.

Fig. 4 shows the approximate lower bound of (36) based on (38) for a $(4, 2)$ absorbing set in an array code [22] with a uniform quantizer and an SPA decoder, where a multiplicity of 334,890 was assigned to the absorbing set (see [4]). This absorbing set was chosen because it was shown to be the dominant one for array codes with an SPA decoder and a uniform quantizer [20]. (We remark again that this $(4, 2)$ absorbing set can also be considered as an ETS or LETS.) The FER of the simulated array code is also shown for comparison. We observe that the approximate lower bound closely follows the simulated performance in the error floor region, thus supporting the choice of $X_{\mathrm{sat}}$.


Fig. 4: Approximate lower bound of (36) based on a $(4, 2)$ absorbing set for an array code with a uniform quantizer and an SPA decoder, using Row Set I.

V-B2 Row Set II

The results of [17] indicate that for certain LDPC codes, the MSA decoder with a quasi-uniform quantizer can have error floor performance very close to that of an unquantized SPA decoder. As an example, for array codes decoded using the MSA decoder with a 5-bit quasi-uniform quantizer, we find that the dominant error patterns are $(6, 0)$ absorbing sets with girth 8, as shown in Fig. 5. This absorbing set is the support of a codeword and represents the minimum distance of these codes (which illustrates the efficiency of the quasi-uniform quantizer for MSA decoding). (This $(6, 0)$ absorbing set, or codeword, can also be classified as an ETS or LETS.)

When we apply Assumption 1 to the $(6, 0)$ absorbing set with different quasi-uniform quantizers and use the multiplicity of the absorbing set from [18], however, we find that the approximate lower bound is, in fact, larger than the associated simulation result for the MSA decoder. In other words, our results in this case show that there must exist columns in the decodability array with ‘1’ entries in the row associated with $X_{\mathrm{sat}}$ but ‘0’ entries in another row. This is consistent with the results of [15, 16, 21, 23], i.e., that slowing down the convergence of the LLRs inside the absorbing set can increase the probability of correct decoding. Therefore, we conclude that Assumption 1, which is based on ML decoding behavior, is not necessarily valid for all MP decoders. This suggests choosing $X_{\mathrm{grad}}$, a check node input matrix corresponding to some other row of the decodability array, that can lead to correct decoding of some absorbing set input vectors when $X_{\mathrm{sat}}$ does not lead to correct decoding.

Fig. 5: An illustration of a $(6, 0)$ absorbing set with girth 8.

In [14, 15, 16], the authors model the dynamics of an absorbing set by applying Density Evolution (DE) to the messages coming from outside the absorbing set, where a Gaussian distribution for the LLRs received by the check nodes in $N(D)$ at each iteration is assumed. These distributions are represented by their mean and variance, which are shown to be increasing with iteration number. Here, we make use of those results and extend them to our code-independent framework by considering a check node input matrix $X_{\mathrm{grad}}$ for which the LLRs increase gradually until reaching the maximum quantizer level, thereby slowing down the convergence speed of the LLRs passed along the edges of the absorbing set decoder graph. To this end, the elements of $X_{\mathrm{grad}}$ in the first iteration are set equal to the lowest positive quantizer level. We then let this value increase with the iteration number $\ell$, so that each of the positive quantizer levels is used $n_r$ times before moving to the next larger level, for a total of $\ell_{\max}$ iterations. (We do not use negative quantizer values because we are interested in rows that can decode most of the input patterns, and check node input matrices with negative values typically have a small probability of correct decoding; e.g., see [17].) This results in the check node input matrix

$[X_{\mathrm{grad}}]_{j,\ell} = l^{+}_{\min(\lceil \ell / n_r \rceil,\, N_q^{+})}, \qquad j = 1, \ldots, b', \quad \ell = 1, \ldots, \ell_{\max}, \qquad (39)$

where $l^{+}_{1} < l^{+}_{2} < \cdots < l^{+}_{N_q^{+}}$ denote the positive quantizer levels, and the set

$S_{X_{\mathrm{grad}}} = \{\boldsymbol{\lambda} : d(\boldsymbol{\lambda}, X_{\mathrm{grad}}) = 1\}. \qquad (40)$

As in the case of Row Set I, $S_{X_{\mathrm{grad}}}$ is significantly less complex to calculate than $S$. The choice of $n_r$ and the general trajectory of the increasing quantizer levels give us some options for choosing $X_{\mathrm{grad}}$. According to our experience, for a $(6, 0)$ absorbing set with a 5-bit quasi-uniform quantizer, increasing $n_r$ beyond a small number of repetitions did not improve the approximate lower bound based on $X_{\mathrm{grad}}$.
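The two candidate rows can be built directly from the quantizer levels, as in the sketch below. Our construction of $X_{\mathrm{grad}}$ follows the verbal description of (39), with each positive level repeated a fixed number of times and then held at the maximum level for any remaining iterations; the function name, `repeats` parameter, and default values are ours.

```python
import numpy as np

def row_set_matrices(levels, b_prime, ell_max, repeats=2):
    """Build the Row Set I and Row Set II check node input matrices
    (b_prime external edges, ell_max iterations)."""
    pos = levels[levels > 0]                               # positive quantizer levels
    # Row Set I: every external input saturated at the maximum level for all iterations.
    x_sat = np.full((b_prime, ell_max), levels[-1])
    # Row Set II: each positive level used `repeats` times before the next larger one,
    # then held at the maximum level for the remaining iterations.
    ramp = np.repeat(pos, repeats)[:ell_max]
    ramp = np.concatenate([ramp, np.full(ell_max - len(ramp), levels[-1])])
    x_grad = np.tile(ramp, (b_prime, 1))
    return x_sat, x_grad

levels = np.arange(-7.5, 8.0, 0.5)       # 31 levels of an illustrative 5-bit quantizer
X_sat, X_grad = row_set_matrices(levels, b_prime=2, ell_max=50)
```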

V-B3 Row Set III

Finally, we can apply (34) to the two proposed sets $S_{X_{\mathrm{sat}}}$ and $S_{X_{\mathrm{grad}}}$ to obtain

$S_{\mathrm{III}} = S_{X_{\mathrm{sat}}} \cap S_{X_{\mathrm{grad}}}, \qquad (41)$

which again yields a $\tilde{P}_{\mathcal{L}}$ that is significantly less complex to calculate than $P_{\mathcal{L}}$. The procedure to find the proposed $S_{\mathrm{III}}$ is described in Algorithm 1.

1:  $S_{\mathrm{III}} \leftarrow \emptyset$ (the empty set)
2:  for all $\boldsymbol{\lambda} \in \Lambda$ do
3:     The absorbing set decoder tries to decode $\boldsymbol{\lambda}$ with $X_{\mathrm{sat}}$;
4:     if the absorbing set decoder fails then
5:        The absorbing set decoder tries to decode $\boldsymbol{\lambda}$ with $X_{\mathrm{grad}}$;
6:        if the absorbing set decoder fails then
7:           $S_{\mathrm{III}} \leftarrow S_{\mathrm{III}} \cup \{\boldsymbol{\lambda}\}$;
8:        end if
9:     end if
10:  end for
11:  return $S_{\mathrm{III}}$
Algorithm 1 Calculate $S_{\mathrm{III}} = S_{X_{\mathrm{sat}}} \cap S_{X_{\mathrm{grad}}}$
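A direct Python transcription of Algorithm 1 (again using the illustrative `as_decoder_fails` helper from the earlier sketches, which is not part of the paper), checking $X_{\mathrm{sat}}$ first so that $X_{\mathrm{grad}}$ is only run on the vectors that survive:

```python
from itertools import product

def row_set_iii(levels, a, X_sat, X_grad, as_decoder_fails):
    """Algorithm 1: collect the input vectors that fail under both X_sat and X_grad."""
    S = []                                       # line 1: start with the empty set
    for lam in product(levels, repeat=a):        # line 2: all variable node input vectors
        if as_decoder_fails(lam, X_sat):         # lines 3-4: try to decode with X_sat
            if as_decoder_fails(lam, X_grad):    # lines 5-6: try to decode with X_grad
                S.append(lam)                    # line 7: keep lam if both attempts fail
    return S                                     # line 11: return the set
```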

As noted previously, the calculation of $\tilde{P}_{\mathcal{L}}$ can be seen as a two-step process: finding the set $S_{\mathrm{III}}$ by operating the absorbing set decoder and then calculating the probability of $S_{\mathrm{III}}$ using (35). In Fig. 6, for the $(6, 0)$ absorbing set, the approximate lower bound of (36) based on the set $S_{\mathrm{III}}$, the bound based only on the set $S_{X_{\mathrm{sat}}}$, and the simulated performance are shown for an array code [22] with a 5-bit quasi-uniform quantizer and an MSA decoder. We observe that in this case the approximate lower bound based on Row Set III gives a better result than the one obtained using only $S_{X_{\mathrm{sat}}}$, i.e., Row Set I. It is worth noting that, to reduce the complexity of applying Algorithm 1, we start with $X_{\mathrm{sat}}$, since it is likely to eliminate the most input vectors $\boldsymbol{\lambda}$. Then we look for other rows that might succeed where $X_{\mathrm{sat}}$ fails, so that, after checking $X_{\mathrm{sat}}$, it is only necessary to run the absorbing set decoder for those $\boldsymbol{\lambda}$’s with a ‘1’ in the row of the decodability array associated with $X_{\mathrm{sat}}$.


Fig. 6: Approximate lower bound of (36) based on Row Sets I and III for a $(6, 0)$ absorbing set in an array code with a 5-bit quasi-uniform quantizer and an MSA decoder.

V-C Remarks

Due to its generality and simplicity, the code-independent approximate lower bound on the FER in (36) is a useful tool in predicting the high SNR performance of quantized LDPC decoders based on the presence of a given absorbing set (or general problematic sub-graph). Below we summarize this concept and pinpoint its strengths.

  • Application: The lower bound $P_{\mathcal{L}}$ indicates that any code containing at least one instance of a given absorbing set cannot achieve an FER lower than that value. This statement, although not strictly true for the approximate lower bound $\tilde{P}_{\mathcal{L}}$, can loosely be considered to have the same implication, as our numerical results show in the next section. In the same fashion, given that the multiplicity of the absorbing set is $N_{\mathrm{AS}}$, the approximate bound indicates that one cannot achieve an FER lower than $N_{\mathrm{AS}}\,\tilde{P}_{\mathcal{L}}$. In the case that $D$ is the dominant absorbing set in a code, the approximate lower bound becomes an estimate of its FER performance. Since $P_{\mathcal{L}}$ and its approximation $\tilde{P}_{\mathcal{L}}$ are based only on an absorbing set, rather than a specific code, (28), (33), and (36) apply to any code containing that absorbing set.

  • Complexity: An advantage of the code-independent bound is that its computational complexity relative to similar code-dependent methods, such as the error floor approximation of [4], is only on the order of $a/n$, since the evaluation of $\tilde{P}_{\mathcal{L}}$ is performed solely on the absorbing set of size $a$ and not on the entire code of length $n$. For example, for the array code and the $(4, 2)$ absorbing set considered above, the complexity of the code-independent bound is only a small fraction of that of the code-dependent bound. (This is true under identical conditions, such as dividing the variable nodes into two groups as proposed in [11].) Furthermore, the time needed to evaluate the code-independent approximate lower bound is much less than for Monte Carlo simulation for small values of the FER. For example, for the array code and the $(4, 2)$ absorbing set, only a relatively high FER can be reached with Monte Carlo simulation in the time needed to evaluate the code-independent bound, which can accurately predict performance at FERs many orders of magnitude lower.

  • Code rate dependency: Assuming the BPSK mapping described in Section II-A, calculating $\tilde{P}_{\mathcal{L}}$ in (35) requires the probability distribution given in (8), and thus the bound is a function of the channel noise parameter $\sigma$. It follows that

    (42)

    where , or, expressing