## I Introduction

The security of today’s asymmetric cryptography and public key exchange systems such as Rivest-Shamir-Adleman and Diffie-Hellman is based on complexity assumptions for mathematical problems such as the discrete logarithm problem and the factorization of large integers [1]. The advent of the quantum computer, or even an unexpected algorithmic innovation, would immediately compromise their security, with drastic consequences for the internet [2, 3].

One possible solution is quantum key distribution (QKD), which provides information-theoretically secure cryptographic key exchange based on the properties of quantum mechanics. However, both the communication distance and the key generation rate are severely limited by the performance of information reconciliation, an important part of every QKD protocol that ensures that both parties generate the same cryptographic key. This is particularly true for continuous variable (CV) QKD, which is based on the modulation of coherent states and measurements of the amplitude and phase quadratures of the electromagnetic light field. To achieve high transmission distances, reverse reconciliation has to be applied, i.e. Alice has to reconcile on Bob’s measurement results. The main challenge here is the design of capacity-approaching error correction codes for very low signal-to-noise ratios (SNR). For instance, in [4] an SNR of dB was reported for a transmission distance of km, and in [5] an SNR of dB for km.

Low-density parity-check (LDPC) codes in the multi-edge type (MET) variant [6] can be used in combination with multilevel coding - multistage decoding (MLC-MSD) to perform capacity-approaching error correction for QKD at low SNR [7]. In the aforementioned paper about QKD over km, the authors developed a new code with rate and an efficiency of %. This code was used in other works [8]; however, it has considerable complexity in its degree distribution (DD). Codes with low-complexity DDs are desirable in terms of implementation complexity for the encoder and decoder blocks. Designing an efficient capacity-achieving DD with low complexity has been investigated in many works, and different analytical as well as numerical techniques have been introduced [9, 10, 11].

Traditionally, code design for LDPC codes is a time-consuming process based on the density evolution algorithm. In each iteration of density evolution a vector of real values representing the density has to be updated, which is computationally expensive. Due to this complexity, many approximation methods for density evolution have been developed, for instance the Gaussian approximation and extrinsic information transfer (EXIT) charts [11, 12]. Nowadays, these asymptotic analysis tools are used for the optimization of the degree distribution for a variety of binary memoryless channels. Specifically, for the binary erasure channel (BEC), capacity-achieving codes can be designed by matching the two EXIT curves related to the variable node and check node degree distributions, due to the area theorem [9]. For other binary-input output-symmetric memoryless channels, the generalized area theorem and generalized EXIT (G-EXIT) charts can be used for the optimization problem [13]. Here, we introduce G-EXIT charts for MET-LDPC codes to provide a practical tool for their design and optimization. We use our tool to design new codes with rates and with lower complexity. In general, for a given input distribution we calculate the Shannon capacity for each level in the MLC-MSD scheme and present high-efficiency MET-LDPC codes for various SNRs.

The organization of this paper is as follows. In Section II, we explain the system model for the reconciliation of the secret key and calculate the capacity for each level for a given input distribution. Section III briefly reviews the basic concepts of MET-LDPC codes and the extension of density evolution to these codes. Then, in Section IV, we introduce the concept of G-EXIT charts for MET-LDPC codes. Simulation results are presented in Section V, where we show how to use a G-EXIT chart for designing MET-LDPC codes. Finally, Section VI concludes the paper.

## II System Model

### II-A Source Coding with side information and equivalent channel coding model

Information reconciliation is a method by which two parties that each possess a sequence of numbers agree on a *common* sequence of bits by exchanging one or more messages. Mathematically speaking, in CV-QKD the two sequences of numbers are joint instances of a bivariate random variable that follows a bivariate normal distribution. Physically, these sequences are obtained by one party generating coherent states in the quadrature phase space and the other party measuring them. In other words, in QKD the two parties share correlated random variables and wish to agree on a common bit sequence. However, imperfect correlations, introduced by the inherent shot noise of coherent states and by noise in the quantum channel and the receiver, give rise to discrepancies between the two sequences of numbers, which have to be corrected by exchanging additional information.

In reverse reconciliation, which is the focus of this paper, we assume that Alice reconciles her values to match Bob’s. The reconciliation process can be fully described as a conventional information theory problem. This problem was first addressed in [14] as source coding with side information: let Alice and Bob have access to two correlated information sources which follow a joint probability distribution. The two parties wish to distill a common binary string by exchanging information as shown in Fig. 1. In this configuration Bob sends to Alice a compressed version of his quantized symbols, knowing that Alice has access to the side information. Based on the results presented in [14, 15], the conditional entropy is (asymptotically) the minimum number of bits required for the reconciliation. Furthermore, it is convenient to generate the syndromes using codes with performance close to the Shannon capacity [15, 16]. In this case the parity check matrix of the error correction code can be used to generate the syndrome for the reconciliation problem. Thus, an equivalent channel coding problem can be solved instead of the above mentioned source coding with side information problem. In the following we use an equivalent MLC-MSD scheme in order to design a lossless encoder-decoder block.

### II-B Slice Reconciliation based on MLC-MSD

Slice reconciliation using error correction codes can be described in two steps. The first step, called quantization, transforms the continuous Gaussian source into an -bit source. There is an inherent information loss due to the discretization of the source. The second step can be modeled with the channel coding scheme for MLC-MSD. In reverse reconciliation, Bob sends an encoding (compressed version) of to Alice, such that she can infer with high probability using her own source as side information. In MLC-MSD each of these levels is encoded independently at a rate corresponding to the channel coding problem, and the related compression rate for the source coding problem for each level would be . The block diagram of the MLC-MSD scheme for reverse reconciliation is depicted in Fig. 2.
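The quantization step can be illustrated with a simple uniform quantizer; the bit width and clipping range below are illustrative choices, not parameters from the paper.

```python
# Minimal sketch of the quantization step: map a continuous Gaussian
# sample y to one of 2**m labels with a uniform quantizer clipped to
# [-y_max, y_max]. m and y_max are illustrative, not from the paper.
def quantize(y, m=4, y_max=4.0):
    """Return an integer label in [0, 2**m - 1] for the sample y."""
    M = 2 ** m
    cell = 2 * y_max / M              # width of one quantization cell
    idx = int((y + y_max) / cell)     # cell index before clipping
    return min(max(idx, 0), M - 1)    # clip out-of-range samples
```

The label bits of `quantize(y)` are then split into the individual levels of the MLC-MSD scheme.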

In the following we calculate the maximum capacity of the individual levels. Let us define the efficiency as follows

(1)

where is the mutual information and is the net shared information between the two parties [17], where is the entropy function.

Let us consider the ideal situation where the codes are capacity-achieving with individual rates . Thus we have

(2)

where denotes the deficiency when only the quantization part is considered. Note that and .

In general we can write

(3)

where denotes the individual code rates. Thus, the practical efficiency of the reconciliation depends on the ability to design very good quantizers and very efficient error correction codes at rates close to . This efficiency belongs to the equivalent channel coding problem. On the other hand, in [14] a lower bound on the correction rate is given by .

We will now apply the MLC-MSD approach to design capacity-achieving codes. Consider a modulation scheme with , , signal points in a -dimensional signal space, where the signal points are taken from the signal set with . Each signal point has an equivalent binary representation defined by a (bijective) mapping of binary address vectors to signal points . Two well-known mappings are the natural binary and the Gray mapping. As an example, for amplitude-shift keying (ASK) modulation with , in a one-dimensional signal space (), the signal points are taken from . Any subset of the signal set can be labeled by a unique path. At partitioning level , each subset is labeled by a unique path with the following elements:

(4)

where . For more details about set partitioning and mapping see [7]. For example, for -ASK modulation with binary partitioning we have:
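The labeling and set partitioning described above can be sketched programmatically. The point spacing and the LSB-first natural binary labeling below are assumptions for illustration, not the paper's exact convention.

```python
# Hypothetical sketch: natural binary labeling of a 2^m-ASK constellation
# and the subsets induced by fixing the least significant label bits.
# Names (ask_points, label, subset) are illustrative, not from the paper.

def ask_points(m):
    """Signal points of 2^m-ASK: odd integers symmetric around 0."""
    M = 2 ** m
    return [2 * i - (M - 1) for i in range(M)]

def label(index, m):
    """Natural binary label (LSB first) of the point with the given index."""
    return [(index >> level) & 1 for level in range(m)]

def subset(points, m, fixed_bits):
    """Points whose lowest label bits equal fixed_bits (set partitioning)."""
    k = len(fixed_bits)
    return [p for i, p in enumerate(points)
            if label(i, m)[:k] == list(fixed_bits)]

m = 2                       # 4-ASK
pts = ask_points(m)         # [-3, -1, 1, 3]
# Level-1 partition: fixing the lowest bit splits the set in two
even = subset(pts, m, [0])  # indices 0, 2 -> [-3, 1]
odd = subset(pts, m, [1])   # indices 1, 3 -> [-1, 3]
```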

### II-C Capacity of Multilevel Coding

The derivation of the capacity of the multilevel coding scheme is presented in Theorem 1 of [7]. Based on that result, the capacity of a -ary digital modulation scheme is equal to the sum of the capacities of the equivalent channels of a multilevel coding scheme,

(5)

This capacity can be approached via multilevel encoding and multistage decoding if and only if the individual rates are chosen to be equal to the capacities of the equivalent channels, i.e. . As presented in [7], for given and fixed a-priori probabilities of the signal points, the capacity of the equivalent channel is given by the respective mutual information ,

(6)

where denotes the capacity when using (only) the (sub)set for given and fixed a-priori probabilities . For instance, according to (6), for -ASK we have

Using (6), the problem of finding the individual capacities is simplified to finding the capacity of the additive white Gaussian noise (AWGN) channel with discrete input variables . In general, for a set of discrete input variables from the input set and a continuous output variable , where has a Gaussian distribution with mean zero and variance , the average mutual information between and is given by

(7)

where in (7), for , denotes the discrete input probabilities and denotes the conditional channel probability density function (PDF). The unconditional PDF for outcome is given by

(8)

When working with a Gaussian distribution, (7) can be simplified even more by replacing

(9)

thus we have

which is the closed-form formula for the individual levels in an AWGN channel with discretized input.
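The mutual information of a discrete-input AWGN channel can also be cross-checked by direct numerical quadrature. The sketch below is an illustration; the function name and integration parameters are assumptions, not the paper's implementation.

```python
# Hedged numerical sketch: I(X;Y) in bits for Y = X + N with
# N ~ Gaussian(0, sigma^2) and a discrete input constellation,
# evaluated by midpoint-rule integration over the output.
import math

def awgn_mutual_information(points, probs, sigma, y_steps=4000):
    span = max(abs(p) for p in points) + 8 * sigma
    dy = 2 * span / y_steps

    def pdf(y, x):
        # conditional channel PDF p(y|x)
        return (math.exp(-(y - x) ** 2 / (2 * sigma ** 2))
                / (sigma * math.sqrt(2 * math.pi)))

    mi = 0.0
    for k in range(y_steps):
        y = -span + (k + 0.5) * dy
        p_y = sum(q * pdf(y, x) for x, q in zip(points, probs))  # total prob.
        for x, q in zip(points, probs):
            c = pdf(y, x)
            if c > 0:
                mi += q * c * math.log2(c / p_y) * dy
    return mi

# equiprobable binary input: a little under 0.5 bits at sigma = 1.0
mi_bpsk = awgn_mutual_information([-1.0, 1.0], [0.5, 0.5], 1.0)
```

The same routine with the per-level subsets and a-priori probabilities yields the individual level capacities of (6).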

### II-D Individual and correlated rates

For uniform input distributions all the are equal, while for the (discrete) Gaussian distributed inputs

(10)

where

(11)

normalizes the distribution. The parameter governs the trade-off between the average power of the signal points and the entropy . For = 0, we have a uniform distribution, whereas for , only the two signal points closest to the origin remain ( even). As an example, we demonstrate the simulation results for the slice capacities of -ASK modulation, when the channel input, as presented in Figure 3, is set to be (discrete) Gaussian distributed with , which fixes the entropy .
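A discrete-Gaussian (Maxwell-Boltzmann-type) input distribution of this kind can be sketched as follows; the shaping-parameter name `nu` and the 8-ASK point set are illustrative assumptions.

```python
# Hedged sketch: probabilities proportional to exp(-nu * x^2) over the
# signal points. nu = 0 gives the uniform distribution; increasing nu
# concentrates the mass on the points closest to the origin.
import math

def mb_distribution(points, nu):
    w = [math.exp(-nu * x * x) for x in points]
    z = sum(w)                      # normalization constant
    return [wi / z for wi in w]

def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

pts = [-7, -5, -3, -1, 1, 3, 5, 7]        # 8-ASK signal points
uniform = mb_distribution(pts, 0.0)       # parameter 0: uniform, 3 bits
shaped = mb_distribution(pts, 0.05)       # mass shifts toward the origin
```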

In Figure 4 the individual capacities for the input distribution above are shown versus the SNR, and the sum of the capacities is compared with the Shannon capacity of the AWGN channel.

In the MLC-MSD scheme, each level uses a different encoder and transmits its compressed data separately. We can assume that the individual levels are equiprobable binary sources, but there exists some correlation between the levels. For example, in Figure 5 a -level quantizer is depicted for an input with Gaussian distribution; the equivalent binary outputs are also presented under the curve, where rows represent the output of the levels and columns represent the binary mapping. As depicted, each row can be considered as an equiprobable binary source with elements zero and one. To show the correlation between the levels, assume that the three least significant bits are known and equal to (denoted in red); then, for the most significant bit, the probability of being one is not equal to the probability of being zero, as denoted in Figure 5.

If we consider the correlation between the levels based on the input discrete Gaussian distribution then the individual rates for each level can be reformulated as follows:

(12)

where . From (6) and (12) it is clear that is equal to the source coding rate given by the Slepian-Wolf theorem [14]. Also, using the chain rule for entropy, a recursive calculation can be used to find the value of the conditional entropy as follows

For the same discrete Gaussian distribution presented above, we calculated the rates by considering the correlation between the individual sub-levels. The results are presented in Figure 6.
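The recursive chain-rule computation can be sketched numerically: given a distribution over the 2^m quantizer labels, the per-level conditional entropies follow from differences of joint entropies. The LSB-first label convention is an assumption for illustration.

```python
# Hedged sketch: per-level rates H(B_i | B_0..B_{i-1}) of the label bits
# of a quantizer output, computed via the chain rule for entropy.
import math

def level_rates(probs, m):
    def joint_entropy(bits):
        # entropy of the first `bits` label bits of the point index
        acc = {}
        for idx, p in enumerate(probs):
            key = idx & ((1 << bits) - 1)
            acc[key] = acc.get(key, 0.0) + p
        return -sum(p * math.log2(p) for p in acc.values() if p > 0)

    h = [joint_entropy(i) for i in range(m + 1)]    # h[0] = 0
    # chain rule: H(B_i | B_0..B_{i-1}) = H(B_0..B_i) - H(B_0..B_{i-1})
    return [h[i + 1] - h[i] for i in range(m)]

rates_uniform = level_rates([1 / 8] * 8, 3)   # every level carries 1 bit
```

By construction the rates sum to the total entropy of the label, consistent with the Slepian-Wolf bound.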

## III MET-LDPC codes

### III-A MET-LDPC code ensemble

Multi-edge-type LDPC (MET-LDPC) codes are a generalization of the concept of irregular LDPC codes. These codes provide improvements in performance and complexity by giving more flexibility over different edge types. In this structure each node is characterized by the number of connections (sockets) to edges of each edge type. It is noteworthy that an irregular LDPC code is a single-edge-type LDPC (SET-LDPC) code. Using MET-LDPC codes we are able to design capacity-achieving codes without using very high-degree variable nodes, which allows a less complex implementation. They also exploit the advantage of degree-one variable nodes, which are very useful for designing LDPC codes at low rate and low SNR [6]. It is important to recall that in the case of a SET-LDPC code the minimum variable node degree is .

A graph ensemble is specified through two multi-variable polynomials, one associated with the variable nodes and the other with the check nodes. We denote these polynomials by

(13)

respectively, where in (13) we define the vectors and the coefficients and as follows. Let denote the number of edge types and denote the number of different channels over which the code-word bits can be transmitted. To represent the structure of the graph we introduce the following node-perspective multi-variable polynomial representation. We interpret degrees as exponents. Let be a multi-edge degree and let denote (vector) variables. We write for . Similarly, let be a received degree and let denote variables corresponding to received distributions. By we mean . Typically, the vectors will have one entry set to and the rest set to . Finally, the coefficients and are non-negative reals corresponding to the fraction of variable nodes of type and the fraction of constraint nodes of type in the graph.

For example, let be the length of the code-word; then for each constraint node degree type the quantity is the number of constraint nodes of type in the graph. Similarly, the quantity is the number of variable nodes of type in the graph. We store this information in a table to describe the structure of the graph. For instance, a full description of a rate MET-LDPC code ensemble with the following structure is presented in Table I and Fig. 7.


The edge-perspective degree distribution can be described as a vector of multi-variable polynomials, for the variable nodes and check nodes, respectively,

(14)

where

and denotes a vector of all 1’s, with the length determined by context. The coefficients of and are constrained to ensure that the number of sockets of each type is the same on both sides (variable and check) of the graph. This gives rise to linear conditions on the coefficients of and as follows

Finally, the nominal code rate is given by
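The bookkeeping behind such an ensemble table can be sketched as follows. The two-edge-type degree structure below is a made-up balanced example, not one of the paper's codes; the nominal rate is the total variable-node fraction minus the total check-node fraction, and the socket counts must agree per edge type.

```python
# Node-perspective description of a hypothetical MET ensemble with two
# edge types: each entry is (node fraction, degrees per edge type).
variable_types = [(0.50, (2, 0)), (0.25, (4, 0)), (0.25, (0, 1))]
check_types = [(0.50, (4, 0)), (0.25, (0, 1))]

def sockets(types, n_edge_types):
    """Socket count per edge type, normalized per code-word bit."""
    return [sum(frac * deg[t] for frac, deg in types)
            for t in range(n_edge_types)]

def nominal_rate(variable_types, check_types):
    """Nominal rate: variable-node mass minus check-node mass."""
    return (sum(f for f, _ in variable_types)
            - sum(f for f, _ in check_types))
```

For this hypothetical structure both sides expose the same sockets per edge type, and the nominal rate evaluates to 0.25.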

## IV Generalized Extrinsic Information Transfer Chart

### IV-A Belief Propagation and asymptotic analysis tools

Density evolution (DE) is the main tool for analyzing the average asymptotic behavior of belief propagation (BP) decoders for MET-LDPC code ensembles in the limit of infinite block length and an infinite number of iterations. The DE analysis is in general simplified by the all-one codeword assumption, the channel symmetry, and by working in the log-likelihood ratio (LLR) domain [18, 19, 20]. Let us denote by vectors of symmetric densities, where is the density of the messages carried on edge type . Also assume that denotes the vector of messages passed from the variable nodes to the check nodes in iteration , assuming that . Similarly, let be the received distributions. Then the following recursion represents the density evolution for MET-LDPC codes:

(15)

where and are presented in (III-A). A detailed calculation of the density evolution for MET-LDPC codes can be found in Section II-B of [18].
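For intuition, the DE recursion collapses to a scalar on the binary erasure channel, where every density is characterized by a single erasure probability. A hedged single-edge-type sketch for the regular (3,6) ensemble (an illustration, not one of the paper's MET codes):

```python
def bec_de_converges(eps, dv=3, dc=6, iters=500):
    """True if BP density evolution drives the erasure probability to 0."""
    x = eps
    for _ in range(iters):
        # check-to-variable: erased unless all dc-1 other inputs are known
        y = 1 - (1 - x) ** (dc - 1)
        # variable-to-check: erased iff channel and all dv-1 inputs erased
        x = eps * y ** (dv - 1)
    return x < 1e-9

# The (3,6) BP threshold on the BEC is approximately eps = 0.4294.
```

The MET recursion (15) has the same shape, but tracks one such quantity (in general a full density) per edge type.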

### IV-B Generalized EXIT function, G-EXIT curve and dual G-EXIT curve

The original idea behind the G-EXIT chart method is to demonstrate the decoding process using a suitable one-dimensional representation of the densities [21]. The G-EXIT chart is visualized on the basis of two G-EXIT curves that represent the action of the different types of nodes. Considering the fact that for MET-LDPC codes the DE tracks a vector of message densities, as presented in (III-A) and (15), the G-EXIT chart for MET-LDPC codes also expands to a vector of components. This makes the G-EXIT analysis tools impractical when the number of edge types is more than three (). Here we recover a one-dimensional G-EXIT chart by applying appropriate convolutions at the variable nodes and constraint nodes before applying the G-EXIT projection to the densities.

Based on the results of [13], given two families of -densities and parameterized by , the G-EXIT function can be represented as follows:

(16)

and the G-EXIT kernel is defined as

(17)

Consequently, the G-EXIT curve is given in parametric form by , where

According to (15), the DE provides two vectors with components of densities, for the variable nodes and check nodes, respectively. In order to plot the one-dimensional G-EXIT chart, the densities corresponding to each edge type are combined into a single family of densities based on (13). Thus, for MET-LDPC codes with edge types, the combinations of densities for the variable nodes and check nodes are

(18)

(19)

where denotes , denotes , and denotes the convolution at the variable nodes. Similarly, denotes , and denotes the convolution at the check nodes.

It is proven that for a binary linear code and transmission over a binary memoryless symmetric (BMS) channel, the G-EXIT and dual G-EXIT curves have an area equal to , the rate of the code [13].
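On the BEC, where the G-EXIT and EXIT kernels coincide, the area statement can be checked numerically with the standard EXIT functions of a regular single-edge-type ensemble. This is a hedged illustration with textbook formulas, not one of the paper's MET codes.

```python
# Area check on the BEC for the regular (3,6) ensemble: the check-node
# EXIT curve x**(dc-1) has area 1/dc, the variable-node curve
# 1 - eps*(1-x)**(dv-1) has area 1 - eps/dv, and the matching condition
# reduces to eps < dv/dc = 1 - R.

def area(f, steps=100000):
    """Midpoint-rule integral of f over [0, 1]."""
    return sum(f((k + 0.5) / steps) for k in range(steps)) / steps

dv, dc, eps = 3, 6, 0.3                     # regular (3,6) on BEC(0.3)
a_check = area(lambda x: x ** (dc - 1))     # should be close to 1/6
a_var = area(lambda x: 1 - eps * (1 - x) ** (dv - 1))  # 1 - eps/dv = 0.9
```

Since `eps = 0.3` is below `1 - R = 0.5`, the variable-node area exceeds the complement of the check-node area, i.e. the curves leave an open decoding tunnel.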

## V Implications of the G-EXIT chart for code design

### V-A Examples of G-EXIT charts for MET-LDPC codes

In this section we present some examples of MET-LDPC codes and find the thresholds of the codes using the G-EXIT chart method. We start with the rate MET-LDPC code in Table I. The Shannon limit for rate is equal to an SNR of dB (), and our proposed code has a threshold equal to dB (), which is just dB away from capacity. With being the energy per bit and being the energy of the noise, the relation between , the SNR and for an AWGN channel with binary transmission is

Also, using (III-A) the density evolution vector of multi-variable polynomials can be written as

where, by replacing the vectors of variables and with the vectors of densities , and for the channel, check nodes and variable nodes, respectively, we obtain the MET-DE. Figure 8 shows the convergence behavior of each edge type for the above mentioned code.

It is noteworthy that there is a single-edge-type variable node in this MET-LDPC code (cf. Table I). This node applies a fixed channel density at each iteration of the DE. The corresponding G-EXIT curve for this edge is plotted in Fig. 8(c); it is constructed from two completely matching vertical lines at a specific -value corresponding to the entropy of the channel .

Finally, to see the convergence of a MET-LDPC code in a single plot, we use the overall combination of the edges, combining appropriately at the check nodes and variable nodes according to (18)-(19). The results are presented in Figure 9.

As a second example, Figure 10 demonstrates the G-EXIT chart for a rate MET-LDPC code. The node perspective degree structure of this code is presented in Table II and the polynomial form for this code is


The code has edge types, and the threshold of this code in an AWGN channel using DE is equal to dB (). The Shannon limit is equal to dB (), so this code is just dB away from capacity. The corresponding G-EXIT curve for this code is plotted in Fig. 10.

As a third example, Table III shows the degree structure of a rate MET-LDPC code. The threshold of this code is equal to dB (). The Shannon limit is equal to dB (), so this code is just dB away from capacity. The G-EXIT chart for this code is plotted in Fig. 11.


As a fourth example, Figure 12 shows the G-EXIT chart for a rate 0.5 MET-LDPC code presented in Table IV. This code was first published in [6] and has a threshold equal to dB ().

### V-B Convergence behavior using the G-EXIT chart

Now we use the graphical representation to demonstrate the convergence behavior of the code structure, which we exemplify for the rate MET-LDPC code (cf. Table IV). As depicted in Fig. 12, the two curves are matched to each other and the threshold of this code is dB. In Fig. 13 the G-EXIT charts are plotted for this code for two different values of , namely . The convergence behavior of the code can be read off from the status of the G-EXIT curves.

For smaller than the threshold, the code is not able to correct the errors. In this case the two curves cross each other in the G-EXIT chart, see Fig. 13(a). For dB, a value larger than the threshold, the corresponding G-EXIT chart is plotted in Fig. 13(b). The extra gap shows that the corresponding MET-LDPC code is still able to correct the errors even on a worse channel.

### V-C Gaussian assumption and complexity reduction

For SET-LDPC codes on the BI-AWGN channel, the well-known one-dimensional Gaussian approximation can be used to determine the convergence threshold [16, 12, 22, 23, 24, 25, 26]. If the check node degrees are small and the variable node degrees are large enough, the PDFs of both the variable and the check node messages can be approximated by Gaussian distributions for all input and intermediate densities. The Gaussian PDF is thereby determined by its mean alone, so it is enough to trace only a single parameter during the BP decoding algorithm.
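This single-parameter recursion can be sketched for a regular (3,6) single-edge-type code on the BI-AWGN channel, in the style of the Chung et al. Gaussian approximation. The symmetry condition (variance equal to twice the mean) is assumed, and all numerical parameters below are illustrative.

```python
# Hedged sketch: one-dimensional Gaussian-approximation DE that tracks
# only the mean of the variable-to-check LLR density.
import math

def phi(m, steps=300):
    """E[tanh(L/2)] for L ~ Gaussian(mean m, variance 2m), by quadrature."""
    if m <= 0:
        return 0.0
    s = math.sqrt(2 * m)
    lo = m - 8 * s
    du = 16 * s / steps
    acc = 0.0
    for k in range(steps):
        u = lo + (k + 0.5) * du
        acc += math.tanh(u / 2) * math.exp(-(u - m) ** 2 / (4 * m)) * du
    return acc / math.sqrt(4 * math.pi * m)

def inv_phi(t):
    """Invert phi by bisection (phi is increasing in the mean)."""
    lo, hi = 1e-6, 100.0
    for _ in range(40):
        mid = (lo + hi) / 2
        if phi(mid) < t:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def ga_converges(sigma, dv=3, dc=6, iters=60):
    m_ch = 2 / sigma ** 2          # channel LLR mean on the BI-AWGN channel
    m_v = m_ch
    for _ in range(iters):
        if m_v > 50:               # mean diverges: successful decoding
            return True
        m_c = inv_phi(phi(m_v) ** (dc - 1))   # check node update
        m_v = m_ch + (dv - 1) * m_c           # variable node update
    return m_v > 50
```

Here `ga_converges(0.8)` holds while `ga_converges(1.0)` does not, consistent with the (3,6) threshold near sigma ≈ 0.88.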

For MET-LDPC this Gaussian approximation is not valid [18]. In this part we introduce a new analysis tool for MET-LDPC codes on AWGN channels which is significantly more accurate than the conventional Gaussian approximation. In our proposed method we assume a Gaussian distribution only for messages from variable nodes to check nodes. In comparison to other existing methods which assume Gaussian approximation for both check nodes and variable nodes [18], our method calculates the check node PDFs based on check node operations. To show the accuracy of this method we combined the G-EXIT operator to our approximation method and found the threshold and convergence behavior of the codes. Simulation results show that our proposed method provides an accurate estimate of the convergence behavior and the threshold of the code.
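The exact check-node operation that replaces the Gaussian fit is, for two incoming LLRs, the standard tanh rule of BP decoding (a generic sketch, not the paper's implementation):

```python
# Standard BP check node update for two incoming LLRs (tanh rule).
import math

def check_node_llr(l1, l2):
    """Exact degree-3 check node update: 2 atanh(tanh(l1/2) tanh(l2/2))."""
    return 2 * math.atanh(math.tanh(l1 / 2) * math.tanh(l2 / 2))
```

Because the output magnitude is compressed below min(|l1|, |l2|) by the nonlinearity, the output density of a check node is in general not Gaussian even for Gaussian inputs, which is what the hybrid method accounts for.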

For better understanding we plotted the evolution of the intermediate densities in the DE algorithm for the rate MET-LDPC code. As depicted in Fig. 14, the Gaussian approximation is not valid for the check node output densities, but at the variable node outputs the intermediate densities can be described by symmetric Gaussian distributions. Then, in the process of the G-EXIT chart, we can gradually change the mean of the Gaussian distribution from to .
