I Introduction
Low-density lattice codes (LDLC) are an efficiently decodable subclass of lattice codes that approach the capacity of the additive white Gaussian noise (AWGN) channel [1]. Analogous to low-density parity-check (LDPC) codes, the inverse of the LDLC generator matrix is sparse, allowing message-passing decoding. LDLC are a natural fit for wireless communications, since both the codewords and the channel are real-valued. Recently, LDLC have been applied to many different communication systems, e.g., the multiple-access relay channel [2], half-duplex relay channels [3], the full-duplex relay channel [4], and secure communications [5]. However, the basic disadvantage of LDLC is their high decoding complexity, which limits their practical application. The commonly used LDLC are relatively short, e.g., in [2, 3, 4]. For longer code lengths, the authors in [6] suggest using LDPC lattice codes [7] instead of LDLC, even though LDLC have better performance than LDPC lattice codes.
The message-passing decoder of LDLC is slow because both the variable and check nodes need to process continuous functions. The bottleneck occurs in the variable nodes, which have to compute the product of $d-1$ periodic continuous functions, where $d$ is the degree of the LDLC. This complicated operation dramatically slows down the whole decoding process. To reduce the decoding complexity, Gaussian approximation (GA) based decoders have been proposed in [8, 9, 10]. The basic idea is to approximate the messages exchanged between variable and check nodes by Gaussian functions. The operation at each variable node then reduces to computing the product of periodic Gaussian functions, ending up with a Gaussian mixture. However, it is costly to approximate a Gaussian mixture by a single Gaussian function. Current approaches keep only the dominating Gaussians in the mixture, found by sorting [9] or exhaustive search [10]. Even then, the computational complexity remains high, e.g., in [10]. Since the degree is commonly set to $d=7$ [1], current GA-based LDLC decoders are slow for long codes. The other open question is that the convergence of GA-based LDLC decoders has not yet been proved.
The main contribution of this paper is twofold. First, we prove that in the high signal-to-noise ratio (SNR) region, all GA-based LDLC decoders converge sublinearly or faster. This result verifies the goodness of the Gaussian approximation in LDLC decoding. Second, we propose a novel GA-based LDLC decoder which requires only $O(d)$ operations at each variable node. The key idea is to make use of the tail effect of Gaussian functions, i.e., if two Gaussian functions have very distant means, then their product is approximately zero. This fact allows us to approximate the Gaussian mixture by two Gaussian functions, without the sorting of [9] or the exhaustive search of [10]. Simulation results confirm that the performance of the proposed decoder is the same as that of the best known one in [10], given the same number of iterations. Note that the lower decoding complexity enables us to run more iterations to further improve the performance.
Section II presents the system model. Section III describes the convergence of GAbased LDLC decoders. Section IV demonstrates the proposed decoder. Section V shows the simulation results and comparisons with other decoders. Section VI sets out the theoretical and practical conclusions. The Appendix contains the proofs of the theorems.
II System Model
II-A LDLC Encoding
In an $n$-dimensional lattice code, the codewords are defined by

$\mathbf{x} = \mathbf{G}\mathbf{b}$, (1)

where $\mathbf{b} \in \mathbb{Z}^n$ is an information integer vector, $\mathbf{G} \in \mathbb{R}^{n \times n}$ is a real-valued generator matrix, and $\mathbf{x} \in \mathbb{R}^n$ is a real-valued codeword. An LDLC is defined by a sparse parity-check matrix $\mathbf{H}$, which is related to the generator matrix by

$\mathbf{H} = \mathbf{G}^{-1}$. (2)
The sparsity of $\mathbf{H}$ enables the use of a message-passing algorithm to decode LDLC.
In the original construction of LDLC [1], every row and column in $\mathbf{H}$ has the same $d$ nonzero values, except for random signs and order. These values are referred to as the generating sequence $\{h_1, \ldots, h_d\}$, which is chosen in [1] as

$\{h_1, \ldots, h_d\} = \left\{1, \tfrac{1}{\sqrt{d}}, \ldots, \tfrac{1}{\sqrt{d}}\right\}$. (3)

Note that there are only two distinct values, $1$ and $1/\sqrt{d}$. The value of $d$ is referred to as the degree of the LDLC.
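As an illustration, the row/column structure described above can be sketched in a few lines. This toy construction (the function name and the use of distinct cyclic diagonals are ours) only mimics the property that every row and column contains the generating sequence with random signs; the actual Latin-square construction of [1] imposes further constraints.

```python
import random

def make_parity_check(n, h_seq, seed=0):
    """Toy LDLC-style sparse parity-check matrix: place each value of the
    generating sequence h_seq on its own cyclic diagonal with a random sign,
    so every row and every column contains the same d magnitudes."""
    rng = random.Random(seed)
    d = len(h_seq)
    shifts = rng.sample(range(n), d)  # distinct shifts -> no cell collisions
    H = [[0.0] * n for _ in range(n)]
    for h, s in zip(h_seq, shifts):
        for i in range(n):
            H[i][(i + s) % n] = rng.choice((-1.0, 1.0)) * h
    return H
```

Because the $d$ shifts are distinct, each row and each column receives each magnitude exactly once, which is the sparsity pattern the message-passing decoder relies on.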
When an LDLC is used over an additive white Gaussian noise (AWGN) channel, we have

$\mathbf{y} = \mathbf{x} + \mathbf{w}$, (4)

where $\mathbf{w} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I}_n)$ is a noise vector, $\sigma^2$ is the noise variance in each dimension, and $\mathbf{I}_n$ is the $n \times n$ identity matrix. The power of each LDLC codeword may be very large. Therefore, a shaping algorithm is required by LDLC, in order to make $\mathbf{x}$ distributed over a bounded region, the so-called shaping region. Various shaping methods have been proposed in the literature [11, 12, 13]. In this work, we consider the hypercube shaping in [11], where the shaping region is a hypercube centered at the origin. Note that our results are directly applicable to any shaping method.

II-B LDLC Decoding
Since $\mathbf{H}$ is sparse, LDLC can be decoded by a message-passing algorithm [1]. The process takes four steps:

Initialization: The $k$-th variable node, denoted as $v_k$, sends a single Gaussian pdf to its neighboring check nodes:

$f_k^{(0)}(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-y_k)^2}{2\sigma^2}}$ (5)

for $k = 1, \ldots, n$, where $y_k$ is the $k$-th element in $\mathbf{y}$.

Check-to-variable passing: The check node, denoted as $c_m$, sends a message via its $j$-th edge. Without loss of generality, we assume that $c_m$ receives $f_l(x)$ from its $l$-th edge, for $l = 1, \ldots, d$. Let $h_l$ be the label of the $l$-th edge. The computation of the outgoing message takes three steps:

convolution: all $f_l(x)$, except $f_j(x)$, are stretched by their labels and convolved:

$\tilde{p}_j(x) = f_1\!\left(\tfrac{x}{h_1}\right) \circledast \cdots \circledast f_{j-1}\!\left(\tfrac{x}{h_{j-1}}\right) \circledast f_{j+1}\!\left(\tfrac{x}{h_{j+1}}\right) \circledast \cdots \circledast f_d\!\left(\tfrac{x}{h_d}\right)$ (6)

stretching: the function is stretched by $-1/h_j$:

$p_j(x) = \tilde{p}_j(-h_j x)$ (7)

periodic extension: $p_j(x)$ is extended to a periodic function with period $1/h_j$:

$Q_j(x) = \sum_{i \in \mathbb{Z}} p_j\!\left(x - \tfrac{i}{h_j}\right)$ (8)


Variable-to-check passing: The variable node $v_k$ sends a message via its $j$-th edge. Similarly, we assume that $v_k$ receives $Q_l(x)$ from its $l$-th edge, for $l = 1, \ldots, d$. The computation takes two steps:

product: all $Q_l(x)$, except $Q_j(x)$, are multiplied together with the channel pdf:

$\tilde{f}_j(x) = f_k^{(0)}(x) \prod_{l \neq j} Q_l(x)$ (9)

normalization: $\tilde{f}_j(x)$ is normalized as:

$f_j(x) = \frac{\tilde{f}_j(x)}{\int_{-\infty}^{\infty} \tilde{f}_j(x)\, dx}$ (10)
The check-to-variable and variable-to-check passings are repeated until the desired number of iterations is reached.


Final decision: The variable node $v_k$ computes the product of all received messages:

$\tilde{f}_k^{\mathrm{final}}(x) = f_k^{(0)}(x) \prod_{l=1}^{d} Q_l(x)$ (11)

The estimates of $\mathbf{x}$ in (4) and $\mathbf{b}$ in (1) are obtained by

$\hat{x}_k = \arg\max_x \tilde{f}_k^{\mathrm{final}}(x)$ (12)

$\hat{\mathbf{b}} = \lfloor \mathbf{H}\hat{\mathbf{x}} \rceil$ (13)

The operation $\lfloor \cdot \rceil$ rounds a number to the closest integer.
According to (6)-(10), the messages exchanged between variable and check nodes are continuous functions. In Sommer's implementation [1], each continuous message is quantized and represented by a vector of sampled values. The convolution phase at each check node, as well as the product phase at each variable node, has very high memory and computational requirements. This limits the application of the original decoder to LDLC of relatively small dimension.
II-C Gaussian-Approximation Based LDLC Decoding
Simplified decoding algorithms have been proposed in [8, 9, 10]. The key idea is to approximate the variable message $f_j(x)$ in (10) by a single Gaussian pdf:

$f_j(x) \approx \mathcal{N}(x; m_j, v_j)$, (14)

where $m_j$ and $v_j$ are the mean and variance of $f_j(x)$.
As a consequence, the check message $Q_j(x)$ in (8) reduces to a periodic Gaussian pdf:

$Q_j(x) \approx \sum_{i \in \mathbb{Z}} \mathcal{N}\!\left(x;\, \mu_j + \tfrac{i}{h_j},\, V_j\right)$, (15)

where all component Gaussian pdfs in $Q_j(x)$ share the same variance $V_j$ and have means $\mu_j + i/h_j$, spaced by the period $1/h_j$.
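Under this approximation, the check-node update of (6)-(8) reduces to simple arithmetic on means and variances. The sketch below (function name and argument layout are ours; the closed forms are the standard Gaussian-message expressions for the check constraint, not quoted from [1]) returns the mean, variance, and replication period of the message on the $j$-th edge:

```python
def check_to_variable(h, means, variances, j, b=0):
    """Gaussian-approximated check-node update for the constraint
    sum_l h[l]*x_l = b (b an integer): convolving the other incoming
    Gaussians and stretching by -1/h[j] gives a single Gaussian, which the
    periodic extension then replicates with period 1/|h[j]|."""
    others = [l for l in range(len(h)) if l != j]
    mean = (b - sum(h[l] * means[l] for l in others)) / h[j]
    var = sum(h[l] ** 2 * variances[l] for l in others) / h[j] ** 2
    period = 1.0 / abs(h[j])
    return mean, var, period
```

Note how the convolution only adds the (label-scaled) variances, which is why GA decoders need to exchange just two numbers per edge.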
From (14) and (15), we see that both variable and check nodes only need to pass two values: the mean and variance of a Gaussian function. This greatly reduces the memory required for the messages. However, it is still costly to perform the Gaussian approximation in (14). The problem lies in the computation of the unnormalized variable message $\tilde{f}_j(x)$, which now reduces to

$\tilde{f}_j(x) = f_k^{(0)}(x) \prod_{l \neq j} \sum_{i \in \mathbb{Z}} \mathcal{N}\!\left(x;\, \mu_l + \tfrac{i}{h_l},\, V_l\right)$, (16)

which is a Gaussian mixture with infinitely many components.
To simplify (16), the authors in [10] replace each periodic Gaussian by only two Gaussians^{1} with means close to $y_k$:

$\tilde{f}_j(x) \approx f_k^{(0)}(x) \prod_{l \neq j} \left[\mathcal{N}_l^{L}(x) + \mathcal{N}_l^{R}(x)\right]$, (17)

where $\mathcal{N}_l^{L}(x)$ and $\mathcal{N}_l^{R}(x)$ are the two Gaussian pdfs in the $l$-th periodic Gaussian with means closest to $y_k$. Recalling that the product of Gaussian functions is still a single Gaussian, the simplified $\tilde{f}_j(x)$ can be written as a sum of $2^{d-1}$ Gaussian pdfs. This means that in each iteration, the computational complexity at each variable node is proportional to $2^{d-1}$. Note that with the value $d=7$ used in [1], this complexity is relatively large.

^{1} In [10], the case using three Gaussians is also presented and shown to provide a marginal performance improvement at a much higher complexity.
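The two-Gaussian replacement only requires locating the two replicas of each periodic Gaussian that bracket the observation $y_k$. A minimal sketch of this selection, with our own variable names:

```python
import math

def two_nearest_replicas(mu, period, y):
    """For a periodic Gaussian whose replica means are mu + i*period,
    return the two replica means closest to y (closest first), as in the
    two-Gaussian simplification of [10]."""
    i = math.floor((y - mu) / period)
    left, right = mu + i * period, mu + (i + 1) * period
    return (left, right) if abs(y - left) <= abs(y - right) else (right, left)
```

Each periodic check message thus contributes exactly two candidate Gaussians to the product in (17), regardless of how many replicas the periodic extension formally contains.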
In summary, although the Gaussian-approximation (GA) based LDLC decoders use less memory than the original decoder described in [1], they are still too complex to be used for long codes. In Section IV we will propose a much faster decoder (still based on GA) to overcome these limitations. Before that, we will tackle the other open question: whether it is possible to prove that GA-based LDLC decoders actually converge.
III Convergence Analysis of GA-Based LDLC Decoders
In this section, we study the convergence speed of GA-based LDLC decoding algorithms. Recall that the original LDLC decoder has the following property [1]:

$\lim_{t \to \infty} v^{(t)} = 0$, (18)

where $t$ is the number of iterations and $v^{(t)}$ is the variance of a variable message at iteration $t$. In other words, each variable node generates increasingly narrow messages, whose variance converges to $0$ as $t \to \infty$. This implies the convergence of the pdf to a Dirac delta centered at $x_k$:

$\lim_{t \to \infty} f_k^{(t)}(x) = \delta(x - x_k)$, (19)

which ensures the convergence of the original LDLC decoder in [1].
To study the impact of the Gaussian approximation in (14) on convergence, we also track how the variances change as $t$ increases. We have the following theorem.
Theorem 1
For all GA-based LDLC decoders, at high SNR, the variances of all variable messages satisfy

(20)
Proof. See Appendix A.
Theorem 1 shows that all GA-based LDLC decoders converge sublinearly or faster at high SNR. This result demonstrates the goodness of the Gaussian approximation in LDLC decoding, since each variable node still generates narrow messages. In practice, the bound (20) is tight even when the SNR is close to the Shannon limit. An example is given below.
Example 1
We test the GA-based decoder in [10] on an LDLC from [14]. We tune the size of the input alphabet to set the code rate, and compute the corresponding Shannon limit for hypercube shaping and uniform channel input. At a fixed SNR, we simulate the average of the variances of all variable messages over the iterations. We define the ratio $v^{(t+1)}/v^{(t)}$ as the convergence speed. In Fig. 1, we compare the simulated convergence speed with the one estimated from (20), as a function of $t$. We see that the bound is very tight even at finite SNRs.
IV A Fast Decoding Algorithm for LDLC
Theorem 1 shows that at high SNR, all GA-based decoders have a common upper bound on the convergence speed. This implies that they all have a similar performance at high SNR. The question is then to find the most efficient GA-based decoder. In what follows, we identify a GA-based decoder with a much lower complexity than the ones in the literature.
IV-A Idea
Recall that the product of Gaussians returns a scaled single Gaussian; e.g., for two Gaussians, we have

$\mathcal{N}(x; m_1, v_1)\,\mathcal{N}(x; m_2, v_2) = H \cdot \mathcal{N}(x; m, v)$, (21)

where

$v = \frac{v_1 v_2}{v_1 + v_2}, \quad m = \frac{m_1 v_2 + m_2 v_1}{v_1 + v_2}, \quad H = \frac{1}{\sqrt{2\pi(v_1 + v_2)}}\, e^{-\frac{(m_1 - m_2)^2}{2(v_1 + v_2)}}$. (22)
We have the following property:
Property 1: We define $H$ as the height of the product of Gaussians, i.e., the scale factor in (21). Let the operation $\mathrm{peak}(f)$ return the location of the peak of a function $f$, e.g., $\mathrm{peak}(\mathcal{N}(x; m, v)) = m$. If the component Gaussians have very distant means/peaks (i.e., a large $|m_1 - m_2|$), then $H \approx 0$, and we can assume that their product is $0$.
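Equations (21)-(22) and Property 1 can be checked numerically. In the sketch below (function name ours), the returned scale $H$ collapses toward zero as the means separate:

```python
import math

def gaussian_product(m1, v1, m2, v2):
    """Product of two Gaussian pdfs per (21)-(22): the result is H times a
    single Gaussian N(m, v), where H decays exponentially in the squared
    distance between the means (Property 1)."""
    v = v1 * v2 / (v1 + v2)                       # combined variance
    m = (m1 * v2 + m2 * v1) / (v1 + v2)           # precision-weighted mean
    H = math.exp(-(m1 - m2) ** 2 / (2.0 * (v1 + v2))) / math.sqrt(2.0 * math.pi * (v1 + v2))
    return m, v, H
```

For unit-variance Gaussians ten standard deviations apart, $H$ is already below $10^{-10}$, which is what justifies discarding such products outright.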
This property allows us to simplify (17) by ignoring a large number of vanishing products. Specifically, let $\mathcal{N}_l^{L}(x)$ be the Gaussians with peaks at or below $y_k$, and $\mathcal{N}_l^{R}(x)$ those with peaks above $y_k$. We approximate (17) as follows:

$\tilde{f}_j(x) \approx f_k^{(0)}(x)\left[\prod_{l \neq j} \mathcal{N}_l^{L}(x) + \prod_{l \neq j} \mathcal{N}_l^{R}(x)\right]$ (23)

We ignore the cross-terms which involve elements from both sides. For simplicity, we refer to the first term in (23) as the left product, and the second term as the right product.
To avoid the case where a cross-term has a greater height than the left/right product, we need to select the left and right Gaussians carefully. Note that the crossover occurs when we have staggered pairs of Gaussians. As shown in Fig. 2, consider two periodic Gaussians whose replicas nearest to $y_k$ are staggered around it: the left replica of one pair and the right replica of the other are both close to $y_k$, so their cross-product is higher than the left or the right product. Since the two replicas of a pair are one period apart, the remaining two replicas are far from $y_k$, at a distance of up to one period. This fact inspires us to break staggered pairs by deleting Gaussians whose means/peaks are far from $y_k$.
Left/Right Product Selection Criterion: Consider the intervals immediately to the left and to the right of $y_k$. Depending on whether the two nearest replicas of the $l$-th periodic Gaussian both fall in the left interval, both fall in the right interval, or straddle $y_k$, we select the Gaussians entering the left and right products according to (24), (25), and (26), respectively.
A further discussion on the choice of the selection intervals will be given in the journal version. As a result, (23) is updated to

(27)
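The selection step can be sketched as a simple partition of the candidate peaks around $y_k$. Since the exact thresholds in (24)-(26) were not recoverable here, the cutoff t below is an assumed tunable parameter (one period is a natural choice, per the staggered-pair discussion above):

```python
def split_left_right(peaks, y, t):
    """Partition candidate Gaussian peaks into a left set (at or below y)
    and a right set (above y), discarding any peak farther than t from y
    in order to break staggered pairs."""
    left = [p for p in peaks if y - t <= p <= y]
    right = [p for p in peaks if y < p <= y + t]
    return left, right
```

Peaks outside both windows correspond to vanishing products by Property 1 and are dropped before the left and right products are formed.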
A detailed explanation of the proposed algorithm is given below.
IV-B Algorithm
We only describe the operation at each variable node, since the operation at each check node is the same as in [10]. The process takes the following steps:

Mother message: To avoid redundant computation, we compute the left and right products from all $d$ inputs:

$M(x) = f_k^{(0)}(x)\left[\prod_{l=1}^{d} \mathcal{N}_l^{L}(x) + \prod_{l=1}^{d} \mathcal{N}_l^{R}(x)\right]$, (28)

which is referred to as the mother message. Using (22), we obtain a sum of two scaled Gaussians:

$M(x) = H^{L}\,\mathcal{N}(x; m^{L}, v^{L}) + H^{R}\,\mathcal{N}(x; m^{R}, v^{R})$. (29)

Individual message: The message for the $j$-th edge can be obtained by removing the contributions of $\mathcal{N}_j^{L}(x)$ and $\mathcal{N}_j^{R}(x)$ from $M(x)$:

$\tilde{f}_j(x) = H^{L}\,\frac{\mathcal{N}(x; m^{L}, v^{L})}{\mathcal{N}_j^{L}(x)} + H^{R}\,\frac{\mathcal{N}(x; m^{R}, v^{R})}{\mathcal{N}_j^{R}(x)}$ (30)

We normalize and apply the Gaussian approximation:

$f_j(x) \approx \mathcal{N}(x; m_j, v_j)$. (31)

The values of $m_j$ and $v_j$ are obtained from [15, Eq. (23)].
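The saving from the mother message is that each per-edge message in (30) is obtained by removing one factor from a running total rather than recomputing a $(d-1)$-fold product per edge. The sketch below illustrates this divide-out idea for a single-Gaussian product in the natural-parameter domain (the actual algorithm tracks a left and a right product; the function name is ours):

```python
def variable_node_messages(means, variances):
    """Compute all d outgoing variable-node messages from one pass:
    accumulate the natural parameters (precision, precision*mean) of all
    inputs once, then subtract each edge's own contribution to obtain its
    (mean, variance) message."""
    precisions = [1.0 / v for v in variances]
    weighted = [m / v for m, v in zip(means, variances)]
    p_tot, w_tot = sum(precisions), sum(weighted)
    out = []
    for p, w in zip(precisions, weighted):
        p_j = p_tot - p                              # remove this edge's precision
        out.append(((w_tot - w) / p_j, 1.0 / p_j))   # (mean, variance)
    return out
```

This costs $O(d)$ total instead of $O(d^2)$ for naive per-edge products, which is the complexity advantage claimed for the proposed decoder.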
IV-C Complexity
V Simulation Results
This section compares the performance of the proposed LDLC decoder to the best known decoder in [10]. Monte Carlo simulations are used to estimate the symbol error rate (SER).
Fig. 3 shows the SER for LDLC with hypercube shaping and uniform channel input. Both short and long codes are tested. Given the same number of iterations, the performance of the proposed decoder coincides with that of the reference decoder in both cases. Since the complexity of the proposed decoder is much lower than that of the reference one, we can run more iterations. In that case, the proposed decoder outperforms the reference one at low SER, while a gap to capacity (with cubic shaping) remains. This result confirms that our decoder works well for both short and long LDLC.
VI Conclusions
In this work, we have proved that all Gaussian-approximation based LDLC decoders have the same convergence speed at high SNR. Inspired by this result, we proposed a fast decoding algorithm which requires only $O(d)$ operations at each variable node. The new decoder provides the same error-correcting performance as the best known decoder, but with much lower complexity. The proposed decoder enables decoding much longer LDLC, which provide even better error-correcting performance.
Appendix
A Proof of Theorem 1
Recall the unnormalized variable message in (16). We assume that the noise variance $\sigma^2$ is small, i.e., each message is very narrow, such that for each periodic Gaussian, only one Gaussian component contributes to the product. In this case, the variance of the normalized message equals that of the unnormalized one, since both reduce to a single Gaussian.

In each iteration, we compute the variances of the exchanged messages. At the $t$-th iteration, let $V_1^{(t)}$ and $V_l^{(t)}$ be the variances of the check messages passed via the edges labeled $h_1$ and $h_l$, respectively. Similarly, let $v_1^{(t)}$ and $v_l^{(t)}$ be the variances of the variable messages passed via the edges labeled $h_1$ and $h_l$, respectively.

Due to space limits, we omit the computation for iterations 1 and 2. Full details will be reported in the journal version.
References
 [1] N. Sommer, M. Feder, and O. Shalvi, "Low-density lattice codes," IEEE Trans. Inf. Theory, vol. 54, no. 4, pp. 1561–1585, Apr. 2008.
 [2] B. Chen, D. N. K. Jayakody, and M. F. Flanagan, "Low-density lattice coded relaying with joint iterative decoding," IEEE Trans. Commun., vol. 63, no. 12, pp. 4824–4837, Dec. 2015.
 [3] ——, "Distributed low-density lattice codes," IEEE Commun. Lett., vol. 20, no. 1, pp. 77–80, Jan. 2016.
 [4] N. S. Ferdinand, M. Nokleby, and B. Aazhang, "Low-density lattice codes for full-duplex relay channels," IEEE Trans. Wireless Commun., vol. 14, no. 4, pp. 2309–2321, Apr. 2015.
 [5] R. Hooshmand and M. R. Aref, "Efficient secure channel coding scheme based on low-density lattice codes," IET Commun., vol. 10, no. 11, 2016.
 [6] H. Khodaiemehr, D. Kiani, and M. R. Sadeghi, "LDPC lattice codes for full-duplex relay channels," IEEE Trans. Commun., vol. 65, no. 2, pp. 536–548, Feb. 2017.
 [7] M. R. Sadeghi, A. H. Banihashemi, and D. Panario, "Low-density parity-check lattices: Construction and decoding analysis," IEEE Trans. Inf. Theory, vol. 52, no. 10, pp. 4481–4495, Oct. 2006.
 [8] B. Kurkoski and J. Dauwels, "Message-passing decoding of lattices using Gaussian mixtures," in Proc. IEEE Int. Symp. Inf. Theory (ISIT'08), 2008, pp. 2489–2493.
 [9] Y. Yona and M. Feder, "Efficient parametric decoder of low density lattice codes," in Proc. IEEE Int. Symp. Inf. Theory (ISIT'09), Jun. 2009, pp. 744–748.
 [10] R. A. P. Hernandez and B. M. Kurkoski, "The three/two Gaussian parametric LDLC lattice decoding algorithm and its analysis," IEEE Trans. Commun., vol. 64, no. 9, pp. 3624–3633, Sep. 2016.
 [11] N. Sommer, M. Feder, and O. Shalvi, "Shaping methods for low-density lattice codes," in Proc. IEEE Inf. Theory Workshop (ITW'09), Oct. 2009, pp. 238–242.
 [12] N. S. Ferdinand, B. M. Kurkoski, B. Aazhang, and M. Latva-aho, "Shaping low-density lattice codes using Voronoi integers," in Proc. IEEE Inf. Theory Workshop (ITW'14), Nov. 2014, pp. 127–131.
 [13] F. Zhou and B. M. Kurkoski, "Shaping LDLC lattices using convolutional code lattices," IEEE Commun. Lett., vol. 21, no. 4, pp. 730–733, 2017.
 [14] http://www.cs.cmu.edu/~bickson/gabp/.
 [15] B. Kurkoski and J. Dauwels, "Reduced-memory decoding of low-density lattice codes," IEEE Commun. Lett., vol. 14, no. 7, pp. 659–661, Jul. 2010.