Learning to Decode Linear Codes Using Deep Learning

Eliya Nachmani et al., Tel Aviv University, 07/16/2016

A novel deep learning method for improving the belief propagation algorithm is proposed. The method generalizes the standard belief propagation algorithm by assigning weights to the edges of the Tanner graph. These weights are then trained using deep learning techniques. A well-known property of the belief propagation algorithm is that its performance is independent of the transmitted codeword. A crucial property of our new method is that our decoder preserves this property, which allows us to train on a single codeword instead of an exponential number of codewords. Improvements over the belief propagation algorithm are demonstrated for various high density parity check codes.


I Introduction

In recent years deep learning methods have demonstrated significant improvements in various tasks. These methods surpass human-level performance in some object recognition tasks [resnet], and achieve state-of-the-art results in machine translation [nmt] and speech processing [graves2013speech]. Additionally, deep learning combined with reinforcement learning techniques was able to beat human champions in challenging games such as Go [d_silver]. There are three main reasons for the outstanding results of deep learning models:

  1. Powerful computing resources such as fast GPUs.

  2. Efficient utilization of large datasets, e.g., ImageNet [imagenet] for image processing.

  3. Advanced academic research on training methods and network architectures [batch_norm], [alexnet], [adadelta], [dropout].

Error correcting codes for channel coding are used in order to enable reliable communications at rates close to the Shannon capacity. A well-known family of linear error correcting codes are the low-density parity-check (LDPC) codes [galmono]. LDPC codes achieve near Shannon channel capacity with the belief propagation (BP) decoding algorithm, but typically only do so for relatively large block lengths. For high density parity check (HDPC) codes [jiang2006iterative], [dimnik2009improved], [yufit2011efficient], [zhang2012adaptive], such as common powerful algebraic codes, the BP algorithm obtains poor results compared to the maximum likelihood decoder [helmling2014efficient].

In this work we focus on HDPC codes and demonstrate how the BP algorithm can be improved. The naive approach to the problem is to assume a neural network type decoder without restrictions, and to train its weights using a dataset that contains a large number of codewords. The training goal is to reconstruct the transmitted codeword from a noisy version received after transmission over the communication channel. Unfortunately, with this approach the decoder is not given any side information regarding the structure of the code. In fact, it is not even aware that the code is linear. Hence we would be required to train the decoder using a huge collection of codewords from the code, and due to the exponential nature of the problem this is infeasible; e.g., for a BCH(63,45) code we would need a dataset of 2^45 codewords. On top of that, the dataset needs to reflect the variability due to the noisy channel. In order to overcome this issue, our proposed approach is to assign weights to the edges of the Tanner graph that represents the given linear code, thus yielding a "soft" Tanner graph. These weights are trained using deep learning techniques. A well-known property of the BP algorithm is that its performance is independent of the transmitted codeword. A major ingredient of our new method is that this property is preserved by our decoder. Thus it is sufficient to use a single codeword for training the parameters of our decoder. We demonstrate improvements over BP for various high density parity check codes, including BCH(63,36), BCH(63,45), and BCH(127,106).

II The Belief Propagation Algorithm

The renowned BP decoder [galmono], [ru_book] can be constructed from the Tanner graph, which is a graphical representation of some parity check matrix that describes the code. In this algorithm, messages are transmitted over edges. Each node calculates its outgoing message over an edge based on all incoming messages it receives over its other edges, excluding the incoming message on the edge over which it transmits. We start by providing an alternative graphical representation of the BP algorithm with L full iterations when using parallel (flooding) scheduling. Our alternative representation is a trellis in which the nodes of the hidden layers correspond to edges in the Tanner graph. Denote by N the code block length (i.e., the number of variable nodes in the Tanner graph), and by E the number of edges in the Tanner graph. Then the input layer of our trellis representation of the BP decoder is a vector of size N that consists of the log-likelihood ratios (LLRs) of the channel outputs. The LLR value of variable node v, denoted l_v, is given by

l_v = \log\frac{\Pr(C_v = 1 \mid y_v)}{\Pr(C_v = 0 \mid y_v)}

where y_v is the channel output corresponding to the vth codebit, C_v.
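As a concrete illustration (our own, not taken from the paper), the following minimal sketch computes these LLRs assuming BPSK modulation with the mapping 0 -> +1, 1 -> -1 and an AWGN channel with known noise standard deviation sigma; the function name and the modelling assumptions are ours.

```python
import numpy as np

def channel_llr(y, sigma):
    """LLR l_v = log(Pr(C_v = 1 | y_v) / Pr(C_v = 0 | y_v)) for each channel output y_v.

    Assumes BPSK mapping 0 -> +1, 1 -> -1 over AWGN with noise standard deviation
    sigma; under these assumptions the LLR is simply a scaled channel output.
    """
    return -2.0 * np.asarray(y, dtype=float) / sigma ** 2
```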

All the following layers in the trellis, except for the last one (i.e., all the hidden layers), have size E. For each hidden layer, each processing element in that layer is associated with the message transmitted over some edge in the Tanner graph. The last (output) layer of the trellis consists of N processing elements that output the final decoded codeword. Consider the ith hidden layer, i = 1, 2, ..., 2L. For odd (even, respectively) values of i, each processing element in this layer outputs the message transmitted by the BP decoder over the corresponding edge in the graph, from the associated variable (check) node to the associated check (variable) node. A processing element in the first hidden layer (i = 1), corresponding to the edge e = (v, c), is connected to a single input node in the input layer: the variable node v associated with that edge. Now consider the ith hidden layer for i > 1. For odd (even, respectively) values of i, the processing element corresponding to the edge e = (v, c) is connected to all processing elements in layer i - 1 associated with the edges e' = (v, c') for c' != c (e' = (v', c) for v' != v, respectively). For odd i, a processing element in layer i corresponding to the edge e = (v, c) is also connected to the vth input node.
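To make this connectivity concrete, here is a small helper (our own illustration, not the authors' implementation) that enumerates the edges of a parity-check matrix and records, for each edge, which edges of the previous layer feed it; the decoding sketches later in this section reuse it.

```python
import numpy as np

def build_edge_index(H):
    """Enumerate the Tanner-graph edges of a parity-check matrix H (shape m x N).

    For every edge e = (v, c) we also collect the indices of the companion edges that
    feed it in the trellis described above:
      * var_nbrs[e]: edges (v, c') with c' != c (inputs of an odd, variable-to-check layer)
      * chk_nbrs[e]: edges (v', c) with v' != v (inputs of an even, check-to-variable layer)
    """
    H = np.asarray(H)
    edges = [(v, c) for c in range(H.shape[0]) for v in range(H.shape[1]) if H[c, v]]
    var_nbrs = [[k for k, (v2, c2) in enumerate(edges) if v2 == v and c2 != c]
                for (v, c) in edges]
    chk_nbrs = [[k for k, (v2, c2) in enumerate(edges) if c2 == c and v2 != v]
                for (v, c) in edges]
    return edges, var_nbrs, chk_nbrs
```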

The BP messages transmitted over the trellis graph are the following. Consider hidden layer i, i = 1, 2, ..., 2L, and let e = (v, c) be the index of some processing element in that layer. We denote by x_{i,e} the output message of this processing element. For odd (even, respectively) i, this is the message produced by the BP algorithm in iteration \lceil i/2 \rceil, from variable to check (check to variable) node.

For odd i and e = (v, c) we have (recall that the self LLR message of v is l_v),

x_{i,e=(v,c)} = \tanh\left(\frac{1}{2}\Big(l_v + \sum_{e'=(v,c'),\, c' \neq c} x_{i-1,e'}\Big)\right)    (1)

under the initialization x_{0,e'} = 0 for all edges e' (in the beginning there is no information at the parity check nodes). The summation in (1) is over all edges e' = (v, c') with variable node v, except for the target edge e = (v, c). Recall that this is a fundamental property of message passing algorithms [ru_book].

Similarly, for even i and e = (c, v) we have,

x_{i,e=(c,v)} = 2\tanh^{-1}\left(\prod_{e'=(v',c),\, v' \neq v} x_{i-1,e'}\right)    (2)

The final vth output of the network is given by

o_v = l_v + \sum_{e'=(v,c')} x_{2L,e'}    (3)

which is the final marginalization of the BP algorithm.
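For illustration, here is a compact NumPy sketch of the flooding-schedule BP decoder written layer by layer as in Equations (1)-(3). It reuses build_edge_index from the previous sketch; the clipping of the tanh argument (a stabilization also mentioned in Section IV) and the default iteration count are our own choices, not values from the paper.

```python
import numpy as np

def bp_decode(H, llr, num_iterations=5, clip=10.0):
    """Flooding-schedule BP written layer by layer, following Equations (1)-(3).

    `H` is the parity-check matrix and `llr` the vector of channel LLRs l_v.
    """
    edges, var_nbrs, chk_nbrs = build_edge_index(H)   # helper from the earlier sketch
    x = np.zeros(len(edges))                          # x_{0,e} = 0: no prior check information
    for _ in range(num_iterations):
        # Odd layer, Eq. (1): variable-to-check messages (self LLR plus extrinsic sum).
        x_v2c = np.array([
            np.tanh(np.clip(0.5 * (llr[v] + sum(x[k] for k in var_nbrs[e])), -clip, clip))
            for e, (v, c) in enumerate(edges)])
        # Even layer, Eq. (2): check-to-variable messages (extrinsic product).
        x = np.array([
            2.0 * np.arctanh(np.prod([x_v2c[k] for k in chk_nbrs[e]]))
            for e, (v, c) in enumerate(edges)])
    # Output layer, Eq. (3): final marginalization at every variable node.
    out = np.array([llr[v] + sum(x[e] for e, (v2, c) in enumerate(edges) if v2 == v)
                    for v in range(H.shape[1])])
    return out, (out > 0).astype(int)   # hard decision under l_v = log(Pr(1)/Pr(0))
```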

III The Proposed Deep Neural Network Decoder

We suggest the following parameterized deep neural network decoder that generalizes the BP decoder of the previous section. We use the same trellis representation for the decoder as in the previous section. The difference is that now we assign weights to the edges in the Tanner graph. These weights will be trained using stochastic gradient descent, which is the standard method for training neural networks. More precisely, our decoder has the same trellis architecture as the one defined in the previous section. However, Equations (1), (2) and (3) are replaced by

x_{i,e=(v,c)} = \tanh\left(\frac{1}{2}\Big(w_{i,v} l_v + \sum_{e'=(v,c'),\, c' \neq c} w_{i,e,e'} x_{i-1,e'}\Big)\right)    (4)

for odd i,

x_{i,e=(c,v)} = 2\tanh^{-1}\left(\prod_{e'=(v',c),\, v' \neq v} x_{i-1,e'}\right)    (5)

for even i, and

o_v = \sigma\left(w_{2L+1,v} l_v + \sum_{e'=(v,c')} w_{2L+1,v,e'} x_{2L,e'}\right)    (6)

where \sigma(x) \equiv (1 + e^{-x})^{-1} is a sigmoid function. The sigmoid is added so that the final network output is in the range [0, 1]. This makes it possible to train the network using a cross entropy loss function, as described in the next section. Apart from the addition of the sigmoid function at the outputs of the network, one can see that by setting all weights to one, Equations (4)-(6) degenerate to (1)-(3). Hence, with an optimal setting (training) of the parameters of the neural network, its performance cannot be inferior to that of plain BP.
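Continuing the earlier sketch, the parameterized decoder of Equations (4)-(6) differs from plain BP only in the learned multipliers. The weight names and array shapes below are illustrative assumptions on our part (the authors implement this as a TensorFlow computation graph); setting every weight to one recovers the plain BP sketch.

```python
import numpy as np

def neural_bp_decode(H, llr, w_in, w_edge, w_out_self, w_out_edge,
                     num_iterations=5, clip=10.0):
    """Weighted ("soft" Tanner graph) BP following Equations (4)-(6).

    w_in[t, v] weights the channel LLR of variable node v in the odd layer of
    iteration t (Eq. 4), w_edge[t, e, e'] weights the incoming messages, and
    w_out_self[v], w_out_edge[v, e] weight the final marginalization (Eq. 6).
    All names and shapes are illustrative assumptions; with every weight equal
    to 1 this reduces to the plain BP of Equations (1)-(3).
    """
    edges, var_nbrs, chk_nbrs = build_edge_index(H)   # helper from the earlier sketch
    x = np.zeros(len(edges))
    for t in range(num_iterations):
        # Odd layer, Eq. (4): weighted self LLR plus weighted incoming check messages.
        x_v2c = np.array([
            np.tanh(np.clip(0.5 * (w_in[t, v] * llr[v]
                                   + sum(w_edge[t, e, k] * x[k] for k in var_nbrs[e])),
                            -clip, clip))
            for e, (v, c) in enumerate(edges)])
        # Even layer, Eq. (5): the check-node update is left unweighted.
        x = np.array([
            2.0 * np.arctanh(np.prod([x_v2c[k] for k in chk_nbrs[e]]))
            for e, (v, c) in enumerate(edges)])
    # Output layer, Eq. (6): weighted marginalization followed by a sigmoid.
    o = np.array([
        w_out_self[v] * llr[v]
        + sum(w_out_edge[v, e] * x[e] for e, (v2, c) in enumerate(edges) if v2 == v)
        for v in range(H.shape[1])])
    return 1.0 / (1.0 + np.exp(-o))   # estimates Pr(C_v = 1), i.e. values in (0, 1)
```

Passing all-ones arrays for the four weight tensors makes this coincide with a sigmoid applied to the plain BP marginalization of the previous sketch.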

It is easy to verify that the proposed message passing decoding algorithm (4)-(6) satisfies the message passing symmetry conditions [ru_book, Definition 4.81]. Hence, by [ru_book, Lemma 4.90], when transmitting over a binary memoryless symmetric (BMS) channel, the error rate is independent of the transmitted codeword. Therefore, to train the network, it is sufficient to use a database constructed from noisy versions of a single codeword. For convenience we use the zero codeword, which must belong to any linear code. The database reflects various channel output realizations when the zero codeword has been transmitted. The goal is to train the parameters to achieve an N-dimensional output word which is as close as possible to the zero codeword. The network architecture is a non-fully connected neural network. We use stochastic gradient descent to train the parameters. The motivation behind the new parameterized decoder is that by setting the weights properly, one can compensate for small cycles in the Tanner graph that represents the code. That is, messages sent by parity check nodes to variable nodes can be weighted such that if a message is less reliable, because it is produced by a parity check node with a large number of small cycles in its local neighborhood, then this message will be attenuated appropriately.

The time complexity of the deep neural network is similar to that of the plain BP algorithm. Both have the same number of layers and the same number of non-zero weights in the Tanner graph. The deep neural network architecture is illustrated in Figure 1 for a BCH(15,11) code.

Fig. 1: Deep Neural Network Architecture For BCH(15,11) with 5 hidden layers which correspond to 3 full BP iterations. Note that the self LLR messages are plotted as small bold lines. The first hidden layer and the second hidden layer that were described above are merged together. Also note that this figure shows 3 full iterations and the final marginalization.

IV Experiments

IV-A Neural Network Training

We built our neural network on top of the TensorFlow framework [abadi2015tensorflow] and used an NVIDIA Tesla K40c GPU for accelerated training. We applied cross entropy as our loss function,

L(o, y) = -\frac{1}{N}\sum_{v=1}^{N}\left[ y_v \log(o_v) + (1 - y_v)\log(1 - o_v) \right]    (7)

where o_v and y_v are the deep neural network output and the actual vth component of the transmitted codeword, respectively (if the all-zero codeword is transmitted, then y_v = 0 for all v). Training was conducted using stochastic gradient descent with mini-batches and the RMSPROP [rmsprop] update rule. The neural network has 10 hidden layers, which correspond to 5 full iterations of the BP algorithm. Each processing element in an odd (even, respectively) indexed hidden layer is described by Equation (4) (Equation (5), respectively). At test time, we inject noisy codewords after transmitting through an AWGN channel and measure the bit error rate (BER) in the decoded codeword at the network output. When computing (4), we also clip the input to the tanh function so that its absolute value is always smaller than some fixed positive constant. This is also required in practical (finite block length) implementations of the BP algorithm, in order to stabilize the operation of the decoder. We trained our decoding network on several different linear codes, including BCH(15,11), BCH(63,36), BCH(63,45) and BCH(127,106).
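As a small illustration of the loss in Equation (7), the following sketch (our own, not the authors' TensorFlow code) computes the cross entropy between the sigmoid outputs and the transmitted codeword; the epsilon guard against log(0) is a numerical detail we add.

```python
import numpy as np

def cross_entropy_loss(o, y, eps=1e-12):
    """Cross entropy of Equation (7) between network outputs o (in (0, 1)) and codeword y.

    With the all-zero codeword, y is simply the zero vector. The eps clipping that
    avoids log(0) is our own safeguard, not part of the original description.
    """
    o = np.clip(np.asarray(o, dtype=float), eps, 1.0 - eps)
    y = np.asarray(y, dtype=float)
    return -np.mean(y * np.log(o) + (1.0 - y) * np.log(1.0 - o))
```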

IV-B Neural Network Training With Multiloss

The proposed neural network architecture has the property that after every odd layer we can add a final marginalization. We can use this property to add additional terms to the loss. The additional terms increase the gradient update in the backpropagation algorithm and allow the lower layers to be learned. At each odd layer i we add the final marginalization to the loss:

L(o, y) = -\frac{1}{N}\sum_{i=1,3,\ldots,2L-1}\;\sum_{v=1}^{N}\left[ y_v \log(o_{v,i}) + (1 - y_v)\log(1 - o_{v,i}) \right]    (8)

where o_{v,i} and y_v are the deep neural network output at the odd layer i and the actual vth component of the transmitted codeword, respectively. This network architecture is illustrated in Figure 2.
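A minimal sketch of the multiloss of Equation (8), assuming a list holding the sigmoid marginalization computed after each odd layer; the function and argument names are our own illustrative choices.

```python
import numpy as np

def multiloss(odd_layer_outputs, y, eps=1e-12):
    """Multiloss of Equation (8): cross entropy accumulated over the marginalizations
    taken after every odd (variable-to-check) layer.

    `odd_layer_outputs` is a list of sigmoid-marginalization vectors o_{.,i}, one per
    odd layer i; `y` is the transmitted codeword (the zero vector during training).
    """
    y = np.asarray(y, dtype=float)
    total = 0.0
    for o in odd_layer_outputs:
        o = np.clip(np.asarray(o, dtype=float), eps, 1.0 - eps)
        total += -np.mean(y * np.log(o) + (1.0 - y) * np.log(1.0 - o))
    return total
```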

Fig. 2: Deep Neural Network Architecture For BCH(15,11) with training multiloss. Note that the self LLR messages are plotted as small bold lines.

IV-C Dataset

The training data is created by transmitting the zero codeword through an AWGN channel at several SNR values. Each mini-batch contains an equal number of codewords for each SNR value. For the test data we use codewords drawn from the same SNR range as in the training dataset. Parity check matrices were taken from [ParityCheckMatrix].
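For completeness, here is a sketch of how such a training mini-batch could be generated from noisy copies of the all-zero codeword; the BPSK mapping, the interpretation of the SNR as Eb/N0, and the argument names are assumptions on our part.

```python
import numpy as np

def zero_codeword_batch(n, rate, snr_db_list, per_snr, rng=None):
    """Build one mini-batch of channel LLRs from noisy copies of the all-zero codeword.

    Each SNR value (interpreted here as Eb/N0 in dB) contributes `per_snr` examples.
    The all-zero codeword is BPSK-mapped (0 -> +1), AWGN is added, and the channel
    outputs are converted to LLRs under the l_v = log(Pr(1)/Pr(0)) convention.
    The labels are simply the zero vector.
    """
    rng = np.random.default_rng() if rng is None else rng
    llrs = []
    for snr_db in snr_db_list:
        sigma = np.sqrt(1.0 / (2.0 * rate * 10.0 ** (snr_db / 10.0)))  # BPSK noise std
        y = 1.0 + sigma * rng.standard_normal((per_snr, n))            # all-zero -> all +1
        llrs.append(-2.0 * y / sigma ** 2)                             # channel LLRs
    llrs = np.vstack(llrs)
    labels = np.zeros_like(llrs)                                       # zero-codeword labels
    return llrs, labels
```

For example, for a BCH(63,45) code one could call zero_codeword_batch(63, 45/63, snr_db_list, per_snr) with the SNR list and per-SNR count used in the experiments (not reproduced here).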

IV-D Results

In this section we present the results of the deep neural decoding networks for various BCH block codes. For each code we observed an improvement compared to the BP algorithm. Note that when we applied our algorithm to the BCH(15,11) code, we obtained results close to maximum likelihood with the deep neural network. For larger BCH codes, both the BP algorithm and the deep neural network still have a significant gap from maximum likelihood. The BER curves in Figures 3, 4 and 5 show an improvement in the high SNR region. Furthermore, the deep neural network BER is consistently smaller than or equal to the BER of the BP algorithm. This result is in agreement with the observation that our network cannot perform worse than the BP algorithm. Figure 6 shows the results of training the deep neural network with multiloss; it yields a further improvement over the plain BP algorithm. Moreover, we can achieve the same BER performance as 50 iterations of BP with only 5 iterations of the deep neural decoder, which amounts to a tenfold reduction in complexity.

We compared the weights of the BP algorithm and the weights of the trained deep neural network for a BCH(63,45) code. We observed that the deep neural network produces real-valued weights over a wide range, in contrast to the BP algorithm, whose weights are binary (0 or 1). Figure 7 shows the weight histogram for the last hidden layer. Interestingly, the distribution of the weights is close to a normal distribution. In a similar way, every hidden layer in the trained deep neural network has a weight distribution that is close to normal. Note that Hinton [hinton2010practical] recommends initializing the weights with a normal distribution. In Figures 8 and 9 we plot the weights of the last hidden layer. Each column in the figure corresponds to a neuron described by Equation (4). It can be observed that most of the weights are zeros, except for the Tanner graph weights, which have a value of 1 in Figure 8 (BP algorithm) and some real value in Figure 9 (neural network). In Figures 8 and 9 we plot a quarter of the weights matrix for better illustration.

Fig. 3: BER results for BCH(63,36) code.
Fig. 4: BER results for BCH(63,45) code.
Fig. 5: BER results for BCH(127,106) code.
Fig. 6: BER results for BCH(63,45) code trained with multiloss.
Fig. 7: Weights histogram of last hidden layer of the deep neural network for BCH(63,45) code.
Fig. 8: Weights of the last hidden layer in the BP algorithm for BCH(63,45) code.
Fig. 9: Weights of the last hidden layer in deep neural network for BCH(63,45) code.

V Conclusions

In this work we applied deep learning techniques to improve the performance of the BP algorithm. We showed that a “soft” Tanner graph can produce improvements when used in the BP algorithm instead of the standard Tanner graph. We believe that the BER improvement was achieved by properly weighting the messages, such that the effect of small cycles in the Tanner graph was partially compensated. It should be emphasized that the parity check matrices that we worked with were obtained from [ParityCheckMatrix]. We have not evaluated our method on parity check matrices for which an attempt was made to reduce the number of small cycles. A notable property of our neural network decoder is that once we have trained its parameters, we can improve performance compared to plain BP without increasing the required computational complexity. Another notable property of the neural network decoder is that we learn the channel and the linear code simultaneously.

We regard this work as a first step in the implementation of deep learning techniques for the design of improved decoders. Our future work includes possible improvements in the error rate results by exploring new neural network architectures and combining other decoding methods. Furthermore, we plan to investigate the connection between the parity check matrix and the decoding capabilities of the deep neural network.

Acknowledgment

This research was supported by the Israel Science Foundation, grant no. 1082/13. The Tesla K40c used for this research was donated by the NVIDIA Corporation.

References