Efficient Scheduling of Serial Iterative Decoding for Zigzag Decodable Fountain Codes

04/16/2018
by   Yoshihiro Murayama, et al.

Fountain codes are erasure correcting codes that realize reliable communication systems for multicast on the Internet. The zigzag decodable fountain (ZDF) code is a generalization of the Raptor code: it applies a shift operation when generating the output packets. The ZDF code is decoded by a two-stage iterative decoding algorithm, which combines the packet-wise peeling algorithm and the bit-wise peeling algorithm. Thanks to the bit-wise peeling algorithm and the shift operation, ZDF codes outperform Raptor codes under iterative decoding in terms of decoding erasure rate and overhead. However, the bit-wise peeling algorithm requires a long decoding time. This paper proposes a fast bit-wise decoding algorithm for ZDF codes. Simulation results show that the proposed algorithm drastically reduces the decoding time compared with the existing algorithm.



I Introduction

On the Internet, each message is split into several packets. We regard packets that are not correctly received as erased; hence, the Internet is modeled as a packet erasure channel. The sender cannot retransmit packets in the case of the user datagram protocol (UDP).

Fountain codes [1] are erasure correcting codes that realize reliable communication via UDP, in particular for multicasting. We assume that the original message is split into source packets. In the fountain coding system, the sender generates infinitely many output packets from the source packets. Each receiver decodes the original message from its received packets; the number of received packets beyond the number of source packets is referred to as the packet overhead. Hence, in the fountain coding system, the receiver need not request retransmission.

The Raptor code [2] is a fountain code that achieves arbitrarily small packet overhead with linear-time encoding and decoding algorithms. Encoding of a Raptor code is divided into two stages. At the first stage, the encoder generates precoded packets from the source packets by using a precode, which is a high-rate erasure correcting code, e.g., a low-density parity-check (LDPC) code. At the second stage, the encoder generates output packets from the precoded packets by using an LT code [3]. More precisely, each output packet is generated as the bit-wise exclusive OR (XOR) of randomly chosen precoded packets. Raptor codes are decoded in a similar way to LDPC codes over the binary erasure channel. In other words, the decoder constructs a factor graph from the received packets and the parity check matrix of the precode and recovers the precoded packets by using the peeling algorithm (PA) [4].
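As a rough illustration of the second encoding stage, the following hypothetical sketch generates one output packet as the XOR of randomly chosen precoded packets. The packet values, the fixed degree, and the representation of packets as Python ints are all assumptions for brevity; sampling the degree from the degree distribution is omitted.

```python
import random

def lt_output_packet(precoded, degree):
    """Generate one LT-style output packet as the bit-wise XOR of
    `degree` distinct, uniformly chosen precoded packets."""
    chosen = random.sample(range(len(precoded)), degree)  # distinct indexes
    packet = 0
    for i in chosen:
        packet ^= precoded[i]  # bit-wise XOR of equal-length packets
    return chosen, packet

# Toy example: 5 precoded packets, each 8 bits, packed into ints.
random.seed(0)
precoded = [0b10110010, 0b01101001, 0b11100011, 0b00011101, 0b10101010]
indexes, pkt = lt_output_packet(precoded, degree=3)
```

In a real implementation the chosen indexes would be carried in the packet header so that the receiver can rebuild the factor graph.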

The zigzag decodable fountain (ZDF) code [5] is a generalization of the Raptor code. Similarly to the Raptor code, encoding of a ZDF code is divided into two stages. At the first stage, the encoder generates precoded packets from the source packets by using a precode. At the second stage, the encoder generates output packets from the precoded packets in the following way: the encoder randomly chooses precoded packets and their shift amounts, applies the bit-level shifts to the chosen precoded packets, and takes the bit-wise XOR of the shifted precoded packets.

The decoding algorithm for ZDF codes is also a two-stage algorithm. Similarly to the Raptor code, a factor graph for the ZDF code is constructed before starting the decoding algorithm. At the first stage, a packet-wise PA works on the factor graph and recovers the precoded packets packet by packet. If the packet-wise PA does not succeed, the remaining precoded packets are decoded by the bit-wise PA, which is the PA over the bit-wise representation of the factor graph.
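To make the packet-wise stage concrete, here is a minimal sketch of a peeling decoder. It assumes each received packet is represented as an equation (set of precoded-packet indexes, XOR value) and ignores the shift labels; the toy packet values and equations are illustrative assumptions, not the construction of [5].

```python
def packetwise_peeling(equations):
    """Packet-wise peeling sketch: each equation (idxs, value) states that
    the XOR of the precoded packets with indexes in idxs equals value.
    Repeatedly resolve equations that have exactly one unknown packet."""
    known = {}  # index of precoded packet -> recovered value
    progress = True
    while progress:
        progress = False
        for idxs, value in equations:
            unknown = [i for i in idxs if i not in known]
            if len(unknown) == 1:  # degree one after peeling: solvable
                v = value
                for i in idxs - {unknown[0]}:
                    v ^= known[i]  # XOR out the already-recovered packets
                known[unknown[0]] = v
                progress = True
    return known

# Toy example: precoded packets a=1, b=2, c=4 encoded into three equations.
eqs = [({0}, 1), ({0, 1}, 3), ({1, 2}, 6)]
recovered = packetwise_peeling(eqs)  # recovers all three packets
```

When no degree-one equation remains, this stage stalls; that stopping set is exactly where the bit-wise PA of the ZDF code takes over.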

As shown in [5], ZDF codes outperform Raptor codes in terms of packet overhead. However, the decoding algorithm for ZDF codes requires a long decoding time. The purpose of this research is to propose a fast decoding algorithm for ZDF codes. As related work, the method in [6] slightly reduces the number of decoding iterations.

In this paper, we propose a fast decoding algorithm for ZDF codes by reducing the number of decoding processes in the bit-wise PA. By a numerical example shown in Section III-A, we ascertain that only particular edges contribute to recovering the bits of the precoded packets. The main idea of this work is to execute the decoding processes only on such edges. Moreover, the algorithm makes a list which records the order of the edges that contribute to the decoding and executes the decoding processes in the order of this list. As a result, we significantly reduce the decoding time compared with the existing algorithm.

The rest of the paper is organized as follows. Section II briefly reviews ZDF codes and their existing decoding algorithm. Section III gives a numerical example showing that only particular edges contribute to the decoding, and proposes a fast decoding algorithm for ZDF codes via scheduling. The simulation results in Section IV show that the proposed algorithm reduces the number of decoding processes and the decoding time compared with the existing one. Section V concludes the paper.

II Preliminaries

This section gives some notation and introduces the encoding and decoding algorithms of ZDF codes. Section II-A explains the encoding of ZDF codes. Section II-B gives the factor graph representation of ZDF codes. Section II-C explains the original decoding algorithm [5] of ZDF codes.

II-A Encoding of the ZDF Codes [5]

A polynomial representation of the packets is defined as in [5]. The ZDF code is defined by the precode , the degree distribution for the inner code , and the shift distribution . Here, represents the probability that the shift amount is .

Similarly to the Raptor codes, the ZDF code generates the precoded packets from the source packets by the precode at the first stage. At the second stage, the ZDF code generates infinitely many output packets by the following procedure for .

  1. Choose the degree of the -th output packet according to the degree distribution . In other words, choose with probability .

  2. Choose a -tuple of shift amounts in , independently of each other, according to the shift distribution , where denotes the set of integers between and . Define and calculate .

  3. Choose distinct precoded packets uniformly at random. Let denote the -tuple of indexes of the chosen precoded packets. Then the polynomial representation of the -th output packet is given as
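The shift-and-XOR generation above can be sketched as follows, where a shift by s corresponds to multiplying the polynomial representation by z^s, i.e., prepending s zero bits. The bit-list packet format and the toy values are assumptions for illustration.

```python
def zdf_output_packet(precoded, shifts, indexes):
    """Hypothetical sketch: each chosen precoded packet (a list of bits)
    is shifted by prepending zeros (multiplication by z^s in the
    polynomial representation), then the shifted packets are XORed.
    The output length is the packet length plus the maximum shift."""
    length = len(precoded[0]) + max(shifts)
    out = [0] * length
    for s, i in zip(shifts, indexes):
        # prepend s zeros, pad the tail so all summands have equal length
        shifted = [0] * s + precoded[i] + [0] * (length - s - len(precoded[i]))
        out = [a ^ b for a, b in zip(out, shifted)]
    return out

# Toy example: two 4-bit precoded packets, shift amounts 0 and 1.
precoded = [[1, 0, 1, 1], [0, 1, 1, 0]]
y = zdf_output_packet(precoded, shifts=[0, 1], indexes=[0, 1])
# y = [1,0,1,1,0] XOR [0,0,1,1,0] = [1,0,0,0,0]
```

Note that the output packet is slightly longer than a precoded packet; this small expansion is the price paid for the zigzag structure exploited by the bit-wise PA.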

II-B Factor graph generated by the receiver

Let be the received packets for a receiver, where . Similarly to the Raptor codes, each receiver constructs a factor graph from the precode and the received packets. The generated factor graph depends on the receiver, since the set of received packets does.

The factor graph for a ZDF code is composed of labeled edges and four kinds of nodes: variable nodes representing the precoded packets , check nodes of the precode , variable nodes representing the received packets , and factor nodes of the inner code .

The edge connections between and are determined by the precode . More precisely, and are connected by an edge labeled 1 if and only if the -th entry of is equal to 1. The edge connections between and are determined by the header of the -th received packet. If the header of the -th received packet represents and , an edge labeled connects and for . We denote the label on the edge connecting and by . For , an edge connects and . Denote the set of indexes of the variable nodes adjacent to the -th factor node by .
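A minimal sketch of this construction follows, under the assumption that the precode is given by its parity check matrix and that each received packet's header lists (precoded packet index, shift amount) pairs. All names and the data layout are illustrative, not the representation used in [5].

```python
def build_factor_graph(headers, precode_H):
    """Hypothetical sketch: build the edge sets of the factor graph.
    precode_H[m][v] is the (m, v) entry of the precode's parity check
    matrix; headers[j] lists (v, shift) pairs naming the precoded packets
    XORed into the j-th received packet and their shift amounts."""
    check_edges = []   # (check node m, variable node v); label is always 1
    for m, row in enumerate(precode_H):
        for v, bit in enumerate(row):
            if bit == 1:
                check_edges.append((m, v))
    factor_edges = []  # (factor node j, variable node v, shift label)
    for j, header in enumerate(headers):
        for v, shift in header:
            factor_edges.append((j, v, shift))
    return check_edges, factor_edges

# Toy example: one parity check over packets 0 and 1; one received packet
# formed from packet 0 (shift 0) and packet 2 (shift 1).
H = [[1, 1, 0]]
headers = [[(0, 0), (2, 1)]]
checks, factors = build_factor_graph(headers, H)
# checks = [(0, 0), (0, 1)], factors = [(0, 0, 0), (0, 2, 1)]
```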

II-C Decoding algorithm for the ZDF codes

The decoding algorithm of the ZDF codes is a two-stage algorithm. At the first stage, the packet-wise PA works on the factor graph of the ZDF code. The details of the packet-wise PA are given in [5]. If the packet-wise PA does not succeed, the bit-wise PA works on the residual graph. In this section, we explain the original bit-wise PA [7].

In the decoding of the ZDF codes, the -th factor node has a memory of length , denoted by . Let be the set of indexes of the variable nodes which are not recovered by the packet-wise PA. The residual graph is the subgraph composed of the variable nodes in and their connecting edges.

Now, we explain the factor node processing of the bit-wise PA. Let be the vector representation of packet . Denote the mapping of the factor node processing by . The mapping updates the memory of the -th variable node by the -th factor node as follows, where the four component mappings , , , and are given in [7].

Bit-wise decoding is performed by the procedure of Algorithm 1.

0:  Residual graph , values of memories , and precoded packets
0:  precoded packets
1:  ,
2:  
3:  for ,  do
4:      
5:  end for
6:  
7:  if  then
8:       and go to Step 3
9:  end if
Algorithm 1 Bit-wise decoding (existing algorithm)

If Algorithm 1 outputs for all , the decoding succeeds. Otherwise, decoding fails.
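Although the component mappings are omitted above, the effect of the shift in the bit-wise stage can be illustrated by the classical two-packet zigzag case: when y1 = a XOR b and y2 = a XOR (b shifted by one bit), the shift exposes one bit in the clear, and the remaining bits unravel one at a time. This is a hypothetical minimal example of the principle, not the general factor node processing of Algorithm 1.

```python
def zigzag_decode(y1, y2, n):
    """Two-packet zigzag sketch: y1 = a XOR b, and y2 = a XOR (b shifted
    by one position). The shift leaves a leading 0 in the shifted b, so
    y2[0] reveals a[0]; the rest of a and b alternate, one bit per step."""
    a = [None] * n
    b = [None] * n
    a[0] = y2[0]  # the shifted b contributes a leading 0 to y2
    for k in range(n):
        b[k] = y1[k] ^ a[k]              # peel a[k] out of y1
        if k + 1 < n:
            a[k + 1] = y2[k + 1] ^ b[k]  # peel the shifted b out of y2
    return a, b

# Toy example with 4-bit packets.
a = [1, 0, 1, 1]
b = [0, 1, 1, 0]
y1 = [x ^ z for x, z in zip(a, b)]             # a XOR b
y2 = [x ^ z for x, z in zip(a, [0] + b[:-1])]  # a XOR (b shifted by 1)
assert zigzag_decode(y1, y2, 4) == (a, b)
```

Without the shift (y2 = y1), no bit is exposed and the packet-wise PA stalls; this is exactly the extra decoding power the ZDF construction buys.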

III Observation of the bit-wise PA and Proposed Decoding Algorithm

We refer to the process of Step 4 in Algorithm 1 as a decoding process. We refer to the edges which recover bits of the precoded packets as updating edges. In other words, the updating edges contribute to the decoding at a given decoding round. In the original bit-wise PA, since all the factor nodes are processed, the number of decoding processes per iteration is equal to the total number of edges of the factor graph generated by the receiver. Hence, the decoding process is performed on many edges which do not contribute to the decoding.

In this section, we observe the original bit-wise decoding. As a result, we ascertain that only particular edges contribute to the decoding.

Roughly speaking, the proposed bit-wise decoding algorithm generates the set of edges used for updating variable nodes and records those edges in a list in the proper order. After that, the decoding process is performed only on the edges in the list.

Section III-A shows, by an evaluation of the original bit-wise PA, that only particular edges contribute to the decoding. Section III-B gives the proposed decoding algorithm, which reduces the number of decoding processes per iteration.

III-A Decoding Process of the original bit-wise PA

In this section, we use the shift distribution . As a precode, we employ a (3,30)-regular LDPC code. The degree distribution for the inner code is . An edge is called active if it has ever become an updating edge.

Figure 1 displays the number of decoding processes, the number of updating edges, and the number of active edges under the original bit-wise PA with and . The horizontal axis of Fig. 1 represents the number of iterations. The symbol # in Fig. 1 stands for “the number of”. From the curve of active edges in Fig. 1, we see that only particular edges contribute to the decoding.

Fig. 1: Number of decoding processes, updating edges, and active edges per iteration (existing algorithm)

III-B Proposed Algorithm

Let (resp. ) be the set (resp. list) of edges which contribute to the decoding by the original bit-wise PA. In other words, represents the set of active edges. The proposed decoding algorithm is divided into three stages. At the first stage, the decoder executes the decoding process for all the edges and makes by adding the edges which contribute to the decoding of the precoded packets. At the second stage, the decoder executes the decoding process for the edges in and makes by recording the order of the edges which contribute to the decoding. At the final stage, the decoding process is performed on the edges in the order of the list . If an edge does not contribute to the decoding, it is deleted from the list .
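The three stages can be sketched abstractly as follows, assuming a callback process(e) that performs one decoding process on edge e and reports whether it contributed. All names and loop bounds are assumptions; the actual stopping and restart rules of Algorithm 2 are more involved.

```python
def scheduled_decoding(edges, process, first_iters, second_iters):
    """Hypothetical sketch of the three-stage schedule. process(e) runs
    one decoding process on edge e and returns True when it recovers at
    least one new bit (i.e., the edge contributes to the decoding)."""
    contributing = []                 # stage-1 set of edges that ever contributed
    for _ in range(first_iters):      # stage 1: process every edge
        for e in edges:
            if process(e) and e not in contributing:
                contributing.append(e)
    order = []                        # stage-2 list: contributing edges in order
    for _ in range(second_iters):     # stage 2: process only the recorded edges
        for e in contributing:
            if process(e):
                order.append(e)       # an edge may be recorded several times
    while order:                      # stage 3: replay the list, pruning idle edges
        made_progress = False
        for e in list(order):
            if process(e):
                made_progress = True
            else:
                order.remove(e)       # drop edges that no longer contribute
        if not made_progress:
            break
    return contributing

# Toy run: only edge 2 contributes, and only for its first three processes.
calls = {}
def toy_process(e):
    calls[e] = calls.get(e, 0) + 1
    return e == 2 and calls[e] <= 3

active = scheduled_decoding([1, 2, 3], toy_process, first_iters=1, second_iters=1)
# active == [2]: the non-contributing edges 1 and 3 are processed only in stage 1
```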

Remark 1

At the second stage, the decoder makes only one list . The -th element of the list stores the -th contributing edge over the whole second stage. Hence, an edge may be recorded several times in .

Next, we explain the parameters used in the proposed algorithm. Let and represent the size of and the length of , respectively. The number of iterations in the second stage is denoted by . We use the time and the vector to reduce the time to make . The element of the vector indicates whether the -th variable node connects to an edge in . More precisely, if the -th variable node is already recovered or can be updated by an edge in ; otherwise, . The function determines whether a variable node has been recovered, namely,

We denote the maximum number of decoding iterations at the first stage by . At the first stage, if there exists such that until the -th iteration, then we expect that the -th precoded packet will not be recovered. Hence, in such a case, the decoder halts and outputs a decoding failure.

Remark 2

The time is used as a decoding timeout. A small reduces the average decoding time but slightly degrades the decoding performance. Conversely, a large causes a large average decoding time but does not degrade the decoding performance. We confirm these effects in Section IV-A.

The details of the proposed algorithm are given in Algorithm 2. Steps 3–20 give the first stage of decoding. Step 16 decides whether to stop decoding. Step 18 decides whether the construction of is sufficient. Steps 22–37 give the second stage of decoding. Steps 38–49 give the final stage of decoding. At the final stage, the edges that do not contribute to the decoding are deleted. In Step 51, if there exists an unrecovered precoded packet, then decoding restarts from the first stage; this process improves the decoding performance.

0:  Residual graph , values of memories , precoded packets , time , and time
0:  precoded packets
1:  , , ,
2:  ,
3:  
4:  for ,  do
5:      
6:      if  then
7:          
8:          if  then
9:              
10:              
11:          end if
12:      end if
13:      
14:  end for
15:  
16:  if  or  then
17:      Decoding halts.
18:  else if  then
19:      Go to Step 3.
20:  end if
21:  ,
22:  
23:  for  do
24:      Set s.t. 
25:      
26:      if  then
27:          ,
28:          
29:      end if
30:      
31:  end for
32:  
33:  if  or ( and then
34:       and go to Step 2.
35:  else if  then
36:      Go to Step 22.
37:  end if
38:  
39:  for  do
40:      Set s.t. 
41:      
42:      if   then
43:          Delete from .
44:      end if
45:      
46:  end for
47:  
48:  if  then
49:      Go to Step 38
50:  else if  then
51:      Go to Step 2
52:  end if
Algorithm 2 Scheduled bit-wise decoding

If Algorithm 2 outputs for all , the decoding succeeds. Otherwise, decoding fails.

IV Simulation Results

In this section, we evaluate the performance of the proposed algorithm. Section IV-A evaluates the decoding erasure rates. Section IV-B compares the numbers of decoding processes. Section IV-C gives the decoding times. The parameters (i.e., , and ) used in this section are the same as in Section III-A. The times are given by .

IV-A Decoding Erasure Rate

Fig. 2: Comparison of decoding erasure rate of the proposed algorithm with existing one (Original)

The decoding erasure rate (DER) is the fraction of the trials in which some bits in the precoded packets are not recovered.

Figure 2 displays the DERs for the proposed algorithm and the existing algorithm with . The horizontal axis of Fig. 2 represents the packet overhead . The curve labeled “Original” in Fig. 2 gives the DER of the existing algorithm, and the curves labeled “Proposed Small ” and “Proposed Large ” in Fig. 2 give the DERs of the proposed method in the case of and the case of , respectively. As shown in Fig. 2, all the DERs are nearly equal.

As shown in Remark 2, is a parameter for the timeout, and in the case of , the decoding performance is slightly degraded compared with the existing one.

IV-B Number of Decoding Processes

In this section, we first evaluate the number of decoding processes and the number of updating edges at each iteration for the proposed algorithm. Next, we compare the numbers of decoding processes of the bit-wise decoding for each overhead.

Fig. 3: Number of decoding processes at each iteration for the proposed algorithm ()

Figure 3 displays an example of the number of decoding processes and the number of updating edges at each iteration for the proposed algorithm. The horizontal axis of Fig. 3 represents the decoding iteration. As shown in Fig. 3, from the 12th iteration to the 31st iteration, the number of decoding processes per iteration is significantly reduced, because the decoding processes are executed only on the edges in . After the 32nd iteration, the proposed algorithm executes the decoding process on the edges in . From Fig. 3, we see that the number of updating edges is almost equal to the number of decoding processes after the 32nd iteration. Hence, we conclude that the proposed algorithm constructs the edge list well. Moreover, the number of iterations required for decoding is smaller than that of the existing algorithm. Therefore, the proposed algorithm performs fewer decoding processes than the existing algorithm.

Next, we evaluate the numbers of decoding processes of the existing algorithm and the proposed algorithm for each overhead . Figure 4 compares the number of decoding processes of the existing algorithm with that of the proposed algorithm under . The horizontal axis of Fig. 4 represents the overhead . From Fig. 4, the number of decoding processes of the proposed algorithm is significantly smaller than that of the existing algorithm.

Fig. 4: Comparison of the number of decoding processes for the proposed algorithm with existing one (Original)

IV-C Decoding Time

In the evaluation of this section, we perform 10000 trials for each overhead . In this simulation, we use Ubuntu 16.04 as the OS, an Intel(R) Core(TM) i7-4770 CPU @ 3.40 GHz, and 4 GB of DDR3 memory. Figure 5 displays the decoding times of the existing algorithm and the proposed algorithm with . The horizontal axis of Fig. 5 represents the overhead . As shown in Fig. 5, the decoding time of the proposed algorithm is much shorter than that of the existing one.

Fig. 5: Comparison of the decoding time for the proposed algorithm with existing one (Original)

V Conclusion

In this paper, we have proposed an efficient bit-wise decoding algorithm for ZDF codes. Simulation results show that the proposed algorithm drastically reduces the decoding time compared with the existing algorithm.

Acknowledgment

This work was supported by JSPS KAKENHI Grant Number 16K16007.

References

  • [1] J.W. Byers, M. Luby, M. Mitzenmacher, and A. Rege, “A digital fountain approach to reliable distribution of bulk data,” ACM SIGCOMM Computer Communication Review, vol.28, no.4, pp.56–67, 1998.
  • [2] A. Shokrollahi, “Raptor codes,” IEEE Transactions on Information Theory, vol.52, no.6, pp.2551–2567, 2006.
  • [3] M. Luby, “LT codes,” Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science, pp.271–280, IEEE, 2002.
  • [4] M.G. Luby, M. Mitzenmacher, M.A. Shokrollahi, D.A. Spielman, and V. Stemann, “Practical loss-resilient codes,” Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing, pp.150–159, ACM, 1997.
  • [5] T. Nozaki, “Zigzag decodable fountain codes,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol.100, no.8, pp.1693–1704, 2017.
  • [6] T. Nozaki, “Reduction of decoding iterations for zigzag decodable fountain codes,” 2016 International Symposium on Information Theory and Its Applications (ISITA), pp.601–605, Oct 2016.
  • [7] T. Nozaki, “Fountain code based on triangular coding,” Technical report of IEICE, vol.113, no.228, pp.31–36, 2013.