I Introduction
On the Internet, each message is split into several packets. We regard packets which are not correctly received as erased. Hence, the Internet is modeled as a packet erasure channel. The sender cannot retransmit packets in the case of the user datagram protocol (UDP).
Fountain codes [1] are erasure correcting codes which realize reliable communication via UDP, in particular for multicasting. We assume that the original message is split into source packets. In a fountain coding system, the sender generates a potentially endless stream of output packets from the source packets. Each receiver decodes the original message once it has received slightly more packets than the number of source packets; the relative excess is referred to as the packet overhead. Hence, in a fountain coding system, the receiver need not request retransmissions.
Raptor codes [2] are fountain codes which achieve arbitrarily small packet overhead with linear-time encoding and decoding algorithms. Encoding of a Raptor code is divided into two stages. At the first stage, the encoder generates precoded packets from the source packets by using a precode, which is a high-rate erasure correcting code, e.g., a low-density parity-check (LDPC) code. At the second stage, the encoder generates output packets from the precoded packets by using an LT code [3]. More precisely, each output packet is generated as the bitwise exclusive OR (XOR) of randomly chosen precoded packets. Decoding of Raptor codes proceeds similarly to that of LDPC codes over the binary erasure channel. In other words, the decoder constructs a factor graph from the received packets and the parity check matrix of the precode, and recovers the precoded packets by using the peeling algorithm (PA) [4].
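As an illustration, the LT-coded second stage can be sketched as follows; the function name and packet representation are illustrative assumptions, not the notation of [2].

```python
import random

def lt_output_packet(precoded, degree_dist, rng=random):
    """Generate one LT-coded output packet: the bitwise XOR of
    d precoded packets chosen uniformly at random.

    precoded    -- list of equal-length bytes objects (precoded packets)
    degree_dist -- dict mapping degree d -> probability
    """
    # Sample the output degree d from the degree distribution.
    degrees, probs = zip(*degree_dist.items())
    d = rng.choices(degrees, weights=probs, k=1)[0]
    # Choose d distinct precoded packets uniformly at random.
    idx = rng.sample(range(len(precoded)), d)
    # XOR the chosen packets together, byte by byte.
    out = bytes(len(precoded[0]))
    for i in idx:
        out = bytes(a ^ b for a, b in zip(out, precoded[i]))
    return idx, out  # header (chosen indexes) and payload
```

The returned index tuple plays the role of the packet header, from which the receiver later rebuilds the factor graph.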
Zigzag decodable fountain (ZDF) codes [5] are a generalization of Raptor codes. Similarly to Raptor codes, encoding of a ZDF code is divided into two stages. At the first stage, the encoder generates precoded packets from the source packets by using a precode. At the second stage, the encoder generates output packets from the precoded packets in the following way: the encoder randomly chooses precoded packets and their shift amounts, applies the bit-level shifts to the chosen precoded packets, and computes the bitwise XOR of the shifted precoded packets.
A decoding algorithm for ZDF codes is also a two-stage algorithm. Similarly to Raptor codes, a factor graph for the ZDF code is constructed before starting the decoding algorithm. At the first stage, a packetwise PA works on the factor graph and recovers the precoded packets packet by packet. If the packetwise PA does not fully succeed, the remaining precoded packets are decoded by the bitwise PA, which is the PA over the bitwise representation of the factor graph.
As shown in [5], ZDF codes outperform Raptor codes in terms of packet overhead. However, the decoding algorithm for ZDF codes requires a long decoding time. The purpose of this research is to propose a fast decoding algorithm for ZDF codes. As a related work, [6] slightly reduces the number of decoding iterations.
In this paper, we propose a fast decoding algorithm for ZDF codes by reducing the number of decoding processes in the bitwise PA. Through a numerical example shown in Section III-A, we ascertain that only particular edges contribute to recovering the bits of the precoded packets. The main idea of this work is to execute the decoding processes only for such edges. Moreover, the algorithm makes a list which records the order of the edges that contribute to the decoding and executes the decoding processes in the order of this list. As a result, we significantly reduce the decoding time compared with the existing algorithm.
The rest of the paper is organized as follows. Section II briefly reviews ZDF codes and their existing decoding algorithm. Section III gives a numerical example which shows that only particular edges contribute to the decoding, and proposes a fast decoding algorithm for ZDF codes via scheduling. The simulation results in Section IV show that the proposed algorithm reduces the number of decoding processes and the decoding time compared with the existing algorithm. Section V concludes the paper.
II Preliminaries
This section gives some notation and introduces the encoding and decoding algorithms of ZDF codes. Section II-A explains the encoding of ZDF codes. Section II-B gives factor graph representations of ZDF codes. Section II-C explains the original decoding algorithm [5] of ZDF codes.
II-A Encoding of the ZDF Codes [5]
In the polynomial representation, a packet of L bits x = (x_1, x_2, …, x_L) is written as the polynomial x(z) = x_1 + x_2 z + ⋯ + x_L z^{L-1}; multiplying by z^s then corresponds to a bit-level shift by s positions.
A ZDF code is defined by a precode, a degree distribution for the inner code, and a shift distribution. The shift distribution gives the probability of each shift amount. Similarly to Raptor codes, the ZDF code generates the precoded packets from the source packets by the precode at the first stage. At the second stage, the ZDF code generates infinitely many output packets by repeating the following procedure for each output packet.

1) Choose the degree of the output packet according to the degree distribution for the inner code.
2) Choose a tuple of shift amounts, one per chosen packet, independently of each other according to the shift distribution.
3) Choose distinct precoded packets uniformly at random, the number of which equals the chosen degree, and record the tuple of their indexes in the packet header. In the polynomial representation, the output packet is the sum (bitwise XOR) of the chosen precoded packets, each multiplied by the power of z given by its shift amount.
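The three steps above can be sketched in the polynomial (bit-shift) view as follows; the names and bit-list representation are illustrative assumptions, not the paper's notation.

```python
import random

def zdf_output_packet(precoded_bits, degree, shift_dist, rng=random):
    """Sketch of one ZDF output packet (steps 1-3 above).

    precoded_bits -- list of equal-length bit lists (0/1 entries)
    degree        -- number of packets to combine (step 1 already done)
    shift_dist    -- dict mapping shift amount s -> probability
    """
    shift_vals, probs = zip(*shift_dist.items())
    # Step 2: draw one shift amount per chosen packet, independently.
    shifts = rng.choices(shift_vals, weights=probs, k=degree)
    # Step 3: choose `degree` distinct precoded packets uniformly.
    idx = rng.sample(range(len(precoded_bits)), degree)
    L = len(precoded_bits[0])
    out = [0] * (L + max(shifts))  # shifted packets are longer
    for i, s in zip(idx, shifts):
        # Multiplying by z^s in the polynomial representation is a
        # bit-level shift by s positions; XOR accumulates the sum.
        for j, bit in enumerate(precoded_bits[i]):
            out[j + s] ^= bit
    return idx, shifts, out
```

Note that the output packet is slightly longer than a precoded packet: its length grows by the largest chosen shift amount.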
II-B Factor graph generated by the receiver
Consider the packets received by a particular receiver. Similarly to Raptor codes, each receiver constructs a factor graph from the precode and the received packets. The generated factor graph depends on the receiver, since the set of received packets does.
The factor graph for a ZDF code is composed of labeled edges and four kinds of nodes: variable nodes representing the precoded packets, check nodes of the precode, variable nodes representing the received packets, and factor nodes of the inner code.
The edges between the precoded-packet variable nodes and the check nodes are determined by the precode. More precisely, a variable node and a check node are connected by an edge labeled 1 if and only if the corresponding entry of the parity check matrix of the precode equals 1. The edges between the factor nodes and the precoded-packet variable nodes are determined by the headers of the received packets: if the header of a received packet indicates that a precoded packet was combined with shift amount s, an edge labeled z^s connects the corresponding factor node and variable node. Moreover, each received-packet variable node is connected to its factor node by an edge. For each factor node, the set of indexes of the adjacent precoded-packet variable nodes is determined by the header.
II-C Decoding algorithm for the ZDF codes
The decoding algorithm of ZDF codes is a two-stage algorithm. At the first stage, the packetwise PA works on the factor graph of the ZDF code. The details of the packetwise PA are given in [5]. If the packetwise PA does not fully succeed, the bitwise PA works on the residual graph. In this section, we explain the original bitwise PA [7].
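For intuition, the packetwise peeling stage can be sketched for the shift-free (plain XOR) special case as follows; the data layout and function name are illustrative, and the actual algorithm in [5] additionally handles the shift labels.

```python
def peel(received, k):
    """Packet-wise peeling over XOR-only (zero-shift) packets.

    received -- list of (indexes, payload) pairs, each payload the
                XOR of the precoded packets named in `indexes`
    k        -- number of precoded packets
    Returns the recovered packets (None where unrecovered).
    """
    recovered = [None] * k
    # Residual graph: unknown-neighbor sets plus running payloads.
    nodes = [(set(idx), bytearray(p)) for idx, p in received]
    progress = True
    while progress:
        progress = False
        for unknowns, payload in nodes:
            if len(unknowns) == 1:            # degree-1 factor node
                i = unknowns.pop()
                if recovered[i] is None:
                    recovered[i] = bytes(payload)
                    progress = True
        # Substitute newly recovered packets into the other nodes.
        for unknowns, payload in nodes:
            for i in list(unknowns):
                if recovered[i] is not None:
                    for b, x in enumerate(recovered[i]):
                        payload[b] ^= x
                    unknowns.discard(i)
    return recovered
```

Each iteration peels off every degree-1 factor node and propagates the recovered packet, exactly mirroring erasure decoding of LDPC codes.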
In the decoding of ZDF codes, each factor node maintains a memory whose length matches that of its (shifted) received packet. Let the set of indexes of the variable nodes that are not recovered by the packetwise PA be given. The residual graph is the subgraph composed of these variable nodes and their connecting edges.
Now, we explain the factor node processing of the bitwise PA. Each packet is handled in its bit-vector representation. The factor node processing updates the memory of an adjacent variable node from the factor node memory through a composition of mappings that realize the bit-level shift, the bitwise XOR, and the write-back of newly determined bits; the precise definitions of these mappings are given in [7].
Bitwise decoding proceeds by the procedure of Algorithm 1. If Algorithm 1 outputs all the precoded packets, the decoding succeeds. Otherwise, the decoding fails.
III Observation of the Bitwise PA and Proposed Decoding Algorithm
We refer to the process of Step 4 in Algorithm 1 as a decoding process. We refer to the edges which recover bits of the precoded packets as updating edges. In other words, the updating edges contribute to the decoding at a given decoding round. In the original bitwise PA, since all the factor nodes are processed, the number of decoding processes per iteration equals the total number of edges of the factor graph generated by the receiver. Hence, the decoding process is performed on many edges which do not contribute to the decoding.
In this section, we observe the original bitwise decoding. As a result, we ascertain that only particular edges contribute to the decoding.
Roughly speaking, the proposed bitwise decoding algorithm generates a set of edges used for updating variable nodes and records these edges to a list in a proper order. After that, the decoding process is performed only on the edges in the list.
Section III-A shows, by evaluating the original bitwise PA, that only particular edges contribute to the decoding. Section III-B gives the proposed decoding algorithm, which reduces the number of decoding processes per iteration.
III-A Decoding Process of the original bitwise PA
Throughout this section, the shift distribution and the degree distribution for the inner code are fixed. As a precode, we employ (3,30)-regular LDPC codes. We call an edge active if it has become an updating edge at some iteration.
Figure 1 displays the number of decoding processes, the number of updating edges, and the number of active edges under the original bitwise PA. The horizontal axis of Fig. 1 represents the number of iterations. The symbol # in Fig. 1 stands for “the number of”. From the curve of active edges in Fig. 1, we see that only particular edges contribute to the decoding.
III-B Proposed Algorithm
The proposed decoding algorithm maintains the set of edges that contribute to the decoding under the original bitwise PA, i.e., the set of active edges, together with a list over this set. The algorithm is divided into three stages. At the first stage, the decoder executes the decoding process for all edges and builds the set by adding the edges that contribute to the decoding of the precoded packets. At the second stage, the decoder executes the decoding process only for the edges in the set and builds the list by recording the order in which edges contribute to the decoding. At the final stage, the decoding process is performed on the edges in the order given by the list. Edges that no longer contribute to the decoding are deleted from the list.
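The three stages can be sketched as follows; `process` stands in for the decoding process of Step 4 of Algorithm 1, and the function name and iteration-count parameters are illustrative assumptions.

```python
def scheduled_decode(edges, process, n1, n2, n3):
    """Sketch of the proposed three-stage edge scheduling.

    edges      -- all edges of the residual factor graph
    process    -- process(edge) -> True iff the edge updated some bits
    n1, n2, n3 -- iteration counts for the three stages
    """
    # Stage 1: process every edge; collect the set of active edges.
    active = set()
    for _ in range(n1):
        for e in edges:
            if process(e):
                active.add(e)
    # Stage 2: process only active edges; record contribution order.
    order = []
    for _ in range(n2):
        for e in active:
            if process(e):
                order.append(e)   # an edge may appear several times
    # Stage 3: follow the recorded order, dropping idle edges.
    for _ in range(n3):
        order = [e for e in order if process(e)]
    return order
```

Stages 2 and 3 touch only the (typically small) active subset, which is where the reduction in decoding processes comes from.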
Remark 1
At the second stage, the decoder makes only one list. The i-th element of the list stores the i-th contributing edge over the whole second stage. Hence, an edge may be recorded in the list several times.
Next, we explain the parameters used in the proposed algorithm: the size of the edge set, the length of the edge list, and the number of iterations in the second stage. We also use a timeout parameter and an indicator vector to reduce the time needed to build the edge set. The entry of the indicator vector for a variable node equals 1 if the variable node has already been recovered or can be updated by an edge in the set, and 0 otherwise. A function determines whether a variable node has been recovered.
We also set a maximum number of decoding iterations for the first stage. If some variable node still has indicator 0 at this maximum iteration, we expect that the corresponding precoded packet will not be recovered. Hence, in such a case, the decoder halts and outputs a decoding failure.
Remark 2
The maximum iteration count serves as a timeout for the decoding. A small timeout reduces the average decoding time but slightly degrades the decoding performance. Conversely, a large timeout increases the average decoding time but does not degrade the decoding performance. We confirm these effects in Section IV-A.
The details of the proposed algorithm are given in Algorithm 2. Steps 3–20 give the first stage of decoding. Step 16 decides when to stop decoding. Step 18 decides whether the construction of the edge set is sufficient. Steps 22–37 give the second stage of decoding. Steps 38–49 give the final stage of decoding. At the final stage, the edges that do not contribute to the decoding are deleted. In Step 51, if there exists an unrecovered precoded packet, the decoding restarts from the first stage. This restart improves the decoding performance.
If Algorithm 2 outputs all the precoded packets, the decoding succeeds. Otherwise, the decoding fails.
IV Simulation Results
In this section, we evaluate the performance of the proposed algorithm. Section IV-A evaluates the decoding erasure rates. Section IV-B compares the number of decoding processes. Section IV-C gives the decoding time. The code parameters used in this section are the same as in Section III-A, and two settings of the timeout parameter, small and large, are compared.
IV-A Decoding Erasure Rate
The decoding erasure rate (DER) is the fraction of the trials in which some bits in the precoded packets are not recovered.
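This Monte Carlo estimate can be sketched as follows; the `trial` callback, which stands in for one full encode-decode experiment, is an illustrative assumption.

```python
def decoding_erasure_rate(trial, n_trials, rng):
    """Estimate the DER: the fraction of trials in which some bits
    of the precoded packets remain unrecovered.

    trial    -- trial(rng) -> True iff decoding fully succeeded
    n_trials -- number of independent decoding trials
    """
    failures = sum(0 if trial(rng) else 1 for _ in range(n_trials))
    return failures / n_trials
```

A trial counts as a failure whenever any bit is left unrecovered, so the DER is a strict (per-trial, not per-bit) measure.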
Figure 2 displays the DERs for the proposed algorithm and the existing algorithm. The horizontal axis of Fig. 2 represents the packet overhead. The curve labeled “Original” in Fig. 2 gives the DER of the existing algorithm, and the curves labeled “Proposed Small” and “Proposed Large” give the DERs of the proposed method with the small and the large timeout parameter, respectively. As shown in Fig. 2, all the DERs are nearly equal.
As noted in Remark 2, the timeout parameter determines when decoding is abandoned; with the small setting, the decoding performance is slightly degraded compared with the existing one.
IV-B Number of Decoding Processes
In this section, we first evaluate the number of decoding processes and the number of updating edges at each iteration of the proposed algorithm. Next, we compare the number of decoding processes in bitwise decoding for each overhead.
Figure 3 displays an example of the number of decoding processes and the number of updating edges at each iteration of the proposed algorithm. The horizontal axis of Fig. 3 represents the decoding iteration. As shown in Fig. 3, from the 12th iteration to the 31st iteration, the number of decoding processes at each iteration is significantly reduced, because decoding processes are executed only for the edges in the edge set. After the 32nd iteration, the proposed algorithm executes the decoding process only for the edges in the edge list. From Fig. 3, we see that the number of updating edges is almost equal to the number of decoding processes after the 32nd iteration. Hence, we conclude that the proposed algorithm constructs the edge list well. Moreover, the number of iterations required for decoding is smaller than that of the existing algorithm. Therefore, the proposed algorithm performs fewer decoding processes than the existing algorithm.
Next, we evaluate the number of decoding processes of the existing algorithm and the proposed algorithm for each overhead. Figure 4 compares the number of decoding processes of the existing algorithm with that of the proposed algorithm. The horizontal axis of Fig. 4 represents the overhead. From Fig. 4, the number of decoding processes of the proposed algorithm is significantly smaller than that of the existing algorithm.
IV-C Decoding Time
In the evaluation of this section, we run 10000 trials for each overhead. In this simulation, we use Ubuntu 16.04 as the OS, an Intel(R) Core(TM) i7-4770 CPU @ 3.40 GHz, and 4 GB of DDR3 memory. Figure 5 displays the decoding times of the existing algorithm and the proposed algorithm. The horizontal axis of Fig. 5 represents the overhead. As shown in Fig. 5, the decoding time of the proposed algorithm is much shorter than that of the existing algorithm.
V Conclusion
In this paper, we have proposed an efficient bitwise decoding algorithm for ZDF codes. Simulation results show that the proposed algorithm drastically reduces the decoding time compared with the existing algorithm.
Acknowledgment
This work was supported by JSPS KAKENHI Grant Number 16K16007.
References
 [1] J.W. Byers, M. Luby, M. Mitzenmacher, and A. Rege, “A digital fountain approach to reliable distribution of bulk data,” ACM SIGCOMM Computer Communication Review, vol.28, no.4, pp.56–67, 1998.
 [2] A. Shokrollahi, “Raptor codes,” IEEE Transactions on Information Theory, vol.52, no.6, pp.2551–2567, 2006.
 [3] M. Luby, “LT codes,” Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science, pp.271–280, IEEE, 2002.
 [4] M.G. Luby, M. Mitzenmacher, M.A. Shokrollahi, D.A. Spielman, and V. Stemann, “Practical loss-resilient codes,” Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing, pp.150–159, ACM, 1997.
 [5] T. Nozaki, “Zigzag decodable fountain codes,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol.100, no.8, pp.1693–1704, 2017.
 [6] T. Nozaki, “Reduction of decoding iterations for zigzag decodable fountain codes,” 2016 International Symposium on Information Theory and Its Applications (ISITA), pp.601–605, Oct 2016.
 [7] T. Nozaki, “Fountain code based on triangular coding,” Technical report of IEICE, vol.113, no.228, pp.31–36, 2013.