I. Introduction
Prefix-free codes are generally used in source coding to transform source letters of an arbitrary probability distribution into code letters of a uniform probability distribution, thereby maximizing entropy per code letter. In this work, we use prefix-free codes for the reverse purpose, i.e., for an operation called distribution matching (DM) that transforms uniformly distributed source letters into code letters of a desired probability mass function (PMF). In particular, we restrict the source symbols to Bernoulli-distributed bits with equal probabilities, and construct prefix-free codes such that the code letters have the minimum average energy among all possible prefix-free codes of the same size for the desired entropy per code letter. We also propose a framing method, with which variable-length prefix-free codewords can always be contained in a fixed-length frame, thereby facilitating the application of prefix-free codes to communications, as will be discussed below.

A prominent application of prefix-free code distribution matching (PCDM) is probabilistic constellation shaping (PCS) for capacity-approaching communications. PCS makes low-energy code letters appear with a higher probability than high-energy code letters, thereby reducing the average transmit energy for the same information rate (IR) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11].
A long-standing challenge of PCS has been the incorporation of channel coding and DM. If PCDM is embedded between channel encoding and decoding (see Fig. 1 (a)), even a single error from the channel causes insertion or deletion of source letters in prefix-free decoding, leading to synchronization errors or catastrophic error propagation, thereby rendering the outer channel coding useless. In a reversed architecture where channel coding is embedded between prefix-free encoding and decoding (see Fig. 1 (b)), a symbol distribution formed by prefix-free encoding is not preserved by the subsequent channel encoding, since (linear) channel encoding generates almost equiprobable code symbols from source symbols of any distribution. There have been approaches to jointly optimizing channel coding and DM [6, 9, 10, 11], but they lack rate adaptability and involve the design of a customized channel coding scheme. Recently, however, an architecture called probabilistic amplitude shaping (PAS) [12] solved the problem. In the PAS architecture, source bits are first encoded by a prefix-free code to produce amplitudes of transmit symbols with a desired probability distribution, as shown in Fig. 1 (c); a subsequent systematic channel encoder then generates parity bits that constitute the signs of the transmit symbols. In this manner, the amplitude distribution remains unchanged by the equiprobable parity bits, and error propagation and synchronization errors do not occur since channel decoding corrects errors before prefix-free decoding. PAS enables separate design of channel coding and DM, and hence allows for the use of off-the-shelf channel codes in conjunction with independently optimized prefix-free codes. In this paper, therefore, the performance of the proposed PCDM in communication systems will be numerically evaluated using the PAS architecture, in comparison with conventional communication schemes where transmit symbols are uniformly distributed.
II. Prior Works
Let be an -ary code alphabet. Also, let and , respectively, denote a set of sourcewords (called a dictionary) that are concatenations of an indefinite number of source letters from alphabet , and a set of codewords (called a codebook) that are concatenations of an indefinite number of code letters from alphabet . A code defines a bijective function that maps every sourceword in to exactly one codeword in such that is uniquely decodable. If a codeword is an ordered concatenation of two non-empty words and , then is called a prefix of the word . A code is a prefix-free code if no codeword is a prefix of another codeword. A prefix-free code is an instantaneously decodable code, whose decoding can be performed as soon as a codeword is found on successive receipt of the code letters. The size of a code is defined as the number of all codewords in the codebook, hence the number of all sourcewords in the dictionary.
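For illustration, the prefix-free property defined above can be checked directly on a small binary codebook; the codebooks in this sketch are arbitrary examples of ours, not taken from the paper:

```python
def is_prefix_free(codebook):
    """Return True if no codeword is a prefix of another codeword."""
    for i, u in enumerate(codebook):
        for j, v in enumerate(codebook):
            if i != j and v.startswith(u):
                return False
    return True

# This codebook is prefix-free, hence instantaneously decodable:
assert is_prefix_free(["0", "10", "110", "111"])
# "0" is a prefix of "01", so this codebook is not prefix-free:
assert not is_prefix_free(["0", "01", "11"])
```

Instantaneous decodability follows because a decoder can emit a sourceword the moment the received letters match a codeword, with no need for look-ahead.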
In the special case where all code letters have equal transmit energies, Huffman codes represent optimal prefix-free codes for a known sourceword distribution, optimal in the sense that no other codes can produce a shorter expected codeword length. In the more general case where code letter energies are not necessarily equal, the prefix-free code construction problem has been addressed in the context of PCS in [13], using Lempel–Even–Cohn (LEC) coding [14] and Varn coding [2]. LEC coding solves the problem when all codewords have an equal length. LEC coding is isomorphic to Huffman coding if we translate the transmit energy of the th codeword into a sourceword probability as , with being a root of to ensure that the PMF sums to 1. A Huffman codebook therefore becomes the LEC dictionary for parsing source bits, mapped to an LEC codebook consisting of equal-length codewords, completing a variable-to-fixed-length (V2F) code. However, LEC coding minimizes the energy per source letter, and the entropy rate is determined by the a priori given Huffman codebook, while we want to minimize the energy per code letter for an arbitrary entropy rate. Varn coding provides another solution when sourcewords have an equal length, hence constructing fixed-to-variable-length (F2V) codes. However, Varn coding requires a non-uniform PMF for the equal-length sourcewords, which must be given a priori to realize a particular entropy rate determined by the PMF. Another approach to constructing F2V codes is to minimize the information divergence between the desired and realized PMFs for a binary code alphabet [15], which also assumes a priori knowledge of the optimal PMF for the desired entropy rate.
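The energy-to-probability translation above can be sketched numerically. Assuming the root condition takes the form of the induced probabilities summing to one, one can solve for the root by bisection; the codeword energies below are illustrative assumptions, not values from the paper:

```python
def lec_probabilities(energies, lo=1.0 + 1e-9, hi=1e6, tol=1e-12):
    """Find z > 1 such that sum(z**-E_i) = 1, then return p_i = z**-E_i."""
    def total(z):
        return sum(z ** -e for e in energies)
    # total(z) is strictly decreasing in z for positive energies,
    # so we can bisect for the root of total(z) = 1.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    z = (lo + hi) / 2
    return [z ** -e for e in energies]

p = lec_probabilities([1.0, 2.0, 3.0, 4.0])
assert abs(sum(p) - 1.0) < 1e-6
# Lower-energy codewords receive higher probability, as PCS requires:
assert p[0] > p[1] > p[2] > p[3]
```

A Huffman code built for this PMF then yields the dyadic sourceword lengths used as the LEC dictionary.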
In the general case where both sourcewords and codewords are allowed to have unequal lengths, there are known algorithms to construct optimal prefix-free codes [3, 16, 17, 18, 19] using, e.g., dynamic programming [18, 19]; however, the resulting codes are optimal in terms of the energy per codeword, while we need a minimum energy per code letter. To the best of our knowledge, none of the existing approaches addressed the problem of constructing prefix-free codes that minimize the energy per code letter for arbitrary desired entropy rates.
III. Prefix-Free Codes for Various Entropy Rates
Let denote the length of sourceword , and let and denote the length and energy of codeword , respectively. Let , , and be random variables taking values on the sets , , and , respectively, all of which are distributed with the PMF . Since our source bits are independent and equiprobable, the probability is dyadic, i.e., with . Examples of size- V2F, F2V, and V2V codes for the unipolar 2-ary amplitude-shift keying (2-ASK) code alphabet are shown in Tab. I, where the V2V code has , , and . The corresponding tree diagrams are illustrated in Fig. 2, where double circles, open circles, and closed circles represent the root nodes, branch nodes, and leaf nodes, respectively, as defined below. A code is formed by concatenating the roots of two ordered trees, called the left and right trees depending on their relative root position. We use the following terminology throughout the paper to describe the structure of a right tree (defined in a similar fashion for a left tree):

Root: The leftmost node of a tree.

Child: A node directly connected to the right of a node.

Parent: The converse of a child.

Siblings: A group of nodes with the same parent.

Branch: A node with at least one child.

Leaf: A node with no children.

Degree: The number of children of a node.

Path: A sequence of nodes and edges connecting a node with another node.

Depth: The depth of a node is the number of edges from the root node to the node, i.e., the path length connecting the root node and the node.

Height: The height of a tree is the longest path length between the root and leaves.

Size: The size of a tree is the number of all leaf nodes of the tree.
A leaf of a left tree represents a sourceword , and a leaf of a right tree represents a codeword . The depth of a leaf equals (in the left tree) or (in the right tree).
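The terminology above can be made concrete with a small data structure. The following sketch is our own illustration (not from the paper), representing an ordered tree as nested lists with `None` for a leaf, and computing the size, height, and sum of leaf depths used later in the construction:

```python
def leaves(tree):
    """Size: the number of leaf nodes (a leaf is represented by None)."""
    if tree is None:
        return 1
    return sum(leaves(child) for child in tree)

def height(tree):
    """Height: the longest root-to-leaf path length, counted in edges."""
    if tree is None:
        return 0
    return 1 + max(height(child) for child in tree)

def sum_depth(tree, depth=0):
    """Sum of the depths of all leaves of the tree."""
    if tree is None:
        return depth
    return sum(sum_depth(child, depth + 1) for child in tree)

# A binary tree with one leaf at depth 1 and two leaves at depth 2:
t = [None, [None, None]]
assert leaves(t) == 3 and height(t) == 2 and sum_depth(t) == 5
```

The leaf depths here correspond to sourceword lengths (left tree) or codeword lengths (right tree), as stated above.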
With the above notations, we can characterize energy and entropy rate of a rootconcatenated tree. By the dyadic PMF of sourcewords and codewords, the expected energy per code letter can be calculated as
(1) 
where denotes expectation with respect to throughout the paper, unless otherwise specified. The entropy rate is defined as the expected number of source bits per code letter
(2) 
Then, the optimal prefixfree coding problem can be written as:
(3)  
subject to  
III-A. V2F Codes
To solve the above problem, we begin with a balanced -ary right tree, i.e., a right tree in which every branch has children and every leaf is at the same depth (see the right tree of Fig. 2 (a)), which represents a codebook of a fixed codeword length for all , with . Without loss of generality, assume that the code letters are chosen from such that can immediately be obtained for the fixed . In this case, (1) and (2) degenerate, respectively, to and , where the last equation holds since is dyadic. Therefore, minimizing the expected energy per code letter is equivalent to minimizing the expected energy per codeword for V2F codes. Also, maximizing subject to is equivalent to maximizing subject to .
If we waive the dyadic constraint on , the entropy-maximizing distribution under an energy constraint is the Maxwell–Boltzmann (MB) distribution [8], represented by a PMF with , where the rate parameter determines the expected energy per code letter. Let denote expectation with respect to a PMF ; then the energy per code letter and entropy rate of optimal V2F codes are given, respectively, by and . Indeed, LEC coding [14] yields a special instance of the MB distribution, for which the rate parameter is given by with being a root of the characteristic condition . The dashed curves in Fig. 3 (a) and (b) show and as strictly monotonically decreasing functions of (this is true in general, see [12, Section 5.C]), obtained from the codewords of Tab. I (a). Due to the monotonicity, the optimal of the MB PMF that maximizes subject to can simply be obtained by the bisection method. Then, an optimal dyadic approximation of can be obtained by Geometric Huffman coding (GHC) [20], i.e., . Notice that, since the code alphabet and the constant completely define the right tree of a V2F code, the optimal can immediately be obtained for an arbitrary desired entropy rate , using a series of operations . The solid lines in Figs. 3 (a) and (b) show all and obtained with and . In this example, there are only five distinct non-zero rates generated by dyadic approximation of .
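The bisection step described above can be sketched as follows. For illustrative code-letter energies (here the squared amplitudes 1, 9, 25, 49 of a unipolar 4-ASK alphabet) and an assumed target entropy rate, we search for the MB rate parameter whose entropy matches the target; the dyadic approximation by GHC is omitted from this sketch:

```python
import math

def mb_pmf(energies, nu):
    """Maxwell-Boltzmann PMF with rate parameter nu."""
    w = [math.exp(-nu * e) for e in energies]
    s = sum(w)
    return [x / s for x in w]

def entropy(p):
    """Entropy in bits of a PMF."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def mb_for_entropy(energies, target, lo=0.0, hi=50.0, tol=1e-10):
    """The MB entropy decreases monotonically in nu, so bisect on nu."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if entropy(mb_pmf(energies, mid)) > target:
            lo = mid
        else:
            hi = mid
    return mb_pmf(energies, (lo + hi) / 2)

p = mb_for_entropy([1, 9, 25, 49], target=1.5)
assert abs(entropy(p) - 1.5) < 1e-6
assert p[0] > p[1] > p[2] > p[3]  # low-energy letters are more probable
```

In the full construction, the resulting MB PMF would then be dyadically approximated by GHC to define the V2F dictionary.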
As we increase the codebook size, the entropy rates created by V2F codes become much more diverse, as shown in Fig. 4 for up to the 16-ASK alphabet (producing up to 1024-QAM in the PAS architecture). Here, we find all V2F codes from the desired entropy rates with a rate granularity . The entropy rates become sparser in relatively low entropy rate regimes, which can be supplemented by allowing variable codeword lengths, as discussed in Section III-C. The energy efficiency of the constructed V2F codes is also shown in Fig. 4, evaluated by the ratio of the energy per code letter of the prefix-free code to that of the ideal MB-distributed code letters for the same entropy rate, i.e., , which we call the energy gap. It can be seen that the energy gap is below 0.1 dB across a wide range of entropy rates with , or even with for .
III-B. F2V Codes
For F2V codes, we now consider a balanced binary left tree representing a dictionary of fixed sourceword length for all , with , such that all are parsed with equal probabilities . In this case, the expected energy per code letter and entropy rate of a code simplify, respectively, to
(4) 
and
(5) 
with sum depth and sum energy of the tree being and , respectively.
We do not know of any existing method that can solve (3) with (4) and (5) for an arbitrary desired . In this paper, therefore, instead of directly solving the problem for a particular , we try to identify a set of trees that produces all possible sum depths under a finite tree size constraint, hence realizing all possible according to (5). The constructed trees are optimal in the sense that they have the minimum sum energy among all trees with the same sum depth , where a tree is a tree in which every node except the leaves has a degree not smaller than . Minimum sum energy of a tree leads to minimum energy per code letter for a given , and for a given , due to (4) and (5). Indeed, any tree is allowed for a right tree; however, we restrict the type of trees to trees to avoid infinite tree expansion (see Fig. 5 (a) for an example). In Fig. 5, and throughout the paper, denotes a tree that has leaves, is a tree whose sum depth is , and is a tree whose sum energy is . Also, let and be the sets of all trees and , respectively. Then, by (4) and (5), all trees in lead to the same but not necessarily the same . Since distinct sum depths generate distinct , we have as many optimal trees as the number of distinct sum depths of trees in , where an optimal tree with sum depth is defined as . A brute-force search of in requires exponential time in . There are known problems that are isomorphic to the problem of counting the number of all trees in , including the parenthesizations counting problem [21, Ch. 15.2]. For example, the trees in of Fig. 5 (b) have isomorphic representations of , , , , and in order, in which the edges connecting two siblings are parenthesized together in a recursive manner from the largest depth of the tree. The number of parenthesizations for is , with being the Catalan number defined as [22, 23], which grows as [21, p. 333].
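The Catalan-number growth quoted above can be checked directly; a minimal sketch using the closed form C_n = binom(2n, n)/(n + 1):

```python
import math

def catalan(n):
    """Catalan number C_n = binom(2n, n) / (n + 1)."""
    return math.comb(2 * n, n) // (n + 1)

# The first Catalan numbers: 1, 1, 2, 5, 14, 42, ...
assert [catalan(n) for n in range(6)] == [1, 1, 2, 5, 14, 42]
# The sequence grows exponentially (roughly like 4**n up to a polynomial
# factor), which is why brute-force enumeration of trees is intractable.
assert catalan(20) == 6564120420
```

This exponential growth motivates the dynamic programming approach described next.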
In order to reduce the search space, we take a dynamic programming approach, as suggested in [18, 19] to find a prefixfree code for a known codeword PMF.
We begin from a trivial size-1 tree that has only a root, then construct larger trees by appending smaller trees to the root, as shown in Fig. 6. A tree formed by appending subtrees satisfies the following relations:
(6)  
(7)  
(8) 
where the last two equations hold since the th edge from the root should be taken into account times to calculate the sum depth and sum energy of the tree. For example, due to (6), a tree with and the ASK alphabet (hence ) can be constructed from the tuples: for , for , and for . Here, we do not need to use all subtrees of size to construct a set of size- trees, since we have an optimal substructure property:
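Under assumed notation (the paper's symbols are not reproduced here), writing each subtree as a triple (L_k, D_k, E_k) of size, sum depth, and sum energy, and e_k for the energy of the code letter on the root's k-th edge, one plausible reading of relations (6)-(8) can be sketched as:

```python
def append_subtrees(subtrees, letter_energies):
    """Combine subtrees attached to the root's edges into one tree.

    Each subtree is a triple (L_k, D_k, E_k).  Traversing the k-th root
    edge adds 1 to the depth of each of the subtree's L_k leaves, and adds
    the k-th code letter's energy e_k once per leaf, which is why the edge
    is counted L_k times in the sum depth and sum energy.
    """
    L = sum(l for l, _, _ in subtrees)
    D = sum(d + l for l, d, _ in subtrees)
    E = sum(e + ek * l for (l, _, e), ek in zip(subtrees, letter_energies))
    return L, D, E

# Two single-leaf subtrees (L=1, D=0, E=0) on edges with letter energies
# 1 and 9 form the depth-1 tree with two leaves: L=2, D=2, E=10.
assert append_subtrees([(1, 0, 0), (1, 0, 0)], [1, 9]) == (2, 2, 10)
```

Because larger trees are characterized entirely by these three aggregates, the dynamic program below only needs to store (size, sum depth, sum energy) triples rather than whole trees.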
Theorem 1.
An optimal tree contains only optimal subtrees in it.
Proof.
Suppose that an optimal tree has a th subtree . If the subtree is not an optimal tree, then there exists a subtree with . By replacing the subtree with in the tree , we can obtain a new tree that has a smaller sum energy than the optimal tree, i.e., with . This is a contradiction; hence an optimal tree consists of only optimal subtrees. ∎
It can easily be seen that the height of is minimized when its leaves have maximally uniform depths. The depths can differ from each other by at most one, and the minimum height is given by for the ASK alphabet. The sum depth of a size- minimum-height tree is calculated as , which is equal to the smallest sum depth of all trees in . If is a positive integer power of , the minimum sum depth can be simplified as . On the other hand, if there are only two nodes at every depth of a tree (see, e.g., the last tree of Fig. 5 (b)), the tree has a height that is the largest of all trees in . The sum depth in this case is calculated as . After some manipulation, it can be seen that the sum depth of a maximum-height tree is also the maximum sum depth of all trees in . Therefore, if we store only one optimal tree in for each , the size of is upper-bounded by ; i.e., grows as . Since there are at most choices of the optimal subtree sets (ranging from an empty set to ) for each of the edges of the root when enumerating all trees of size (using the optimal subtree property), the number of all choices to build is of complexity . As mentioned above, each of the subtree sets is of size at most , hence we have a search space of size to identify . The search time is polynomial in and exponential in , indicating the intractability of constructing F2V codes for a large alphabet.
To further reduce the search space, we can exploit the fact that a larger subtree is appended as far left as possible from the root of an optimal tree; i.e.,
Theorem 2.
Let for constitute the th subtree of an optimal tree such that the root of is the th child of the root of , then the subtrees satisfy .
Proof.
Let and denote two of the subtrees of an optimal tree with , and assume . Then, by exchanging the two subtrees and , we can construct another tree of the same size and the same sum depth due to (6) and (7), which has a smaller sum energy than by (8). This is a contradiction; hence an optimal tree must have for any . ∎
For example, some trees in can be generated from three subtrees that have , , or , but we can discard the latter two triplets by Theorem 2.
We construct one optimal F2V code for each of the entropy rates that can be realized with for and . As shown in Fig. 7, F2V codes offer a greater diversity of entropy rates than V2F codes with the same size, albeit with a slightly larger energy gap. Therefore, a natural consequence is to construct V2V codes to exploit the complementary merits of V2F and F2V codes, as will be illustrated henceforth.
III-C. V2V Codes
We readily have a set of optimal right trees, representing optimal F2V codes. From this, without imposing any fixed-length constraints on sourcewords and codewords, we can enumerate near-optimal V2V codes of size in the following manner:

For each of the desired entropy rates , where for some integer and the rate granularity , and for each of the optimal right trees in , identify the optimal PMF that achieves with the minimum energy per code letter.

For every obtained in Step 1, construct a set of optimal left trees representing that minimizes . The trees in and have onetoone correspondence.

For each of , choose a pair of left and right trees in and that yield the minimum energy per code letter with a rate discrepancy for a small .
Indeed, an optimal V2V code does not necessarily consist of an optimal right tree, hence an exhaustive search of the left tree should be performed over all right trees in . However, due to the exponential growth of in , we restrict the search space to optimal right trees in ; under this constraint, the constructed V2V codes show surprisingly good performance with a very small .
Assume that a right tree is chosen from (hence and are given). Then, from (2), we have . Let , , then we have an equivalence relation by definition as
(9) 
Therefore, by waiving the dyadic constraint on , the optimization problem of (3) translates into
(10)  
subject to  (11)  
(12) 
Let denote the minimum energy found by solving (10)(12), then, the optimal PMF is given by the solution of a convex optimization problem
(13) 
since
(14) 
where the equality in the righthand side equation holds if and only if .
Also, the constraint functions in (11) and (12) are concave and affine, respectively, hence the problem can efficiently be solved by a convex optimization solver such as CVX [24, 25], on condition that is known. Since we do not know , we can make an initial guess using a PMF , then attempt to reduce the error between the energy estimate of iteration and the true minimum energy in an iterative manner. This initial PMF maximizes in order to satisfy the rate condition (9), where the maximization of by the initial guess can be proven by using a Lagrangian . By the Karush–Kuhn–Tucker (KKT) conditions [26, Ch. 5.5.3], a PMF maximizes if the gradient of the Lagrangian vanishes at . Since is a PMF, we have that in the last equation, hence the PMF that maximizes is given by the initial guess . If this does not fulfill the rate condition (9), then no other PMF can satisfy it, hence the corresponding right tree should be discarded. Otherwise, we can solve the convex optimization problem iteratively until the estimation error at iteration falls within a termination threshold , as shown in Algorithm 1.
Theorem 3.
Given a right tree , Algorithm 1 produces a PMF that is asymptotically optimal in the iteration for an entropy rate , such that approaches .
Proof.
The proof follows a similar structure to that of Proposition 4.5 in [27, Sec. 4.3.1]. Suppose that Algorithm 1 terminates at iteration . Then, converges to as increases, since the termination condition on Line 5 is not satisfied at any , hence for , indicating that is a monotonically decreasing function of , while is lower-bounded by . Furthermore, the converged energy is indeed the minimum energy within a small error. To see this, let be the optimal value of the objective function at iteration such that for any PMF , where the equality holds if and only if . Then, , since for any PMF , where the equality holds if and only if by (14). Also, since , the termination of Algorithm 1 at iteration implies . This proves that Algorithm 1 can closely approach the minimum energy by choosing a sufficiently small , since and is a necessary and sufficient condition for . ∎
In our V2V code construction, Algorithm 1 is terminated mostly in 5 iterations with .
Once the optimal PMF is identified for every optimal right tree and every desired entropy rate , an optimal dyadic estimate can be obtained as , which creates a left tree. Then, among all pairs of left and right trees so constructed, a pair can be chosen for each entropy rate that achieves the minimum .
Figure 8 shows all V2V codes enumerated with the rate granularity and the rate tolerance for the 2-ASK and 4-ASK alphabets with . For the 2-ASK alphabet, only is required to construct V2V codes that approach the theoretic minimum energy to within 0.05 dB across a wide range of entropy rates. For the 4-ASK alphabet, V2V codes are still within 0.1 to 0.2 dB of the limit across a wide range of entropy rates, although the obtained entropy rates are sparser and tends to grow with .
Some codes selected from Figs. 4 and 8 are shown in Fig. 9, whose entropy rates range from 0.15 to 3.83 in steps of . This shows that we can approach the theoretic minimum energy per code letter to within 0.13 dB across a wide range of entropy rates for up to 1024-QAM, using a codebook size not larger than 256.
IV. Framing for Fixed-Rate Transmission
A fundamental problem of PCDM is that the number of output code letters varies depending on the input patterns; i.e., the entropy rate of a prefix-free code is inherently probabilistic, whose mean value approaches asymptotically in the number of encoding iterations. Application of PCDM to data transmission is critically hampered by this, since it can cause buffer over- or underflow, insertion or deletion of bits that causes synchronization errors, and unbounded error propagation in the absence of resynchronization [8]. Framing in this paper refers to a method that enables a fixed-length frame to always contain a fixed number of information bits, thereby solving the variable-length problem of PCDM. In [28, Sec. 4.8], framing is performed by casting overflow symbols into errors; the probability of error decreases as the frame length grows, but zero-error prefix-free decoding is not possible within a finite-length frame. To the best of our knowledge, there is no framing method known to date that allows unique error-free decoding of prefix-free codes.
IV-A. Algorithm for Framing
To achieve a fixed entropy rate in each fixed-length frame for the ASK alphabet, we use two different codes: a prefix-free code with a probabilistic entropy rate whose mean value is close to , and a trivial code with a constant entropy rate (i.e., is a typical fixed-to-fixed-length mapper for uniform ASK). The idea is that we begin encoding with and then switch to at some point of the successive encoding process if an overflow is predicted. If we keep counting the numbers of input bits and output symbols encoded by in the first part of the encoding process, we can also calculate the number of symbols required to encode all the remaining input bits if we use instead of from that point onward, thereby predicting a potential overflow. In what follows, we will show that this prediction can be made in a way that makes unique decoding possible, and that the penalty due to this switching is small.
Let (bits) and (symbols) be fixed input and output frame lengths, respectively. Then, framing enables PCDM to achieve a fixed entropy rate , where
(15) 
in each frame. This shows that, for the chosen and , additional rate adaptability is also offered by framing, albeit with a small loss of energy efficiency. Let and denote the lengths of the sourceword and codeword of at encoding iteration , respectively; then the cumulative sourceword and codeword lengths at iteration can be calculated, respectively, as and .
Assume that the use of up to iteration was assured not to cause overflow as long as we switch to from the next iteration onwards, hence we have used up to iteration . Then, at the next iteration , in order to verify that still does not cause overflow, we need to ensure that
(16) 
where and define the available and required numbers of symbols, respectively, in case we switch to after encoding with at . Codebook is used for encoding at iteration if condition (16) is fulfilled, and otherwise from iteration onwards. Notice that (16) can be evaluated only after seeing the incoming bit pattern at iteration to obtain and . This makes unique decoding impossible, since, assuming that unique decoding was successfully performed up to iteration at the receiver such that and are known for all , decoding cannot identify which codebook was used at iteration due to the lack of knowledge of and . This suggests that, for unique decoding, the codebook at iteration must be identified without relying on and , which can be realized by using a bounding technique. Namely, under the pessimistic assumption that the shortest input bits are mapped to the longest output symbols defined in at iteration , we can find the lower bound of the available symbols and the upper bound of the required symbols as
(17) 
and
(18) 
where and . Now, without knowledge of and , the overflow-preventing condition (16) can be conservatively examined by
(19) 
since and .
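Under assumed notation (K input bits and N output symbols per frame, beta bits per symbol for C0, l_min the shortest sourceword length and n_max the longest codeword length of C1; all symbols here are our assumptions, since the paper's notation is not reproduced), a conservative switching test in the spirit of (19) can be sketched as:

```python
import math

def c1_is_safe(K, N, bits_done, syms_done, l_min, n_max, beta):
    """Conservative overflow test: pessimistically assume the next C1
    iteration maps the shortest sourceword (l_min bits) to the longest
    codeword (n_max symbols), then check that all remaining bits still
    fit in the frame if we switch to the fixed-rate code C0 afterwards."""
    available = N - syms_done - n_max                     # lower bound
    required = math.ceil((K - bits_done - l_min) / beta)  # upper bound
    return required <= available

# Frame of K = 100 bits in N = 60 symbols, C0 carrying beta = 2 bits/symbol:
assert c1_is_safe(100, 60, bits_done=0, syms_done=0, l_min=1, n_max=4, beta=2)
# Near the end of the frame the conservative test fails, triggering a switch:
assert not c1_is_safe(100, 60, bits_done=40, syms_done=35, l_min=1, n_max=4, beta=2)
```

Because this test depends only on quantities known after decoding the previous iterations, a decoder can evaluate the same test, which is the basis of the unique-decoding argument below.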
Theorem 4.
In the successive PCDM encoding process, frame overflow can be avoided by switching the code from to at the earliest iteration that does not fulfill (19).
Proof.
If (19) is fulfilled at , we can use at without overflow. Otherwise, we can avoid overflow by using for encoding at every , since by (15). Assume that was used at , fulfilling (19), which ensured that overflow will not occur if we switch to at iteration . If (19) is also fulfilled at , we keep using at , since encoding with at every suffices to avoid overflow; otherwise, we switch to at without overflow, by assumption. This completes the proof by mathematical induction. ∎
Theorem 5.
PCDM symbols framed by using Theorem 4 can be uniquely decoded.
Proof.
At , it is trivial to see that a decoder can identify the proper decoding code using (19), which allows for unique decoding and gives knowledge of and . Assume that and for all are known at the beginning of decoding iteration . Then, by assessing (19) at , we identify the decoding code at iteration . This enables unique decoding at and provides and . This completes the proof by mathematical induction. ∎
In the case where underflow occurs, i.e., if encoding of bits is finished using fewer than symbols, the encoder can simply fill the unoccupied slots in the output frame with dummy symbols of the smallest energy, e.g., for the ASK alphabet. The decoder can discard these dummy symbols after all bits are decoded. There may also be incidents at the end of encoding that need minor manipulation, as follows. At the last encoding iteration, it is possible that there is no sourceword in the dictionary that matches the input bits perfectly. In this case, there must be multiple sourcewords whose -bit prefix matches the input bits perfectly, where denotes the length of the last remaining input bits, hence an encoder can set a rule for unique encoding; e.g., it can pick the uppermost codeword in such a case. Unique decoding is then straightforward. The framing algorithm is summarized in Algorithm 2.
IV-B. Analysis of Fixed-Length Penalty by Gaussian Approximation
Suppose that a prefix-free encoder produces an entropy rate using the th codeword of a size- code , given by . The probability of occurrence of equals , i.e., the probability that the encoder's output symbol at a certain time belongs to codeword . Without a fixed-length constraint, the expected entropy rate of code , previously expressed as (2), can alternatively be calculated as , where is a random variable taking values on the ordered set , with a PMF . For example, with the V2V code in Tab. I (c), an entropy rate in {0.14, 0.43, 0.5, 0.6, 1, 1.67, 3, 6} is observed at the encoder's output with a PMF {0.57, 0.143, 0.122, 0.102, 0.041, 0.015, 0.005, 0.003}, which yields on average. Suppose now that a fixed-length constraint is imposed by framing. We consider the ensemble of symbol frames and assume that all symbols encoded by are independent and identically distributed (IID). Also, though the true entropy rate of a symbol is a discrete random variable, we approximate the entropy rate by a continuous Gaussian random variable with mean and variance ; i.e., . In this case, the entropy rate accumulated up to the th symbol also follows a Gaussian distribution , if there has been neither an overflow prediction nor early termination of encoding by until the th symbol. To take into account overflow and early termination, we notice that (17) and (18) at the th output symbol can respectively be transformed into and , where is a random variable representing the cumulative entropy rate at the th output symbol. However, characterization of the exact becomes infeasible as the overflow and early termination probabilities increase with , hence we approximate for all again by a Gaussian random variable with mean and variance , whose evolution over is mathematically tractable, as shown in Fig. 10. Then, on condition that code is still being used at the th symbol, i.e., on condition that no overflow prediction or early termination has been made before the th symbol, the probability of an overflow prediction at the th symbol is
(20)  
(21) 
where, in (20), an overflow threshold is defined by , and, in (21), denotes the Gaussian cumulative distribution function (CDF) with mean and variance . Equation (20) gives insight into the behavior of the proposed framing method, since it indicates that overflow is predicted if the entropy rate accumulated until the th symbol is less than a threshold that increases linearly with . Here, and are both linear in with slopes and , respectively, and since by the framing rule, the probability of an overflow prediction in (20) gradually increases with . Recall, however, that this is a conditional probability, given no overflow prediction and early termination prior to symbol . In order to take into account the history of previous overflow predictions and early terminations, let and denote the cumulative overflow prediction probability and cumulative early termination probability at symbol . Then, the unconditional probability that overflow is predicted at symbol is
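Using the entropy rates and PMF quoted above for the V2V code of Tab. I (c), the Gaussian approximation can be sketched: compute the per-symbol mean and variance of the entropy rate, then evaluate the conditional overflow-prediction probability with the Gaussian CDF at a linearly growing threshold (the threshold value in this example is an assumption of ours):

```python
import math

# Entropy rates and their PMF for the V2V code of Tab. I (c):
rates = [0.14, 0.43, 0.5, 0.6, 1.0, 1.67, 3.0, 6.0]
pmf   = [0.57, 0.143, 0.122, 0.102, 0.041, 0.015, 0.005, 0.003]

mu  = sum(r * p for r, p in zip(rates, pmf))               # mean entropy rate
var = sum((r - mu) ** 2 * p for r, p in zip(rates, pmf))   # its variance

def norm_cdf(x, mean, variance):
    """Gaussian CDF evaluated at x."""
    return 0.5 * (1.0 + math.erf((x - mean) / math.sqrt(2.0 * variance)))

# Under the IID approximation, the rate accumulated over i symbols is
# ~ N(i*mu, i*var); overflow is predicted when it falls below a linearly
# growing threshold t(i).  Example with an assumed threshold:
i, t = 100, 30.0
p_overflow = norm_cdf(t, i * mu, i * var)
assert 0.0 < p_overflow < 0.5  # threshold below the mean i*mu
```

The computed mean (about 0.36 bits per symbol) follows directly from the quoted rates and PMF, and the CDF evaluation mirrors the role of the Gaussian CDF in (21).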