# The Sphere Packing Bound For Memoryless Channels

Sphere packing bounds (SPBs) ---with prefactors that are polynomial in the block length--- are derived for codes on two families of memoryless channels using Augustin's method: (possibly non-stationary) memoryless channels with (possibly multiple) additive cost constraints and stationary memoryless channels with convex constraints on the composition (i.e. empirical distribution, type) of the input codewords. A variant of Gallager's inner bound is derived in order to show that these sphere packing bounds are tight in terms of the exponential decay rate of the error probability with the block length under mild hypotheses.


## I Introduction

Most proofs of the sphere packing bound (SPB) have been given either for stationary channels with finite input sets [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] or for stationary channels with a specific noise structure, e.g. Poisson or Gaussian [13, 14, 15, 16, 17, 18]. The proofs using Augustin's method are exceptions to this observation: [19, 20, 21] assume neither the finiteness of the input set nor a specific noise structure; nor do they assume the stationarity of the channel. However, [19], [20, §31], [21] establish the SPB for product channels rather than memoryless channels; hence proofs of the SPB for composition constrained codes (according to [11, p. 183], the SPB for constant composition codes appears in [8] with an incomplete proof; the first complete proof is provided in [9]) on stationary channels [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18] —which include the important special case of cost constrained codes [13, 14, 15, 16, 17, 18]— are not subsumed by [19], [20, §31], or [21]. In [20, §36], Augustin proved the SPB for cost constrained (possibly non-stationary) memoryless channels assuming a bounded cost function. The framework of [20, Thm. 36.6] subsumes all previously considered models [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], except the Gaussian ones [15, 16, 17, 18].

Theorem 2, presented in §III, establishes the SPB for a framework that subsumes all of the models considered in [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21] by employing [22], which analyzes Augustin's information measures. Our use of [22] and Augustin's information measures is similar to the use of [23] and Rényi's information measures in [21]. For product channels, [21, Thm. LABEL:B-thm:productexponent] improved the previous results by Augustin in [19], [20, §31] by establishing the SPB with a prefactor that is polynomial in the block length, under a mild hypothesis on the order ½ Rényi capacities of the component channels. For cost constrained memoryless channels, Theorem 2 enhances the prefactor of [20, Thm. 36.6] in an analogous way, making it polynomial in the block length. The prefactor of Theorem 2, however, is inferior to the prefactors reported in [3, 4, 5] for various symmetric channels, in [15] for the stationary Gaussian channel, and in [12] for constant composition codes on stationary discrete product channels. Determination of the optimal prefactor, in the spirit of [3, 4], remains open for the general case. Similar to [20, Thm. 36.6], Theorem 2 holds for non-stationary channels as well. Unlike [20, Thm. 36.6], Theorem 2 does not assume the cost functions to be bounded.

Stationarity is assumed in most of the previous derivations of the SPB [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]. Given a stationary product channel, one can obtain a stationary memoryless channel by imposing composition —i.e. type, empirical distribution— constraints on the input codewords. The cost constraints can be interpreted as a convex special case of these more general composition constraints. This interpretation, considered together with composition based expurgations, is one of the main motivating factors behind the study of constant composition codes on stationary memoryless channels with finite input sets. The composition based expurgations, however, are useful only when the input set of the channel is finite. Nevertheless, if the constraint set for the compositions of the codewords is convex, then one can derive an SPB with a polynomial prefactor using Augustin's information measures, see Theorem 1 in §III. The derivation of Theorem 1 relies on the Augustin center of the constraint set rather than the Augustin mean of the most populous composition of the code; note that the most populous composition of the code might not even have more than one codeword when the input set is infinite. The framework of Theorem 1 is general enough to subsume the frameworks of all previous proofs of the SPB for memoryless or product channels that we are aware of, except the frameworks of the proofs based on Augustin's method [19, 20, 21]. Theorems 1 and 2 provide asymptotic SPBs, but they are proved using the non-asymptotic SPBs presented in Lemmas 9 and 10.

The SPB implies that the exponential decay rate of the optimal error probability with the block length —i.e. the reliability function, the error exponent— is bounded from above by the sphere packing exponent (SPE). For the memoryless channels in consideration, Augustin's variant of Gallager's bound implies that the SPE bounds the reliability function from below as well, provided that list decoding is allowed. Augustin's variant is presented in §II-D. One can use standard results such as [24, 25], with minor modifications, to establish the SPE as a lower bound on the reliability function for list decoding, as well. Thus Augustin's variant is of interest to us not because of what it implies about the reliability function, but because of how it implies it. What is special about Augustin's variant is that it establishes an achievability result in terms of the Augustin information rather than the Rényi information used in the standard form of Gallager's bound [24]. Augustin's variant relies on the fixed point property of the Augustin mean described in (13) to do that. It is worth mentioning that [25] implicitly employs the same fixed point property, but in a different way.

Before starting our discussion in earnest, let us point out a subtlety about the derivations of the SPB that is usually overlooked. [26] claimed to prove the SPB for arbitrary stationary product channels without using any constant composition arguments ([26, p. 413] reads "An important feature of the lower bound, which will be derived, is that no assumption of constant-composition codewords is made, not even as an intermediate step."). The derivation of [26, Thm. 19], however, establishes an upper bound on the reliability function that is strictly greater than the SPE on many channels. This has been demonstrated numerically in [12, p. 1594 and Appendix A]. An analytic confirmation of this observation is presented in Appendix -A. The problematic step in [26] is the application of Lagrange multiplier techniques, see [12, footnote 8]. The proof of [26, Thm. 19] invokes [26, Thm. 16], which is valid only for the Lagrange multiplier associated with the optimal solution of the underlying optimization; for an arbitrary choice, however, the associated Lagrange multiplier may or may not be equal to the one for the optimal solution. This is why the upper bound on the reliability function established in [26, Thm. 19] is not equal to the SPE in general, contrary to the claim repeated in [27, Lemma 1] and [28, Thm. 10.1.4]. In a nutshell, the proof of [26, Thm. 19] tacitly asserts a minimax equality that does not hold in general. For stationary memoryless channels with finite input alphabets, one can avoid this issue using constant composition arguments; in that case, however, the proof presented in [26] becomes a mere reproduction of the one in [9].

More recently, [29] proposed a derivation of the SPB for stationary channels with a single cost constraint using the approach presented in [26]. Similar to [26], however, the proof in [29] asserts a minimax equality that does not hold in general. In particular, the expression in [29, (26)] is claimed not to depend on the optimization variable in question. In order to assert that, one has to include a supremum over that variable as the innermost optimization in both [29, (25) and (26)]. With that additional supremum, the explanation provided on [29, p. 931] is no longer valid. Considering Appendix -A, we do not believe that the proof in [29] can be salvaged without introducing major new ideas, such as composition based expurgations similar to [9] or codeword cost based expurgations similar to [16]. In short, neither [26] nor [29] successfully proved the SPB for stationary memoryless channels, even in the finite input set case.

### I-A Notational Conventions

For any two vectors λ and ϱ in ℝ^ℓ, their inner product, denoted by λ⋅ϱ, is ∑_{ı=1}^{ℓ} λ_ı ϱ_ı. For any ℓ ∈ ℤ⁺, the ℓ dimensional vector whose entries are all one is denoted by 1; the dimension will be clear from the context. We denote the closure, interior, and convex hull of a set S by cl S, int S, and ch S, respectively; the relevant topology or vector space structure will be evident from the context.

For any set X, we denote the set of all probability mass functions that are non-zero only on finitely many members of X by P(X). For any p ∈ P(X), we call the set of all x's in X for which p(x) > 0 the support of p and denote it by supp(p). For any measurable space (Y,𝒴), we denote the set of all probability measures on it by P(Y) and the set of all finite measures by M(Y). We denote the integral of a measurable function f with respect to the measure μ by ∫f dμ or ∫f(y) μ(dy). If the integral is on the real line and it is with respect to the Lebesgue measure, we denote it by ∫f(y) dy, as well. If μ is a probability measure, then we also call the integral of f with respect to μ the expectation of f or the expected value of f and denote it by E_μ[f].

Our notation will be overloaded for certain symbols; however, the relations represented by these symbols will be clear from the context. We denote the Cartesian product of sets [30, p. 38] by ×. We use |⋅| to denote the absolute value of real numbers and the size of sets. The sign ≤ stands for the usual less than or equal to relation for real numbers and the corresponding point-wise inequality for functions and vectors. For two measures μ and ν on the measurable space (Y,𝒴), μ ≤ ν iff μ(E) ≤ ν(E) for all E ∈ 𝒴. We denote the product of topologies [30, p. 38], σ-algebras [30, p. 118], and measures [30, Thm. 4.4.4] by ⊗. We use the shorthand X_1^n for the Cartesian product of the sets X_1, …, X_n and 𝒴_1^n for the product of the σ-algebras 𝒴_1, …, 𝒴_n.

### I-B Channel Model

A channel W is a function from the input set X to the set of all probability measures on the output space (Y,𝒴):

 W : X → P(Y). (1)

Y is called the output set and 𝒴 is called the σ-algebra of the output events. We denote the set of all channels from the input set X to the output space (Y,𝒴) by P(Y|X). For any p ∈ P(X) and W ∈ P(Y|X), one can consider a probability measure on the product of the input and output spaces whose marginal on the input space is p and whose conditional distribution given x is W(x). The structure described in (1) is not sufficient on its own to ensure the existence of a unique probability measure with the desired properties for all input distributions, in general. Such existence and uniqueness are guaranteed for all input distributions if W is a transition probability from (X,𝒳) to (Y,𝒴) for some σ-algebra 𝒳 on X.

A channel is called a discrete channel if both X and Y are finite sets. For any n ∈ ℤ⁺ and channels W_t ∈ P(Y_t|X_t) for t ∈ {1,…,n}, the length n product channel W_{[1,n]} is defined via the following relation:

 W_{[1,n]}(x_1^n) = ⊗_{t=1}^{n} W_t(x_t)  ∀x_1^n ∈ X_1^n. (2)

A channel W is called a length n memoryless channel iff there exists a length n product channel W_{[1,n]} satisfying both X ⊂ X_1^n and W(x_1^n) = W_{[1,n]}(x_1^n) for all x_1^n ∈ X. A product channel is stationary iff there exists a W ∈ P(Y|X) such that W_t = W for all t ∈ {1,…,n}. For such a channel, the composition (i.e. the empirical distribution, type) of each x_1^n ∈ X^n is the probability mass function assigning to each x ∈ X the value (1/n)|{t : x_t = x}|.

For any ℓ ∈ ℤ⁺, an ℓ dimensional cost function ρ is a function from the input set X to ℝ^ℓ that is bounded from below. We assume without loss of generality that

 inf_{x∈X} ρ_ı(x) ≥ 0  ∀ı ∈ {1,…,ℓ}.

(Augustin [20, §33] has an additional hypothesis, the boundedness of the cost function, which excludes certain important cases such as the Gaussian channels.)

We denote the set of all cost constraints that can be satisfied by some member of X by Γ_ρ^{ex} and the set of all cost constraints that can be satisfied by some member of P(X) by Γ_ρ:

 Γ_ρ^{ex} ≜ {ϱ ∈ ℝ^ℓ_{≥0} : ∃x ∈ X s.t. ρ(x) ≤ ϱ}, (3)
 Γ_ρ ≜ {ϱ ∈ ℝ^ℓ_{≥0} : ∃p ∈ P(X) s.t. E_p[ρ] ≤ ϱ}. (4)

Then both Γ_ρ^{ex} and Γ_ρ have non-empty interiors, and Γ_ρ is the convex hull of Γ_ρ^{ex}, i.e. Γ_ρ = ch Γ_ρ^{ex}.

A cost function on a product channel is said to be additive iff it can be written as the sum of cost functions defined on the component channels. Given n ∈ ℤ⁺ and cost functions ρ_t on X_t for t ∈ {1,…,n}, we denote the resulting additive cost function on X_1^n for the channel W_{[1,n]} by ρ_{[1,n]}, i.e.

 ρ_{[1,n]}(x_1^n) = ∑_{t=1}^{n} ρ_t(x_t)  ∀x_1^n ∈ X_1^n. (5)

### I-C Codes With List Decoding

The pair (Ψ,Θ) is an (M,L) channel code on W ∈ P(Y|X) iff

• The encoding function Ψ is a function from the message set {1,…,M} to the input set X.

• The decoding function Θ is a measurable function from the output space (Y,𝒴) to the set of all subsets of {1,…,M} with at most L elements.

Given an (M,L) channel code (Ψ,Θ) on W, the conditional error probability P_e^m for m ∈ {1,…,M} and the average error probability P_e are defined as

 P_e^m ≜ E_{W(Ψ(m))}[1{m ∉ Θ(y)}],  P_e ≜ (1/M) ∑_{m=1}^{M} P_e^m.

An encoding function Ψ, hence the corresponding code, is said to satisfy the cost constraint ϱ iff ρ(Ψ(m)) ≤ ϱ for all m ∈ {1,…,M}. An encoding function Ψ, hence the corresponding code, on a stationary product channel is said to satisfy an empirical distribution constraint A iff the compositions of all of the codewords are in A, i.e. iff the composition of Ψ(m) is in A for all m ∈ {1,…,M}.
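For a discrete channel, the conditional and average error probabilities defined above can be evaluated exactly by direct summation over the output set. The following sketch is our own illustration (the function and argument names are ours, not the paper's); the decoder may return any list of at most L messages:

```python
import numpy as np

def error_probabilities(W, encoder, decoder):
    """Exact error probabilities of an (M, L) code on a discrete channel.

    W[x][y] is the probability that input x produces output y; encoder[m] is
    the codeword of message m; decoder(y) is the decoded list (a set of
    messages) for output y.  Returns the conditional error probabilities
    P_e^m and the average error probability P_e.
    """
    M, n_outputs = len(encoder), W.shape[1]
    Pm = np.array([sum(W[encoder[m], y]
                       for y in range(n_outputs) if m not in decoder(y))
                   for m in range(M)])
    return Pm, float(Pm.mean())
```

For instance, a two-message code on a binary symmetric channel with crossover probability 0.1, decoded with singleton lists by maximum likelihood, has P_e^m = 0.1 for both messages and hence P_e = 0.1.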

## II Preliminaries

The Rényi divergence, tilting, and Augustin's information measures are central to the analysis we present in the following sections. We introduce these concepts in §II-A and §II-B; a more detailed discussion can be found in [22, 31]. In §II-C we define the SPE and derive widely known properties of it for our general channel model. In §II-D we derive Augustin's variant of Gallager's bound.

### II-A The Rényi Divergence and Tilting

###### Definition 1.

For any α ∈ ℝ⁺ and w ∈ P(Y), q ∈ M(Y), the order α Rényi divergence between w and q is

 D_α(w∥q) ≜ { (1/(α−1)) ln ∫ (dw/dν)^α (dq/dν)^{1−α} ν(dy)   if α ≠ 1
 { ∫ (dw/dν) [ln (dw/dν) − ln (dq/dν)] ν(dy)   if α = 1 (6)

where ν is any measure satisfying w ≪ ν and q ≪ ν.

For properties of the Rényi divergence, we will refer throughout the manuscript to the comprehensive study provided by van Erven and Harremoës [31]. Note that the order one Rényi divergence is the Kullback-Leibler divergence. For other orders, the Rényi divergence can be characterized in terms of the Kullback-Leibler divergence, as well, see [31, Thm. 30]. That characterization is related to another key concept for our analysis: the tilted probability measure.
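For probability mass functions on a finite set, (6) can be evaluated directly by taking ν to be the counting measure. The sketch below is our own numerical illustration, not part of the paper, and implements both branches of (6):

```python
import numpy as np

def renyi_divergence(alpha, w, q):
    """Order-alpha Renyi divergence D_alpha(w||q) of (6) for finite
    distributions, with the counting measure as the reference measure nu."""
    w, q = np.asarray(w, float), np.asarray(q, float)
    mask = w > 0                      # outputs with w(y) = 0 contribute nothing
    if alpha == 1.0:                  # the Kullback-Leibler branch of (6)
        return float(np.sum(w[mask] * (np.log(w[mask]) - np.log(q[mask]))))
    s = np.sum(w[mask] ** alpha * q[mask] ** (1.0 - alpha))
    return float(np.log(s) / (alpha - 1.0))
```

The well-known monotonicity of D_α(w∥q) in the order α, established in [31], can be checked numerically with this routine.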

###### Definition 2.

For any α ∈ ℝ⁺ and w ∈ P(Y), q ∈ M(Y) satisfying D_α(w∥q) < ∞, the order α tilted probability measure w^q_α is

 dw^q_α/dν ≜ e^{(1−α) D_α(w∥q)} (dw/dν)^α (dq/dν)^{1−α}. (7)
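For finite distributions, the normalizing factor e^{(1−α)D_α(w∥q)} in (7) is exactly the reciprocal of ∑_y w(y)^α q(y)^{1−α}, so the tilted measure is the normalized geometric-like mixture of w and q. A minimal sketch of our own, assuming strictly positive distributions:

```python
import numpy as np

def tilted(alpha, w, q):
    """Order-alpha tilted probability measure w^q_alpha of (7) for finite,
    strictly positive distributions w and q (counting measure as nu)."""
    t = w ** alpha * q ** (1.0 - alpha)
    return t / t.sum()  # dividing by t.sum() applies e^{(1-alpha) D_alpha(w||q)}
```

At α = 1 the tilted measure reduces to w, and as α ↓ 0 it approaches q.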

The conditional Rényi divergence and the tilted channel are straightforward generalizations of the Rényi divergence and the tilted probability measure that will allow us to express certain relations succinctly throughout our analysis.

###### Definition 3.

For any α ∈ ℝ⁺, W ∈ P(Y|X), Q ∈ P(Y|X), and p ∈ P(X), the order α conditional Rényi divergence for the input distribution p is

 D_α(W∥Q|p) ≜ ∑_{x∈X} p(x) D_α(W(x)∥Q(x)). (8)

If q ∈ P(Y) and Q(x) = q for all x ∈ X, then we denote D_α(W∥Q|p) by D_α(W∥q|p).

###### Definition 4.

For any α ∈ ℝ⁺ and W, Q ∈ P(Y|X), the order α tilted channel W^Q_α is a function from X to P(Y) given by

 dW^Q_α(x)/dν ≜ e^{(1−α) D_α(W(x)∥Q(x))} (dW(x)/dν)^α (dQ(x)/dν)^{1−α}. (9)

If q ∈ P(Y) and Q(x) = q for all x ∈ X, then we denote W^Q_α by W^q_α.

The following operator was considered implicitly by Fano [8, Ch. 9], Haroutunian [9], and Poltyrev [25], and explicitly by Augustin [20, §34], but only for orders less than one in all four manuscripts.

###### Definition 5.

For any α ∈ ℝ⁺, W ∈ P(Y|X), and p ∈ P(X), the order α Augustin operator for the input distribution p, i.e. T_{α,p}(⋅), is given by

 T_{α,p}(q) ≜ ∑_{x} p(x) W^q_α(x)  ∀q ∈ Q_{α,p} (10)

where Q_{α,p} ≜ {q ∈ P(Y) : D_α(W∥q|p) < ∞} and the tilted channel W^q_α is defined in (9).

### II-B Augustin's Information Measures

###### Definition 6.

For any α ∈ ℝ⁺, W ∈ P(Y|X), and p ∈ P(X), the order α Augustin information for the input distribution p is

 I_α(p;W) ≜ inf_{q∈P(Y)} D_α(W∥q|p). (11)

The infimum in (11) is achieved by a unique probability measure, denoted by q_{α,p} and called the order α Augustin mean for the input distribution p. Furthermore, the order α Augustin mean satisfies the following identities:

 D_α(W∥q|p) ≥ I_α(p;W) + D_{1∧α}(q_{α,p}∥q)  ∀q ∈ P(Y), α ∈ ℝ⁺, (12)
 T_{α,p}(q_{α,p}) = q_{α,p}  ∀α ∈ ℝ⁺. (13)

These observations are established in [22, Lemma LABEL:C-lem:information-(LABEL:C-information:one,LABEL:C-information:zto,LABEL:C-information:oti)]; previously they were reported by Augustin [20, Lemma 34.2] for orders less than one. Throughout the manuscript, we refer to [22] for propositions about Augustin's information measures. A more detailed account of the previous work on Augustin's information measures can be found in [22], as well.
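For a discrete channel, the fixed point property (13) suggests approximating the Augustin mean by iterating the operator T_{α,p} of (10). The sketch below is our own numerical illustration: plain iteration is known to converge for orders in (0,1), and we make no such claim for other orders; strictly positive channel matrices are assumed.

```python
import numpy as np

def augustin_mean(alpha, W, p, iters=200):
    """Approximate the order-alpha Augustin mean q_{alpha,p} for a discrete
    channel (rows of W are the W(x)) by iterating the operator of (10)."""
    q = p @ W                                    # output distribution of p
    for _ in range(iters):
        tilt = W ** alpha * q ** (1.0 - alpha)   # unnormalized tilted channel
        tilt /= tilt.sum(axis=1, keepdims=True)  # rows are W^q_alpha(x), cf. (9)
        q = p @ tilt                             # T_{alpha,p}(q), cf. (10)
    return q

def augustin_information(alpha, W, p):
    """I_alpha(p;W) of (11), evaluated at the (approximate) Augustin mean."""
    q = augustin_mean(alpha, W, p)
    if alpha == 1.0:
        d = np.sum(W * (np.log(W) - np.log(q)), axis=1)
    else:
        d = np.log(np.sum(W ** alpha * q ** (1.0 - alpha), axis=1)) / (alpha - 1.0)
    return float(p @ d)
```

At α = 1 the tilted channel equals W, the iteration is stationary at the output distribution of p, and the Augustin information reduces to the mutual information.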

###### Definition 7.

For any α ∈ ℝ⁺, W ∈ P(Y|X), and A ⊂ P(X), the order α Augustin capacity of W for the constraint set A is

 C_{α,W,A} ≜ sup_{p∈A} I_α(p;W). (14)

When the constraint set A is the whole P(X), we denote the order α Augustin capacity by C_{α,W}, i.e. C_{α,W} ≜ C_{α,W,P(X)}.

Using the definitions of the Augustin information and capacity we get the following expression for C_{α,W,A}:

 C_{α,W,A} = sup_{p∈A} inf_{q∈P(Y)} D_α(W∥q|p). (15)

If A is convex then the order of the supremum and the infimum can be changed as a result of [22, Thm. LABEL:C-thm:minimax]:

 C_{α,W,A} = inf_{q∈P(Y)} sup_{p∈A} D_α(W∥q|p). (16)

If in addition C_{α,W,A} is finite, then [22, Thm. LABEL:C-thm:minimax] implies that there exists a unique probability measure q_{α,W,A}, called the order α Augustin center of W for the constraint set A, satisfying

 C_{α,W,A} = sup_{p∈A} D_α(W∥q_{α,W,A}|p). (17)

We denote the set of all probability mass functions satisfying a cost constraint ϱ by A(ϱ), i.e.

 A(ϱ) ≜ {p ∈ P(X) : E_p[ρ] ≤ ϱ}. (18)

For the constraint sets defined through cost constraints we use the symbol C_{α,W,ϱ} rather than C_{α,W,A(ϱ)}, with a slight abuse of notation. In order to be able to apply convex conjugation techniques without any significant modifications, we extend the definition of the Augustin capacity to the infeasible cost constraints, i.e. ϱ's outside Γ_ρ, as follows:

 C_{α,W,ϱ} ≜ { sup_{p∈A(ϱ)} I_α(p;W)   if ϱ ∈ Γ_ρ
 { −∞   if ϱ ∈ ℝ^ℓ_{≥0} ∖ Γ_ρ   ∀α ∈ ℝ⁺. (19)

In order to characterize C_{α,W,ϱ} through convex conjugation techniques, we first define the Augustin-Legendre (A-L) information and capacity. These concepts were first introduced in [1, §III-A] and [22, §LABEL:C-sec:cost-AL] as an extension of the analogous concepts in [11, Ch. 8].

###### Definition 8.

For any α ∈ ℝ⁺, channel W ∈ P(Y|X) with a cost function ρ, p ∈ P(X), and λ ∈ ℝ^ℓ_{≥0}, the order α Augustin-Legendre information for the input distribution p and the Lagrange multiplier λ is

 I^λ_α(p;W) ≜ I_α(p;W) − λ⋅E_p[ρ]. (20)
###### Definition 9.

For any α ∈ ℝ⁺, channel W ∈ P(Y|X) with a cost function ρ, and λ ∈ ℝ^ℓ_{≥0}, the order α Augustin-Legendre (A-L) capacity for the Lagrange multiplier λ is

 C^λ_{α,W} ≜ sup_{p∈P(X)} I^λ_α(p;W). (21)

Except for certain sign changes, C^λ_{α,W} is the convex conjugate of C_{α,W,ϱ} because of an analogous relation between I^λ_α(p;W) and I_α(p;W), see [22, (LABEL:C-eq:information-constrained)-(LABEL:C-eq:Linformation-conjugate), (LABEL:C-eq:Lcapacity-astheconjugate)]:

 C^λ_{α,W} = sup_{ϱ≥0} C_{α,W,ϱ} − λ⋅ϱ  ∀λ ∈ ℝ^ℓ_{≥0}. (22)

Then C_{α,W,ϱ} can be expressed in terms of C^λ_{α,W}, at least for the interior points of Γ_ρ:

 C_{α,W,ϱ} = inf_{λ≥0} C^λ_{α,W} + λ⋅ϱ. (23)

Furthermore, provided that C_{α,W,ϱ} is finite, there exists a non-empty convex, compact set of λ's achieving the infimum in (23), by [22, Lemma LABEL:C-lem:Lcapacity].

On the other hand, using the definitions of I_α(p;W), I^λ_α(p;W), and C^λ_{α,W}, we get the following expression for C^λ_{α,W}:

 C^λ_{α,W} = sup_{p∈P(X)} inf_{q∈P(Y)} D_α(W∥q|p) − λ⋅E_p[ρ]. (24)

C^λ_{α,W} satisfies a minimax relation similar to the one given in (16), see [22, Thm. LABEL:C-thm:Lminimax]. That minimax relation, however, is best understood via the concept of the Augustin-Legendre radius, defined in the following.

###### Definition 10.

For any α ∈ ℝ⁺, channel W ∈ P(Y|X) with a cost function ρ, and λ ∈ ℝ^ℓ_{≥0}, the order α Augustin-Legendre radius of W for the Lagrange multiplier λ is

 S^λ_{α,W} ≜ inf_{q∈P(Y)} sup_{x∈X} D_α(W(x)∥q) − λ⋅ρ(x). (25)

Then as a result of [22, Thm. LABEL:C-thm:Lminimax], for any α ∈ ℝ⁺, W ∈ P(Y|X) with a cost function ρ, and λ ∈ ℝ^ℓ_{≥0}, we have

 C^λ_{α,W} = S^λ_{α,W}. (26)

If in addition C^λ_{α,W} is finite, then there exists a unique q^λ_{α,W} ∈ P(Y), called the order α Augustin-Legendre center of W for the Lagrange multiplier λ, satisfying

 C^λ_{α,W} = sup_{x∈X} D_α(W(x)∥q^λ_{α,W}) − λ⋅ρ(x). (27)

The A-L information measures are defined through a standard application of the convex conjugation techniques. However, starting with [24, Thms. 8 and 10] —i.e. the cost constrained variants of Gallager's bound— the Rényi-Gallager (R-G) information measures rather than the A-L information measures have been the customary tools for applying convex conjugation techniques in error exponent calculations, see for example [16, 17, 18]. A brief discussion of the R-G information measures can be found in Appendix -B; for a more detailed discussion see [22].

### II-C The Sphere Packing Exponent

###### Definition 11.

For any R ∈ ℝ_{≥0}, W ∈ P(Y|X), and A ⊂ P(X), the SPE is

 E_sp(R,W,A) ≜ sup_{α∈(0,1)} ((1−α)/α)(C_{α,W,A} − R). (28)

We denote the A = P(X) case by E_sp(R,W). Furthermore, with a slight abuse of notation, we denote the A = A(ϱ) case by E_sp(R,W,ϱ) and the A = {p} case by E_sp(R,W,p).
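Given a routine for the order α Augustin capacity, the supremum in (28) can be approximated on a finite grid of orders. A numerical sketch of our own (the grid only approximates the supremum; clamping at zero reflects that (1−α)/α times (C_{α} − R) vanishes as α ↑ 1 whenever all terms are negative, so the true supremum over (0,1) is never below zero):

```python
import numpy as np

def sphere_packing_exponent(R, capacity, grid=None):
    """E_sp(R, W, A) of (28), approximated over a finite grid of orders;
    `capacity` maps an order alpha in (0,1) to C_{alpha,W,A}."""
    if grid is None:
        grid = np.linspace(0.01, 0.99, 197)
    best = max((1.0 - a) / a * (capacity(a) - R) for a in grid)
    return max(0.0, best)
```

For a noiseless binary channel, C_{α} = ln 2 for every order, and the routine returns 0 for all rates at or above ln 2 and a positive, decreasing value below it.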

###### Lemma 1.

For any W ∈ P(Y|X) and A ⊂ P(X), E_sp(R,W,A) is nonincreasing and convex in R on ℝ_{≥0}, finite for all R exceeding lim_{α↓0} C_{α,W,A}, and continuous in R on that region. In particular,

 E_sp(R,W,A) > 0 iff R < C_{1,W,A}. (29)

Lemma 1 follows from the continuity and monotonicity properties of the Augustin capacity in the order, established in [22, Lemma LABEL:C-lem:capacityO]; a proof can be found in Appendix -C. The proof of Lemma 1 is analogous to that of [21, Lemma LABEL:B-lem:spherepackingexponent], which relies on [21, Lemma LABEL:B-lem:capacityO] instead of [22, Lemma LABEL:C-lem:capacityO].

One can express E_sp(R,W,A) in terms of E_sp(R,W,p), using the definitions of the SPE, the Augustin information, and the Augustin capacity:

 E_sp(R,W,A) = sup_{α∈(0,1)} sup_{p∈A} ((1−α)/α)(I_α(p;W) − R)
  = sup_{p∈A} sup_{α∈(0,1)} ((1−α)/α)(I_α(p;W) − R)
  = sup_{p∈A} E_sp(R,W,p). (30)

Lemma 1 holds for by definition, but it can be strengthened significantly for ’s in