# Maximum Likelihood Upper Bounds on the Capacities of Discrete Information Stable Channels

Motivated by a greedy approach for generating information stable processes, we prove a universal maximum likelihood (ML) upper bound on the capacities of discrete information stable channels, including the binary erasure channel (BEC), the binary symmetric channel (BSC) and the binary deletion channel (BDC). The bound is derived by leveraging a system of equations obtained via the Karush-Kuhn-Tucker conditions. Intriguingly, for some memoryless channels, e.g., the BEC and BSC, the resulting upper bounds are tight and equal to their capacities. For the BDC, the universal upper bound is related to a function counting the number of possible ways that a length-m binary subsequence can be obtained by deleting n−m bits (with n−m close to nd, where d denotes the deletion probability) of a length-n binary sequence. To get explicit upper bounds from the universal upper bound, one must compute a maximization of the matching functions over a Hamming cube containing all length-n binary sequences. Calculating the maximization exactly is hard. Instead, we provide a combinatorial formula approximating it. Under certain assumptions, several approximations and an explicit upper bound for deletion probability d ≥ 1/2 are derived.


## I Introduction

Information stable channels were introduced by Dobrushin in [1]. Under the information stability condition, their capacities can be expressed as

 C = \liminf_{n\to\infty} \frac{1}{n} \sup_{X^n} I(X^n; Y(X^n)). \qquad (1)

Essentially, a channel's being information stable is equivalent to its capacity admitting the expression above [2]. Preceding works have considered a variety of more general frameworks, e.g., a formula for channel capacity [3] based on the information-spectrum method; a general capacity expression for channels with feedback [4]; and general capacity formulas for classical-quantum channels [5], to list just a few.

Despite the simplicity of the formula in (1), for some channels with memory, explicitly computing the capacity directly from the general formula is often nontrivial. A famous example is the binary deletion channel (BDC), which was introduced by Levenshtein in [6] more than fifty years ago to model synchronization errors. In his model, a transmitter sends an infinite stream of bits representing messages over a communication channel. Before reaching the receiver, the bits are deleted independently and identically with some deletion probability d. The receiver wishes to recover the original message from the surviving bits, with an asymptotically zero (in the length of the stream) probability of error. The BDC satisfies information stability [7]. Thus, its channel capacity, denoted by C(d), can be expressed via the formula in (1). However, a precise characterization of C(d) is still unknown; it does not even seem possible to accurately compute the capacity numerically relying on existing methods, for instance, the Blahut-Arimoto Algorithm (BAA) [8, 9, 10].
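At the core of the BDC discussion is the count of how many ways a shorter sequence can arise from a longer one via deletions. As a self-contained illustration (the function name and examples are ours, not the paper's notation), the number of such deletion patterns equals the number of embeddings of y as a subsequence of x, computable by a standard dynamic program:

```python
from functools import lru_cache

def embedding_count(x: str, y: str) -> int:
    """Number of distinct ways to delete len(x) - len(y) symbols of x
    so that the surviving symbols spell out y (subsequence embeddings)."""
    n, m = len(x), len(y)

    @lru_cache(maxsize=None)
    def count(i: int, j: int) -> int:
        if j == m:              # all of y matched; delete the rest of x
            return 1
        if n - i < m - j:       # too few symbols of x remain
            return 0
        ways = count(i + 1, j)  # delete x[i]
        if x[i] == y[j]:
            ways += count(i + 1, j + 1)  # keep x[i] as y[j]
        return ways

    return count(0, 0)

print(embedding_count("aaa", "aa"))   # 3: choose which 'a' to delete
print(embedding_count("0110", "01"))  # 2
```

Maximizing such counts over all length-n inputs is exactly the maximization over the Hamming cube that the abstract describes as hard.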

In this work, we consider discrete channels with finite alphabets and derive a general upper bound (called the maximum likelihood (ML) upper bound in Section III) on the capacities of information stable channels by analyzing a system of equations derived from the general formula in (1). We demonstrate that for channels without memory, e.g., the binary erasure channel (BEC) and the binary symmetric channel (BSC), the corresponding upper bounds are tight and equal to their channel capacities. For channels with memory, as a case study, we apply the ML upper bound to derive (implicit and explicit) approximations for C(d), under certain assumptions.

### I-A Background

A discrete channel with a finite alphabet can be regarded as a stochastic matrix from an input space of all infinite-length sequences to an output space containing all sequences that can be obtained via the channel law. Formally, we follow the approach in [11, 12] and define the transmitted and received bit-streams via infinite processes. For each fixed block-length n, there is a sequence x^n = (x_1, \dots, x_n) of elements selected from a finite set, and there is a probability distribution P_{X^n} over such sequences. Let X = \{X^n\}_{n \ge 1} denote an input process in terms of finite-dimensional sequences, where each X^n takes values in the set X^n of length-n sequences. Similarly, denote by Y = \{Y^m\}_{m \ge 1}, with each Y^m taking values in a finite set, the corresponding output process of finite-dimensional sequences induced by X via the channel law W^n, so that

 P(Y^m = y^m \mid X^n = x^n) := W^n(y^m \mid x^n).

Note that the block-length m of a received codeword is not necessarily equal to n, the block-length of the transmitted codeword. Moreover, the output block-length is allowed to be flexible, meaning that it can be regarded as a random variable with distribution specified by the channel law. (Footnote 1: Flexible output length allows us to apply this general framework to the BDC later in Section IV.) In the remaining part of this paper, we often omit the superscript m in Y^m and y^m to avoid confusion; this indicates that the length of the output sequence is not fixed.

### I-B Outline of the Paper

The remaining content of the paper is organized as follows. In Section II we introduce the capacity proxy C_n(W^n) in (2), from which the capacity formula (4) for information stable channels is derived. Based on it, we prove a general upper bound (the ML upper bound in Theorem 1) on information stable channels in Section III. Section III-C follows by verifying the tightness of the ML upper bound for the BEC and the BSC. Next, in Section IV, several approximations for the capacity of the BDC are reported.

## II Preliminaries

### II-A Notational Convention

We use log to denote logarithms to base 2, unless stated otherwise. Let X^n and \bar{Y} denote the set of all possible length-n input sequences and the set of all induced output sequences (having flexible lengths), respectively. Let N := |X^n| and M := |\bar{Y}|. We use the lowercase letter j to index the j-th length-n input sequence x_j, and the letter i to index the output sequence y_i, with 1 ≤ j ≤ N and 1 ≤ i ≤ M respectively. To distinguish between random variables and their realizations, we denote the former by capital letters and the latter by lowercase letters at most places throughout this work. (Footnote 2: Except for the output length m, which is a random variable dependent on the channel law.)

### II-B Capacity Proxies

For a fixed dimension n, we maximize the mutual information between X^n and Y(X^n) in a way similar to defining the “information capacity” for discrete memoryless channels (DMCs) over the binary alphabet, to obtain the quantity:

 C_n(W^n) := \frac{1}{n} \sup_{X^n} I(X^n; Y(X^n)) \qquad (2)

where the supremum is taken over all X^n with distributions in the set

 \mathcal{P}_N := \Big\{ P_{X^n} \in \mathbb{R}^N : p_j \ge 0 \ \forall\, j = 1, \dots, N; \ \sum_{j=1}^N p_j = 1 \Big\}. \qquad (3)

### II-C Information Stability

It turns out that the quantity C_n(W^n) is asymptotically (in n) the same as the operational capacity under the following condition on channels, called information stability. (Footnote 3: The way of classifying the channels that have an operational meaning with the capacity expression in (1), using a condition called information stability, was first introduced by Dobrushin and Guoding Hu [1, 2]. It was restated and studied in many equivalent forms. For instance, in [11], information stability was proved to be insufficient to determine whether a source-channel separation holds or not. In [13], the expressions for optimistic channel capacity and optimal source coding rate are given for the class of information stable channels and, similarly, “information stable” sources respectively.)

###### Definition 1 (Information Stability for Channels [1, 2, 3]).

A channel is said to be information stable if there exists an input process X such that for all \gamma > 0,

 P\Big( \Big| \frac{i_{X^n, W^n}(X^n; Y(X^n))}{n\, C_n(W^n)} - 1 \Big| > \gamma \Big) \to 0 \quad \text{as } n \to \infty,

where i_{X^n, W^n}(x; y) := \log \frac{W^n(y \mid x)}{\sum_{j=1}^N p_j W^n(y \mid x_j)} denotes the information density for all x \in X^n and y \in \bar{Y}. In other words, the normalized information density converges in probability to 1.

Intuitively, information stability characterizes channels in a manner similar to the way the asymptotic equipartition property (AEP) characterizes stochastic sources. In fact, information stability for a channel implies the existence of a class of corresponding input processes such that X, on being input to the channel, results in a near-optimal code. For the case of discrete memoryless channels (DMCs), the operational meaning of the single-letter quantity in Eq. (4) below appears as a natural consequence of the law of large numbers. For general channels, by considering the asymptotic behavior of the information density evaluated on an optimal input process (one which maximizes the mutual information), information stability provides (with sufficient generality, for a broader class of channels) an analogue of the law of large numbers. Such an optimal input process may be understood to be equivalent to a sequence of codes that are asymptotically capacity-achieving in the block-length n. The key idea relies on classical achievability bounds (for instance, Feinstein's lemma [14] and Shannon's achievability bound [15]).
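The convergence underlying Definition 1 can be observed empirically. The following sketch (our illustration; the parameters n and p are arbitrary) samples the normalized information density of a BSC under uniform inputs, for which P(Y = y) = 2^{-n} and the density per bit depends only on the fraction of flipped bits; the samples concentrate near 1 − h(p):

```python
import math
import random

def bsc_info_density_per_bit(n: int, p: float, rng: random.Random) -> float:
    # Uniform input over {0,1}^n makes the BSC output uniform as well,
    # so P(Y = y) = 2^{-n} and the information density reduces to
    # i(x; y)/n = 1 + (d/n) log2 p + (1 - d/n) log2 (1 - p), d = #flips.
    d = sum(rng.random() < p for _ in range(n))
    return 1 + (d / n) * math.log2(p) + (1 - d / n) * math.log2(1 - p)

rng = random.Random(0)
n, p = 100_000, 0.11
capacity = 1 + p * math.log2(p) + (1 - p) * math.log2(1 - p)  # 1 - h(p)
samples = [bsc_info_density_per_bit(n, p, rng) for _ in range(20)]
# Every sample of the normalized information density is close to 1 - h(p):
print(max(abs(s - capacity) for s in samples))
```

The maximum deviation shrinks as n grows, which is exactly the law-of-large-numbers behavior the paragraph above describes.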

Dobrushin in [7] proved that BDCs are information stable as defined in Definition 1. For an arbitrary fixed n, maximizing C_n(W^n) in Eq. (2) gives an optimal input distribution P^*_{X^n}. Through appropriate achievability results ([14, 15]), it is possible to construct a code whose error probability vanishes as n goes to infinity. In addition, the rate approaches C_n(W^n) for sufficiently large n. Hence for information stable channels, the capacity exists and can be written as follows. (Footnote 4: Note that this limiting expression does not always hold for general channels. For instance, consider an example in [3]: a binary channel whose output codewords equal the input codeword with some probability and are changed independently of the input codewords otherwise. The capacity of this channel is 0, since the error probability is always strictly positive and hence does not vanish. However, the formula in (4) gives a strictly positive value.)

 C = \liminf_{n\to\infty} C_n(W^n) < \infty. \qquad (4)

### II-D System of Equations for Optimality

Recall that N = |X^n| and M = |\bar{Y}|.

Our approach focuses on bounding C_n(W^n). Expressing the mutual information in terms of the channel law, the capacity proxy defined in (2) equals

 C_n(W^n) = \frac{1}{n} \sup_{P_{X^n}} \sum_{x, y} p(x)\, W^n(y \mid x) \log \frac{W^n(y \mid x)}{\sum_{x} p(x)\, W^n(y \mid x)}. \qquad (5)

Here, the supremum is taken over all distributions in the set \mathcal{P}_N (defined in (3)), and the summation is taken over all length-n input sequences x \in X^n and all output sequences y \in \bar{Y}.

From an optimization perspective, the asymptotic behavior of C_n(W^n) can be captured by establishing a sequence of capacity-achieving distributions maximizing the following quantity for each n:

 \sum_{j=1}^N \sum_{i=1}^M p_j W^n(y_i \mid x_j) \log \frac{W^n(y_i \mid x_j)}{\sum_{j=1}^N p_j W^n(y_i \mid x_j)}. \qquad (6)

Derived from the Karush-Kuhn-Tucker conditions, the following lemma generalizes Theorem 4.5.1 in [16] (cf. [17]), which was established to find channel capacities of DMCs with non-binary input/output alphabets. The lemma states a necessary and sufficient condition for the existence of a distribution maximizing (6), and it can be proved along the same lines as in [16]. The only difference is that for general channels, the summation is taken over all sequences in \bar{Y} (which is in general exponential in n), whereas for DMCs the summation can be decomposed and taken over the alphabet of each individual coordinate of the sequence, so that the number of summations is linear in n. For brevity the proof is omitted.

###### Lemma 1 ([9, 8, 16, 17]).

Fix a block-length n. There exists an optimal probability vector P^* = (p^*_1, \dots, p^*_N) such that the quantity in (6) is maximized if and only if there exists \lambda_n such that for all j = 1, \dots, N,

 \frac{1}{n} \sum_{i=1}^M W^n(y_i \mid x_j) \log \frac{W^n(y_i \mid x_j)}{\sum_{j=1}^N p^*_j W^n(y_i \mid x_j)} \begin{cases} = \lambda_n & \text{if } p^*_j \neq 0 \\ \le \lambda_n & \text{if } p^*_j = 0. \end{cases} \qquad (7)

Moreover, the capacity C = \lim_{n\to\infty} \lambda_n if the limit exists.

Indeed (see [16, 17]), a probability distribution for an information stable source satisfying (7) always exists as n grows. Thus, the capacity-achieving distribution for a fixed block-length n can be attained by solving the system (7). Finding such an optimal P^* for the system of equations (7) is equivalent to solving a non-linear system of equations with exponentially (in n) many variables. As introduced in Section I, the BAA is one of the algorithms that can be applied to search for numerical solutions of (7).

However, this approach has several limitations. On the one hand, in a direct implementation of the BAA, as n grows it becomes computationally intractable even to store the variables to be computed. On the other hand, the BAA is itself an iterative algorithm attempting to solve the non-convex optimization problem (7), and to the best of our knowledge, for general channels there are no guarantees on how quickly the numerical solution converges as a function of the number of iterations. Therefore, instead of looking for numerical answers, we concentrate on finding a universal upper bound on the capacities of general channels. This motivates the next section.
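For concreteness, a minimal Blahut-Arimoto iteration for a DMC — where the per-letter alphabet, not the exponentially large block space, is enumerated — can be sketched as follows (a standard textbook-style sketch, not the paper's procedure; the BSC(0.11) example is ours):

```python
import math

def blahut_arimoto(W, iters=500):
    """Capacity (in bits) of a DMC with row-stochastic matrix W[x][y] = W(y|x)."""
    n_in, n_out = len(W), len(W[0])
    p = [1.0 / n_in] * n_in                       # start from the uniform input
    for _ in range(iters):
        q = [sum(p[x] * W[x][y] for x in range(n_in)) for y in range(n_out)]
        # D[x] = exp( sum_y W(y|x) ln(W(y|x)/q(y)) ), with the convention 0 ln 0 = 0
        D = [math.exp(sum(W[x][y] * math.log(W[x][y] / q[y])
                          for y in range(n_out) if W[x][y] > 0))
             for x in range(n_in)]
        total = sum(p[x] * D[x] for x in range(n_in))
        p = [p[x] * D[x] / total for x in range(n_in)]  # multiplicative update
    q = [sum(p[x] * W[x][y] for x in range(n_in)) for y in range(n_out)]
    capacity = sum(p[x] * W[x][y] * math.log2(W[x][y] / q[y])
                   for x in range(n_in) for y in range(n_out) if W[x][y] > 0)
    return capacity, p

# BSC with flip probability 0.11: the capacity is 1 - h(0.11), about 0.5 bits.
C, p_opt = blahut_arimoto([[0.89, 0.11], [0.11, 0.89]])
print(round(C, 4))
```

For a channel with memory such as the BDC, the analogous state space has 2^n rows, which is exactly the storage obstruction described above.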

## III Maximum Likelihood Upper Bound

In the sequel, we present some definitions. First, motivated by the notion of information stability in Definition 1, we characterize a subset of the joint set X^n × \bar{Y} consisting of all possible combinations of input and output sequences. This subset satisfies two vital properties. First, it behaves as a “typical set” and contains nearly all pairs (X^n, Y) randomly generated according to an arbitrary distribution, for every large n. Second, conditioned on the pair belonging to the subset, the conditional mutual information does not differ too much from n C_n(W^n). Note that this concentration of information densities is stronger than that for information stable sources in two respects: the concentration is in expectation, and it is required to hold for every source X^n.

###### Definition 2.

For information stable channels with any source X^n, a subset A of X^n \times \bar{Y} is called a concentration set if it satisfies

 \liminf_{n\to\infty} P\big( (X^n, Y) \in A \big) = 1, \qquad (8)

 \limsup_{n\to\infty} E\Big[ \Big| \frac{i_{X^n, W^n}(X^n; Y(X^n))}{n\, C_n(W^n)} - 1 \Big| \;\Big|\; (X^n, Y) \in A \Big] = 0 \qquad (9)

where the randomness is over the source X^n and the channel law W^n.

Later in Section III-C and Section IV-B1, we provide concrete and nontrivial examples of the concentration sets for the BEC, BSC and BDC respectively.

Since A \subseteq X^n \times \bar{Y}, the next lemma is straightforward.

###### Lemma 2.

For each block-length n, there exists a subset B of \bar{Y} such that

 A \subseteq X^n \times B.

It is useful to introduce the following “constant” version of the stochastic matrix W^n, called the stochastic factors for convenience. Again, we will carefully construct them in Section III-C for both the BEC and the BSC, and in Section IV-B for the BDC.

###### Definition 3.

We call a set of functions \{f_k\}_{k \in K} stochastic factors if there exists a decomposition B = \bigcup_{k \in K} B_k (where K is a discrete set) such that

 \sum_{y \in B_k} f_k(y \mid x) = 1, \quad \forall\, x \in X^n,\, k \in K, \qquad (10)

 \sum_{k \in K} \max_{(x, y) \in A_k} W^n(y \mid x)\, f_k(y \mid x) \le 1 \qquad (11)

where A_k := A \cap (X^n \times B_k).

Based on the concentration set and the stochastic factor defined above, we obtain the following upper bound on the capacity of an information stable channel:

###### Theorem 1 (Maximum Likelihood Upper Bound). (Footnote 5: For intuition on why we call this a maximum likelihood (ML) upper bound, see Section III-A1.)

For a discrete information stable channel as defined in Section I-A, assume there exist a concentration set A and stochastic factors \{f_k\}_{k \in K} as defined above. The following upper bound on the channel capacity then holds:

 C \le \liminf_{n\to\infty} \bar{C}_n(W^n) \qquad (12)

where \bar{C}_n(W^n) denotes the following quantity:

 \bar{C}_n(W^n) := \frac{1}{n} \max_{k \in K} \log \Big( \sum_{y \in B_k} \max_{x \in X^n} f_k(y \mid x) \Big). \qquad (13)

An intuitive derivation of the bound (12), obtained by formulating a simplified system via a greedy approach, is described below. The formal proof using Jensen's inequality is provided in Section III-B.

### III-A Intuition

Recall that the system of equations in (7) gives, for every fixed dimension n, an optimizing probability distribution for the capacity proxy in (5). Since actually solving the system of equations in (7) is computationally intractable for large n, it is desirable to relax this system to a computationally tractable one that nonetheless provides a good outer bound on (7). Our starting point is the observation that an information stable input process X (with corresponding sequence of probability distributions \{P_{X^n}\}) satisfies all but an asymptotically (in n) vanishing fraction of the constraints in (7). To see this, one may notice that by the definition of information stability (Definition 1), for any fixed \gamma > 0 and all sufficiently large n, in probability (over the input process and the channel law), the ratio of the information density over n\, C_n(W^n) is within \gamma of 1. Moreover, for an arbitrary but fixed integer n and for all x \in X^n,

 \sum_{y \in \bar{Y}} W^n(y \mid x) = 1. \qquad (14)

Therefore, for all sufficiently large n, with high probability under the distribution of the information stable process X, the quantity on the LHS of (7) is approximately equal to C_n(W^n). Thus any information stable input process, for sufficiently large n, becomes a reasonable approximation of the input process optimizing (7). This encourages us to construct a new input process \bar{X} by maximizing, for every integer n, the probability in (15) below (using a greedy approach):

 P\Big( \frac{i_{\bar{X}^n, W^n}(\bar{X}^n; Y(\bar{X}^n))}{n\, C_n(W^n)} = 1 \Big). \qquad (15)

Through this process we are able to introduce such a process \bar{X} (with corresponding distribution \bar{P}_{X^n}) that mimics the one for information stability in Definition 1.

#### III-A1 Approximate Information Stable Processes

To find a system that yields such a sub-optimal input distribution \bar{P}_{X^n} efficiently, one simple heuristic is to maximize the probability in (15) greedily.

For fixed input block-length n, we consider the set of all output sequences appearing in the concentration set A. For each such y, we greedily choose the corresponding x that maximizes the a posteriori probability of an instance x being transmitted under the channel law; this is intuitively where the term \max_{x \in X^n} f_k(y \mid x) in Eq. (13) comes from. (Footnote 6: This procedure coincides with the greedy decoding suggested in [18] for the BDC, which can be used to derive lower bounds on C(d). However, making use of the system (7), the greedy selection is also capable of giving upper bounds.)

Now, guided by the intuition in the previous paragraph about the LHS of (7) being approximately equal to C_n(W^n) for many j, we fix the information density

 i_{\bar{X}^n, W^n}(x_j; y_i) = \log \frac{\max_{x \in X^n} f_k(y_i \mid x)}{\sum_{j=1}^N \bar{p}^n_j W^n(y_i \mid x_j)}

for each such y_i drawn above to equal a certain constant \bar{\lambda}_n (which shows up later, in Eq. (16)). (Footnote 7: We do not claim to have an efficient computational process for determining this constant \bar{\lambda}_n. However, it has a strong operational meaning: it provides an outer bound on the capacity of the deletion channel, as discussed in Eq. (17).) The fixing of the information density is done in a manner such that, using Bayes' rule, a probability distribution is induced on X^n. In particular, the value of \bar{\lambda}_n is chosen so that the summation of the induced probabilities over all x_j equals 1.
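The greedy selection step — for each output sequence, keep the input with the largest likelihood — can be sketched on a toy memoryless example (ours for illustration; in the paper the selection acts through the factors f_k):

```python
import itertools

def ml_choice(y, inputs, likelihood):
    """Greedily select the input x maximizing the likelihood of output y."""
    return max(inputs, key=lambda x: likelihood(y, x))

p = 0.1  # flip probability of the toy BSC

def bsc_likelihood(y, x):
    d = sum(a != b for a, b in zip(x, y))        # Hamming distance
    return p ** d * (1 - p) ** (len(x) - d)

inputs = list(itertools.product((0, 1), repeat=3))
# For p < 1/2, the most likely transmitted block is the received block itself:
print(ml_choice((0, 1, 1), inputs, bsc_likelihood))  # (0, 1, 1)
```

Enumerating `inputs` is only feasible for toy n; this is precisely why the paper works with the factors f_k instead of the raw maximization.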

#### III-A2 Simplified System

Formally, we describe the new system (a simplified version of (7)) as follows.

For all y \in B_k, all k \in K, and some \bar{\lambda}_n, we let

 \frac{1}{n} \log \frac{\max_{x \in X^n} f_k(y \mid x)}{\sum_{j=1}^N \bar{p}^n_j W^n(y \mid x_j)} = \bar{\lambda}_n. \qquad (16)

As explained above, by exhausting the set B, the constraints in (16) suggest a greedy approach for finding the sub-optimal distribution \bar{P}_{X^n}.

Recall (Definition 3 in Section III) that \sum_{y \in B_k} f_k(y \mid x) = 1 for all x \in X^n and k \in K. Given an input process satisfying Eqs. (16) for each integer n, we can rewrite the constraints in (16) as

 \frac{\max_{x \in X^n} f_k(y \mid x)}{\sum_{j=1}^N \bar{p}^n_j W^n(y \mid x_j)} = 2^{n \bar{\lambda}_n}, \quad \forall\, y \in B_k.

Summing both sides over all y \in B_k,

 \sum_{y \in B_k} \frac{\max_{x \in X^n} f_k(y \mid x)}{2^{n \bar{\lambda}_n}} = \sum_{y \in B_k} \sum_{j=1}^N \bar{p}^n_j W^n(y \mid x_j) = \sum_{j=1}^N \bar{p}^n_j \sum_{y \in B_k} W^n(y \mid x_j) = \sum_{j=1}^N \bar{p}^n_j = 1.

Multiplying both sides by 2^{n \bar{\lambda}_n} and taking logarithms,

 \bar{\lambda}_n = \frac{1}{n} \log \Big( \sum_{y \in B_k} \max_{x \in X^n} f_k(y \mid x) \Big). \qquad (17)

The number of constraints in (16) is much smaller than in (7), suggesting (without a proof) that \bar{\lambda}_n \ge \lambda_n for each n. This indicates that the ML upper bound defined in Theorem 1 makes sense. Next we prove Theorem 1.

### III-B Proof of Theorem 1

Denote by P^* the optimizing probability distribution maximizing the quantity in (6). Based on Definition 2, Lemma 2 and Definition 3, we prove Theorem 1.

Considering the constraints in (7), it follows that

 C_n(W^n) = \frac{1}{n} \sum_{i=1}^M W^n(y_i \mid x_j) \log \frac{W^n(y_i \mid x_j)}{\sum_{j=1}^N p^*_j W^n(y_i \mid x_j)} \qquad (20)

for all j with p^*_j \neq 0.

Now we introduce an auxiliary probability distribution Q = (q_1, \dots, q_N). Multiplying both sides of (20) by q_j and summing over all j,

 C_n(W^n) \le \frac{1}{n} \sum_{j=1}^N \sum_{i=1}^M q_j W^n(y_i \mid x_j) \log \frac{W^n(y_i \mid x_j)}{\sum_{j=1}^N p^*_j W^n(y_i \mid x_j)}.

Making use of the concentration set A in Definition 2, we get (18), where \gamma_n \to 0 as n \to \infty. Moreover, the decomposition B = \bigcup_{k \in K} B_k (with A_k = A \cap (X^n \times B_k)) yields (19). Since logarithmic functions are concave and Eq. (10) implies

 \sum_{(x_j, y) \in A_k} q_j f_k(y \mid x_j) = \sum_{y \in B_k} \sum_{j=1}^N q_j f_k(y \mid x_j) = \sum_{j=1}^N q_j = 1,

applying Jensen’s inequality to (19), it follows that

 (21)

where the last inequality holds since Eq. (11) guarantees that

 \sum_{k \in K} \max_{(x, y) \in A_k} W^n(y \mid x)\, f_k(y \mid x) \le 1.

Next, we set q_j = p^*_j for all j. The quantity inside the logarithm of (21) becomes

 \sum_{(x_j, y) \in A_k} \frac{p^*_j W^n(y \mid x_j)\, f_k(y \mid x_j)}{\sum_{j=1}^N p^*_j W^n(y \mid x_j)} \qquad (22)

 = \sum_{y \in B_k} \frac{\sum_{j=1}^N p^*_j W^n(y \mid x_j)\, f_k(y \mid x_j)}{\sum_{j=1}^N p^*_j W^n(y \mid x_j)} \qquad (23)

 \le \sum_{y \in B_k} \max_{x \in X^n} f_k(y \mid x). \qquad (24)

Putting (21) and (22) into (18), for any concentration set A and stochastic factors \{f_k\},

 C_n(W^n) \le \bar{C}_n(W^n) := \frac{1}{n} \max_{k \in K} \log \Big( \sum_{y \in B_k} \max_{x \in X^n} f_k(y \mid x) \Big) + \gamma_n. \qquad (25)

Note that the term \gamma_n vanishes (in n). Hence, for information stable channels, the general formula in (4) implies that

 C \le \liminf_{n\to\infty} \bar{C}_n(W^n).

This completes the proof of Theorem 1.

### III-C Verification of the Tightness for the BEC and BSC

This section is devoted to verifying the tightness of the upper bound in Theorem 1 for the BEC and the BSC. Denote by p the erasure/bit-flip probability. Note that under the settings of the BEC and BSC, we have the following realizations of the input and output spaces:

 X^n_{BEC} = X^n_{BSC} := \{0, 1\}^n

and

 \bar{Y}_{BEC} := \{0, 1, E\}^n, \qquad \bar{Y}_{BSC} := \{0, 1\}^n.

Denote by d_E(y) the number of erasures (marked as E) in a given output sequence y, and by d_H(x, y) the Hamming distance between x and y. We select the following concentration sets (and the corresponding decompositions) for the two types of channels respectively.

###### Definition 4.

We consider the following concentration sets, denoted by A_{BEC} and A_{BSC} for the BEC and BSC respectively, according to Definition 2:

 A_{BEC} := \bigcup_{k \in K_\varepsilon} A^{BEC}_k, \qquad (26)

 A_{BSC} := \bigcup_{k \in K_\varepsilon} A^{BSC}_k \qquad (27)

where A^{BEC}_k and A^{BSC}_k are defined as follows, in agreement with Definition 3:

 K_\varepsilon := \{ \lfloor n(p - \varepsilon) \rfloor, \dots, \lceil n(p + \varepsilon) \rceil \},

 A^{BEC}_k := \{ (x, y) \in \{0,1\}^n \times \{0,1,E\}^n : d_E(y) = k \}, \qquad (28)

 A^{BSC}_k := \{ (x, y) \in \{0,1\}^n \times \{0,1\}^n : d_H(x, y) = k \} \qquad (29)

and the sets B^{BEC}_k and B^{BSC}_k are defined as

 B^{BEC}_k := \{ y \in \{0,1,E\}^n : d_E(y) = k \}, \qquad (30)

 B^{BSC}_k := \{0, 1\}^n \qquad (31)

for some \varepsilon > 0 satisfying p - \varepsilon > 0.

###### Lemma 3.

The concentration sets A_{BEC} and A_{BSC} defined above satisfy the conditions (8)-(9) in Definition 2.

###### Proof.

For the BEC, since each bit of the length-n input sequence is erased i.i.d., the number of erased bits is distributed according to Binomial(n, p). Therefore, for fixed input length n, by the Chernoff bound (see [19, 20]), for any source X^n, the probability of the output sequence being inside the concentration set defined in (26) is bounded from below by 1 - 2e^{-2\varepsilon^2 n}. Moreover, the information densities corresponding to the outliers must be bounded. Thus, the concentration set A_{BEC} satisfies the condition in (9). For the BSC, we have the trivial decomposition B^{BSC}_k = \{0,1\}^n, which automatically guarantees the condition in (9). ∎

Furthermore, we associate the following stochastic factors with both channels.

###### Definition 5.

The stochastic factors for the BEC and BSC are defined as follows.

 f^{BEC}_k(y \mid x) := \binom{n}{k}^{-1} \cdot \mathbb{1}\{ y \in B^{BEC}_k \text{ and } y \text{ agrees with } x \text{ on its unerased positions} \} \qquad (32)

and

 f^{BSC}_k(y \mid x) := \binom{n}{k}^{-1} \cdot \mathbb{1}\{ d_H(x, y) = k \} \qquad (33)

with k satisfying

 \lfloor n(p - \varepsilon) \rfloor \le k \le \lceil n(p + \varepsilon) \rceil \qquad (34)

for some \varepsilon > 0.

###### Lemma 4.

The stochastic factors \{f^{BEC}_k\} and \{f^{BSC}_k\} defined above satisfy the conditions (10)-(11) in Definition 3.

###### Proof.

We check that the stochastic factors defined above satisfy the conditions in (10)-(11). For the BEC, plugging in (28) and (32), and noting that for each fixed x exactly |B^{BEC}_k| / 2^{n-k} = \binom{n}{k} sequences in B^{BEC}_k agree with x on their unerased positions,

 \sum_{y \in B^{BEC}_k} f^{BEC}_k(y \mid x) = \frac{|B^{BEC}_k|}{2^{n-k}} \cdot \binom{n}{k}^{-1} = 1, \quad \forall\, x \in X^n,\, k \in K_\varepsilon.

Since W^n(y \mid x) = p^k (1-p)^{n-k} whenever (x, y) \in A^{BEC}_k and f^{BEC}_k(y \mid x) > 0,

 \sum_{k \in K_\varepsilon} \max_{(x, y) \in A^{BEC}_k} W^n(y \mid x)\, f^{BEC}_k(y \mid x) \le \sum_{k = \lfloor n(p-\varepsilon) \rfloor}^{\lceil n(p+\varepsilon) \rceil} \binom{n}{k} p^k (1-p)^{n-k} \in [0, 1],

showing that \{f^{BEC}_k\} are stochastic factors.

For the BSC, similarly, based on the definition in (33), since for each fixed x there are in total \binom{n}{k} many y satisfying d_H(x, y) = k, it follows that

 \sum_{y \in B^{BSC}_k} f^{BSC}_k(y \mid x) = \binom{n}{k} \cdot \binom{n}{k}^{-1} = 1, \quad \forall\, x \in X^n,\, k \in K_\varepsilon.

Moreover, for all (x, y) \in A^{BSC}_k, clearly,

 0 \le W^n(y \mid x)\, f^{BSC}_k(y \mid x) \le 1. \qquad (35)

From the definition of the set A^{BSC}_k,

 \sum_{k \in K_\varepsilon} \max_{(x, y) \in A^{BSC}_k} W^n(y \mid x)\, f^{BSC}_k(y \mid x) = \sum_{k \in K_\varepsilon} \max_{(x, y):\, d_H(x, y) = k} W^n(y \mid x)\, f^{BSC}_k(y \mid x) \le \sum_{k = \lfloor n(p-\varepsilon) \rfloor}^{\lceil n(p+\varepsilon) \rceil} \binom{n}{k} p^k (1-p)^{n-k} \in [0, 1],

verifying that the stochastic factors in (32)-(33) satisfy the conditions in Definition 3. ∎
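For small n, the normalization condition (10) can also be verified by brute-force enumeration. The sketch below (our check, using the factor value 1/C(n,k) from the proof above) confirms it for both channels:

```python
import itertools
import math

n, k = 6, 2
x = (0,) * n  # the all-zeros input; any fixed x would do

# BEC: B_k is all length-n strings over {0,1,E} with exactly k erasures,
# and f_k(y|x) = 1/C(n,k) on the outputs consistent with x.
B_bec = [y for y in itertools.product((0, 1, "E"), repeat=n) if y.count("E") == k]
assert len(B_bec) == math.comb(n, k) * 2 ** (n - k)  # |B_k| = C(n,k) 2^{n-k}
consistent = [y for y in B_bec if all(b == "E" or a == b for a, b in zip(x, y))]
bec_mass = len(consistent) / math.comb(n, k)

# BSC: B_k = {0,1}^n and f_k(y|x) = 1/C(n,k) on the sphere d_H(x,y) = k.
sphere = [y for y in itertools.product((0, 1), repeat=n)
          if sum(a != b for a, b in zip(x, y)) == k]
bsc_mass = len(sphere) / math.comb(n, k)

print(bec_mass, bsc_mass)  # both sum to 1, matching condition (10)
```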

Based on the concentration sets A_{BEC} and A_{BSC} defined in (26)-(31) and the stochastic factors in (32)-(33), the ML upper bound in Theorem 1 is tight, as stated in the following theorem.

###### Theorem 2 (Tightness for the BEC and BSC).

Let p be the erasure/bit-flip probability. The ML upper bound in Theorem 1 is tight for the BEC and BSC, i.e.,

 \liminf_{n\to\infty} \frac{1}{n} \max_{k \in K_\varepsilon} \log \Big( \sum_{y \in B^{BEC}_k} \max_{x \in X^n_{BEC}} f^{BEC}_k(y \mid x) \Big) = 1 - p,

 \liminf_{n\to\infty} \frac{1}{n} \max_{k \in K_\varepsilon} \log \Big( \sum_{y \in B^{BSC}_k} \max_{x \in X^n_{BSC}} f^{BSC}_k(y \mid x) \Big) = 1 - h(p),

where h(p) := -p \log p - (1 - p) \log(1 - p) denotes the binary entropy function.
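The two limits can be sanity-checked numerically: with max_x f_k(y|x) = 1/C(n,k), the BEC sum over B_k equals 2^{n−k} and the BSC sum equals 2^n/C(n,k). Evaluating the resulting rates at k = ⌈np⌉ (our check):

```python
import math

def h(p: float) -> float:
    """Binary entropy in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

p = 0.11
for n in (100, 1000, 10000):
    k = math.ceil(n * p)
    bec_rate = (n - k) / n                            # (1/n) log 2^{n-k}
    bsc_rate = 1 - math.log2(math.comb(n, k)) / n     # (1/n) log (2^n / C(n,k))
    print(n, round(bec_rate, 4), round(bsc_rate, 4))
print(round(1 - p, 4), round(1 - h(p), 4))  # the limits 1 - p and 1 - h(p)
```

The BEC rate equals 1 − ⌈np⌉/n exactly, and the BSC rate approaches 1 − h(p) by Stirling's approximation of the binomial coefficient.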

###### Proof.

Note that the parameter \varepsilon can be made arbitrarily small; taking k = \lceil np \rceil and applying Theorem 1, the capacity of the BEC, denoted by C_{BEC}(p), satisfies

 C_{BEC}(p) \le \liminf_{n\to\infty} \frac{1}{n} \log \Big( \sum_{y \in B^{BEC}_{\lceil np \rceil}} \max_{x \in X^n_{BEC}} f^{BEC}_{\lceil np \rceil}(y \mid x) \Big) = \liminf_{n\to\infty} \frac{1}{n} \log \Big( \big| B^{BEC}_{\lceil np \rceil} \big| \Big/ \binom{n}{\lceil np \rceil} \Big).

Putting \big| B^{BEC}_{\lceil np \rceil} \big| = 2^{n - \lceil np \rceil} \binom{n}{\lceil np \rceil} into the above,

 C_{BEC}(p) \le \liminf_{n\to\infty} \frac{1}{n} \log \Big( 2^{n - \lceil np \rceil} \binom{n}{\lceil np \rceil} \Big/ \binom{n}{\lceil np \rceil} \Big) = 1 - p. \qquad (36)

Furthermore, for the BSC, the capacity C_{BSC}(p) is bounded from above as

 C_{BSC}(p) \le \liminf_{n\to\infty} \frac{1}{n} \log \Big( \sum_{y \in B^{BSC}_{\lceil np \rceil}} \max_{x \in X^n_{BSC}} f^{BSC}_{\lceil np \rceil}(y \mid x) \Big) = \liminf_{n\to\infty} \frac{1}{n} \log \Big( 2^n \Big/ \binom{n}{\lceil np \rceil} \Big).

Since \frac{1}{n} \log \binom{n}{\lceil np \rceil} \to h(p) as n \to \infty, it follows that