# Error-Free Communication Over State-Dependent Channels with Variable-Length Feedback

The zero-error capacity of state-dependent channels with noiseless feedback is determined, under the assumption that the transmitter and the receiver are allowed to use variable-length coding schemes. Various cases are analyzed, with the employed coding schemes having either bounded or unbounded codeword lengths and with state information revealed to the encoder and/or decoder in a strictly causal, causal, or non-causal manner. In each of these settings, necessary and sufficient conditions for positivity of the zero-error capacity are obtained and it is shown that, whenever the zero-error capacity is positive, it equals the conventional vanishing-error capacity. Moreover, it is shown that the vanishing-error capacity of state-dependent channels is not increased by the use of feedback and variable-length coding. A comparison of the results with the recently solved fixed-length case is also given.


## I Introduction

This work considers the zero-error capacity of state-dependent channels. In contrast to the standard notion of capacity, which allows an asymptotically vanishing probability of error at the decoder, the output of a zero-error decoder must always be correct. We focus on variable-length coding schemes with access to noiseless feedback at the encoder, and derive the zero-error capacity under different models of state information availability.

At first glance, the requirement of zero-error decoding is quite stringent. Indeed, the zero-error capacity of many channels, including the binary symmetric channel (BSC), is zero, and this holds even for the relatively benign binary erasure channel (BEC). However, it became evident after Burnashev’s work on the error exponent of discrete memoryless channels (DMCs) [3] that, with variable-length encoding and noiseless feedback, error-free communication is not only possible over a large class of DMCs, but the zero-error capacity of such channels is equal to their (vanishing-error) Shannon capacity. The fact that the channel capacity can in many cases be achieved with error probability fixed to zero is quite interesting, and this is what motivated us to study the corresponding problem in the more general context of channels with states. Furthermore, analyzing fundamental limits of communication under variable-length coding is important in systems with feedback, and even more so in systems where side information about the channel is available as well. In fact, most communication schemes in such systems, e.g., the ARQ mechanism, are adaptive in nature, and the length of transmission depends on the side information and on the information obtained through the feedback link.

Communicating with zero error has been considered previously in many settings, starting with Shannon’s work [12] on the zero-error capacity of DMCs with and without feedback, and under fixed-length encoding. We refer the reader to [8] for a review of this area. The work most closely related to ours is that of Bracher and Lapidoth [2] where the zero-error feedback capacity of state-dependent channels was determined, under the assumptions that fixed-length encoding is being used and that state information is available only at the encoder. Our work extends these results to the variable-length case, while also analyzing other models of state information availability. We also mention here the work of Zhao and Permuter [16] where the authors characterized the zero-error feedback capacity under fixed-length encoding of channels with state information at both the encoder and the decoder, but in which the state process is not necessarily memoryless and is even allowed to depend on the channel inputs.

The main contributions of this work are threefold. First, we show how to relate the zero-error capacity with variable-length encoding and feedback (“zero-error VLF capacity”) to the standard vanishing-error capacity with fixed-length encoding and no feedback (Theorem 1 and Theorem 3). Second, we give necessary and sufficient conditions for the zero-error VLF capacity to be positive under different models of state information availability at the encoder and the decoder, including none, strictly causal (only past states are available), causal (past and current states available), and non-causal (all states including future states available). These results are summarized in Theorem 2. These theorems together completely characterize the zero-error VLF capacity of state-dependent channels. And third, we obtain analogous results for the zero-error feedback capacity under bounded-length coding (Theorem 5 and Theorem 6). We show that the conditions for positivity are in this case the same as for fixed-length coding schemes, although the final capacity expressions may be different.

In addition, as part of our proofs, we derive an upper bound on the $\epsilon$-error capacity of state-dependent channels (Lemma 10), which may be of independent interest.

### I-A The Channel Model, Definitions, and Notation

Let $\mathcal{X}$, $\mathcal{Y}$, and $\mathcal{S}$ denote the set of channel input letters, the set of channel output letters, and the set of channel states, respectively, all of which are assumed finite. A state-dependent discrete memoryless channel (SD-DMC) is described by conditional probability distributions $W(\cdot\,|\,x,s)$, $x \in \mathcal{X}$, $s \in \mathcal{S}$, where the states are drawn i.i.d. across all channel uses according to a distribution $Q$ on $\mathcal{S}$. To avoid discussing trivial cases, we assume throughout the paper that $|\mathcal{X}| \geq 2$, $|\mathcal{Y}| \geq 2$, $|\mathcal{S}| \geq 2$; that all states in $\mathcal{S}$ have positive probability:

$$\forall s \in \mathcal{S} \quad Q(s) > 0; \tag{1}$$

and that every channel output is reachable from at least one input in at least one state:

$$\forall y \in \mathcal{Y}\ \exists x \in \mathcal{X}, s \in \mathcal{S} \quad W(y|x,s) > 0. \tag{2}$$
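As a concrete illustration (a sketch with hypothetical helper names, not code from the paper), an SD-DMC can be represented programmatically and the standing assumptions (1) and (2) checked mechanically:

```python
# A minimal sketch of an SD-DMC: W(y, x, s) returns the transition
# probability W(y|x,s); Q is the state distribution over S.

def check_assumptions(X, Y, S, W, Q):
    """Check assumption (1): Q(s) > 0 for every state, and assumption (2):
    every output letter is reachable from some (input, state) pair."""
    if any(Q[s] <= 0 for s in S):
        return False
    return all(any(W(y, x, s) > 0 for x in X for s in S) for y in Y)

# Hypothetical two-state example: a Z-channel in state 's1' and a
# noiseless binary channel in state 's2'.
table = {('s1', 0): (1.0, 0.0), ('s1', 1): (0.5, 0.5),
         ('s2', 0): (1.0, 0.0), ('s2', 1): (0.0, 1.0)}
W = lambda y, x, s: table[(s, x)][y]
print(check_assumptions([0, 1], [0, 1], ['s1', 's2'], W,
                        {'s1': 0.5, 's2': 0.5}))
```

This representation (a channel law plus a state distribution) is the one used in the illustrative snippets later in the paper's discussion.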

We use the symbol $\mathcal{M}$ to denote the set of messages to be transmitted in a particular communication setting. The symbols $M, X, Y, S$ denote random variables taking values in $\mathcal{M}, \mathcal{X}, \mathcal{Y}, \mathcal{S}$, respectively, and the lower case versions $m, x, y, s$ denote their realizations, i.e., elements of these sets. The random variable $M$ representing the transmitted message is always assumed to be uniform over $\mathcal{M}$. $S^\infty$ is a shorthand for a random infinite sequence $(S_1, S_2, \ldots)$, $S^n$ for a random finite sequence $(S_1, \ldots, S_n)$, and $S_i^j$ for a subsequence $(S_i, \ldots, S_j)$ (hence $S^n = S_1^n$); the same notation is used for the other symbols.

We say that the encoder (resp. decoder) has causal state information if, before the $i$'th channel use, it can see all the past channel states as well as the current—$i$'th—state, i.e., it is given the state sequence $s^i = (s_1, \ldots, s_i)$ and can use it in the $i$'th time slot for the encoding (resp. decoding) operation. State information is said to be strictly causal if only past states ($s^{i-1}$) are available at time instant $i$, and it is said to be non-causal if all the states ($s^\infty$) are available at any time instant. We consider the following cases of state information availability:

$$\mathrm{SI} \coloneqq \{(-,-), (\mathrm{sc},-), (\mathrm{c},-), (\mathrm{nc},-), (\mathrm{sc},\mathrm{c}), (\mathrm{c},\mathrm{c}), (\mathrm{nc},\mathrm{c}), (\mathrm{nc},\mathrm{nc})\}, \tag{3}$$

where the first (resp. second) coordinate of each pair denotes the state information available at the encoder (resp. decoder), and $-$/sc/c/nc stand for none/strictly-causal/causal/non-causal. The cases that have been omitted from SI are discussed in Remark 1 to follow.

###### Definition 1.

Consider an SD-DMC with causal state information at both the encoder and the decoder. An $(\ell, M, \varepsilon)$ variable-length feedback (VLF) code for the message set $\mathcal{M} = \{1, \ldots, M\}$, where $\ell$ is a positive real and $\varepsilon \in [0, 1)$, is defined by:

• A sequence of encoders $f_i \colon \mathcal{M} \times \mathcal{S}^i \times \mathcal{Y}^{i-1} \to \mathcal{X}$, $i = 1, 2, \ldots$, defining the channel inputs $X_i = f_i(M, S^i, Y^{i-1})$;

• A sequence of decoders $g_i \colon \mathcal{S}^i \times \mathcal{Y}^i \to \mathcal{M}$, $i = 1, 2, \ldots$, defining the decoder's estimates of the transmitted message, $\hat{M}_i = g_i(S^i, Y^i)$;

• A positive integer-valued random variable $\tau$ (a stopping time of the receiver filtration $\mathcal{F}_i = \sigma(S^i, Y^i)$) representing the code length and satisfying $\mathbb{E}[\tau] \leq \ell$.

The decoder's final decision is computed at time $\tau$, $\hat{M} = g_\tau(S^\tau, Y^\tau)$, and it must satisfy $\Pr\big(\hat{M} \neq M\big) \leq \varepsilon$.

When $\varepsilon = 0$, such a code is called a zero-error VLF code.

If there exists a constant $L < \infty$ such that $\tau \leq L$ with probability $1$, such a code is called a bounded-length feedback code, and if $\tau = L$ with probability $1$, it is called a fixed-length feedback code.

Definitions for the other cases of state information availability are the same, except that $S^i$ in 1)–3) is replaced by $S^{i-1}$ (strictly causal), by $S^\infty$ (non-causal), or is dropped (no state information), accordingly.

The rate of an $(\ell, M, \varepsilon)$ code is defined as $\frac{\log M}{\ell}$. The vanishing-error capacity of a given channel is defined in the usual way as the supremum of the code rates that are asymptotically achievable (as $\ell \to \infty$) with arbitrarily small error probability. The zero-error capacity of a given channel is the supremum of the rates of all zero-error codes for that channel [12]. Capacity is always denoted by $C$, with subscripts and superscripts indicating the channel and the coding schemes with respect to which it is defined as follows:

• The first subscript is either “$0$” or “$\downarrow$” and serves to distinguish between the zero-error and the vanishing-error case;

• The second subscript is either “f” or “-” depending on whether or not the feedback link is present;

• The third subscript is vl, bl, or fl, indicating that the capacity in question is defined with respect to variable-length, bounded-length, or fixed-length codes;

• Superscripts from the set SI (see (3)) are used to denote state information availability at the encoder and the decoder.

For example, $C^{\mathrm{nc},-}_{\downarrow,\mathrm{f},\textsc{vl}}$ is the vanishing-error capacity under variable-length feedback codes, where the encoder is given state information in a non-causal manner and the decoder is given no state information; $C^{\mathrm{c},\mathrm{c}}_{0,-,\textsc{bl}}$ is the zero-error capacity under bounded-length coding without feedback, and with state information revealed both to the encoder and to the decoder in a causal manner; etc.

###### Remark 1.

To conclude this section we explain briefly why, of the sixteen possible cases in $\{-, \mathrm{sc}, \mathrm{c}, \mathrm{nc}\}^2$, only the eight cases in SI in (3) are being considered.

First, leaving out the four cases with strictly causal state information at the decoder, $(\cdot, \mathrm{sc})$, is not a loss of generality. This is because strictly causal state information at the decoder can be “made causal” by simply delaying the decoding process by one time slot. Hence, from the viewpoint of capacity issues, $(e, \mathrm{sc})$ is equivalent to $(e, \mathrm{c})$ for any $e \in \{-, \mathrm{sc}, \mathrm{c}, \mathrm{nc}\}$.

Second, of the four possible cases where the decoder has non-causal state information, $(\cdot, \mathrm{nc})$, we only consider one—$(\mathrm{nc}, \mathrm{nc})$. This is also not a loss of generality because knowing future states can be helpful to the decoder only if the encoder also knows the future states and is using them in the encoding operation. Otherwise, these states are independent of the channel inputs and no information can be extracted from them. Hence, for our purposes, $(e, \mathrm{nc})$ is equivalent to $(e, \mathrm{c})$ for any $e \neq \mathrm{nc}$.

Finally, note that $(-, \mathrm{c})$ has not been included in SI either. This case is quite subtle and we choose to discuss it separately in Section V. The main issue here is that it is not clear how to define the code length, i.e., the stopping time $\tau$ (see Definition 1). Namely, the decoder making a decision at time instant $\tau$ does not necessarily mean that the transmission is over from the encoder's perspective. This is because the decoder's decision is based on the channel outputs and the channel states that it sees, and hence the encoder, not knowing the states, may actually never realize that the decoding is completed and may continue transmitting. As we shall see, this is especially important in the zero-error setting. Despite this difficulty, we shall give in Section V a sufficient condition for the zero-error VLF capacity to be positive, and a necessary and sufficient condition for its bounded-length counterpart to be positive.

## II Vanishing-Error Capacity

As already mentioned in Section I, one of the results of this paper is the statement that the zero-error VLF capacity of an SD-DMC, whenever positive, equals the vanishing-error capacity of the same channel. For this reason, we first study the vanishing-error capacity and show that this fundamental limit remains unchanged even if the transmitter and the receiver are allowed to use variable-length feedback-dependent coding schemes. For SD-DMCs with state information only at the transmitter, the fact that feedback does not increase the capacity under fixed-length coding was shown in [10].

###### Theorem 1.

For every $\sigma \in \mathrm{SI}$, $C^{\sigma}_{\downarrow,\mathrm{f},\textsc{vl}} = C^{\sigma}_{\downarrow,-,\textsc{fl}}$.

###### Proof:

Deferred to the Appendix. ∎

Consequently, we shall denote the vanishing-error capacity (with or without feedback, and regardless of the coding scheme) simply by $C^{\sigma}_{\downarrow}$ in the rest of the paper.

By Theorem 1 and the known expressions for the capacity [5, Ch. 7] we get:

$$C^{-,-}_{\downarrow,\mathrm{f},\textsc{vl}} = C^{\mathrm{sc},-}_{\downarrow,\mathrm{f},\textsc{vl}} = \max_{P_X} I(X;Y) \tag{4}$$

$$C^{\mathrm{c},-}_{\downarrow,\mathrm{f},\textsc{vl}} = \max_{P_U,\ f\colon\, \mathcal{U} \times \mathcal{S} \to \mathcal{X}} I(U;Y) \tag{5}$$

$$C^{\mathrm{nc},-}_{\downarrow,\mathrm{f},\textsc{vl}} = \max_{P_{U|S},\ f\colon\, \mathcal{U} \times \mathcal{S} \to \mathcal{X}} \big[\, I(U;Y) - I(U;S) \,\big] \tag{6}$$

$$C^{\mathrm{sc},\mathrm{c}}_{\downarrow,\mathrm{f},\textsc{vl}} = \max_{P_X} I(X;Y|S) \tag{7}$$

$$C^{\mathrm{c},\mathrm{c}}_{\downarrow,\mathrm{f},\textsc{vl}} = C^{\mathrm{nc},\mathrm{c}}_{\downarrow,\mathrm{f},\textsc{vl}} = C^{\mathrm{nc},\mathrm{nc}}_{\downarrow,\mathrm{f},\textsc{vl}} = \max_{P_{X|S}} I(X;Y|S), \tag{8}$$

where $U$ denotes an auxiliary random variable with alphabet $\mathcal{U}$ of appropriate finite cardinality.
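As a numerical illustration (a sketch, not from the paper), the right-hand side of (8) can be evaluated directly: since both sides know the state, the input distribution $P_{X|S}$ can be optimized separately for each state, so (8) equals the $Q$-weighted sum of the per-state DMC capacities, each computable with the standard Blahut–Arimoto iteration. The example channel below is an assumption for illustration.

```python
import math

def ba_capacity(W, tol=1e-10, iters=5000):
    """Blahut-Arimoto capacity (in bits) of a DMC given as rows
    W[x][y] = W(y|x)."""
    nx, ny = len(W), len(W[0])
    p = [1.0 / nx] * nx
    for _ in range(iters):
        q = [sum(p[x] * W[x][y] for x in range(nx)) for y in range(ny)]
        # multiplicative update: p(x) proportional to p(x) * exp(D(W(.|x)||q))
        d = [math.exp(sum(W[x][y] * math.log(W[x][y] / q[y])
                          for y in range(ny) if W[x][y] > 0))
             for x in range(nx)]
        z = sum(p[x] * d[x] for x in range(nx))
        p_new = [p[x] * d[x] / z for x in range(nx)]
        done = max(abs(a - b) for a, b in zip(p, p_new)) < tol
        p = p_new
        if done:
            break
    q = [sum(p[x] * W[x][y] for x in range(nx)) for y in range(ny)]
    return sum(p[x] * W[x][y] * math.log2(W[x][y] / q[y])
               for x in range(nx) for y in range(ny) if W[x][y] > 0)

def capacity_cc(Ws, Q):
    """Evaluate (8) as the Q-weighted sum of per-state capacities."""
    return sum(Q[s] * ba_capacity(Ws[s]) for s in Ws)

# Hypothetical example: BSC(0.1) in state 0, a noiseless channel in state 1.
Ws = {0: [[0.9, 0.1], [0.1, 0.9]], 1: [[1.0, 0.0], [0.0, 1.0]]}
print(capacity_cc(Ws, {0: 0.5, 1: 0.5}))   # ~ (1 - h(0.1) + 1) / 2
```

For this example the value is $\tfrac{1}{2}\big(1 - h(0.1)\big) + \tfrac{1}{2} \approx 0.766$ bits per channel use.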

## III Zero-Error Capacity: Variable-Length Codes

For a DMC $W(y|x)$, a necessary and sufficient condition for positivity of the zero-error VLF capacity, $C_{0,\mathrm{f},\textsc{vl}} > 0$, is

$$\exists x \in \mathcal{X}, y \in \mathcal{Y} \quad W(y|x) = 0. \tag{9}$$

This can be concluded from Burnashev’s characterization of the error exponent of DMCs under VLF coding schemes [3]—the corresponding error exponent is infinite if and only if (9) holds. When $W(y|x) = 0$, $y$ is said to be a disprover for $x$ (the terminology is from [9] and reflects the fact that such an output “disproves” the possibility that $x$ was the channel input). Whenever such a disprover exists, one bit can be transmitted error free in a finite expected number of channel uses as follows. In the first two channel uses the transmitter sends $x\,x'$ for $0$ and $x'\,x$ for $1$, where $x'$ is any other input letter with $W(y|x') > 0$ (such an input letter necessarily exists by our assumption (2)). Due to the fact that $W(y|x) = 0$, if the letters obtained at the output are $y\,{\ast}$, where $\ast$ denotes an arbitrary letter from $\mathcal{Y}$, the receiver concludes that $1$ must have been transmitted; similarly, if the letters obtained at the output are ${\ast}\,y$ (with the first letter different from $y$), then $0$ must have been transmitted; finally, if neither of the two output letters is $y$, the procedure is repeated (the transmitter sees the output letters through feedback and knows whether or not it should repeat the transmission). The expected number of retransmissions needed to complete the protocol is finite because, in each round, the event that the slot carrying $x'$ produces the output $y$ has positive probability, namely $W(y|x') > 0$. Therefore, in a finite expected number of channel uses the receiver will recover the bit correctly.
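The two-slot repetition protocol can be simulated; the sketch below (hypothetical helper names, not code from the paper) uses a Z-channel, for which output $1$ is a disprover for input $0$:

```python
import random

# Protocol: y is a disprover for x (W(y|x) = 0), x' is any input with
# W(y|x') > 0. Message 0 is sent as (x, x'), message 1 as (x', x).

def send_bit(bit, channel, x=0, xp=1, y=1, rng=None):
    """Zero-error transmission of one bit; returns (decoded bit, channel uses)."""
    rng = rng or random.Random()
    uses = 0
    while True:
        pair = (x, xp) if bit == 0 else (xp, x)
        out = [channel(a, rng) for a in pair]
        uses += 2
        if out[0] == y:    # slot 1 cannot carry x and produce y => message 1
            return 1, uses
        if out[1] == y:    # slot 2 cannot carry x and produce y => message 0
            return 0, uses
        # disprover not seen: the transmitter, watching the feedback,
        # knows this and repeats the two-slot transmission

def z_channel(a, rng):
    # W(1|0) = 0 (so y = 1 disproves x = 0); W(0|1) = W(1|1) = 1/2
    return 0 if a == 0 else rng.randint(0, 1)

rng = random.Random(0)
dec, uses = send_bit(1, z_channel, rng=rng)
print(dec, uses)
```

Each round succeeds with probability $W(y|x') = 1/2$, so here the expected number of channel uses is $4$, and the decoded bit is always correct.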

For a study of zero-error VLF communication over a special kind of DMC—the so-called Z-channel—see also [14].

In the following statement we identify generalizations of the notion of disprover for state-dependent channels. The idea is that, in each case, there is a disprover output that certain inputs cannot produce in the relevant states, so that observing this output gives the receiver error-free information about which message was sent.

###### Theorem 2.

Necessary and sufficient conditions for positivity of the zero-error VLF capacity of SD-DMCs are as follows:

• $C^{-,-}_{0,\mathrm{f},\textsc{vl}} > 0$ if and only if there is a pair $(x, y)$ such that $y$ disproves $x$ in all states:

$$\exists x \in \mathcal{X}, y \in \mathcal{Y}\ \forall s \in \mathcal{S} \quad W(y|x,s) = 0. \tag{10}$$

• $C^{\mathrm{sc},-}_{0,\mathrm{f},\textsc{vl}} > 0$ if and only if (10) holds.

• $C^{\mathrm{c},-}_{0,\mathrm{f},\textsc{vl}} > 0$ if and only if there is an output letter which is a disprover in each state:

$$\exists y \in \mathcal{Y}\ \forall s \in \mathcal{S}\ \exists x \in \mathcal{X} \quad W(y|x,s) = 0. \tag{11}$$

• $C^{\mathrm{nc},-}_{0,\mathrm{f},\textsc{vl}} > 0$ if and only if (11) holds.

• $C^{\mathrm{sc},\mathrm{c}}_{0,\mathrm{f},\textsc{vl}} > 0$ if and only if there is at least one state with a disprover:

$$\exists x, x' \in \mathcal{X},\ y \in \mathcal{Y},\ s \in \mathcal{S} \quad W(y|x,s) = 0 \ \wedge\ W(y|x',s) > 0. \tag{12}$$

• $C^{\mathrm{c},\mathrm{c}}_{0,\mathrm{f},\textsc{vl}} > 0$ if and only if (12) holds.

• $C^{\mathrm{nc},\mathrm{c}}_{0,\mathrm{f},\textsc{vl}} > 0$ if and only if (12) holds.

• $C^{\mathrm{nc},\mathrm{nc}}_{0,\mathrm{f},\textsc{vl}} > 0$ if and only if (12) holds.

###### Proof:
(a) In the case when neither side has any state information, the channel is equivalent to the DMC $\overline{W}(y|x) = \sum_{s \in \mathcal{S}} Q(s) W(y|x,s)$. The condition (10) is precisely the condition (9) for positivity of the zero-error VLF capacity of this DMC: since $Q(s) > 0$ for all $s$ by our assumption (1), $\overline{W}(y|x) = 0$ if and only if $W(y|x,s) = 0$ in every state. (b) Let $\sigma = (\mathrm{sc}, -)$. Since $C^{\mathrm{sc},-}_{0,\mathrm{f},\textsc{vl}} \geq C^{-,-}_{0,\mathrm{f},\textsc{vl}}$, we only need to show the “only if” part. So suppose that (10) is not satisfied, meaning that for every input-output pair $(x, y)$ there exists a state $s_{x,y}$ where $W(y|x,s_{x,y}) > 0$. Then every output sequence $y^n$ can be produced by every input sequence $x^n$ with positive probability (if the state sequence happens to be $s_{x_1,y_1}, \ldots, s_{x_n,y_n}$). This means that the decoder cannot decide with certainty at any point in time what the transmitted message was. Therefore, zero-error communication in a finite average number of channel uses is impossible. The main point in this argument, stated informally, is that the encoder cannot avoid the “bad” states because it only sees the past ones, i.e., no matter which letter it chooses to send in the current time slot, the channel state can always be such that the decoder remains confused. (c) Now consider the case $\sigma = (\mathrm{c}, -)$. We first prove sufficiency of (11) using a procedure analogous to the one for DMCs outlined in the paragraph following (9). Let $y$ be an output letter claimed to exist in (11). For every $s \in \mathcal{S}$ choose an input letter $x_s$ such that $W(y|x_s, s) = 0$. Also, for every $s \in \mathcal{S}$ define $x'_s$ to be some input letter with $W(y|x'_s, s) > 0$, if it exists; otherwise, pick $x'_s$ to be any letter different from $x_s$ (note that a letter with $W(y|x, s) > 0$ exists for at least one state $s$; see (2)). If the states realized in the first two time slots are $s_1, s_2$, the transmitter sends $x_{s_1} x'_{s_2}$ for $0$ and $x'_{s_1} x_{s_2}$ for $1$ (the transmitter knows the channel state before it sends a letter). Since $W(y|x_s, s) = 0$, if the letters obtained at the output are $y\,{\ast}$, the receiver concludes that $1$ must have been transmitted; if the letters obtained at the output are ${\ast}\,y$ (the first letter different from $y$), then $0$ must have been transmitted; if neither output letter is $y$, the procedure is repeated in the next two slots, and so on.
In this way the receiver will recover the transmitted bit in a finite expected number of channel uses (each round succeeds with positive probability because $Q(s) W(y|x'_s, s) > 0$ for at least one state $s$), implying that $C^{\mathrm{c},-}_{0,\mathrm{f},\textsc{vl}} > 0$. To prove the converse, suppose that (11) does not hold, i.e., for every output letter $y$ there exists a state $s_y$ such that $W(y|x, s_y) > 0$ for all input letters $x$. Then for any output sequence $y^n$ the state sequence $s_{y_1}, \ldots, s_{y_n}$ produces $y^n$ with positive probability on any input $x^n$. This means that the decoder cannot be certain, at any time instant $n$, what was the transmitted message, and hence zero-error communication in a finite average number of channel uses is impossible. (d) The above proof for the case $\sigma = (\mathrm{c}, -)$ goes through for the case $\sigma = (\mathrm{nc}, -)$ as well. (e) Let $\sigma = (\mathrm{sc}, \mathrm{c})$. Suppose that (12) holds and let $x, x'$ be input letters, $y$ an output letter, and $s$ a state satisfying $W(y|x,s) = 0$, $W(y|x',s) > 0$. Let also $x''$ be an arbitrary input letter. In the first three channel uses the transmitter sends $x\,x'\,x''$ for $0$ and $x'\,x\,x''$ for $1$. (The third letter is irrelevant and will be ignored by the receiver. It is sent only because state information is delivered to the transmitter with a one-slot delay, so in order for it to learn the states in the first two slots, a dummy letter is transmitted in the third slot.) Now, if the output letter in the second slot is $y$ and the channel state in that slot is $s$, the receiver concludes that $0$ must have been transmitted (the receiver can see the states); if the output letter in the first slot is $y$ and the channel state in that slot is $s$, then $1$ must have been transmitted; if none of the above two situations occurred, the procedure is repeated. In a finite expected number of channel uses one of the above two situations will happen and the receiver will recover the transmitted bit, implying that $C^{\mathrm{sc},\mathrm{c}}_{0,\mathrm{f},\textsc{vl}} > 0$. For the converse, notice that if (12) is not satisfied, then in every state an arbitrary output is reachable from either all inputs, or from none of them. Clearly, any such state is useless for zero-error communication.
(f)–(h) The proof of (e) goes through in these cases as well (the proof of achievability can in fact be slightly simplified here—there is no need for sending the dummy letter). We should note that, for these models of state information availability, the necessity and sufficiency of (12) also follows from the results in [4], where the error exponent of finite-state ergodic Markov channels with causal state information at both sides has been characterized. Namely, the error exponent of an SD-DMC with causal state information at both sides is infinite at all rates below capacity if and only if (12) holds. ∎
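The three positivity conditions just established are finite first-order predicates and can be checked mechanically. A sketch (hypothetical helper names), with $W(y, x, s)$ the channel law over finite alphabets:

```python
def cond_10(X, Y, S, W):
    # (10): some pair (x, y) with y a disprover for x in every state
    return any(all(W(y, x, s) == 0 for s in S) for x in X for y in Y)

def cond_11(X, Y, S, W):
    # (11): some output y that is a disprover in each state
    # (for a possibly state-dependent input)
    return any(all(any(W(y, x, s) == 0 for x in X) for s in S) for y in Y)

def cond_12(X, Y, S, W):
    # (12): some state with a disprover: W(y|x,s) = 0 while W(y|x',s) > 0
    return any(W(y, x, s) == 0 and W(y, xp, s) > 0
               for s in S for y in Y for x in X for xp in X)

# Example channel: Z-channel in state 's1', noiseless channel in 's2'.
table = {('s1', 0): (1.0, 0.0), ('s1', 1): (0.5, 0.5),
         ('s2', 0): (1.0, 0.0), ('s2', 1): (0.0, 1.0)}
W = lambda y, x, s: table[(s, x)][y]
print(cond_10([0, 1], [0, 1], ['s1', 's2'], W))
```

For this example all three conditions hold, since output $1$ disproves input $0$ in both states.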

###### Remark 2 (Shannon strategy).

The usual way of proving achievability results for channels with causal state information at the encoder, $\sigma = (\mathrm{c}, -)$, is via the so-called Shannon strategy [13, 5]. In this approach, one considers the set $\mathcal{U}$ of all functions from $\mathcal{S}$ to $\mathcal{X}$ and a related DMC $W'$ with input alphabet $\mathcal{U}$ and output alphabet $\mathcal{Y}$ defined by $W'(y|u) = \sum_{s \in \mathcal{S}} Q(s) W(y|u(s), s)$. A code is then defined over the alphabet $\mathcal{U}$ and the communication over the original SD-DMC proceeds as follows: if the channel state in the current—$i$'th—time slot is $s_i$, and the $i$'th symbol of the codeword is $u_i$, then the transmitter sends $u_i(s_i)$. It was shown by Shannon that this strategy achieves the capacity of SD-DMCs with causal state information at the encoder [13]. The strategy also achieves the zero-error capacity under fixed-length coding [2]. Thus, in these settings, an SD-DMC with $\sigma = (\mathrm{c}, -)$ is essentially equivalent to a stateless DMC $W'$.

We wish to point out here that optimality of the Shannon strategy continues to hold in the zero-error VLF setting as well. To see this, recall that a necessary and sufficient condition for positivity of the zero-error VLF capacity of the DMC $W'$ is the existence of an input $u \in \mathcal{U}$ and an output $y \in \mathcal{Y}$ such that $W'(y|u) = 0$ (see (9)). This condition is equivalent to (11) because:

$$\exists u \in \mathcal{U} \quad W'(y|u) = 0 \iff \exists u \in \mathcal{U}\ \forall s \in \mathcal{S} \quad W(y|u(s), s) = 0 \tag{13}$$

$$\phantom{\exists u \in \mathcal{U} \quad W'(y|u) = 0} \iff \forall s \in \mathcal{S}\ \exists x \in \mathcal{X} \quad W(y|x, s) = 0, \tag{14}$$

where (13) follows from the definition of $W'$ (and the fact that $Q(s) > 0$ for all $s$), and (14) holds because $\mathcal{U}$ contains all functions from $\mathcal{S}$ to $\mathcal{X}$. Notice that the condition in (14) is precisely (11).
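The construction of $W'$ and the equivalence (13)–(14) can be verified exhaustively on small alphabets; a sketch (hypothetical helper names, with functions $u\colon \mathcal{S} \to \mathcal{X}$ encoded as tuples indexed by the position of the state in the list `S`):

```python
from itertools import product

def shannon_strategy_dmc(X, Y, S, W, Q):
    """Build the Shannon-strategy DMC: W'(y|u) = sum_s Q(s) W(y|u(s), s)."""
    U = list(product(X, repeat=len(S)))   # all maps u: S -> X
    Wp = {u: {y: sum(Q[s] * W(y, u[i], s) for i, s in enumerate(S))
              for y in Y}
          for u in U}
    return U, Wp

def check_equivalence(X, Y, S, W, Q):
    """(13)-(14): some u with W'(y|u) = 0  <=>  each state has an input
    that cannot produce y."""
    U, Wp = shannon_strategy_dmc(X, Y, S, W, Q)
    return all(any(Wp[u][y] == 0 for u in U)
               == all(any(W(y, x, s) == 0 for x in X) for s in S)
               for y in Y)

# Example channel: Z-channel in state 's1', noiseless channel in 's2'.
table = {('s1', 0): (1.0, 0.0), ('s1', 1): (0.5, 0.5),
         ('s2', 0): (1.0, 0.0), ('s2', 1): (0.0, 1.0)}
W = lambda y, x, s: table[(s, x)][y]
Q = {'s1': 0.5, 's2': 0.5}
print(check_equivalence([0, 1], [0, 1], ['s1', 's2'], W, Q))
```

Here the strategy $u = (0, 0)$ (send $0$ in both states) satisfies $W'(1|u) = 0$, matching the fact that input $0$ cannot produce output $1$ in either state.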

###### Remark 3.

Note that the condition (11) for positivity of the zero-error VLF capacity is the same for causal and non-causal state information at the transmitter. This is not the case in the fixed-length and bounded-length settings; see [2] and Theorem 5 ahead.

Likewise, the condition (12) states that the zero-error VLF capacities for the $(\mathrm{sc}, \mathrm{c})$ and $(\mathrm{c}, \mathrm{c})$ cases are either both positive or both zero. This is not the case when fixed-length or bounded-length codes are being used; see Theorem 5.

We next characterize the value of the zero-error VLF capacity of SD-DMCs. The statement is that, whenever this quantity is positive, it equals the vanishing-error capacity of the same channel. The analogous result for DMCs is known and can be inferred from Burnashev’s characterization of the error exponent of such channels under VLF coding (as mentioned above, the error exponent is infinite at all rates below capacity if and only if (9) holds). However, there is a simpler and more direct way of proving this statement which can be extended to channels for which error exponents are not known. One such proof was given by Han and Sato for DMCs and bounded-length codes [7], and virtually no changes are required to extend it to the setting we are interested in.

###### Theorem 3.

Let $\sigma \in \mathrm{SI}$. If $C^{\sigma}_{0,\mathrm{f},\textsc{vl}} > 0$, then $C^{\sigma}_{0,\mathrm{f},\textsc{vl}} = C^{\sigma}_{\downarrow}$.

###### Proof:

Obviously, $C^{\sigma}_{0,\mathrm{f},\textsc{vl}} \leq C^{\sigma}_{\downarrow}$. To demonstrate that the reverse inequality also holds whenever $C^{\sigma}_{0,\mathrm{f},\textsc{vl}} > 0$, we shall use a slight modification of the Han–Sato coding scheme [7]. Assume that $C^{\sigma}_{0,\mathrm{f},\textsc{vl}} > 0$. Let $\mathcal{M}$ be the message set. Let $\mathcal{C}$ be a fixed-length feedback code for this message set having the following properties: length $n$, rate $\frac{\log |\mathcal{M}|}{n} \geq C^{\sigma}_{\downarrow} - \delta$ with $\delta > 0$, and error probability at most $\epsilon$ (we know that such a code exists for any $\delta, \epsilon > 0$ and for $n$ large enough). Let $\mathcal{C}_0$ be a zero-error VLF code for the same message set having the following properties: rate at least $R_0$, for some $R_0 > 0$, and average length $\ell_0$ with $\ell_0 < \infty$. Based on the codes $\mathcal{C}$ and $\mathcal{C}_0$ we shall devise another variable-length zero-error coding scheme of rate arbitrarily close to $C^{\sigma}_{\downarrow}$, which will prove the desired claim. The communication protocol is as follows. To send a message $m \in \mathcal{M}$, the transmitter first sends the corresponding codeword from $\mathcal{C}$. Depending on whether or not the receiver has decoded the received sequence correctly, something that the transmitter knows because it can simulate the decoding process after receiving feedback, the transmitter then sends one bit of information through the channel. This bit has the meaning of an ack/nack signal that informs the receiver about the correctness of decoding, and can be transmitted error free in a finite expected number of channel uses because the zero-error capacity is positive by assumption. Now, if the sent signal is ack, meaning that the decoding was correct and that both the transmitter and the receiver are aware of that, the protocol stops. If on the other hand the signal was nack, meaning that the decoding was incorrect, the transmitter sends the same message again, but this time it encodes the message using the zero-error code $\mathcal{C}_0$, rather than the code $\mathcal{C}$. This ensures that the receiver will decode the received sequence correctly with probability $1$, and the coding scheme just described is therefore zero-error.
Moreover, the overall rate of the scheme is approximately equal to the rate of the code $\mathcal{C}$ used in the first phase of the protocol, because the second phase of the protocol is active only with probability at most $\epsilon$ (the probability that the transmission in the first phase fails), and this can be made arbitrarily small. ∎
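The rate accounting behind the last step can be made explicit with a back-of-the-envelope computation (all numbers below are illustrative assumptions, not values from the paper): phase 1 spends $n$ uses at rate $R \approx C_{\downarrow}$, the error-free ack/nack bit costs $a$ uses on average, and with probability $\epsilon$ the message is re-sent with a slow zero-error code of average length $\ell_0$.

```python
def expected_rate(n, R, a, eps, l0):
    """Overall rate of the two-phase scheme: (log M) / E[total length]."""
    expected_len = n + a + eps * l0   # phase 1 + ack/nack + rare phase 2
    return n * R / expected_len

# As n grows and eps shrinks, the overhead vanishes and the rate tends to R:
r = expected_rate(n=10_000, R=0.5, a=4, eps=1e-3, l0=100_000)
print(r)   # close to, but below, 0.5
```

Even with a retransmission code $10\times$ longer than the first-phase code, an error probability of $10^{-3}$ keeps the expected overhead, and hence the rate loss, near one percent.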

We conclude this section with a theorem that is meant to demonstrate the power of variable-length coding compared to fixed-length coding in channels with feedback—with fixed-length codes, information obtained by the transmitter through the feedback link is not fully utilized.

###### Theorem 4.

There exists an SD-DMC for which $C^{\mathrm{nc},\mathrm{nc}}_{0,\mathrm{f},\textsc{fl}} = 0$ and yet $C^{-,-}_{0,\mathrm{f},\textsc{vl}} > 0$.

###### Proof:

Consider the following binary-input binary-output channel with two states: in state $s_1$ the channel is the so-called Z-channel, with $W(0|0,s_1) = 1$ and $W(0|1,s_1), W(1|1,s_1) > 0$, and in state $s_2$ the channel is noiseless, $W(0|0,s_2) = W(1|1,s_2) = 1$.

Zero-error communication with fixed-length feedback codes through this channel is not possible, even if both the transmitter and the receiver have non-causal state information. This is because the state sequence may happen to be $s_1 s_1 \cdots s_1$, in which case every two input sequences of length $n$ are confusable, meaning that they can produce the same output sequence with positive probability. Hence, $C^{\mathrm{nc},\mathrm{nc}}_{0,\mathrm{f},\textsc{fl}} = 0$.

However, zero-error communication with variable-length feedback codes is possible even if neither the transmitter nor the receiver has any state information, as one can verify from (10) ($y = 1$ is a disprover for $x = 0$ in both states). In fact, not only is it possible, but the zero-error VLF capacity is by Theorem 3 equal to the vanishing-error capacity of the corresponding channel, which is positive. ∎
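Both halves of the argument can be sanity-checked on a concrete encoding of the example (a sketch with hypothetical names; the Z-channel crossover value $1/2$ below is an assumption for illustration):

```python
# Rows indexed by (state, input); entries indexed by output:
# W[(s, x)][y] = W(y|x,s). State 's1' is a Z-channel, 's2' is noiseless.
W = {('s1', 0): (1.0, 0.0), ('s1', 1): (0.5, 0.5),
     ('s2', 0): (1.0, 0.0), ('s2', 1): (0.0, 1.0)}

# In state s1 the two inputs are confusable (both can produce output 0),
# so an all-s1 state sequence defeats any fixed-length zero-error code.
confusable_in_s1 = any(W[('s1', 0)][y] > 0 and W[('s1', 1)][y] > 0
                       for y in (0, 1))

# Yet condition (10) holds: output 1 disproves input 0 in both states,
# which is what makes the variable-length protocol work.
disprover_in_all_states = all(W[(s, 0)][1] == 0 for s in ('s1', 's2'))

print(confusable_in_s1, disprover_in_all_states)
```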

## IV Zero-Error Capacity: Bounded-Length Codes

In the previous section we have demonstrated how variable-length encoding can significantly increase the zero-error feedback capacity of an SD-DMC. We now investigate the same problem in the situation where one wishes to impose a fixed and deterministic upper bound on the codeword lengths, or equivalently on the stopping time of transmission. Variable-length codes in general have no such bound—even though their average length is finite, each message is mapped to possibly infinitely many codewords of different lengths, which means that the decoding delay can in general be arbitrarily large. It is therefore natural, especially from the practical point of view, to consider the case where the duration of transmission is upper bounded and to investigate the corresponding fundamental limits.

Zero-error feedback capacity of DMCs under bounded-length coding was first studied by Han and Sato [7]. In particular, it was shown in [7] that the condition for positivity of $C_{0,\mathrm{f},\textsc{bl}}$ is the same as in the fixed-length case (with or without feedback) [12], namely:

$$\exists x, x' \in \mathcal{X}\ \forall y \in \mathcal{Y} \quad W(y|x)\, W(y|x') = 0. \tag{15}$$

In words, the zero-error feedback capacity of a DMC under bounded-length (or fixed-length) coding is positive if and only if there exist two non-confusable input letters.

In the following theorem we state the corresponding conditions for the positivity of the zero-error feedback capacity of SD-DMCs. As in the case of DMCs, the conditions for bounded-length coding are the same as those for fixed-length coding; however, the values of the zero-error capacities in the two settings are in general different (see Theorem 6 below).

###### Theorem 5.

For every $\sigma \in \mathrm{SI}$, $C^{\sigma}_{0,\mathrm{f},\textsc{bl}} > 0$ if and only if $C^{\sigma}_{0,\mathrm{f},\textsc{fl}} > 0$. In particular:

• $C^{-,-}_{0,\mathrm{f},\textsc{bl}} > 0$ if and only if

$$\exists x, x' \in \mathcal{X}\ \forall y \in \mathcal{Y} \quad \big(\forall s \in \mathcal{S}\ W(y|x,s) = 0\big) \vee \big(\forall s \in \mathcal{S}\ W(y|x',s) = 0\big). \tag{16}$$

• $C^{\mathrm{sc},-}_{0,\mathrm{f},\textsc{bl}} > 0$ if and only if (16) holds.

• $C^{\mathrm{c},-}_{0,\mathrm{f},\textsc{bl}} > 0$ if and only if there exists a partition $\{\mathcal{Y}_0, \mathcal{Y}_1\}$ of $\mathcal{Y}$ such that

$$\forall s \in \mathcal{S}\ \exists x, x' \in \mathcal{X} \quad W(\mathcal{Y}_0|x,s) = W(\mathcal{Y}_1|x',s) = 1. \tag{17}$$

• $C^{\mathrm{nc},-}_{0,\mathrm{f},\textsc{bl}} > 0$ if and only if

$$\forall s, s' \in \mathcal{S}\ \exists x, x' \in \mathcal{X}\ \forall y \in \mathcal{Y} \quad W(y|x,s)\, W(y|x',s') = 0. \tag{18}$$

• $C^{\mathrm{sc},\mathrm{c}}_{0,\mathrm{f},\textsc{bl}} > 0$ if and only if there exist two input letters that are non-confusable in each state:

$$\exists x, x' \in \mathcal{X}\ \forall y \in \mathcal{Y}, s \in \mathcal{S} \quad W(y|x,s)\, W(y|x',s) = 0. \tag{19}$$

• $C^{\mathrm{c},\mathrm{c}}_{0,\mathrm{f},\textsc{bl}} > 0$ if and only if in each state there exist two non-confusable input letters:

$$\forall s \in \mathcal{S}\ \exists x, x' \in \mathcal{X}\ \forall y \in \mathcal{Y} \quad W(y|x,s)\, W(y|x',s) = 0. \tag{20}$$

• $C^{\mathrm{nc},\mathrm{c}}_{0,\mathrm{f},\textsc{bl}} > 0$ if and only if (20) holds.

• $C^{\mathrm{nc},\mathrm{nc}}_{0,\mathrm{f},\textsc{bl}} > 0$ if and only if (20) holds.

###### Proof:

We first show that the fixed-length and bounded-length zero-error feedback capacities are either both positive or both zero. Since $C^{\sigma}_{0,\mathrm{f},\textsc{fl}} \leq C^{\sigma}_{0,\mathrm{f},\textsc{bl}}$, the “if” part is clear. Conversely, suppose that $C^{\sigma}_{0,\mathrm{f},\textsc{bl}} > 0$. Then there exists a zero-error code of cardinality at least $2$, finite average length, and maximum length $L < \infty$. By “zero-padding” the codewords we can then construct a fixed-length zero-error code of length $L$ having the same cardinality, which implies that $C^{\sigma}_{0,\mathrm{f},\textsc{fl}} > 0$.

By the observation from the previous paragraph, we can focus on fixed-length codes in the rest of the proof.

(a)–(d) The conditions (16)–(18) for positivity of $C^{\sigma}_{0,\mathrm{f},\textsc{fl}}$ in the cases when only the transmitter has state information were derived in [2, Thm 3, Thm 10, Rem. 17]. Note that (16) is the condition for positivity of the zero-error fixed-length feedback capacity of the DMC $\overline{W}(y|x) = \sum_s Q(s) W(y|x,s)$; see (15). (e) The case $\sigma = (\mathrm{sc}, \mathrm{c})$ is solved by using the standard technique of treating the output and the state as a joint output of the DMC $V(y, s'|x) = Q(s') W(y|x, s')$ with noiseless feedback. (Before the $i$'th channel use the transmitter obtains $y^{i-1}$ through feedback and $s^{i-1}$ as side information, which is equivalent to saying that it obtains the previous outputs of $V$, namely $(y^{i-1}, s^{i-1})$, through feedback.) The condition (19) is the condition (15) for positivity of the zero-error feedback capacity of this DMC. It is therefore easy to see that this condition is sufficient for $C^{\mathrm{sc},\mathrm{c}}_{0,\mathrm{f},\textsc{fl}} > 0$. To show that (19) is also necessary, suppose for the sake of contradiction that every two input letters $x, x'$ are confusable in some state $s_{x,x'}$. Then, for every $n$, every two input sequences $x^n$, $\tilde{x}^n$ can produce the same channel output if the state sequence realized in the channel happens to be $s_{x_1,\tilde{x}_1}, \ldots, s_{x_n,\tilde{x}_n}$, meaning that error-free communication with fixed-length codes is impossible. Stated informally, since the transmitter does not know the current state, no matter which letter $x$ it chooses to send in the current time slot there is a chance that the state will be one in which $x$ is confusable with another letter $x'$. Thus, error-free communication cannot be guaranteed, no matter how long the codewords are. (f) Consider now the case $\sigma = (\mathrm{c}, \mathrm{c})$. If for every state $s$ there exists a pair of non-confusable inputs $x_s, x'_s$, which is what the condition (20) means, then the transmitter and the receiver can agree beforehand that, in state $s$, $x_s$ means $0$ and $x'_s$ means $1$. In this way, one bit can be transmitted error free in one channel use and so $C^{\mathrm{c},\mathrm{c}}_{0,\mathrm{f},\textsc{fl}} > 0$. Conversely, if (20) is not satisfied, then there exists a state $s^*$ for which every two inputs are confusable.
If this is the case, then for the state sequence $s^* s^* \cdots s^*$ it is not possible to transmit even one bit error free in any number of channel uses $n$, and hence $C^{\mathrm{c},\mathrm{c}}_{0,\mathrm{f},\textsc{fl}} = 0$. (g), (h) The previous argument holds in these cases too. Namely, knowing the future states cannot help the encoder/decoder if these states remain unfavorable throughout the entire transmission. ∎
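The distinction between (19) (one pair that works in every state) and (20) (a possibly different pair per state) can be illustrated mechanically; a sketch with hypothetical helper names, where $W(y, x, s)$ is the channel law over finite alphabets:

```python
def non_confusable(x, xp, s, Y, W):
    # x and x' share no reachable output in state s
    return all(W(y, x, s) * W(y, xp, s) == 0 for y in Y)

def cond_19(X, Y, S, W):
    # (19): one pair non-confusable in *every* state
    # (the encoder does not know the current state)
    return any(all(non_confusable(x, xp, s, Y, W) for s in S)
               for x in X for xp in X if x != xp)

def cond_20(X, Y, S, W):
    # (20): every state has *some* non-confusable pair
    # (the encoder knows the current state)
    return all(any(non_confusable(x, xp, s, Y, W)
                   for x in X for xp in X if x != xp)
               for s in S)

# A channel where (20) holds but (19) fails: the "good" pair changes
# with the state.
tbl = {('s1', 0): (1.0, 0.0), ('s1', 1): (0.0, 1.0), ('s1', 2): (0.5, 0.5),
       ('s2', 0): (0.5, 0.5), ('s2', 1): (1.0, 0.0), ('s2', 2): (0.0, 1.0)}
W = lambda y, x, s: tbl[(s, x)][y]
print(cond_20([0, 1, 2], [0, 1], ['s1', 's2'], W),
      cond_19([0, 1, 2], [0, 1], ['s1', 's2'], W))
```

In state $s_1$ the pair $(0, 1)$ is non-confusable, in state $s_2$ the pair $(1, 2)$ is, but no single pair works in both states, matching the gap between cases (e) and (f) of the theorem.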

As in the (unbounded) variable-length case, whenever the zero-error feedback capacity under bounded-length coding is positive, it equals the vanishing-error capacity of the same channel.

###### Theorem 6.

Let . If , then .

###### Proof:

The proof is analogous to the proof of Theorem 3 for the variable-length case; the only difference is that the zero-error code that is used in the second phase of the protocol is now required to be of bounded length. ∎

## V State Information at the Decoder Only

In this section we discuss SD-DMCs with state information available only at the decoder, the case that has been left out of the discussion thus far. As pointed out in Remark 1, in this channel model it is not quite clear how to define the code length for variable-length codes, i.e., the stopping time of the transmission (see Definition 1). Since the decoder makes its decision based on the outputs and states, the encoder, not knowing the states, is unable to exactly simulate the decoding process and determine the moment when the decision has been made. It can only estimate this moment based on the outputs it obtains through feedback. As we shall see, this estimate is good enough when one considers coding with asymptotically vanishing error probability (in fact, it is not even necessary, as the vanishing-error capacity can be achieved with fixed-length codes, with or without feedback). However, in the case of zero-error communication the encoder has to be certain that the decoding was successful before it stops transmitting a given message and starts transmitting the next one. It is in this case that the effects of the mismatch in state information at the two sides are most apparent.

### V-a Vanishing-Error Capacity

We know from [5, Ch. 7.4] and (7) that:

 C_{-,c\downarrow,-,\textsc{fl}} = \max_{P_X} I(X;Y|S) = C_{sc,c\downarrow,f,\textsc{vl}}.   (21)

Note that the issue with the stopping time mentioned above does not arise when defining the two capacities in (21): for the first because in the fixed-length setting the stopping time is fixed in advance, and for the second because in that case both sides have state information.

The quantity we are interested in here, , has not been formally defined. However, it is clear that, for any reasonable definition, the following chain of inequalities must hold: . From this and (21) we then conclude that

 C_{-,c\downarrow,f,\textsc{vl}} = C_{-,c\downarrow,f,\textsc{bl}} = \max_{P_X} I(X;Y|S) = C_{sc,c\downarrow}.   (22)

In particular, the VLF capacity of the channel can be achieved by using fixed-length codes.
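As a small numerical companion to (21)–(22), the following sketch evaluates max over P_X of I(X;Y|S) for a binary-input channel by grid search over P_X. The transition-array layout `W[s][x][y]`, the uniform state prior, and all function names are assumptions for illustration only.

```python
# Hedged sketch: grid search for max_{P_X} I(X;Y|S) with a binary input.
import math

def mutual_information(P_x, Q):
    """I(X;Y) in bits for input distribution P_x and transition matrix Q[x][y]."""
    P_y = [sum(P_x[x] * Q[x][y] for x in range(len(P_x)))
           for y in range(len(Q[0]))]
    I = 0.0
    for x in range(len(P_x)):
        for y in range(len(Q[x])):
            if P_x[x] > 0 and Q[x][y] > 0:
                I += P_x[x] * Q[x][y] * math.log2(Q[x][y] / P_y[y])
    return I

def conditional_capacity(W, P_s, steps=1000):
    """max over binary P_X of I(X;Y|S) = sum_s P(s) I(X;Y|S=s), by grid search."""
    best = 0.0
    for i in range(steps + 1):
        p = i / steps
        P_x = [p, 1 - p]
        val = sum(P_s[s] * mutual_information(P_x, W[s]) for s in range(len(W)))
        best = max(best, val)
    return best

# Channel of Example 1 below: one state flips the bit, the other is noiseless;
# each state is a deterministic invertible channel, so I(X;Y|S) = H(X).
W = [
    [[0.0, 1.0], [1.0, 0.0]],  # flipping state: y = 1 - x
    [[1.0, 0.0], [0.0, 1.0]],  # noiseless state: y = x
]
print(round(conditional_capacity(W, [0.5, 0.5]), 3))  # 1.0
```

The grid search stands in for a proper convex optimization (e.g., Blahut–Arimoto); for a binary input it suffices to scan the single free parameter of P_X.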

### V-B Zero-Error Capacity: Bounded-Length Codes

We now turn to the zero-error problems and start with the bounded-length case. The following theorem gives a necessary and sufficient condition for positivity of the zero-error capacity in this setting.

###### Theorem 7.

The following statements are equivalent: (a) ; (b) ; (c) ; (d) (19) holds.

###### Proof:
(a) ⇒ (b): This can be shown by “zero-padding” bounded-length codes to obtain fixed-length codes, as for the other models of state information availability (see Theorem 5). (b) ⇒ (c): This follows from . (c) ⇒ (d): This was shown in Theorem 5(e). (d) ⇒ (a): Suppose that (d) holds, i.e., there exist two input letters that are non-confusable in every state. Then, if the transmitter uses one of these letters to encode 0 and the other to encode 1, the receiver will be able to tell from the output which of the two possibilities is the correct one because it knows the channel state. Therefore, one bit can be transmitted in one channel use, and hence . ∎

We now know from Theorem 7 that , from Theorem 6 that whenever , and from (22) that . It is then natural to ask whether it is also true that whenever . The corresponding equality for the other models of state information availability has been established in Theorem 6, but the proof technique used there does not apply when . The difficulty is precisely the one mentioned at the beginning of this section: in this model the transmitter cannot exactly simulate the decoding process because it does not know the states. Hence, the transmitter is in general unable to decide with certainty whether the receiver has decoded the received sequence correctly, and thus whether it should retransmit the same codeword or start sending the next one. Therefore, for it is not clear whether the vanishing-error capacity can always be achieved with zero-error bounded-length codes. We next give an example of a channel for which the answer to this question is affirmative.

###### Example 1.

Consider the following binary-input binary-output channel with two states: in one state the channel flips the input bit, and in the other it leaves the bit intact.

Suppose that the transmitter sends a bit in a given slot and observes, through the feedback link, the bit produced at the channel output. It can then easily determine the state in that slot: the channel flipped the bit if and only if the output differs from the input. This means that the transmitter effectively obtains (strictly causal) state information through feedback and therefore , where the second equality holds because (see (19)).
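The inference in Example 1 can be sketched as a minimal simulation, under the assumption that a state `FLIP` inverts the bit and a state `KEEP` does not (illustrative names, not from the paper):

```python
import random

# Hedged sketch of Example 1: the transmitter recovers each past state
# from the fed-back output, since y != x iff the state was the flipping one.
FLIP, KEEP = 0, 1  # FLIP inverts the input bit, KEEP leaves it intact

def channel(x, s):
    """Deterministic binary channel of Example 1."""
    return 1 - x if s == FLIP else x

rng = random.Random(0)
for _ in range(1000):
    x = rng.randint(0, 1)
    s = rng.choice([FLIP, KEEP])
    y = channel(x, s)
    inferred = FLIP if y != x else KEEP  # transmitter's inference from feedback
    assert inferred == s
print("state always recoverable from (x, y)")
```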

The main point in Example 1 is the following: if the states are uniquely determined by the channel inputs and outputs, then the problem of calculating is reduced to the (easier) problem of calculating , which was solved in Theorems 5(e) and 6. Whether such a reduction is possible in general is an interesting question that we leave for future work.

We end this subsection with a remark that the problem of determining the value of is related to the problem of determining the zero-error capacity under bounded-length coding of DMCs with noisy feedback. (The latter problem is also unsolved except in some special cases; see [9, 1].) Namely, as already noted in the proof of Theorem 5(e), the standard trick for dealing with state information at the decoder is to think of the state as part of a joint channel output of a DMC. Noiseless feedback in this DMC would mean that the transmitter is given the joint output, i.e., both the channel output and the state, before the ’th channel use. This implies that an SD-DMC with and noiseless feedback is equivalent to this DMC with noiseless feedback. However, in the case studied here the equivalence fails, as the transmitter obtains through the feedback link only a “degraded” version of the joint output, namely the channel output without the state.

### V-C Zero-Error Capacity: Variable-Length Codes

For variable-length codes, we can at this point only give a sufficient condition for positivity of the zero-error capacity. The idea behind this condition is that, for some channels, the transmitter obtains state information “for free” through the feedback link, in which case the model is effectively reduced to the model (see Example 1). As we show in Theorem 8, it is in fact not necessary that all states be uniquely determined by the inputs and outputs in order for this reduction to work: it is enough that there exists a group of states that contains a “disprover” and that is discernible from the other states with positive probability.

###### Theorem 8.

A sufficient condition for is the following:

 \exists\, x, x' \in \mathcal{X},\; y \in \mathcal{Y},\; S^* \subseteq \mathcal{S},\; S^* \neq \emptyset:\quad \big(\forall s \in S^*\;\; W(y|x',s) > 0 \,\wedge\, W(y|x,s) = 0\big) \,\wedge\, \big(\forall s \in \mathcal{S} \setminus S^*\;\; W(y|x',s) = 0\big).   (23)
###### Proof:

Let us first parse the condition (23). The meaning of the first clause is the same as before: one input letter is a disprover for the other within the group of states . Further, we require the existence of an event that has positive probability in this group of states but is impossible in the other states. The occurrence of this event can serve the transmitter as an identifier of the group.

Now, assuming that (23) holds, one bit of information can be sent with zero error as follows. In the first two channel uses the transmitter sends for and for 1. If the letters obtained at the output are and the state in the second slot is from , then the receiver concludes that must have been sent. The reason is the following: since the received symbol in the second slot is and the state is from (the receiver can see the states), the transmitted symbol must have been , and then it automatically follows that the symbol sent in the first slot is . Furthermore, the transmitter is also assured that the state in the second slot is from and that the receiver has received the bit correctly because the transition is only possible in states from . Similarly, if the letters obtained at the output are and the state in the first slot is from , then the receiver concludes that must have been sent, and the transmitter is assured that the receiver has received the bit correctly. In summary, if the channel output is either or and the state in the slot in which the output is is from , then the protocol stops and both parties agree on the value of the transmitted bit. If any other situation occurs, the procedure is repeated. The expected number of steps needed to complete the transmission of the bit is finite because the event that both parties are waiting for ( produces in a state from ) has positive probability. Therefore, . ∎
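The two-slot protocol in this proof can be sketched for a toy channel satisfying (23) with x = 0, x' = 1, identifying output y = 1, and S* = {0}; all concrete values and names below are illustrative assumptions, not taken from the paper.

```python
import random

# Toy channel meeting (23): W[s][x] lists P(Y=0), P(Y=1) in state s for input x.
W = [
    [[1.0, 0.0], [0.5, 0.5]],  # state 0 (in S*): W(1|x',0) = 0.5 > 0, W(1|x,0) = 0
    [[1.0, 0.0], [1.0, 0.0]],  # state 1 (not in S*): W(1|x',1) = 0
]
X, X_PRIME, Y_ID, S_STAR = 0, 1, 1, {0}

def send(x, s, rng):
    """Draw one channel output for input x in state s."""
    return 0 if rng.random() < W[s][x][0] else 1

def decode(outputs, states):
    """Receiver's rule: it sees the states, so output Y_ID in a state from S*
    pins down the slot that carried x', and hence the bit."""
    if outputs[1] == Y_ID and states[1] in S_STAR:
        return 0  # second slot carried x', so the pattern was (x, x')
    if outputs[0] == Y_ID and states[0] in S_STAR:
        return 1  # first slot carried x', so the pattern was (x', x)
    return None   # inconclusive: both parties repeat the two-slot pattern

def transmit_bit(bit, rng):
    """Repeat the two-slot pattern until the identifying event occurs."""
    pair = (X, X_PRIME) if bit == 0 else (X_PRIME, X)
    slots = 0
    while True:
        states = [rng.randint(0, 1), rng.randint(0, 1)]  # i.i.d. uniform states
        outputs = [send(pair[i], states[i], rng) for i in range(2)]
        slots += 2
        d = decode(outputs, states)
        if d is not None:
            return d, slots

rng = random.Random(1)
print([transmit_bit(b, rng)[0] for b in (0, 1)])  # -> [0, 1], with zero error
```

Under the uniform state prior assumed here, the identifying event (x' producing y in a state from S*) occurs with probability 1/4 per two-slot round, so the expected number of slots is finite, matching the argument in the proof.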

Note that the condition (10) for , together with (2), implies the condition (23): to see this, suppose that (10) holds and let . This is a sanity check of sorts, because clearly implies . The reverse implication does not hold, because in (10) we require that in all states, rather than just in the states from as in (23).

Note also that the sufficient condition for given in (23) is different from the necessary and sufficient condition for given in (12). Based on Theorem 7, one might wonder whether it holds that , i.e., whether (12) is also necessary and sufficient for . This is, however, not the case. In the proof of the following theorem we give an example of a channel for which and yet . In other words, strictly causal state information at the transmitter can in some cases enable the parties to communicate error-free, even if they were not able to do so in the absence of this information. This is somewhat curious, given that in most other settings studied so far strictly causal state information at the transmitter has been shown to be equivalent (in the sense of achievable rates) to no state information: , , , (see also Theorem 7).

###### Theorem 9.

There exists an SD-DMC for which and .

###### Proof:

Consider the following binary-input binary-output channel with two states: in one state the channel is a BSC with nontrivial crossover probability, and in the other state the channel is noiseless.

By Theorem 2(e) we know that . It is easy to see why this is the case: the transmitter and the receiver can agree on the codeword termination by exploiting the noiseless state, since they both know the states (the transmitter obtains the state information with a one-slot delay, but this can be circumvented by sending a dummy letter; see the proof of Theorem 2(e)).

However, if the transmitter has no state information, then it is not possible to communicate with zero error in a finite expected number of channel uses. To see this, suppose that the transmitter is trying to send one bit through the channel by using a repetition code. The receiver can see the channel states and will therefore recover the bit correctly as soon as the state happens to be