# Queue-Channel Capacities with Generalized Amplitude Damping

The generalized amplitude damping channel (GADC) is an important model for quantum communications, especially over optical networks. We make two contributions concerning this channel. First, we consider a symmetric GAD channel characterized by the parameter n = 1/2, and derive its exact classical capacity by constructing a specific induced classical channel. We show that the Holevo quantity for the GAD channel equals the Shannon capacity of the induced binary symmetric channel, establishing at once the capacity result and the fact that the GAD channel capacity can be achieved without the use of entanglement at the encoder or joint measurements at the decoder. Second, motivated by the inevitable buffering of qubits in quantum networks, we consider a generalized amplitude damping queue-channel, that is, a setting where qubits suffer waiting time dependent GAD noise as they wait in a buffer to be transmitted. This GAD queue-channel is characterized by non-i.i.d. noise, owing to the correlated waiting times of consecutive qubits. We exploit a conditional independence property, in conjunction with additivity of the channel model, to obtain a capacity expression for the GAD queue-channel in terms of the stationary waiting time in the queue. Our results provide useful insights towards designing practical quantum communication networks, and highlight the need to explicitly model the impact of buffering.


## 1 Introduction

There is considerable and growing interest in designing and setting up large-scale quantum communication networks [1]. To that end, understanding the fundamental capacity limits of quantum communications in the presence of noise is of practical importance. In this context, the inevitable buffering of quantum states during communication tasks acts as an additional source of decoherence. One concrete example of such buffering occurs at intermediate nodes or quantum repeaters, where quantum states have to be stored for a certain waiting time until they are processed and transmitted again [2]. Indeed, while quantum states wait in a buffer for transmission, they continue to interact with the environment and suffer a waiting time dependent decoherence [3, 4]. In fact, the longer a qubit waits in a buffer, the more it decoheres.

To characterise the impact of buffering on quantum communication, researchers have recently begun to combine queuing models with quantum noise models [5]. In particular, the buffering process inherently introduces correlations across the noise process experienced by consecutive qubits, since the waiting times are correlated according to the queuing dynamics. Thus, to properly characterise the decoherence introduced due to buffering, we need to look ‘beyond i.i.d’ quantum channels and noise models.

Although the buffering process leads to correlated noise, it is known to have a conditional independence structure, given the sequence of waiting times of the qubits. This conditional independence structure can be exploited for additive channels to compute capacity for the correlated noise model, if the corresponding i.i.d. noise model is well understood in terms of capacity; see [5].

The generalized amplitude damping channel (GADC) has emerged as an important model of noise for quantum communication [6, 3]. Even for the well-studied i.i.d. case of the GADC (see [7], for example), several fundamental questions remain unsolved. For instance, (a) can the classical capacity of the channel be achieved without entanglement, and (b) if so, can one construct an explicit encoding-decoding scheme that achieves capacity?

These questions are well-motivated regardless of any buffering considerations. Indeed, it is well known that entanglement can be exploited at the encoder and the decoder for achieving the classical capacity of a quantum channel. For the class of additive channels, the classical capacity can be achieved without using entanglement at the encoder, although the decoding could involve joint measurements at the receiver. Performing such joint measurements typically requires a quantum processor that can carry out quantum gate operations in a high dimensional Hilbert space. Since the availability of such a reliable quantum processor at a communication receiver may not be realistic in the near future, it is practically relevant to ask for the best achievable rate without the use of entangled encoding and joint measurements, as well as for the corresponding encoding-decoding scheme.

Thus motivated, we make the following contributions in this paper.

### 1.1 Our Contributions:

The GADC is typically parametrized by two quantities, p and n. Recent work has characterized the classical capacity of this channel for certain parameter ranges [7]. In particular, the Holevo information for this channel has been characterized, which equals its classical capacity for those parameter values where the channel is known to be additive.

In the present paper, we first consider a symmetric i.i.d. GADC with n = 1/2 and derive the classical capacity of this channel. We do this through the explicit construction of a symbol-by-symbol encoding at the transmitter and a qubit-by-qubit POVM at the decoder. Specifically, we show that the classical capacity of a symmetric GADC is achieved without entanglement. For the asymmetric GADC, we characterize the loss in capacity due to non-entangled decoding. To the best of our knowledge, such results for the GADC have so far been unknown.

Next, we consider the setting in Fig. 1, where qubits are transmitted sequentially, and the qubits decohere as they wait to be transmitted. The extent of noise suffered by a particular qubit is a function of the waiting time spent by that qubit. In such a setting, the "effective" channel experienced by the qubits is non-stationary and has memory. We model this waiting time dependent noise using the quantum queue-channel framework, studied in [5]. Specifically, we study a symmetric GAD queue-channel with n = 1/2, where the damping parameter p is made an explicit function of the waiting time of each qubit. Such a symmetric GAD queue-channel is known to be additive, which enables the use of the capacity upper bound obtained in [5] for additive queue-channels. Further, we propose a specific encoding for the GAD queue-channel, which induces a binary symmetric classical queue-channel. We show that an achievable rate of this binary symmetric queue-channel matches the upper bound enforced by additivity arguments, thus settling the capacity of the GAD queue-channel and giving us a fully classical capacity-achieving scheme for the encoder and decoder. Finally, we obtain useful insights for designing practical quantum communication systems by employing queuing-theoretic analysis on the queue-channel capacity results.

The paper is organized as follows. In Sec. 1.2 we discuss related work. To keep the discussion somewhat self-contained, in Sec. 2 we provide an extended discussion of induced channels, classical capacities of quantum channels, and non-i.i.d. queue-channel capacities. In Sec. 3, we analyze the generalized amplitude damping channel (GADC). Here we discuss the capacity of various induced channels of the GADC (see Figs. 2 and 3), and prove a key result (see Theorem 1) that the Shannon capacity, Holevo capacity, and classical capacity of the symmetric (n = 1/2) GADC are all equal. In Sec. 4 we discuss the queue-channel capacity of the symmetric GADC. We offer useful design insights by analyzing and numerically plotting (see Fig. 4) the capacity expression. Sec. 5 contains a brief discussion and outlines potentially interesting future directions.

### 1.2 Related Work

Our work interleaves different aspects of quantum communication networks, from quantum Shannon theory to queuing theory. In quantum Shannon theory, one studies ultimate limits for transmitting information in the presence of quantum noise. The generalized amplitude damping channel (GADC) is a relevant model of noise in a variety of physical contexts including communication over optical fibers or free space [8, 9, 10, 11], relaxation due to coupling of spins with a high temperature environment [12, 13, 14], and superconducting quantum computing [15]. Quantum capacities of the i.i.d. GADC have been studied (see [7] and references therein). Of particular interest to us are expressions for the Holevo information of the GADC, found in [16] using techniques from [17, 18], and channel parameters [7] for which additivity of the GADC Holevo information is known.

While the primary focus of quantum Shannon theory [19] has been to study the classical and quantum capacities of stationary, memoryless quantum channels [20], recently there has been a spurt of activity in characterizing the capacities of quantum channels in non-stationary, correlated settings. We refer to [21] for a recent review of the different capacity results obtained in the context of quantum channels that are not independent or identical across channel uses. In particular, we focus on the quantum information-spectrum approach in [22], which provides bounds on the classical capacity of a general, non-i.i.d. sequence of quantum channels.

The idea of a quantum queue-channel was originally proposed in [23] as a way to model and study the effect of decoherence due to buffering or queuing in quantum information processing tasks. The classical capacity of quantum queue-channels has been studied for certain classes of quantum channels, and a general upper bound is known for additive quantum queue-channels [5]. The effect of queuing-dependent errors on classical channels has been studied earlier [24], with motivation drawn from crowd-sourcing. More recently, a dynamic programming based framework for characterising the queuing delay of quantum data with finite memory size has been proposed in [25]. Finally, we note that ideas of queuing theory have also been used to study aspects of entanglement distribution over quantum networks such as routing [26], switching, and buffering [27].

## 2 Preliminaries

### 2.1 Classical Channels

Consider a random variable X that takes discrete values from a finite set 𝒳, and another random variable Y that takes discrete values from some finite set 𝒴. A discrete memoryless classical channel N takes an input symbol x ∈ 𝒳 to an output y ∈ 𝒴 with conditional probability p(y|x). Sometimes it is convenient to represent a discrete memoryless classical channel as a transition matrix P whose (y, x) entry is p(y|x). The channel N is called memoryless because the probability distribution of the channel output depends only on its current input and is conditionally independent of previous channel inputs. Noise is modelled by a channel mapping its inputs to a noisy output. When noise on several different inputs acts in such a way that the noise on each input is described by the same discrete memoryless channel, one says the noise is independent and identically distributed (i.i.d.). In what follows we discuss rates for sending information in the presence of i.i.d. noise described by a discrete memoryless channel N.

We define the channel mutual information,

 I^{(1)}(N) = \max_{p(x)} I(Y;X), (1)

where p(x) is a probability distribution over the input of N, and I(Y;X) is the mutual information given by

 I(Y;X) := H(X) + H(Y) − H(X,Y), (2)

where H(X) is the Shannon entropy of the random variable X, and H(X,Y) is the Shannon entropy of the pair (X,Y). The channel mutual information (1) represents an achievable rate for sending information across N. Since I(Y;X) is concave in p(x) for fixed p(y|x), I^{(1)}(N) can be computed numerically with relative ease [28, 29]. The channel mutual information is additive: I^{(1)} of a channel formed by using two channels N and N′ together (sometimes called "used in parallel") is the sum of I^{(1)} of the individual channels; that is,

 I^{(1)}(N × N′) = I^{(1)}(N) + I^{(1)}(N′). (3)

Due to additivity, a classical channel N's Shannon capacity,

 C_{Shan}(N) = \lim_{k→∞} (1/k)\, I^{(1)}(N^{×k}), (4)

where N^{×k} represents k parallel uses of the channel N, simplifies to

 C_{Shan}(N) = I^{(1)}(N). (5)
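The numerical computation alluded to above is typically done with the Blahut-Arimoto algorithm. The following sketch (Python with NumPy; the fixed iteration count is an assumed stopping rule, not part of the algorithm) computes I^{(1)}(N) for a channel given as a transition matrix:

```python
import numpy as np

def blahut_arimoto(P, iters=200):
    """Channel mutual information I^(1)(N), in bits, of a discrete memoryless
    channel given as a transition matrix P with P[y, x] = p(y|x).  Standard
    Blahut-Arimoto iteration; the fixed iteration count is an assumed
    stopping rule for this sketch."""
    ny, nx = P.shape
    p = np.full(nx, 1.0 / nx)              # input distribution, start uniform
    for _ in range(iters + 1):
        q = P @ p                          # induced output distribution
        with np.errstate(divide="ignore", invalid="ignore"):
            logratio = np.where(P > 0, np.log2(P / q[:, None]), 0.0)
        D = np.sum(P * logratio, axis=0)   # KL divergence D(p(y|x) || q), per x
        I = float(p @ D)                   # current estimate of I(Y;X)
        p = p * np.exp2(D)                 # multiplicative Blahut-Arimoto update
        p /= p.sum()
    return I

# Binary symmetric channel with flip probability 0.11: capacity is 1 - h(0.11).
bsc = np.array([[0.89, 0.11], [0.11, 0.89]])
print(blahut_arimoto(bsc))   # ≈ 0.5 bits
```

For a symmetric channel the uniform prior is already optimal, so the iteration converges immediately; for asymmetric channels the multiplicative update does the work.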

### 2.2 Induced Channels and Classical Quantum Channels

We now discuss the transmission of classical information using quantum states. We restrict ourselves to quantum systems described by some finite dimensional Hilbert space H. Both pure and mixed states of these quantum systems can be described using unit-trace positive semi-definite operators (density operators) that belong to L(H), the space of bounded linear operators on H. Suppose classical information x is encoded in quantum states using a fixed map E: x ↦ ρ(x), sometimes called a classical quantum channel, which takes x ∈ 𝒳 to a density operator ρ(x). This classical information can be decoded by a map D, which represents measuring ρ(x) to obtain an output y ∈ 𝒴. This measurement can be described using a POVM {Λ(y)} (a collection of positive operators in L(H) that sum to the identity). Suppose the POVM {Λ(y)} specifies D; then any input x is decoded as y with conditional probability,

 p(y|x) = Pr(Y = y|X = x) = Tr(Λ(y)ρ(x)). (6)

This conditional probability defines an induced channel N. Induced channels play a vital role in defining the capacity of quantum systems to send classical information. For a fixed encoding E, when using a decoding D, the induced channel mutual information I^{(1)}(N) represents a rate at which classical information can be sent using E. The maximum rate of this type, sometimes called the Shannon capacity of E,

 C_{Shan}(E) = \max_{D} I^{(1)}(N) = \max_{D, p(x)} I(Y;X), (7)

is obtained by maximizing the induced channel mutual information over all possible induced channels, i.e., over all possible decodings D. Not much is known about how to perform this optimization. For a fixed D (that is, a fixed induced channel N), I^{(1)}(N) can be computed with relative ease (see comments below (1)). However, for a fixed p(x) and output alphabet 𝒴, I(Y;X) is convex in p(y|x), and p(y|x) is linear in the decoding POVM specifying D. The resulting convexity of I(Y;X) in D, for fixed p(x), makes it non-trivial to numerically compute C_{Shan}(E).
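Eq. (6) is straightforward to transcribe numerically. The following sketch (the function name is illustrative) builds the transition matrix of the induced channel from an encoding and a POVM:

```python
import numpy as np

def induced_channel(states, povm):
    """Transition matrix N[y, x] = Tr(Lambda_y rho_x) of the classical channel
    induced by an encoding {rho_x} and a decoding POVM {Lambda_y}, as in (6)."""
    return np.array([[float(np.real(np.trace(L @ r))) for r in states]
                     for L in povm])

# Illustration: encode in the computational basis and measure in the same
# basis; the induced channel is noiseless.
P0 = np.diag([1.0, 0.0])
P1 = np.diag([0.0, 1.0])
print(induced_channel([P0, P1], [P0, P1]))   # identity transition matrix
```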

A k-letter message x^k = (x_1, …, x_k) is encoded via E^{⊗k}, sometimes called a product encoding, into a product state ρ(x_1) ⊗ ⋯ ⊗ ρ(x_k) and decoded via D^{⊗k}, sometimes called a product decoding, using a product measurement on H^{⊗k}. Product decoding is a special case of joint decoding, performed using a joint measurement POVM on H^{⊗k} resulting in an element y^k on k copies of some classical alphabet 𝒴. Encoding followed by joint decoding results in an induced channel N^{(k)}. Maximizing the channel mutual information of N^{(k)} over all joint decodings defines I^{(k)}(E). Due to the presence of entanglement in the joint decoding measurements, one may have I^{(2)}(E) ≥ 2 I^{(1)}(E). This inequality refers to the super-additivity of I^{(1)}. Due to super-additivity, a proper definition of the capacity of sending classical information using product encoding and joint decoding is given by a multi-letter formula,

 C_{pj}(E) = \lim_{k→∞} (1/k)\, I^{(k)}(E). (8)

Remarkably, the Holevo-Schumacher-Westmoreland theorem [30, 31] gives the above multi-letter expression a single-letter form; that is,

 C_{pj}(E) = χ^{(1)}(E) := \max_{p(x)} χ(p(x), ρ(x)), (9)

where the Holevo quantity,

 χ(p(x), ρ(x)) := S(\sum_x p(x)ρ(x)) − \sum_x p(x) S(ρ(x)), (10)

and S(ρ) := −Tr(ρ log_2 ρ) is the von Neumann entropy of a density operator ρ. Due to the close connection between C_{pj}(E) and χ^{(1)}(E), C_{pj}(E) is sometimes also denoted by χ(E). There are cases where C_{pj}(E) is strictly greater than C_{Shan}(E) [32, 33]. However, much remains unknown about when and how such separations occur.

### 2.3 Classical Capacities of a Quantum Channel

The quantum analog of a discrete memoryless classical channel is a (noisy) quantum channel B. In general, if H_A and H_B are two finite dimensional Hilbert spaces, then a quantum channel B: L(H_A) → L(H_B) is a completely positive trace preserving (CPTP) map. In what follows, we discuss transmission of classical information using quantum states affected by a quantum channel B [34] (also see Ch. 8 in [20]).

Sending classical information across B using product encoding and product decoding results in an induced channel N. Maximizing the channel mutual information of N over E and D gives the product encoding-decoding capacity of B, also known as the Shannon capacity of B,

 C_{Shan}(B) = \max_{E,D} I^{(1)}(N) = \max_{E,D} \max_{p(x)} I(Y;X). (11)

The Shannon capacity is bounded from above by the product encoding joint decoding capacity C_{pj}(B), sometimes called the Holevo capacity or the product state capacity. Using a procedure similar to the one above (8), the capacity C_{pj}(B) can be defined with the aid of induced channels generated from product encoding but joint decoding. Such a definition results in a multi-letter expression of the type (8) with a single-letter characterization,

 C_{pj}(B) = χ^{(1)}(B) := \max_{ρ_a(x), p(x)} χ(p(x), ρ_b(x)), (12)

where ρ_b(x) := B(ρ_a(x)).

The product state capacity can be further generalized to include the possibility of using joint encoding at the channel input and joint decoding at the output. This encoding-decoding results in an induced channel N^{(k)}. Maximizing the mutual information of this induced channel over all encodings and decodings gives I^{(k)}(B). Due to the presence of entanglement at the encoding, one may have super-additivity of the form χ^{(1)}(B^{⊗2}) ≥ 2 χ^{(1)}(B). Due to this super-additivity, the joint encoding-decoding capacity C_{jj}(B), sometimes called the classical capacity of B, is defined by a multi-letter expression of the form (8). The capacity C_{jj}(B) can be characterized using the product state capacity (12) as follows,

 C_{jj}(B) = χ(B) := \lim_{k→∞} (1/k)\, χ^{(1)}(B^{⊗k}). (13)

In general, the limit in (13) is required because the product state capacity can be non-additive [35]; that is, for any two quantum channels B and B′, the inequality,

 χ^{(1)}(B ⊗ B′) ≥ χ^{(1)}(B) + χ^{(1)}(B′), (14)

can be strict. For certain special classes of channels, the Holevo information is known to be additive; that is, the inequality above becomes an equality when B′ is any channel and B belongs to a special class of channels. These special classes include unital qubit channels [36], depolarizing channels [37], Hadamard channels [38], and entanglement breaking channels [39].

### 2.4 Classical capacity of non-i.i.d. quantum channels

As mentioned in Sec. 1, the effective channel seen by qubits in the presence of decoherence in the transmission buffer is non-i.i.d. Characterizing the capacity is a harder problem in such a setting. In the classical setting, a capacity formula for this general non-i.i.d. setting was obtained using the information-spectrum method [40, 41]. This technique was adapted to the quantum setting in [22], and a general capacity formula was obtained for the classical capacity of a quantum channel.

#### 2.4.1 The Quantum inf-information rate

Recall that a quantum channel is defined as a completely positive, trace-preserving map from the "input" Hilbert space to the "output" Hilbert space. Consider a sequence of quantum channels →N = {N^{(n)}}_{n ≥ 1}. Let →P denote the totality of sequences of probability distributions {P_n(X^n)} (with finite support) over input sequences X^n ∈ 𝒳^{(n)}, and →ρ = {ρ_{X^n}} denote the sequences of states corresponding to the encoding X^n ↦ ρ_{X^n}. For any →P, →ρ, and a ∈ ℝ, we define the operator,

 Γ_{\{P_n(X^n), ρ_{X^n}\}}(a) = N^{(n)}(ρ_{X^n}) − e^{an} \sum_{X^n ∈ 𝒳^{(n)}} P_n(X^n) N^{(n)}(ρ_{X^n}).

Further, let {Γ ≥ 0} denote the projector onto the positive eigenspace of the operator Γ_{\{P_n(X^n), ρ_{X^n}\}}(a).

###### Definition 1.

The quantum inf-information rate [22] is defined as,

 \underline{I}(\{→P, →ρ\}, →N) = \sup\Big\{ a : \lim_{n→∞} \sum_{X^n} P_n(X^n)\, \mathrm{Tr}\big[ N^{(n)}(ρ_{X^n}) \{Γ ≥ 0\} \big] = 1 \Big\}. (15)

This is the quantum analogue of the classical inf-information rate originally defined in [40, 41]. The central result of [22] is to show that the classical capacity of the channel sequence →N is given by

 C = \sup_{\{→P, →ρ\}} \underline{I}(\{→P, →ρ\}, →N).

## 3 Generalized Qubit Amplitude Damping

The generalized qubit amplitude damping channel (GADC) is a two-parameter family of channels A_{p,n}, where the parameters p and n are between zero and one. The channel has a qubit input and qubit output, A_{p,n}: L(H_2) → L(H_2), and its superoperator has the form,

 A_{p,n}(ρ) = \sum_{i=0}^{3} K_i ρ K_i^†, (16)

where

 K_0 = \sqrt{1−n}\,(|0⟩⟨0| + \sqrt{1−p}\,|1⟩⟨1|), \quad K_1 = \sqrt{p(1−n)}\,|0⟩⟨1|, (17)
 K_2 = \sqrt{n}\,(\sqrt{1−p}\,|0⟩⟨0| + |1⟩⟨1|), \quad and \quad K_3 = \sqrt{pn}\,|1⟩⟨0| (18)

are Kraus operators. The GADC (16) can also be expressed as

 A_{p,n} = (1−n) A_{p,0} + n A_{p,1}. (19)

The above representation provides the following insightful interpretation. The parameter n represents the mixing of A_{p,0} with A_{p,1}, where each channel A_{p,i} (i = 0 or 1) is an amplitude damping channel that favours the state |i⟩ by keeping it fixed and maps the orthogonal state |1−i⟩ to |i⟩ with damping probability p. When n = 1/2, we get equal mixing of both A_{p,0} and A_{p,1}. This equal mixing represents noise where each state |i⟩ (i ∈ {0,1}) is mapped to itself with probability 1 − p/2 and to |1−i⟩ with probability p/2; in other words, this noise treats both |0⟩ and |1⟩ identically. However, when n is not half, the action of A_{p,n} on |0⟩ is different from its action on |1⟩. In particular, |0⟩ is mapped to itself with probability 1 − pn and to |1⟩ with probability pn, and |1⟩ is mapped to itself with probability 1 − p(1−n) and to |0⟩ with probability p(1−n).
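As a sanity check on (16)-(18) and the interpretation above, the following sketch builds the Kraus operators and verifies trace preservation and the basis-state flip probability (function names are illustrative):

```python
import numpy as np

def gadc_kraus(p, n):
    """Kraus operators (17)-(18) of the GADC A_{p,n}."""
    K0 = np.sqrt(1 - n) * np.diag([1.0, np.sqrt(1 - p)])
    K1 = np.sqrt(p * (1 - n)) * np.array([[0.0, 1.0], [0.0, 0.0]])  # |0><1|
    K2 = np.sqrt(n) * np.diag([np.sqrt(1 - p), 1.0])
    K3 = np.sqrt(p * n) * np.array([[0.0, 0.0], [1.0, 0.0]])        # |1><0|
    return [K0, K1, K2, K3]

def apply_channel(Ks, rho):
    """A_{p,n}(rho) = sum_i K_i rho K_i^dagger, as in (16)."""
    return sum(K @ rho @ K.conj().T for K in Ks)

p, n = 0.3, 0.25
Ks = gadc_kraus(p, n)
# Trace preservation: sum_i K_i^dagger K_i = I.
assert np.allclose(sum(K.conj().T @ K for K in Ks), np.eye(2))
# |0><0| is flipped to |1><1| with probability p*n, as discussed above.
out = apply_channel(Ks, np.diag([1.0, 0.0]))
print(out[1, 1])   # p*n = 0.075
```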

Any qubit density operator can be written in the Bloch parametrization,

 ρ(r) = \frac{1}{2}(I + r·→σ) := \frac{1}{2}(I + xσ_x + yσ_y + zσ_z), (20)

where the Bloch vector r = (x, y, z) has norm |r| ≤ 1,

 σ_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad σ_y = \begin{pmatrix} 0 & −i \\ i & 0 \end{pmatrix}, \quad and \quad σ_z = \begin{pmatrix} 1 & 0 \\ 0 & −1 \end{pmatrix} (21)

are the Pauli matrices, written in the standard basis {|0⟩, |1⟩}. Using the Bloch parametrization, the entropy

 S(ρ(r)) = h((1 − |r|)/2), (22)

where h(x) := −x log_2 x − (1−x) log_2(1−x) is the binary entropy function and |r| is the norm of r. An input density operator ρ(r) is mapped by A_{p,n} to an output density operator ρ(r′) with Bloch vector,

 r′ = (\sqrt{1−p}\, x, \sqrt{1−p}\, y, (1−p)z + p(1−2n)). (23)

The GADC is unital at n = 1/2; that is, A_{p,1/2}(I/2) = I/2. The GADC is entanglement breaking [7] when

 2(\sqrt{2} − 1) ≤ p ≤ 1 \quad and \quad \frac{1}{2}(1 − l(p)) ≤ n ≤ \frac{1}{2}(1 + l(p)), (24)

where l(p) := \sqrt{p^2 + 4p − 4}/p. The Holevo capacity of unital qubit channels and entanglement breaking channels is additive; as a result, when n = 1/2, and when the values of the parameters p and n satisfy (24), the Holevo information of the generalized amplitude damping channel equals the classical capacity of the channel. For other values of p and n, the classical capacity of the GADC is not known, because for these parameter values the Holevo information of the channel is not known to be either additive or non-additive. The actual value of the Holevo information can be computed numerically. Next, we briefly discuss this numerical calculation.

### 3.1 Holevo Information

Let [α₊] and [α₋] be projectors on states with Bloch vectors

 r_+ = (\sqrt{1−z^2}, 0, z), \quad and \quad r_- = (−\sqrt{1−z^2}, 0, z), (25)

respectively; here −1 ≤ z ≤ 1. Notice, [α₊] and [α₋] are not orthogonal, except when z = 0. It has been shown [16] that the Holevo information,

 χ^{(1)}(A_{p,n}) = \max_{−1 ≤ z ≤ 1} S(A_{p,n}(σ)) − [S(A_{p,n}([α₊])) + S(A_{p,n}([α₋]))]/2, (26)

where σ := ([α₊] + [α₋])/2. In the above equation, the optimizing z has the value

 z^* = \frac{u − p(1−2n)}{1−p}, (27)

where u comes from solving,

 (pu − p^2(1−2n) − p(1−p)(1−2n))\, f′(r^*) = −r^*(1−p) f′(u), (28)

with

 f(x) := (1+x)\log_2(1+x) + (1−x)\log_2(1−x), (29)
 f′(x) = \log_2\!\left(\frac{1+x}{1−x}\right), \quad and (30)
 r^* := \sqrt{1 − p − \frac{(u − p(1−2n))^2}{1−p} + u^2}. (31)

Using the value of z^* in (27) gives,

 χ^{(1)}(A_{p,n}) = \frac{1}{2}\left(f(r^*) − \log_2(1−u^2) − u f′(u)\right). (32)
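The stationarity condition (28) is a one-dimensional root-finding problem in u, so χ^{(1)} is easy to evaluate numerically. A sketch using plain bisection (the bracket and iteration count are assumed choices; the demo is the symmetric case n = 1/2, where the root u = 0 is well behaved and the result should match 1 − h((1 − √(1−p))/2)):

```python
import numpy as np

def f(x):
    return (1 + x) * np.log2(1 + x) + (1 - x) * np.log2(1 - x)

def fprime(x):
    return np.log2((1 + x) / (1 - x))

def holevo_gadc(p, n):
    """chi^(1)(A_{p,n}) from (27)-(32): solve the stationarity condition (28)
    for u by bisection, then evaluate (32)."""
    def rstar(u):
        return np.sqrt(1 - p - (u - p * (1 - 2 * n)) ** 2 / (1 - p) + u ** 2)
    def g(u):  # left-hand side minus right-hand side of (28)
        return ((p * u - p**2 * (1 - 2 * n) - p * (1 - p) * (1 - 2 * n))
                * fprime(rstar(u)) + rstar(u) * (1 - p) * fprime(u))
    lo, hi = -0.999, 0.999
    for _ in range(100):                    # plain bisection
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    u = 0.5 * (lo + hi)
    return 0.5 * (f(rstar(u)) - np.log2(1 - u**2) - u * fprime(u))

# Symmetric case: chi^(1)(A_{p,1/2}) equals 1 - h((1 - sqrt(1-p))/2).
print(holevo_gadc(0.36, 0.5))   # ≈ 0.531
```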

Solving (24) for p gives a range,

 p^* ≤ p ≤ 1, (33)

where the GADC is entanglement breaking. Here the value,

 p^* = \max\!\left(2(\sqrt{2}−1),\; \frac{\sqrt{1 + 4n(1−n)} − 1}{2n(1−n)}\right). (34)

As indicated earlier, entanglement breaking channels have additive Holevo capacity. Thus, when p satisfies (33), the GADC has additive Holevo capacity. While the Holevo information gives the product state classical capacity, it doesn't give an explicit encoding and decoding that achieves this capacity. In what follows, we construct explicit encodings and decodings; in other words, we construct induced classical channels, and compare the capacity of these channels to the product state classical capacity χ^{(1)}(A_{p,n}). For n = 1/2, we find the optimal encoding and decoding, which achieves χ^{(1)}(A_{p,1/2}) for all 0 ≤ p ≤ 1.
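The threshold (34) is a closed-form expression; a minimal sketch evaluating it (the function name is illustrative):

```python
import numpy as np

def p_star(n):
    """Threshold (34): A_{p,n} is entanglement breaking for p in [p*, 1]."""
    return max(2 * (np.sqrt(2) - 1),
               (np.sqrt(1 + 4 * n * (1 - n)) - 1) / (2 * n * (1 - n)))

# At the symmetric point the two arguments of the max coincide.
print(p_star(0.5))   # 2(sqrt(2) - 1) ≈ 0.828
print(p_star(0.1))   # a larger threshold away from n = 1/2
```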

### 3.2 Induced Channels

To obtain an induced channel for A_{p,n} one must choose an encoding and a decoding. To choose an encoding E, one fixes a set of input states {ρ(x)}. To choose a decoding D, one fixes an output measurement POVM {Λ(y)}. Together, the encoding-decoding results in an induced channel with conditional probability p(y|x) = Tr(Λ(y) A_{p,n}(ρ(x))). A priori, there is no clear choice for these input states and output measurement. However, the generalized qubit amplitude damping channel satisfies an equation

 A_{p,n}(σ_z^a ρ (σ_z^a)^†) = σ_z^b A_{p,n}(ρ)(σ_z^b)^†, (35)

where the subscripts a and b on the Pauli operator σ_z signify the space on which the operator acts. The above equation implies that the generalized amplitude damping channel has a rotational symmetry around the z-axis. Using this rotational symmetry and the fact that A_{p,n} is a qubit input-output channel, one may choose an encoding x ↦ ρ(x), where ρ(0) = |0⟩⟨0| and ρ(1) = |1⟩⟨1| are two orthogonal input states that remain unchanged under the symmetry operations; that is, σ_z ρ(x) σ_z^† = ρ(x). To decode, one may apply a protocol for correctly identifying, with highest probability, a state chosen uniformly from a set of two known states ρ_0 := A_{p,n}(ρ(0)) and ρ_1 := A_{p,n}(ρ(1)). This protocol comes from the theory of quantum state discrimination [42]. It uses a POVM with two elements {Π, I − Π}, where Π is a projector onto the space of positive eigenvalues of ρ_0 − ρ_1. An unknown state, either ρ_0 or ρ_1 with equal probability, is measured using the POVM. If the outcome corresponding to Π occurs, the unknown state is guessed to be ρ_0; otherwise, the guess is ρ_1. In the present case, a simple calculation shows that Π = |0⟩⟨0|.

The state discrimination protocol outlined above can be used as the decoding map D, which measures A_{p,n}(ρ(x)) using the POVM {Π, I − Π} and returns y ∈ {0,1} with conditional probability p(y|x). This choice of decoding, together with the encoding x ↦ ρ(x), results in an induced channel M₁ with transition probability matrix

 P′ = \begin{pmatrix} 1 − pn & p(1−n) \\ pn & 1 − p(1−n) \end{pmatrix}. (36)

The induced channel M₁ corresponding to the above matrix is a binary asymmetric channel that flips 0 to 1 with probability pn but flips 1 to 0 with probability p(1−n). The capacity of M₁ is given by

 C(M₁) = \max_{0 ≤ a ≤ 1} h(r) − (1−a)h(pn) − a\,h(p(1−n)), (37)

where r := (1−a)pn + a(1 − p(1−n)) and a is the probability of the input x = 1. From the above equation, it is clear that C(M₁) is unchanged when n is replaced with 1−n. We therefore restrict ourselves to 0 ≤ n ≤ 1/2. As can be seen from Fig. 2(a), C(M₁) decreases with n for 0 ≤ n ≤ 1/2.
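Expression (37) is a one-parameter maximization over the prior a, so a simple grid search suffices for numerical evaluation (the grid resolution is an arbitrary choice in this sketch):

```python
import numpy as np

def h(x):
    x = np.clip(x, 1e-12, 1 - 1e-12)   # guard the entropy at the endpoints
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def cap_M1(p, n, grid=100001):
    """Capacity (37) of the induced binary asymmetric channel M1: maximize
    h(r) - (1-a) h(pn) - a h(p(1-n)) over the prior a on input 1, where
    r = (1-a) p n + a (1 - p (1-n)).  Plain grid search over a."""
    a = np.linspace(0.0, 1.0, grid)
    r = (1 - a) * p * n + a * (1 - p * (1 - n))
    return float(np.max(h(r) - (1 - a) * h(p * n) - a * h(p * (1 - n))))

# Symmetric case n = 1/2: both flip probabilities equal p/2, so M1 reduces
# to a binary symmetric channel with capacity 1 - h(p/2).
print(cap_M1(0.36, 0.5))   # ≈ 1 - h(0.18) ≈ 0.320
```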

Next, we consider a different induced channel where the encoding is performed using possibly non-orthogonal states, and the decoding is performed using a measurement designed to distinguish these encoded states at the channel output with maximum probability. The encoding maps 0 and 1 to [α₊] and [α₋] (defined via eq. (25)), respectively. The decoding is performed using a POVM {Π(z), I − Π(z)}, where Π(z) is the projector onto the space of positive eigenvalues of A_{p,n}([α₊]) − A_{p,n}([α₋]). This projector is simply Π(z) = |+⟩⟨+|, where |+⟩ := (|0⟩ + |1⟩)/\sqrt{2}. This encoding-decoding scheme results in a one-parameter family of induced channels M₂(z). This family of channels has a transition matrix

 Q(z) = \begin{pmatrix} q(z) & 1 − q(z) \\ 1 − q(z) & q(z) \end{pmatrix}, (38)

where q(z) := (1 + \sqrt{(1−p)(1−z^2)})/2. For any z, the induced channel M₂(z) is a binary symmetric channel with flip probability 1 − q(z). Interestingly, this family of induced channels coming from the GADC does not depend on the parameter n in the channel. The Shannon capacity of M₂(z) is simply

 C(M₂(z)) = 1 − h(q(z)). (39)

For a fixed p, one can easily show that C(M₂(z)) is maximum when z = 0; thus, M₂ := M₂(0) has the largest Shannon capacity among the one-parameter family of induced channels M₂(z). This induced channel M₂ is simply a binary symmetric channel with flip probability q := 1 − q(0) = (1 − \sqrt{1−p})/2. We compare the capacity of M₂ with that of M₁, defined earlier (see Fig. 2(b)), to find that

 C(M₂) ≥ C(M₁), (40)

for all 0 ≤ p ≤ 1 and 0 ≤ n ≤ 1. In Fig. 3, we compare the capacity of the induced channel M₂ with the Holevo information of the GADC for various values of n. We numerically find that C(M₂) falls below χ^{(1)}(A_{p,n}) when n ≠ 1/2. Next we focus on A_{p,1/2}, the GADC at n = 1/2.
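The optimality of z = 0 claimed above is easy to confirm numerically; a short sketch (the grid is an arbitrary choice):

```python
import numpy as np

def h(x):
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def cap_M2(p, z):
    """Shannon capacity (39) of M2(z): a binary symmetric channel with
    flip probability (1 - sqrt((1-p)(1-z^2)))/2."""
    return 1 - h((1 - np.sqrt((1 - p) * (1 - z ** 2))) / 2)

p = 0.36
zs = np.linspace(-1.0, 1.0, 2001)
caps = cap_M2(p, zs)
print(zs[np.argmax(caps)])   # optimal z (≈ 0)
print(cap_M2(p, 0.0))        # 1 - h(0.1) ≈ 0.531
```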

###### Theorem 1.

For the GADC A_{p,1/2}, the Shannon capacity, product-state capacity, and classical capacity are all equal to that of a binary symmetric channel with flip probability q = (1 − \sqrt{1−p})/2.

###### Proof.

Our proof has two key ingredients: the first is the well-known additivity of χ^{(1)}(A_{p,1/2}), and the second is an argument showing that the capacity of a binary symmetric induced channel of A_{p,1/2} bounds χ^{(1)}(A_{p,1/2}) from above and below.

Notice that the GADC A_{p,1/2} is unital. As a result, the channel's Holevo information is additive and equals the channel's classical capacity; that is,

 χ(A_{p,1/2}) = χ^{(1)}(A_{p,1/2}). (41)

The Holevo information χ^{(1)}(A_{p,1/2}) is the product-state classical capacity of A_{p,1/2}. This product state capacity bounds the Shannon capacity of A_{p,1/2} from above. In turn, this Shannon capacity upper bounds the capacity of any induced channel of A_{p,1/2} that uses product encoding and decoding. Since the induced channel M₂, defined above (39), uses product encoding and product decoding,

 C(M₂) ≤ C_{Shan}(A_{p,1/2}) ≤ χ^{(1)}(A_{p,1/2}). (42)

This induced channel M₂ is a binary symmetric channel with flip probability q = (1 − \sqrt{1−p})/2. The channel's capacity is

 C(M₂) = 1 − h(q), (43)

where h is the binary entropy function. We now show this capacity bounds χ^{(1)}(A_{p,1/2}) from above. From eq. (26),

 χ^{(1)}(A_{p,1/2}) = \max_{−1 ≤ z ≤ 1} \{ S(A_{p,1/2}(σ)) − [S(A_{p,1/2}([α₊])) + S(A_{p,1/2}([α₋]))]/2 \}. (44)

Using (25), (23), and (22) we find that

 S(A_{p,1/2}([α₊])) = S(A_{p,1/2}([α₋])) = h((1 − |r_b|)/2), (45)

where r_b = (±\sqrt{(1−p)(1−z^2)}, 0, (1−p)z) is the Bloch vector of the output state, with norm |r_b| = \sqrt{(1−p)(1−z^2) + (1−p)^2 z^2}. Using (45) and (44) we bound the value of χ^{(1)}(A_{p,1/2}) from above as follows:

 χ^{(1)}(A_{p,1/2}) ≤ \max_{−1 ≤ z ≤ 1} S(A_{p,1/2}(σ)) − \min_{−1 ≤ z ≤ 1} h((1 − |r_b|)/2). (46)

Notice (1) the maximum value of S(A_{p,1/2}(σ)) is at most 1, because A_{p,1/2} has a qubit output; and (2) h((1 − |r_b|)/2) is monotonically decreasing in |r_b|, and |r_b| is maximized at z = 0, where h((1 − |r_b|)/2) takes its minimum value h(q). Using these two facts in (46) along with (43),

 χ^{(1)}(A_{p,1/2}) ≤ 1 − h(q) = C(M₂). (47)

Together, (47), (42), and (41) prove

 C(M₂) = C_{Shan}(A_{p,1/2}) = χ^{(1)}(A_{p,1/2}) = χ(A_{p,1/2}) = 1 − h(q), (48)

where q = (1 − \sqrt{1−p})/2. ∎

The equality (48) above shows that the induced channel M₂, obtained from product encoding and product decoding, achieves not only the Shannon capacity but also the product state capacity of A_{p,1/2}. A by-product is an alternate proof for the product state capacity expression of A_{p,1/2} (see Ex. 8.1 in [20]). Even more notably, the product state capacity in general allows for joint decoding of its product state inputs; however, we find that product decoding of the type in the induced channel M₂ suffices to achieve this capacity. For A_{p,1/2} the product state capacity is additive and equals the ultimate channel capacity C_{jj}(A_{p,1/2}). This ultimate capacity allows for the more general joint encoding and joint decoding, yet the additivity of χ^{(1)}(A_{p,1/2}), along with the equality in (48), shows how this ultimate capacity is achieved simply using product encoding and product decoding.

In what follows, we focus on the GADC A_{p,1/2}. As discussed below (19), this channel describes noise in which both computational basis states |0⟩ and |1⟩ are treated on an equal footing. When it is known which of these computational basis states decays faster than the other, the GADC with n ≠ 1/2 is an apt noise model. However, when such information is unavailable, or when it is known that both computational basis states decay but the maximally mixed state doesn't, one uses the n = 1/2 GADC. One simple example of such noise is the qubit thermal channel (analogous to the bosonic thermal channel [7, 13, 14]), in which the channel environment is represented by the maximally mixed state. Another simple example is the effect of dissipation to an environment at a finite temperature [43].

## 4 Decoherence in Buffer: A Queue-channel Approach

For communicating a message over a GAD channel, a sequence of product quantum states has to be transmitted serially. In an idealized i.i.d. setting, it is implicitly assumed that the transmission of each state takes a fixed amount of time and that the qubits are prepared accordingly to avoid any buffering. However, in practice, preparation times as well as transmission (and reception) times are stochastic due to the inherent quantum physical nature of the devices. This naturally leads to queuing of qubits at the buffer of the transmitter, and the queued qubits decohere due to their interactions with the environment of the buffer. A schematic of this setting is presented in Fig. 1.

Note that the decoherence suffered while waiting in the buffer is in addition to the decoherence experienced by the qubits while passing through the channel. The decoherence a qubit experiences in the buffer depends on its waiting time: the longer the wait, the worse the decoherence. Due to the queuing dynamics, the waiting times of the qubits are distinct and non-i.i.d. Hence, the effective decoherence experienced by the qubits is also non-i.i.d.

Most information theoretic channel models assume an idealized scenario without any decoherence in the transmission buffer. In Sec. 3, we studied the capacity of GAD channels assuming such an idealized scenario and showed that non-entangled encoding and decoding schemes can achieve the capacity of the symmetric ($n = 1/2$) GADC. In this section, we introduce a GAD queue-channel that can model both channel and buffer decoherence. The concept of a quantum queue-channel was introduced in [5] by adapting a related classical notion from [24]. Building on the results in Sec. 3, we characterize the capacity of GAD queue-channels and show that non-entangled encoding and decoding schemes achieve that capacity.

For a symmetric GADC, the parameter $p$ captures the level of damping experienced by a qubit while interacting with an environment. In the absence of buffer decoherence, $p$ depends on the flight time through the channel and the physical parameters of the channel. Similarly, the level of damping experienced in the buffer depends on the waiting time in the buffer and the physical parameters of the buffer. Hence, the effective GADC parameter experienced by a qubit is a function of its waiting time and its flight time, where the form of this function depends on the physical parameters of the channel and the buffer. As the flight time is almost deterministic, for notational simplicity we denote this function by $p_{\mathrm{eff}}(W)$, where $W$ is the waiting time.

In optical quantum communication, the prevalent form of quantum communication, the transmission channel and the buffer are often both made of optical fibers. Also, in practice, the noise typically treats $|0\rangle$ and $|1\rangle$ identically. So it is natural to assume that the mixing parameter is $n = 1/2$ and does not depend on the waiting time. On the other hand, the damping parameter depends on the time of interaction with the environment. These considerations motivate the above modeling assumptions: a symmetric GAD queue-channel with waiting-time-dependent damping parameter $p_{\mathrm{eff}}(W)$. For other modes of quantum communication, however, the models for buffer decoherence and channel decoherence can differ; this would be an interesting direction for future exploration.
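As an illustrative sketch of a waiting-time-dependent damping parameter $p_{\mathrm{eff}}(W)$, the snippet below uses an exponential-relaxation model $p_{\mathrm{eff}}(w) = 1 - e^{-\kappa w}$. Both this functional form and the rate `kappa` are assumptions made for illustration, not a model fixed by the analysis here.

```python
import math

def p_eff(w: float, kappa: float = 1.0) -> float:
    """Effective damping parameter after waiting time w in the buffer.

    Assumes exponential relaxation at rate kappa, i.e.
    p_eff(w) = 1 - exp(-kappa * w); this form and kappa are
    illustrative assumptions, not fixed by the paper.
    """
    return 1.0 - math.exp(-kappa * w)

# No wait, no damping; damping worsens monotonically with the wait.
print(p_eff(0.0), p_eff(1.0), p_eff(10.0))
```

Any increasing function of $w$ taking values in $[0, 1]$ would serve the same illustrative purpose.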

A classical message is encoded into a sequence of classical states, which in turn is transmitted as a sequence of quantum states, or qubits. Effectively, qubit $i$ is received at the receiver after its passage through a symmetric GADC with parameter $p_{\mathrm{eff}}(W_i)$, where $W_i$ is the time between the preparation of the $i$th qubit and its transmission. Given the knowledge of the waiting times $\{W_i\}$ at the receiver, the qubits experience independent but not identically distributed symmetric generalized amplitude damping decoherence.

We complete the description of the combined channel with the mathematical model of the queuing system that gives rise to the waiting time sequence. The buffering process is modeled as a continuous-time single-server queue, characterised by (i) a server that processes the qubits in the order in which they arrive, that is, in a First Come First Served (FCFS) fashion (the FCFS assumption is not required for our results to hold, but it helps the exposition), and (ii) an "unlimited buffer," meaning there is no limit on the number of qubits that can wait to be transmitted. We denote the time between the preparations of the $i$th and $(i+1)$th qubits by $A_i$, where the $\{A_i\}$ are i.i.d. random variables. These $A_i$s are viewed as the inter-arrival times of a point process of rate $\lambda = 1/\mathbb{E}[A_1]$. The "service time," or the time taken to transmit qubit $i$, is denoted by $S_i$, where the $\{S_i\}$ are also assumed to be i.i.d. random variables, independent of the inter-arrival times $\{A_i\}$. The "service rate" of the qubits is denoted by $\mu = 1/\mathbb{E}[S_1]$. We assume that $\lambda < \mu$ (i.e., the mean transmission time is strictly less than the mean preparation time) to ensure stability of the queue. Qubit $i$ has a waiting time $W_i$, with $W_1 = S_1$; the waiting times of the other qubits can be obtained using the well known Lindley's recursion:

$$W_{i+1} = \max(W_i - A_i,\, 0) + S_{i+1}.$$

In queuing parlance, the above system describes a continuous-time $G/G/1$ queue. Under mild conditions, the waiting time sequence $\{W_i\}$ for a stable queue is ergodic and reaches a stationary distribution $\pi$. We assume that the waiting times of the qubits are not available at the transmitter during encoding, but are available at the receiver during decoding.
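Lindley's recursion is straightforward to simulate. The sketch below samples a waiting-time sequence for the special case of exponential inter-arrival and service times (an $M/M/1$ queue); the particular rates `lam` and `mu` and the $M/M/1$ choice are illustrative assumptions within the general $G/G/1$ model described above.

```python
import random

def simulate_waiting_times(lam: float, mu: float, n: int, seed: int = 0):
    """Sample waiting times W_1, ..., W_n via Lindley's recursion
    W_{i+1} = max(W_i - A_i, 0) + S_{i+1}, with W_1 = S_1.

    Exponential inter-arrival (rate lam) and service (rate mu) times
    make this an M/M/1 queue, an illustrative special case of the
    paper's G/G/1 setting; stability requires lam < mu.
    """
    rng = random.Random(seed)
    w = rng.expovariate(mu)          # W_1 = S_1
    waits = [w]
    for _ in range(n - 1):
        a = rng.expovariate(lam)     # inter-arrival time A_i
        s = rng.expovariate(mu)      # service time S_{i+1}
        w = max(w - a, 0.0) + s      # Lindley's recursion
        waits.append(w)
    return waits

waits = simulate_waiting_times(lam=0.8, mu=1.0, n=100_000)
# For a stable M/M/1 queue, the mean stationary waiting time
# (including service) is 1 / (mu - lam); here 1 / (1.0 - 0.8) = 5.
print(sum(waits) / len(waits))
```

The empirical mean of the sampled waits should settle near the analytic $M/M/1$ value $1/(\mu - \lambda)$, illustrating the convergence to the stationary distribution $\pi$.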

An important difference between the queue-channel introduced above and the usual i.i.d. channels is that this channel is part of a continuous-time dynamics. Hence, the usual notion of capacity per channel use for i.i.d. channels is not pertinent here. As mentioned before, the above channel model is closely related to the quantum queue-channels studied in [5]. So, we first briefly review the notion of capacity per unit time and some relevant capacity results from [5].

#### 4.1.1 Classical capacity of additive quantum queue-channels

###### Definition 2.

A rate $R$ is called an achievable rate for a quantum queue-channel if there exists a sequence of quantum codes with probability of error approaching $0$ and rate approaching $R$ as the code length tends to infinity.

###### Definition 3.

The information capacity of the queue-channel is the supremum of all achievable rates for a given arrival and service process, and is denoted by $C$ bits per unit time.

Note that the information capacity of the queue-channel depends on the arrival process, the service process, and the noise model. We assume that the receiver knows the realizations of the arrival and the departure times of each symbol, a realistic assumption in several physical scenarios, as discussed in [5].

###### Definition 4 (Additive quantum queue-channel).

A quantum queue-channel is said to be additive if the Holevo information of the underlying single-use quantum channel is additive. Specifically, additivity of the Holevo information $\chi^{(1)}$ of the quantum channel implies

$$\chi^{(1)}(\mathcal{N}_{W_1} \otimes \mathcal{N}_{W_2}) = \chi^{(1)}(\mathcal{N}_{W_1}) + \chi^{(1)}(\mathcal{N}_{W_2}).$$
###### Proposition 2.

The capacity of the quantum queue-channel (in bits/sec) is given by,

$$C = \lambda \sup_{\{\vec{P}, \vec{\rho}\}} \underline{I}\left(\{\vec{P}, \vec{\rho}\},\, \vec{\mathcal{N}}_{\vec{W}}\right), \qquad (49)$$

where $\underline{I}$ is the quantum spectral inf-information rate defined in Eq. (15).

We conclude this section by stating the general upper bound for the capacity of additive quantum queue-channels, proved in [5].

###### Theorem 3.

For an additive quantum queue-channel, the capacity is bounded as

$$C \leq \lambda\, \mathbb{E}_\pi\left[\chi^{(1)}(\mathcal{N}_W)\right] \text{ bits/sec},$$

where $\mathbb{E}_\pi$ denotes expectation with respect to the stationary distribution $\pi$ of the waiting time $W$. Here, $\chi^{(1)}(\mathcal{N}_W)$ denotes the Holevo information of the single-use quantum queue-channel corresponding to waiting time $W$.

### 4.2 Capacity of the symmetric GAD queue-channel

To characterize the capacity of the above channel, we use the additive queue-channel capacity results from [5]. First, we present the converse result, that is, the capacity upper bound.

###### Corollary 4.

The capacity of a symmetric GAD queue-channel is upper bounded by

$$\lambda\, \mathbb{E}_\pi\left[1 - h\!\left(\frac{1 - \sqrt{1 - p_{\mathrm{eff}}(W)}}{2}\right)\right].$$

This corollary follows directly from [5, Theorem 1], using Eq. (48) in Sec. 3.2:

$$1 - h\!\left(\frac{1 - \sqrt{1 - p_{\mathrm{eff}}(W)}}{2}\right) = \chi^{(1)}\!\left(\mathcal{A}_{p_{\mathrm{eff}}(W),\,\frac{1}{2}}\right).$$

This is because the symmetric GAD queue-channel is an additive queue-channel, and the upper bound in [5, Theorem 1] (stated here as Theorem 3) is valid for any additive queue-channel.
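The bound in Corollary 4 can be estimated numerically by averaging the per-qubit Holevo quantity over simulated waiting times. In the sketch below, the $M/M/1$ queue (exponential inter-arrival and service times) and the damping model `p_eff(w) = 1 - exp(-kappa * w)` are illustrative assumptions; only the bound's expression itself comes from the corollary.

```python
import math
import random

def binary_entropy(x: float) -> float:
    """Binary entropy h(x) in bits, with h(0) = h(1) = 0."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def capacity_upper_bound(lam: float, mu: float, kappa: float = 1.0,
                         n: int = 200_000, seed: int = 1) -> float:
    """Monte Carlo estimate (bits/sec) of
        lam * E_pi[ 1 - h( (1 - sqrt(1 - p_eff(W))) / 2 ) ],
    averaging over waiting times generated by Lindley's recursion.

    Assumes an M/M/1 queue (rates lam < mu) and the illustrative
    damping model p_eff(w) = 1 - exp(-kappa * w).
    """
    rng = random.Random(seed)
    w = rng.expovariate(mu)  # W_1 = S_1
    total = 0.0
    for _ in range(n):
        p_eff = 1.0 - math.exp(-kappa * w)
        total += 1.0 - binary_entropy((1.0 - math.sqrt(1.0 - p_eff)) / 2.0)
        # Lindley's recursion: W_{i+1} = max(W_i - A_i, 0) + S_{i+1}
        w = max(w - rng.expovariate(lam), 0.0) + rng.expovariate(mu)
    return lam * total / n

# Heavier buffer decoherence (larger kappa) can only reduce the bound.
print(capacity_upper_bound(lam=0.8, mu=1.0, kappa=0.1))
print(capacity_upper_bound(lam=0.8, mu=1.0, kappa=1.0))
```

Since the integrand lies in $[0, 1]$, the estimate is always at most $\lambda$, consistent with the interpretation of $\lambda$ as the qubit arrival rate.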

To prove achievability, we build on the results presented in Sec. 3.2 and obtain the following result.

###### Theorem 5.

There exists a product encoding and a non-entangled decoding scheme that achieves a rate of

$$\lambda\, \mathbb{E}_\pi\left[1 - h\!\left(\frac{1 - \sqrt{1 - p_{\mathrm{eff}}(W)}}{2}\right)\right]$$