# An Elementary Proof of a Classical Information-Theoretic Formula

A renowned information-theoretic formula by Shannon expresses the mutual information rate of a white Gaussian channel with a stationary Gaussian input as an integral of a simple function of the power spectral density of the channel input. We give in this paper a rigorous yet elementary proof of this classical formula. Unlike conventional approaches, which either rely on heavy mathematical machinery or resort to "external" results, our proof, which hinges on a recently proven sampling theorem, is elementary and self-contained, using only well-known facts from basic calculus and matrix theory.

12/24/2019


## 1 Introduction

Consider the following continuous-time white Gaussian channel

$$Y(t)=\int_0^t X(s)\,ds+B(t),\qquad t\in\mathbb{R}^+, \tag{1}$$

where $\{B(t)\}$ denotes the standard Brownian motion, and the channel input $\{X(t)\}$ is a stationary Gaussian process, independent of $\{B(t)\}$, with power spectral density $f$. This paper gives an elementary proof of the following classical information-theoretic formula (see, e.g., [11]):

$$\lim_{T\to\infty}\frac{1}{T}\,I(X_0^T;Y_0^T)=\frac{1}{4\pi}\int_{-\infty}^{\infty}\log\bigl(1+2\pi f(\lambda)\bigr)\,d\lambda. \tag{2}$$

This renowned formula was first established by Shannon in his seminal work [16] through a heuristic yet rather convincing spectrum-splitting argument, and was later treated more rigorously by numerous authors, predominantly using alternative channel formulations obtained via orthogonal expansion representations in the relevant Hilbert space. Representative work in this direction includes [9, 10, 8, 3, 11], and at the heart of all the approaches therein lies a continuous-time version of the famed Szegő theorem (see, e.g., [5]). In a different direction, there have been efforts devoted to analyzing continuous-time Gaussian channels with tools and techniques from stochastic calculus [2, 12, 6, 7], where the channel mutual information has been found to be linked to an optimal linear filter. These links, together with well-known results from filtering theory [17, 18], can conceivably recover (2).

It appears to us that all existing treatments either rely on heavy mathematical machinery or have to resort to some "external" results. By comparison, our proof, which hinges on a recently proven sampling theorem (see [15]), is elementary and self-contained, using only well-known facts from basic calculus and matrix theory: it turns out that the aforementioned sampling theorem enables us to sidestep numerous complications that are otherwise present in the continuous-time regime, and allows us to employ a spectral analysis of finite-dimensional matrices rather than the infinite-dimensional operators that some previous approaches have to deal with. Moreover, as elaborated in Section 4, our approach gives rise to a "scalable" version of the Szegő theorem and naturally connects a continuous-time Gaussian channel to its sampled discrete-time versions, thereby promising further applications in more general settings.

## 2 A Heuristic Proof

We first explain the aforementioned sampling theorem. For any given $T>0$ and $n\in\mathbb{N}$, choose the evenly spaced sampling times $t_i\triangleq iT/n$, $i=0,1,\dots,n$, so that $t_0=0$ and $t_n=T$, and let

$$\Delta_{T,n}\triangleq\{t_0,t_1,\dots,t_n\}.$$

Sampling the channel (1) over the time interval $[0,T]$ with respect to $\Delta_{T,n}$, we obtain its sampled discrete-time version as follows:

$$Y(t_i)=\int_0^{t_i}X(s)\,ds+B(t_i),\qquad i=0,1,\dots,n. \tag{3}$$

Loosely speaking, the sampling theorem in [15] says that as the above sampling becomes increasingly fine, the mutual information of the discrete-time channel (3) converges to that of the original continuous-time channel (1).

Note that the mutual information of the channel (3) can be computed as

$$\begin{aligned}I(X_0^T;Y(\Delta_{T,n}))&=I\bigl(X_0^T;\{Y(t_i)-Y(t_{i-1})\}_{i=1}^{n}\bigr)\\&=H\bigl(\{Y(t_i)-Y(t_{i-1})\}_{i=1}^{n}\bigr)-H\bigl(\{B(t_i)-B(t_{i-1})\}_{i=1}^{n}\bigr)\\&=\frac{1}{2}\log\det(I_n+A_{T,n}),\end{aligned}\tag{4}$$

where $I_n$ is the $n\times n$ identity matrix and $A_{T,n}$ is an $n\times n$ matrix whose $(i,j)$-th entry is defined as

$$A_{T,n}(i,j)=\mathbb{E}\Bigl[\frac{n}{T}\int_{t_i}^{t_{i+1}}X(s)\,ds\int_{t_j}^{t_{j+1}}X(s)\,ds\Bigr].$$

It then follows from the stationarity of $\{X(t)\}$ that

$$A_{T,n}(i,j)=\gamma_{j-i},$$

where, setting $t_l\triangleq lT/n$ for $l<0$, we have defined

$$\gamma_l\triangleq\mathbb{E}\Bigl[\frac{n}{T}\int_{t_0}^{t_1}X(s)\,ds\int_{t_l}^{t_{l+1}}X(s)\,ds\Bigr],\qquad l=-(n-1),\dots,n-1.$$

Noting that $A_{T,n}$ is a Hermitian (and Toeplitz) matrix and letting $\psi_1,\psi_2,\dots,\psi_n$ denote all its eigenvalues, we have

$$I(X_0^T;Y(\Delta_{T,n}))=\frac{1}{2}\sum_{m=1}^{n}\log(1+\psi_m).\tag{5}$$

Now consider an $n\times n$ matrix $\widehat{A}_{T,n}$ defined by

$$\widehat{A}_{T,n}(i,j)=\widehat{\gamma}_{j-i},$$

where $\widehat{\gamma}_0\triangleq\gamma_0$ and $\widehat{\gamma}_k\triangleq\gamma_k+\gamma_{k-n}$, for $k=1,\dots,n-1$ (with $\widehat{\gamma}_{-k}\triangleq\widehat{\gamma}_{n-k}$). Obviously, $\widehat{A}_{T,n}$ is an $n\times n$ circulant matrix whose eigenvalues can be readily computed as

$$\widehat{\psi}_m=\sum_{k=0}^{n-1}\widehat{\gamma}_k\,e^{-2\pi imk/n}.\tag{6}$$

Now, for large $n$, approximating each $\int_{t_k}^{t_{k+1}}X(s)\,ds$ by $X(t_k)\,T/n$, we have, for $0\le m\le n/2$,

$$\widehat{\psi}_m\approx\mathbb{E}[X^2(0)]\frac{T}{n}+\sum_{k=1}^{n-1}\mathbb{E}[X(t_0)X(t_k)]\,e^{-2\pi i(t_k-t_0)m/T}\,\frac{T}{n}+\sum_{k=1}^{n-1}\mathbb{E}[X(t_0)X(t_{n-k})]\,e^{-2\pi i(t_0-t_{n-k})m/T}\,\frac{T}{n}\approx 2\pi f(2\pi m/T),\tag{7}$$

and for $n/2<m\le n-1$,

$$\widehat{\psi}_m\approx\mathbb{E}[X^2(0)]\frac{T}{n}+\sum_{k=1}^{n-1}\mathbb{E}[X(t_0)X(t_k)]\,e^{2\pi i(t_k-t_0)(n-m)/T}\,\frac{T}{n}+\sum_{k=1}^{n-1}\mathbb{E}[X(t_0)X(t_{n-k})]\,e^{2\pi i(t_0-t_{n-k})(n-m)/T}\,\frac{T}{n}\approx 2\pi f(-2\pi(n-m)/T),\tag{8}$$

where we have used the definition

$$f(\lambda)=\frac{1}{2\pi}\int_{-\infty}^{\infty}R(\tau)\,e^{-i\tau\lambda}\,d\tau,$$

with $R$ the autocorrelation function of $\{X(t)\}$. Adapting some well-known arguments for establishing asymptotic equivalence (see, e.g., [5] or [4]), we can prove that for large $T$ and large $n$,

$$\frac{\sum_{m=1}^{n}\log(1+\psi_m)}{T}\approx\frac{\sum_{m=1}^{n}\log(1+\widehat{\psi}_m)}{T}.\tag{9}$$
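The approximation (7) can be checked numerically. The following sketch is ours and not part of the argument: we take a hypothetical stationary input with autocorrelation $R(\tau)=e^{-|\tau|}$, for which $2\pi f(\lambda)=2/(1+\lambda^2)$; the $\gamma_l$ then admit a closed form, and the circulant eigenvalues (6) are obtained by an FFT and compared against $2\pi f(2\pi m/T)$.

```python
import numpy as np

# Hypothetical input: R(tau) = exp(-|tau|); under the convention
# f(lambda) = (1/2pi) int R(tau) e^{-i tau lambda} dtau, this gives
# 2*pi*f(lambda) = 2/(1 + lambda^2).
T, n = 100.0, 2000
d = T / n  # sampling interval T/n

# gamma_l = (n/T) E[ int_0^d X(s) ds * int_{l d}^{(l+1) d} X(s) ds ], in closed form
l = np.arange(n)
gamma = np.empty(n)
gamma[0] = (2.0 / d) * (d - 1.0 + np.exp(-d))
gamma[1:] = (np.exp(d) - 1.0) * (1.0 - np.exp(-d)) / d * np.exp(-l[1:] * d)

# wrap-around coefficients: hat_gamma_k = gamma_k + gamma_{n-k} for k >= 1
hat_gamma = gamma.copy()
hat_gamma[1:] += gamma[:0:-1]

# eigenvalues of the circulant matrix, i.e. the DFT (6) of hat_gamma
psi_hat = np.fft.fft(hat_gamma).real

# compare with 2*pi*f(2*pi*m/T) for 1 <= m < n/2, as predicted by (7)
m = np.arange(1, n // 2)
target = 2.0 / (1.0 + (2.0 * np.pi * m / T) ** 2)
max_err = float(np.max(np.abs(psi_hat[m] - target)))
print(max_err)
```

The reported error shrinks further as $T$ grows and the sampling gets finer, in line with the heuristic.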

Now, collecting all the results above, we conclude that, for appropriately chosen large $T$ and large $n$,

$$\begin{aligned}\frac{1}{T}I(X_0^T;Y_0^T)&\overset{(a)}{\approx}\frac{1}{T}I(X_0^T;Y(\Delta_{T,n}))\\&\overset{(b)}{=}\frac{1}{2T}\log\det(I_n+A_{T,n})\\&\overset{(c)}{=}\frac{\sum_{m=1}^{n}\log(1+\psi_m)}{2T}\\&\overset{(d)}{\approx}\frac{\sum_{m=1}^{n}\log(1+\widehat{\psi}_m)}{2T}\\&\overset{(e)}{\approx}\sum_{m=-n/2}^{n/2}\log\bigl(1+2\pi f(2\pi m/T)\bigr)\frac{1}{2T}\\&\overset{(f)}{\approx}\frac{1}{4\pi}\int_{-\infty}^{\infty}\log\bigl(1+2\pi f(\lambda)\bigr)\,d\lambda,\end{aligned}$$

where (a) follows from Theorem 3.3, (b) follows from (4), (c) follows from (5), (d) follows from (9), (e) follows from (7) and (8), and (f) follows from the definition of the integral, establishing the formula (2).

The above proof is by no means rigorous, but, as elaborated in the next section, a refinement with some elementary $\varepsilon$-$\delta$ arguments and some Fourier analysis will make it so, yielding a rigorous proof of the classical formula (2).
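The whole heuristic chain can also be tested numerically. The sketch below is ours (assuming NumPy and SciPy), again with the hypothetical input $R(\tau)=e^{-|\tau|}$, so $2\pi f(\lambda)=2/(1+\lambda^2)$: the right-hand side of (2) evaluates in closed form to $(\sqrt{3}-1)/2$, using $\int\log\frac{a^2+\lambda^2}{b^2+\lambda^2}\,d\lambda=2\pi(a-b)$, and is compared with $\frac{1}{2T}\log\det(I_n+A_{T,n})$.

```python
import numpy as np
from scipy.linalg import toeplitz

# Hypothetical input: R(tau) = exp(-|tau|), i.e. 2*pi*f(lambda) = 2/(1+lambda^2).
# RHS of (2): (1/4pi) int log(1 + 2/(1+x^2)) dx
#           = (1/4pi) int log((3+x^2)/(1+x^2)) dx = (sqrt(3)-1)/2.
rhs = (np.sqrt(3.0) - 1.0) / 2.0

T, n = 100.0, 2000
d = T / n

# gamma_l = (n/T) E[ int_0^d X(s) ds * int_{l d}^{(l+1) d} X(s) ds ], in closed form
l = np.arange(n)
gamma = np.empty(n)
gamma[0] = (2.0 / d) * (d - 1.0 + np.exp(-d))
gamma[1:] = (np.exp(d) - 1.0) * (1.0 - np.exp(-d)) / d * np.exp(-l[1:] * d)

A = toeplitz(gamma)  # A_{T,n}(i, j) = gamma_{j-i}
_, logdet = np.linalg.slogdet(np.eye(n) + A)
lhs = logdet / (2.0 * T)  # (1/T) * (1/2) log det(I_n + A_{T,n}), cf. (4)
print(lhs, rhs)  # the two numbers should be close
```

The residual gap is attributable to the finite horizon $T$ and the finite sampling rate $n/T$, both of which the rigorous proof sends to infinity.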

## 3 A Rigorous Proof

First of all, we rigorously state our theorem.

###### Theorem 3.1.

Assume that both $R$ and $f$ are Lebesgue integrable over $\mathbb{R}$. Then,

$$\lim_{T\to\infty}\frac{1}{T}\,I(X_0^T;Y_0^T)=\frac{1}{4\pi}\int_{-\infty}^{\infty}\log\bigl(1+2\pi f(\lambda)\bigr)\,d\lambda.\tag{10}$$
###### Remark 3.2.

It is well known that $R$ and $f$ are a Fourier transform pair, and the integrability of one implies that the other is uniformly bounded and uniformly continuous over $\mathbb{R}$. Moreover, it is easy to verify that $f$ is non-negative, and both $R$ and $f$ are symmetric.

We next state the sampling theorem that will be used in our proof; it is a weakened version of a theorem in [15], which holds true in a more general setting where the sampling times may not be evenly spaced and where, moreover, feedback and memory are possibly involved.

###### Theorem 3.3.

For any fixed $T>0$ and any sequence $\{n_k\}$ satisfying $\Delta_{T,n_k}\subseteq\Delta_{T,n_{k+1}}$ for any feasible $k$, we have

$$\lim_{k\to\infty}I(X_0^T;Y(\Delta_{T,n_k}))=I(X_0^T;Y_0^T),$$

where $\Delta_{T,n_k}$ is as defined in Section 2.

We are now ready to give the proof of our main result.

###### Proof of Theorem 3.1.

Our proof consists of the following steps.

Step 1. In this step, we show that both $\|A_{T,n}\|_2$ and $\|\widehat{A}_{T,n}\|_2$ are bounded from above uniformly over all $T$ and $n$; namely, there exists $C>0$ such that $\|A_{T,n}\|_2\le C$ and $\|\widehat{A}_{T,n}\|_2\le C$ for all $T$ and $n$. Here, $\|\cdot\|_2$ denotes the operator norm induced by the $\ell_2$-norm for vectors.

It is straightforward to verify (cf. the corresponding proof in [4]) that

$$\|A_{T,n}\|_2=\sup_{x\in\mathbb{R}^n:\,\|x\|_2=1}xA_{T,n}x^{t}\le\|g_{T,n}\|_\infty,$$

where $\|\cdot\|_\infty$ denotes the $L_\infty$-norm and

$$g_{T,n}(\theta)\triangleq\sum_{l=-(n-1)}^{n-1}\gamma_l\,e^{il\theta}.$$

So, to establish the uniform boundedness of $\|A_{T,n}\|_2$, it suffices to prove that $|g_{T,n}(\theta)|$ is bounded from above uniformly over all $T$, $n$ and $\theta$. Towards this end, we note that for any feasible $l_1\le l_2$,

$$\begin{aligned}\sum_{l=l_1}^{l_2}|\gamma_l|&=\frac{n}{T}\sum_{l=l_1}^{l_2}\Bigl|\,\mathbb{E}\Bigl[\int_{t_0}^{t_1}X(s)\,ds\int_{t_l}^{t_{l+1}}X(s)\,ds\Bigr]\Bigr|\\&=\frac{n}{T}\sum_{l=l_1}^{l_2}\Bigl|\int_{t_0}^{t_1}\int_{t_l}^{t_{l+1}}\mathbb{E}[X(u)X(v)]\,dv\,du\Bigr|\\&\le\frac{n}{T}\int_{t_0}^{t_1}\Bigl(\sum_{l=l_1}^{l_2}\int_{t_l}^{t_{l+1}}|R(v-u)|\,dv\Bigr)du\\&\le\frac{n}{T}\int_{t_0}^{t_1}\Bigl(\int_{t_{l_1}}^{t_{l_2+1}}|R(v-u)|\,dv\Bigr)du\\&\le\frac{n}{T}\int_{t_0}^{t_1}\Bigl(\int_{t_{l_1-1}}^{t_{l_2+1}}|R(\tau)|\,d\tau\Bigr)du\\&=\int_{t_{l_1-1}}^{t_{l_2+1}}|R(\tau)|\,d\tau,\end{aligned}\tag{11}$$

which immediately implies that

$$\sum_{l=-(n-1)}^{n-1}|\gamma_l|\le\int_{-\infty}^{\infty}|R(\tau)|\,d\tau,\tag{12}$$

which, together with (6), further implies that for any $m$,

$$\widehat{\psi}_m\le 2\int_{-\infty}^{\infty}|R(\tau)|\,d\tau.$$

Note that a similar argument as above yields that for all $T$, $n$ and $\theta$,

$$|g_{T,n}(\theta)|=\frac{n}{T}\Bigl|\sum_{l=-(n-1)}^{n-1}\mathbb{E}\Bigl[\int_{t_0}^{t_1}X(s)\,ds\int_{t_l}^{t_{l+1}}X(s)\,ds\Bigr]e^{il\theta}\Bigr|\le\int_{-\infty}^{\infty}|R(\tau)|\,d\tau,$$

which implies the uniform boundedness of $\|A_{T,n}\|_2$, and moreover, together with (6), that of $\|\widehat{A}_{T,n}\|_2$.
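As a numerical illustration of Step 1 (ours, not part of the proof), consider the hypothetical input $R(\tau)=e^{-|\tau|}$, for which $\int|R(\tau)|\,d\tau=2$; the operator norm of $A_{T,n}$ indeed stays below this uniform bound for several choices of $T$ and $n$:

```python
import numpy as np
from scipy.linalg import toeplitz

# Hypothetical input: R(tau) = exp(-|tau|), for which int |R(tau)| dtau = 2,
# so Step 1 predicts ||A_{T,n}||_2 <= 2 uniformly over T and n.
norms = []
for T, n in [(5.0, 50), (20.0, 200), (50.0, 500)]:
    d = T / n
    l = np.arange(n)
    gamma = np.empty(n)
    gamma[0] = (2.0 / d) * (d - 1.0 + np.exp(-d))
    gamma[1:] = (np.exp(d) - 1.0) * (1.0 - np.exp(-d)) / d * np.exp(-l[1:] * d)
    A = toeplitz(gamma)
    # A_{T,n} is symmetric positive semidefinite, so ||A||_2 = lambda_max(A)
    norms.append(float(np.linalg.eigvalsh(A).max()))
print(norms)  # each entry stays below the uniform bound 2
```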

Step 2. In this step, we show that both $\|A_{T,n}\|_F^2/T$ and $\|\widehat{A}_{T,n}\|_F^2/T$ are bounded from above uniformly over all $T$ and $n$. Here, $\|\cdot\|_F$ denotes the Frobenius norm.

To prove the uniform boundedness of $\|A_{T,n}\|_F^2/T$, note that

$$\frac{\|A_{T,n}\|_F^2}{T}=\frac{\sum_{k=-(n-1)}^{n-1}(n-|k|)\gamma_k^2}{T}\le\sum_{k=-(n-1)}^{n-1}(1-|k|/n)\,|\gamma_k|\,\frac{\max_k\{|\gamma_k|\}}{T/n}\le\int_{-\infty}^{\infty}|R(\tau)|\,d\tau\,\|R\|_\infty,$$

where, for the last inequality, we have used (12) and the fact that $\max_k\{|\gamma_k|\}\le(T/n)\|R\|_\infty$.

A similar argument can be used to establish the uniform boundedness of $\|\widehat{A}_{T,n}\|_F^2/T$.

Step 3. In this step, we show that one can first fix a large enough $T$ and then choose a large enough $n$ such that $\|A_{T,n}-\widehat{A}_{T,n}\|_F^2/T$ is arbitrarily small; more precisely, for any $\varepsilon>0$, there exists $T_0>0$ such that for any $T>T_0$, there exists $N$ such that for all $n>N$, $\|A_{T,n}-\widehat{A}_{T,n}\|_F^2/T\le\varepsilon$.

Towards this goal, we first note that

$$\|A_{T,n}-\widehat{A}_{T,n}\|_F^2=\sum_{k=-(n-1)}^{n-1}(n-|k|)(\gamma_k-\widehat{\gamma}_k)^2\le 2\sum_{k=1}^{n-1}(n-k)\gamma_{n-k}^2=2\sum_{k=1}^{n-1}k\gamma_k^2.$$

In light of the integrability of $R$, for any given $\varepsilon'>0$, there exists $\tau_0>0$ such that

$$\int_{\tau_0}^{\infty}|R(\tau)|\,d\tau<\varepsilon'.\tag{13}$$

Now, it can be easily verified that for any given $\varepsilon'>0$, we can first fix a large enough $T$ and then choose a large enough $n$ such that $\lfloor\varepsilon'n\rfloor\,T/n>\tau_0$, and furthermore,

$$\sum_{k=\lfloor\varepsilon'n\rfloor}^{\infty}|\gamma_k|\overset{(a)}{\le}\int_{\tau_0-T/n}^{\infty}|R(\tau)|\,d\tau<\varepsilon'\qquad\text{and}\qquad\sum_{k=1}^{\lfloor\varepsilon'n\rfloor-1}(k/n)\,|\gamma_k|\le\varepsilon',$$

where we have used (11) and (13) in deriving (a). It then follows that, for $T$ and $n$ as above,

$$\begin{aligned}\frac{\|A_{T,n}-\widehat{A}_{T,n}\|_F^2}{T}&\le\Bigl(2\sum_{k=1}^{\lfloor\varepsilon'n\rfloor}(k/n)|\gamma_k|+2\sum_{k=\lfloor\varepsilon'n\rfloor}^{n-1}(k/n)|\gamma_k|\Bigr)\frac{\max_k\{|\gamma_k|\}}{T/n}\\&\le\Bigl(2\sum_{k=1}^{\lfloor\varepsilon'n\rfloor}(k/n)|\gamma_k|+2\sum_{k=\lfloor\varepsilon'n\rfloor}^{\infty}|\gamma_k|\Bigr)\frac{\max_k\{|\gamma_k|\}}{T/n}\\&\le 4\varepsilon'\|R\|_\infty,\end{aligned}$$

establishing Step 3.
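Step 3 can be illustrated numerically (our sketch, with the hypothetical input $R(\tau)=e^{-|\tau|}$): building $A_{T,n}$ and $\widehat{A}_{T,n}$ explicitly confirms the identity $\|A_{T,n}-\widehat{A}_{T,n}\|_F^2=2\sum_{k=1}^{n-1}k\gamma_k^2$ used above, and shows that the normalized difference is small:

```python
import numpy as np
from scipy.linalg import toeplitz, circulant

# Hypothetical input: R(tau) = exp(-|tau|); build A_{T,n} and hat A_{T,n}
T, n = 50.0, 1000
d = T / n
l = np.arange(n)
gamma = np.empty(n)
gamma[0] = (2.0 / d) * (d - 1.0 + np.exp(-d))
gamma[1:] = (np.exp(d) - 1.0) * (1.0 - np.exp(-d)) / d * np.exp(-l[1:] * d)

A = toeplitz(gamma)
hat_gamma = gamma.copy()
hat_gamma[1:] += gamma[:0:-1]   # hat_gamma_k = gamma_k + gamma_{n-k}
A_hat = circulant(hat_gamma)    # symmetric, since hat_gamma_k = hat_gamma_{n-k}

frob2 = float(np.sum((A - A_hat) ** 2)) / T               # ||A - hat A||_F^2 / T
closed = 2.0 * float(np.sum(l[1:] * gamma[1:] ** 2)) / T  # 2 sum_k k gamma_k^2 / T
print(frob2, closed)  # the two agree, and both are small
```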

Step 4. In this step, fixing a polynomial $p$, we show that for any $\varepsilon>0$, there exists $T_0>0$ such that for any $T>T_0$, there exists $N$ such that for all $n>N$,

$$\Bigl|\frac{\sum_{m=1}^{n}p(\psi_m)}{T}-\frac{\sum_{m=1}^{n}p(\widehat{\psi}_m)}{T}\Bigr|\le\varepsilon.$$

To achieve this goal, it suffices to prove that, for any fixed $k\in\mathbb{N}$ and any $\varepsilon>0$, one can first fix a large enough $T$ and then choose a large enough $n$ such that

$$\Bigl|\frac{\sum_{m=1}^{n}\psi_m^{k}}{T}-\frac{\sum_{m=1}^{n}\widehat{\psi}_m^{k}}{T}\Bigr|\le\varepsilon,$$

which is equivalent to

$$\Bigl|\frac{\operatorname{tr}\bigl(A_{T,n}^{k}-\widehat{A}_{T,n}^{k}\bigr)}{T}\Bigr|\le\varepsilon.$$

First of all, we note that

$$A_{T,n}^{k}-\widehat{A}_{T,n}^{k}=\bigl(A_{T,n}^{k}-A_{T,n}^{k-1}\widehat{A}_{T,n}\bigr)+\bigl(A_{T,n}^{k-1}\widehat{A}_{T,n}-A_{T,n}^{k-2}\widehat{A}_{T,n}^{2}\bigr)+\cdots+\bigl(A_{T,n}\widehat{A}_{T,n}^{k-1}-\widehat{A}_{T,n}^{k}\bigr).$$

For the first term, using the well-known facts that for any two compatible matrices $E_1$, $E_2$,

$$\bigl(\operatorname{tr}(E_1E_2)\bigr)^2\le\|E_1\|_F^2\,\|E_2\|_F^2,\qquad\|E_1E_2\|_F^2\le\|E_1\|_2^2\,\|E_2\|_F^2,$$

we deduce that

$$\begin{aligned}\Bigl(\frac{\operatorname{tr}\bigl(A_{T,n}^{k}-A_{T,n}^{k-1}\widehat{A}_{T,n}\bigr)}{T}\Bigr)^2&=\Bigl(\frac{\operatorname{tr}\bigl(A_{T,n}^{k-1}(A_{T,n}-\widehat{A}_{T,n})\bigr)}{T}\Bigr)^2\\&\le\frac{\|A_{T,n}^{k-1}\|_F^2\,\|A_{T,n}-\widehat{A}_{T,n}\|_F^2}{T^2}\\&\le\|A_{T,n}\|_2^{2(k-2)}\,\frac{\|A_{T,n}\|_F^2}{T}\,\frac{\|A_{T,n}-\widehat{A}_{T,n}\|_F^2}{T}.\end{aligned}$$

It then follows from Steps 1, 2 and 3 that for any $\varepsilon'>0$, one can first fix a large enough $T$ and then choose a large enough $n$ such that

$$\Bigl|\frac{\operatorname{tr}\bigl(A_{T,n}^{k}-A_{T,n}^{k-1}\widehat{A}_{T,n}\bigr)}{T}\Bigr|<\varepsilon'.$$

A completely parallel argument establishes the same bound for all the other terms, which in turn implies our goal in this step.
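The conclusion of Step 4 can likewise be probed numerically (our sketch, with the hypothetical input $R(\tau)=e^{-|\tau|}$ and $k=3$): the normalized traces of $A_{T,n}^3$ and $\widehat{A}_{T,n}^3$ nearly coincide for large $T$ and fine sampling.

```python
import numpy as np
from scipy.linalg import toeplitz, circulant

# Hypothetical input: R(tau) = exp(-|tau|); compare tr(A^3)/T with tr(hat A^3)/T
T, n = 100.0, 1000
d = T / n
l = np.arange(n)
gamma = np.empty(n)
gamma[0] = (2.0 / d) * (d - 1.0 + np.exp(-d))
gamma[1:] = (np.exp(d) - 1.0) * (1.0 - np.exp(-d)) / d * np.exp(-l[1:] * d)

A = toeplitz(gamma)
hat_gamma = gamma.copy()
hat_gamma[1:] += gamma[:0:-1]
A_hat = circulant(hat_gamma)

tr_A3 = float(np.trace(A @ A @ A)) / T
tr_Ah3 = float(np.trace(A_hat @ A_hat @ A_hat)) / T
print(tr_A3, tr_Ah3)  # nearly equal normalized traces
```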

Step 5. In this step, we finish the proof of the theorem. First of all, let $\{\varepsilon_k\}$ be a monotone decreasing sequence of positive real numbers converging to $0$. We then arbitrarily choose a monotone increasing sequence $\{T_k\}$ of positive real numbers diverging to infinity, and, applying Theorem 3.3, choose $n_k$ for each $k$ such that

$$\Bigl|\frac{1}{T_k}I(X_0^{T_k};Y_0^{T_k})-\frac{1}{T_k}I(X_0^{T_k};Y(\Delta_{T_k,n_k}))\Bigr|\le\varepsilon_k.\tag{14}$$

Then, applying the Weierstrass approximation theorem to the continuous function $\log(1+x)$, we choose two polynomials $p_1^{(k)}$ and $p_2^{(k)}$ such that for all $x\in[0,C]$ (with $C$ as in Step 1),

$$p_1^{(k)}(x)\le\log(1+x)\le p_2^{(k)}(x)\qquad\text{and}\qquad p_2^{(k)}(x)-p_1^{(k)}(x)\le\varepsilon_k x,\tag{15}$$

which imply that

$$\frac{\sum_{m=1}^{n_k}p_1^{(k)}(\psi_m)}{T_k}\le\frac{\sum_{m=1}^{n_k}\log(1+\psi_m)}{T_k}\le\frac{\sum_{m=1}^{n_k}p_2^{(k)}(\psi_m)}{T_k},\tag{16}$$

$$\frac{\sum_{m=1}^{n_k}p_1^{(k)}(\widehat{\psi}_m)}{T_k}\le\frac{\sum_{m=1}^{n_k}\log(1+\widehat{\psi}_m)}{T_k}\le\frac{\sum_{m=1}^{n_k}p_2^{(k)}(\widehat{\psi}_m)}{T_k}.\tag{17}$$

Re-choosing a larger $T_k$ first and then a larger $n_k$ if necessary, we have, by Step 4,

$$\lim_{k\to\infty}\frac{\sum_{m=1}^{n_k}p_1^{(k)}(\psi_m)}{T_k}=\lim_{k\to\infty}\frac{\sum_{m=1}^{n_k}p_1^{(k)}(\widehat{\psi}_m)}{T_k},\qquad\lim_{k\to\infty}\frac{\sum_{m=1}^{n_k}p_2^{(k)}(\psi_m)}{T_k}=\lim_{k\to\infty}\frac{\sum_{m=1}^{n_k}p_2^{(k)}(\widehat{\psi}_m)}{T_k}.\tag{18}$$

Now, as elaborated in Appendix A, one can show that (again re-choosing $T_k$ and $n_k$ for each $k$ if necessary)

$$\lim_{k\to\infty}\frac{\sum_{m=1}^{n_k}p_1^{(k)}(\widehat{\psi}_m)}{T_k}=\lim_{k\to\infty}\frac{1}{2\pi}\int p_1^{(k)}(2\pi f(x))\,dx,\qquad\lim_{k\to\infty}\frac{\sum_{m=1}^{n_k}p_2^{(k)}(\widehat{\psi}_m)}{T_k}=\lim_{k\to\infty}\frac{1}{2\pi}\int p_2^{(k)}(2\pi f(x))\,dx.\tag{19}$$

Moreover, from (15), we deduce that

$$\int\Bigl(p_2^{(k)}(2\pi f(x))-p_1^{(k)}(2\pi f(x))\Bigr)dx\le 2\pi\varepsilon_k\int f(x)\,dx.$$

This, together with the integrability of $f$, implies that

$$\lim_{k\to\infty}\int p_1^{(k)}(2\pi f(x))\,dx=\lim_{k\to\infty}\int p_2^{(k)}(2\pi f(x))\,dx,\tag{20}$$

which, together with (16), (17), (18) and (19), implies that

$$\lim_{k\to\infty}\frac{\sum_{m=1}^{n_k}\log(1+\psi_m)}{T_k}=\lim_{k\to\infty}\frac{\sum_{m=1}^{n_k}\log(1+\widehat{\psi}_m)}{T_k}=\frac{1}{2\pi}\lim_{k\to\infty}\int p_1^{(k)}(2\pi f(x))\,dx.\tag{21}$$

Finally, similarly as in the derivation of (20), using (15) and the integrability of $f$, we conclude that

$$\lim_{k\to\infty}\int p_1^{(k)}(2\pi f(x))\,dx=\int\log\bigl(1+2\pi f(x)\bigr)\,dx,$$

which, together with (21), (5) and (14), implies that

$$\lim_{k\to\infty}\frac{1}{T_k}I(X_0^{T_k};Y_0^{T_k})=\frac{1}{4\pi}\int_{-\infty}^{\infty}\log\bigl(1+2\pi f(\lambda)\bigr)\,d\lambda.$$

The theorem then immediately follows from a typical subsequence argument, as desired.

## 4 Concluding Remarks

Some remarks about the approach employed in this work are in order.

First, echoing [15], we emphasize that time sampling, the key ingredient in our approach, preserves causality in converting a continuous-time Gaussian channel to its discrete-time versions, in contrast to the orthogonal expansion representations in some previous approaches, which destroy temporal causality in the conversion process.

More technically, we note that our proof of Theorem 3.1 has actually established that one can appropriately "scale" $T_k$ and $n_k$, with $T_k/n_k$ shrinking to $0$ as $k$ tends to infinity (i.e., the sampling gets finer), such that

$$\lim_{k\to\infty}\frac{\log\det\bigl(I_{n_k}+A_{T_k,n_k}\bigr)}{T_k}=\frac{1}{2\pi}\int\log\bigl(1+2\pi f(x)\bigr)\,dx.$$

Such a result can be regarded as a "scalable" version of the Szegő theorem, which seems to serve as a bridge connecting the discrete-time and continuous-time Szegő theorems.

As argued above, we believe that, besides recovering a classical information-theoretic formula with an elementary proof, our approach promises further applications in more general settings, including possible extensions of the formula (2) to continuous-time Gaussian channels with feedback and memory [11], with multiple users [15], or with multiple inputs and outputs [1].

Acknowledgement. We would like to thank Professor Shunsuke Ihara for insightful discussions and for pointing out relevant references.

## Appendix A Proof of (19)

To prove (19), it suffices to prove that for any $q\in\mathbb{N}$,

$$\lim_{k\to\infty}\frac{\sum_{m=1}^{n_k}\widehat{\psi}_m^{\,q}}{T_k}=\frac{1}{2\pi}\int(2\pi f(x))^{q}\,dx.\tag{22}$$

For illustrative purposes, we now prove (22) in great detail for the case $q=2$. First of all, by (6), we have, for any $m$,

$$\widehat{\psi}_m^{\,2}=\sum_{j_1=0}^{n_k-1}\sum_{j_2=0}^{n_k-1}\widehat{\gamma}_{j_1}\widehat{\gamma}_{j_2}\,e^{-2\pi im(j_1+j_2)/n_k},\tag{23}$$

and furthermore,

$$\begin{aligned}\sum_{m=1}^{n_k}\widehat{\psi}_m^{\,2}&=\sum_{m=1}^{n_k}\sum_{j_1=0}^{n_k-1}\sum_{j_2=0}^{n_k-1}\widehat{\gamma}_{j_1}\widehat{\gamma}_{j_2}\,e^{-2\pi im(j_1+j_2)/n_k}\\&=\sum_{j_1=0}^{n_k-1}\sum_{j_2=0}^{n_k-1}\widehat{\gamma}_{j_1}\widehat{\gamma}_{j_2}\sum_{m=1}^{n_k}e^{-2\pi im(j_1+j_2)/n_k}\\&\overset{(a)}{=}n_k\Bigl(\widehat{\gamma}_0^{\,2}+\sum_{l=1}^{n_k-1}\widehat{\gamma}_l\,\widehat{\gamma}_{n_k-l}\Bigr)\\&=n_k\Bigl(\sum_{l=-(n_k-1)}^{n_k-1}\gamma_l^2+2\sum_{l=1}^{n_k-1}\gamma_l\,\gamma_{n_k-l}\Bigr),\end{aligned}$$

where, for (a), we have used the easily verifiable fact that $\sum_{m=1}^{n_k}e^{-2\pi im(j_1+j_2)/n_k}$ is equal to $n_k$ if $j_1+j_2$ is equal to $0$ or $n_k$, and is equal to $0$ otherwise. Noting that

$$\gamma_l=\mathbb{E}\Bigl[\frac{n_k}{T_k}\int_{t_0}^{t_1}\int_{t_l}^{t_{l+1}}X(s)X(s')\,ds\,ds'\Bigr]=\frac{n_k}{T_k}\int_{t_0}^{t_1}\int_{t_l}^{t_{l+1}}R(s'-s)\,ds\,ds',$$

with a routine continuity argument using the definition of the integral, we arrive at

$$\lim_{k\to\infty}\frac{n_k\sum_{l=-(n_k-1)}^{n_k-1}\gamma_l^2}{T_k}=\lim_{k\to\infty}\int_{-T_k}^{T_k}R^2(s)\,ds=\int_{-\infty}^{\infty}R(s)R(-s)\,ds=\frac{1}{2\pi}\int(2\pi f(\lambda))^2\,d\lambda,\tag{24}$$

where we have used the uniform boundedness and uniform continuity of $R$ for the first equality, and the last equality follows from the fact that $R$ and $f$ are a Fourier transform pair. Moreover, using the absolute summability of $\{\gamma_l\}$ and the fact that $R(\tau)\to 0$ as $\tau\to\infty$ (which follows from the Riemann-Lebesgue lemma), we have

$$\lim_{k\to\infty}\frac{n_k\bigl(2\gamma_1\gamma_{n_k-1}+\cdots+2\gamma_{n_k-1}\gamma_1\bigr)}{T_k}=0.\tag{25}$$

It then follows from (24) and (25) that

$$\lim_{k\to\infty}\frac{\sum_{m=1}^{n_k}\widehat{\psi}_m^{\,2}}{T_k}=\frac{1}{2\pi}\int(2\pi f(\lambda))^2\,d\lambda,$$

which is (22) for the case $q=2$.
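As a numerical sanity check of (22) for $q=2$ (ours, not part of the paper), take the hypothetical input $R(\tau)=e^{-|\tau|}$: the right-hand side equals $\int R^2(s)\,ds=\int e^{-2|s|}\,ds=1$, and the normalized second moment of the circulant eigenvalues indeed approaches $1$:

```python
import numpy as np

# Hypothetical input: R(tau) = exp(-|tau|); RHS of (22) for q = 2 equals
# int R(s)^2 ds = int exp(-2|s|) ds = 1.
T, n = 100.0, 2000
d = T / n
l = np.arange(n)
gamma = np.empty(n)
gamma[0] = (2.0 / d) * (d - 1.0 + np.exp(-d))
gamma[1:] = (np.exp(d) - 1.0) * (1.0 - np.exp(-d)) / d * np.exp(-l[1:] * d)

hat_gamma = gamma.copy()
hat_gamma[1:] += gamma[:0:-1]
psi_hat = np.fft.fft(hat_gamma).real   # circulant eigenvalues, as in (6)

moment = float(np.sum(psi_hat ** 2)) / T
print(moment)  # approaches 1 as T grows and the sampling gets finer
```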