# Introducing the Perception-Distortion Tradeoff into the Rate-Distortion Theory of General Information Sources

Blau and Michaeli recently introduced a novel concept for inverse problems in signal processing: the perception-distortion tradeoff. We introduce their tradeoff into the rate-distortion theory of lossy source coding in information theory, and clarify the tradeoff among information rate, distortion, and perception for general information sources.

## Authors

Ryutaroh Matsumoto

## 1 Introduction

An inverse problem in signal processing is to reconstruct the original information from a degraded version of it. Such problems are not limited to image processing, but they arise there frequently. When a natural image is reconstructed, the reconstructed image sometimes does not look natural even though it is close to the original under a reasonable metric, for example the mean squared error. It is often believed that a reconstruction close to the original should also look natural.

Blau and Michaeli [1] questioned this unproven belief. In their research [1], they mathematically formulated the naturalness of the reconstructed information as a distance between the probability distributions of the reconstructed information and the original information. The reasoning behind this is that the perceptual quality of a reconstruction method is often evaluated by how often a human observer can distinguish an output of the reconstruction method from natural ones. Such a subjective evaluation can mathematically be modeled as hypothesis testing [1]. A reconstructed image is more easily distinguished as the variational distance $\sigma(P_{\hat{X}}, P_X)$ increases [1], where $P_{\hat{X}}$ is the probability distribution of the reconstructed information and $P_X$ is that of the natural one. They regard the perceptual quality of a reconstruction as the distance between $P_{\hat{X}}$ and $P_X$. The distance between the reconstructed information and the original information is conventionally called distortion. They discovered that there exists a tradeoff between perceptual quality and distortion, and named it the perception-distortion tradeoff.

Claude Shannon [2, Chapter 5] initiated the rate-distortion theory in the 1950s. It clarifies the tradeoff between information rate and distortion in lossy source coding (lossy data compression). The rate-distortion theory has served as a theoretical foundation of image coding for the past several decades, and drawing a rate-distortion curve is common practice in research articles on image coding. Since distortion and perceptual quality are now considered two different things, it is natural to consider a tradeoff among information rate, distortion, and perceptual quality. Blau and Michaeli [1] briefly mentioned the rate-distortion theory, but they did not clarify the tradeoff among the three.

The purpose of this letter is to mathematically define the tradeoff for general information sources, and to express the tradeoff in terms of information-spectral quantities introduced by Han and Verdú [2]. It should be noted that the tradeoff among the three quantities can be regarded as a combination of the lossy source coding problem [2, Chapter 5] and the random number generation problem [2, Chapter 2], both of which will be used to derive the tradeoff.

Since the length limitation of this journal is strict, citations to the original papers are replaced by citations to the textbook [2], and the mathematical proof is somewhat compressed. The author begs the readers' kind understanding. The base of $\log$ is an arbitrarily fixed real number unless otherwise stated.

## 2 Preliminaries

The following definitions are borrowed from Han's textbook [2]. Let

$$\mathbf{X} = \left\{X^n = \left(X^{(n)}_1, \ldots, X^{(n)}_n\right)\right\}_{n=1}^{\infty}$$

be a general information source, where the alphabet of the random variable $X^n$ is the $n$-th Cartesian product $\mathcal{X}^n$ of some finite alphabet $\mathcal{X}$. For a sequence of real-valued random variables $Z_1$, $Z_2$, … we define

$$\text{p-}\limsup_{n\to\infty} Z_n = \inf\left\{\alpha \,\middle|\, \lim_{n\to\infty} \Pr[Z_n > \alpha] = 0\right\}.$$
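As a numerical illustration of $\text{p-}\limsup$ (not taken from the letter; the sampling parameters are arbitrary), let $Z_n$ be the empirical mean of $n$ i.i.d. Bernoulli($0.3$) variables. By the law of large numbers $\Pr[Z_n > \alpha] \to 0$ precisely for $\alpha > 0.3$, so $\text{p-}\limsup_{n\to\infty} Z_n = 0.3$:

```python
import random

def exceed_prob(alpha, n, trials=500, p=0.3, seed=1):
    """Monte Carlo estimate of Pr[Z_n > alpha], where Z_n is the
    empirical mean of n i.i.d. Bernoulli(p) random variables."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        z = sum(rng.random() < p for _ in range(n)) / n
        if z > alpha:
            hits += 1
    return hits / trials

# Pr[Z_n > alpha] decays for alpha > 0.3 and tends to 1 for alpha < 0.3,
# so the p-limsup of Z_n equals 0.3.
print(exceed_prob(0.35, n=1000))  # near 0
print(exceed_prob(0.25, n=1000))  # near 1
```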

For two general information sources $\mathbf{X}$ and $\mathbf{Y}$ we define

$$\overline{I}(\mathbf{X};\mathbf{Y}) = \text{p-}\limsup_{n\to\infty} \frac{1}{n} \log \frac{P_{X^n Y^n}(X^n, Y^n)}{P_{X^n}(X^n)\, P_{Y^n}(Y^n)},$$

and

$$F_{\mathbf{X}}(R) = \limsup_{n\to\infty} \Pr\left[\frac{1}{n} \log \frac{1}{P_{X^n}(X^n)} \ge R\right].$$

For two distributions $P$ and $Q$ on an alphabet $\mathcal{X}$, we define the variational distance as $\sigma(P, Q) = \frac{1}{2} \sum_{x \in \mathcal{X}} |P(x) - Q(x)|$. In the rate-distortion theory, we usually assume a reconstruction alphabet different from the source alphabet. In order to consider the distributional similarity of reconstructions, in this letter we assume $\mathcal{X}^n$ as both the source and reconstruction alphabets.
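As a quick sketch (assuming the normalization $\sigma(P, Q) = \frac{1}{2} \sum_{x} |P(x) - Q(x)|$; the function name is illustrative), the variational distance between the two binary distributions used in Section 3 can be computed directly:

```python
def variational_distance(P, Q):
    """Variational distance sigma(P, Q) = (1/2) * sum_x |P(x) - Q(x)|
    between two distributions given as dicts on a common finite alphabet."""
    support = set(P) | set(Q)
    return 0.5 * sum(abs(P.get(x, 0.0) - Q.get(x, 0.0)) for x in support)

P = {0: 1/2, 1: 1/2}   # uniform binary distribution (Section 3)
Q = {0: 1/4, 1: 3/4}   # biased binary distribution (Section 3)
print(variational_distance(P, Q))  # 0.25
```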

An encoder of length $n$ is a mapping $f_n : \mathcal{X}^n \to \{1, \ldots, M_n\}$, and the corresponding decoder of length $n$ is a mapping $g_n : \{1, \ldots, M_n\} \to \mathcal{X}^n$. $\delta_n : \mathcal{X}^n \times \mathcal{X}^n \to [0, \infty)$ is a general distortion function with the assumption $\delta_n(x^n, x^n) = 0$ for all $n$ and $x^n \in \mathcal{X}^n$.

###### Definition 1

A triple $(R, D, S)$ is said to be achievable if there exists a sequence of encoders $f_n$ and decoders $g_n$, $n = 1, 2, \ldots$, such that

$$\limsup_{n\to\infty} \frac{\log M_n}{n} \le R, \tag{1}$$

$$\text{p-}\limsup_{n\to\infty} \frac{1}{n} \delta_n\bigl(X^n, g_n(f_n(X^n))\bigr) \le D, \tag{2}$$

$$\limsup_{n\to\infty} \sigma\bigl(P_{g_n(f_n(X^n))}, P_{X^n}\bigr) \le S. \tag{3}$$

Define the function $R(D, S)$ by

$$R(D, S) = \inf\{R \mid (R, D, S) \text{ is achievable}\}.$$
###### Theorem 2
$$R(D, S) = \max\left\{\inf_{\mathbf{Y}} \overline{I}(\mathbf{X};\mathbf{Y}),\ \inf\{R \mid F_{\mathbf{X}}(R) \le S\}\right\},$$

where the infimum $\inf_{\mathbf{Y}}$ is taken with respect to all general information sources $\mathbf{Y}$ satisfying

$$\text{p-}\limsup_{n\to\infty} \frac{1}{n} \delta_n(X^n, Y^n) \le D. \tag{4}$$

Proof: Let a pair of encoder $f_n$ and decoder $g_n$ satisfy Eqs. (1)–(3). Then by [2, Theorem 5.4.1] we have

$$R \ge \inf_{\mathbf{Y}} \overline{I}(\mathbf{X};\mathbf{Y}), \tag{5}$$

where $\mathbf{Y}$ satisfies Eq. (4). On the other hand, the decoder $g_n$ can be viewed as a random number generator to $\mathcal{X}^n$ from the alphabet $\{1, \ldots, M_n\}$. By [2, converse part of the proof of Theorem 2.4.1] we have

$$R \ge \inf\{R \mid F_{\mathbf{X}}(R) \le S\}. \tag{6}$$

This completes the converse part of the proof.

We start the direct part of the proof. Assume that a triple $(R, D, S)$ satisfies Eqs. (5) and (6). Let $M_n$ satisfy

$$\limsup_{n\to\infty} \frac{1}{n} \log M_n \le R. \tag{7}$$

Let $f^{(1)}_n$ and $g^{(1)}_n$ be an encoder and a decoder constructed in [2, Lemma 1.3.1] with codebook $\{1, \ldots, M_n\}$. Let $f^{(2)}_n$ and $g^{(2)}_n$ be an encoder and a decoder constructed in [2, Theorem 5.4.1] with codebook $\{M_n + 1, \ldots, 2M_n\}$. Assume that we have a source sequence $x^n$. If $\frac{1}{n} \log \frac{1}{P_{X^n}(x^n)} < \frac{1}{n} \log M_n$ then let $f^{(1)}_n(x^n) \in \{1, \ldots, M_n\}$ be the codeword. Otherwise let $f^{(2)}_n(x^n) \in \{M_n + 1, \ldots, 2M_n\}$ be the codeword. Let $f_n$ denote the above encoding process. At the receiver of a codeword $m$, if $m \le M_n$ then decode $m$ by $g^{(1)}_n$, otherwise decode $m$ by $g^{(2)}_n$. Denote the above decoding process by $g_n$.

If $f^{(1)}_n$ and $g^{(1)}_n$ are used then the source sequence is reconstructed by the receiver without error by [2, Lemma 1.3.1] and we have $\delta_n(X^n, g_n(f_n(X^n))) = 0$. The probability $\epsilon_n$ of $f^{(1)}_n$ and $g^{(1)}_n$ not being used is

$$\epsilon_n \le \Pr\left[\frac{1}{n} \log \frac{1}{P_{X^n}(X^n)} \ge \frac{1}{n} \log M_n\right].$$

Combined with the assumption Eq. (6) and Eq. (7), we obtain

$$\limsup_{n\to\infty} \epsilon_n \le S,$$

which implies Eq. (3).

On the other hand, $f^{(2)}_n$ and $g^{(2)}_n$ satisfy Eq. (2), so the combined encoder $f_n$ and decoder $g_n$ also satisfy Eq. (2). The information rate of $f_n$ is at most $\frac{1}{n} \log 2M_n$, which implies that Eq. (1) holds with the constructed $f_n$ and $g_n$. This completes the direct part of the proof.

## 3 Example with a mixed information source

A typical example of a non-ergodic general information source is a mixed information source [2, Section 1.4]. Since Theorem 2 is a bit abstract, we explicitly compute $R(D, S)$ for a mixed information source. Let $\mathcal{X} = \{0, 1\}$, and let $\delta_n(x^n, y^n)$ be the Hamming distance between $x^n$ and $y^n$. Consider two distributions $P$ and $Q$ on $\mathcal{X}$ defined by

$$P(0) = 1/2,\quad P(1) = 1/2,\qquad Q(0) = 1/4,\quad Q(1) = 3/4.$$

For $x^n = (x_1, \ldots, x_n) \in \mathcal{X}^n$, in our mixed information source we have

$$\Pr[X^n = x^n] = \frac{1}{2} \prod_{i=1}^{n} P(x_i) + \frac{1}{2} \prod_{i=1}^{n} Q(x_i).$$
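The mixture above is itself a valid probability distribution on $\mathcal{X}^n$ for every $n$, as a short enumeration confirms (a sketch outside the letter; the helper names are illustrative):

```python
from itertools import product

def P(b):
    """Component 1: uniform binary distribution."""
    return 0.5

def Q(b):
    """Component 2: biased binary distribution, Q(0)=1/4, Q(1)=3/4."""
    return 0.25 if b == 0 else 0.75

def mixed_prob(xn):
    """Pr[X^n = x^n] = (1/2) prod_i P(x_i) + (1/2) prod_i Q(x_i)."""
    p = q = 1.0
    for b in xn:
        p *= P(b)
        q *= Q(b)
    return 0.5 * p + 0.5 * q

n = 6
total = sum(mixed_prob(xn) for xn in product((0, 1), repeat=n))
print(total)  # sums to 1: the mixture is a valid distribution
```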

By [2, Theorem 5.8.1, Example 5.8.1 and Theorem 5.10.1] we see that

$$R \ge \inf_{\mathbf{Y}:\ \text{Eq.\ (4) holds}} \overline{I}(\mathbf{X};\mathbf{Y})$$

if and only if

$$R \ge h(1/2) - h(D), \tag{8}$$

where $h$ is the binary entropy function $h(x) = -x \log x - (1 - x) \log(1 - x)$.
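With base-2 logarithms, $h(1/2) = 1$, so the bound of Eq. (8) reads $1 - h(D)$. A small sketch evaluating these quantities (the function name is illustrative):

```python
import math

def h(x, base=2.0):
    """Binary entropy h(x) = -x log x - (1-x) log(1-x), with h(0)=h(1)=0."""
    if x in (0.0, 1.0):
        return 0.0
    return -(x * math.log(x, base) + (1 - x) * math.log(1 - x, base))

print(h(0.5))            # 1.0 in base 2, i.e. h(1/2)
print(h(0.25))           # ~0.8113, i.e. h(1/4)
print(h(0.5) - h(0.11))  # bound of Eq. (8) at D = 0.11, ~0.5
```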

On the other hand, by [2, Example 1.6.1], we have

$$F_{\mathbf{X}}(R) = \begin{cases} 1 & \text{if } R < h(1/4), \\ 1/2 & \text{if } h(1/4) \le R < h(1/2), \\ 0 & \text{if } h(1/2) \le R. \end{cases}$$

By the above formulas and assuming the base of $\log$ to be 2 (so that $h(1/2) = 1$), we can see

$$R(D, S) = \begin{cases} 1 & \text{if } S = 0, \\ \max\{h(1/4),\, 1 - h(D)\} & \text{if } 0 < S < 1/2, \\ 1 - h(D) & \text{if } 1/2 \le S. \end{cases}$$

## Acknowledgments

The author would like to thank Dr. Tetsunao Matsuta for the helpful discussions.