Introducing the Perception-Distortion Tradeoff into the Rate-Distortion Theory of General Information Sources

08/24/2018 · by Ryutaroh Matsumoto

Blau and Michaeli recently introduced a novel concept for inverse problems in signal processing: the perception-distortion tradeoff. We introduce their tradeoff into the rate-distortion theory of lossy source coding in information theory, and clarify the tradeoff among information rate, distortion, and perception for general information sources.


References

[1] Y. Blau and T. Michaeli, "The perception-distortion tradeoff," Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6228–6237, 2018.

[2] T. S. Han, Information-Spectrum Methods in Information Theory, Springer, Berlin, 2003.

1 Introduction

An inverse problem in signal processing is to reconstruct the original information from a degraded version of it. Such problems are not limited to images, but they arise most frequently in image processing. When a natural image is reconstructed, the reconstructed image sometimes does not look natural even though it is close to the original by a reasonable metric, for example, mean squared error. Conversely, it is often believed that a reconstruction close to the original should also look natural.

Blau and Michaeli [1] questioned this unproven belief. In their research [1], they mathematically formulated the naturalness of the reconstructed information as a distance between the probability distributions of the reconstructed information and the original information. The reasoning behind this is that the perceptual quality of a reconstruction method is often evaluated by how well a human observer can distinguish outputs of the reconstruction method from natural ones. Such a subjective evaluation can be modeled mathematically as hypothesis testing [1]. A reconstructed image is more easily distinguished from natural ones as the variational distance $d(P_X, P_{\hat{X}})$ increases [1], where $P_{\hat{X}}$ is the probability distribution of the reconstructed information and $P_X$ is that of the natural one. They therefore regard the perceptual quality of a reconstruction method as a distance between $P_{\hat{X}}$ and $P_X$. The distance between the reconstructed information and the original information is conventionally called distortion. They discovered that there exists a tradeoff between perceptual quality and distortion, and named it the perception-distortion tradeoff.
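As a small numerical illustration (not from the letter), the following sketch checks that the best equal-prior hypothesis test distinguishing two distributions succeeds with probability exactly $1/2 + d(P, Q)/4$, where $d$ is the (unnormalized) variational distance used throughout this letter. The distributions P and Q below are hypothetical.

    # Minimal sketch (not from the letter): the Bayes-optimal equal-prior test
    # that decides whether a sample came from P or Q succeeds with probability
    # 1/2 + d(P, Q)/4, where d(P, Q) = sum_a |P(a) - Q(a)| is the variational
    # distance used in this letter.  P and Q are illustrative.

    P = {0: 0.7, 1: 0.2, 2: 0.1}   # "natural" distribution (hypothetical)
    Q = {0: 0.4, 1: 0.4, 2: 0.2}   # "reconstructed" distribution (hypothetical)

    d = sum(abs(P[a] - Q[a]) for a in P)             # variational distance

    # Bayes-optimal test with equal priors: guess P whenever P(a) >= Q(a).
    success = sum(0.5 * max(P[a], Q[a]) for a in P)  # average success probability

    print(f"d(P, Q) = {d:.3f}")
    print(f"optimal test success = {success:.3f}")
    print(f"1/2 + d/4            = {0.5 + d / 4:.3f}")  # matches `success`

Thus a larger variational distance means a more easily distinguished, i.e., less natural-looking, reconstruction.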

Claude Shannon [2, Chapter 5] initiated the rate-distortion theory in the 1950s. It clarifies the tradeoff between information rate and distortion in lossy source coding (lossy data compression). The rate-distortion theory has served as a theoretical foundation of image coding for the past several decades, as drawing a rate-distortion curve is a common practice in research articles on image coding. Since distortion and perceptual quality are now considered two different things, it is natural to consider a tradeoff among information rate, distortion, and perceptual quality. Blau and Michaeli [1] briefly mentioned the rate-distortion theory, but they did not clarify the tradeoff among the three.

The purpose of this letter is to mathematically define this tradeoff for general information sources, and to express it in terms of the information-spectrum quantities introduced by Han and Verdú [2]. It should be noted that the tradeoff among the three quantities can be regarded as a combination of the lossy source coding problem [2, Chapter 5] and the random number generation problem [2, Chapter 2], both of which will be used to derive the tradeoff.

Since the length limitation of this journal is strict, citations to the original papers are replaced by citations to the textbook [2], and the mathematical proof is somewhat compressed. The author begs the readers' kind understanding. The base of $\log$ is an arbitrarily fixed real number unless otherwise stated.

2 Preliminaries

The following definitions are borrowed from Han's textbook [2]. Let $\mathbf{X} = \{X^n\}_{n=1}^{\infty}$ be a general information source, where the alphabet of the random variable $X^n$ is the $n$-th Cartesian product $\mathcal{X}^n$ of some finite alphabet $\mathcal{X}$. For a sequence of real-valued random variables $Z_1$, $Z_2$, ..., we define

$$\operatorname*{p-limsup}_{n\to\infty} Z_n = \inf\left\{\alpha : \lim_{n\to\infty} \Pr[Z_n > \alpha] = 0\right\}.$$
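The following Monte Carlo sketch (not from the letter) illustrates this quantity for the mixture of two Bernoulli sources used in Section 3: the normalized self-information $(1/n)\log(1/P_{X^n}(X^n))$ clusters around two values, and the p-limsup picks the larger one. The parameters p1, p2, alpha are illustrative.

    import numpy as np

    # Sketch (not from the letter): for a mixture of two Bernoulli sources, the
    # normalized self-information (1/n) log2(1/P_{X^n}(X^n)) concentrates around
    # the two values H_b(p1) and H_b(p2); the p-limsup equals the larger one.
    # The parameters below are illustrative.

    rng = np.random.default_rng(0)
    p1, p2, alpha, n = 0.11, 0.4, 0.5, 20000

    def self_information_rate(p_used, p1, p2, alpha, n, rng):
        """Draw x from Bernoulli(p_used)^n and return (1/n) log2 1/P_{X^n}(x)."""
        k = rng.binomial(n, p_used)                 # number of ones in x
        def log2_pn(p):                             # log2 of the product probability
            return k * np.log2(p) + (n - k) * np.log2(1 - p)
        # P_{X^n}(x) = alpha * P1^n(x) + (1 - alpha) * P2^n(x)
        log_mix = np.logaddexp2(np.log2(alpha) + log2_pn(p1),
                                np.log2(1 - alpha) + log2_pn(p2))
        return -log_mix / n

    samples = [self_information_rate(p, p1, p2, alpha, n, rng)
               for p in rng.choice([p1, p2], size=200, p=[alpha, 1 - alpha])]

    def Hb(p): return -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    print(f"observed cluster values ~ {min(samples):.3f} and {max(samples):.3f}")
    print(f"H_b(p1) = {Hb(p1):.3f}, H_b(p2) = {Hb(p2):.3f}")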

For two general information sources $\mathbf{X}$ and $\mathbf{Y}$ we define

$$\overline{I}(\mathbf{X};\mathbf{Y}) = \operatorname*{p-limsup}_{n\to\infty} \frac{1}{n} \log \frac{P_{Y^n|X^n}(Y^n|X^n)}{P_{Y^n}(Y^n)}$$

and

$$\overline{H}(\mathbf{Y}) = \operatorname*{p-limsup}_{n\to\infty} \frac{1}{n} \log \frac{1}{P_{Y^n}(Y^n)}.$$

For two distributions $P_1$ and $P_2$ on an alphabet $\mathcal{A}$, we define the variational distance as $d(P_1, P_2) = \sum_{a \in \mathcal{A}} |P_1(a) - P_2(a)|$. In the rate-distortion theory, one usually assumes a reconstruction alphabet different from the source alphabet. In order to compare the distributions of the source and its reconstruction, in this letter we assume that $\mathcal{X}^n$ serves as both the source and the reconstruction alphabet.
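Note that the variational distance is taken between distributions on the whole block alphabet $\mathcal{X}^n$, not between per-letter marginals. The short sketch below (not from the letter, with hypothetical Bernoulli parameters and using scipy) shows that even two per-letter-close product distributions drift to the maximal distance 2 as $n$ grows.

    import numpy as np
    from scipy.stats import binom

    # Sketch (not from the letter): d(P1^n, P2^n) approaches its maximum value 2
    # as n grows, even for two close Bernoulli parameters.  Within a type class
    # the sign of P1^n - P2^n is constant, so the sum over {0,1}^n can be
    # grouped by the number of ones k.  Parameters are illustrative.

    p1, p2 = 0.10, 0.12
    for n in (1, 10, 100, 1000, 10000):
        k = np.arange(n + 1)
        d = np.sum(np.abs(binom.pmf(k, n, p1) - binom.pmf(k, n, p2)))
        print(f"n = {n:5d}: d(P1^n, P2^n) = {d:.4f}")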

An encoder of length $n$ is a mapping $f_n : \mathcal{X}^n \to \{1, \ldots, M_n\}$, and the corresponding decoder of length $n$ is a mapping $g_n : \{1, \ldots, M_n\} \to \mathcal{X}^n$. A general distortion function is a mapping $d_n : \mathcal{X}^n \times \mathcal{X}^n \to [0, \infty)$ with the assumption $d_n(x, x) = 0$ for all $n$ and all $x \in \mathcal{X}^n$.
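A consequence of this setting worth keeping in mind, sketched below (not from the letter; the decoder and encoder are illustrative), is that a deterministic decoder $g_n$ can output at most $M_n$ distinct sequences, so the reproduction distribution is supported on at most $M_n$ points of $\mathcal{X}^n$. This support bound is what limits how well $P_{g_n(f_n(X^n))}$ can approximate $P_{X^n}$, and it underlies the random number generation converse used for Theorem 2.

    from collections import Counter
    from itertools import product

    # Sketch (not from the letter): whatever the encoder does, the reproduction
    # distribution of a deterministic decoder lives on <= M_n sequences.
    # The decoder g_n and encoder f_n below are illustrative.

    n, M_n = 4, 3
    g_n = {m: ((0,) * n, (0, 1) * (n // 2), (1,) * n)[m] for m in range(M_n)}

    support = set(g_n.values())
    print(len(support), "<=", M_n)

    f_n = lambda x: min(sum(x), M_n - 1)            # hypothetical encoder
    counts = Counter(g_n[f_n(x)] for x in product((0, 1), repeat=n))
    print({y: c / 2 ** n for y, c in counts.items()})  # <= M_n support points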

Definition 1

A triple $(R, D, S)$ is said to be achievable if there exists a sequence of encoders and decoders $(f_n, g_n)$ such that

$$\limsup_{n\to\infty} \frac{1}{n} \log M_n \le R, \qquad (1)$$

$$\operatorname*{p-limsup}_{n\to\infty} \frac{1}{n} d_n(X^n, g_n(f_n(X^n))) \le D, \qquad (2)$$

$$\limsup_{n\to\infty} d(P_{X^n}, P_{g_n(f_n(X^n))}) \le S. \qquad (3)$$

Define the function $R(D, S)$ by

$$R(D, S) = \inf\{R : (R, D, S) \text{ is achievable}\}.$$

Theorem 2

$$R(D, S) = \inf_{\mathbf{Y}} \max\left\{\overline{I}(\mathbf{X};\mathbf{Y}),\ \overline{H}(\mathbf{Y})\right\},$$

where the infimum is taken with respect to all general information sources $\mathbf{Y} = \{Y^n\}_{n=1}^{\infty}$ satisfying

$$\operatorname*{p-limsup}_{n\to\infty} \frac{1}{n} d_n(X^n, Y^n) \le D \quad\text{and}\quad \limsup_{n\to\infty} d(P_{X^n}, P_{Y^n}) \le S. \qquad (4)$$
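As a quick sanity check (an observation added here, not part of the original letter), the choice $\mathbf{Y} = \mathbf{X}$ always satisfies Eq. (4), since $d_n(x, x) = 0$ and $d(P_{X^n}, P_{X^n}) = 0$, and

$$\overline{I}(\mathbf{X};\mathbf{X}) = \operatorname*{p-limsup}_{n\to\infty} \frac{1}{n} \log \frac{P_{X^n|X^n}(X^n|X^n)}{P_{X^n}(X^n)} = \operatorname*{p-limsup}_{n\to\infty} \frac{1}{n} \log \frac{1}{P_{X^n}(X^n)} = \overline{H}(\mathbf{X}),$$

so Theorem 2 yields $R(D, S) \le \overline{H}(\mathbf{X})$ for all $D, S \ge 0$: the distortion and perceptual-quality constraints together never require more than the fixed-length lossless coding rate of $\mathbf{X}$.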

Proof: Let a pair of encoder $f_n$ and decoder $g_n$ satisfy Eqs. (1)–(3), and let $\mathbf{Y} = \{Y^n\}$ with $Y^n = g_n(f_n(X^n))$. Then by [2, Theorem 5.4.1] we have

$$R \ge \overline{I}(\mathbf{X};\mathbf{Y}), \qquad (5)$$

where $\mathbf{Y}$ satisfies Eq. (4). On the other hand, the decoder $g_n$ can be viewed as a random number generator producing $Y^n \in \mathcal{X}^n$ from the alphabet $\{1, \ldots, M_n\}$. By [2, Converse part of the proof of Theorem 2.4.1] we have

$$R \ge \overline{H}(\mathbf{Y}). \qquad (6)$$

This completes the converse part of the proof.

We start the direct part of the proof. Assume that a triple $(R, D, S)$ satisfies Eqs. (5) and (6) for some $\mathbf{Y}$ satisfying Eq. (4), and fix $\gamma > 0$. Let $T_n \subset \mathcal{X}^n$ and $M_{1,n}$ satisfy

$$|T_n| \le M_{1,n}, \quad \limsup_{n\to\infty} \frac{1}{n} \log M_{1,n} \le \overline{H}(\mathbf{Y}) + \gamma, \quad \lim_{n\to\infty} P_{Y^n}(T_n) = 1, \qquad (7)$$

which exist by the definition of $\overline{H}(\mathbf{Y})$. Let $f_{1,n}$ and $g_{1,n}$ be an encoder and a decoder constructed in [2, Lemma 1.3.1] with codebook $\{1, \ldots, M_{1,n}\}$, which reproduce every $x \in T_n$ without error. Let $f_{2,n}$ and $g_{2,n}$ be an encoder and a decoder constructed in [2, Theorem 5.4.1] with codebook $\{M_{1,n}+1, \ldots, M_n\}$. Assume that we have a source sequence $x \in \mathcal{X}^n$. If $x \in T_n$ then let $f_{1,n}(x) \in \{1, \ldots, M_{1,n}\}$ be the codeword. If $x \notin T_n$ then let $f_{2,n}(x) \in \{M_{1,n}+1, \ldots, M_n\}$ be the codeword. Let $f_n$ be the above encoding process. At the receiver of a codeword $m$, if $m \le M_{1,n}$ then decode by $g_{1,n}$, otherwise decode by $g_{2,n}$. Let the above decoding process be $g_n$. A toy illustration of this two-codebook switch is sketched below.
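The following toy sketch (not the letter's actual construction) mimics the switch between the two codebooks; the set T_n and both codebooks are illustrative, chosen by brute force on a tiny block length.

    from itertools import product

    # Toy sketch (not from the letter): sequences in a high-probability set T_n
    # are sent losslessly with the first indices; all other sequences are sent
    # through a small minimum-distortion codebook using the remaining indices.

    n = 6
    seqs = list(product((0, 1), repeat=n))

    def d_n(x, y):                                  # Hamming distortion
        return sum(a != b for a, b in zip(x, y))

    # Hypothetical high-probability set T_n: all sequences of weight <= 2.
    T_n = [x for x in seqs if sum(x) <= 2]
    rd_codebook = [(0,) * n, (1,) * n]              # toy rate-distortion codebook

    def f_n(x):                                     # encoder: switch on x in T_n
        if x in T_n:
            return T_n.index(x)                     # lossless index
        best = min(range(len(rd_codebook)), key=lambda i: d_n(x, rd_codebook[i]))
        return len(T_n) + best                      # offset into second codebook

    def g_n(m):                                     # decoder: switch on the index
        return T_n[m] if m < len(T_n) else rd_codebook[m - len(T_n)]

    x = (0, 1, 0, 0, 0, 1)                          # in T_n: reproduced exactly
    assert g_n(f_n(x)) == x and d_n(x, g_n(f_n(x))) == 0
    x = (1, 1, 0, 1, 1, 1)                          # not in T_n: lossy reproduction
    print(g_n(f_n(x)), d_n(x, g_n(f_n(x))))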

If $f_{1,n}$ and $g_{1,n}$ are used then the source sequence is reconstructed by the receiver without error by [2, Lemma 1.3.1] and we have $d_n(x, g_n(f_n(x))) = 0$. The probability of $f_{1,n}$ and $g_{1,n}$ not being used is

$$\Pr[X^n \notin T_n] \le \Pr[Y^n \notin T_n] + \frac{1}{2} d(P_{X^n}, P_{Y^n}).$$

Combined with the assumption (4) and Eq. (7),

$$\limsup_{n\to\infty} d(P_{X^n}, P_{g_n(f_n(X^n))}) \le \limsup_{n\to\infty} 2 \Pr[X^n \notin T_n] \le S,$$

which implies Eq. (3).

On the other hand, $f_{2,n}$ and $g_{2,n}$ satisfy Eq. (2), and $f_{1,n}$ and $g_{1,n}$ incur zero distortion, so the combined encoder $f_n$ and decoder $g_n$ also satisfy Eq. (2). The information rate of $f_n$ is at most $\max\{\overline{H}(\mathbf{Y}), \overline{I}(\mathbf{X};\mathbf{Y})\} + 2\gamma$, which, as $\gamma > 0$ is arbitrary, implies that Eq. (1) holds with the constructed $f_n$ and $g_n$. This completes the direct part of the proof.

3 Example with a mixed information source

A typical example of a non-ergodic general information source is a mixed information source [2, Section 1.4]. Since Theorem 2 is a bit abstract, we explicitly compute $R(D, S)$ for a mixed information source. Let $0 < p_1 < p_2 \le 1/2$, $\mathcal{X} = \{0, 1\}$, and let $d_n(x, y)$ be the Hamming distance between $x$, $y \in \{0,1\}^n$. Consider two distributions $P_1$ and $P_2$ on $\{0, 1\}$ defined by

$$P_1(1) = p_1, \quad P_1(0) = 1 - p_1, \quad P_2(1) = p_2, \quad P_2(0) = 1 - p_2.$$

For $x = (x_1, \ldots, x_n) \in \{0,1\}^n$, in our mixed information source we have

$$P_{X^n}(x) = \alpha \prod_{j=1}^{n} P_1(x_j) + (1 - \alpha) \prod_{j=1}^{n} P_2(x_j)$$

with an arbitrarily fixed mixture weight $0 < \alpha < 1$.

By [2, Theorem 5.8.1, Example 5.8.1 and Theorem 5.10.1] we see that a general information source $\mathbf{Y}$ with $\overline{I}(\mathbf{X};\mathbf{Y}) \le R$ satisfying the distortion constraint in Eq. (4) exists if and only if

$$R \ge H_b(p_2) - H_b(D), \qquad (8)$$

where $H_b$ is the binary entropy function $H_b(t) = -t \log t - (1-t) \log(1-t)$.

On the other hand, by [2, Example 1.6.1], we have

$$\overline{H}(\mathbf{X}) = \max\{H_b(p_1), H_b(p_2)\} = H_b(p_2).$$

By the above formulas and assuming $S = 0$, so that Eq. (4) forces $P_{Y^n}$ to approach $P_{X^n}$ and hence $\overline{H}(\mathbf{Y})$ to coincide with $\overline{H}(\mathbf{X})$, we can see that

$$R(D, 0) = H_b(p_2) \quad \text{for } 0 \le D < p_1,$$

whereas the classical rate-distortion function without the perceptual-quality constraint is $H_b(p_2) - H_b(D)$. That is, requiring perfect perceptual quality increases the minimum information rate by exactly $H_b(D)$.
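The gap between the two rates can be made concrete with a short numerical sketch (not from the letter); the parameters below are illustrative and the rates are in bits, i.e., log base 2.

    import numpy as np

    # Sketch (not from the letter): for the mixed Bernoulli example, compare the
    # classical rate-distortion function H_b(p2) - H_b(D) with the rate H_b(p2)
    # obtained above under the perfect perceptual-quality constraint S = 0.

    def Hb(t):
        """Binary entropy function in bits."""
        return -t * np.log2(t) - (1 - t) * np.log2(1 - t)

    p1, p2 = 0.05, 0.25                 # illustrative, 0 < p1 < p2 <= 1/2
    for D in (0.01, 0.02, 0.04):        # distortions with 0 <= D < p1
        classical = Hb(p2) - Hb(D)      # no perceptual-quality constraint
        perceptual = Hb(p2)             # S = 0: perfect perceptual quality
        print(f"D = {D:.2f}: classical {classical:.3f} bits/symbol, "
              f"S = 0 gives {perceptual:.3f} bits/symbol "
              f"(penalty {Hb(D):.3f})")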

Acknowledgments

The author would like to thank Dr. Tetsunao Matsuta for the helpful discussions.