Rate-Distortion-Perception Tradeoff of Variable-Length Source Coding for General Information Sources

Ryutaroh Matsumoto, November 30, 2018

Blau and Michaeli recently introduced a novel concept for inverse problems of signal processing, namely the perception-distortion tradeoff. We introduce their tradeoff into the rate-distortion theory of variable-length lossy source coding in information theory, and clarify the tradeoff among information rate, distortion, and perception for general information sources. We also discuss fixed-length coding with the average distortion criterion, which was missing in the previous letter.


References

  • [1] Y. Blau and T. Michaeli, “The perception-distortion tradeoff,” Proc. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, pp. 6228–6237, June 2018.

  • [2] T. S. Han, Information-Spectrum Methods in Information Theory, Springer, 2002. DOI:10.1007/978-3-662-12066-8
  • [3] R. Matsumoto, “Introducing the perception-distortion tradeoff into the rate-distortion theory of general information sources,” IEICE Communications Express, vol. 7, no. 11, pp. 427–431, Nov. 2018. DOI:10.1587/comex.2018XBL0109

1 Introduction

An inverse problem of signal processing is to reconstruct the original information from its degraded version. Such problems are not limited to image processing, but they arise there frequently. When a natural image is reconstructed, the reconstructed image sometimes does not look natural even though it is close to the original image under a reasonable metric, for example, mean squared error. Conversely, it is often believed that a reconstruction close to the original should also look natural.

Blau and Michaeli [1] questioned this unproven belief. In their research [1], they mathematically formulated the naturalness of the reconstructed information as a distance between the probability distribution of the reconstructed information and that of the original information. The reasoning behind this is that the perceptual quality of a reconstruction method is often evaluated by how well a human observer can distinguish an output of the reconstruction method from natural ones. Such a subjective evaluation can mathematically be modeled as hypothesis testing [1]. A reconstructed image is more easily distinguished as the variational distance between the probability distribution of the reconstructed information and that of the natural one increases [1]. They regarded the perceptual quality of a reconstruction as the distance between these two distributions. The distance between the reconstructed information and the original information is conventionally called distortion. They discovered that there exists a tradeoff between perceptual quality and distortion, and named it the perception-distortion tradeoff.
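To make the connection between distinguishability and distribution distance concrete, the following Python sketch (a toy illustration with made-up distributions, not an example from [1]) computes the variational distance between a hypothetical "natural" distribution and a "reconstructed" one, together with the success probability of an optimal single-sample hypothesis test; the larger the distance, the more reliably a tester can tell the two apart.

    # Toy illustration: distinguishability of a single sample grows with the
    # variational distance between the "natural" and "reconstructed" distributions.
    natural = {"a": 0.50, "b": 0.30, "c": 0.20}        # hypothetical distribution of natural data
    reconstructed = {"a": 0.70, "b": 0.20, "c": 0.10}  # hypothetical distribution of reconstructions

    # Variational distance d(P1, P2) = sum over symbols of |P1 - P2|
    # (the convention used in Section 2 below; some authors include a factor 1/2).
    d_var = sum(abs(natural[s] - reconstructed[s]) for s in natural)

    # With equal priors, an optimal single-sample hypothesis test succeeds with
    # probability 1/2 + (total variation)/2 = 1/2 + d_var/4.
    p_success = 0.5 + d_var / 4

    print(f"variational distance = {d_var:.3f}")                   # 0.400
    print(f"optimal test success probability = {p_success:.3f}")   # 0.600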

Claude Shannon [2, Chapter 5] initiated the rate-distortion theory in the 1950s. It clarifies the tradeoff between information rate and distortion in lossy source coding (lossy data compression). The rate-distortion theory has served as a theoretical foundation of image coding for the past several decades, as drawing a rate-distortion curve is common practice in research articles on image coding. Since distortion and perceptual quality are now considered two different things, it is natural to consider a tradeoff among information rate, distortion, and perceptual quality. Blau and Michaeli [1] briefly mentioned the rate-distortion theory, but they did not clarify the tradeoff among the three. The author [3] then clarified the tradeoff among the three for fixed-length coding, but did not clarify it for variable-length coding, where fixed and variable refer to whether the length of a codeword is fixed or variable [2, Chapter 5]. Variable-length lossy source coding is practically more important than its fixed-length counterpart because most image and audio coding methods are variable-length.

The purpose of this letter is to mathematically define the rate-distortion-perception tradeoff of variable-length lossy source coding for general information sources, and to express the tradeoff in terms of the information-spectrum quantities introduced by Han and Verdú [2]. We also discuss fixed-length coding with the average distortion criterion, which was missing in the previous letter [3].

Since the length limitation of this journal is strict, citations to the original papers are replaced by citations to the textbook [2], and the mathematical proofs are somewhat compressed. The author begs the readers' kind understanding. The base of $\log$ is an arbitrarily fixed real number unless otherwise stated.

2 Preliminaries

The following definitions are borrowed from Han's textbook [2]. Let $\mathbf{X} = \{X^n\}_{n=1}^{\infty}$ be a general information source, where the alphabet of the random variable $X^n$ is $\mathcal{X}^n$, the $n$-th Cartesian product of some finite alphabet $\mathcal{X}$. For a sequence of real-valued random variables $Z_1$, $Z_2$, ..., we define the limit superior in probability

$\text{p-}\limsup_{n\to\infty} Z_n = \inf\{\alpha \mid \lim_{n\to\infty} \Pr[Z_n > \alpha] = 0\}.$

For two general information sources $\mathbf{X}$ and $\mathbf{Y}$ we use the probability distributions $P_{X^n}$ of $X^n$ and $P_{Y^n}$ of $Y^n$, and the quantity $\limsup_{n\to\infty} \frac{1}{n} H(Y^n)$, where $H(Y^n)$ is the Shannon entropy of $Y^n$ in the base of $\log$ fixed in Section 1.

For two distributions $P_1$ and $P_2$ on an alphabet $\mathcal{A}$, we define the variational distance as $d(P_1, P_2) = \sum_{a \in \mathcal{A}} |P_1(a) - P_2(a)|$. In the rate-distortion theory, we usually assume a reconstruction alphabet different from the source alphabet. In order to consider the similarity between the distributions of the source and the reconstruction, in this letter we use $\mathcal{X}^n$ as both the source and the reconstruction alphabet.
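As a sanity check on the quantity $\limsup_{n\to\infty} \frac{1}{n} H(Y^n)$ appearing below, the following Python sketch (a toy computation with a made-up letter distribution, assuming base-2 logarithms) verifies numerically that for a stationary memoryless source the per-letter entropy $\frac{1}{n} H(Y^n)$ is constant in $n$, so its limit superior equals the single-letter entropy.

    # Toy check: for an i.i.d. source, (1/n) H(Y^n) = H(Y_1) for every n.
    from itertools import product
    from math import log2

    p_letter = {0: 0.5, 1: 0.3, 2: 0.2}   # hypothetical single-letter distribution

    def entropy(dist):
        """Shannon entropy in bits (the letter allows any fixed log base)."""
        return -sum(p * log2(p) for p in dist.values() if p > 0)

    for n in range(1, 5):
        # distribution of the block Y^n = (Y_1, ..., Y_n) of the memoryless source
        dist_n = {}
        for block in product(p_letter, repeat=n):
            prob = 1.0
            for letter in block:
                prob *= p_letter[letter]
            dist_n[block] = prob
        print(f"n = {n}: (1/n) H(Y^n) = {entropy(dist_n) / n:.6f} bits")

    print(f"single-letter entropy H(Y_1) = {entropy(p_letter):.6f} bits")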

3 Variable-length source coding

An encoder of length $n$ is a stochastic mapping $f_n \colon \mathcal{X}^n \to \mathcal{B}^*$, where $\mathcal{B} = \{1$, ..., $K\}$ and $\mathcal{B}^*$ is the set of finite-length sequences over $\mathcal{B}$. By stochastic we mean that the encoder output is probabilistic even for a fixed input $x \in \mathcal{X}^n$. The corresponding decoder of length $n$ is a deterministic mapping $g_n \colon \mathcal{B}^* \to \mathcal{X}^n$. We denote by $\ell(b)$ the (random variable of the) length of a sequence $b \in \mathcal{B}^*$. We denote by $d_n \colon \mathcal{X}^n \times \mathcal{X}^n \to [0, \infty)$ a general distortion function.
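The following Python sketch illustrates these objects with entirely toy choices of mine (a binary code alphabet, a trivial stochastic encoder, and normalized Hamming distortion); it is only meant to show how a stochastic encoder, a deterministic decoder, the codeword length $\ell$, and a distortion function $d_n$ fit together, not to be a good code.

    # Toy variable-length code: the encoder is stochastic (its output is random
    # even for a fixed input) and produces codewords of varying length.
    import random

    def stochastic_encoder(x):
        """With prob. 1/2 send the block verbatim (prefixed by 1); otherwise
        send only the flag 0 and let the decoder output the all-zero block."""
        if random.random() < 0.5:
            return (1,) + tuple(x)
        return (0,)

    def decoder(codeword, n):
        """Deterministic decoder matching the toy encoder above."""
        return codeword[1:] if codeword[0] == 1 else (0,) * n

    def hamming_distortion(x, y):
        """An example distortion function d_n: fraction of differing positions."""
        return sum(a != b for a, b in zip(x, y)) / len(x)

    x = (0, 1, 1, 0, 1)                    # a source block over {0, 1}
    b = stochastic_encoder(x)              # codeword with length ell(b) = len(b)
    y = decoder(b, len(x))                 # reconstruction
    print(len(b), hamming_distortion(x, y))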

3.1 Average distortion criterion

Definition 1

A triple $(R, D, S)$ is said to be achievable for variable-length coding under the average distortion criterion if there exists a sequence of encoders $f_n$ and decoders $g_n$, $n = 1$, $2$, ..., such that

$\limsup_{n\to\infty} \frac{(\log K)\, E[\ell(f_n(X^n))]}{n} \le R,$ (1)

$\limsup_{n\to\infty} E[d_n(X^n, g_n(f_n(X^n)))] \le D,$ (2)

$\limsup_{n\to\infty} d(P_{X^n}, P_{g_n(f_n(X^n))}) \le S.$ (3)

Define the function $R_{v,a}(D, S)$ by $R_{v,a}(D, S) = \inf\{R \mid (R, D, S)$ is achievable in the sense of Definition 1$\}$.

Theorem 2

$R_{v,a}(D, S) = \inf \limsup_{n\to\infty} \frac{H(Y^n)}{n},$

where the infimum is taken with respect to all general information sources $\mathbf{Y} = \{Y^n\}_{n=1}^{\infty}$, jointly distributed with $\mathbf{X}$, satisfying

$\limsup_{n\to\infty} E[d_n(X^n, Y^n)] \le D,$ (4)

$\limsup_{n\to\infty} d(P_{X^n}, P_{Y^n}) \le S.$ (5)

Proof: Let a pair of an encoder $f_n$ and a decoder $g_n$ satisfy Eqs. (1)–(3). Let $Y^n = g_n(f_n(X^n))$, and define the general information source $\mathbf{Y} = \{Y^n\}_{n=1}^{\infty}$ from $\{(f_n, g_n)\}_{n=1}^{\infty}$. We immediately see that $\mathbf{Y}$ satisfies Eqs. (4) and (5). By the same argument as [2, p. 349] we immediately see

$\limsup_{n\to\infty} \frac{(\log K)\, E[\ell(f_n(X^n))]}{n} \ge \limsup_{n\to\infty} \frac{H(Y^n)}{n}.$

On the other hand, suppose that a general information source $\mathbf{Y}$ satisfies Eqs. (4) and (5). Let $\varphi_n$ and $\psi_n$ be a lossless variable-length encoder and its decoder [2, Section 1.7] for $Y^n$ such that $\psi_n(\varphi_n(y)) = y$ for all $y \in \mathcal{X}^n$ and

$\frac{(\log K)\, E[\ell(\varphi_n(Y^n))]}{n} \le \frac{H(Y^n)}{n} + \frac{\log K}{n}.$

For a given information sequence $x \in \mathcal{X}^n$, the encoder randomly chooses $y$ according to the conditional distribution $P_{Y^n \mid X^n}(\cdot \mid x)$, and defines the codeword as $\varphi_n(y)$. The decoding result is $\psi_n(\varphi_n(y)) = y$. Since the probability distribution of the decoding result is $P_{Y^n}$, we see that the constructed encoder and decoder satisfy Eqs. (2) and (3), while the above bound on the expected codeword length implies Eq. (1). ∎
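The direct part can be made concrete in a few lines. The Python sketch below is my own single-letter toy instantiation (made-up source and test channel, with a Huffman code standing in for the lossless variable-length code of [2, Section 1.7]): the encoder samples the reconstruction from the conditional distribution and then describes it losslessly, so the decoder output is distributed exactly as the test-channel output and the expected description length is within one bit of its entropy.

    # Sketch of the achievability construction for a single letter (toy example).
    import heapq, itertools, random
    from math import log2

    P_X = {0: 0.6, 1: 0.4}                                    # hypothetical source
    P_Y_given_X = {0: {0: 0.8, 1: 0.1, 2: 0.1},               # hypothetical test channel
                   1: {0: 0.1, 1: 0.7, 2: 0.2}}
    P_Y = {y: sum(P_X[x] * P_Y_given_X[x][y] for x in P_X) for y in (0, 1, 2)}

    def huffman_code(dist):
        """Return a prefix-free binary code (symbol -> bit string) for dist."""
        tie = itertools.count()                               # tie-breaker for the heap
        heap = [(p, next(tie), {sym: ""}) for sym, p in dist.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            p1, _, c1 = heapq.heappop(heap)
            p2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + w for s, w in c1.items()}
            merged.update({s: "1" + w for s, w in c2.items()})
            heapq.heappush(heap, (p1 + p2, next(tie), merged))
        return heap[0][2]

    code = huffman_code(P_Y)                                  # lossless code for the reconstruction
    inverse = {w: s for s, w in code.items()}

    def encode(x):
        """Stochastic encoder: sample y ~ P_{Y|X}(.|x), emit its lossless codeword."""
        y = random.choices(list(P_Y_given_X[x]), weights=list(P_Y_given_X[x].values()))[0]
        return code[y]

    def decode(codeword):
        """Deterministic decoder: invert the lossless code (one codeword at a time)."""
        return inverse[codeword]

    H_Y = -sum(p * log2(p) for p in P_Y.values())
    E_len = sum(P_Y[y] * len(code[y]) for y in P_Y)
    print(f"H(Y) = {H_Y:.3f} bits, expected codeword length = {E_len:.3f} bits")

    x = random.choices(list(P_X), weights=list(P_X.values()))[0]
    w = encode(x)
    print(f"x = {x}, codeword = {w!r}, reconstruction y = {decode(w)}")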

3.2 Maximum distortion criterion

Definition 3

A triple $(R, D, S)$ is said to be achievable for variable-length coding under the maximum distortion criterion if there exists a sequence of encoders $f_n$ and decoders $g_n$, $n = 1$, $2$, ..., such that Eqs. (1) and (3) hold and

$\text{p-}\limsup_{n\to\infty} d_n(X^n, g_n(f_n(X^n))) \le D.$

Define the function $R_{v,m}(D, S)$ by $R_{v,m}(D, S) = \inf\{R \mid (R, D, S)$ is achievable in the sense of Definition 3$\}$.

Theorem 4

$R_{v,m}(D, S) = \inf \limsup_{n\to\infty} \frac{H(Y^n)}{n},$

where the infimum is taken with respect to all general information sources $\mathbf{Y}$ satisfying Eq. (5) and

$\text{p-}\limsup_{n\to\infty} d_n(X^n, Y^n) \le D.$

Proof: The proof is almost a verbatim copy of that of Theorem 2 and is omitted. ∎

Remark 5

The tradeoff for variable-length coding with the average distortion criterion and without the perception criterion was also determined by using stochastic encoders [2, Section 5.7]. With the maximum distortion criterion and without the perception criterion, however, deterministic encoders were sufficient to clarify the tradeoff [2, Section 5.6]. It is not clear at present whether or not the randomness can be removed from the encoders in Theorem 4.

4 Fixed-length coding with the average distortion criterion

In this section we state the tradeoff for fixed-length coding with the average distortion criterion, because it has never been stated elsewhere. The proof is almost the same as that of [3]. Note that the definition of an encoder differs from that in Section 3 and that an assumption on the distortion function is added.

An encoder of length $n$ is a deterministic mapping $f_n \colon \mathcal{X}^n \to \{1$, ..., $M_n\}$, and the corresponding decoder of length $n$ is a deterministic mapping $g_n \colon \{1$, ..., $M_n\} \to \mathcal{X}^n$. We require the additional assumption that $d_n(x, y) \le d_{\max} < \infty$ for all $n$ and all $x, y \in \mathcal{X}^n$.
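As a concrete (and entirely toy) picture of a fixed-length code in this sense, the following Python sketch quantizes a source block to the nearest element of a small codebook; the codebook, the normalized Hamming distortion, and all parameters are my illustrative choices and not part of the letter. The rate is $\frac{\log M_n}{n}$ because every codeword index fits into a fixed-length description.

    # Toy fixed-length code: deterministic encoder to an index into a codebook of size M_n.
    from math import log2

    n = 4
    codebook = [(0, 0, 0, 0), (1, 1, 1, 1), (0, 0, 1, 1), (1, 1, 0, 0)]  # M_n = 4

    def d_n(x, y):
        """Bounded example distortion (normalized Hamming distance, d_max = 1)."""
        return sum(a != b for a, b in zip(x, y)) / n

    def encoder(x):
        """Deterministic fixed-length encoder: index of the minimum-distortion codeword."""
        return min(range(len(codebook)), key=lambda i: d_n(x, codebook[i]))

    def decoder(index):
        return codebook[index]

    x = (1, 0, 1, 1)
    i = encoder(x)
    print(f"index = {i}, reconstruction = {decoder(i)}, "
          f"distortion = {d_n(x, decoder(i)):.2f}, "
          f"rate = {log2(len(codebook)) / n:.2f} bits/letter")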

Definition 6

A triple $(R, D, S)$ is said to be achievable for fixed-length coding under the average distortion criterion if there exists a sequence of encoders $f_n$ and decoders $g_n$, $n = 1$, $2$, ..., such that

$\limsup_{n\to\infty} \frac{\log M_n}{n} \le R,$

$\limsup_{n\to\infty} E[d_n(X^n, g_n(f_n(X^n)))] \le D,$

$\limsup_{n\to\infty} d(P_{X^n}, P_{g_n(f_n(X^n))}) \le S.$

Define the function $R_{f,a}(D, S)$ by $R_{f,a}(D, S) = \inf\{R \mid (R, D, S)$ is achievable in the sense of Definition 6$\}$.

Theorem 7

where the infimum is taken with respect to all general information sources $\mathbf{Y}$ satisfying Eqs. (4) and (5).

Proof: The proof is almost a verbatim copy of that of [3]. ∎