Non-Asymptotic Bounds and a General Formula for the Rate-Distortion Region of the Successive Refinement Problem

02/21/2018
by Tetsunao Matsuta, et al.

In the successive refinement problem, a fixed-length sequence emitted from an information source is encoded into two codewords by two encoders in order to give two reconstructions of the sequence. One of the two reconstructions is obtained from one of the two codewords, and the other reconstruction is obtained from both codewords. For this coding problem, we give non-asymptotic inner and outer bounds on pairs of numbers of codewords of the two encoders such that, for each reconstruction, the probability that the distortion exceeds a given distortion level is less than a given probability level. We also give a general formula for the rate-distortion region for general sources, where the rate-distortion region is the set of rate pairs of the two encoders such that, for each reconstruction, the maximum value of the possible distortions is less than a given distortion level.


1 Introduction

The successive refinement problem is a fixed-length lossy source coding problem with multiple terminals (see Fig. 1). In this coding problem, a fixed-length sequence emitted from an information source is encoded into two codewords by two encoders in order to give two reconstructions of the sequence. One of the two reconstructions is obtained from one of the two codewords by using a decoder, and the other reconstruction is obtained from both codewords by using the other decoder.

Figure 1: Successive refinement problem. (Block diagram: source symbol, encoder 1, encoder 2, decoder 1, decoder 2, reproduced symbol 1, reproduced symbol 2.)

An important parameter of the successive refinement problem is a pair of rates of the two encoders such that each distortion between the source sequence and the corresponding reconstruction is less than a given distortion level. The set of these rate pairs when the length (blocklength) of the source sequence is unlimited is called the rate-distortion region. Since one codeword is used by both decoders, we cannot always optimize the rates as in the case where each codeword is used for a separate reconstruction. However, there are some cases where the optimum rates can be achieved. Necessary and sufficient conditions for such cases were given independently by Koshelev [3], [4] and by Equitz and Cover [5]. The complete characterization of the rate-distortion region for discrete stationary memoryless sources was given by Rimoldi [6]. Yamamoto [7] also gave the rate-distortion region as a special case of a more general coding problem. Later, Effros [8] characterized the rate-distortion region for discrete stationary ergodic and non-ergodic sources.

Recently, the asymptotic analysis of second-order rates with respect to the blocklength has become an active subject of study. In particular, for the successive refinement problem, No et al. [9] and Zhou et al. [10] gave many results on the set of second-order rates for discrete and Gaussian stationary memoryless sources. No et al. [9] considered separate excess-distortion criteria, under which, separately for each reconstruction, the probability that the distortion exceeds a given distortion level must be less than a given probability level. On the other hand, Zhou et al. [10] considered the joint excess-distortion criterion, under which the probability that either of the distortions exceeds its distortion level must be less than a given probability level. Although they also gave several non-asymptotic bounds on the set of rate pairs, they mainly focused on the asymptotic behavior of this set.

In this paper, on the other hand, we consider non-asymptotic bounds on pairs of rates at finite blocklengths. In particular, since a rate is easily calculated from the number of codewords, we focus on pairs of numbers of codewords of the two encoders. Although we adopt separate excess-distortion criteria, our results can easily be applied to the joint excess-distortion criterion. We give inner and outer bounds on pairs of numbers of codewords. These bounds are characterized by using the smooth max Rényi divergence introduced by Warsi [11]. For the point-to-point lossy source coding problem, we also used the smooth max Rényi divergence to characterize the rate-distortion function, i.e., the minimum rate when the blocklength is unlimited [12]. The proof techniques are similar to those of [12], but we employ several results extended to the successive refinement problem. The inner bound is derived by using an extended version of our previous lemma [12, Lemma 2]. We give this lemma as a special case of an extended version of our previous generalized covering lemma [13, Lemma 1]. The outer bound is derived by using an extended version of our previous converse bound [12, Lemma 4].

In this paper, we also consider the rate-distortion region for general sources. In this case, we adopt the maximum-distortion criterion, under which the maximum value of the possible distortions must be less than a given distortion level for each reconstruction. By using the spectral sup-mutual information rate (cf. [14]) and the non-asymptotic inner and outer bounds, we give a general formula for the rate-distortion region. We show that our rate-distortion region coincides with the region obtained by Rimoldi [6] when the source is discrete, stationary, and memoryless. Furthermore, we consider a mixed source, i.e., a mixture of two sources, and show that its rate-distortion region is the intersection of the rate-distortion regions of the two component sources.

The rest of this paper is organized as follows. In Section 2, we provide some notations and the formal definition of the successive refinement problem. In Section 3, we give several lemmas used for an inner bound on pairs of numbers of codewords and on the rate-distortion region. These lemmas are extended versions of our previous results [12, Lemma 2] and [13, Lemma 1]. In Section 4, we give inner and outer bounds, in terms of the smooth max Rényi divergence, on pairs of numbers of codewords. In Section 5, we give a general formula for the rate-distortion region. In that section, we also consider the rate-distortion region for discrete stationary memoryless sources and mixed sources. In Section 6, we conclude the paper.

2 Preliminaries

Let $\mathbb{N}$, $\mathbb{R}$, and $\mathbb{R}_{\geq 0}$ be the sets of positive integers, real numbers, and non-negative real numbers, respectively.

Unless otherwise stated, we use the following notations. For a pair of integers $m \leq n$, the set of integers $\{m, m+1, \dots, n\}$ is denoted by $[m:n]$. For finite or countably infinite sets $\mathcal{X}$ and $\mathcal{Y}$, the sets of all probability distributions over $\mathcal{X}$ and over $\mathcal{Y}$ are denoted by $\mathcal{P}(\mathcal{X})$ and $\mathcal{P}(\mathcal{Y})$, respectively. The set of all conditional probability distributions over $\mathcal{Y}$ given $\mathcal{X}$ is denoted by $\mathcal{P}(\mathcal{Y}|\mathcal{X})$. The probability distribution of a random variable (RV) $X$ is denoted by the subscript notation $P_X$, and the conditional probability distribution of $X$ given an RV $Y$ is denoted by $P_{X|Y}$. The $n$-fold Cartesian product of a set $\mathcal{X}$ is denoted by $\mathcal{X}^n$, while an $n$-length sequence $(x_1, x_2, \dots, x_n)$ of symbols is denoted by $x^n$. The sequence $\{X^n\}_{n=1}^{\infty}$ of RVs is denoted by the bold-face letter $\boldsymbol{X}$. Sequences of probability distributions and of conditional probability distributions are likewise denoted by bold-face letters.

For the successive refinement problem (Fig. 1), let $\mathcal{X}$, $\mathcal{Y}_1$, and $\mathcal{Y}_2$ be finite or countably infinite sets, where $\mathcal{X}$ represents the source alphabet, and $\mathcal{Y}_1$ and $\mathcal{Y}_2$ represent the two reconstruction alphabets. Let $X$ be an RV over $\mathcal{X}$ which represents a single source symbol. Since the source can be characterized by $X$, we also refer to $X$ as the source. When we consider $\mathcal{X}$ as an $n$-fold Cartesian product of a certain finite or countably infinite set, we can regard the source symbol as an $n$-length source sequence. Thus, for the sake of brevity, we deal with the single source symbol unless otherwise stated.

The two encoders, encoder 1 and encoder 2, are represented as functions $f_1 : \mathcal{X} \to [1:M_1]$ and $f_2 : \mathcal{X} \to [1:M_2]$, respectively, where $M_1$ and $M_2$ are positive integers which denote the numbers of codewords. The two decoders, decoder 1 and decoder 2, are represented as functions $g_1 : [1:M_1] \to \mathcal{Y}_1$ and $g_2 : [1:M_1] \times [1:M_2] \to \mathcal{Y}_2$, respectively. We refer to the tuple $(f_1, f_2, g_1, g_2)$ of encoders and decoders as a code. In order to measure distortions between the source symbol and the reconstruction symbols, we introduce distortion measures defined by functions $d_1 : \mathcal{X} \times \mathcal{Y}_1 \to \mathbb{R}_{\geq 0}$ and $d_2 : \mathcal{X} \times \mathcal{Y}_2 \to \mathbb{R}_{\geq 0}$.

We define the two events of exceeding given distortion levels $D_1$ and $D_2$ as follows:
$$\mathcal{E}_1 := \{ d_1(X, g_1(f_1(X))) > D_1 \}, \qquad \mathcal{E}_2 := \{ d_2(X, g_2(f_1(X), f_2(X))) > D_2 \}.$$

Then, we define the achievability under the excess-distortion criterion.

Definition 1.

For positive integers $M_1, M_2$, real numbers $D_1, D_2 \geq 0$, and $\varepsilon_1, \varepsilon_2 \geq 0$, let $\boldsymbol{M} := (M_1, M_2)$, $\boldsymbol{D} := (D_1, D_2)$, and $\boldsymbol{\varepsilon} := (\varepsilon_1, \varepsilon_2)$. Then, for a source $X$, we say $\boldsymbol{M}$ is $(\boldsymbol{D}, \boldsymbol{\varepsilon})$-achievable if and only if there exists a code $(f_1, f_2, g_1, g_2)$ such that the numbers of codewords of encoder 1 and encoder 2 are $M_1$ and $M_2$, respectively, and
$$\Pr\{\mathcal{E}_1\} \leq \varepsilon_1, \qquad \Pr\{\mathcal{E}_2\} \leq \varepsilon_2.$$

In what follows, for constants $D_1, D_2$ and $\varepsilon_1, \varepsilon_2$, we often use the above simple notations $\boldsymbol{M} = (M_1, M_2)$, $\boldsymbol{D} = (D_1, D_2)$, and $\boldsymbol{\varepsilon} = (\varepsilon_1, \varepsilon_2)$. In this setting, we consider the set of all pairs of numbers of codewords under the excess-distortion criterion. According to the $(\boldsymbol{D}, \boldsymbol{\varepsilon})$-achievability, this set is defined as follows:

Definition 2.

For a source $X$ and real numbers $D_1, D_2 \geq 0$ and $\varepsilon_1, \varepsilon_2 \geq 0$, we define the set of all $(\boldsymbol{D}, \boldsymbol{\varepsilon})$-achievable pairs $(M_1, M_2)$ of numbers of codewords.
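To make Definitions 1 and 2 concrete, here is a minimal numerical sketch; the source distribution, code maps, distortion measures, and distortion levels below are arbitrary toy choices, not anything from the paper. It evaluates the two excess-distortion probabilities of one fixed code by exact enumeration, which certifies that the corresponding pair of numbers of codewords is achievable for the chosen levels.

```python
# Toy illustration of Definition 1: exact excess-distortion probabilities
# of one fixed code. All concrete choices below are hypothetical examples.

source_alphabet = [0, 1, 2, 3]
P_X = {x: 0.25 for x in source_alphabet}   # uniform toy source

def f1(x):            # encoder 1 (M1 = 2 codewords): coarse description
    return 0 if x < 2 else 1

def f2(x):            # encoder 2 (M2 = 2 codewords): refinement bit
    return x % 2

def g1(m1):           # decoder 1: uses only the codeword of encoder 1
    return 0 if m1 == 0 else 2

def g2(m1, m2):       # decoder 2: uses both codewords
    return 2 * m1 + m2

def d1(x, y):         # distortion measure for reconstruction 1
    return abs(x - y)

def d2(x, y):         # distortion measure for reconstruction 2
    return abs(x - y)

D1, D2 = 1.0, 0.0     # distortion levels (toy values)

# Pr[d1(X, g1(f1(X))) > D1] and Pr[d2(X, g2(f1(X), f2(X))) > D2]
eps1 = sum(p for x, p in P_X.items() if d1(x, g1(f1(x))) > D1)
eps2 = sum(p for x, p in P_X.items() if d2(x, g2(f1(x), f2(x))) > D2)
print(eps1, eps2)     # 0.0 0.0: this code certifies that (M1, M2) = (2, 2)
                      # is ((D1, D2), (0, 0))-achievable for this toy source
```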

Basically, this paper deals with coding for a single source symbol. However, in Section 5, we deal with coding for an $n$-length source sequence. Hence, in that section, by abuse of notation, we regard the above sets $\mathcal{X}$, $\mathcal{Y}_1$, and $\mathcal{Y}_2$ as $n$-fold Cartesian products $\mathcal{X}^n$, $\mathcal{Y}_1^n$, and $\mathcal{Y}_2^n$, respectively. We also regard the source symbol as an $n$-length source sequence $X^n$ on $\mathcal{X}^n$. Then we call the sequence $\boldsymbol{X} = \{X^n\}_{n=1}^{\infty}$ of source sequences the general source, which is not required to satisfy the consistency condition.

We use the superscript $(n)$ for a code, distortion measures, and numbers of codewords (e.g., $M_1^{(n)}$) to make clear that we are dealing with source sequences of length $n$. For a code, we define the rates $R_1^{(n)}$ and $R_2^{(n)}$ as
$$R_1^{(n)} := \frac{1}{n} \log M_1^{(n)}, \qquad R_2^{(n)} := \frac{1}{n} \log M_2^{(n)}.$$
Hereafter, $\log$ means the natural logarithm.

We introduce maximum distortions for a sequence of codes. To this end, we define the limit superior in probability [14].

Definition 3 (Limit superior in probability).

For an arbitrary sequence $\{Z_n\}_{n=1}^{\infty}$ of real-valued RVs, we define the limit superior in probability by
$$\mathop{\text{p-}\limsup}_{n \to \infty} Z_n := \inf\left\{ \alpha \in \mathbb{R} : \lim_{n \to \infty} \Pr\{ Z_n > \alpha \} = 0 \right\}.$$

Now we introduce the maximum distortions:
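Assuming the blocklength-$n$ code maps $f_1^{(n)}, f_2^{(n)}, g_1^{(n)}, g_2^{(n)}$ and distortion measures $d_1^{(n)}, d_2^{(n)}$ introduced above, and with the bar notation $\overline{d}_1, \overline{d}_2$ as an illustrative choice, a natural way to write these maximum distortions is to apply the p-limsup of Definition 3 to the distortions at blocklength $n$:

```latex
% Sketch of the maximum distortions of a code sequence, under the assumed notation.
\overline{d}_1 := \mathop{\text{p-}\limsup}_{n \to \infty}
    d_1^{(n)}\bigl(X^n,\, g_1^{(n)}(f_1^{(n)}(X^n))\bigr),
\qquad
\overline{d}_2 := \mathop{\text{p-}\limsup}_{n \to \infty}
    d_2^{(n)}\bigl(X^n,\, g_2^{(n)}(f_1^{(n)}(X^n),\, f_2^{(n)}(X^n))\bigr).
```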

Then, we define the achievability under the maximum distortion criterion.

Definition 4.

For real numbers $D_1, D_2 \geq 0$, let $\boldsymbol{D} := (D_1, D_2)$. Then, for a general source $\boldsymbol{X}$ and real numbers $R_1, R_2 \geq 0$, we say a pair $(R_1, R_2)$ is $\boldsymbol{D}$ fm-achievable if and only if there exists a sequence $\{(f_1^{(n)}, f_2^{(n)}, g_1^{(n)}, g_2^{(n)})\}_{n=1}^{\infty}$ of codes satisfying
$$\limsup_{n \to \infty} R_1^{(n)} \leq R_1, \qquad \limsup_{n \to \infty} R_2^{(n)} \leq R_2,$$

and
$$\overline{d}_1 \leq D_1, \qquad \overline{d}_2 \leq D_2,$$
where $\overline{d}_1$ and $\overline{d}_2$ are the maximum distortions introduced above.

In what follows, for constants $D_1$ and $D_2$, we often use the above simple notation $\boldsymbol{D} = (D_1, D_2)$. In this setting, we consider the set of all rate pairs under the maximum distortion criterion. According to the fm-achievability, this set, usually called the rate-distortion region, is defined as follows:

Definition 5 (Rate-distortion region).

For a general source $\boldsymbol{X}$ and real numbers $D_1, D_2 \geq 0$, we define the rate-distortion region as the set of all $\boldsymbol{D}$ fm-achievable rate pairs $(R_1, R_2)$.
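Written out explicitly, with the region symbol $\mathcal{R}$ below being an illustrative choice rather than notation fixed in the text above, Definition 5 reads:

```latex
% Sketch of the rate-distortion region under the assumed notation.
\mathcal{R}(\boldsymbol{D} \mid \boldsymbol{X})
  := \bigl\{ (R_1, R_2) \in \mathbb{R}_{\geq 0}^2 :
     (R_1, R_2) \text{ is } \boldsymbol{D} \text{ fm-achievable for } \boldsymbol{X} \bigr\}.
```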

Remark 1.

From the definition and the diagonal line argument (cf. [14]), we can show that the rate-distortion region is a closed set.

We note that when we regard the source symbol as an $n$-length sequence in the definition of the set of achievable pairs of numbers of codewords (Definition 2), it gives a non-asymptotic region of rate pairs for a given finite blocklength.

3 Covering Lemma

In this section, we introduce some useful lemmas and corollaries for deriving an inner bound on the set of achievable pairs of numbers of codewords and on the rate-distortion region.

The next lemma is the most basic and important result in this section, in the sense that all subsequent results in this section are derived from it.

Lemma 1.

Let be an arbitrary RV, and and be RVs such that the pair is independent of . For an integer , let be RVs which are independent of each other and of , and each distributed according to . For an integer and , let be RVs which are independent of each other and of , and each distributed according to . Then, for any set , we have

(1)

where denotes the indicator function, denotes the expectation operator, and denotes the -th power of the expectation, i.e., .

Proof.

We have

By recalling that is independent of , this coincides with the right-hand side (RHS) of (1). ∎

This lemma gives an exact analysis of the error probability of covering a set, specified by a given condition, by the codewords of random coding. Hence, this lemma can be regarded as an extended version of [15, Theorem 9].
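Lemma 1 is stated for the two-codebook successive refinement setting. As a rough, purely illustrative picture of the covering error probability that such random-coding arguments analyze, the following Monte Carlo sketch uses a single codebook, a scalar Gaussian source, and squared-error distortion (all hypothetical choices) to estimate the probability that none of $M$ i.i.d. random codewords falls within distortion $D$ of the source symbol:

```python
import numpy as np

rng = np.random.default_rng(0)

def covering_failure_prob(M, D, trials=20000):
    """Estimate Pr[no codeword within squared-error distortion D of X]."""
    failures = 0
    for _ in range(trials):
        x = rng.normal(0.0, 1.0)                 # source symbol X ~ N(0, 1)
        codebook = rng.normal(0.0, 1.0, size=M)  # M i.i.d. random codewords
        if np.min((x - codebook) ** 2) > D:      # covering failure event
            failures += 1
    return failures / trials

for M in (1, 4, 16, 64):
    # The failure probability decays as the codebook size M grows.
    print(M, covering_failure_prob(M, D=0.1))
```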

Although the above lemma gives an exact analysis, it is difficult to use for characterizing an inner bound on pairs of numbers of codewords and the rate-distortion region. Instead, we will use the following more convenient lemma.

Lemma 2.

Let , , and be arbitrary RVs, and and be RVs such that the pair is independent of . Let be a function and be a constant such that

(2)

Furthermore, let be a function and be a constant such that

(3)

Then, for any set , we have

(4)
Proof.

We have

where (a) comes from (3), (b) follows since for and (cf. [16, Lemma 10.5.3]), and (c) comes from (2). Since the probability is not greater than , we have

where , (a) follows since for and , and (b) comes from the fact that . ∎

The importance of this lemma is that the RVs can be changed to arbitrarily correlated RVs. This makes it possible to characterize an inner bound on pairs of numbers of codewords and the rate-distortion region.

Lemma 2 can be regarded as an extension of our previous lemma [13, Lemma 1] to multiple correlated RVs. Hence, like the previous lemma, it yields many types of bounds by changing the functions and constants, such as the following two corollaries.

Corollary 1.

For any set , any real numbers , and any integers such that and , we have

Proof.

Let , ,

Then, we can easily check that these constants and functions satisfy (2) and (3). Plugging these functions and constants into (4), we have the desired bound. ∎

This corollary can be regarded as a bound in terms of the information spectrum (cf. [14]). To the best of our knowledge, this type of bound has not been reported so far (although there are some related converse bounds [10, Lemma 15] and [17, Theorem 3]).

On the other hand, the next corollary gives a bound in terms of the smooth max Rényi divergence defined as

where .
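The smooth max Rényi divergence used here is the one introduced by Warsi [11] and also employed in [12]. As rough numerical intuition only, the following sketch computes a closely related information-spectrum quantity: the smallest threshold $\lambda$ such that $\log(P(X)/Q(X))$ exceeds $\lambda$ with probability at most $\varepsilon$ under $P$. This is an assumed stand-in, not necessarily the exact definition from [11] or [12], and the distributions $P$ and $Q$ are arbitrary toy choices.

```python
import math

# Toy distributions on a 4-letter alphabet (arbitrary illustrative choices).
P = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}
Q = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}

def spectrum_threshold(P, Q, eps):
    """Smallest lambda with Pr_P[log(P(X)/Q(X)) > lambda] <= eps.

    A smooth-max-divergence-like quantity used only for intuition here;
    the definition actually used in the paper comes from [11] and [12].
    """
    ratios = sorted(((math.log(P[x] / Q[x]), P[x]) for x in P), reverse=True)
    tail = 0.0
    for lam, p in ratios:
        tail += p
        if tail > eps:        # mass above any smaller threshold would exceed eps
            return lam
    return -math.inf          # the whole support fits inside the eps-tail

for eps in (0.0, 0.05, 0.2, 0.4):
    # Larger eps (more "smoothing") can only decrease the threshold.
    print(eps, spectrum_threshold(P, Q, eps))
```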

Corollary 2.

For any set , any real numbers , and any integers , we have

Proof.

For an arbitrarily fixed , let and be functions such that , ,

(5)

and

(6)

where

Then, we have

(7)

where (a) follows since for .

On the other hand, let and be constants such that

Then, for any , we have

and

Thus, , , , and satisfy (2) and (3).

Plugging these functions and constants into (4), we have

where we use inequalities (5), (6), and (7). Since is arbitrary, this completes the proof. ∎

Remark 2.

The original definition of the smooth max Rényi divergence (cf. [12]) is as follows:

Since for non-negative real-valued functions and , it holds that (cf., e.g., [16, Lemma 16.7.1])