Non-Adaptive Policies for 20 Questions Target Localization

04/22/2015 ∙ by Ehsan Variani, et al. ∙ Johns Hopkins University

The problem of target localization with noise is addressed. The target is a sample from a continuous random variable with known distribution, and the goal is to locate it with minimum mean squared error distortion. The localization scheme, or policy, proceeds by queries, or questions, asking whether or not the target belongs to some subset, as in the 20-questions framework. These subsets are not constrained to be intervals and the answers to the queries are noisy. While this situation is well studied for adaptive querying, this paper focuses on non-adaptive querying policies based on dyadic questions. The asymptotic minimum achievable distortion under such policies is derived. Furthermore, a policy named the Aurelian is exhibited which asymptotically achieves this distortion.


1 Introduction

Consider the following problem of localizing a one-dimensional deterministic target with a question/answer process. At each step, a subset of the domain is chosen and the question asked is whether or not the target belongs to this subset. Assume for now that the answers are truthful, i.e., there is no noise. By repeating this querying, an estimator of the target is derived. The performance of such a policy is measured by the supremum distance between the target and the estimator. Note that

(1)

since at most n bits of information can be learned using n binary questions. Assume now that the subsets are intervals and the policy is non-adaptive. Then,

(2)

which is achieved by a suitable fixed choice of intervals. Among adaptive policies, the lowest achievable error in (1) is attained by the dichotomy policy, which consists in splitting the current interval containing the target into two intervals of equal size. This performance can also be achieved using a non-adaptive policy with question sets that are not constrained to be intervals. The dyadic policy, which consists in querying the bits of the target in its base-2 expansion, is an example of a non-adaptive policy that achieves optimal performance.

Consider now the situation where the answers are noisy according to a memoryless channel and where the target is a sample from a known distribution. The performance is then measured using the mean square error. In this case, as shown in [tsiligkaridis2013collaborative],

(3)

where the expectation is taken over the target as well as the noise, and the two constants are explicit functions of the entropy of the target distribution and of the characteristics of the noise, respectively. Similarly to the noiseless case, this result is derived by considering the maximum amount of uncertainty reduced on average by the question/answering process. Now, is this lower bound achievable?

The bisection policy consists in choosing, at each step, the interval lying below the median of the posterior distribution of the target after observing the answers to the previous questions. It is an adaptive policy which achieves the bound in (3), albeit with different constants. A weaker statement of (3) is that the bisection policy achieves

(4)

for some constant. A natural question is whether there exists a non-adaptive policy achieving (4) while allowing arbitrary question sets, that is, sets which are not necessarily intervals. The answer to this question is, to the best of our knowledge, unknown. In order to make progress on it, we consider a family of non-adaptive policies. We start with the dyadic policy, since this policy is optimal in the noiseless case. We also know that the dyadic policy is optimal for a different loss function, namely the differential entropy of the posterior distribution, see [jedynak2012twenty], and that the differential entropy of the posterior needs to be small for the mean square loss to be small. However, the converse is not true. We extend the dyadic policy as follows. In the case where the prior distribution is uniform, the dyadic questions correspond to the coefficients of the target in its base-2 expansion. We allow each bit to be queried not only once but an arbitrary number of times, under the constraint of a fixed total number of questions. This construction is extended to non-uniform priors by first transforming the target into a uniform random variable through its cumulative distribution function, and then considering the dyadic questions for the transformed target.
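As an illustration of this construction, the following sketch (the function names and the choice of a standard normal prior are ours, not the paper's) generates the noiseless answers to the first dyadic questions about a target by querying the bits of the base-2 expansion of its CDF transform.

import math
from scipy.stats import norm  # any prior with an invertible CDF would do

def dyadic_bit(u, k):
    # k-th bit (k >= 1) of the base-2 expansion of u in [0, 1)
    return int(math.floor(u * 2 ** k)) % 2

def dyadic_answers(x, cdf, depth):
    # Noiseless answers to the first `depth` dyadic questions about target x.
    # The target is first mapped to a uniform variable u = F(x); question k
    # then asks whether the k-th bit of u equals 1, i.e. whether u lies in a
    # fixed union of dyadic intervals of length 2 ** -k.
    u = cdf(x)
    return [dyadic_bit(u, k) for k in range(1, depth + 1)]

# Example: locate a fixed target under a standard normal prior with 8 questions.
x = 0.37
bits = dyadic_answers(x, norm.cdf, depth=8)
u_hat = sum(b * 2.0 ** -(k + 1) for k, b in enumerate(bits)) + 2.0 ** -9  # cell midpoint
print(bits, norm.ppf(u_hat))  # map the decoded uniform value back to the original scale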

2 Information Transmission Equivalent Problem

The problem of interest can be mapped to the well-known point-to-point communication system [shannon2001mathematical] which transmits the location of the target through a memoryless channel. Consider the transmission of a message in the unit interval through a binary-input, arbitrary-output memoryless channel. The transmission aims at minimizing the mean square error between the input message and the decoded message. The analysis is presented in the case where the source is uniformly distributed; it is then extended to other distributions in Section 5. It is assumed that the source provides the binary representation of the message:

(5)

Since the transmission of the infinite bit sequence is not feasible in practice, the message must be quantized by selecting a finite number of bits.

The encoding, shown in Figure 1, consists in removing redundant source information in order to gain compression efficiency. This is achieved by a quantization scheme. For the uniform source class, the scalar quantizer whose quantization levels are explicitly represented as a linear combination of the bits has been proved to be optimal for the mean square error distortion [crimmins1969minimization, mclaughlin1995optimal]. A source encoder with a fixed rate is assumed in this paper. The output of the source encoder consists of the leading bits of the infinite binary representation of the message; this quantized, truncated message is the one denoted in Figure 1.
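For intuition only, the small sketch below (hypothetical helper names, a source value in [0, 1) assumed) shows that keeping the leading m bits of the expansion already limits the quantization error to less than 2^-m.

def leading_bits(u, m):
    # the m leading bits of the base-2 expansion of u in [0, 1)
    return [int(u * 2 ** k) % 2 for k in range(1, m + 1)]

def from_bits(bits):
    # rebuild sum_k b_k 2^{-k} from the truncated expansion
    return sum(b * 2.0 ** -(k + 1) for k, b in enumerate(bits))

u = 0.6180339887
for m in (4, 8, 16):
    err = abs(u - from_bits(leading_bits(u, m)))
    assert err < 2.0 ** -m  # truncation error is below the weight of the first dropped bit
    print(m, err)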

The channel encoder is designed to add redundancy to the source bit stream in order to protect the source information from the channel noise. The bits in the binary representation are of different importance due to the different weights in the sum of Eq. (5). This motivates a coding policy which transmits each bit at a different rate. [nguyen2012optimal] proposed to transmit each bit through a single binary symmetric channel at a different transmission rate. This paper considers the general transmission scheme in which each bit is allowed to be transmitted a different number of times while the total number of transmitted bits is fixed. In Figure 1, the channel encoder sends a codeword whose length equals this fixed total number of transmissions. We denote the maximum bit index selected for transmission; note that it is at most the total number of transmissions. For any bit, we define

(6)

to be the number of times that bit k is transmitted in the codeword. A transmission policy is then denoted by a vector of these counts, whose entries are nonnegative and sum to the total number of transmissions.
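In code, a transmission policy is then nothing more than a vector of repetition counts; the check below (hypothetical name) enforces the two constraints just described.

def is_valid_policy(counts, n):
    # counts[k-1] = number of times bit k is repeated; nonnegative, summing to n
    return all(c >= 0 for c in counts) and sum(counts) == n

# e.g. a budget of n = 10 transmissions spread over the three leading bits
assert is_valid_policy([6, 3, 1], 10)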

The received message is different from the transmitted message due to the channel noise. Here, we use a binary-input channel with the following channel posterior probability:

(7)

where the two conditional output distributions are point mass functions or densities.
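As a concrete special case (the binary symmetric channel with crossover probability eps is our assumption here, not a choice made by the paper; names are hypothetical), the posterior on a single bit after several noisy repetitions can be computed as follows, starting from the Bernoulli(1/2) prior implied by a uniform source.

def bsc_likelihood(y, x, eps):
    # p(y | x) for a binary symmetric channel with crossover probability eps
    return 1.0 - eps if y == x else eps

def bit_posterior(received, eps=0.1, prior=0.5):
    # posterior P(bit = 1 | received outputs) when one bit is sent len(received) times
    like1, like0 = prior, 1.0 - prior
    for y in received:
        like1 *= bsc_likelihood(y, 1, eps)
        like0 *= bsc_likelihood(y, 0, eps)
    return like1 / (like1 + like0)

# the bit 1 was sent three times and the channel flipped one of the outputs
print(bit_posterior([1, 1, 0], eps=0.1))  # 0.9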

The decoder is designed to minimize the end-to-end mean square error (MSE) between the decoded message and the original message.

The communication system of interest is shown in Figure 1. Here, the source encoder and the channel encoder are two separate entities, but there is a tight connection between them. This type of joint source-channel coding has been addressed in [goertz2007joint]. This paper considers only non-adaptive transmission policies. Non-adaptive policies are policies that use one-way communication between sender and receiver. Adaptive policies, on the other hand, correspond to a system in which a noiseless feedback channel provides the sender with the results of the previous transmissions [horstein1963sequential, waeber2013probabilistic].

Figure 1: The general transmission interface of a continuous message.

3 End-To-End Distortion

After the transmission of n bits, the decoded message differs from the source message due to the quantization and transmission distortion. The MSE distortion at step n is defined as:

(8)

here, the expectation is the statistical expectation and the conditioning is on the history of the transmissions of the previous bits.

The squared error distortion of Eq. (8) is minimized when the decoded message is the conditional expectation of the source message given the history, which leads to the minimum end-to-end distortion:

(9)

For a given history, the bits of the message remain independent, and the minimum end-to-end distortion is therefore, using (5), simplified as:

(10)

where the random variables in the sum are the bits of the message in the binary representation of Eq. (5). Since each bit in the above sum is weighted differently, an unequal bit error protection transmission pattern is desired. The remaining variable denotes the number of times each bit has been transmitted within the first n transmissions.
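To make the structure of (10) explicit, here is a short derivation sketch; the symbols B_k for the k-th bit and H_n for the history, as well as the restriction to a uniform source, are our notational choices for illustration. Since the decoder is the posterior mean,

\hat{X}_n = \mathbb{E}[X \mid \mathcal{H}_n] = \sum_{k \ge 1} 2^{-k}\, \mathbb{E}[B_k \mid \mathcal{H}_n],
\qquad
X - \hat{X}_n = \sum_{k \ge 1} 2^{-k} \bigl( B_k - \mathbb{E}[B_k \mid \mathcal{H}_n] \bigr),

and, because the bits are independent under the uniform prior and every answer concerns a single bit, they remain conditionally independent given the history; the cross terms then vanish and

D_n^{*} = \mathbb{E}\bigl[(X - \hat{X}_n)^2\bigr] = \sum_{k \ge 1} 2^{-2k}\, \mathbb{E}\bigl[\operatorname{Var}(B_k \mid \mathcal{H}_n)\bigr].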

We now provide bounds on the distortion achieved by any such transmission scheme, or equivalently by any non-adaptive policy using the dyadic questions. The proof involves classical large deviation inequalities and is provided in the Appendix.

Figure 2: The distortion is bounded from above and below for all transmission policies, for a fixed number of transmissions and a fixed maximum depth.
Theorem 1

For any transmission pattern, the minimum end-to-end distortion is bounded as follows:

(11)

where,

(12)

is the Chernoff information bound [cover2012elements] and

(13)

Figure 2 provides an illustration of Theorem 1. We show the distortion as well as the lower and upper bounds for a fixed number of questions and a fixed number of source bits. The channel is binary with given transition probabilities, for which the constants of Theorem 1 can be computed explicitly. Only finitely many transmission patterns are possible; the distortion and the bounds are plotted in the figure for each of them, with the policies shown on the x axis, and the experiments are averaged over a large number of iterations. The policy which minimizes the distortion consists in sending the first bit 6 times, the second 3 times, and the third once.
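This experiment can be reproduced in spirit with a short Monte Carlo simulation. The budget of 10 transmissions over 3 source bits is taken from the pattern described above; everything else below (the crossover probability of the assumed binary symmetric channel, the number of iterations, the function names) is our own choice, so the minimizing pattern found may differ from the one reported in the paper.

import itertools, random

def simulate_distortion(pattern, eps=0.2, iters=5000, seed=0):
    # Monte Carlo estimate of the end-to-end MSE for a repetition pattern.
    # pattern[k] = number of times bit k+1 of a depth-len(pattern) uniform source
    # is sent over a BSC(eps); the decoder is the bit-wise posterior mean.
    rng = random.Random(seed)
    depth, total = len(pattern), 0.0
    for _ in range(iters):
        bits = [rng.randint(0, 1) for _ in range(depth)]
        x = sum(b * 2.0 ** -(k + 1) for k, b in enumerate(bits))
        x_hat = 0.0
        for k, (b, reps) in enumerate(zip(bits, pattern)):
            outputs = [b if rng.random() > eps else 1 - b for _ in range(reps)]
            l1 = l0 = 0.5  # Bernoulli(1/2) prior on each bit
            for y in outputs:
                l1 *= (1 - eps) if y == 1 else eps
                l0 *= eps if y == 1 else (1 - eps)
            x_hat += 2.0 ** -(k + 1) * l1 / (l1 + l0)
        total += (x - x_hat) ** 2
    return total / iters

n, depth = 10, 3
patterns = [p for p in itertools.product(range(n + 1), repeat=depth) if sum(p) == n]
best = min(patterns, key=simulate_distortion)
print(best)  # the best repetition pattern under these assumed settings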

4 Aurelian Coding Scheme

The optimal non-adaptive policies are the ones which minimize the end-to-end distortion. However, it is difficult to find such optimum transmission patterns from Eq. (10). Instead, we define the efficient policies as follows:

Definition 1

An efficient policy is a policy that minimizes the upper bound of the end-to-end distortion, subject to the constraint that the transmission counts are nonnegative integers summing to the total number of transmissions.

As Figure 2 shows, the upper distortion bound can have many local minima. Solving such an integer minimization problem might be hard directly. However, we can characterize some properties of an efficient transmission policy.

Lemma 1

For any efficient transmission policy,

  • there is no gap between bit indices selected for transmission, i.e., every bit index up to the last non-zero transmission bit index is transmitted at least once.

  • for any transmitted bit index, the difference between the transmission counts of the corresponding bits is bounded from above and below:

    (14)

    where the bounding constants depend on the channel.

These properties are essential to derive the following lower bound on the minimum end-to-end distortion for all efficient policies.

Theorem 2

The logarithm of the minimum end-to-end distortion of any efficient non-adaptive transmission policy goes to minus infinity at a well-specified rate; more specifically,

for each sequence of efficient policies, where

Even though we derived some important necessary properties of policies that achieve the infimum of the upper bound, we did not provide an explicit efficient non-adaptive transmission policy. Our strategy is the following: we propose a policy, called the Aurelian policy, derived from the minimization of (12) over sequences of real numbers (of course, the resulting values are then rounded so as to obtain a sequence of integers).

Definition 2

Aurelian Policy:

(15)
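To convey the flavour of the continuous minimization behind this definition, here is a sketch under an assumed stand-in objective: replace the upper bound of Theorem 1 by \sum_k 2^{-2k} e^{-C n_k}, with C the Chernoff-type constant of Theorem 1, and relax the counts n_k to nonnegative reals summing to n. The symbols and the exact form of the objective are our assumptions, not the paper's expressions.

\min_{n_1, \dots, n_{\bar m} \ge 0} \; \sum_{k=1}^{\bar m} 2^{-2k} e^{-C n_k}
\quad \text{s.t.} \quad \sum_{k=1}^{\bar m} n_k = n .

Setting the derivative of the Lagrangian \sum_k 2^{-2k} e^{-C n_k} + \lambda \sum_k n_k to zero gives C\, 2^{-2k} e^{-C n_k} = \lambda for every transmitted bit, hence

n_k = \frac{1}{C} \Bigl( \log \frac{C}{\lambda} - 2k \log 2 \Bigr)_{+},

so the real-valued repetition counts decrease linearly in the bit index (by 2\log 2 / C per bit) until they reach zero, with \lambda chosen so that they sum to n; rounding such a profile to integers yields an allocation of the Aurelian type.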

At this stage, even if the choice of the Aurelian policy is based on good intuitive reasons (the minimization of the continuous problem), we do not have any guarantee on the rate of convergence to zero of the distortion when this policy is used, compared to the rate of convergence of an efficient non-adaptive transmission policy. Theorems 2 and 3 address this problem. First, we derive an upper bound for the rate of convergence to zero of the distortion of the Aurelian policy. Having this rate, we then show that it is comparable to the distortion rate of an efficient non-adaptive transmission policy. In other words, we can conclude that we do not lose too much by following an Aurelian policy instead of deriving an explicit efficient non-adaptive transmission policy.

Theorem 3

The logarithmic rate of convergence of the Aurelian policy is bounded above by an explicit constant. More precisely,

(16)

where the constant is made explicit in the Appendix.

And finally,

Corollary 1

If we denote the distortion rate of the Aurelian policy and the distortion rate of any efficient non-adaptive transmission policy, respectively, then we have:

(17)

for the constants given in Theorem 2 and Theorem 3, respectively.

The blue curve in Figure 3 shows the asymptotic behavior of the Aurelian policy. As this figure shows, after a moderate number of transmissions, the receiver is able to decode the message with very high confidence. The order of the rate of convergence is also worth noting.

Figure 3: The normalized distortion versus the number of transmissions, together with the lower bound, i.e., the constant provided in Theorem 2.

5 Non-Uniform Distribution

Note that the lower bound of Theorem 2 generalizes to random variables for which the cumulative distribution is Lipschitz continuous. Indeed, in this case, by definition, there exists a constant such that

(18)

The cumulative distribution of any continuous random variable, evaluated at that variable, is a uniform random variable on the unit interval. Instead of locating the original target, we can therefore search for the transformed target. Let the estimated location of the transformed target after a given number of steps be given, and map it back through the inverse cumulative distribution to define the estimate of the original target.

The lower bound of the distortion is then derived using Theorem 2.
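A sketch of that transfer, with symbols introduced here for illustration only (F for the cumulative distribution with Lipschitz constant L, U = F(X) the transformed target, and \hat{U}_n = F(\hat{X}_n)): since |U - \hat{U}_n| = |F(X) - F(\hat{X}_n)| \le L\,|X - \hat{X}_n|, we get

\mathbb{E}\bigl[(X - \hat{X}_n)^2\bigr] \;\ge\; \frac{1}{L^{2}}\, \mathbb{E}\bigl[(U - \hat{U}_n)^2\bigr],

and the right-hand side is lower bounded by Theorem 2 because U is uniform on [0, 1]; the constant 1/L^2 does not affect the logarithmic rate.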

6 Conclusion

The problem of noisy target localization under mean square distortion was addressed through an iterative binary question/answering process. Among non-adaptive policies, the policies involving dyadic questions were considered. The asymptotic behavior of the logarithm of the mean square distortion was characterized for a large enough number of questions, and an explicit policy achieving this rate was exhibited. We finally conjecture that this policy is optimal among all non-adaptive policies.

References

7 Appendix

7.1 Proof of Theorem 1:

7.1.1 Upper bound

Proof: We first prove the following inequality:

(19)

the upper bound is then derived by combining this inequality with an elementary fact relating a random variable to a bounding scalar. Note:

where the estimate used is the mode of the corresponding posterior. The above equation can be simplified further as:

where,

here, the indicator variable marks whether a given bit is sent at a given transmission step. The second equality holds for any positive value of the free parameter. The third inequality is derived using the Markov inequality, and the final equality holds by definition. The fact that the transmissions are independent and identically distributed is also used to compute the expectation in the last inequality.

7.1.2 Lower bound

Next, let us define the posterior function.

Lemma 2

The posterior density on a bit is unchanged at any step in which that bit is not sent. Otherwise, the posterior is a function of the posterior at the previous transmission step:

(20)

Proof: First note that, since the bits are independent random variables, sending a bit other than the one of interest does not change its posterior, which gives the first case. On the other hand, whenever the bit of interest was sent through the channel at the given transmission step:

where,

(21)
Lemma 3

If

(22)

Proof: For .

Corollary 2

After transmission of bits,

(23)

where, .

Proof: Using Lemma 3 recursively:

Taking expectations on both sides and using Jensen's inequality:

where the last equality follows from the fact that each bit is sent at only a limited number of steps and that these transmissions are i.i.d.

7.2 Optimal Properties

Property 1: Assume there is a bit index with no transmissions, and consider the first larger bit index with a positive transmission count. Let a new policy be constructed from the earlier policy such that

(24)

we claim:

The claim follows since the weight of the lower bit index in the upper bound is larger.

Let us call a move the operation that derives a new transmission policy from an existing one by decreasing the transmission count of one bit index by one and increasing that of another by one;

here it is assumed that the decremented count is positive, otherwise the above move cannot be defined. An efficient transmission policy is such that no further move can lead to a transmission policy with a lower value of the upper distortion bound. This is the key idea used to prove the following lemma:

Property 2: Let a coding policy be given and let a second policy be derived from it by a move between two bit indices. This move reduces the upper bound value if a certain condition holds; simplification of both sides of that condition proves the following lower bound inequality

Similarly, the other bound can be derived by considering the situations in which a move in the opposite direction reduces the upper bound.

Corollary 3

Using the above properties, we can derive the following bounds for the last transmitted bit index and for the transmission count of the first bit:

The proof is straightforward using the above properties and the fact that the transmission counts sum to the total number of transmissions.

7.3 Asymptotic Behavior of the Infimum of the Efficient Policies

Let an efficient policy be given. For such a policy,

(25)

where the displayed simplification follows from the definitions above. Using Corollary 3:

(26)

introducing a suitable substitution and using the upper bound on the last transmitted bit index,

(27)

where the constant is defined accordingly, which completes the proof.

7.4 Asymptotic Behavior of the Supremum of the Aurelian Policies

Consider an Aurelian policy. For such a policy,

(28)

For , since :

(29)

where .