I. Introduction
Consider the acquisition of a single target over a search region of width and resolution up to width . Mathematically, this is the problem of estimating a unit vector via a sequence of noisy linear measurements (1)
where a binary measurement vector denotes the locations inspected and the vector denotes the additive measurement noise per location. More generally, the observation at time can be written as
(2) 
where is a noise term whose statistics are a function of the measurement vector . The goal is to design the sequence of measurement vectors , such that the target location W is estimated with high reliability, while keeping the (expected) number of measurements as low as possible.
In this paper, we first consider the linear model (1) when the elements of are i.i.d. Gaussian with zero mean and variance . This means that the noise terms in (2) are distributed as , and we show that the problem of searching for a target under measurement dependent Gaussian noise is equivalent to channel coding over a binary additive white Gaussian noise (BAWGN) channel with state and feedback (Section 4.6 in [1]). This allows us not only to retrofit known channel coding schemes based on sorted Posterior Matching (sort PM) [2] as adaptive search strategies, but also to obtain information theoretic converses that characterize fundamental limits on the target acquisition rate under both adaptive and nonadaptive strategies. As a corollary to the nonasymptotic analysis of our sort PM based adaptive strategy and our converse for nonadaptive strategies, we obtain a lower bound on the adaptivity gain.
I-A. Our Contributions
Our main results are inspired by the analogy between target acquisition under measurement dependent noise and channel coding with state and feedback. This connection was utilized in [3] under a Bernoulli noise model. In this paper, in Proposition 1, we formalize the connection between our target acquisition problem with Gaussian measurement dependent noise and channel coding over a BAWGN channel with state. Here, the channel state denotes the variance of the measurement dependent noise . Feedback codes, i.e., codes that adapt the codeword to past channel outputs, are known to increase the capacity of a channel with state. This motivates us to use adaptivity when searching, i.e., to utilize past observations when selecting the next measurement vector . Furthermore, this information theoretic perspective allows us to quantify the increase in the adaptive target acquisition rate. Our analysis of the improvement in the target acquisition rate, as well as of the adaptivity gain, measured as the reduction in the expected number of measurements when using an adaptive strategy over a nonadaptive one, has two components. First, we utilize an information theoretic converse for an optimal nonadaptive search strategy to obtain a nonasymptotic lower bound on the minimum expected number of measurements required while maintaining a desired reliability. As a consequence, this provides the best nonadaptive target acquisition rate. Second, we utilize a feedback code based on Posterior Matching as a two-stage adaptive search strategy and obtain a nonasymptotic upper bound on the expected number of measurements while maintaining a desired reliability. Together, these two components allow us to characterize a lower bound on the increase in target acquisition rate due to adaptivity.
Our nonasymptotic analysis of the adaptivity gain reveals two qualitatively different asymptotic regimes. In particular, we show that the adaptivity gain depends on the manner in which the number of locations grows. We show that the adaptivity gain grows logarithmically in the number of locations, i.e., when refining the search resolution ( going to zero) while keeping the total search width fixed. On the other hand, we show that as the search width expands while keeping the search resolution fixed, the adaptivity gain grows in the number of locations as .
The problem of searching for a target under binary measurement dependent noise, whose crossover probability increases with the weight of the measurement vector , was studied in [3] and analyzed under the sort PM strategy in [2]. In particular, [3] and [2] provide an asymptotic analysis of the adaptivity gain for the case where and approaches zero. Our prior work [4], by utilizing a (suboptimal) hard decoding of the Gaussian observation , strengthens [3] and [2] by also accounting for the regime in which grows. While the analysis in [4] strengthens the nonasymptotic bounds in [2] with Bernoulli noise, it fails to provide a tight analysis for our problem with Gaussian observations. In this paper, by strengthening our analysis in [4], we extend the prior work in three ways: (i) we consider the soft Gaussian observation , (ii) we obtain nonasymptotic achievability and converse analyses, and (iii) we characterize a tight nonasymptotic adaptivity gain in the two asymptotically distinct regimes of and .

I-B. Applications
Our problem formulation addresses two challenging engineering problems which arise in the context of modern communication systems. We discuss these problems in the following examples and then provide details of the state of the art.
Example 1 (Establishing initial access in mmWave communication).
Consider the problem of detecting the direction of arrival for initial access in millimeter wave (mmWave) communications. In mmWave communication, prior to data transmission, the base station is tasked with aligning the transmitter and receiver antennas in the angular space. In other words, the base station’s antenna pattern can be viewed as a measurement vector searching the angular space . At each time , the noise intensity depends on the base station’s antenna pattern , and the noisy observation is a function of the measurement dependent noise . Here it is natural to characterize the fundamental limit on the measurement time as a function of asymptotically small .
Example 2 (Spectrum Sensing for Cognitive Radio).
Consider the problem of opportunistically searching for a vacant subband of bandwidth over a total bandwidth of . In this problem, a secondary user desires to locate the single stationary vacant subband quickly and reliably by making measurements at every time . At each time instant , the noise intensity depends on the number of subbands probed, as dictated by , and the noisy observation is a function of the measurement dependent noise . Here it is natural to characterize the fundamental limit on the measurement time required for a secondary user to acquire the vacant subband as a function of the asymptotically large bandwidth .
Giordani et al. [5] compare exhaustive search, such as the Sequential Beamspace Scanning considered by Barati et al. [6], where the base station sequentially searches through all angular sectors, against a two-stage iterative hierarchical search strategy. In the first stage, an exhaustive search identifies a coarse sector by repeatedly probing each coarse region until a predetermined SNR is achieved. In the second stage, an exhaustive search over all locations identifies the target. Giordani et al. show that, in general, the adaptive iterative strategy reduces the number of measurements over exhaustive search, except when the desired SNR is too high, forcing the number of measurements required at each stage to become too large. We observe this through our simulations in Section VI-A. In fact, as confirmed by our simulations, random-coding-based nonadaptive strategies, including the AgileLink protocol [7], outperform the repetition-based adaptive strategies.
Past literature on spectrum sensing for cognitive radio [8, 9, 10] and support vector recovery [11, 12] has focused on the problem where can be real or complex, with measurement independent noise, applying both exhaustive search and multiple adaptive search strategies. In contrast, our work considers a simple binary model, , but captures the implications of measurement dependence of the noise, which is known in the spectrum sensing literature as noise folding. The problem of measurement dependent noise (noise folding) has been investigated in [13], where the nonadaptive design of a complex measurement matrix satisfying the RIP condition is studied. Our work complements this study by characterizing the gain associated with adaptively addressing the measurement dependent noise, albeit for the simpler case of binary measurements. We note that the case of adaptively finding a subset of a sufficiently large vacant bandwidth with noise folding is considered in [14], where ideas from group testing and noisy binary search have been utilized. These solutions, however, depend strongly on the availability of a sufficiently large consecutive vacant band and do not apply to our setting.
Notations: Vectors are denoted by boldface letters and denotes the element of a vector. Matrices are denoted by overlined boldface letters. Let denote the set . Bern denotes the Bernoulli distribution with parameter , and denotes the entropy of a Bernoulli random variable with parameter . Let denote the pdf of a Gaussian random variable with mean and variance evaluated at . Logarithms are to base 2. Let if , and otherwise.

II. Problem Setup
In this section, we describe the mathematical formulation of the target acquisition problem followed by the performance criteria.
II-A. Problem Formulation
We consider a search agent interested in quickly and reliably finding the true location of a single stationary target by making measurements over time about the target’s presence. In particular, we consider a total search region of width that contains the target in a location of width . In other words, the search agent is searching for the target’s location among total locations. Let denote the true location of the target, where if and only if the target is located at location . The target location is distributed uniformly at random over its possible values and remains fixed during the search. A measurement at time is given by a vector , where if and only if location is probed. Each measurement can be imagined to result in a clean observation indicating the presence of the target in the measurement vector . However, only a noisy version of the clean observation is available to the agent. The resulting noisy observation is given by the following linear model with additive measurement dependent noise
(3) 
Here, we assume , which corresponds to i.i.d. white Gaussian noise, where denotes the noise variance per unit width. Conditioned on the measurement vector , the noise is independent over time.
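Since the inline symbols of the model (3) are elided in this extraction, the following minimal sketch fixes one concrete reading: the target is a one-hot vector, the measurement is a binary vector, and the noise variance scales linearly with the probed width. The names `measure`, `sigma2_per_width`, and `delta` are our own illustrative choices, not the paper's notation.

```python
import numpy as np

def measure(theta, x, sigma2_per_width, delta, rng):
    """One noisy observation under the linear model (3) (sketch).

    theta: one-hot target location vector of length n.
    x: binary measurement vector (1 = location probed).
    Assumption: the noise variance scales linearly with the probed
    width, i.e. var = sigma2_per_width * delta * |x|.
    """
    clean = float(x @ theta)                        # 1 iff the target is probed
    var = sigma2_per_width * delta * float(x.sum())
    return clean + rng.normal(0.0, np.sqrt(var))

rng = np.random.default_rng(0)
n = 8
theta = np.zeros(n); theta[3] = 1.0                 # target at location 3
x = np.zeros(n); x[2:5] = 1.0                       # probe locations 2, 3, 4
y = measure(theta, x, sigma2_per_width=1.0, delta=0.5, rng=rng)
```

With `sigma2_per_width = 0` the observation reduces to the clean signal, which serves as a quick sanity check on the model.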
A search consisting of measurements can be represented by a measurement matrix which yields the observation vector . At any time instant , the agent selects the measurement vector in general as a function of the past observations and measurements. Mathematically,
(4) 
for some causal (possibly random) function . Given the noisy observations and the measurement matrix , the agent estimates the target location as follows
(5) 
for some decision function . The probability of error for a search is given by and the average probability of error is given by .
Now we define the measurement strategy:
Definition 1 (Reliable Search Strategy ).
For some , an reliable search strategy, denoted by , is defined as a sequence of (possibly random) number of causal functions , according to which the measurement matrix is selected, and a decision function which provides an estimate of , such that the average probability of error is at most .
Definition 2 (Achievable Target Acquisition Rate).
A target acquisition rate is said to be achievable if, for any small and large enough, there exists an reliable search strategy such that the following holds
(6)  
(7) 
A target acquisition rate is said to be an achievable target acquisition rate if it is achievable for all .
The above definition is motivated by the information theoretic notion of transmission rate over a communication channel, which captures the exponential rate at which the number of messages grows with the number of channel uses while the receiver can still decode with a small average error probability. Similarly, the target acquisition rate captures the exponential rate at which the number of target locations grows with the number of measurement vectors while a search strategy can still locate the target with a diminishing average error probability.
Definition 3 (Target Acquisition Capacity).
The supremum of achievable target acquisition rates is called the target acquisition capacity.
II-B. Types of Search Strategies and Adaptivity Gain
Each measurement vector and the total number of measurements can be selected either based on the past observations or independently of them. Based on these two choices, strategies can be divided into four types: i) fixed length versus variable length, depending on how the number of rows of the measurement matrix is determined, and ii) adaptive versus nonadaptive. A fixed length reliable strategy uses a fixed number of measurements, predetermined offline independently of the observations, to obtain the estimate . On the other hand, a variable length reliable strategy uses a random number of measurements (possibly determined as a function of the observations ) to obtain the estimate . For example, can be selected such that the agent achieves in every search, and hence is a random variable which is a function of the past noisy observations. Under an adaptive strategy, the agent designs the measurement vector as a function of the past observations , i.e., is a function of both and .
Definition 4.
Let be a class of all reliable adaptive strategies.
Under a nonadaptive strategy, the agent designs the measurement vector offline, independently of past observations, i.e., does not depend on or .
Definition 5.
Let be a class of all reliable nonadaptive strategies.
For any reliable strategy , the performance is measured by the expected number of measurements . To achieve better reliability, i.e., a smaller , the agent in general requires a larger .
Definition 6 (Adaptivity Gain).
The adaptivity gain is defined as the largest reduction in the expected number of measurements when searching with an reliable adaptive strategy over an reliable nonadaptive strategy . Mathematically, it is given as
(8) 
Hence, characterizing the adaptivity gain allows us to characterize the improvement in target acquisition rate when using adaptive strategies over nonadaptive strategies.
III. Preliminaries: Channel Coding with State and Feedback
In this section, we review the fundamentals of channel coding with state and feedback and the relevant literature, connecting these information theoretic concepts to the problem of searching under measurement dependent noise discussed in the previous section. The aim is to formulate an equivalent model of channel coding with state and feedback for comparison with (3).
A communication channel is specified by a set of inputs , a set of outputs , and a channel transition probability measure for every and , which expresses the probability of observing a certain output given that an input was transmitted [15]. Throughout this work, we concentrate on coding over a channel with state and feedback (Section 4.6 in [1]). Formally, at time , the channel state belongs to a discrete and finite set . We assume that the channel state is known at both the encoder and the decoder. For a channel with state, the transition probability at time is specified by the conditional probability assignment . Transmission over such a channel is shown in Figure 1. In general, the channel state at time evolves as a function of all past outputs and all past states,
(9) 
The goal is to encode and transmit a uniformly distributed message over the channel. The encoding function at any time depends on the message to be transmitted , all past states, and all past outputs. Thus the next symbol to be transmitted is given by (10)
The encoder obtains the past outputs from the decoder via a noiseless feedback channel from the decoder to the encoder. In this paper, we assume that both the encoder and the decoder know the evolution of the channel state, i.e., the sequence . After channel uses, the decoder uses the noisy observations and the state information to find the best estimate of the message . The probability of error at the end of the message transmission is given by and the average probability of error is given by .
Example 3 (Binary Additive White Gaussian Noise Channel with State and Feedback).
Consider a binary additive white Gaussian noise (BAWGN) channel whose noisy output is given as the sum of the input and a Gaussian random variable whose distribution is a function of the channel state . Specifically, is a Gaussian random variable with state dependent noise variance for some . In other words, we have
(11) 
where , and the state evolves as . Transmission over a BAWGN channel is illustrated in Figure 2.
Proposition 1.
The problem of searching under measurement dependent Gaussian noise is equivalent to the problem of channel coding over a BAWGN channel with state and feedback. Specifically,

The true location vector can be cast as a message to be transmitted over the BAWGN channel. Therefore, there are possible messages.

An reliable search strategy provides a sequence such that . Hence, setting for all , the search strategy dictates the evolution of the channel states .

The measurement matrix can be used as the codebook, i.e., by setting . Specifically, codewords are obtained by setting .

The measurement vector fixes the channel transition probability measure as , since the noise distribution is for . Hence, the channel state depends on the measurement vector.
A coding scheme for a channel with state and feedback can double as a search strategy. This general approach of searching using channel codes provides an efficient way to design and compare nonadaptive and adaptive search strategies. It also reflects the fact that feedback can improve the capacity of a channel with state, which is what we characterize as the adaptivity gain for the problem of searching under measurement dependent noise.
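As a hedged illustration of the mapping in Proposition 1, the sketch below treats the rows of a measurement matrix as codeword symbols: the channel input at time t is the entry of row t at the target's index, and the state is the probed fraction, which sets the noise variance. All names (`search_as_channel_code`, `sigma2_per_width`, `delta`) and the linear variance scaling are our own reading, not the paper's notation.

```python
import numpy as np

def search_as_channel_code(X, target_idx, sigma2_per_width, delta, rng):
    """Proposition 1 as code (sketch): the measurement matrix X acts as
    the codebook for the message target_idx; the channel input at time t
    is X[t, target_idx] (1 iff the target is probed), and the state is
    the fraction of locations probed, which fixes the noise variance."""
    n = X.shape[1]
    states = X.sum(axis=1) / n                        # q_t: fraction probed
    inputs = X[:, target_idx].astype(float)           # channel inputs
    noise_sd = np.sqrt(sigma2_per_width * delta * X.sum(axis=1))
    outputs = inputs + rng.normal(size=X.shape[0]) * noise_sd
    return states, inputs, outputs
```

Setting the variance parameter to zero recovers a noiseless channel, a quick check that the mapping preserves the clean signal.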
Definition 7.
The BAWGN capacity with input distribution and noise variance is defined as
(12) 
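The mutual information behind Definition 7 can be evaluated numerically. The sketch below computes I(X;Y) in bits for a Bern(q) input on {0,1} and additive N(0, σ²) noise as h(Y) − h(N), integrating the output mixture density on a grid; the function name and grid choices are ours, and the paper's definition (12) additionally optimizes over the input distribution.

```python
import numpy as np

def bawgn_mi(q, sigma2):
    """I(X;Y) in bits for Y = X + N(0, sigma2), X ~ Bern(q) on {0,1},
    computed as h(Y) - h(N) by numerical integration (sketch of (12))."""
    grid = np.linspace(-20.0, 21.0, 40001)
    dx = grid[1] - grid[0]
    s = np.sqrt(sigma2)

    def phi(t):  # N(0, sigma2) density
        return np.exp(-t * t / (2.0 * sigma2)) / (s * np.sqrt(2.0 * np.pi))

    p_y = (1.0 - q) * phi(grid) + q * phi(grid - 1.0)  # output mixture density
    p_y = np.clip(p_y, 1e-300, None)                   # avoid log(0)
    h_y = -np.sum(p_y * np.log2(p_y)) * dx             # differential entropy of Y
    h_n = 0.5 * np.log2(2.0 * np.pi * np.e * sigma2)   # differential entropy of N
    return h_y - h_n
```

At low noise (e.g. sigma2 = 0.01) the value approaches 1 bit for q = 1/2, and it decays toward 0 as sigma2 grows, matching the intuition that wider measurements (a worse state) degrade the channel.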
Corollary 1.
From channel coding over a BAWGN channel with state and feedback, we obtain that for any small and large enough, there exists an reliable search strategy such that the following holds
(13)  
(14) 
where follows from Theorem 4.6.1 in [1] and follows by combining the fact that the best channel is obtained when the noise variance is smallest, i.e., , with the converse of the noisy channel coding theorem [15].
IV. Main Results
In this section, we characterize a lower bound on the adaptivity gain , i.e., the performance improvement, measured in terms of the reduction in the expected number of measurements, when searching over a width among locations under measurement dependent Gaussian noise.
Theorem 1.
Let . For any reliable nonadaptive strategy searching over a search region of width among locations with number of measurements, there exists an reliable adaptive strategy with number of measurements, such that for some small constant the following holds
where
and is the solution of the following equation
(15) 
The proof of Theorem 1 is obtained by combining Lemma 1 and Lemma 2. Theorem 1 provides a nonasymptotic lower bound on the adaptivity gain. The bound can be viewed as two parts corresponding to two stages. Intuitively, the first part corresponds to the initial stage of the search, where the agent narrows down the target’s location to a coarse fraction of the total search region, i.e., narrows it to a section of width with high confidence. The second stage corresponds to refining the search within one of the coarse sections obtained from the initial stage. This implies that an adaptive strategy can zoom in and confine the search to a smaller section to reduce the noise intensity, whereas a nonadaptive strategy does not adapt to zoom in and thus performs equally in both stages. We formalize this intuition in Lemma 2. Optimizing over the fraction searched in the first stage, we obtain a bound on the expected number of measurements. We obtain the following corollary as a consequence of Theorem 1.
Corollary 2.
Let . For any reliable nonadaptive strategy searching over a search region of width among locations with number of measurements, there exists an reliable adaptive strategy with number of measurements, such that for a fixed the asymptotic adaptivity gain grows logarithmically with the total number of locations,
(16) 
For a fixed , the asymptotic adaptivity gain grows at least linearly with the total number of locations,
(17) 
Furthermore, we have
(18) 
and
(19) 
The proof of the above corollary is provided in Appendix C.
Remark 1.
The above corollary characterizes the two qualitatively different regimes previously discussed. For fixed , as goes to zero, the asymptotic adaptivity gain scales only as , whereas for fixed , as increases, the asymptotic adaptivity gain scales as . In other words, the target acquisition rate improves by a constant for fixed as decreases, while it grows linearly with for fixed . Thus, adaptivity provides a larger gain in target acquisition rate in the regime where the total search width grows than in the regime where we fix the total width and shrink the location widths. In Section VI we relate this phenomenon to the diminishing capacity of the BAWGN channel when the total noise grows.
Next we provide the main technical components of the proof of Theorem 1.
IV-A. Converse: Nonadaptive Search Strategies
Lemma 1.
The minimum expected number of measurements required for any reliable nonadaptive search strategy can be lower bounded as
The proof of Lemma 1 is provided in Appendix A. The proof follows from the fact that the clean signal and the noise are independent over time and independent of past observations for , due to the nonadaptive nature of the search strategy. In the absence of information from past observation outcomes, the agent tries to maximize the mutual information at every measurement. Since and , the mutual information is maximized at .
IV-B. Achievability: Adaptive Search Strategy
Consider the following two-stage search strategy.
IV-B1. First Stage (Fixed Composition Strategy )
We group the locations of width into sections of width . Let denote the true location of the target among the sections of width . Now, we use a nonadaptive strategy to search for the target location among the sections of width . In particular, we use a fixed composition strategy where, at every time instant , the fraction of total locations probed is fixed at . In other words, the measurement vector at every instant is picked uniformly at random from the set of measurement vectors . For ease of exposition, we assume that is an integer. Hence, for this strategy, at every , and . For all , let be the posterior probability of the estimate after the reception of , i.e., , and let . Assume that the agent begins with a uniform probability over the sections, i.e., . The posterior probability at time when is obtained by the following Bayesian update: (20)
where
(21) 
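Since the explicit form of the update (20)-(21) is elided in this extraction, the following sketch shows the standard Bayesian update it describes: the likelihood of an observation under hypothesis j is Gaussian with mean 1 if section j was probed and 0 otherwise. The function names and the noise-variance argument are our own illustrative choices.

```python
import numpy as np

def gaussian_pdf(y, mean, var):
    return np.exp(-(y - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def posterior_update(rho, x, y, noise_var):
    """Bayesian update in the spirit of (20)-(21) (sketch): under
    hypothesis j the clean signal is x[j], so the likelihood of the
    observation y is N(x[j], noise_var); multiply by the prior rho
    and normalize."""
    lik = gaussian_pdf(y, x.astype(float), noise_var)
    post = rho * lik
    return post / post.sum()
```

Probing the true section and observing a value near 1 concentrates the posterior on that section, which is what drives the stopping rule of the first stage.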
Let be the number of measurements used in stage 1. Note that is a random variable. Hence, the first stage is a nonadaptive variable length strategy. The expected stopping time can be upper bounded using Lemma 3 from Appendix B.
IV-B2. Second Stage (Sorted Posterior Matching Strategy )
In the second stage, the agent zooms into the width section obtained from the first stage and uses an adaptive strategy to search only within this section. The agent searches for the target location of width among the remaining locations. In particular, we use the sorted posterior matching strategy proposed in [2], which we describe next. Let denote the true target location of width . For all , let be the posterior probability of the estimate after the reception of , i.e., , and let . Assume that the agent begins with a uniform probability over the locations, i.e., . At every time instant , we sort the posterior values in descending order to obtain the sorted posterior vector . Let the vector denote the corresponding ordering of the location indices in the new sorted posterior. Define
(22) 
We choose the measurement vector such that if and only if . Note that for this strategy, at every , the noise is and the worst noise intensity is . The posterior probability at time when is obtained by the following Bayesian update:
(23) 
where
(24) 
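The selection rule (22) is elided in this extraction; a common form of sorted posterior matching probes the smallest prefix of the descending-sorted posterior whose mass reaches one half, which is what the sketch below implements. The exact threshold rule in the paper may differ, so treat this as an assumption.

```python
import numpy as np

def sortpm_measurement(rho):
    """Sorted posterior matching measurement selection (sketch): sort
    the posterior in descending order and probe the smallest prefix of
    sorted indices whose cumulative mass reaches 1/2."""
    order = np.argsort(-rho)                 # indices by descending posterior
    cum = np.cumsum(rho[order])
    k = int(np.searchsorted(cum, 0.5)) + 1   # smallest prefix with mass >= 1/2
    x = np.zeros_like(rho)
    x[order[:k]] = 1.0
    return x
```

Because the probed set shrinks as the posterior concentrates, the measurement dependent noise variance decreases over time, which is exactly the mechanism behind the adaptivity gain.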
Let be the number of measurements used in stage 2. Note that is a random variable. Hence, the second stage is an adaptive variable length strategy. The expected number of measurements can be upper bounded using Lemma 6 from Appendix B.
Noting that the total probability of error of the two-stage search strategy is less than and that the expected stopping time is , we have the assertion of the following lemma.
Lemma 2.
The minimum expected number of measurements required for the above reliable adaptive search strategy can be upper bounded as
(25) 
Remark 2.
For an reliable adaptive search strategy using the two-stage strategy, the nonasymptotic upper bound provided by Lemma 2 for is tighter than the upper bound provided in [2] using the sorted posterior matching strategy. In fact, for any given , our bound is significantly smaller than the upper bound in [2]. In the asymptotically dominant terms of order , our upper bound closely follows the simulations, as illustrated in Section VI.
V. Extensions and Generalizations
V-A. Generalization to Other Noise Models
The main results presented in this paper consider the setup where the noise is distributed as . In other words, the variance of the noise, given by , is a linear function of the size of the measurement vector . This model assumption holds when each probed location adds noise equally and independently of the other locations probed together with it. In general, due to correlation across locations, the additive noise variance can be assumed to scale as a nondecreasing function of the measurement vector . In this section, we extend our model to a general formulation for the noise , where is a nondecreasing function of ; for example, for some . Figure 3 shows the effect of the noise function on the capacity.
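Under the generalization of this section, only the variance law changes. A minimal sketch, assuming the caller supplies the nondecreasing function; the power-law example is our own illustration, not a form fixed by the paper.

```python
import numpy as np

def noisy_obs_general(theta, x, f, rng):
    """Generalized measurement model of Sec. V-A (sketch): the noise
    variance is f(|x|) for a caller-supplied nondecreasing f."""
    var = f(int(x.sum()))
    return float(x @ theta) + rng.normal(0.0, np.sqrt(var))

# Example nondecreasing variance law (illustrative power law):
f_power = lambda k: 0.5 * k ** 1.5
```

Choosing f linear in the probed width recovers the model (3), so all earlier definitions carry over unchanged.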
Theorem 2.
Let and let be a nondecreasing function. For any reliable nonadaptive strategy searching over a search region of width among locations with number of measurements, there exists an reliable adaptive strategy with number of measurements, such that for some small constant the following holds
where and goes to 0 as .
V-B. Multiple Targets
The problem formulation and the main results of this paper consider the special case in which there exists a single stationary target. Suppose instead that the agent aims to find the true locations of unique targets quickly and reliably. Our problem formulation is easily extended to this general case where multiple targets may exist. In our generalization to multiple targets under the linear noise model (3), the clean signal indicates the number of targets present in the measurement vector . In particular, let be such that if and only if the th location contains the th target. Then, the noisy observation is given as
(26) 
where