Probabilistically Faulty Searching on a Half-Line

02/18/2020 ∙ by Anthony Bonato, et al. ∙ UNIVERSITY OF TORONTO 0

We study p-Faulty Search, a variant of the classic cow-path optimization problem, where a unit speed robot searches the half-line (or 1-ray) for a hidden item. The searcher is probabilistically faulty, and detection of the item with each visitation is an independent Bernoulli trial whose probability of success p is known. The objective is to minimize the worst case expected detection time, relative to the distance of the hidden item to the origin. A variation of the same problem was first proposed by Gal in 1980. Then in 2003, Alpern and Gal [The Theory of Search Games and Rendezvous] proposed a so-called monotone solution for searching the line (2-rays); that is, a trajectory in which the newly searched space increases monotonically in each ray and in each iteration. Moreover, they conjectured that an optimal trajectory for the 2-rays problem must be monotone. We disprove this conjecture when the search domain is the half-line (1-ray). We provide a lower bound for all monotone algorithms, which we also match with an upper bound. Our main contribution is the design and analysis of a sequence of refined search strategies, outside the family of monotone algorithms, which we call t-sub-monotone algorithms. Such algorithms induce performance that is strictly decreasing with t, and for all p ∈ (0,1). The value of t quantifies, in a certain sense, how much our algorithms deviate from being monotone, demonstrating that monotone algorithms are sub-optimal when searching the half-line.

READ FULL TEXT VIEW PDF
POST COMMENT

Comments

There are no comments yet.

Authors

page 1

page 2

page 3

page 4

This week in AI

Get the week's most popular data science and artificial intelligence research sent straight to your inbox every Saturday.

1 Introduction

The problem of searching for a hidden item in a specified continuous domain dates back to the early 1960’s and to the early works of Beck [8] and Bellman [9]. In its simplest form, a unit speed robot (that is, a mobile agent) starts at a known location, the origin, in a known search-domain. An item, sometimes called the treasure or the exit, is located (hidden) at an unknown distance away from the origin, and it can be located by the robot only if it walks over it. What is the robot’s trajectory that minimizes the worst case relative time that the treasure is located, compared to ? This worst case measure of efficiency is known as the competitive ratio of the trajectory. Interestingly, numerous variations of the problem admit trajectories inducing constant competitive ratios. In certain cases, for example, in the so-called linear-search problem where the domain is the line, tight lower bounds are known that require elaborate arguments.

We consider -Faulty Search (FS), a probabilistic version of the classic linear-search problem in which the hidden item lies in a half-line (or -ray), and the item is detected with constant probability (with independent Bernoulli trials) every time the robot walks over the item. This is a special case of a problem first proposed by Gal [28], where the search-domain is the line (or -rays). Natural solutions to the problem are so-called cyclic and monotone search patterns; that is, trajectories that process each direction periodically and where the searched space in each direction expands monotonically. In [3], Alpern and Gal proposed such a solution for searching -rays and they conjectured that an optimal trajectory must be cyclic and monotone. Angelopoulos in [5] extended the upper bound results using cyclic and monotone trajectories for searching -rays. We prove that monotone trajectories are sub-optimal for searching a -ray. We do so first by establishing a lower bound for all monotone algorithms to the problem (which we also match with an upper bound), and second by designing a sequence of non-monotone trajectories inducing increasingly better performance (and deviating increasingly from being monotone).

1.1 Related Work

Search-type problems are concerned with finding a specific type of information placed within a well specified discrete or continuous domain. As a topic, it spans various sub-fields of Theoretical Computer Science and has given rise to a number of book-length treatments [1, 3, 20, 40]. Applications range from data structures and mobile agent computing, to foraging and evolution, among others, for example, see [2, 15, 33, 35, 39].

The problem of searching for a hidden item in one-dimensional domains was first proposed more than 50 years ago by Beck [8] and Bellman [9] in a Bayesian context. In the 1990’s, solutions to basic problem’s variations were rediscovered, for example, see [7, 34]. Since then, several studies of various search-type problems have resulted in an extensive literature. Below we give representative and selective examples, with an attempt to cite relatively recent results. Variations of search-type problems that share many similarities range from the type of search domain (for example, 1 or 2-dimensional [26, 32], -dimensional grid [17], cycle [37], polygons [22], graphs [6], grid [14], -rays [12]), to the number of searchers (1 or more [36]), to the criterion for termination (for example, search, evacuation [13], priority evacuation [19], fetching [30]) to the communication model (for example, wireless or face-to-face [18]) to the type of the objective (for example, minimize worst case or average case [16]) to cost specs (for example, turning costs [25], cost for revisiting [10]), to the measure of efficiency (for example, time, energy [23]) to the knowledge of the input (none or partial [11]) and to other robots’ specs (for example, speeds [21], faults [31], memory [38]), just to name a few. More recently, Fraigniaud et al. considered in [27] a Bayesian search problem in a discrete space, where a set of searchers are trying to locate a treasure placed, according to some distribution, in one of the boxes indexed by positive integers. Since it is outside the scope of this work to provide a comprehensive list of the large related literature, we further refer the interested reader to [3, 4, 24, 29].

The version of linear search that we study, where the searcher is probabilistically faulty, was presented as an open problem by Gal in [28]. Later in [3] (see chapter 8.6.2), Alpern and Gal provided a search strategy when the search domain is a line. In particular, they considered cyclic search trajectories where the robot alternates between searching each of the two directions, and each time monotonically increasing the searched space. Among the same family of algorithms that moreover expand the searched space in each direction geometrically, the authors provided the optimal trajectory. In addition, they conjectured that cyclic and monotone trajectories are in fact optimal. Along the same lines, [5] studied cyclic and monotone trajectories for searching -rays. In a variation of the problem where the hidden item detections are not Bernoulli trials, [5] showed also that cyclic trajectories are in fact sub-optimal. For this and many other variations of probabilistically searching, where the probability of success is not known, optimal strategies remain open.

1.2 Main Contributions & Paper Organization

We introduce and study -Faulty Search (FS), a variation of the classic linear-search (cow-path) problem, in which the search space is the half-line, and detection of the hidden item (treasure) happens with known probability . We are interested in designing search strategies that induce small competitive ratio, as a function of ; that is, that minimize the worst case expected detection time of the hidden item, with respect to its placement , relative to the optimal performance of an algorithm that knows in advance the location of the item (so we normalize the expected performance both by and ).

We focus on two families of search algorithms, which indicate that optimal solutions to FS may be particularly challenging to find. First, we study a natural family of algorithms, that we call monotone algorithms, which intuitively are determined by non-decreasing turning points where searcher returns to the origin before expanding the searched space. Given that turning points increase geometrically; that is, when , relatively straightforward calculations determine the optimal expansion factor . In fact, a simplified argument shows that in the cow-path problem (that is, when the search space consists of -rays and ) the optimal expansion factor is . A more tedious argument (and one of our technical contributions), as in the cow-path problem, shows that the aforementioned choice of geometrically increasing ’s for FS is in fact optimal among the family of monotone algorithms. Our main technical contribution pertains to the design and analysis of a family of algorithms that we call -sub-monotone, which provide a sequence of refined search strategies which induce competitive ratios that strictly decrease with , for every . Somehow surprisingly, our findings show that plain-vanilla, and previously considered, algorithms for FS are sub-optimal.

The organization of our paper is as follows. In Section 2, we define problem FS formally, we introduce measures of efficiency and we complement with preliminary and important observations. Section 3 studies the special family of monotone search algorithms. In particular, in Section 3 we propose and analyze a specific monotone algorithm where turning points increase geometrically. Section 3.2 contains one of our technical contributions, in which we prove that the monotone algorithm presented in the previous section is in fact optimal within the family. Our main technical contribution is in Section 4, which introduces and studies the family of -sub-monotone algorithms. Performance analysis of the family of algorithms is presented in Section 4.1. In Section 4.2, we propose a systematic method for choosing parameters for the -sub-monotone algorithm with the objective to minimize their competitive ratio. Our formal findings are evaluated in Section 4.3, where we demonstrate the sequence of strictly improved competitive ratios by -sub-monotone algorithms when . As our proposed parameters for the algorithms are obtained as the roots to high degree () polynomials, are results, for the most part, cannot be described by closed formulas. However, in Section 4.4

, we selectively discuss heuristic choices of the parameters that induce nearly optimal search strategies and whose performance can be quantified by closed formulas. We also quantify formally the boundaries of

-sub-monotone algorithms, and we show that the competitive ratio of our -sub-monotone is off additively by at most from the best performance we can achieve by letting grow arbitrarily. In the final section, we conclude with open problems.

2 Problem Definition and Preliminary Observations

In -Faulty Searching on a Halfline (FS) a speed-1 searcher (or robot) is located at the origin of the infinite half-line. At unknown distance bounded away from the origin, which bound we set arbitrarily to 1, there is an item (or treasure) which is located/detected by the robot with constant and known probability every time the robot passes over it (that is, detection trials are mutually independent and each has probability of success ). Also, for the sake of simplifying the analysis, we assume that the probability of detection becomes 1 if the treasure is placed exactly at a point where the robot changes direction. As we will see later, the worst placements of the treasure will be proven to be arbitrarily close to the turning points.

Given a robot’s trajectory , probability and distance , the termination time is defined as the expected time that the robot detects the treasure for the first time. Feasible solution to FS are robot’s trajectories that induce bounded termination time (as a function of ) for all and for all .

Note that is part of the input to an algorithm for FS, while is unknown. Hence, trajectories may depend on but not on . It is also evident that for a robot’s trajectory to induce bounded termination time for all treasure placements, the robot needs to visit every point of the half-line, past point 1, infinitely many times. As it is also common in competitive analysis, we measure the performance of a search strategy relative to the optimal offline algorithm; that is, an algorithm that knows where the treasure is. Since such an algorithm needs to travel for time to reach the treasure, as well as one would need trials, in expectation, before detecting it, we are motivated to introduce the following measure of efficiency for search trajectories.

Definition 2.1.

The competitive ratio of search strategy for FS is defined as

Trajectory solutions (or search strategies) to problem FS are in correspondence with infinite sequences of turning points, satisfying , , and , for all . Indeed such a sequence corresponds to the trajectory in which robot moves from to (moving away from the origin), and from to (moving toward the origin), each time changing direction of movement, where .

For search strategy and treasure location (except from the turning points of ), let denote the time till the robot passes over the treasure for the ’th time. Since the probability of successfully detecting the treasure is , we have In what follows, we express the expected termination time with respect to the additional time between two visitations of the treasure.

Lemma 2.2.

Let , and let . We then have that

Proof.

Note that for each we have . We then have that

and the proof follows. ∎

3 Monotone Trajectories

We explore the simplest possible trajectories for FS in which the searcher repeatedly returns to the origin every time she changes direction during exploration and before exploring new points in the half-line. More formally, monotone trajectories for FS are search algorithms , defined asAlternatively, we could have defined monotone trajectories so as to return to location 1, instead of the origin, since we know that . Our analysis next shows that such a modification would not improve the competitive ratio. where is a strictly increasing sequence with . Note that, in particular, we allow . The present section is devoted into determining the best monotone algorithm for FS. More specifically, we prove the following.

Theorem 3.1.

The optimal monotone algorithm for FS has competitive ratio .

The proof of Theorem 3.1 is given in the next two sections. In Section 3.1 we propose a specific monotone algorithm with the aforementioned performance (see Lemma 3.3), while in Section 3.2 we show that no monotone algorithm performs better (see Lemma 3.4). Somewhat surprisingly we show in Section 4 that the upper bound of Theorem 3.1 is in fact sub-optimal.

3.1 An Upper Bound Using Monotone Trajectories

In this section we propose a specific monotone algorithm with the performance promised by Theorem 3.1. In particular, we consider “restricted” trajectories determined by increasing sequences , where and . Within this sub-family, we determine the optimal choice of that induces the smallest competitive ratio. For this, we first determine the placements of the treasure that induce the worst competitive ratio, given a search trajectory. As stated before, in the following analysis we make the assumption that the treasure is not placed at any turning point.

Lemma 3.2.

Consider a monotone algorithm , determined by the strictly increasing sequence . If the treasure appears in interval , then the competitive ratio is no more than

Proof.

Suppose that the treasure is located at point , where . With that notation in mind (see also Figure 1), we compute the time intervals between consecutive visitations, as they were defined in Lemma 2.2. We have that

Figure 1: Monotone algorithm . Figure also depicts the first 5 visitations of the treasure that is placed at .

Therefore, by Lemma 2.2 the expected termination time for algorithm is

Recall that the competitive ratio of this algorithm is , and hence, in the worst case, approaches from the right.  ∎∎

We are now ready to prove the promised upper bound.

Lemma 3.3.

The monotone trajectory , where and has competitive ratio .

Proof.

We study the restricted family of monotone trajectories , where , for some . By Lemma 3.2, the competitive ratio of search strategy is at most

(1)

Calculations above assume that , as otherwise, the second summation is divergent. We will make sure later that our choice of complies with this condition. Note also that for to be increasing, we need . Now, denote expression (1) by . We will determine the choice of that minimizes , given that .

It is straightforward to see that , and hence, is convex when . Hence, if has a root in , that would be a minimizer. Indeed,

has two roots , one being positive and one negative (for all values of ). We choose the positive root, that we call , and it is elementary to see that , for all , as wanted. Substituting in (1) gives the competitive ratio promised by the statement of the lemma. ∎

3.2 Lower Bounds for Monotone Trajectories

This section is devoted to proving the following lemma.

Lemma 3.4.

Every monotone trajectory has competitive ratio at least .

Consider an arbitrary monotone algorithm , where is a monotone sequence tending to infinity, and which determines the turning points of the algorithm. Without loss of generality, we set , as otherwise we may scale all turning points by . Our lower bound will be obtained by restricting the placement of the treasure arbitrary close to (and away after) turning points (this may only result in a weaker lower bound). Taking , we obtain that

where the superscript of indicates exactly the placement of the treasure at . In what follows, and for a fixed integer , we define

We have the following lemma.

Lemma 3.5.

Let be the optimal competitive ratio that can be achieved by monotone trajectory . For every integer and for every we have that

(2)
Proof.

If the treasure is placed arbitrarily close to turning point , then by Lemma 2.2, a lower bound to the best possible competitive ratio satisfies the following (infinitely many) constraints:

We next restrict our attention to the first such constraints, where is an arbitrary integer. Hence, we require that

Now, multiply both hand-sides of the inequalities by to obtain

We conclude that is at least the last term above, so after rearranging the terms of the inequality, bringing them all on one side, and factoring out the terms, we have that

as desired. ∎

Recall that . Our lower bound derived in the proof of Lemma 3.4 is obtained by finding the smallest satisfying constraints (2), and in particular, inducing a strictly increasing sequence of in . Note that minimizing subject to constraints (2) in variables

is a non-linear program. To obtain a lower bound for

, we observe that the only negative coefficients of variables are those on the diagonal; that is, the coefficient of in the ’th constraint. This allows us to apply repeatedly back substitution to obtain a lower bound for all and hence, as well, assuming that the visiting points are increasing in . Equivalently, for the optimal that an algorithm can achieve, we may treat (for the sake of the analysis) all inequalities (2) as being tight, giving rise to the linear system

(3)

in variables , where

Constraints (3) may be thought as the defining linear system on ’s that give the optimal turning strategies, assuming that the treasure can only be placed arbitrarily close and after any of the first turning points of a search trajectory. In other words, given that any monotone algorithm is defined by a sequence of turning points, these points can be chosen so as to minimize the competitive ratio with the assumption that the hidden item will be nearly missed after each turning point. Having the competitive ratio be independent of the treasure’s placement gives a lower bound to the competitive ratio of the algorithm. The proof of Lemma 3.4 follows directly from the following technical lemma.

Lemma 3.6.

Linear system (3), in variables , defines a monotone sequence of turning points only if .

Proof.

We proceed by finding a closed formula for and then imposing monotonicity. Our first observation is that for all we have that . Setting allows us to rewrite the matrix of system (3) as

We proceed by applying elementary row operations to the system. From each row of (except the last one) we subtract a multiple of the following row to obtain linear system , where

and . Now set

and define matrix

By Cramer’s rule we have that

Note that .

Next we compute . We denote the principal minor of as . The last row of is . We further denote by the matrix we obtain from by scaling its last row by so that it reads . Finally, we denote by the matrix we obtain by replacing the last row of by ; that is, the all-1 row except from the last entry which is -1. With this notation in mind, we note that

Now expanding the determinants of with respect to their first rows we obtain the system of recurrence equations

We solve the first one with respect to and we substitute to the second one to obtain the following recurrence exclusively on

The characteristic polynomial of the latter degree-2 linear recurrence has discriminant equal to

which in particular is a degree-2 polynomial in the competitive ratio and has discriminant . Since is convex, we conclude that the discriminant of the characteristic polynomial is non-negative when is larger than the largest root of , that is when

and the proof follows. ∎

4 Sub-Monotone Trajectories

For a fixed integer , we consider a -sub-monotone trajectory that is defined by a strictly increasing sequence , where for some , and (where ) satisfying For convenience, we introduce abbreviations and . For the formal description of the trajectory, we introduce the notion of a -hop between consecutive points , see Algorithm 1, which is a sub-trajectory of the robot starting from and finishing at .

1:  for  do
2:     Move from to
3:     Move from to
4:     Move from to
5:  end for
6:  Move from to
Algorithm 1 -Hop between

Given parameters and , the -suborigin trajectory is defined in Algorithm 2.

1:  Move from the origin to , then to the origin and then to .
2:  for  do
3:     Perform a -hop between .
4:     Move from to the origin
5:     Move from the origin to
6:  end for
Algorithm 2 -Sub-Monotone Trajectory

The trajectory of the robot performing a -sub-monotone search is depicted in Figure 2 that shows a -hop between points and .

Figure 2: -sub-monotone algorithm determined by turning points and intermediate turning points within hops . The figure also depicts all possible intervals , that the treasure can lie within a -hop between and. Possible placements of the treasure are depicted in every interval , along with the first five visitations of the treasure, except the last interval for which there are only three visitations before the searcher returns to the origin.
Lemma 4.1.

For any , the time required for the -hop is

Proof.

The reader may consult Figure 2. The interval is traversed exactly three times, except from the interval which is traversed once. Hence, the time for a robot to move from to is

The alternative expression is obtained by factoring out and is given for convenience. ∎

Using the above, we compute the total time the robot needs to progress from the origin to .

Lemma 4.2.

For any sufficiently small , the time needed for the robot to reach for the first time is equal to

Proof.

The algorithm will perform a number of hops before returning to the origin after each hop. According to Lemma 4.1, the total time for this trajectory is

and the proof follows. ∎

4.1 Performance Analysis of -Sub-Monotone Trajectories

For the remainder of the paper, we introduce the following expressions:

(4)
(5)
(6)
(7)
(8)
(9)

where, in particular, . The purpose of this section is to prove the following theorem.

Theorem 4.3.

For any and given that the treasure lies in interval , the worst case induced competitive ratio is given by the formula

An immediate consequence of Theorem 4.3 is that the best -sub-monotone algorithm with expansion factor within consecutive -hops and intermediate turning points is the solution (if it exists) to optimization problem

(10)
s.t.

Alternatively, any solution which is feasible to (10) has competitive ratio .

The proof of Theorem 4.3 is given by Lemmas 4.6, 4.7 at the end of the current section. Towards establishing the lemmas, we need to calculate the time between consecutive visitations of the treasure in order to eventually apply Lemma 2.2 and compute the performance of a -sub-monotone algorithm.

As we did previously and for the sake of simplifying the analysis, we assume that the treasure will never coincide with a turning point . Moreover, we assume that the treasure is placed at distance from the origin, where , for some

that we allow for the moment to vary.

Since the treasure can be in any of these intervals, there are cases to consider when computing the performance of the algorithm. Lemmas 4.4 and 4.5 concern different cases as to where the treasure is with respect to internal turning points associated with .

Lemma 4.4.

For any , suppose that the treasure is placed at distance from the origin, where . We then have that