Towards Real-Time Search Planning in Subsea Environments

07/24/2017
by James McMahon, et al.

We address the challenge of computing search paths in real-time for subsea applications where the goal is to locate an unknown number of targets on the seafloor. Our approach maximizes a formal definition of search effectiveness given finite search effort. We account for false positive measurements and variation in the performance of the search sensor due to geographic variation of the seafloor. We compare near-optimal search paths that can be computed in real-time with optimal search paths for which real-time computation is infeasible. We show how sonar data acquired for locating targets at a specific location can also be used to characterize the performance of the search sonar at that location. Our approach is illustrated with numerical experiments where search paths are planned using sonar data previously acquired from Boston Harbor.

I Introduction

This paper addresses algorithms for real-time computation of robotic search paths for subsea applications. The goal is to find an unknown number of stationary targets distributed in a bounded area in finite time and/or finite distance. Our contributions in this paper build upon prior work reported in [1, 2] in which we developed decision-theoretic cost functions that can be used to evaluate candidate search paths. We assume herein, as we did in prior work, that the search sensor produces false positive measurements of targets that do not exist, and that the performance of the sensor is affected by spatial variation of the environment. That is, in some locations, the search sensor performs better or worse than in other locations.

We assume that the search sensor has the typical characteristics of a side-scan sonar, which is often used for imaging the seafloor. Specifically, the system studied in this work performs synthetic aperture sonar processing, which requires the vehicle to travel in straight lines. This motion enables the vehicle to perform signal processing that greatly improves the quality of the collected data; perturbations in the straight-line motion of the vehicle significantly distort the resulting image [3, 4, 5]. Thus, a search path should be a set of straight lines, and the number of turns should be small. Performance of the search sensor at a location depends on the environment (e.g., seafloor characteristics) at that location, and our path planning approach accounts for a probabilistic description of the environment. The local environment, as it relates to sensor performance, can be inferred from sensor data. Thus a search mission can be adjusted in real-time to account for information about the environment that is acquired during the mission. We present preliminary results on estimating sensor performance, and we illustrate our approach using a real data set abstracted from measurements made by an autonomous underwater vehicle (AUV) in Boston Harbor. While the approach presented in this work is applied to an AUV that performs synthetic aperture sonar processing, the work may also be relevant to other unmanned systems (e.g., aircraft that perform synthetic aperture radar processing [6]).

In previous work [2], we showed a rigorous way of computing optimal search paths by deriving a decision-theoretic cost function associated with the accuracy of our estimate of the number of targets in the search environment. The work herein extends the search theory results reported in [2] by proposing a new cost function and a fast, near-optimal planning algorithm that allow the search problem to scale to larger areas.

Search theory has its roots in numerous civilian and military applications. The goal is either to maximize the probability of detecting the target or to minimize the expected time until a decision about the presence or absence of the target is made [7]. In a realistic search problem, there are limitations on successfully locating the target, such as imperfect sensor measurements or uncertain knowledge of the search environment and its effect on sensor performance. Noisy sensor measurements often include missed detections, i.e., failing to detect a target that is present, and false alarms, i.e., detecting a target that is not present. Local environmental conditions may also affect the number of false alarms and missed detections the sensor observes. Although there is an extensive literature on search theory (see, e.g., [8, 9]), the issue of false alarms is seldom addressed. Exceptions include [10] and [11], but uncertainty in environmental conditions is not accounted for in these studies. We build upon prior work by accounting for false alarms, missed detections, and uncertainty in the environment.

The problem of computing search paths through an uncertain environment has been shown to be NP-hard [12]. Given the challenge of such a problem, related work in non-myopic planning for fielded systems has either yielded poor computational performance [12], limited itself to paths that are pre-planned [13], or employed sampling-based approaches to generate approximate solutions [14]. In this paper, to address the problem complexity, we abstract the problem in such a way that non-myopic planning becomes tractable in real-time on-board an AUV.

The remainder of this paper is organized as follows. In Section II, we formulate the search problem and describe our proposed cost function to minimize the risk of incorrectly estimating the number of targets. In Section III, we present an exact-solution method and propose an approximate-solution method for planning on-line search paths. Section IV describes the data processing technique that was used for abstraction and mapping of the real data set, and Section V provides numerical results.

II Problem Formulation

II-A Preliminaries

We are given a search grid that consists of disjoint cells arranged in rows and columns, and a distance constraint on the mission. With each cell we associate random variables that represent the number of targets and the environmental conditions in the cell, respectively. We presume that the number of targets in a cell is independent of the environment in that cell, and that the random variables associated with distinct cells are independent of one another. The searcher's objective is to estimate the number of targets in each cell. We assume that the search sensor's performance depends on the environment, and that the environment in each cell belongs to a finite set of possible environment types. We presume that the actual environmental condition in each cell is not known, but that a probability distribution over the environment types is known for each cell.

When the search vehicle visits a location, it acquires noisy observations of the number of targets and of the environmental conditions at that location. We use a Bayesian update to revise our beliefs on the number of targets and on the environmental conditions after acquiring these measurements. We model the likelihood of observing a given number of targets, conditioned on the true number of targets and the true environment, as

(1)

where the model is parameterized by the probability of one or more false alarms and the probability of detection, both of which are assumed to vary as functions of the environment type. We refer the reader to [2] for more details on the observation model in (1).

After acquiring the target and environment measurements, we apply Bayes' rule to update our belief on the number of targets and on the environmental conditions:

(2)
(3)

where the likelihood terms are the observation model in (1) for the target measurement and the environment sensor model for the environment measurement, i.e., the likelihood of the observed environment given the true environment. We assume that this environment sensor model is known before the mission starts and does not change. Insight into the sensor model for environment characterization comes from research on subsea bottom-type characterization, such as [15]. We then compute the posterior belief on the number of targets unconditioned on the environment:

(4)
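
To make the update in (1)-(4) concrete, the following sketch implements one plausible version of it for a single cell. Everything here is a hypothetical stand-in (the maximum target count, the priors, the per-environment detection and false-alarm probabilities, and the confusion-matrix model for the environment observation); in particular, the false-alarm model is a simplification for illustration, not the exact observation model of [2].

```python
import math
import numpy as np

# Hypothetical per-cell priors and sensor parameters (illustrative only).
N_MAX   = 3                                        # consider 0..3 targets
prior_n = np.full(N_MAX + 1, 1.0 / (N_MAX + 1))    # prior on number of targets
prior_e = np.array([0.5, 0.3, 0.2])                # prior on environment class
p_det   = np.array([0.9, 0.7, 0.5])                # detection prob. per environment
p_fa    = np.array([0.05, 0.15, 0.30])             # false-alarm prob. per environment
conf    = np.array([[0.8, 0.1, 0.1],               # P(observed env | true env)
                    [0.1, 0.8, 0.1],
                    [0.1, 0.1, 0.8]])

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1.0 - p)**(n - k)

def target_obs_likelihood(z, n, e):
    """Stand-in for the observation model (1): probability of observing z
    contacts when n targets are present in environment e.  Each target is
    detected independently with probability p_det[e]; with probability
    p_fa[e] a single spurious contact is added."""
    like = 0.0
    for extra, p_extra in ((0, 1.0 - p_fa[e]), (1, p_fa[e])):
        k = z - extra
        if 0 <= k <= n:
            like += p_extra * binom_pmf(k, n, p_det[e])
    return like

def bayes_update(prior_n, prior_e, z, y):
    """Joint Bayes update in the spirit of (2)-(4): returns the marginal
    posteriors on the number of targets and on the environment after
    observing the pair (z, y)."""
    joint = np.array([[prior_n[n] * prior_e[e]
                       * target_obs_likelihood(z, n, e) * conf[e, y]
                       for e in range(len(prior_e))]
                      for n in range(len(prior_n))])
    joint /= joint.sum()
    return joint.sum(axis=1), joint.sum(axis=0)

post_n, post_e = bayes_update(prior_n, prior_e, z=1, y=0)
print("posterior on number of targets:", np.round(post_n, 3))
print("posterior on environment:      ", np.round(post_e, 3))
```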

II-B Value of Searching a Location

After the vehicle visits a location, we compute an estimate of the number of targets at that location based on the observations. When the estimate is greater than the true number of targets, we overestimate the number of targets, i.e., we declare that more targets are present than actually are. When the estimate is less than the true number, we underestimate the number of targets, i.e., we fail to declare some of the targets that are present. Both overestimation and underestimation may degrade the utility of the search results. Within a decision-theoretic framework, we define a linear loss function to penalize deviations from the true number of targets. Given the measured data, we define the loss of an estimate when a given number of targets is the true number as

(5)

where the two coefficients are the relative costs of underestimating and overestimating the number of targets. For some subsea search applications, such as mine-hunting, overestimating the number of targets is preferred to underestimating it. In our prior work we assessed the utility of an estimate of the number of targets, while in this work we assess its loss, since minimizing a risk value associated with incorrect estimation of the number of targets may be better suited to practical applications.

The posterior expected loss of computing the estimate, conditioned on the environment, is

(6)

where the expectation is taken over the number of targets with respect to the posterior distribution. The estimator that minimizes the expected loss in (6) is called the Bayes estimator:

(7)

The expected loss in (6) is called the Bayes risk when the estimate is the Bayes estimator in (7). For notational convenience and clarity of presentation, we denote

(8)
(9)

and call (8) the current risk and (9) the anticipated risk conditioned on the search measurement and the environment.
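
For the asymmetric linear loss in (5), the Bayes estimator (7) and the associated risk (8) can be evaluated directly from a discrete posterior. The sketch below is illustrative only: the posterior and the cost weights are hypothetical, and the loss is written as the underestimation cost times the shortfall plus the overestimation cost times the excess, matching the description in the text.

```python
import numpy as np

def bayes_estimate_and_risk(post_n, c_under=2.0, c_over=1.0):
    """Minimize the posterior expected linear loss (5)-(7).

    post_n[k] is the posterior probability that k targets are present;
    c_under penalizes underestimation, c_over penalizes overestimation.
    Returns the Bayes estimate and its Bayes risk (8)."""
    counts = np.arange(len(post_n))
    expected_loss = [
        np.dot(post_n, np.where(counts > n_hat,
                                c_under * (counts - n_hat),   # underestimation
                                c_over * (n_hat - counts)))   # overestimation
        for n_hat in counts
    ]
    best = int(np.argmin(expected_loss))
    return best, expected_loss[best]

# Hypothetical posterior over 0..3 targets; c_under > c_over biases the
# estimate upward, consistent with preferring overestimation in mine-hunting.
n_hat, risk = bayes_estimate_and_risk(np.array([0.10, 0.50, 0.30, 0.10]))
print("Bayes estimate:", n_hat, "  Bayes risk:", round(risk, 3))
```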

Due to the uncertainty in the environment and the noise in environment observations, the anticipated risk in (9) can differ from the actual risk computed with the true environment. We define a linear loss function that penalizes deviations from actual search performance. Given the observed number of targets and the environmental measurement, we define the loss of an estimate of the environmental conditions as

(10)

where the two risk terms are the anticipated risk conditioned on the observation under the true environment and under the estimated environment, respectively. As in (5), the coefficients are the relative costs of underestimating and overestimating the environmental conditions.

Given the target and environment measurements, the posterior expected loss of the environment estimate is

(11)

An estimate of the risk associated with incorrect estimation of the number of targets after acquiring the measurements is

(12)
(13)

To assess the benefit of searching a location, we compute the expected loss in (12) for every possible set of observations. Let the anticipated risk for visiting a cell be defined as

(14)

where

(15)

Further, we define the benefit of searching a location as the risk reduction we expect to achieve before seeing the measurements. That is, the value of searching a cell is the difference between the current risk and the anticipated risk for that cell:

(16)
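
Equations (12)-(16) amount to comparing the risk under the current belief with the risk we expect to have after a measurement, averaged over every observation pair the sensor could return. A self-contained sketch under the same kinds of hypothetical models as above (the observation models passed in as functions are placeholders, not the paper's):

```python
import math
import numpy as np

def bayes_risk(post_n, c_under=2.0, c_over=1.0):
    """Posterior expected loss of the Bayes estimate under the linear loss (5)."""
    counts = np.arange(len(post_n))
    return min(np.dot(post_n, np.where(counts > h,
                                       c_under * (counts - h),
                                       c_over * (h - counts)))
               for h in counts)

def value_of_search(prior_n, prior_e, like_z, like_y, z_max):
    """Expected risk reduction (16) for one cell: current risk minus the
    anticipated risk (14).  like_z(z, n, e) and like_y(y, e) are hypothetical
    observation models for the target-count and environment measurements."""
    current = bayes_risk(prior_n)
    anticipated = 0.0
    for z in range(z_max + 1):
        for y in range(len(prior_e)):
            # Joint predictive weight of observing the pair (z, y) under the prior.
            joint = np.array([[prior_n[n] * prior_e[e] * like_z(z, n, e) * like_y(y, e)
                               for e in range(len(prior_e))]
                              for n in range(len(prior_n))])
            p_zy = joint.sum()
            if p_zy > 0.0:
                post_n = joint.sum(axis=1) / p_zy      # posterior on target count
                anticipated += p_zy * bayes_risk(post_n)
    return current - anticipated

# Toy usage: two environments, perfect environment sensing, per-environment
# detection probability, and no false alarms (all hypothetical).
p_det = [0.9, 0.5]
like_z = lambda z, n, e: (math.comb(n, z) * p_det[e]**z * (1 - p_det[e])**(n - z)
                          if z <= n else 0.0)
like_y = lambda y, e: 1.0 if y == e else 0.0
print(value_of_search(np.full(4, 0.25), np.array([0.5, 0.5]), like_z, like_y, z_max=3))
```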

II-C Planning Survey Routes to Maximize Expected Risk Reduction

We cast the problem of finding the optimal search path as a search over all feasible paths from the initial location of the search vehicle that satisfy the distance constraint of the mission. A path is described by an ordering of cells to be inspected such that the distance required to traverse the path is less than the mission distance constraint, and the value of searching each cell is given by (16). The search problem is defined as follows.

Definition 1

Given the maximum distance the vehicle can traverse, the value of searching each cell, and the set of feasible paths, find the path such that

(17)

subject to

(18)

We do not allow multiple visits to the same cell. However, re-planning the optimal path after acquiring new observations allows us to visit the same cell more than once if doing so would be beneficial. A future goal of our research is to incorporate multiple visits to the same cell within our computational framework.

III On-line Planning for Real-time Search

Planning survey missions for AUVs in real environments requires consideration of both the areas to be searched and the constraints imposed on the vehicle to acquire the data. In this work we must contend with the constraint of acquiring acoustic data that is spatially and temporally correlated. To execute the search effectively, the vehicle is required to follow straight-line paths through the search region. Therefore, we constrain the vehicle to visit only cells within a single row until that row has been fully explored. Once the AUV has reached the end of a row, the sensor is disabled for a short period of time while the vehicle traverses to another row to perform another line measurement (see Fig. 3).

Given this constraint, the planning problem becomes a search over the cells that can be measured within a finite time/distance horizon. While the planner could consider the value of each cell independently, the complexity of the search grows with the size of the search area.

III-A Exact Branch-and-Bound Algorithm

We first apply an exact branch-and-bound algorithm with a best-first search method to compute the optimal search path. While this method yields the maximum expected risk reduction that can be achieved, it is computationally less efficient than the approximate solution.

Branch-and-bound algorithms are commonly used to solve large state-space search problems [16]. They repeatedly partition the problem into subproblems, which is called branching, and compute a lower and an upper bound on these subproblems, which is called bounding. Subproblems that will not lead to the optimal solution are pruned. This reduces the computational complexity of the problem compared to brute-force search, where all candidate solutions are enumerated prior to the search and the value of each candidate solution is computed. To select the subproblems for branching in a systematic manner, we use a best-first search method in which the most promising nodes, i.e., the nodes that have the largest upper bounds, are expanded first. The search is performed until the search space is fully explored, yielding the optimal solution.

III-A1 Lower Bound

The lower bound used in the exact solution is calculated by summing the benefit of searching a location (Eq. 16) over a lawn-mower path, a path that consists of parallel linear tracks until the mission length is met. Since this path belongs to the set of feasible paths, the optimal solution is guaranteed to yield an expected risk reduction greater than or equal to the value of this path. This value is calculated once at the start of the search in time linear in the number of cells. When the value of the expanded nodes of a candidate path exceeds the current lower bound, we update the lower bound to be the value of that candidate path.

III-A2 Upper Bound

To compute the upper bound for the maximum expected risk reduction on a given path, the maximum expected risk reduction in a cell throughout the search area is multiplied by the number of cells the vehicle can visit without exceeding the mission length. That is,

(19)

As the search is expanded, the upper bound is re-computed by subtracting the maximum single-cell risk reduction from the current upper bound and adding the risk reduction associated with the newly added cell. The initial upper bound computation requires a pass over all cells, while each subsequent update is constant-time. While these bounds may not prune the search tree as effectively as tighter bounds, they take significantly less time to compute.
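
The two bounds can be summarized in a few lines of code. In the sketch below the per-cell values stand for the expected risk reductions of (16), the lawn-mower path is a simple back-and-forth sweep of the rows, and traversing one cell is assumed to cost one unit of the distance budget; these are simplifying assumptions for illustration rather than the exact implementation used in the paper.

```python
import numpy as np

def lawnmower_lower_bound(values, budget):
    """Initial lower bound (Sec. III-A1): the total value of a lawn-mower
    path, i.e. rows swept back and forth until the distance budget is spent
    (unit cost per cell assumed)."""
    rows, cols = values.shape
    order = []
    for r in range(rows):
        cells = [(r, c) for c in range(cols)]
        order.extend(cells if r % 2 == 0 else reversed(cells))
    return sum(values[rc] for rc in order[:budget])

def initial_upper_bound(values, budget):
    """Initial upper bound (Sec. III-A2): best single-cell value times the
    number of cells reachable within the budget."""
    return values.max() * min(budget, values.size)

def updated_upper_bound(ub, values, added_cell_value):
    """Constant-time update when a cell is committed to the path: replace one
    'best-case' cell by the value of the cell actually added."""
    return ub - values.max() + added_cell_value

values = np.random.default_rng(0).random((15, 15))   # hypothetical cell values
lb = lawnmower_lower_bound(values, budget=50)
ub = initial_upper_bound(values, budget=50)
print("lower bound:", round(lb, 2), " upper bound:", round(ub, 2))
```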

III-B Approximate Algorithm

An exhaustive search over every permutation of all feasible paths within the time horizon grows combinatorially with the number of cells in the worst case. To address this, we re-formulate the planning problem as a search over a graph whose vertices correspond to horizontal rows of the search area and whose edge weights are the distances required to travel between rows. We can then define the search problem over this graph as follows:

Definition 2

Given

  • An undirected symmetric graph,

  • A maximum tour length,

  • A reward associated with each vertex, and

  • A cost associated with each edge,

compute an ordered open tour that visits a subset of vertices such that

(20)
(21)
(22)
(23)

This problem definition follows that of the Selective Traveling Salesman Problem, or Orienteering Problem, which is a variant of the Traveling Salesman Problem and has been heavily studied in the literature (see [17, 18] for comprehensive surveys). In this work the reward for a vertex is the sum of the expected risk reduction across the entire row in which the corresponding survey would take place. Therefore, the reward for a given vertex is defined as

(24)

where the sum in (24) is taken over the columns of the search grid for the row associated with the vertex.

Following the upper bound on the optimal solution defined in [19], we make a slight modification such that the tour is not constrained to end at a specific node. While the 0-1 knapsack problem is NP-hard, only a valid upper bound is required in order to bound the Orienteering Problem. Following the approach in [20], upper bounds are computed by searching for a terminal vertex in a sorted array of weighted vertices. That is, given an array of reward-per-unit-weight values sorted in non-increasing order,

(25)

the upper bound on the reward can be computed by finding the maximum reward that can be obtained without exceeding the budget. Since this is a summation over the sorted array, the upper bound can be computed in time linear in the number of vertices. Further detail on how to tighten the bounds can be found in [20].

Using these upper bounds, solutions to the planning problem are found with a depth-first branch-and-bound (DFBnB) solver [21] in which each node of the search tree corresponds to performing a different survey line. The branching of the tree is ordered such that nodes associated with higher reward per unit weight (Eq. 25) are evaluated first. To prune the search based on the maximum possible reward at each node, the upper bound described above is employed [19]; this bound can be re-computed cheaply at each iteration. Furthermore, each time a new tour is discovered that yields a higher reward, the ordering of the tour is optimized via 2-opt [22] to ensure that the lowest-cost tour for the highest reward is produced. Instead of performing the search exhaustively, the search is terminated after a large, fixed number of iterations.
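
A compact depth-first branch-and-bound over rows, following the structure just described, might look like the sketch below. The row rewards, per-row survey costs, transit distances, and budget are all hypothetical; the pruning bound is a fractional-knapsack relaxation over rewards sorted by reward per unit cost (ignoring transit, which keeps it admissible), and the 2-opt tour improvement used in the paper is omitted for brevity.

```python
import numpy as np

def plan_rows(rewards, row_cost, travel, budget):
    """Depth-first branch-and-bound over survey rows (sketch of Sec. III-B).

    rewards[i]   summed expected risk reduction of surveying row i (Eq. 24)
    row_cost[i]  distance spent surveying row i
    travel[i,j]  distance to transit between row i and row j
    budget       maximum tour length
    Returns (best_reward, best_row_order) for an open tour."""
    n = len(rewards)
    # Branch in order of reward per unit survey cost, as in (25).
    order = sorted(range(n), key=lambda i: rewards[i] / row_cost[i], reverse=True)
    best = {"reward": 0.0, "tour": []}

    def knapsack_bound(remaining, budget_left):
        """Fractional-knapsack relaxation: fill the leftover budget with the
        best reward-per-cost rows, ignoring transit costs."""
        bound, left = 0.0, budget_left
        for i in remaining:                      # already sorted by ratio
            frac = min(1.0, left / row_cost[i])
            bound += frac * rewards[i]
            left -= frac * row_cost[i]
            if left <= 0.0:
                break
        return bound

    def dfs(tour, reward, budget_left, remaining):
        if reward > best["reward"]:
            best["reward"], best["tour"] = reward, list(tour)
        for i in remaining:
            cost = row_cost[i] + (travel[tour[-1], i] if tour else 0.0)
            if cost > budget_left:
                continue
            rest = [j for j in remaining if j != i]
            if reward + rewards[i] + knapsack_bound(rest, budget_left - cost) <= best["reward"]:
                continue                         # prune: cannot beat incumbent
            dfs(tour + [i], reward + rewards[i], budget_left - cost, rest)

    dfs([], 0.0, float(budget), order)
    return best["reward"], best["tour"]

rng = np.random.default_rng(1)
rewards  = rng.random(8)                     # hypothetical row rewards
row_cost = np.full(8, 10.0)                  # hypothetical survey cost per row
travel   = rng.uniform(1.0, 3.0, size=(8, 8))
print(plan_rows(rewards, row_cost, travel, budget=45))
```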

IV Mine Countermeasures Performance Mapping

While there are many environmental factors affecting performance, we have developed an abstraction of the acoustic data collected with the sonar based on some of these parameters. We illustrate a representative example in Fig. 1-a, and have developed a notional mapping between this abstraction and relevant mine countermeasures (MCM) performance metrics. The mapping is employed to define segmented regions of the ocean floor within which the expected mine-hunting performance may be estimated in real-time as a vehicle executes its mission. In the following analysis, the mapping is defined by three segments characterizing the environment as difficult, moderate, or easy for MCM, as in Fig. 1-b to 1-d. In this section, we describe the processing approach through which the abstraction and mappings are derived and apply the technique to data collected at sea with a representative side-look bottom-mapping active sonar in an exercise conducted in the approaches to Boston Harbor during 2015.

Fig. 1: Abstraction and mapping of acoustic data collected with an autonomous underwater vehicle (AUV) to quantify mine countermeasures (MCM) performance. (a) shows the environmental characterization map; (b-d) show the prior probability of the respective class belonging to each cell.

At the beginning of the exercise, an AUV was programmed to execute linear tracks comprising a “lawnmower” search mission over an area of the seafloor (measured here in non-dimensional grid units). As the vehicle progressed through each line, the active sonar aboard the AUV transmitted acoustic energy primarily in the broadside direction, orthogonal to the flight path of the vehicle, and collected acoustic returns on a planar receive array. The element-level active acoustic data are first pre-processed through farfield beamforming [23] and matched filtering [24], using techniques employed ubiquitously in modern-day sonar systems. In the analysis considered here, only the broadside monostatic return angle is retained, although it is straightforward to extend these techniques to other steering angles as well. For each line run, the pre-processing results in a two-dimensional grid of (unipolar) acoustic return magnitudes as a function of down-range and cross-range position indices with respect to the flight path of the sonar; in the absence of background ambient noise, these data are proportional to the backscattered acoustic energy of patches on the seafloor under observation and will be referred to as such in the remainder of this section.
For each line run, the beamformed and matched-filtered acoustic data are abstracted as follows. First, a two-dimensional median filter is employed to down-sample the high-resolution acoustic data to length scales relevant to performance estimation. All spatial scales in this manuscript are described in non-dimensional units, as the planning algorithms are generally applicable regardless of length scale. After applying the median filter, for each down-range bin in the grid we compute the mean target strength over all cross-range bins over the course of an entire line run.

This results in an estimate of the acoustic energy as a function of down-range position. Generally, this quantity was observed to be relatively stationary across all lines collected. After subtracting the background mean at each cross-range bin, we normalize by the standard deviation as a function of cross-range position index. This whitening procedure results in an environmental performance map. A representative example for data collected during the Boston 2015 exercise is presented in Fig. 1-a for all lines prosecuted; we have fused the grids in regions where the lines overlap. The color scale of this figure represents the number of standard deviations by which the background environment scattering strength deviates from the mean. We posit that such maps are correlated with relevant MCM performance metrics, the details of which are beyond the scope of the present investigation. We have segmented the environmental map into regions defined as difficult (Fig. 1-b), moderate (Fig. 1-c), and easy (Fig. 1-d) for MCM based upon thresholds on the deviation of the background. These maps are employed in the planning and performance optimization described in the remaining sections of this manuscript. We note that the data processing described in this section has been specifically configured with the goal of real-time implementation on-board an AUV.
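
The abstraction described above reduces, in essence, to median filtering and down-sampling, per-bin background whitening, and thresholding into three classes. The sketch below is a rough illustration only: the block size, the class thresholds, and the choice of whitening axis are assumptions (the paper's threshold values are not reproduced), and the mapping from large positive deviations to the "difficult" class is our reading of the description.

```python
import numpy as np
from scipy.ndimage import median_filter

def environment_map(magnitude, block=5, thresholds=(1.0, -0.5)):
    """Whitened environmental performance map from beamformed, matched-filtered
    return magnitudes (rows = down-range bins, columns = cross-range bins).

    Returns (whitened, classes) with classes 0 = difficult, 1 = moderate,
    2 = easy.  Block size and thresholds are hypothetical."""
    # 1. Median filter and down-sample to performance-relevant length scales.
    smoothed = median_filter(magnitude, size=block)[::block, ::block]

    # 2. Background mean and standard deviation per down-range bin, estimated
    #    over the cross-range bins of the line run (axis convention assumed).
    mean = smoothed.mean(axis=1, keepdims=True)
    std  = smoothed.std(axis=1, keepdims=True) + 1e-12

    # 3. Whiten: deviation from the background in standard deviations.
    whitened = (smoothed - mean) / std

    # 4. Segment into difficult / moderate / easy with example thresholds.
    classes = np.ones(whitened.shape, dtype=int)        # moderate by default
    classes[whitened >= thresholds[0]] = 0              # strong clutter -> difficult
    classes[whitened <= thresholds[1]] = 2              # quiet background -> easy
    return whitened, classes

# Toy usage with synthetic magnitudes.
mag = np.random.default_rng(0).rayleigh(scale=1.0, size=(400, 300))
whitened, classes = environment_map(mag)
print(whitened.shape, np.bincount(classes.ravel()))
```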

V Numerical Results

In this section we numerically investigate the performance of the exact algorithm proposed in Section III-A and the approximate algorithm proposed in Section III-B. First, the two approaches are compared in terms of overall search performance (i.e., achieved risk reduction) and computation time over a small, notional search area (Fig. 2). Then, the performance of the approximate algorithm is demonstrated (in simulation) over a larger, more realistic environment (Fig. 1).

Fig. 2: Search area and cell-wise environment distributions (regions A1 through A6).

Fig. 3: Search paths for (a) the exact-solution method and (b) the approximate-solution method when the mission length is 50, and (c) the exact-solution method and (d) the approximate-solution method when the mission length is 100. The averaged normalized risk reduction (see Fig. 4) for each path is (a) 12%, (b) 7%, (c) 23%, and (d) 21%.

In this study we assume that there are three candidate environments, each characterized by its own probability of detection and probability of at least one false alarm. Note that the information about the number of targets revealed after visiting a cell increases with increasing probability of detection and decreases with increasing probability of false alarm; thus, the three candidate environments range from least informative to most informative. We consider that the sensor model for environment characterization is

(26)

where the model specifies, for each environment type, the probability of observing the true environment when that environment is present.
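
One simple form that the sensor model in (26) could take is a confusion matrix in which the true environment is observed with a given probability and the remaining probability mass is split evenly over the other classes. The per-class accuracies below are hypothetical placeholders, not the values used in the paper.

```python
import numpy as np

def environment_confusion(p_correct):
    """Confusion-matrix sensor model: entry [e, y] is the probability of
    observing environment y when the true environment is e.  Splitting the
    off-diagonal mass evenly is an assumption made for illustration."""
    p = np.asarray(p_correct, dtype=float)
    n_env = len(p)
    off = (1.0 - p) / (n_env - 1)
    C = np.repeat(off[:, None], n_env, axis=1)
    np.fill_diagonal(C, p)
    return C

# Hypothetical per-environment observation accuracies.
print(environment_confusion([0.8, 0.8, 0.8]))
```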

V-A Comparisons of Exact and Approximate Solutions

The Monte Carlo based comparison between the exact and approximate algorithms was conducted over a 15-by-15 cell search area with known cell-wise environment distributions, as shown in Fig. 2. The search area is divided into six parts, A1 through A6, and for each part the corresponding probability distribution over environment types is given. For example, the cells labeled A1 are assigned one environment type with probability 0.95 and a second type with probability 0.05, whereas the cells labeled A5 are assigned a different environment type with probability 0.9 and another with probability 0.1. For each of the probability distributions in Fig. 2, the value of searching the corresponding cell (Eq. 16) is different. Lighter colored cells (e.g., cells contained in A5 and A3) are expected to yield greater risk reduction, since there is a higher probability that the environment in these cells is of a more informative type; thus, these cells are preferred for search. (We note that the search area in Fig. 2 is selected to illustrate that the performance of the approximate method can be suboptimal. Choosing a random search area is likely to reduce the difference between the performance of the exact method and that of the approximate method; for example, rotating this search area would result in less deviation from the optimal performance.)

Based on this environmental distribution, a total of 500 synthetic environments (or “scenes”) were generated for a particular mission length. For each scene, a corresponding set of deterministic measurements was generated based on the prior belief on the number of targets and the environmental conditions. Multiple measurements were simulated at each cell location to account for cases in which the same cell is visited multiple times. The mission length (in terms of number of cells) was varied in increments of 10 from 50 to 110. The scene, the deterministic measurements, and a specified mission length were then passed to both solvers (corresponding to the exact and approximate algorithms) consecutively. At each iteration the respective solvers recorded the sequences of planned paths, the computation time (for the re-planning step), and the search performance. The algorithms were implemented in C++ using the Armadillo linear algebra library [25] on a 64-bit Ubuntu 16.04 operating system. All computations described in this paper were executed on an Intel Xeon E5-1650V3 processor with a base frequency of 3.5 GHz and 32 GB of RAM.

Fig. 4: Comparison of average computation time (top) and average normalized search performance (bottom) between exact and approximate algorithms based on Monte Carlo simulation. Error bars indicate one standard deviation.

The results of the Monte Carlo simulation are shown in Fig. 4. As expected, the search performance and computation time increased with mission length for both algorithms. We found that, in general, the approximate algorithm had search performance comparable to the exact algorithm with a substantially lower computation time. The difference in computation time is particularly pronounced as the mission length increases (e.g., for a mission length of 100 cells the difference is about two orders of magnitude). The approximate algorithm is further evaluated using a more realistic environment as described in the following section. We note that in Fig. 4, the achieved risk reduction after a mission is normalized by the risk prior to acquiring any measurements. That is, given the path the vehicle traverses on each Monte Carlo run, the averaged normalized achieved risk reduction is

(27)

where the average is taken over the Monte Carlo iterations. Thus, Fig. 4 gives the percentage of the achieved risk reduction over a long run (e.g., on average, both approaches yield above 20% risk reduction throughout the search area).

Fig. 3 shows the search paths for both algorithms for a mission length of 50 (Fig. 3a-b) and for a mission length of 100 (Fig. 3c-d). For each path, the averaged normalized achieved risk reduction (Eq. 27) associated with the corresponding mission length is also presented. The path traversed by the vehicle is represented by the red line. The results show that while the exact method yields the optimal search path, the approximate method results in a near-optimal search path. For both methods, the vehicle skips the parts of the search area where expected risk reduction is relatively small. We note that since the approximate method considers each row as a vertex, it does not start a row if it will not finish it. Thus, when using the approximate method, search may stop before the mission length is met.

Fig. 5: Representative planning solution for the approximate method over the large-scale environmental map (3315 cells), together with the resulting nominal search performance.

V-B On-line Planning

Using the environmental characterization map of Boston Harbor discussed in Section IV, the approximate algorithm was used to generate search paths over the search area in a simulated re-planning framework. This re-planning framework has four steps: (i) plan a search path over the search area to maximize the reduction in risk, (ii) survey the first leg of the mission, (iii) update the cells with the collected measurements, and (iv) return to (i). The process is repeated until the planner is no longer able to generate search paths due to insufficient remaining planning length (the planning budget is updated each planning cycle to maintain the overall mission length). The sensor model and the method of simulating measurements follow those of Section V-A. Additionally, we cap the number of solver iterations to provide a feasible branching factor that achieves the required runtime performance on-board an AUV.
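
The control flow of this re-planning framework is summarized in the sketch below. The three callables are placeholders for the approximate planner, the simulated survey of a row, and the Bayesian belief update described earlier; the assumption that one cell measurement consumes one unit of budget (with transit cost folded into the planner) is ours, made for brevity.

```python
def replanning_mission(beliefs, total_budget, plan, survey_row, update_beliefs):
    """Receding-horizon re-planning loop (sketch of Sec. V-B).

    plan(beliefs, budget)               -> ordered list of rows to survey
    survey_row(row)                     -> measurements collected along that row
    update_beliefs(beliefs, row, meas)  -> beliefs after incorporating meas

    Only the control flow is illustrated; transit costs are assumed to be
    handled inside plan(), and each measured cell costs one unit of budget."""
    executed, budget = [], total_budget
    while budget > 0:
        tour = plan(beliefs, budget)             # (i) plan over remaining budget
        if not tour:                             # insufficient budget for a new row
            break
        row = tour[0]                            # (ii) execute only the first leg
        measurements = survey_row(row)
        beliefs = update_beliefs(beliefs, row, measurements)   # (iii) update cells
        budget -= len(measurements)              # keep overall mission length fixed
        executed.append(row)                     # (iv) repeat
    return executed, beliefs
```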

The search area consists of 51 rows and 65 columns, a total of 3315 cells. Each cell is 25 m by 25 m. Figure 5 shows the final executed trajectory for a mission length of 1500 cells. The mission consisted of measuring 22 rows while re-planning after measuring each row. It can be seen that the plan covers a large portion of the easy environment type at the start location (bottom-left corner) and eventually skips a large portion of the map to inspect another large swath of the easy environment type. (We note that since the search mission is performed within limited time, searching a portion of the map with the difficult environment type would reduce less risk than skipping that portion and searching easy locations.) The worst-case computation time was that of the initial plan.

VI Conclusions

In this paper we address the problem of finding an unknown number of stationary targets within a bounded search area in bounded time. We develop an exact and an approximate algorithm for computing time-constrained search paths over the search area. Comparing the two algorithms, we show that a near-optimal reduction in risk is achieved when using the approximate algorithm, which has a runtime suitable for use on-board a fielded AUV.

The algorithms in this work open up additional research avenues which include improving the computational complexity or solution quality of the approximate algorithm. This can be done by employing tighter lower and upper bounds on estimated mission performance as well as implementing probabilistic sampling strategies for searching the set of feasible paths. Additional performance increases can also be obtained by considering the remaining search budget in cases where the whole-row approximation does not exhaust the overall budget. Furthermore, considering multiple visits to single cells or correlation between cells during the planning process are also potential areas of investigation.

VII Acknowledgements

The authors would like to thank Dr. Stefan Edelkamp for providing the branch-and-bound software that was used as the base for the implementation in this paper.

References

  • [1] H. Yetkin, C. Lutz, and D. J. Stilwell, “Utility-based adaptive path planning for subsea search,” in OCEANS 2015 - MTS/IEEE Washington, Oct 2015, pp. 1–6.
  • [2] ——, “Acquiring environmental information yields better anticipated search performance,” in OCEANS 2016 - MTS/IEEE Monterey, Sept 2016, pp. 1–6.
  • [3] P. Chapple, “Automated detection and classification in high-resolution sonar imagery for autonomous underwater vehicle operations,” Maritime Operations Division Defence Science and Technology Organisation, Edinburgh, Australia, Tech. Rep., 2008.
  • [4] B. H. Houston, J. A. Bucaro, T. Yoder, L. Kraus, J. Tressler, J. Fernandez, T. Montgomery, and T. Howarth, “Broadband low frequency sonar for non-imaging based identification,” in OCEANS 2002 - MTS/IEEE Biloxi, Oct 2002, pp. 383–387.
  • [5] M. P. Hayes and P. T. Gough, “Synthetic aperture sonar: A review of current status,” IEEE Journal of Oceanic Engineering, vol. 34, no. 3, pp. 207–224, July 2009.
  • [6] K. Ouchi, “Recent trend and advance of synthetic aperture radar with selected topics,” Remote Sensing, vol. 5, no. 2, pp. 716–807, 2013.
  • [7] T. H. Chung and J. W. Burdick, “A decision-making framework for control strategies in probabilistic search,” in Proc. IEEE International Conference on Robotics and Automation, April 2007, pp. 4386–4393.
  • [8] S. J. Benkoski, M. G. Monticino, and J. R. Weisinger, “A survey of the search theory literature,” Naval Research Logistics (NRL), vol. 38, no. 4, pp. 469–494, 1991.
  • [9] T. H. Chung, G. A. Hollinger, and V. Isler, “Search and pursuit-evasion in mobile robotics,” Autonomous robots, vol. 31, no. 4, pp. 299–316, 2011.
  • [10] T. H. Chung and J. W. Burdick, “Analysis of search decision making using probabilistic search strategies,” IEEE Transactions on Robotics, vol. 28, no. 1, pp. 132–144, Feb 2012.
  • [11] B. Kriheli, E. Levner, and A. Spivak, “Optimal search for hidden targets by unmanned aerial vehicles under imperfect inspections,” American Journal of Operations Research, vol. 6, no. 2, p. 153, 2016.
  • [12] A. Singh, A. Krause, C. Guestrin, and W. J. Kaiser, “Efficient informative sensing using multiple robots,” Journal of Artificial Intelligence Research, vol. 34, pp. 707–755, 2009.
  • [13] J. Binney and G. S. Sukhatme, “Branch and bound for informative path planning,” in IEEE International Conference on Robotics and Automation, May 2012, pp. 2147–2154.
  • [14] G. A. Hollinger, “Long-horizon robotic search and classification using sampling-based motion planning,” in Proc. Robotics: Science and Systems, July 2015.
  • [15] S. Jaramillo and G. Pawlak, “AUV-based bed roughness mapping over a tropical reef,” Coral Reefs, vol. 30, no. 1, pp. 11–23, 2011.
  • [16] B. W. Wah and C. F. Yu, “Stochastic modeling of branch-and-bound algorithms with best-first search,” IEEE Transactions on Software Engineering, vol. SE-11, no. 9, pp. 922–934, Sept 1985.
  • [17] D. Feillet, P. Dejax, and M. Gendreau, “Traveling salesman problems with profits,” Transportation Science, vol. 39, no. 2, pp. 188–205, 2005.
  • [18] P. Vansteenwegen, W. Souffriau, and D. Van Oudheusden, “The orienteering problem: A survey,” European Journal of Operational Research, vol. 209, no. 1, pp. 1–10, 2011.
  • [19] G. Laporte and S. Martello, “The selective traveling salesman problem,” Discrete Applied Mathematics, vol. 26, no. 2-3, pp. 193–207, 1990.
  • [20] S. Martello and P. Toth, “Algorithms for knapsack problems,” North-Holland Mathematics Studies, vol. 132, pp. 213–257, 1987.
  • [21] W. Zhang, “Depth-first branch-and-bound versus local search: A case study,” in National Conference on Artificial Intelligence.   Austin, TX, USA: AAAI, 2000, pp. 930–935.
  • [22] G. A. Croes, “A method for solving traveling-salesman problems,” Operations research, vol. 6, no. 6, pp. 791–812, 1958.
  • [23] B. Maranda, “Efficient digital beamforming in the frequency domain,” The Journal of the Acoustical Society of America, vol. 86, no. 5, pp. 1813–1819, 1989.
  • [24] M. A. Ainslie, Principles of Sonar Performance Modelling.   Springer, 2010.
  • [25] C. Sanderson, “Armadillo: An open source C++ linear algebra library for fast prototyping and computationally intensive experiments,” 2010.