I Introduction
Swarm robotic search is concerned with searching for or localizing targets in unknown environments with a large number of collaborative robots. There exists a class of search problems in which the goal is to find the source or target with maximum strength (often in the presence of weaker sources), and where each source emits a spatially varying signal. Potential applications include source localization of gas leakage [1], nuclear meltdown tracking [2], chemical plume tracing [3], and magnetic field and radio source localization [4, 5]. In such applications, decentralized swarm robotic systems have been touted to provide mission efficiency, fault tolerance, and scalable coverage advantages [6, 7, 8], compared to sophisticated standalone systems. Decentralized search subject to a signal with unknown spatial distribution usually requires both task inference and planning, which must be undertaken in a manner that maximizes search efficiency and mitigates inter-robot conflicts. This in turn demands decision algorithms that are computationally lightweight (i.e., amenable to onboard execution) [9], preferably explainable [10], and scalable [11] – it is particularly challenging to meet these characteristics simultaneously.
In this paper, we perceive the swarm robotic search process (note that we take a broader perspective of self-organizing swarm systems, one that is not limited to superficially-observable pattern formations) to consist of creating/updating a model of the signal environment and deciding future waypoints thereof. Specifically, we design, implement, and test a novel decentralized algorithm founded on a batch Bayesian search formalism. This algorithm tackles the balance between exploration and exploitation over trajectories (as opposed to over points, which is typical in non-embodied search), while allowing asynchronous decision-making. The remainder of this section briefly surveys the literature on swarm search algorithms, and converges on the contributions of this paper.
I-A Swarm Robotic Search
In time-sensitive search applications under complex signal distributions, a team of robots can broaden the scope of operational capabilities through distributed remote sensing, scalability, and parallelism (in terms of task execution and information gathering) [12]. The multi-robot search paradigm [11] uses concepts such as cooperative control [13], model-driven strategies [14], Bayesian filtering incorporating mutual information [15], strategies based on local cues [16], and uncertainty-reduction methods [17]. Scaling these methods from the multi-robot (10 agents [11]) to the swarm-robotic level (10 to 100 agents) often becomes challenging in terms of online computational tractability.
A different class of approaches dedicated to guiding the search behavior of larger teams is based on nature-inspired swarm intelligence (SI) principles [18, 19, 20]. SI-based heuristics have been used to design algorithms both for search in non-embodied space (e.g., particle swarm optimization) and for swarm robotic search [21, 22]. The majority of the latter methods target localizing a single source [9, 23], with their effectiveness relying on the use of adaptive parameters (e.g., a changing inertia weight) [23]. The localization of the maximum-strength source in the presence of other weaker sources (i.e., under a multimodal spatial distribution), without limiting assumptions (e.g., distributed starting points [24]), has received much less attention among SI-based approaches.

Translating optimization processes: Similar in principle to some SI approaches, here we aim to translate an optimization strategy [25] to perform search in the physical 2D environment. In doing so, it is important to appreciate two critical differences between these processes: 1) Movement cost: unlike in optimization, moving from one point to another in swarm robotic search may incur a different energy/time cost depending upon the environment (distance, barriers, etc.) separating the current and next waypoints. 2) Sampling over paths: robots usually gather multiple samples (signal measurements) over the path from one waypoint to the next (since the sampling frequency typically exceeds the waypoint frequency), unlike in optimization, where sampling occurs only at the next planned point. This “sampling over paths” characteristic has received minimal attention in existing SI-based approaches.
Moreover, with SI-based methods, the resulting emergent behavior, although often competitive, raises questions of dependability (due to the use of heuristics) and mathematical explainability [26]. The search problem can be thought of as comprising two main steps: task inference (identifying/updating the signal spatial model) and task selection (waypoint planning). In SI methods, the two steps are not separable, and a spatial model is not explicit. In our proposed approach, the processes are inherently decoupled – robots exploit Gaussian processes to model the signal distribution knowledge (task inference) and solve a 2D optimization over a special acquisition function to decide waypoints (task selection). Such an approach is expected to provide explainability, while preserving computational tractability.
I-B Contributions of this Paper
The primary contributions of this paper, comprising what we call the BayesSwarm algorithm, can be summarized as follows. 1) We extend Gaussian process modeling to update over trajectories and consider the robots’ motion constraints when using the GP to identify new samples. 2) We formulate a novel batch-BO acquisition function, which not only seeks to balance exploration and exploitation, but also manages the interactions between samples in a batch (i.e., different robots’ planned future paths) in a computationally efficient manner. 3) We develop and test a simulated parallel implementation of BayesSwarm for asynchronous search planning over complex multimodal signal distributions. The performance of BayesSwarm and its variations (synchronized and purely explorative implementations) is analyzed over signal environments of different complexity, and compared with that of an exhaustive search baseline.
The remaining portion of the paper is organized as follows: Our proposed decentralized algorithm (BayesSwarm) is described next in Section II. Numerical experiments and results, investigating the performance of BayesSwarm for different case studies, are then presented in Sections III and IV. The paper ends with concluding remarks. A summary background of GP modeling is provided as Appendix A thereafter.
II BayesSwarm Algorithm
II-A BayesSwarm: Overview
Figure 1 illustrates the sequence of processes (motion, sensing, planning, communication, etc.), and associated flow of information, encapsulating the behavior of each swarm robot. The pseudocode of our proposed BayesSwarm algorithm is given in Alg. 1. Each robot executes the BayesSwarm algorithm after reaching a waypoint, to update its knowledge and identify the next waypoint. This is accomplished by updating the GP model of the signal environment, and using it to maximize a special acquisition function. Importantly, these planning instances need not be synchronized across robots. The following assumptions are made in implementing BayesSwarm: i) all robots are equipped with precise localization; and ii) each robot can communicate its knowledge and decisions to all peers after reaching a waypoint (while full observability is assumed, the provision of asynchronous planning reduces the communication dependency, compared to synchronized algorithms [27, 28]).
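The per-robot planning cycle described above can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the coarse grid optimizer, the unit arena, and the function and parameter names (`plan_next_waypoint`, `acquisition`, `v_max`, `horizon`) are assumptions.

```python
import math

def plan_next_waypoint(current, observations, peer_waypoints,
                       acquisition, v_max, horizon):
    """One BayesSwarm-style planning instance (toy sketch): choose the
    candidate waypoint reachable within the time horizon (Eq. (2)) that
    maximizes the supplied acquisition function."""
    reach = v_max * horizon
    best, best_val = current, -math.inf
    # Coarse grid over a unit arena stands in for the 2D optimizer.
    for i in range(21):
        for j in range(21):
            cand = (i / 20.0, j / 20.0)
            if math.hypot(cand[0] - current[0], cand[1] - current[1]) > reach:
                continue  # violates the path-length (motion) constraint
            val = acquisition(cand, observations, peer_waypoints)
            if val > best_val:
                best, best_val = cand, val
    return best
```

After reaching the chosen waypoint, the robot would update its GP model with the samples gathered en route and repeat the cycle.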
Assuming a swarm of robots, we define the key quantities used in BayesSwarm: the observation locations and signal measurements of a robot over the path connecting its two most recent waypoints; the cumulative information of a robot up to its arrival at a given waypoint, including all self-recorded and peer-reported observations; the robot’s next planned waypoint at the time it is at its current waypoint; and the next waypoints of the robot’s peers as reported by that time.
II-B Acquisition Function
Each robot takes an action (plans and travels to its next waypoint) that maximizes an acquisition function. Since the swarm’s objective is to collectively explore a search area to find the strongest signal source among multiple sources in the least amount of time, the acquisition function must balance exploration and exploitation. To this end, we propose the following acquisition function formulation:
(1) 
s.t.
(2) 
Here, the first term leads a robot towards the location of the maximum expected signal strength (promoting exploitation); the second term reduces the knowledge uncertainty of the robot w.r.t. the signal’s spatial distribution (promoting exploration); and the multiplicative factor penalizes the interactions among the samples planned to be collected by the robot and its peers. How these terms are formulated differently from standard acquisition functions used in BO (to enable the unique characteristics of embodied search) is described in the following subsections.
The coefficient in Eq. (1) is the exploitation weight; at its extreme value, the search would be purely exploitative. Here, we design this weight to be adaptive (as given by Eq. (3)), such that the swarm behavior is strongly explorative at the start and becomes increasingly exploitative over waypoint iterations; e.g., the weight changes from 0.5 to 0.97 between 33% and 70% of the expected mission time:
(3) 
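As an illustration (not the paper's Eq. (3), which is not reproduced here), a logistic schedule fitted to the two anchor values quoted above – roughly 0.5 at 33% and 0.97 at 70% of the expected mission time – produces the described behavior. The function name and parameter values are assumptions of this sketch.

```python
import math

# Illustrative stand-in for the adaptive exploitation weight of Eq. (3):
# a logistic schedule whose steepness k and midpoint t_mid are chosen so
# that the weight is ~0.5 at 33% and ~0.97 at 70% of the mission time.
def exploitation_weight(t, t_mission, k=9.4, t_mid=0.33):
    return 1.0 / (1.0 + math.exp(-k * (t / t_mission - t_mid)))
```

Early in the mission the weight is small (explorative behavior dominates); late in the mission it approaches 1 (exploitative behavior dominates).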
The scaling term in Eq. (1) is a prescribed parameter used to align the orders of magnitude of the exploitative and explorative terms; its value is fixed for our case studies. Equation (2) constrains the length of a robot’s planned path, based on a set time horizon for reaching the next waypoint and the maximum velocity of the robots. In this paper, the time horizon is set such that the distance between any consecutive waypoints does not exceed the arena length.
II-C Source-Seeking (Exploitative) Term
The robots model the signal’s spatial distribution using a GP with squared exponential kernel (further description of this GP modeling is given in Appendix A). The GP model is updated based on the robot’s own measurements and those communicated by its peers (each over their respective most recent paths), thereby providing the following mean function:
(4) 
Due to the motion constraint (Eq. (2)), a robot may not be able to reach the location with the maximum expected signal strength (estimated using its GP model) within the time horizon. Therefore, the exploitative term is redefined to get as close as possible to this location, as given by:

(5)
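The motion-constrained exploitative target can be sketched as follows: if the GP-mean maximizer lies outside the disk reachable within the time horizon, the robot aims at the closest reachable point on the line toward it. This is an illustrative sketch, not the paper's exact Eq. (5); the names are assumptions.

```python
import math

# Sketch of the motion-constrained exploitative target: x_star is the
# maximizer of the GP posterior mean; v_max * horizon is the reachable
# radius implied by the path-length constraint of Eq. (2).
def reachable_target(current, x_star, v_max, horizon):
    dx, dy = x_star[0] - current[0], x_star[1] - current[1]
    dist = math.hypot(dx, dy)
    reach = v_max * horizon
    if dist <= reach:
        return x_star  # maximizer is directly reachable
    scale = reach / dist
    return (current[0] + dx * scale, current[1] + dy * scale)
```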
II-D Knowledge-Uncertainty Reducing (Explorative) Term
Unlike in optimization, in robotic search, sampling is performed over the path of each agent. This concept is known as informative path planning, where robots decide their path such that the best possible information is extracted. The (explorative) second term in Eq. (1) models the reduction in uncertainty in the robots’ belief (knowledge), thus facilitating informative path planning. To this end, the path of the robot is written in a parametric form as given below:
(6) 
where the path is parametrized from the current location of the robot. In computing the self-reducible uncertainty in the belief of a robot, we account for the locations of both the past observations made by the robot and its peers, and the future observations of the robot’s peers to be made over the paths to their planned waypoints – both of which consider only what is currently known to the robot via communication from its peers. The knowledge gain can thus be expressed as:
(7) 
For further details on computing the mean (Eq. (5)) and the variance (Eq. (7)) of the GP, refer to Appendix A.

II-E Local Penalizing Term
For a batch-BO implementation, it is necessary to account for (and in our case mitigate) the interaction between the batch of future samples. In swarm robotic search, this provides the added benefit of mitigating the overlap in planned knowledge gain across robots in the swarm – thereby promoting a more efficient search process. Modeling these interactions explicitly via the predictive distribution carries a significant computational overhead [29]. Simultaneous optimization of the future candidate samples in the batch [30] is also not applicable here, since each robot must plan its future waypoint in a decentralized manner. Recently, González et al. [29] reported a computationally tractable approximation that models the interactions using a local penalization term. We adopt and extend this idea in our work through the local penalty factor.
This penalty factor enables local exclusion zones based on the Lipschitz properties of the signal’s spatial function, and thus tends to smoothly reduce the acquisition function in the neighborhood of the existing batch samples (the known planned waypoints of a robot’s peers, for which the signal observations have not yet been reported). To compute this penalty, we define a ball of a certain radius around each peer’s planned waypoint:
(8) 
The local penalty associated with a candidate point is defined as the probability that the point does not belong to this ball:

(9)
We assume that the distribution of the ball radius is Gaussian, with its mean and variance defined in terms of the maximum strength of the source signal and a valid Lipschitz constant of the signal’s spatial function. Both quantities can, in general, be readily set based on knowledge of the application (the expected maximum strength of the source signal and the size of the arena). Under these assumptions, we can derive the following expressions for the local penalty:
(10)  
Here, erfc(·) is the complementary error function. The effective penalty factor is estimated based on the approximated interactions with the planned waypoints of all of the robot’s peers:
(11) 
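A sketch of this penalty, following the erfc-based form of González et al. [29]. Passing the GP posterior mean and variance at each peer waypoint as plain numbers, and the function names themselves, are assumptions of this sketch.

```python
import math

# Local penalty in the spirit of Gonzalez et al. [29]: the probability
# that a candidate point x lies outside the Lipschitz ball around a
# peer's planned waypoint x_j. mu_j and var_j are the GP posterior mean
# and variance at x_j; L is a Lipschitz constant and M the assumed-known
# maximum signal strength.
def local_penalty(x, x_j, mu_j, var_j, L, M):
    dist = math.hypot(x[0] - x_j[0], x[1] - x_j[1])
    z = (L * dist - M + mu_j) / math.sqrt(2.0 * var_j)
    return 0.5 * math.erfc(-z)

def total_penalty(x, peers, L, M):
    # Effective factor (Eq. (11)): product over all peers' planned waypoints.
    p = 1.0
    for x_j, mu_j, var_j in peers:
        p *= local_penalty(x, x_j, mu_j, var_j, L, M)
    return p
```

The penalty approaches 1 far from peers' planned waypoints (no effect) and shrinks near them, discouraging redundant sampling.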
II-F Information Sharing
Global inter-robot communication is assumed in this work. However, given the bandwidth limitations of ad-hoc wireless communication (likely in emergency response applications) and its energy footprint [31], the communication burden must be kept low. Thus, along with asynchronous planning, robots share only a downsampled set of observations. Moreover, each robot broadcasts the following information only after every planning instance: its next planned waypoint and the observations made over its last path.
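A hypothetical layout of this per-waypoint broadcast; the class and field names are illustrative assumptions, not from the paper.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical message layout for the per-waypoint broadcast described
# above: only the next planned waypoint and the (downsampled)
# observations from the last traversed path are shared, keeping the
# payload small.
@dataclass
class WaypointBroadcast:
    robot_id: int
    next_waypoint: Tuple[float, float]
    path_observations: List[Tuple[float, float, float]]  # (x, y, signal)
```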
III Case Studies
III-A Distributed Implementation of BayesSwarm
To represent the decentralized manner in which BayesSwarm operates, we developed a simulated environment using MATLAB’s (R2017b) parallel computing tools and deployed it on a dual 20-core workstation (Intel® Xeon Gold 6148, 27.5 MB cache, 2.40 GHz, 196 GB RAM). Each robot executes its behavior, as depicted in Fig. 1, in parallel with the rest of the swarm – updating its own knowledge model after each waypoint and deciding its next waypoint based on its own information and that received from its peers up to that point.
The simulation time step is set at 1 ms, and the observation frequency over a path at 1 Hz. To keep GP updating tractable, each robot uses its full observation dataset if its size is less than 1,000 samples; otherwise, the dataset is downsampled to 1,000 samples using a simple integer factor.
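The integer-factor downsampling described above might look like the following sketch; the exact factor-selection rule used in the paper is not specified, so this version uses the smallest integer factor that brings the count within the cap.

```python
def downsample(observations, cap=1000):
    """Keep the GP update tractable: if more than `cap` samples exist,
    thin the list by a simple integer stride (illustrative sketch)."""
    n = len(observations)
    if n <= cap:
        return observations
    factor = -(-n // cap)  # ceil(n / cap), smallest stride meeting the cap
    return observations[::factor]
```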
III-B Case Studies
To evaluate the BayesSwarm algorithm, two types of experiments are conducted using two distinct signal environments. The two environments, shown in Fig. 2, respectively provide a bimodal spatial distribution over a small arena and a complex multimodal spatial distribution over a larger arena. In Experiment 1, BayesSwarm is run with 5 robots (a small swarm size, chosen for ease of illustration) to analyze its performance over the two environments and compare it with that of two variations of BayesSwarm (BayesSwarm-Sync and BayesSwarm-Explorative) and an exhaustive search baseline. The synchronized-planning version (BayesSwarm-Sync) is implemented by changing the inequality constraint in Eq. (2) to an equality constraint (fixed interval between waypoints) – to investigate the hypothesized benefits of asynchronous planning. The purely explorative version (BayesSwarm-Explorative) is implemented by removing the exploitative contribution in Eq. (1) – to highlight the need for balancing exploration and exploitation. In Experiment 2, a scalability analysis is undertaken to explore the performance of BayesSwarm in Case 2 across multiple swarm sizes.
The results of the experiments are evaluated and compared in terms of relative completion time and mapping error. The relative completion time represents the search completion time relative to the idealized completion time, as given by:
(12) 
Here, the idealized completion time represents the time that a swarm robot would hypothetically take to directly traverse the straight-line path connecting the starting point and the signal source location. Although the focus of this work is source localization (a search problem), BayesSwarm can also be applied for mapping purposes (a coverage problem) by making the search purely explorative. Hence, we report the mapping error (in terms of root-mean-square error, RMSE), which measures how far the response estimated using the GP deviates from the actual signal distribution over the arena. The RMSE is computed over a set of test points uniformly distributed over the arena.
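The mapping-error metric can be sketched as follows; `gp_mean` and `true_signal` are stand-in callables, not the paper's interfaces.

```python
import math

# RMSE between the GP-estimated and actual signal over test points
# uniformly covering the arena (illustrative sketch).
def mapping_rmse(gp_mean, true_signal, test_points):
    sq = [(gp_mean(p) - true_signal(p)) ** 2 for p in test_points]
    return math.sqrt(sum(sq) / len(sq))
```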
For simulation termination purposes, two criteria are used: the first terminates the search if any robot arrives within a set vicinity of the signal source location; the second terminates the simulated mission if a maximum allowed search time is reached.
Case  Algorithm               Completion Time  Mapping Error
1     BayesSwarm              0.22             0.009
      BayesSwarm-Sync         0.16             0.008
      BayesSwarm-Explorative  0.50             0.007
      Exhaustive Search       6.55             –
2     BayesSwarm              0.41             0.102
      BayesSwarm-Sync         0.71             0.066
      BayesSwarm-Explorative  6.11             0.054
      Exhaustive Search       31.36            –
The maximum allowed search time, the idealized time, robot velocity, and Lipschitz constant are set separately for each case. Exhaustive search is performed in parallel by 5 robots (two robots covering one quarter of the arena, and the other three each covering one of the remaining quarters).
IV Results and Discussion
IV-A Overall Performance of BayesSwarm
Figure 3 shows snapshots of the status of a 5-robot swarm at different time points, based on the implementation of BayesSwarm in the Case 1 environment (Fig. 2(a)). They illustrate how the knowledge uncertainty (top plots) of any given robot changes as the swarm robots explore the arena while exchanging information with each other and updating their GP models (bottom plots).
Looking from the perspective of robot 5, it can be observed from Figs. 3(f) to 3(h) how the actions of the swarm help improve the model of the environment, and correspondingly how the uncertainty in robot 5’s knowledge of the signal environment reduces (Figs. 3(b) to 3(d)). In this case, the five robots are able to build a relatively accurate model of the environment and find the signal source in 36 s (using a total of 180 downsampled measurements); at that point, the estimated and actual signal distributions mostly coincide (Fig. 3(h)). The helpful role played by the adaptive exploitation weight is also evident from Figs. 3(g) to 3(h), which show a more explorative behavior early on and a more exploitative behavior later in the mission, when two of the robots converge on the source.
IV-B Experiment 1: Comparative Analysis of BayesSwarm
Table I summarizes the performance of the complete, synchronized, and explorative versions of BayesSwarm and the baseline algorithm, in terms of completion time and mapping error. The results show that BayesSwarm and its variations outperform the baseline exhaustive search in both case studies, with an order-of-magnitude improvement in search efficiency.
Although BayesSwarm requires a greater completion time than BayesSwarm-Sync in Case 1, the former is clearly superior in Case 2. This performance benefit in a complex environment is attributed to the planning flexibility afforded by the absence of synchronization, which enables paths of different lengths to be planned across the swarm.
Our asynchronous implementation, with communication occurring only at waypoints and in sequence among robots, however introduces non-homogeneity in the models of the environment across the swarm. An example of this is seen in the discrepancies between the knowledge states of robot 1 (Fig. 3(a)) and robot 5 (Fig. 3(b)) around the 5 s time point. In the future, this issue could be addressed, while retaining the asynchronous benefits, by designing the communication schedule to be independent of the planning processes.
While the purely explorative version (BayesSwarm-Explorative) expectedly provides a lower mapping error by reducing the knowledge uncertainty faster, it falls significantly behind both BayesSwarm-Sync and BayesSwarm in terms of search completion time, for both environment cases (as evident from Table I). This illustrates the importance of preserving the exploitation/exploration balance.
To study the impact of the penalty factor, which promotes waypoints away from those planned by peers, we ran BayesSwarm without the penalty. Compared to BayesSwarm, the version without the penalty factor got stuck at the local signal mode in Case 1 and took 1.5 times as long to find the global source in Case 2, with poorer mapping-error performance in the latter. These observations highlight the value of the penalty factor.
IV-C Experiment 2: Scalability Analysis of BayesSwarm
Here, we run BayesSwarm simulations on Case 2 with swarm sizes varying from 2 to 100. Figure 4 illustrates the results of this study in terms of the relative completion time, mapping error, and computing time per planning instance. The mapping error drops quickly with increasing swarm size, given the resulting increase in exploratory capability. The mission completion time also reduces at a remarkable rate between 2 and 50 robots, and then saturates (due to diminishing marginal utility). Some oscillations are observed in this metric, since the penalty characteristics become more aggressive as the swarm size (and thus robot crowding) increases. The computation cost increases, as expected, due to the growing size of the sample sets over which the GP model must be updated by the robots at each planning instance. This cost is, however, bounded via downsampling to a maximum of 1,000 samples. Interestingly, the downsampling does not noticeably affect the mission performance.
V Conclusion
In this paper, we proposed an asynchronous and decentralized swarm robotic algorithm to search for the maximum-strength source among spatially distributed signals. To this end, we exploit the batch Bayesian search concept, making important new modifications to account for the constraints and capabilities that differentiate embodied search from a standard Bayesian optimization process. Primarily, a new acquisition function is designed to incorporate the following: 1) knowledge gain over trajectories, as opposed to at points; 2) mitigation of interactions among the planned samples of different robots; 3) a time-adaptive balance between exploration and exploitation; and 4) accounting for motion constraints.
For evaluation, we used two cases with different arena sizes and signal distribution complexity. BayesSwarm outperformed an exhaustive search baseline, completing the missions 17 times and 76 times faster in the simple and the complex cases, respectively. The benefits of allowing asynchronous planning and an exploitation/exploration balance were also evident in the complex case (noticeably lower mission completion time), as studied via control experiments in which synchronization and pure exploration were enforced.
Scalability analysis of BayesSwarm demonstrated a somewhat superlinear reduction in completion time and mapping error with increasing number of robots. The computing cost per waypoint planning did increase sharply with increasing swarm size, since a larger swarm exacerbates the cost of refitting the GP (onboard the swarm robots), which grows steeply along the mission as more samples are collected. Efficient refitting approximations, e.g., particle learning, will be explored in the future to address this concern. Another direction of future research is the consideration of partial observation (attributable to communication constraints), which, along with physical demonstration, would allow a more comprehensive appreciation of the BayesSwarm algorithm.
Appendix A Gaussian Process Model
Gaussian process (GP) models provide non-parametric surrogates [32] that can be used for Bayesian inference over a function space [33]. For a set of observations, a GP expresses the observed values as the summation of an approximating function and an additive noise term. Assuming the noise follows an independent, identically distributed Gaussian distribution with zero mean and constant variance, the function can be estimated by a GP with a mean function and a covariance kernel:

(13)
(14)  
(15) 
Here, the vector of explicit basis functions and the covariance matrix are as introduced above. In this paper, the squared exponential kernel is used to define the covariance. The GP hyperparameters are determined by maximizing the log-likelihood, i.e.,

(16)
where
(17)  
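A minimal numerical sketch of GP regression consistent with this appendix, assuming a squared-exponential kernel, a zero prior mean (the explicit basis functions are omitted for brevity), and fixed hyperparameters rather than the likelihood maximization of Eq. (16). All function names are illustrative.

```python
import numpy as np

# Squared-exponential kernel between two sets of 2D points.
def sq_exp_kernel(A, B, length=0.5, sigma_f=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma_f**2 * np.exp(-0.5 * d2 / length**2)

def gp_posterior(X, y, Xq, noise=1e-4):
    """Posterior mean and variance at query points Xq, given training
    inputs X and noisy observations y (zero prior mean assumed)."""
    K = sq_exp_kernel(X, X) + noise * np.eye(len(X))
    Ks = sq_exp_kernel(X, Xq)
    Kss = sq_exp_kernel(Xq, Xq)
    mean = Ks.T @ np.linalg.solve(K, y)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)
```

Near training samples the posterior mean tracks the observations and the variance collapses toward the noise level; far from any sample the variance reverts to the prior signal variance, which is exactly the uncertainty signal exploited by the explorative term.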
References
 [1] W. Baetz, A. Kroll, and G. Bonow, “Mobile robots with active IR-optical sensing for remote gas detection and source localization,” in 2009 IEEE International Conference on Robotics and Automation. IEEE, 2009, pp. 2773–2778.
 [2] K. Nagatani, S. Kiribayashi, Y. Okada, K. Otake, K. Yoshida, S. Tadokoro, T. Nishimura, T. Yoshida, E. Koyanagi, M. Fukushima et al., “Emergency response to the nuclear accident at the Fukushima Daiichi nuclear power plants using mobile rescue robots,” Journal of Field Robotics, vol. 30, no. 1, pp. 44–63, 2013.
 [3] W. Li, J. A. Farrell, S. Pang, and R. M. Arrieta, “Moth-inspired chemical plume tracing on an autonomous underwater vehicle,” IEEE Transactions on Robotics, vol. 22, no. 2, pp. 292–307, 2006.
 [4] A. Viseras, T. Wiedemann, C. Manss, L. Magel, J. Mueller, D. Shutin, and L. Merino, “Decentralized multi-agent exploration with online-learning of Gaussian processes,” in 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016, pp. 4222–4229.
 [5] D. Song, C.-Y. Kim, and J. Yi, “Simultaneous localization of multiple unknown and transient radio sources using a mobile robot,” IEEE Transactions on Robotics, vol. 28, no. 3, pp. 668–680, 2012.
 [6] O. De Silva, G. K. Mann, and R. G. Gosine, “Development of a relative localization scheme for ground-aerial multi-robot systems,” in Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on. IEEE, 2012, pp. 870–875.
 [7] P. Ghassemi and S. Chowdhury, “Decentralized task allocation in multi-robot systems via bipartite graph matching augmented with fuzzy clustering,” in ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2018, pp. V02AT03A014–V02AT03A014.
 [8] ——, “Decentralized informative path planning with exploration-exploitation balance for swarm robotic search,” arXiv preprint arXiv:1905.09988, 2019.
 [9] J. Pugh and A. Martinoli, “Inspiring and modeling multi-robot search with particle swarm optimization,” in Swarm Intelligence Symposium, 2007. SIS 2007. IEEE. IEEE, 2007, pp. 332–339.
 [10] D. Gunning, “Explainable artificial intelligence (XAI),” Defense Advanced Research Projects Agency (DARPA), nd Web, 2017.
 [11] Y. Tan and Z.-y. Zheng, “Research advance in swarm robotics,” Defence Technology, vol. 9, no. 1, pp. 18–39, 2013.
 [12] P. Odonkor, Z. Ball, and S. Chowdhury, “Distributed operation of collaborating unmanned aerial vehicles for time-sensitive oil spill mapping,” Swarm and Evolutionary Computation, 2019.
 [13] A. Sinha, R. Kaur, R. Kumar, and A. Bhondekar, “A cooperative control framework for odor source localization by multi-agent systems,” 2017.
 [14] T. Wiedemann, D. Shutin, V. Hernandez, E. Schaffernicht, and A. J. Lilienthal, “Bayesian gas source localization and exploration with a multi-robot system using partial differential equation based modeling,” in Olfaction and Electronic Nose (ISOEN), 2017 ISOCS/IEEE International Symposium on. IEEE, 2017, pp. 1–3.
 [15] B. Charrow, N. Michael, and V. Kumar, “Cooperative multi-robot estimation and control for radio source localization,” The International Journal of Robotics Research, vol. 33, no. 4, pp. 569–580, 2014.
 [16] H. Hajieghrary, M. A. Hsieh, and I. B. Schwartz, “Multi-agent search for source localization in a turbulent medium,” Physics Letters A, vol. 380, no. 20, pp. 1698–1705, 2016.
 [17] P. Sujit and D. Ghose, “Negotiation schemes for multi-agent cooperative search,” Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering, vol. 223, no. 6, pp. 791–813, 2009.
 [18] J. Kennedy, “Particle swarm optimization,” Encyclopedia of Machine Learning, pp. 760–766, 2010.
 [19] K. Krishnanand and D. Ghose, “Glowworm swarm optimisation: a new method for optimising multimodal functions,” International Journal of Computational Intelligence Studies, vol. 1, no. 1, pp. 93–119, 2009.
 [20] M. Senanayake, I. Senthooran, J. C. Barca, H. Chung, J. Kamruzzaman, and M. Murshed, “Search and tracking algorithms for swarms of robots: A survey,” Robotics and Autonomous Systems, vol. 75, pp. 422–434, 2016.
 [21] J. Kennedy, “Swarm intelligence,” in Handbook of Nature-Inspired and Innovative Computing. Springer, 2006, pp. 187–219.
 [22] E. Bonabeau, D. d. R. D. F. Marco, M. Dorigo, G. Théraulaz, G. Theraulaz et al., Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, 1999, no. 1.
 [23] W. Jatmiko, K. Sekiyama, and T. Fukuda, “A PSO-based mobile sensor network for odor source localization in dynamic environment: Theory, simulation and measurement,” in 2006 IEEE International Conference on Evolutionary Computation. IEEE, 2006, pp. 1036–1043.
 [24] K. Krishnanand, P. Amruth, M. Guruprasad, S. V. Bidargaddi, and D. Ghose, “Glowworm-inspired robot swarm for simultaneous taxis towards multiple radiation sources,” in Proceedings 2006 IEEE International Conference on Robotics and Automation, 2006. ICRA 2006. IEEE, 2006, pp. 958–963.
 [25] P. Ghassemi, S. S. Lulekar, and S. Chowdhury, “Adaptive model refinement with batch Bayesian sampling for optimization of bio-inspired flow tailoring,” in AIAA Aviation 2019 Forum, 2019, p. 2983.
 [26] A. Kolling, P. Walker, N. Chakraborty, K. Sycara, and M. Lewis, “Human interaction with robot swarms: A survey,” IEEE Transactions on Human-Machine Systems, vol. 46, no. 1, pp. 9–26, 2016.
 [27] E. Klavins, “Communication complexity of multi-robot systems,” in Algorithmic Foundations of Robotics V. Springer, 2004, pp. 275–291.
 [28] M. Čáp, P. Novák, M. Selecký, J. Faigl, and J. Vokřínek, “Asynchronous decentralized prioritized planning for coordination in multi-robot system,” in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2013, pp. 3822–3829.
 [29] J. González, Z. Dai, P. Hennig, and N. Lawrence, “Batch Bayesian optimization via local penalization,” in Artificial Intelligence and Statistics, 2016, pp. 648–657.
 [30] J. Azimi, A. Fern, and X. Z. Fern, “Batch Bayesian optimization via simulation matching,” in Advances in Neural Information Processing Systems, 2010, pp. 109–117.
 [31] M. Li, K. Lu, H. Zhu, M. Chen, S. Mao, and B. Prabhakaran, “Robot swarm communication networks: architectures, protocols, and applications,” in 2008 Third International Conference on Communications and Networking in China. IEEE, 2008, pp. 162–166.
 [32] C. E. Rasmussen, “Gaussian processes in machine learning,” in Summer School on Machine Learning. Springer, 2003, pp. 63–71.
 [33] J. Snoek, H. Larochelle, and R. P. Adams, “Practical Bayesian optimization of machine learning algorithms,” in Advances in Neural Information Processing Systems, 2012, pp. 2951–2959.