On the Effects of Collision Avoidance on Emergent Swarm Behavior

10/14/2019 · by Chris Taylor, et al.

Swarms of autonomous agents, through their decentralized and robust nature, show great promise as a future solution to the myriad missions of business, military, and humanitarian relief. The diverse nature of mission sets creates the need for swarm algorithms to be deployed on a variety of hardware platforms. Swarms are currently viable on platforms where collisions between agents are harmless, but on many platforms collisions are prohibited since they would damage the agents involved. The available literature typically assumes that collisions can be avoided by adding a collision avoidance algorithm on top of an existing swarm behavior. Through an illustrative example in our experience replicating a particular behavior, we show that this can be difficult to achieve since the swarm behavior can be disrupted by the collision avoidance. We introduce metrics quantifying the level of disruption in our swarm behavior and propose a technique that is able to assist in tuning the collision avoidance algorithm such that the goal behavior is achieved as best as possible while collisions are avoided. We validate our results through simulation.


I Introduction

Swarms have been extensively studied and are an attractive choice for many applications due to their decentralized nature and robustness against individual failures [25, 24, 4]. A common goal with decentralized control in swarms is to achieve an emergent behavior [25, 10, 6], where the collective behavior of the swarm has properties that the behaviors of individual agents lack. This is desirable when the individual agents have limited awareness of the global objective throughout their decision-making process. In such systems, agents typically interact with each other through local agent-to-agent communication, local sensing without communication, or indirect communication through the environment, i.e., stigmergy [18]. The goal then is to design various local interaction mechanisms that result in desired globally emergent behaviors. Although swarms have been studied for a long time, the overwhelming majority of works ignore the effects that physical collisions among agents have on the overall swarm. While in many applications this is not important, these existing algorithms are insufficient if collisions among agents in the swarm are problematic.

For instance, in computer animations it is only necessary to create visually appealing swarm behaviors as opposed to physically realistic ones [25, 12]. In such applications collisions between different agents of the swarm are not even modeled, and agents can simply move through one another. In some optimization problems, virtual swarms imitate the behavior of ants to find solutions [17]; these virtual agents are likewise unaffected by collisions. Even among physical swarms, some implementations do not need to worry about collisions. A well-known example of a real-life swarm deployment, Kilobots [27], uses fairly small and slow-moving platforms that are mostly unaffected by collisions. Other works even use aerial platforms that are explicitly designed to collide as a means of communication [22, 23].

Instead, this paper is concerned with physical swarm systems in which collisions among agents are deemed catastrophic. We imagine many new-age applications in which we would like to leverage the various algorithms developed by the swarms community on physical agents that are fragile or may require high speed operations. An example of this is the quadrotor platform used in DARPA’s OFFSET program [13]. Collisions, in this case, will lead to the loss of the agents. Any mention of debilitating collisions is usually confined to anecdotal observations or “blooper reels” like the accompanying video [33] of the approach in Borrmann et al. [7]. It is clear from this example and anecdotal experience that some platforms are not expected to collide, a fact which is often ignored. The goal then is to still be able to leverage the ideas and benefits of swarming technology, while ensuring that agents do not collide.

Literature review: Various decentralized collision avoidance techniques exist to provide safety for swarms. Arguably the simplest approach, originally used for waypoint navigation [26], is the artificial potential field method, which simulates repulsive forces so that agents “repel” each other in a manner analogous to magnetic or Coulombic forces. Another promising approach is to use “gyroscopic” forces [16], which are designed to steer agents around obstacles without affecting the agent’s speed.

These efforts use simple ad-hoc approaches to collision avoidance, but later works take a more rigorous approach, introducing the concept of theoretically sound minimally invasive controllers. The first examples of these are optimal reciprocal collision avoidance (ORCA) [30] and control barrier certificates [7]. In both cases, agents have a primary goal in mind and select a ‘minimally invasive’ control input that will avoid collisions in a way that stays as true as possible to the intended behavior. To validate their techniques, most works use specially constructed test scenarios [30, 32, 7, 16, 1, 2, 20], usually involving a group of agents starting on a circle and heading to the point directly opposite on the circle. Unfortunately, most of these works do not investigate the effect of the collision avoidance algorithms on the original intended behavior of the swarm. This last step is critical as guaranteeing the lack of collisions is only half the problem. We must instead be able to guarantee this while also ensuring the desired swarm behavior still emerges.

A few works have sought to combine the study of emergent swarm behavior with collision avoidance, mostly in search and coverage control problems [5, 3, 8] or formation control [15]. In these cases, the original intended swarm behaviors are generally preserved without incident despite the addition of collision avoidance. However, in these examples the intended behaviors are naturally aligned with collision avoidance: both aim to keep agents away from each other. Such situations lead to swarm behaviors in which the agents are well separated, so the behavioral and anti-crash constraints are satisfied simultaneously.

Instead, in this work we consider a case where the intended behavior of the swarm is less aligned with avoiding collisions. Examples of emergent behaviors we are interested in are milling [14], where agents pack themselves into tight groups and orbit a common point, and double milling [10, 9, 29], where agents rotate in opposing directions and frequently encounter each other at high relative speeds. We believe more work is needed to understand how behaviors like these interact with collision avoidance.

Statement of contributions: In this work we rigorously study the effects of different collision avoidance strategies on a particular swarming algorithm proposed by Szwaykowska et al. [29]. Under a particular set of parameters, the ‘milling’ behavior is shown to emerge among the swarm of agents when collisions among agents are not modeled. We then impose two physical constraints on the system (no collisions and limited acceleration) and study how existing works can be leveraged to still achieve the desired emergent behavior.

Specifically, we first introduce a metric that captures how well the agents are performing the desired milling behavior. Using this metric, we explore two different collision avoidance techniques under a very large set of parameters to quantitatively understand how active collision avoidance disrupts the intended behavior of the swarm. Finally, given a particular choice of collision avoidance strategy, we show how to tune the parameters of the algorithm to ensure collisions among agents are avoided while preserving the intended behavior of the swarm as much as possible. Our results suggest that an algorithm able to simultaneously guarantee the emergence of the desired behavior and the absence of collisions should be co-designed, rather than obtained by combining existing swarming algorithms with existing collision avoidance strategies. Our results are validated through simulation.

II Problem Formulation

This paper is concerned with the deployment of physical swarm systems in which collisions between agents are absolutely not permitted. In such scenarios, we are interested in understanding the effect that this added constraint has on the ability of the swarm to reach an intended globally emergent behavior. More specifically, we aim to understand the effects that various collision avoidance algorithms have when combined with a particular swarming algorithm.

II-A Individual Agent Model

We are motivated by the desire to deploy swarms on physical platforms, so we focus on simple agent models that capture the physical aspects we are most concerned with. Letting  be the position of agent  in a swarm of  agents, we consider the double-integrator dynamics

(1)

with the following two constraints at all times .

C1. Limited acceleration. We assume that  for each agent , where  represents the maximum allowable acceleration.

C2. No collisions. Letting  represent the size of the agents, we want to be able to guarantee that

(2)

for all , where  represents the outer radius of each agent. Without constraint C1, agents could apply arbitrarily large accelerations for arbitrarily short periods of time. This is not practical for a physical swarm, given the restrictions that come from operating in the physical world. Thus, we are interested in strategies that satisfy C1.

II-B Desired Global Behavior: Ring State

Given the model above, we now introduce our specific desired behavior for the swarm system. The dynamics formulation in [29] is capable of producing a few behaviors, but the one we are interested in is the “ring state” behavior, where agents self-organize to orbit around a common point. Central to the dynamics is a ‘delayed attraction’ term, where agents are attracted to the positions of other agents, regardless of their distance, but with a delay. Agents are also able to ‘sense’ the relative locations of their immediate neighbors within a short sensing radius with no delay. This simulates a situation where agents can see their immediate neighbors and receive information on far-away agents through a separate channel.

The original dynamics in [29] use a fixed communication graph for the delayed channel, but in this work we assume all-to-all communication is available to help the desired behavior emerge as easily as possible, as our goal is to understand the effects that collision avoidance has on the ideal behavior. Given the model in Eq. 1, the desired controller is given by

(3)

where  comprises three terms (in order): keep the agent’s speed at approximately  with “gain” , avoid nearby agents using a collision avoidance term , and attract toward the delayed positions of other agents  seconds in the past, where the strength of the attraction is weighted by . The neighbor set comprises the states of agent ’s local neighbors and is based on a simple circular sensing area with radius  defined by

(4)

It is important to note how these are used in Eq. 3: if an agent needs the state of one of its local neighbors in it is received with no delay, but sensing far-away neighbors incurs a delay of seconds.

Amount of delay on attraction term
Desired speed
Gain on enforcing desired speed
Strength of delayed attraction term
Gain to control strength of repulsion term
Sensing radius and length scale
Maximum acceleration
Radius of a single agent
TABLE I: Summary of design parameters

All the terms we require to describe our swarm system are recalled in Table I. If we select suitable parameters for  and ignore the collision avoidance term and constraint C2, a rotating ring emerges, as shown in Fig. 1(a). Without constraint C2, the agents simply move through one another and the desired swarm behavior emerges without problems.
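To make the structure of Eq. 3 concrete, the following is a minimal Python sketch of one plausible reading of the controller described above. The parameter names (v0, alpha, a, tau) and the exact functional forms of the speed-regulation and attraction terms are our assumptions; the authoritative dynamics are those of [29].

```python
import numpy as np

def controller(i, pos, vel, history, t, params, repulsion):
    """One plausible reading of Eq. (3) for agent i (a sketch, not the
    authors' exact controller).

    pos, vel : (N, 2) arrays of current positions and velocities
    history  : callable history(s) -> (N, 2) positions at time s (delayed channel)
    params   : dict with assumed keys v0, alpha, a, tau
    repulsion: callable (i, pos, vel, params) -> (2,) anti-crash force
    """
    v0, alpha, a, tau = params["v0"], params["alpha"], params["a"], params["tau"]

    # 1) speed regulation: push |v_i| toward the desired speed v0
    speed = np.linalg.norm(vel[i]) + 1e-9
    u_speed = alpha * (v0 - speed) * vel[i] / speed

    # 2) all-to-all attraction toward other agents' positions tau seconds ago
    delayed = history(t - tau)                      # (N, 2)
    others = np.delete(delayed, i, axis=0)
    u_attract = a * np.mean(others - pos[i], axis=0)

    # 3) collision avoidance based on local, undelayed sensing
    u_avoid = repulsion(i, pos, vel, params)

    return u_speed + u_attract + u_avoid
```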

However, since we are interested in ensuring collisions do not occur, we must consider various collision avoidance strategies  and their effect on the intended behavior.

(a) Collisions and collision avoidance disabled. Agents can freely pass through each other in opposite directions.

(b) Collisions and collision avoidance enabled, but the “aggressiveness” is too weak. Agents crash into each other frequently.

(c) Collisions and collision avoidance enabled, and is sufficient. Agents avoid crashes and form the ring.

(d) Collisions and collision avoidance enabled, but the aggressiveness is too strong. Agents scatter aimlessly.
Fig. 1: Positions and velocities of 12 agents for different parameters of . The other parameters are held constant at . Also shown for each is the orderliness metric .

III Methodology

With our problem defined, we must first choose different types of collision avoidance strategies  for the agents to employ. However, in trying to understand the effects these have on the desired emergent behavior, we note that we are essentially attempting to capture a qualitative property. Figure 1 clearly shows that the question of whether the intended behavior successfully emerged has a non-binary answer. Thus, after discussing different collision avoidance mechanisms in Section III-A, we propose a quantifiable metric in Section III-B to enable comparison between various states to determine which produces the desired emergent behavior ‘better’. Finally, we utilize these tools in Section III-C to investigate how well collision avoidance strategies can be tuned to achieve the desired emergent behavior while satisfying physical constraints C1 and C2.

III-A Collision Avoidance

We compare two collision avoidance schemes to be used for the anti-crash force  in Eq. 3. The first choice of collision avoidance is a potential-fields scheme presented in [29], based on the gradient of the potential function

(5)

The second collision avoidance scheme we study is the “gyroscopic” force presented in [16]. This produces a force orthogonal to the agent’s velocity that “steers” the agent without changing its speed. It can be written in closed form as

(6)

where  is a rotation matrix that gives a vector orthogonal to agent i’s velocity,  is the state of the nearest agent, and

(7)

is the sign function, modified such that the agent is forced to steer left during a perfect head-on approach to prevent a situation where . The function  represents a potential controlling the magnitude of the steering force. As [16] specifies, the magnitude is arbitrary, so we choose

(8)

such that the force magnitude exactly matches that of the potential-fields method in Eq. 5 when one other agent is present.
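A sketch contrasting the two schemes is below. Since the potential of Eq. 5 is not reproduced here, we substitute an exponentially decaying pairwise potential as a placeholder; the gyroscopic variant reuses that magnitude but applies it orthogonally to the agent’s velocity, as described above. The parameter names (gamma, l, r_sense) and the exact potential are assumptions.

```python
import numpy as np

def potential_field_force(i, pos, vel, params):
    """Repulsion as the negative gradient of a pairwise potential.
    The exponential form below is a placeholder for Eq. (5)."""
    gamma, l, r_sense = params["gamma"], params["l"], params["r_sense"]
    force = np.zeros(2)
    for j in range(pos.shape[0]):
        if j == i:
            continue
        d = pos[i] - pos[j]
        dist = np.linalg.norm(d)
        if dist < r_sense:
            # -dU/dx_i for the placeholder potential U(d) = gamma * exp(-d / l)
            force += (gamma / l) * np.exp(-dist / l) * d / dist
    return force

def gyroscopic_force(i, pos, vel, params):
    """Steering force in the spirit of [16]: same magnitude as the
    potential-field force from the nearest neighbor, applied orthogonally
    to the agent's velocity so that its speed is unchanged (a sketch)."""
    gamma, l, r_sense = params["gamma"], params["l"], params["r_sense"]
    others = [j for j in range(pos.shape[0]) if j != i]
    nearest = min(others, key=lambda j: np.linalg.norm(pos[i] - pos[j]))
    dist = np.linalg.norm(pos[i] - pos[nearest])
    if dist >= r_sense:
        return np.zeros(2)
    v = vel[i] / (np.linalg.norm(vel[i]) + 1e-9)
    orth = np.array([-v[1], v[0]])                 # 90-degree rotation of v
    rel = pos[nearest] - pos[i]
    cross = v[0] * rel[1] - v[1] * rel[0]          # which side the neighbor is on
    side = -1.0 if cross == 0 else float(np.sign(cross))  # break head-on ties by steering left
    mag = (gamma / l) * np.exp(-dist / l)          # same magnitude as the placeholder potential
    return -side * mag * orth
```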

Regardless of the choice of the collision-avoidance term , we modify the dynamics of each agent to ensure they satisfy C1 by using

(9)

where caps the acceleration in the direction of ,

(10)
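A minimal sketch of the saturation in Eqs. 9–10, assuming it simply rescales the commanded input whenever its magnitude exceeds the maximum acceleration (the exact form used in the paper may differ):

```python
import numpy as np

def cap_acceleration(u, a_max):
    """Scale the commanded acceleration u so that its magnitude never
    exceeds a_max while preserving its direction (constraint C1)."""
    norm = np.linalg.norm(u)
    if norm > a_max:
        return u * (a_max / norm)
    return u
```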

Fig. 1 explores what happens with the potential-fields collision avoidance approach  as we change just the strength parameter  while leaving all other parameters described in Table I fixed.

Fig. 1(a) shows the idealistic case in which agents are allowed to move through one another. With constraints C1 and C2 enforced, the remaining panels demonstrate the challenges that arise in guaranteeing that the desired swarm behavior still emerges. In Fig. 1(b), the collision avoidance is turned on, but the gain  is not large enough, so agents still collide too often for the behavior to emerge. As we continue to increase the collision avoidance gain, Fig. 1(c) shows a case in which collisions no longer occur and the desired behavior apparently emerges. This seems to support the conjecture that swarming algorithms combined with collision avoidance strategies are sufficient for deploying actual swarms. However, Fig. 1(d) shows what happens as the collision avoidance gain  becomes too large; the desired behavior never emerges because agents are too active in avoiding one another.

We are interested in swarming algorithms that can guarantee the emergence of the desired behavior while actively avoiding collisions. Unfortunately, Fig. 1 also demonstrates that the property we are trying to assess is a qualitative one of the entire swarm. This motivates the need for a more precise metric of the quality of the emergent behavior, rather than qualitative comparisons.

III-B Measuring Emergent Behavior Quality

We measure the quality of the emergent behavior both through the number of collisions and through specialized metrics that quantify how closely the behavior matches the desired ring formation. Many other works define metrics to quantify emergent behavior: for instance, [21] uses polarity and normalized angular momentum metrics to quantify a rotating mill similar to our ring state, [31] uses group polarization to quantify alignment in fish schooling, and [11] uses a correlation function to quantify alignment in starling flocks. Similar to those in [21], we introduce two metrics to quantify the quality of the emerged behavior: the “fatness” , which characterizes how thick the ring is relative to its inner diameter, and the “tangentness” , the degree to which agents’ velocities are aligned tangentially to the ring.

Formally, letting  be the average position of all agents, and  and  be the minimum and maximum distances of the formation from , respectively, the fatness is defined as

(11)

In other words, the fatness is the proportion of empty space available in the center of the formation, where implies a perfectly thin ring and implies an entirely filled-in disc.

The tangentness is defined as

(12)

which is the average of the cosine of the angle between an agent’s velocity vector and the normal vector to the circle centered at . A tangentness of  represents perfect alignment and  represents maximum disorder. The tangentness is similar to the normalized angular momentum measure in [21], except that it ignores each agent’s absolute speed and only considers alignment. Figs. 2 and 3 show example plots of  and  over time for parameter choices that lead to success and failure, respectively, in achieving the desired emergent behavior.

Fig. 2: Fatness  and tangentness  for a successful emergence. Fig. 1(c) shows a snapshot of the final formation.
Fig. 3: Fatness  and tangentness  for a failed emergence. Fig. 1(d) shows a snapshot of the final formation.
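The sketch below shows one way the instantaneous metrics could be computed from the agent states. Because the exact normalizations of Eqs. 11–12 are not reproduced above, the specific forms (fatness as one minus the ratio of inner to outer radius, tangentness as the mean absolute cosine between each velocity and the radial direction, so that 0 means perfectly tangential motion) are our assumptions.

```python
import numpy as np

def fatness(pos):
    """Fatness: how thick the ring is relative to its outer radius.
    Assumed form: 1 - r_min / r_max, so 0 is a thin ring and 1 a filled disc."""
    center = pos.mean(axis=0)
    radii = np.linalg.norm(pos - center, axis=1)
    return 1.0 - radii.min() / radii.max()

def tangentness(pos, vel):
    """Tangentness: mean |cos| of the angle between each agent's velocity and
    the radial (normal) direction from the swarm center. Under this assumed
    convention, 0 corresponds to a perfectly tangential (ring-like) motion."""
    center = pos.mean(axis=0)
    radial = pos - center
    radial /= np.linalg.norm(radial, axis=1, keepdims=True) + 1e-9
    v = vel / (np.linalg.norm(vel, axis=1, keepdims=True) + 1e-9)
    return float(np.mean(np.abs(np.sum(radial * v, axis=1))))
```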

The fatness and tangentness metrics are defined for a single instant in time, but it is more useful to consider the behavior of the swarm in steady state. We let  and  be the average values of  over the last  seconds

(13)
(14)

where is the interval over which an average is recorded. We choose in our tests.

We additionally define a single orderliness metric that combines the steady-state fatness  and steady-state tangentness  of the system into one number as

(15)

where represents a perfect ring and represents maximum disorder. Fig. 1 shows the approximate values of under each formation.

To quantify crashes, we consider the crash rate in collisions per second, since we are interested in the steady-state behavior of the swarm independently of how long the swarm has been operational. Since we consider collisions to be absolutely prohibited, it does not make sense to simply count how many times agents violate constraint C2 with no further consequences, as might be appropriate for ‘soft’ agents like fish [31]. We instead expect collided agents to be ‘damaged’ in such a way that their participation in the swarm is no longer possible. To capture this, we remove any agents that violate constraint C2 from their current location and ‘respawn’ them at a safe distance away from the rest of the swarm. From the physical swarm perspective, this can be thought of as launching a new agent to replace one lost through collision. Respawning, as opposed to simply deleting the agents, is necessary because we are interested in the steady-state behavior of the swarm for a specific number of agents, and allowing  to decrease arbitrarily could bias the analysis.
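A sketch of this crash-handling rule follows: any pair of agents closer than the collision distance counts as a crash, and the agents involved are respawned well away from the swarm so that the number of agents stays fixed. The collision distance of 2R, the respawn placement, and the helper name are assumptions.

```python
import numpy as np

def handle_crashes(pos, vel, R, v0, safe_margin=10.0, rng=np.random):
    """Detect violations of constraint C2 (assumed: pairwise distance < 2R),
    respawn the agents involved outside the swarm so N stays fixed, and
    return the number of colliding pairs this step (for the crash rate)."""
    N = pos.shape[0]
    crashed, n_pairs = set(), 0
    for i in range(N):
        for j in range(i + 1, N):
            if np.linalg.norm(pos[i] - pos[j]) < 2.0 * R:
                crashed.update((i, j))
                n_pairs += 1
    if crashed:
        center = pos.mean(axis=0)
        radius = np.linalg.norm(pos - center, axis=1).max()
        for k in crashed:
            # 'launch' a replacement agent at a safe distance with a random heading
            theta, phi = rng.uniform(0.0, 2.0 * np.pi, size=2)
            pos[k] = center + (radius + safe_margin) * np.array([np.cos(theta), np.sin(theta)])
            vel[k] = v0 * np.array([np.cos(phi), np.sin(phi)])
    return n_pairs
```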

Equipped with our orderliness metric  and crash rate metric, we can now study our problem in a more quantitative way. Figs. 6 and 5 explore the results as we vary the sizes of the agents , the collision avoidance gain , and the sensing range . The white curves are explained in Section III-C.

We make a few observations here. As the strength of the collision avoidance increases, whether increasing in Fig. 5 or in Fig. 6 (both in the +y direction), the behavior quality approaches zero. Additionally, in the -y direction as we weaken the collision avoidance, the crash rate increases and the quality abruptly drops to zero, as can be seen by the dark blue region in the lower-right of both figures.

Figs. 6 and 5 seem to suggest that there exists a hard boundary in the parameter space separating a successfully emerged behavior and one that fails due to too many collisions. The edge of this boundary just before agents begin colliding seems to provide the best behavior quality . This suggests that if our goal is to utilize swarming algorithms with various collision avoidance strategies, we would like to operate right at this boundary. We want to find this boundary in a more methodical manner than sampling the parameter space.

III-C Finding Safe Collision Avoidance Parameters

Here we are interested in determining sets of parameters that avoid collisions while being as close as possible to the boundary identified in Section III-B. More specifically, given all of the parameters in Table I except for one, we wish to find the critical value of the parameter which maximizes  and minimizes crashes. Based on our results from Section III-B, we wish to choose parameters that ‘just barely’ guarantee safety such that agents are maintaining their intended behavior as best as possible. Rather than analyzing the entire swarm, we take the myopic view of one agent and identify conditions under which it can guarantee no collisions with a fixed number of other agents.

We consider our test scenario to be a fixed number of agents avoiding each other while in each other’s sensing radii. We find that while agents are in each other’s local sensing radii and avoiding each other, the anti-crash term tends to be much stronger than the other terms in Eq. 3. Thus, as an approximation we represent the agent dynamics as

(16)

Let be the set of parameters used to define , . We consider a selection of parameters to be on the edge of safety if agents just barely avoid a crash, i.e., the closest distance they can get under our test scenario is exactly .

III-C1 Safety with two agents

We begin by studying the conditions under which one agent can guarantee avoiding crashes with a single other agent. Our starting point is any state in which a second agent has just entered the sensing range of the first agent; they are exactly  away from one another. For analysis purposes and with a slight abuse of notation, we redefine the states to be in coordinates relative to agent 1 rather than a fixed frame. We can now introduce the reachable set , which is the set of all possible relative positions two agents can be in while avoiding a crash using the dynamics of Eq. 16 with parameters . This corresponds to any situation where two agents have just entered each other’s sensing radii traveling at a speed up to and are avoiding each other. The reachable set for this situation is

(17)

where come from the parameters . Note that this is the set of positions relative to the starting state of agent 1, but a rigid transformation applied to all coordinates can transform this scenario into anything where , . We define the “headroom” as the available space agents have in the worst case collision scenario

(18)

Thus, the parameters on the edge of safety that guarantee no collisions with two agents are  such that . We conjecture that the solution for  is a head-on collision at full speed; that is, we consider only the subset of the reachable set where  and . This simplifies finding the solution to Eq. 18 without running any optimization routines.
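Under this conjecture, the two-agent headroom can be estimated by integrating the repulsion-only dynamics of Eq. 16 for a head-on approach at full speed. The sketch below does exactly that; the exponential repulsion is a placeholder for the actual term, the collision distance of 2R is assumed, and the parameter names are our own.

```python
import numpy as np

def headroom_two_agents(params, dt=1e-3, t_max=20.0):
    """Estimate the two-agent headroom: the worst-case clearance when two
    agents enter each other's sensing radius head-on at speed v0 and
    interact only through a placeholder repulsion (cf. Eq. 16)."""
    v0, gamma, l = params["v0"], params["gamma"], params["l"]
    r_sense, a_max, R = params["r_sense"], params["a_max"], params["R"]

    d = r_sense            # separation: they have just entered sensing range
    v_rel = -2.0 * v0      # closing speed for a head-on approach at full speed
    d_min = d
    t = 0.0
    while t < t_max and v_rel < 0.0:
        # repulsive acceleration on each agent, capped at a_max (constraint C1)
        a = min((gamma / l) * np.exp(-d / l), a_max)
        v_rel += 2.0 * a * dt          # both agents decelerate symmetrically
        d += v_rel * dt
        d_min = min(d_min, d)
        t += dt
    return d_min - 2.0 * R             # clearance beyond the assumed collision distance
```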

III-C2 Safety with three agents

Clearly, more than two agents may come into contact with each other, so we extend the logic of the previous discussion to three agents. Similar to before, we consider the reachable set for three agents that are avoiding each other using the dynamics of Eq. 16. This scenario, specifically, consists of:

  1. Agents and come within  of one another and begin actively avoiding each other, i.e., their states are in .

  2. Agent enters at the edge of either ’s or ’s sensing radius.

Using this setup we define as

(19)

The headroom for three agents is defined, similarly to , as

(20)

We find the solution to the three-agent case using numerical optimization techniques. Specifically, we use two different optimization algorithms, simulated annealing [34] and differential evolution [28], both included with the SciPy package [19], and verify that they arrive at the same answer. For all choices of  that avoid a crash, the solution we find for the three-agent worst case can be described as:

  1. “boosts” the speed of past

  2. undergoes a head-on collision with .

Thus, to calculate , we first calculate , the maximum speed achieved by after it comes in contact with . can be found by solving

(21)

We find  is around 90°, which allows  to ‘push’  and increase its speed. After finding , the headroom is similar to , where the worst case is a head-on collision with  and ; thus

(22)

Having defined the headroom for two and three agents, respectively, we can now use them to ‘tune’ the collision avoidance and find parameters that are on the edge of safety, that is,  or . To do this, we assume that all of the collision avoidance parameters are given except one, and solve  as a numerical root-finding problem, assuming . Based on our observations from Figs. 5 and 6, we believe this gives the optimum point between emergence and safety. While our headroom approach works empirically when considering just , the complexity of this approach for  motivates the need to co-design a highly specialized collision avoidance algorithm for this behavior instead of tuning a generic one. Additionally, guaranteeing safety is difficult due to the simplifying assumptions made in formulating Eq. 16. We leave guarantees of safety, as well as consideration of  for , to future work.
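For a single free parameter (here the repulsion gain, which we call gamma), the edge of safety can be located with a standard scalar root-finder applied to the headroom. The sketch below uses SciPy’s brentq together with the headroom_two_agents estimate from the earlier sketch; casting the problem this way is our assumption, not necessarily the authors’ exact procedure.

```python
from scipy.optimize import brentq

def critical_gain(params, gamma_lo=1e-3, gamma_hi=10.0):
    """Find the repulsion gain at which the two-agent headroom is exactly
    zero, i.e. the weakest avoidance that just barely prevents a crash.
    Uses headroom_two_agents() from the earlier sketch."""
    def h(gamma):
        p = dict(params, gamma=gamma)
        return headroom_two_agents(p)
    # brentq needs a sign change over [gamma_lo, gamma_hi]: too weak gives a
    # negative headroom (crash), strong enough gives a positive headroom.
    return brentq(h, gamma_lo, gamma_hi)
```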

IV Results

To validate our theory, we explore many different combinations of the parameters in Table I, the number of agents , and the choice of collision avoidance  to see how the convergence quality is affected. For each particular choice of parameters we initialize all agents on a grid formation with a spacing of , and set their initial speeds to  and their initial bearings randomly. As mentioned in Section III-B, we respawn agents that collide in order to ensure that collisions are actually catastrophic and that the number of agents remains fixed.

IV-A Comparing Collision Avoidance Algorithms

Fig. 4: A comparison of how each collision avoidance strategy scales with the number of agents . Left: the convergence quality . Right: the crash rate.

We compare the potential-fields method [29] and the gyroscopic method [16] through our convergence metric . To keep the comparison unbiased, we allow each collision avoidance method to take on a range of repulsion strengths  between 0 and 4, and we then select the value of  that gives the best convergence quality . Fig. 4 shows  and the crash rate for both collision avoidance methods as a function of the number of agents, with the other parameters fixed at . It is clear that the potential-fields strategy outperforms the gyroscopic method for this particular set of parameters. Despite picking the best value of , the gyroscopic strategy is not able to cope with more than about 25 agents, and the behavior quality quickly starts to degrade. We additionally test control barrier certificates from [7] with similar results, where significant interference causes the goal behavior to fail to emerge. We therefore focus our efforts on the potential-fields approach from Eq. 5 and leave further analysis of the barrier certificates technique to future work.
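This comparison procedure can be expressed as a small parameter sweep: for each avoidance method and each swarm size, try a grid of repulsion strengths in [0, 4], simulate, and keep the best orderliness. The harness run_simulation below is hypothetical and stands in for the dynamics and metrics sketched earlier.

```python
import numpy as np

def best_quality_per_method(methods, agent_counts, base_params, run_simulation):
    """For each collision-avoidance method and swarm size N, sweep the
    repulsion strength gamma over [0, 4] and record the best steady-state
    orderliness Q and the corresponding crash rate.

    run_simulation(method, params, N) -> (Q, crash_rate) is a hypothetical
    harness wrapping the dynamics and metrics sketched above."""
    results = {}
    for name, method in methods.items():
        for N in agent_counts:
            best = (-np.inf, None, None)
            for gamma in np.linspace(0.0, 4.0, 21):
                params = dict(base_params, gamma=gamma)
                Q, crash_rate = run_simulation(method, params, N)
                if Q > best[0]:
                    best = (Q, crash_rate, gamma)
            results[(name, N)] = {"Q": best[0], "crash_rate": best[1], "gamma": best[2]}
    return results
```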

IV-B Choosing Collision Avoidance Parameters

Fig. 5: Left: The convergence quality . Right: the crash rate as a function of and . The white lines show safe values of as a function of , where safety is defined by and .
Fig. 6: A similar plot to Fig. 5 except and is varied.

We show in Section III-C how to choose parameters for the potential-fields strategy that are on the ‘edge of safety’, that is, just barely strong enough to avoid collisions. We validate this approach by exploring a large space of parameters, investigating the convergence quality and crash rate for each choice, and then comparing these to the theoretical boundary line predicted by . Fig. 5 shows a plot of the quality and crash rate as a function of two parameters, the agent size  and force multiplier , with the other parameters held fixed at . Additionally, Fig. 5 shows two curves defined by the value of  as a function of  where the headroom  and . Our procedure is able to predict the boundary between  and  quite well, with  giving more conservative parameters that are almost entirely crash-free except for extreme values of . Fig. 6 shows similar results if we predict the boundary value of  instead of .

It is clear from our results that there is a strong interdependency between the choice of collision avoidance and the emergent behavior. Parameters ‘below’ the boundary line approximately defined by  produce no useful behavior due to too many collisions, as can be seen in the crash-rate plots on the right side of Figs. 5 and 6. Parameters just ‘above’ the boundary line produce the best results, i.e., the best quality , but as we increase the aggressiveness of the avoidance, the quality gradually drops toward zero and, again, no meaningful behavior emerges.

V Conclusion

Although swarming algorithms and collision avoidance have each received significant attention independently of one another, this paper shows why further research is necessary in applications where collisions cannot occur. We support our claim through an illustrative example of a particular behavior that is disrupted by different collision avoidance strategies unless great care is taken to tune the collision avoidance parameters. This paper thus identifies the need for novel controllers that are co-designed to account for both collision avoidance and the globally emergent behavior simultaneously.

We also present a methodical technique for choosing design parameters that maximize convergence quality while avoiding collisions, and we demonstrate its efficacy empirically. We propose that the best parameters are those on the edge of safety, or intuitively, as weak as possible while still being strong enough to avoid collisions.

In the future, we intend to develop novel controllers that can achieve the behavior discussed here while simultaneously ensuring collision avoidance. We also intend to have agents sense each other locally, as opposed to the infinite-range communication assumption described earlier. Finally, we plan to test additional swarm behaviors to explore how our results generalize, including to three-dimensional cases.

Acknowledgements

This work was supported by the Department of the Navy, Office of Naval Research (ONR), under federal grant N00014-19-1-2121.

References

  • [1] J. Alonso-Mora, A. Breitenmoser, P. Beardsley, and R. Siegwart (2012) Reciprocal collision avoidance for multiple car-like robots. In 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, pp. 360–366. External Links: Document, ISBN 978-1-4673-1405-3, Link Cited by: §I.
  • [2] J. Alonso-Mora, A. Breitenmoser, M. Rufli, P. Beardsley, and R. Siegwart (2012) Optimal reciprocal collision avoidance for multiple non-holonomic robots. In Springer Tracts in Advanced Robotics, Vol. 83 STAR, pp. 203–216. External Links: Document, ISBN 9783642327223, ISSN 16107438 Cited by: §I.
  • [3] S. H. Arul, A. J. Sathyamoorthy, S. Patel, M. Otte, H. Xu, M. C. Lin, and D. Manocha (2019) LSwarm: Efficient Collision Avoidance for Large Swarms with Coverage Constraints in Complex Urban Scenes. arXiv preprint arXiv:1902.08379. External Links: 1902.08379, Link Cited by: §I.
  • [4] L. Bayindir (2016) A review of swarm robotics tasks. Neurocomputing 172, pp. 292–321. External Links: Document, ISBN 0925-2312, ISSN 18728286 Cited by: §I.
  • [5] R.W. Beard and T.W. McLain (2003) Multiple UAV cooperative search under collision avoidance and limited range communication constraints. In IEEE Conference on Decision and Control, Maui, HI, pp. 25–30. External Links: Document Cited by: §I.
  • [6] E. Bonabeau, D. d. R. D. F. Marco, M. Dorigo, G. Theraulaz, et al. (1999) Swarm intelligence: from natural to artificial systems. Oxford university press. Cited by: §I.
  • [7] U. Borrmann, L. Wang, A. D. Ames, and M. Egerstedt (2015) Control Barrier Certificates for Safe Swarm Behavior. IFAC-PapersOnLine 48 (27), pp. 68–73. External Links: Document, ISBN 9781467360906, ISSN 24058963 Cited by: §I, §I, §IV-A.
  • [8] A. Breitenmoser and A. Martinoli (2016) On combining multi-robot coverage and reciprocal collision avoidance. Springer Tracts in Advanced Robotics 112, pp. 49–64. External Links: Document, ISBN 9784431558774, ISSN 1610742X Cited by: §I.
  • [9] J. A. Carrillo, A. Klar, S. Martin, and S. Tiwari (2010) Self-Propelled Interacting Particle Systems With Roosting Force. Mathematical Models and Methods in Applied Sciences 20 (supp01), pp. 1533–1552. External Links: Document, ISSN 0218-2025 Cited by: §I.
  • [10] J. Carrillo, M. D’Orsogna, and V. Panferov (2009) Double milling in self-propelled swarms from kinetic theory. Kinetic and Related Models 2 (2), pp. 363–378. External Links: Document, ISSN 1937-5093 Cited by: §I, §I.
  • [11] A. Cavagna, A. Cimarelli, I. Giardina, G. Parisi, R. Santagati, F. Stefanini, and M. Viale (2010) Scale-free correlations in starling flocks. Proceedings of the National Academy of Sciences 107 (26), pp. 11865–11870. External Links: Document, 0911.4393, ISBN 1091-6490 (Electronic)$\$r0027-8424 (Linking), ISSN 0027-8424, Link Cited by: §III-B.
  • [12] Q. Chen, G. Luo, Y. Tong, X. Jin, and Z. Deng (2019) Shape-constrained flying insects animation. Computer Animation and Virtual Worlds 30 (3-4), pp. 1–11. External Links: Document, ISSN 1546427X Cited by: §I.
  • [13] T. Chung (2017) OFFensive swarm-enabled tactics (offset). Note: https://www.darpa.mil/work-with-us/offensive-swarm-enabled-tacticsAccessed: 2019-09-10 Cited by: §I.
  • [14] M. R. D’Orsogna, Y. L. Chuang, A. L. Bertozzi, and L. S. Chayes (2006) Self-propelled particles with soft-core interactions: Patterns, stability, and collapse. Physical Review Letters 96 (10). External Links: Document, ISBN 0031-9007 (Print), ISSN 00319007 Cited by: §I.
  • [15] L. Dai, Q. Cao, Y. Xia, and Y. Gao (2017) Distributed MPC for formation of multi-agent systems with collision avoidance and obstacle avoidance. Journal of the Franklin Institute 354 (4), pp. 2068–2085. External Links: Document, ISSN 00160032 Cited by: §I.
  • [16] Dong Eui Chang, S.C. Shadden, J.E. Marsden, and R. Olfati-Saber (2003) Collision avoidance for multiple agent systems. In IEEE Conference on Decision and Control, Vol. 42, Maui, HI, pp. 539–543. External Links: Document, ISBN 0-7803-7924-1, ISSN 0191-2216, Link Cited by: §I, §I, §III-A, §III-A, §IV-A.
  • [17] M. Dorigo and T. Stützle (2019) Ant colony optimization: Overview and recent advances. In International Series in Operations Research and Management Science, Vol. 272, pp. 311–351. External Links: Document, ISBN 9783319910864, ISSN 08848289 Cited by: §I.
  • [18] O. Holland and C. Melhuish (1999) Stigmergy, self-organization, and sorting in collective robotics. Artificial Life 5 (2), pp. 173–202. External Links: Document, ISSN 10645462 Cited by: §I.
  • [19] E. Jones, T. Oliphant, P. Peterson, et al. (2001–) SciPy: open source scientific tools for Python. Note: [Online; accessed 2019-09-10] External Links: Link Cited by: §III-C2.
  • [20] E. Lalish and K. A. Morgansen (2012) Distributed reactive collision avoidance. Autonomous Robots 32 (3), pp. 207–226. External Links: Document, ISBN 9781109610482, ISSN 09295593 Cited by: §I.
  • [21] Y. li Chuang, M. R. D’Orsogna, D. Marthaler, A. L. Bertozzi, and L. S. Chayes (2007) State transitions and the continuum limit for a 2D interacting, self-propelled particle system. Physica D: Nonlinear Phenomena 232 (1), pp. 33–47. External Links: Document, 0606031, ISBN 3102062679, ISSN 01672789 Cited by: §III-B, §III-B.
  • [22] S. Mayya, P. Pierpaoli, G. Nair, and M. Egerstedt (2017) Collisions as information sources in densely packed multi-robot systems under mean-field approximations. In Robotics: Science and Systems, Vol. 13, Boston, MA. External Links: ISBN 9780992374730, ISSN 2330765X, Link Cited by: §I.
  • [23] Y. Mulgaonkar, A. Makineni, L. Guerrero-Bonilla, and V. Kumar (2018) Robust Aerial Robot Swarms Without Collision Avoidance. IEEE Robotics and Automation Letters 3 (1), pp. 596–603. External Links: Document, ISSN 2377-3766, Link Cited by: §I.
  • [24] H. Oh, A. Ramezan Shirazi, C. Sun, and Y. Jin (2017) Bio-inspired self-organising multi-robot pattern formation: A review. Robotics and Autonomous Systems 91, pp. 83–100. External Links: Document, ISSN 09218890, Link Cited by: §I.
  • [25] C. W. Reynolds (1987) Flocks, herds and schools: A distributed behavioral model. In ACM SIGGRAPH Computer Graphics, Vol. 21, New York, NY, pp. 25–34. External Links: Document, 0208573, ISBN 0897912276, ISSN 00978930, Link Cited by: §I, §I.
  • [26] E. Rimon and D. Koditschek (1992) Exact Robot Navigation Using Artificial Potential Functions. Robotics and Automation, IEEE 8 (5), pp. 501–518. External Links: Document, ISSN 1042296X, Link Cited by: §I.
  • [27] M. Rubenstein, C. Ahler, and R. Nagpal (2012) Kilobot: A low cost scalable robot system for collective behaviors. In Proceedings - IEEE International Conference on Robotics and Automation, Saint Paul, MN, pp. 3293–3298. External Links: Document, ISBN 9781467314039, ISSN 10504729 Cited by: §I.
  • [28] R. Storn and K. Price (1997) Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization 11 (4), pp. 341–359. Cited by: §III-C2.
  • [29] K. Szwaykowska, I. B. Schwartz, L. Mier-Y-Teran Romero, C. R. Heckman, D. Mox, and M. A. Hsieh (2016) Collective motion patterns of swarms with delay coupling: Theory and experiment. Physical Review E 93 (3). External Links: Document, 1601.08134, ISSN 24700053 Cited by: §I, §I, §II-B, §II-B, §III-A, §IV-A.
  • [30] J. Van Den Berg, S. J. Guy, M. Lin, and D. Manocha (2011) Reciprocal n-body collision avoidance. In Springer Tracts in Advanced Robotics, Vol. 70, pp. 3–19. External Links: Document, ISBN 9783642194566, ISSN 16107438 Cited by: §I.
  • [31] S. V. Viscido, J. K. Parrish, and D. Grünbaum (2005) The effect of population size and number of influential neighbors on the emergent properties of fish schools. Ecological Modelling 183 (2-3), pp. 347–363. External Links: Document, ISBN 0304-3800, ISSN 03043800 Cited by: §III-B, §III-B.
  • [32] L. Wang, A. D. Ames, and M. Egerstedt (2017) Safety barrier certificates for collisions-free multirobot systems. IEEE Transactions on Robotics 33 (3), pp. 661–674. External Links: Document, arXiv:1609.00651v1, ISBN 9781467386821, ISSN 15523098 Cited by: §I.
  • [33] L. Wang (2016)(Website) Note: https://youtu.be/rK9oyqccMJwAccessed: 2019-09-24 Cited by: §I.
  • [34] Y. Xiang, D. Sun, W. Fan, and X. Gong (1997) Generalized simulated annealing algorithm and its application to the thomson model. Physics Letters A 233 (3), pp. 216–220. Cited by: §III-C2.