Provable Emergent Pattern Formation by a Swarm of Anonymous, Homogeneous, Non-Communicating, Reactive Robots with Limited Relative Sensing and no Global Knowledge or Positioning

04/18/2018 · by Mario Coppola, et al.

In this work, we explore emergent behaviors by swarms of anonymous, homogeneous, non-communicating, reactive robots that do not know their global position and have limited relative sensing. We introduce a novel method that enables such severely limited robots to autonomously arrange in a desired pattern and maintain it. The method includes an automatic proof procedure to check whether a given pattern will be achieved by the swarm from any initial configuration. An attractive feature of this proof procedure is that it is local in nature, avoiding as much as possible the computational explosion that can be expected with increasing robots, states, and action possibilities. Our approach is based on extracting the local states that constitute a global goal (in this case, a pattern). We then formally show that these local states can only coexist when the global desired pattern is achieved and that, until this occurs, there is always a sequence of actions that will lead from the current pattern to the desired pattern. Furthermore, we show that the agents will never perform actions that could a) lead to intra-swarm collisions or b) cause the swarm to separate. After an analysis of the performance of pattern formation in the discrete domain, we also test the system in continuous time and space simulations and reproduce the results using asynchronous agents operating in unbounded space. The agents successfully form the desired patterns while avoiding collisions and separation.


1 Introduction

Swarm intelligence enables several robots to collaborate, in a distributed fashion, towards achieving a common goal (Şahin et al. 2008). Each robot (or agent) acts independently based on its local perception of the environment. The goal is achieved via the combined (inter-)actions of all agents (Navarro and Matía 2012). The challenge is to develop controllers such that the swarm, despite the limited capabilities of each of its individuals, will achieve the global goal (liveness property) and avoid undesired situations (safety property) (Winfield et al. 2005b).

In this paper, we focus on achieving these properties for the problem of pattern formation by a swarm of fully homogeneous, anonymous, and reactive robots. Homogeneous means that all robots are identical. Anonymous means that the robots do not know each other’s identities (they cannot tell who is who). Reactive means that the robots react only based on their current perception of the world. Furthermore, we impose that each robot can only sense the environment in a narrow range around itself, and thus has only partial observability of the global state of the swarm. The agents also cannot communicate with each other and do not have any global positioning information. Finally, they hold no knowledge of the number of agents in the swarm or of what the global goal actually is. Despite these extremely limited agents, we present a method that enables them to arrange into a desired pattern.

The contribution of this article to the body of knowledge is a method to define local agent behaviors such that a highly limited swarm of agents can achieve a global pattern, with a proof procedure to check that it will achieve this from any initial configuration. The method is based on defining a set of desired local states (as observable by an agent, directly based on its sensors) that can coexist if and only if the desired pattern is achieved. When an agent is in any other state, it will try to move about its neighbors in search of a desired state. With this approach, it is possible to guarantee that the swarm will reshuffle from any initial configuration into the desired pattern. Our proof of convergence is of a local nature; it analyzes the role of agents within the swarm and the actions that they can take. The local proof is more restrictive than a global proof, but it avoids the computational explosion of a global analysis and does not make any statistical assumptions about the state of the agents. With the proposed method, one can show that the emergent behavior of a swarm is predictable, provided that its constituent local states and actions are properly defined. Additionally, the behavior can also guarantee that the agents remain aggregated and never collide (and thus satisfy the safety property). We validate the approach in a discrete environment and then export and test it in a simulation environment with continuous time, continuous space, and fully asynchronous agents.

This paper is organized as follows. In Sect. 2, we review prior approaches to distributed robot control with a focus on pattern formation, and we explain the context of this approach in the field. Following this, we define the problem and its key aspects in Sect. 3. The methodology, which includes the local proof, is then detailed in Sect. 4. The local proof requires that the desired pattern is the only pattern that can emerge. Our implementation to verify this is presented in Sect. 5. The performance of the system is then tested for a set of representative patterns, first in an idealized discrete world (Sect. 6) and then in a continuous world (Sect. 7). The results and insights gathered are further discussed in Sect. 8. Finally, Sect. 9 provides concluding remarks and future research that can follow up on methodology and results in this paper.

2 Related work and research context

Distributed pattern formation and spatial organization is a branch of swarm robotics with applications in aerial robots (Saska et al. 2016; Achtelik et al. 2012), underwater robots (Joordens and Jamshidi 2010), satellites (Engelen et al. 2011; Verhoeven et al. 2011), planetary rovers (Áron Kisdi and Tatnall 2011), and entertainment (Alonso-Mora 2014).

One approach is to use a centralized omniscient controller that plans the path of every agent (Alonso-Mora 2014). This is efficient but requires external infrastructure, namely: 1) a global localization method and 2) an external computer capable of communicating with the agents. Distributed approaches aim to achieve the same result without the central controller. If a global localization method is still available, the agents can use their global position as a guide towards target locations (Hou and Cheah 2009; Morgan et al. 2015; Dada and Ramlana 2013; Gold et al. 2000). However, for a swarm to be independent of external infrastructure, there is a need for algorithms that solely require on-board relative sensors.

In a best case scenario, each robot can directly sense every other robot, in which case the swarm is fully connected (Bouffanais 2016). Several works operate under this assumption with valuable results. Gazi and Passino (2004) proved that a fully connected swarm can reach a stable formation for certain classes of attraction and repulsion functions. Such attraction and repulsion functions can also be manipulated to obtain certain patterns (Izzo and Pettazzi 2005). Alternatively, Suzuki and Yamashita (1999), Flocchini et al. (2008), Fujinaga et al. (2015), and Güzel et al. (2017) devised explicit algorithms for arbitrary pattern formation. Izzo et al. (2014) and Scheper and de Croon (2016) investigated the use of neural networks for asymmetrical pattern formation. Yamauchi and Yamashita (2014) developed an algorithm that can replicate patterns following a leader election. Pereira and Hsu (2008) and de Marina Peinado (2016) used formation control algorithms.

Unfortunately, the assumption that a swarm is fully connected does not hold for all applications. For instance, if robots sense each other with on-board cameras, they might be unable to see behind a nearby agent or beyond a certain distance. In this case, the swarm is connected (agents can sense each other) but not fully connected. The sensing topology is the graph describing which agents in a swarm can sense each other. Tanner (2004) showed how certain sensing topologies can endow one agent with (indirect) control over all agents, and used this to manipulate the swarm to form patterns, under the assumption that the sensing topology is fixed. In swarms, however, the topology is likely not fixed, but changes depending on how the agents are positioned at any given time. It is then impossible for any agent to take advantage of (or even know about) its control centrality.

Defining a functional hierarchy in the swarm can help to artificially endow certain agents with control centrality (regardless of the sensing topology), or to make agents act as “seeds” so that the other agents can use them as reference points. Rubenstein et al. (2014) notably used this approach to create patterns using a swarm of one thousand robots. Four seed agents acted as a static reference point for all other agents. Other methods that use seed/leader agents can be found in the works of Khaledyan and de Queiroz (2017), Cicerone et al. (2016), Hasan et al. (2018), and Wang et al. (2017). Furthermore, Derakhshandeh et al. (2016), Di Luna et al. (2017), and Yamauchi and Yamashita (2014) explored autonomous leader election algorithms to avoid manually defining leaders. However, using leader/seed agents means that the other agents need to identify them as such, which cannot happen when the agents are anonymous and/or the conditions are such that they cannot all communicate exhaustively in order to agree on a leader.

With a focus on self-assembly applications, Klavins (2002) proposed a strategy for homogeneous and anonymous agents using graph structures. Robots could randomly move in an environment and latch together upon encounter. Based on a set of instructions, they could stick together if the latching matched a sub-element of the structure, or else detach. Over time, the agents would latch into the final assembly. Other similar works include the work of Arbuckle and Requicha (2010), Arbuckle and Requicha (2012), Klavins (2007), Fox and Shamma (2015), and Haghighat and Martinoli (2017). These schemes assume that the agents can drift randomly in an enclosed area, and that by doing so they will eventually meet each other, at which point they will latch together. Once latched, the agents communicate in order to determine whether they should remain attached or whether they should detach and continue drifting until a new attachment is made.

In this work we focus on a minimalist swarm and remove all assumptions to the maximum extent possible. We assume that each agent only has information about the relative location of its closest neighbors, and that they cannot afford to separate from the swarm as they operate in an unbounded environment. To our knowledge, there exists little research with such limited cases. Krishnanand and Ghose (2005) explored using local information in order to align robots at specific bearings to each other, forming infinite grids or lines. Flocchini et al. (2005) explored the gathering problem, which is a special case of pattern formation where all robots have to gather towards the same location. Yamauchi and Yamashita (2013) examined the formation power of very limited agents and how this related to the initial condition by comparing the symmetricities of the patterns.

In this article, we present a general approach to design the local behaviors of robots such that they can form an arbitrary pattern, and we present a proof procedure to check that this can be achieved from any arbitrary initial configuration. This is shown to work for asynchronous agents and in an unbounded environment. The swarm can be proven to have the safety property (it never separates or experiences intra-swarm collisions) and the liveness property (the goal is eventually achieved). Based on their on-board sensors and actuators, the robots have a set of possible states that they can observe and actions that they can take. The state space and the action space of the agents are discretized, enabling a formal analysis of the system. This idea was introduced by Winfield et al. (2005a) and later explored by Dixon et al. (2012) and Gjondrekaj et al. (2012) using temporal logic paradigms, by explicitly defining the global states of the swarm in a formal environment. Using model checking techniques (Clarke et al. 1999), the idea behind these works is to verify emergent properties of the swarm by studying all its global states. However, as the size of the swarm grows, checking all global states leads to a computational explosion (Dixon et al. 2012). This was tackled by Konur et al. (2012) with the use of macroscopic swarm models. Macroscopic models use statistics to describe the behavior of the entire swarm, yet this bears the disadvantage that the analysis is only as good as the statistical approximation and the validity of its assumptions, which are not always applicable (Lerman et al. 2001). Therefore, in this work, we instead focus on a proof based on the local states and actions of the agents. This has the advantage that it is independent of the size of the swarm, which mitigates the computational explosion, and it does not make any statistical assumptions about the swarm. Using this approach, we can formally guarantee that the swarm remains aggregated, that no collisions occur, and that there is always a sequence of actions that will lead it to form the global desired pattern. The global pattern is defined as a set of local states that build up the global state. Asynchronicity is tackled by allowing agents to move if and only if their neighbors are not moving.

3 Problem Definition

In this work, the global emergent behavior of the swarm is forming a pattern. The goal of the swarm is to shuffle into the desired pattern and hold it despite none of the agents explicitly knowing that this is the global goal that they are trying to achieve, or being able to observe the global pattern. The swarm and its agents are under the following constraints:

The swarm is comprised of homogeneous agents (all agents are identical);

The agents are anonymous (they cannot know each other’s ID);

The agents are reactive (they only act based on their current state);

The swarm is leaderless;

The agents cannot communicate with each other;

The agents make decisions locally (i.e., on-board);

The agents do not know their global position;

The agents exist in an unbounded space;

Each agent can only sense the relative location of neighboring agents that are closer than a maximum range and within the field of view allowed by their sensors.

Furthermore, throughout this work, we make the following assumptions.

(A1) At the initial condition, the sensing topology of the swarm forms a connected graph;

(A2) The agents all have knowledge of a common direction (e.g., North) and are programmed to act with respect to it; on real robots, a common direction can be known using on-board sensors such as a magnetic sensor (Oh et al. 2015);

(A3) The agents exist and operate on a two-dimensional plane;

(A4) When an agent senses the relative position of a neighbor, it can sense it with enough accuracy/frequency to establish whether that neighbor is executing an action.

Note: Assumptions A2 and A3 are not strictly necessary. However, as will be shown by our framework, without Assumption A2 the patterns that a swarm can generate become intrinsically limited, due to the fact that the agents are unable to differentiate between certain states (this is further discussed in Sect. 8.4). Assumption A3 simplifies the analysis performed throughout the paper, but we expect our methods to also be extendable to three dimensions. Assumption A4 could also be challenged, but it will be shown to be an important property of the robots if safe behavior is expected. The only assumption that cannot be removed is Assumption A1: if a swarm does not start in a connected state (but is, for instance, separated into two disconnected groups that cannot sense each other), then the groups can never be expected to find each other in an unbounded space.

4 Method Description

Each agent in a swarm can measure the relative location of its neighbors; this forms the local state of the agent. The principal idea is to extract the set of local states that the agents are in when the global pattern is achieved. These are the desired local states of the agents. Being (or not being) in a desired local state can tell an agent something about the state of the whole swarm, similarly to how a puzzle piece tells something about a puzzle. As a simple example, if a swarm of 1000 robots had to form a line, and one agent sensed that it was surrounded by agents on all sides, then it could infer that the line has not yet been achieved (even if it is not able to sense all other agents). It could then move to amend the situation. Otherwise, it would remain where it is, because from its perspective the global goal has been achieved. With this idea, we will show that just by informing the agents of a set of desired local states, we can cause them to reshuffle into the global pattern.

To ensure that the desired pattern is the only emergent possibility, we need to check how the desired local states can coexist and see whether the desired pattern is the only solution. If this is not the case, then we know that the agents are under-informed: their sensors are insufficient to guarantee the pattern. This would require upgrading the sensors to, for instance, sense over a longer range. If we know that the final pattern is unique, we can then analyze the local behavior of the agents to determine whether the pattern will be achieved from any initial condition. To conduct our analysis, we formally describe the sensory perception and the action capabilities of the agents. In doing so, we create a discrete description of what the robot can sense about its environment (the local state space), and how it can move in this environment (the local action space).
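To make the idea concrete, the short sketch below (ours, not the authors' implementation) shows how the set of desired local states could be extracted from a target pattern, assuming agents on a grid sensing the eight surrounding cells, as in Case 2 of Fig. 1; the coordinates and function names are illustrative.

# Minimal sketch: extract the set of desired local states S_des from a target
# pattern given as a set of occupied grid cells. The eight relative positions
# correspond to an omni-directional sensor layout (Case 2 in Fig. 1).
NEIGHBORHOOD = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def local_state(pattern, cell):
    # Boolean sensor layout of the agent at `cell`: 1 where a neighbor is sensed.
    return tuple(1 if (cell[0] + dx, cell[1] + dy) in pattern else 0
                 for dx, dy in NEIGHBORHOOD)

def desired_states(pattern):
    # S_des: the local states observed by the agents once the pattern is formed.
    return {local_state(pattern, cell) for cell in pattern}

if __name__ == "__main__":
    triangle = {(0, 0), (1, 0), (2, 0), (1, 1)}  # hypothetical 4-agent triangle
    for state in sorted(desired_states(triangle)):
        print(state)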

4.1 Local Sensor Layout and State Space Definition

With a sensor, a robot is expected to be capable of measuring the relative location of its neighbors. In order to set up a formal framework, we formalize the sensor readings according to a robot's sensor and its interplay with the expected inter-robot equilibrium distances (which may result, for instance, from attraction and repulsion forces). With this, we define the sensor layout of the robots.

Definition 1.

The sensor layout of a robot is a Boolean array L of length m, arranged in space about the robot's frame of reference. The array specifies, from the perspective of the robot, whether a neighbor is located at each relative position that is sensed (the corresponding entry of L is 1 if a neighbor is sensed at that position, else 0).

Definition 2.

A link is an element of the sensor layout L which indicates that a neighbor is sensed (“linked”) at that position.

Definition 3.

A local state s is a realization of the sensor layout L, based on whether or not a neighbor is sensed at each position in L.

Fig. 1 shows a swarm of robots at equilibrium distances from each other and how this relates to different sensor layouts and the resulting state of the robot. Cases 1 and 2 feature omni-directional sensors that sense at different ranges. Case 3 features a directional sensor (e.g., a camera). In Example Case 1, the local state of the agent is the one visually depicted in the figure. We then define the local state space S as the set of all possible local states that an agent can be in. It follows that S consists of all possible realizations of L, such that |S| = 2^m. In this paper, it is assumed that the swarm begins in a connected state (Assumption A1) and we also show that the swarm never disconnects; therefore, we eliminate the null state from S, such that |S| = 2^m − 1. Furthermore, for representation purposes, we will focus on the case where robots have an omni-directional sensor as seen in Cases 1 and 2, although the methods in the paper can be extended to other sensor layouts.

Figure 1: Examples of three different sensor layouts and the resulting local state of an agent in the swarm. Notice that the agents exist in continuous space, but their presence is discretized to the closest grid point.

Local states of neighboring robots must be able to coexist. If robot R_i sees robot R_j, then it follows that robot R_j should also see robot R_i (provided that the two are both in each other's field of view). Then, if robot R_i sees a third robot R_k at a position where R_j should also be able to see it, then it follows that robot R_j should also see the third robot R_k, and so on. If this is respected, then we say that the local states of the robots match. Examples of local states that do not match and states that match are visualized in Fig. 2.

Definition 4.

Two (or more) local states match when they do not have conflicting information about the relative location of their neighbors and each other.

Figure 2: Examples of local states that do not match (left, center) and states that match (right). The sensor layout from Case 2 in Fig. 1 is used.
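As an illustration of Definition 4, the following sketch (ours, not the authors' code) checks whether the local states of two grid agents at known positions carry conflicting information about any cell that both can sense, using the same eight-cell encoding assumed earlier.

# Sketch of the matching test of Definition 4 for two agents on a grid.
NEIGHBORHOOD = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def sensed(pos, state, point):
    # What the agent at `pos` with local state `state` knows about grid `point`:
    # True/False if the point lies in its sensor layout, None if it is out of range.
    rel = (point[0] - pos[0], point[1] - pos[1])
    if rel == (0, 0):
        return True  # the agent itself occupies its own cell
    if rel in NEIGHBORHOOD:
        return bool(state[NEIGHBORHOOD.index(rel)])
    return None

def states_match(pos_i, state_i, pos_j, state_j):
    # The two local states match if they never disagree about a cell both can sense.
    points = {(pos_i[0] + dx, pos_i[1] + dy) for dx, dy in NEIGHBORHOOD}
    points |= {(pos_j[0] + dx, pos_j[1] + dy) for dx, dy in NEIGHBORHOOD}
    for point in points:
        a = sensed(pos_i, state_i, point)
        b = sensed(pos_j, state_j, point)
        if a is not None and b is not None and a != b:
            return False
    return True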

4.2 Action Space Definition

An action is a motion that the agent can perform in space. Similarly to the state space, we discretize actions with respect to an egocentric frame of reference. Let A be the action space, which depends on the actuators available and the degrees of freedom of the robots. As illustrated in Fig. 3, a robot that can move in all directions, such as a quad-rotor or certain ground robots, would be described with an omni-directional action space. A more limited robot could be described with a constrained action space.

Figure 3: Examples of possible action spaces

4.3 Determining the Desired Local States Needed to Achieve a Pattern

Based on a desired global pattern and a given sensor layout, we can extract the local states that the agents are in when the desired pattern is composed. These states are referred to as the desired states, and are grouped under the set S_des, where S_des ⊆ S. This process is analogous to extracting the puzzle pieces that create a puzzle. In the general case, the size of the set S_des does not need to be equal to the number of agents N in the swarm. Any number of agents N can be combined with any set S_des. The patterns that can be formed stem from any possible matching of N instances of the states in S_des, with repetition. For any set S_des and a swarm consisting of a fixed number of agents N, we thus have one of four possible outcomes:

No pattern is possible: N instances of the states in S_des do not match in any way.

Only undesired patterns are possible: it is impossible to settle in the desired pattern but other patterns are possible.

Desired pattern is possible: it is possible to settle in the desired pattern but other patterns are also possible.

Desired pattern is possible and unique: it is only possible to settle in the desired pattern.

Fig. 4 visually shows the four outcomes for different sets S_des, a swarm of 4 agents, and a specific sensor layout. Each of the 4 agents in the swarm can take any of the states in S_des. In this example we deal with a small swarm, making it possible to visually extract the possible patterns. However, as the size of the swarm and the size of S_des grow, there is a need for an automatic checker. Our implementation of this is detailed in Sect. 5.

Figure 4: Examples of outcomes for different sets S_des for a swarm of 4 agents using the sensor layout from Case 1 in Fig. 1

4.4 Defining a Safe State-Action Relation

In this section, we wish to develop a state-action relation such that a connected swarm remains safe at all times, as defined in Definition 5.

Definition 5.

A connected swarm remains safe if neither of the following events occur: 1) a collision between two or more agents, 2) the swarm disconnects.

Our swarm consists of several asynchronous agents that can choose to take actions at any point in time. Safety can be guaranteed when agents do not simultaneously perform conflicting actions. To formalize this, we bring forward Proposition 1.

Proposition 1.

If the swarm never features more than one agent moving at the same time, then the swarm can remain safe.

Proof.

Consider a connected swarm organized into an arbitrary pattern P. At a given time t, an agent R_i decides to take an action based on the action space A. This action lasts until time t + Δt. Suppose that, at time t + Δt, an unsafe event has taken place. It follows that the event must have been the fault of agent R_i, because it was the only agent that moved. Therefore, if agent R_i could select only from safe actions, this would be sufficient to guarantee that the swarm is safe at time t + Δt. ∎

Given the constraints of our robots, Proposition 1 can be used in a formal analysis of the system, but it cannot be enforced exactly in a real system. This explains the importance of Assumption A4 in Sect. 3: an agent must know whether its neighbors are executing an action in order to avoid causing conflicts. If the agents do not move whenever any of their neighbors are moving (on a first-come, first-served basis), then the swarm approaches the formal requirement of Proposition 1, and the swarm can remain safe provided that the moving agent executes safe actions. This will be explored in Sect. 7, where the system is tested with fully asynchronous agents in a continuous time and continuous space setting. To define which actions are safe, we bring forward Propositions 2 and 3.

Proposition 2.

If an agent is the only agent moving in the entire swarm, and this agent only selects actions in directions that can be sensed by its on-board sensors, then it can be guaranteed that collisions will not occur.

Proof.

Consider an agent R_i in a swarm. Following Proposition 1, we know that R_i will be the only agent to move. The agent moves in the environment according to the action space A. If all actions in A lead to a location that is already sensed, then agent R_i can establish whether an action will cause a collision, and it can choose against performing such actions. ∎

Proposition 3.

If an agent is the only agent moving in the entire swarm, and the agent only takes actions such that, at its new location, all its prior neighbors and itself remain connected, then the swarm will remain connected.

Proof.

Consider a connected swarm of agents. The graph of the swarm is connected if any node (agent) features a path to any other node (agent). Consider the case where an agent R_i takes an action. If, following the action, agent R_i is still connected to all its original neighbors, then the connectivity of the graph is not affected. If agent R_i only selects actions for which, at its final position, this principle is respected, then it will be able to move while guaranteeing that the swarm remains connected. ∎

Using Propositions 2 and 3, we can extract a state-action map for which safe actions are guaranteed. Let s be the state of an agent, and let S be the set of all local states that an agent can be in. Agents with a state s ∈ S_des do not move, as they wish to remain in their current state. Ideally, all other agents would move until they all achieve a state in S_des, at which point the desired pattern is formed. The full state-action map is thus given by Q_full = (S \ S_des) × A, where A is the action space. We then scan through Q_full to identify all state-action pairs that:

  1. are in the direction of a neighbor.
    These state-action pairs will lead to collisions. They form the set Q1.

  2. feature an action in a direction that is not sensed.
    Following Proposition 2, these potentially lead to collisions. They form the set Q2.

  3. may cause the graph to disconnect.
    Following Proposition 3, these actions would break the local connectivity, with potentially a global impact. They form the set Q3.

We then define:

Q_safe = Q_full \ (Q1 ∪ Q2 ∪ Q3)    (1)

Q_safe is a state-action map that only includes safe state-action relations. It may be that not all states from S \ S_des are present in Q_safe. An agent in such a state would be unable to select any safe action; it would be blocked. All states in which an agent would be blocked are grouped under S_blocked. Functionally speaking, the states in S_des and S_blocked are equivalent: in either case, the agent will not move. Based on this, we create a new set S_static and its complement S_active:

S_static = S_des ∪ S_blocked    (2)
S_active = S \ S_static    (3)

It is thus S_static, and not only S_des, that should be checked in order to assess whether the desired pattern is a unique solution.
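To illustrate how Eq. (1) could be implemented, the sketch below (ours, a simplified example rather than the authors' code) builds a safe state-action map for the eight-cell omni-directional sensor and action spaces of Figs. 1 and 3. It discards actions that target an occupied cell, point in an unsensed direction, or would leave the agent disconnected from its prior neighbors; in the full method, states belonging to S_des would additionally be removed before use.

from itertools import product
from collections import deque

# Moore neighborhood: relative positions sensed by the omni-directional layout
# of Case 2 (Fig. 1); also reused here as the omni-directional action space A.
NEIGHBORHOOD = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]
ACTIONS = NEIGHBORHOOD  # one unit move towards any sensed direction (hypothetical)

def neighbors_of(state):
    # Relative positions of the sensed neighbors for a local state (8-bit tuple).
    return [NEIGHBORHOOD[i] for i, bit in enumerate(state) if bit]

def connected(cells):
    # True if the set of grid cells is connected under the sensing adjacency.
    cells = set(cells)
    if not cells:
        return True
    seen, queue = set(), deque([next(iter(cells))])
    while queue:
        c = queue.popleft()
        if c in seen:
            continue
        seen.add(c)
        queue.extend((c[0] + dx, c[1] + dy) for dx, dy in NEIGHBORHOOD
                     if (c[0] + dx, c[1] + dy) in cells)
    return seen == cells

def is_safe(state, action):
    # Propositions 2 and 3: the move must target a sensed, free cell and keep
    # the agent connected to all of its prior neighbors at its new location.
    if action not in NEIGHBORHOOD:
        return False                        # would belong to Q2: direction not sensed
    nbrs = neighbors_of(state)
    if action in nbrs:
        return False                        # would belong to Q1: collision
    return connected(set(nbrs) | {action})  # otherwise Q3: swarm could disconnect

def safe_state_action_map():
    # Q_safe over all non-null local states; states absent from the map are blocked.
    q_safe = {}
    for state in product((0, 1), repeat=len(NEIGHBORHOOD)):
        if not any(state):
            continue                        # null state excluded
        safe = [a for a in ACTIONS if is_safe(state, a)]
        if safe:
            q_safe[state] = safe
    return q_safe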

4.5 Agent Behavior and Local Proof of Convergence

The behavior of an agent is presented as an FSM in Fig. 5. An agent can be in one of two macro-states:

Static: The agent is in a state s ∈ S_static and does not move.

Active: The agent is in a state s ∈ S_active and can take an action based on Q_safe.

Figure 5: FSM of agent behavior

We now provide the local proof, and the necessary conditions, for the desired pattern to be achieved from any initial configuration of the swarm, provided that the swarm initiates in a connected state (Assumption A1). In the following analysis, let P0 be an arbitrary initial pattern formed by a swarm, and let P_des be the desired final pattern. N is the number of agents that are needed in order to compose P_des; here, we assume that the swarm is always composed of N agents. The only formation in which all agents are static is the desired formation P_des (this property, assumed in this section, needs to be checked independently; our implementation is detailed in Sect. 5). To reflect Proposition 1, we formally analyze how the swarm evolves by modeling actions in discrete time steps. At each time step k, an arbitrary agent with a state s ∈ S_active executes an action based on Q_safe.

We begin by establishing that, for any pattern , there is always one active agent, as per Lemma 1.

Lemma 1.

For a swarm of N agents, if S_static is such that the desired pattern P_des is possible and unique, any arbitrary pattern P ≠ P_des will feature at least one agent with a state s ∈ S_active.

Proof.

By definition, S_static = S_des ∪ S_blocked and S_active = S \ S_static. For a swarm of N agents that can be in the states of S, N instances of states in S_static can only coexist in the pattern P_des, which is known to be the unique outcome. Therefore, it follows that any other pattern P ≠ P_des must feature at least one agent that is in a state not in S_static, meaning that it is in a state s ∈ S_active. ∎

Following Lemma 1, we must determine whether the actions taken by the agents can lead to forming P_des starting from any initial pattern P0. Overall, when an agent transitions from a state s1 to a state s2, its transition can be of one of three types:

By its own action, if s1 ∈ S_active, via an action from Q_safe (when this happens, some neighbors might leave from view, while new neighbors might come into view).

By the action of a neighbor (when this happens, the neighbor could also move out of view).

By a new agent, previously outside of view, moving into view and becoming a new neighbor.

Let G = (V, E) be a directed graph, where V is a set of vertices (or nodes) and E is a set of edges. We let each node of G represent a local state in S, such that V = S. The edges of the graph are all local state transitions that an agent can experience as a result of the three transition types above. We define E1 as the edges describing Transition Type 1, E2 as the edges describing Transition Type 2, and E3 as the edges describing Transition Type 3. Based on this, let Gr denote the subgraph of G that only focuses on Transition Type r (i.e., it has edges Er), such that G = G1 ∪ G2 ∪ G3. The graphs G, G1, G2, and G3 are illustrated in Fig. 6. Using this, we present Lemma 2, which expresses the conditions needed for a pattern to be achievable, as defined by Definition 6. Note that the condition in this Lemma does not imply that the pattern will be achieved from any initial configuration of the swarm (this comes later in the section); it only establishes whether it is within the capabilities of the agents to achieve the local states required to make the pattern.

Figure 6: Depiction of how the local states of an agent can change as a result of movements in the swarm. Specifically, the figure shows a portion of graph G and its subgraphs G1, G2, and G3. Graph G1 corresponds to Transition Type 1 (the edges depict the agent taking an action; each action can lead to different outcomes depending on whether new agents come into view as a result of the action). Graph G2 corresponds to Transition Type 2 (the edges depict the neighbors of the agent taking an action). Graph G3 corresponds to Transition Type 3 (the edges depict a new agent coming into view). Green nodes indicate a desired state, blue nodes indicate an active state, and red nodes indicate a blocked state. The agents have an omni-directional sensor layout as in Case 2 from Fig. 1 and omni-directional motion as seen in Fig. 3.
Definition 6.

A pattern is achievable if each of its local constituent states in S_des can be reached starting from any local state in S.

Lemma 2.

If the digraph G12 = G1 ∪ G2 shows that each state in S features a path to each state in S_des, then P_des is achievable independently of the local states that compose P0.

Proof.

P_des is formed if and only if all agents have a state s ∈ S_des. Consider an arbitrary initial pattern P0 for which the local states of the agents form an arbitrary set S0. Via Lemma 1, we know that there is at least one agent in the swarm that is active for any pattern P ≠ P_des, and in turn for any set of states S0. As the active agents move, they will experience transitions described by G1, and their neighbors will experience transitions described by G2. The unified graph G12 = G1 ∪ G2 thus describes the local transitions that take place for an agent as it moves and as its neighbors move. Consider a state that is incapable (either by the agent's own actions or by the actions of its potential neighbors) of transitioning to a state in S_des. Having this state in S0 may mean that a state in S_des cannot be achieved, and in turn that P_des cannot be realized. However, if it is possible for any state in S to experience local transitions such that it may reach any state in S_des, it follows that P_des is achievable independently of the local states that compose P0 (i.e., the set S0), because there is no state that is incapable of executing the necessary transitions that would lead it to a state in S_des. By ignoring the role of G3, we restrict the system such that:

Any state that has too few links for a desired state will have to be active and move to a position where it is surrounded by enough agents.

Any state can become active by the actions of a neighbor.

The transitions that occur must do so because of changes within the local neighborhood. This additional restriction ensures that the system can rely only on the actions of an agent and/or its current neighbors. ∎

Lemma 2 deliberately ignores the possible role of G3 in order to be more restrictive. This ensures that the transitions do not rely on agents coming into view. The conditions of Lemma 2 guarantee that any initial state could potentially turn into a desired state, such that there are no restrictions on the local states that could compose P0. However, this is not equivalent to saying that P_des will always eventually be formed from any arbitrary P0, which is the property that we wish to achieve. To extract conditions that guarantee that P_des can be achieved from any P0, we look into the properties of a global graph GP. The nodes of GP are all possible patterns that the swarm can generate (including P_des), and the edges are all possible transitions between patterns that can exist as a result of an action taken by an active agent. Therefore, if the properties of GP are such that there is a path from any node to the node for P_des, and that this path is free of deadlocks (see Definition 7), then we know that P_des can be reached from any initial pattern P0.
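The achievability condition of Lemma 2 can be checked mechanically once the local transition digraph G12 has been constructed. The sketch below (ours; it assumes the transition relation has already been computed and is available as a dictionary) performs a reverse breadth-first search from the desired states to verify that every local state has a path to at least one of them.

from collections import deque

def achievable(transitions, all_states, desired_states):
    # Condition of Lemma 2 on a local transition digraph G12: every local state
    # must have a directed path to at least one desired state. `transitions`
    # maps each state to the states it can transition into (via G1 or G2).
    reverse = {s: set() for s in all_states}
    for s, successors in transitions.items():
        for t in successors:
            reverse.setdefault(t, set()).add(s)
    reached, queue = set(desired_states), deque(desired_states)
    while queue:
        t = queue.popleft()
        for s in reverse.get(t, ()):
            if s not in reached:
                reached.add(s)
                queue.append(s)
    return all(s in reached for s in all_states)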

Definition 7.

A deadlock is a situation in which the swarm continuously transitions through the same sequence of patterns (e.g., P1 → P2 → P1 → …) and cannot transition to any other pattern.

At this point, a solution to determine that P_des can be reached from any initial pattern P0, without deadlocks, would be to compute GP and directly inspect whether the property is fulfilled, but this would come at the cost of a large state explosion (Dixon et al. 2012). Therefore, we instead continue with a local proof and impose local conditions that guarantee that GP will have the desired properties. Inherently, as for Lemma 2 (which ignored the possible role of G3), this proof comes at the cost of some additional local-level restrictions which may not be necessary for all global patterns. However, a local approach enables us to determine sufficient conditions while only using the local state information, and is thus independent of the size of the swarm. The approach focuses on the role of simplicial states (Definition 8) and their cliques (Definition 9). These definitions are borrowed from, but not equivalent to, the typical definitions of simplicial node and clique: in standard graph theory, a simplicial node is a node whose neighboring nodes are fully connected, not just connected, and thus form only one clique, and not several (Van Steen 2010).

Definition 8.

A simplicial state is a state for which, if the agent were to move away completely out of the neighborhood, its original neighbors would remain connected. Note that if all neighboring positions are occupied, then the agent cannot move away, so an agent in this state is not simplicial. The set of simplicial states is denoted S_simplicial, where S_simplicial ⊆ S. The set of states that are not simplicial is its complement, S \ S_simplicial.

Definition 9.

A clique is a connected set of neighbors of an agent. If the agent were removed, the members of each clique would remain connected amongst each other, but there would not be a connection between the different cliques. It follows that simplicial agents only have one clique, while blocked agents have two or more cliques.
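Definitions 8 and 9 translate directly into a small test on a local state. The sketch below (ours, for the eight-cell layout of Case 2 in Fig. 1) flood-fills over the sensed neighbor cells to find the cliques, and classifies a state as simplicial if its neighbors form a single clique and the agent is not fully surrounded.

from collections import deque

NEIGHBORHOOD = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def cliques(state):
    # Connected groups of sensed neighbors (Definition 9), found by flood-filling
    # over the neighbor cells with Moore adjacency.
    cells = {NEIGHBORHOOD[i] for i, bit in enumerate(state) if bit}
    groups, seen = [], set()
    for start in cells:
        if start in seen:
            continue
        group, queue = set(), deque([start])
        while queue:
            c = queue.popleft()
            if c in group:
                continue
            group.add(c)
            queue.extend((c[0] + dx, c[1] + dy) for dx, dy in NEIGHBORHOOD
                         if (c[0] + dx, c[1] + dy) in cells)
        seen |= group
        groups.append(group)
    return groups

def is_simplicial(state):
    # Definition 8: the neighbors form a single clique and the agent is not
    # fully surrounded (a surrounded agent cannot move away).
    return sum(state) < len(NEIGHBORHOOD) and len(cliques(state)) == 1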

Agents in a simplicial state can move away from their neighborhood; this is an important property. Intuitively, a simplicial active agent can cause the swarm to reshuffle into different achievable patterns without deadlocks. Alternatively, when simplicial agents are not present, this may not be possible. As exemplified in Fig. 7(a): when a pattern with no simplicial active agents is reached, the active (non-simplicial) agents cannot remove themselves from the neighborhood and solve the deadlock situation. However, because the pattern in Fig. 7(b) has simplicial agents that can travel around all others, we can always begin from any initial condition and transition to the desired pattern. The proof we present stems from this intuition. We first present a local condition to ensure that any node (pattern) in GP always eventually transitions to a node with at least one active simplicial agent (unless P_des is reached). Then, we will prove that this property yields a graph GP that always features a path to P_des from any initial condition.

(a) Example of a deadlock
(b) Pattern where a deadlock is not possible
Figure 7: Illustrations of how a swarm can transition between different patterns, based on movements of the agents that are in active states. More specifically, the figure shows a portion of GP for two possible desired patterns. Notice that the deadlock in (a) does not feature any simplicial active agents. The agents have an omni-directional sensor layout as in Case 2 from Fig. 1 and omni-directional motion as seen in Fig. 3.

Consider the graph GP^AS (where the superscript “AS” stands for “Active and Simplicial”). The nodes of GP^AS are all nodes of GP which feature one of the following:

one or more states in S_active ∩ S_simplicial (we group all these patterns in the set P_AS);

only states in S_static (by the uniqueness property, this is the pattern P_des).

Therefore, the nodes of GP^AS are all patterns in P_AS together with P_des. The edges of GP^AS are all edges that connect these nodes as in GP. Due to the importance of active and simplicial agents in avoiding deadlocks, we wish to ensure that a pattern in GP^AS will be reached from any pattern in GP. The conditions for this depend on the set S_blocked. If S_blocked is empty then, following Lemma 1, any pattern other than P_des features at least one active agent. If S_blocked is not empty, then we must impose further restrictions, put forward by Lemma 3. In this Lemma we also make use of a graph G2', which only considers the transitions in G2 caused by a neighbor that does not leave the neighborhood, but only moves about the agent.

Lemma 3.

If the following conditions are satisfied:

1. for all states s ∈ S_blocked, none of the cliques of the state can be formed uniquely by agents that are in a state in S_static;

2. G2' shows that all static states with two neighbors can directly transition to an active state;

then the nodes in GP^AS will be reached from any other node in GP.

Proof.

A blocked agent R_i, with a state s ∈ S_blocked, always has multiple agents surrounding it, or else it would not be blocked. The neighbors of agent R_i either form two or more cliques, or they form one clique that fully surrounds the agent in all sensed directions. In either case, the pattern branches out in multiple directions that stem from agent R_i. If we trace any branch, because only a finite number of agents exists, we have the two following possible situations:

The branch eventually features an agent with a simplicial state s ∈ S_simplicial. In the extreme case, this is a leaf of the pattern. Here, we can have two situations:

(a) The simplicial state is also static (s ∈ S_simplicial ∩ S_static). If this exists, then the simplicial agent is also static. Therefore, it is possible that the entire pattern does not feature any active and simplicial agent.

(b) If the states in S_simplicial ∩ S_static cannot, by design, form the clique of a state in S_blocked, then it is guaranteed that the simplicial agent at the end of the branch is active. Therefore, we can locally impose that situation (b) always occurs and that situation (a) never occurs, and we thus guarantee that the branch ends in a simplicial active agent (this is the first condition of this Lemma).

If all branches only feature non-simplicial states, then this can only be explained if the branches form loops; otherwise, at least one leaf would be present as in situation 1 above. However, it can be ensured that a loop will always collapse and feature one simplicial active agent. In a loop, all agents have two cliques, each formed by one neighbor. The graph G2' tells us whether any static agent with two neighbors will become active by the action of its neighbors. If this is the case for all such states, then we know that the action of any neighbor will cause a chain reaction around the loop, which will eventually cause the loop to collapse about one corner point and create a simplicial leaf. This is the second condition of this Lemma. The collapse of two exemplary loops is depicted in Fig. 8. In summary, by creating the conditions such that situation 1(a) never occurs, we restrict the possible patterns that can exist outside of GP^AS to patterns with only loops (situation 2). If a pattern is a loop then, through G2', we know that it will collapse into a pattern that exists within GP^AS. Else, the pattern is not a loop and it already exists within GP^AS. This means that any pattern will either exist within GP^AS, or will transition into GP^AS. ∎

Figure 8: Illustration of two exemplary loops that “collapse”. Notice that the active states present at the borders cause a chain reaction until eventually a simplicial active agent is present. This is a property that can be determined by inspecting G2', which will show that the static agents will become active and propel the chain reaction.
(a) Simplicial agent that can travel to all open positions in the pattern.
(b) Two possibilities for how, should the agent be globally surrounded in a loop and unable to travel to all open positions, a new simplicial and active agent will take over.
Figure 9: Illustration of how active and simplicial agents can travel to all open positions in the structure

With the conditions from Lemma 3, we ensure that a simplicial active agent will always eventually be present, regardless of the pattern that the swarm is in. We now introduce Theorem 1, which we use to determine that P_des will eventually form from P0 and to eliminate the chance of any deadlocks.

Theorem 1.

If the following conditions are satisfied:

1. P_des is achievable;

2. GP^AS can be reached from any initial pattern in GP;

3. G1 shows that any agent in a state in S_active ∩ S_simplicial can move to explore all open positions surrounding its neighbors;

4. G3 shows that any agent, in any state, will always, by the arrival of a new neighbor in an open position, transition into an active agent (with the exception of any agent that is, or becomes, surrounded);

then P_des can be reached from any initial pattern P0.

Proof.

In the following, we will show that any pattern in GP will keep transitioning until it forms the desired pattern P_des. Consider a swarm of N agents arranged in a pattern P0. If P_des is achievable, via Lemma 2, it can be reached by the actions of the agents, meaning that the node P_des is in GP (this is the first condition in this theorem). Through Lemma 3 we know that we can always get to a condition where one active and simplicial agent is present, such that we are in the graph GP^AS (this is the second condition in this theorem). We observe the case where at least one agent, agent R_i, exists with a state in S_active ∩ S_simplicial. As agent R_i moves, one of the following events can happen:

1. Agent R_i enters a state in S_active that is not simplicial. Via Lemma 3, at least one other agent is (or will be) in a state in S_active ∩ S_simplicial.

2. Agent R_i enters a state in S_static. If P_des is not yet achieved, then at least one other agent in the swarm is in an active state (Lemma 1). If the active agent(s) are in non-simplicial active states, then this takes us back to point 1 in this list. If the active agent(s) are in a state in S_active ∩ S_simplicial, this takes us to point 3 in this list.

3. Agent R_i, and/or the agent(s) taking over, keeps moving and each time enters a state in S_active ∩ S_simplicial. Via G1, we know that it can potentially explore all open positions surrounding all its neighbors (this is the third condition of this theorem). As it moves, its neighbors also change, such that it can always potentially explore all open positions around all agents, and thus all open positions in the pattern (see Fig. 9(a) for a depiction). This means that the swarm can evolve towards a pattern that is closer to the desired one. Therefore, any situation will always develop into the situation of point 3. This is free of deadlocks, as all possible deadlock situations are mitigated:

1. It may happen that the simplicial and active agent cannot actually visit all open positions in the swarm because, at the global level, it is enclosed in a loop by the other agents. By Lemma 3, the loop will always collapse, meaning that at least one active simplicial agent will be freed, or that a new active simplicial agent will form. The new agent will be able to travel to all positions external to the loop, avoiding a deadlock. This is depicted in Fig. 9(b).

2. Agent R_i can travel to all open positions in the swarm as expected. Via G3, we can extract that this must cause at least one static agent to become active (this is the fourth condition of this theorem). Consider a static agent R_j which becomes active when R_i becomes its neighbor. This may lead to one of the following developments, all of which avoid deadlocks.

(a) Agent R_j remains active and simplicial. The pattern can evolve even further and a deadlock is trivially avoided.

(b) Agent R_j becomes static upon neighboring agent R_i, prior to the departure of agent R_i. In this case, either it will be freed by the departure of agent R_i, taking us back to point 2(a) in this list, or else it will remain static following the departure of agent R_i, taking us to point 2(c) in this list.

(c) Agent R_j remains static upon neighboring agent R_i, even following the departure of agent R_i. It is then agent R_i that continues to explore all open positions in the swarm and carries on the process elsewhere. It is not possible that agent R_i can only come back to its original position, because by analysis of G3 we know that agent R_i can free any static agent in the swarm, and not just agent R_j.

(d) Agent R_j, while agent R_i is moving, enters the position (and state) that was originally occupied by agent R_i. As in point 2(c) in this list, it is not possible that agent R_j only frees agent R_i in the same way that agent R_i freed agent R_j, because G3 shows that agent R_j can free any agent in the swarm, and not just agent R_i. There is an exception to the rule: static states that either are, or become, surrounded by other agents. In this case, G3 may not show that they can become free. However, it is trivially impossible (since there is a finite number of agents) for the swarm to only feature agents that are surrounded; at least one agent will not be surrounded. This justifies the exception to the fourth condition in this theorem.

With the above, it is confirmed that 1) any open position in the pattern can potentially be filled, and 2) no deadlocks will arise. This means that the swarm will keep evolving into all achievable patterns. Therefore, any pattern in GP, including P_des, will, given infinite time, always eventually be formed starting from any other pattern in GP. ∎

In this section, we presented a local proof that ensures that the desired pattern will be reached. We showed that, by ensuring a set of local conditions, we can determine that the pattern will be achieved from any initial configuration of the swarm. One of the main conditions is the need for simplicial active states, which brings interesting insights. The dependence on simplicial active states leads to limitations on the desired patterns that may be reached independently of the initial pattern P0. We note the following:

Desired states with only one neighbor may violate the first condition of Lemma 3. This is because this desired state can form the clique of a blocked state on its own. If this occurs, the local proof presented here is too restrictive to guarantee that the desired pattern will be formed without deadlocks.

Removing the dependency on North (Assumption A2) may lead to violating the first condition of Lemma 3. This is because states become rotation invariant, as discussed further in Sect. 8.4.

As it stands, the proof rests on the assumption that the desired pattern is the only pattern that can be formed where all agents are in a static state. In the following section, we will present a method to check for an arbitrary set of static states that a desired pattern is indeed the unique pattern in which all agents are static.

5 Checking if the emergent pattern is unique

There is a need to assess whether a swarm of N robots with a set of static states S_static will uniquely form the desired pattern P_des. In this section we detail our implementation to check this for an arbitrary set S_static. We focused on the case where all agents have omni-directional sensors, for which there exist several simplifying assumptions that enable a fast reduction of the search space. For a swarm of N agents with a set S_static consisting of n states, the states could coexist in c combinations (with repetition), where

c = C(n + N − 1, N) = (n + N − 1)! / (N! (n − 1)!)    (4)

For each combination of states, there (may) exist multiple ways in which the states could be organized spatially. To filter the possible combinations and spatial arrangements, we used the implementation shown in Fig. 10. We first assess whether a combination of states is viable. If so, we check, for all different spatial arrangements of the states, whether a spanning tree can exist with no loose edges (an edge where two or more states do not match). If there are no loose edges, then we examine the pattern to see if it is P_des. If it is not, then a counter-example has been found. If a counter-example is found, it means that the sensor layout is insufficient to guarantee that the desired pattern is the unique pattern that will be formed.
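As a sketch of the first stage of the checker (ours, not the authors' implementation), the combinations of Eq. (4) can be enumerated directly as multisets of size N drawn from S_static; the assertion below simply re-derives the count given by Eq. (4).

from itertools import combinations_with_replacement
from math import comb

def candidate_combinations(static_states, n_agents):
    # All multisets of N agent states drawn from S_static; Eq. (4) gives their count.
    combos = list(combinations_with_replacement(static_states, n_agents))
    assert len(combos) == comb(len(static_states) + n_agents - 1, n_agents)
    return combos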

Figure 10: Diagram of the automatic checker that checks whether S_static, for a fixed number of agents N, can only form the desired pattern.

5.1 Preliminaries

Consider a set S_static consisting of n states. We introduce two tools to describe how any pair of states in S_static can be matched: the Link-Direction matrix (Definition 10) and the Match matrix (Definition 11).

Definition 10.

The Link-Direction matrix is an n × n square matrix that holds the links (Definition 2) along which any two states in S_static match (Definition 4).

Definition 11.

The Match matrix is an n × n matrix that holds the number of links (Definition 2) along which any two states in S_static match (Definition 4). For omni-directional sensors, the Match matrix is symmetrical. Intuitively, this is because if agent R_i sees agent R_j, then agent R_j can also see agent R_i.

Figure 11: Arbitrary set S_static used for examples in Sect. 5.1 and Sect. 5.2

Example

Consider the set depicted in Fig. 11 and its corresponding Link-Direction and Match matrices. Note that all zero entries in the Match matrix correspond to empty entries in the Link-Direction matrix. From the Match matrix we can quickly extract which states can never connect to themselves and which states they can connect to; the Link-Direction matrix additionally tells us along which links these connections can take place. Note that the Link-Direction matrix, although not strictly symmetric, also has a symmetry to it: each link always features, at its symmetric position, a link along the opposite direction. For example, if one state matches with another along a given direction, then the latter matches with the former along the opposite direction.

If S_static includes a state that does not match with any state in the set, this will be seen as a null row in the Match matrix. If it can match, then the Link-Direction matrix will show whether all of its links can be matched by the other states in S_static. If these requirements are not met, then the state can be excluded from the analysis, since it can never coexist with the other states.
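A simplified version of the Match matrix can be built by counting, for every pair of states, the directions in which one state expects a neighbor and the other expects a neighbor in the opposite direction. The sketch below (ours) captures only that reciprocal-link condition; the full test of Definition 4 would additionally verify the remaining overlapping cells.

NEIGHBORHOOD = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def opposite(direction):
    return (-direction[0], -direction[1])

def match_matrix(states):
    # Entry (i, j) counts the directions along which state i expects a neighbor
    # and state j expects one in the opposite direction (a necessary condition
    # for the two states to match along that link).
    n = len(states)
    M = [[0] * n for _ in range(n)]
    for i, si in enumerate(states):
        for j, sj in enumerate(states):
            for k, d in enumerate(NEIGHBORHOOD):
                if si[k] and sj[NEIGHBORHOOD.index(opposite(d))]:
                    M[i][j] += 1
    return M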

5.2 Combination Analysis

Completeness Test

A complete combination satisfies the following conditions:

The graph is complete.

Each link in any one direction should have a link in the opposite direction that will match it. Furthermore, any valid combination should consist of an even number of agents expecting an odd number of neighbors (Van Steen 2010).

The pattern is finite along all directions. For each direction, there should be at least one state that does not require a link along that direction.

The edges of the pattern exist. For each direction, there must be at least one state that features a link in that direction, but not in the opposite direction.

Matching Test

Each state in a combination should be capable of being matched by the other states in the combination. This information is provided by the Match matrix. The reasoning is best explained via an example. Consider, for a swarm of 5 agents with the set from Sect. 5.1/Fig. 11, a potential combination of states. Suppose the Match matrix tells us that one of these states can only connect to one other state in the combination, and only along a single direction, while the combination features two instances of the former state and only one instance of the latter. This means that one of the two instances can never be satisfied; the combination can never exist. Furthermore, there should be enough states that match a given state in order to accommodate all of its links. If there are too few states that match it, then we know that the state can never be satisfied in full, and the combination is also not valid.

5.3 Spanning Trees Analysis

If a combination passes the combination analysis, we construct and test spanning trees to determine whether and how the states could form a pattern. Spanning trees are used here as a convenient tool to express how a pattern expands through space starting from an arbitrary root node. Let T represent an arbitrary spanning tree generated from a combination C. The nodes of T are the states in C, and each edge of T is one of the links between two states. With the tests below, we first test higher-level properties of a generated spanning tree. If these properties are met, we then test the spanning tree spatially to determine whether all states match in full. Examples of trees that fail or pass the conditions are shown in Fig. 12.

Figure 12: Representation of a valid spanning tree used to describe a pattern

Graph Edge test. If the Match matrix shows that any of the edges in T cannot exist (because a match between those states does not exist), then T is invalid. These spanning trees can be discarded.

Degree test. The degree of a node in T should be less than or equal to the number of links that its state holds. If the degree of a node in T is larger than the number of links of its state, then T is invalid. These spanning trees can be discarded.

Compression. If a combination features the same state multiple times, multiple spanning trees created from the combination can be duplicates of each other. They are duplicates because the connected agents have the same states, and since all agents are homogeneous and anonymous, their identities do not matter. All duplicate spanning trees can be ignored and only one needs to be analyzed.

Graph Connectivity. It has been established that the swarm cannot disconnect, meaning that any pattern must have a connected spanning tree. If T is not connected, then it is invalid.

Spatial test. Spanning trees that meet all conditions above are plotted in space and checked to make sure that all states match in full without loose ends. The Link-Direction matrix can be used to quickly generate the full pattern.

5.4 Pattern Check

If a valid spanning tree is identified, a possible pattern has been found. The pattern can then be checked to determine whether it is equal to the desired pattern P_des. A variety of methods can be used to do so automatically (Loncaric 1998). In our work, we used Fourier descriptors for plane closed curves (Zahn and Roskies 1972) to examine the contour of the pattern and compare it against the contour of the desired pattern.
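As an illustration of the contour comparison (a simplified stand-in for the Zahn and Roskies (1972) descriptors, not the authors' code), the sketch below computes translation-, scale-, and rotation-invariant Fourier coefficient magnitudes of a closed contour, assuming both contours are sampled with the same number of ordered points.

import numpy as np

def fourier_descriptor(contour, n_coeff=8):
    # Descriptor of a closed contour given as an ordered list of (x, y) points.
    z = np.array([complex(x, y) for x, y in contour])
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                     # discard the mean: translation invariance
    mags = np.abs(coeffs)               # magnitudes: rotation/start-point invariance
    if mags[1] == 0:
        return mags[:n_coeff]
    return mags[:n_coeff] / mags[1]     # normalize by the first harmonic: scale invariance

def same_shape(contour_a, contour_b, tol=1e-6):
    da, db = fourier_descriptor(contour_a), fourier_descriptor(contour_b)
    return da.shape == db.shape and np.allclose(da, db, atol=tol)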

6 Discrete space and discrete time simulations

In this section, the generation of different patterns by swarms is demonstrated and evaluated, together with an exploration of how further adaptations of the behavior may speed up convergence to a desired pattern. The latter leads to insights on possible optimization strategies, which will be discussed in Sect. 8.3.

6.1 Simulation environment and test description

The agents exist in an unbounded two-dimensional grid world. The sensor layout is omni-directional and extends to the nearest grid points, mimicking Case 2 in Fig. 1 and the example from Fig. 11. The agents can take actions omni-directionally as seen in Fig. 3. For the purposes of this simulation, to ensure that the swarm fully abides by Proposition 1, only one agent moves at any given time step (this is an assumption that will be lifted in the next section). At each time step, one random active agent in the swarm performs an action and moves to a new grid point. All tests begin by initializing the agents in a random formation and are repeated 100 times. We explored the formation of: 1) a triangle with 4 agents, 2) a triangle with 9 agents, and 3) a hexagon with 6 agents, under the following behaviors:

Baseline: At each time step, a random active agent is selected and executes a random safe action from its state-action set.

Alteration 1 (ALT1): same as Baseline; however, when an agent moves at a given time step, the same agent will not move at the next time step (unless it is the only active agent).

Alteration 2 (ALT2): same as ALT1; additionally, all states with more than 5 neighbors are now also included in the set of blocked (non-moving) states.

Alteration 3 (ALT3): same as ALT2; additionally, an action is discarded from the action set unless, after the action, all agents in the neighborhood have at least one neighbor to the North, South, East, or West. There is only one exception, for one particular state, for which a spurious pattern was otherwise found following the procedure in Sect. 5.

Alteration 4 (ALT4): same as ALT3; additionally, all states with more than 4 neighbors are now also included in the set of blocked states.

The motivation behind the different behaviors is to explore which parameters may influence how many steps it takes, on average, to form the pattern. The reasoning behind ALT3 and ALT4 is to force the agents to "cut corners", as well as to give the agents fewer actions to choose from. ALT3 and ALT4 are such that the desired states that compose the hexagon cannot be achieved. Therefore, based on Lemma 2, the hexagon should not be achievable by these controllers. We further note that ALT3 and ALT4 also do not meet condition 3 of Theorem 1, because some active simplicial agents are prevented from exploring all open positions surrounding their neighbors. However, as discussed in Sect. 4.5, the local conditions in Lemma 3 and Theorem 1 are more restrictive than required and do not necessarily apply to all global patterns. We will also use these simulations to explore this point.
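For illustration, the sketch below shows one possible implementation of the discrete step loop for the Baseline and ALT1 behaviors. The callables `is_active` and `safe_actions` are assumed to implement the paper's state and safe-action rules; they and the grid representation are our own assumptions, not the exact simulation code.

```python
# Minimal sketch of the discrete-time step loop (Baseline / ALT1).
# `positions` holds integer grid coordinates; is_active(i, positions) and
# safe_actions(i, positions) are assumed helpers encoding the state/action rules.
import random

def step(positions, is_active, safe_actions, last_mover=None, alt1=False):
    """Move one randomly chosen active agent by one safe action."""
    active = [i for i in range(len(positions)) if is_active(i, positions)]
    if alt1 and len(active) > 1 and last_mover in active:
        active.remove(last_mover)        # ALT1: the agent that just moved skips one step
    if not active:
        return positions, None           # all agents are in desired (static) states
    agent = random.choice(active)
    actions = safe_actions(agent, positions)   # moves avoiding collision and separation
    if actions:
        dx, dy = random.choice(actions)
        x, y = positions[agent]
        positions[agent] = (x + dx, y + dy)
    return positions, agent
```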

6.2 Results

Distributions of the number of steps to completion are shown in Fig. 13. For the Baseline, ALT1, and ALT2, the final pattern is achieved in all tests. As the size of the pattern grows, ALT1 is seen to provide better performance. This is explained by the fact that it limits the possibility that an agent cycles back and forth between two spots, which is inherently inefficient. ALT2 further improves the results; blocking all states with more than 5 neighbors reduced the size of the state-action space, so that the swarm had fewer configurations to explore. ALT3 and ALT4 reduced it further, leading to significant boosts in performance. However, as expected from Lemma 2, ALT3 and ALT4 did not work on the hexagon configuration: the hexagon pattern did not emerge and the swarm continued randomly reshuffling. This gives empirical confirmation of Lemma 2 and also provides practical insight into how tuning the state-action space can be beneficial for some patterns, but detrimental to others. We also note that, even though condition 3 of Theorem 1 was not met by ALT3 and ALT4, they still achieved the triangle patterns in all cases. This shows that the local proof, as presented in this paper, can be too restrictive and needs to be inspected further if one wishes to optimize the performance of the system (alternatively, it is also possible that the agents were simply "lucky" never to encounter deadlock situations during our simulations).

(a) 4 agent triangle
(b) 9 agent triangle
(c) Hexagon
Figure 13: Probability distributions of steps to completion by different state-action spaces for three tested patterns

7 Continuous space and continuous time simulations with asynchronous agents

The discrete space and time experiments from Sect. 6 were ported to an environment with asynchronous agents operating in continuous time and space. The aim was to see how well the system would transfer to a continuous, asynchronous setting, which is not accounted for in the proofs.

7.1 Simulation set-up and system description

Agent dynamics and behavior

The robots behave like accelerated particles, freely moving in an unbounded 2D space, and regulate their accelerations in a North-East frame of reference. They can sense all their neighbors omni-directionally within a given radius, and they can sense the motion of their neighbors with enough accuracy to determine whether those neighbors are executing an action (Assumption A3). Each robot determines its discrete local state following the rounding barriers depicted in Fig. 14(a). When an agent is active and none of its neighbors are taking an action, it will try to take an action itself. Following an alignment maneuver of fixed duration, the agent begins the action, moving with a commanded speed. The agent interrupts the action if it senses another agent being too close or also performing an action, in an attempt to approach Proposition 1. Once an action is completed or interrupted, the agent performs a second alignment maneuver to settle into its new position. This settling time instills the same behavior introduced in ALT1 from Sect. 6, because the agent that has just taken an action will not do so again while adjusting, leaving a time window for its neighbors to act. Note that alignment maneuvers are minimal and are thus not perceived by other agents as actions. Pseudo-code for the on-board controller of the agents is provided in Algorithm 1.
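To make the state assessment concrete, the sketch below rounds each sensed relative position to the nearest of the eight surrounding grid cells. The grid spacing and the simple rounding used here are illustrative assumptions, not the exact barriers of Fig. 14(a).

```python
# Minimal sketch of discrete state assessment from continuous relative positions.
# `neighbors` is a list of (north, east) offsets in meters; `d` is the assumed
# nominal grid spacing. Each neighbor is rounded to the nearest surrounding grid
# cell; the resulting set of occupied cells is the agent's discrete local state.
def assess_state(neighbors, d=1.0):
    state = set()
    for n, e in neighbors:
        row = round(n / d)                  # -1, 0, or +1 for a direct neighbor
        col = round(e / d)
        if (row, col) != (0, 0) and max(abs(row), abs(col)) == 1:
            state.add((row, col))           # occupied surrounding grid cell
    return frozenset(state)

# Example: a neighbor roughly one grid spacing to the North-East
print(assess_state([(0.9, 1.1)], d=1.0))    # frozenset({(1, 1)})
```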

(a) Rounding method used by the agents to assess their state
(b) Several agents in action
(c) 9 agents converged to a triangle
Figure 14: State assessment (a) and two screenshots of continuous time and space simulations with asynchronous agents (b,c)
while running do
    Measure current relative positions of all agents within sensor range;
    Determine the discrete local state;
    Determine whether any neighbors are taking an action;
    if not taking an action then
        if no neighbor is taking an action then
            Adjust distance and bearing to the closest neighbor(s) for the alignment time;
            if (the state calls for an action) and (the distance to all neighbors is safe) then
                Take an action from the action set with the commanded velocity;
                Adjust distance and bearing to the closest neighbor for the settling time;
            end if
        end if
    else if (no neighbor is taking an action) and (taking an action) and (the distance to all neighbors is safe) then
        Continue action;
    else
        Stop action;
    end if
end while
Algorithm 1: Pseudo-code for the agent controller

Distance and Bearing Adjustment Commands

When not performing actions, the robots are governed by attraction-repulsion and alignment forces with respect to their neighbors. Consider two robots i and j. The commanded velocity with which robot i aligns to robot j along the North direction (and, equivalently, the East direction) is given by

(5)

The first term is an attraction-repulsion term, and the second term is a bearing alignment term. Together, they cause robot i to gravitate to a specific distance and bearing relative to its neighbor j, where the bearing is measured with respect to North and a constant gain sets the desired velocity of the bearing alignment. The velocity of the attraction-repulsion term is given by

(6)

The attraction-repulsion velocity depends on a repulsion gain, an attraction gain, the distance that robot i measures to robot j, and a shift in the attraction term used to tune the equilibrium point to a desired distance. For given gains and a given desired distance, the required shift can be extracted. Eq. 6 has Lyapunov stability (Gazi and Passino 2002). Two agents are in equilibrium when the commanded velocities are zero, the measured distance equals the desired distance, and the measured bearing matches a desired alignment bearing. Note that the alignment between agents is reciprocal: for each alignment bearing of robot i towards robot j, there is a corresponding bearing of j towards i, because the relative distance is symmetric and the relative bearings are opposite, which manifests itself via Eq. 5. Multiple alignment bearings can be defined, in which case the agent selects the one closest to the currently measured bearing. For a robot that senses several neighbors, the full alignment command along North is obtained by combining the pairwise commands over all neighbors (and equivalently along East), unless the closest neighbor is nearer than a threshold distance, in which case only that agent is considered.
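As an illustration of the flavor of these commands, the sketch below computes a pairwise velocity command from an attraction-repulsion profile with an equilibrium at a desired distance, plus a bearing-alignment term towards the closest desired bearing. The specific profile, gains, and bearing handling are our own assumptions and do not reproduce Eqs. 5 and 6 exactly.

```python
# Minimal sketch of a pairwise distance-and-bearing adjustment command.
# Assumptions (not the paper's exact Eqs. 5-6): a spring-like attraction-repulsion
# profile with equilibrium at d_eq, plus extra repulsion when too close, and a
# bearing-alignment term towards the closest desired bearing (angles in radians,
# measured from North).
import math

def adjustment_command(d, bearing, desired_bearings,
                       k_rep=1.0, k_att=1.0, d_eq=1.0, k_align=0.5):
    # Radial velocity towards the neighbor: positive when too far (attraction),
    # negative when too close (repulsion), zero at d = d_eq.
    v_radial = k_att * (d - d_eq) - k_rep * max(0.0, d_eq - d)

    # Bearing alignment: rotate around the neighbor towards the closest
    # desired bearing (error wrapped to [-pi, pi]).
    err = min((((b - bearing + math.pi) % (2 * math.pi)) - math.pi
               for b in desired_bearings), key=abs)
    v_tangential = k_align * err

    # Project onto the North-East frame.
    v_north = v_radial * math.cos(bearing) - v_tangential * math.sin(bearing)
    v_east = v_radial * math.sin(bearing) + v_tangential * math.cos(bearing)
    return v_north, v_east
```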

Simulation Parameters

In our tests, we assigned fixed values to the sensing radius, the desired inter-agent distances, the safety distance, the two alignment times, the attraction and repulsion gains, and the commanded and alignment speeds. The state-action set and the active set were as in ALT4 from Sect. 6. We provided the agents with desired alignment bearings matching all bearings at which a neighbor can appear in the discrete state space, with a correspondingly larger desired distance for diagonal bearings. The adjacency matrix describing how the swarm is connected is continuously computed, and a Breadth-First Search (BFS) is used to check that the swarm remains connected. If the swarm disconnects at any point, the simulation exits. Alternatively, the simulation exits once the desired pattern is achieved. For each pattern, the simulations are repeated 50 times.
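The connectivity check is straightforward to implement. The sketch below is a minimal version over a boolean adjacency matrix, using a breadth-first search from an arbitrary agent; it is illustrative rather than the simulation's exact code.

```python
# Minimal sketch of the BFS connectivity check on the swarm's adjacency matrix.
# adjacency[i][j] is True if agents i and j sense each other (distance < sensor range).
from collections import deque

def swarm_is_connected(adjacency):
    n = len(adjacency)
    if n == 0:
        return True
    visited = {0}
    queue = deque([0])
    while queue:
        i = queue.popleft()
        for j in range(n):
            if adjacency[i][j] and j not in visited:
                visited.add(j)
                queue.append(j)
    return len(visited) == n
```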

(a) Triangle with 4 agents
(b) Triangle with 9 agents
Figure 15: Simulated trajectories to the desired patterns
(a) Triangle with 4 agents, bin width=s
(b) Triangle with 9 agents, bin width=s
Figure 16: Probability density for time to convergence based on simulation results

7.2 Simulation Results

The results for the triangles with 4 and 9 agents from Sect. 6, using the controller from ALT4, were validated in this continuous setting. Fig. 15 shows sample trajectories over time. We can see that the agents reshuffle until the desired pattern is achieved. The triangle with 4 agents was formed successfully in 50 out of 50 trials, with generally fast convergence times (within 100 seconds of simulated time). The triangle with 9 agents was formed successfully in 49 out of 50 trials. One trial experienced a separation due to a violation of the condition in Proposition 1, which caused an unsafe maneuver. This happened when two non-neighboring agents chose to perform an action at approximately the same time and came into each other's view, yet the resulting alignment maneuver was such that two agents (who formed the link between two parts of the swarm) moved further apart than the sensor range. Although we could expect the swarm to reconnect, the issue is noted and should be tackled in future work. Nevertheless, this was the only unsafe maneuver out of the thousands of maneuvers executed over all 50 trials. The times to completion and their probability are shown in Fig. 16.

8 Discussion

8.1 Insight into emergent behavior of swarms

The approach presented in this work offers novel insight into how emergent behavior can be achieved by a fully distributed swarm of very limited agents. In analogy to biological systems, the agents in the swarm merely acted on the instincts to:

be safe (not collide with others);

be social (not risk separation from/of the group);

be happy (be in a set of desired local states).

The final pattern emerged as the only unique combination of "happy" states. The agents had no knowledge that the desired local states were only a piece of a larger pattern, nor did they need to care. This shows that an emergent global swarm behavior can be reached merely by breaking it down into its locally observable constituents. Similarly, the framework can be used to guarantee the lack of emergence in case all static states cannot be arranged into any pattern.

The principles in this work were aimed at pattern formation, but future work could extend them to other applications, such as organized navigation or task allocation. Additionally, the formal methodology to avoid collisions and separations has several applications of its own. For instance, it can be used to guarantee that a wireless sensor network will never separate into multiple groups, even when faced with obstacles. This proof is independent of the number of agents that are added to or removed from the system, and has empirically been shown to hold in a continuous time and space setting.

8.2 State Explosion

Our approach uses a local-level analysis to determine whether a unique pattern will be achieved by the agents starting from any initial condition. This analysis is independent of the number of agents in the swarm, and is thus free of state-explosion issues. Nevertheless, the approach as a whole still requires us to determine whether a desired pattern is unique, and this part is still done using a global analysis, as in Sect. 5. In our implementation, we aimed to mitigate the computational explosion by catching unfeasible patterns as early as possible, yet the issue remains. Future research should further mitigate its effects for finite patterns.

Three solution avenues have been identified for this problem. The first is to focus on the agents at the border of the structure, assuming that all other agents will be enclosed by them. The second is to use repeating patterns: the local states could be made such that the agents arrange into infinitely repeating patterns (e.g., infinitely connecting hexagons) and thereby create a large, complex structure without defining the larger structure in full. The third is to allow agents that have been blocked for a long time to (temporarily) perform unsafe maneuvers, which might set the system in motion (but may come at the cost of the swarm disconnecting).

8.3 Achieving Complex Patterns in a Short Time

The results in Sect. 6 indicate that, although the patterns will be achieved, this can take a significant number of steps. However, our results also show that the behaviors can be tuned to improve performance by several orders of magnitude. We found that the tuning depends on the desired pattern (for instance, ALT3 and ALT4 could not be used to form a hexagon). This leads to the question: what is the optimal state-action mapping for a given pattern?

This problem could be solved using classical machine learning methodologies such as Reinforcement Learning (RL) or Evolutionary Robotics (ER). RL might not be well suited to the task, since the agents only have partial knowledge of the environment and are thus subject to state aliasing (Kaelbling et al. 1996). Given that the system is non-Markovian, ER might be a better candidate (de Croon et al. 2005). In this context, the objective would be to determine the optimal alteration of the state-action mapping such that the agents still achieve their goal while their time to completion is minimized. To this end, the conditions expressed by Lemma 2, Lemma 3, and Theorem 1 make it possible to locally check for state-action relations that can reliably achieve the global goal. However, ALT3 and ALT4 have shown that certain conditions may be too restrictive. Future work should explore how far the restrictions can be lifted without invalidating the local proof. Additionally, it might also be possible to improve convergence by adjusting waiting times, such that certain states wait longer than others before choosing to move, or by extending the sensor range, such that agents can take smarter actions towards desired states. Providing the agents with memory could be a further enhancement to the system (McCallum 1996).

8.4 The North Dependency

In this paper, we assumed that all agents share knowledge of a common direction (Assumption A1). This can appear to be a significant limitation, yet the framework presented in this paper can take the absence of a North direction into account. The knowledge of a common direction is not essential, but it does enable an agent to differentiate between otherwise equivalent states. Without a common direction, any given state becomes indistinguishable from its rotations.

Therefore, including a state in the set of desired states would automatically include all rotations of that state as well. Even if the final pattern that is formed is still unique, neglecting North has been shown to be a potential problem for the condition imposed by Lemma 3, which could be subject to local situations that cause the pattern to never emerge. However, if such cases are accepted (or otherwise circumvented), then Assumption A1 can be lifted.
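To illustrate how rotations multiply the states, the sketch below enumerates all 45-degree rotations of a grid state encoded as a set of the eight surrounding directions; this encoding is our own illustrative assumption and not the paper's representation.

```python
# Minimal sketch: enumerate all rotations of a local grid state when no common
# North is available. A state is encoded (for illustration) as a set of occupied
# neighbor directions out of the 8 surrounding grid cells.
DIRS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

def rotations(state):
    """Return the set of all 45-degree rotations of a state."""
    rotated = set()
    for k in range(8):
        rotated.add(frozenset(DIRS[(DIRS.index(d) + k) % 8] for d in state))
    return rotated

# Example: a state with one neighbor to the North has 8 rotated equivalents.
print(rotations({"N"}))
```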

9 Conclusion and Future Work

This work introduced a method, complete with a proof procedure, to devise local behaviors for highly limited agents such that a unique global pattern always emerges. Approaching the problem top-down, we first identify the local states that build the desired global pattern, and then check whether these local states indeed lead uniquely to the desired global pattern or whether the swarm can also converge to other, undesired emergent solutions. If the desired pattern is the unique emergent solution, then we can locally prove, based on the agents' local behavior, whether the pattern is achievable and whether it can be reached without any issues from any initial configuration. Despite breaking the system down into a discrete state space, and imposing the requirement in our proof that only one agent can move at a time, we have shown that the results can be reproduced by asynchronous agents operating in continuous time and continuous (unbounded) space. The methodology shown here has been applied to agents in a two-dimensional spatial plane. At its core, however, it is based on the more general idea of matching local states to each other in order to synthesize a unique, larger global state. With the correct mapping, we expect this strategy to also be applicable to systems with significantly different state and action spaces.

Future work will focus on bringing this framework to real-world robots. As pattern formation can take a long time if the agents take random actions, the first step will be to explore state-action optimization. This can be done either in the formal setting, by optimizing the state-action pairs directly, or in the dynamic setting, by studying how delays and waiting times in each state affect the development of the structure. Optimization procedures should be such that, on average, the number of steps to completion is minimized. A second step will be to explore the impact of noise and disturbances, which are inevitable in real-world systems. A detailed study of the levels of noise and system error that are acceptable, and of how to handle them, would be of high importance. Of particular interest is the impact of false-positive and false-negative sensor readings, which may cause an agent to have a mistaken view of its local state. This may cause temporary heterogeneity in the system, because that agent will not follow the rules as expected, and needs to be investigated. We expect that it should be possible to formally account for possible sensor errors by restricting the state-action mapping, although this could further restrict the patterns that can be formed while keeping the local proof intact. Finally, it would also be interesting to study the formation power that can be achieved when the North-dependency assumption is lifted, and the impact that this has on the conditions of the local proof.

References

  • Achtelik et al. (2012) Achtelik M, Achtelik M, Brunet Y, Chli M, Chatzichristofis S, Decotignie JD, Doth KM, Fraundorfer F, Kneip L, Gurdan D, Heng L, Kosmatopoulos E, Doitsidis L, Lee GH, Lynen S, Martinelli A, Meier L, Pollefeys M, Piguet D, Renzaglia A, Scaramuzza D, Siegwart R, Stumpf J, Tanskanen P, Troiani C, Weiss S (2012) Sfly: Swarm of micro flying robots. In: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp 2649–2650, DOI 10.1109/IROS.2012.6386281
  • Alonso-Mora (2014) Alonso-Mora J (2014) Collaborative motion planning for multi-agent systems. PhD thesis
  • Arbuckle and Requicha (2010) Arbuckle DJ, Requicha AAG (2010) Self-assembly and self-repair of arbitrary shapes by a swarm of reactive robots: algorithms and simulations. Autonomous Robots 28(2):197–211, DOI 10.1007/s10514-009-9162-7, URL https://doi.org/10.1007/s10514-009-9162-7
  • Arbuckle and Requicha (2012) Arbuckle DJ, Requicha AAG (2012) Issues in Self-Repairing Robotic Self-Assembly, Springer Berlin Heidelberg, Berlin, Heidelberg, pp 141–155. DOI 10.1007/978-3-642-33902-8˙6, URL https://doi.org/10.1007/978-3-642-33902-8_6
  • Bouffanais (2016) Bouffanais R (2016) A Network-Theoretic Approach to Collective Dynamics, Springer Singapore, Singapore, pp 45–74. DOI 10.1007/978-981-287-751-2˙4, URL http://dx.doi.org/10.1007/978-981-287-751-2_4
  • Cicerone et al. (2016) Cicerone S, Di Stefano G, Navarra A (2016) Asynchronous Embedded Pattern Formation Without Orientation, Springer Berlin Heidelberg, Berlin, Heidelberg, pp 85–98. DOI 10.1007/978-3-662-53426-7˙7, URL https://doi.org/10.1007/978-3-662-53426-7_7
  • Clarke et al. (1999) Clarke EM Jr, Grumberg O, Peled DA (1999) Model Checking. MIT Press, Cambridge, MA, USA
  • de Croon et al. (2005) de Croon G, Van Dartel M, Postma E (2005) Evolutionary learning outperforms reinforcement learning on non-markovian tasks. In: Workshop on Memory and Learning Mechanisms in Autonomous Robots, 8th European Conference on Artificial Life, Canterbury, Kent, UK
  • Dada and Ramlana (2013) Dada IG, Ramlana I (2013) A novel control algorithm for multi-robot pattern formation. International Journal of Advanced Research in Computer Science and Technology
  • Derakhshandeh et al. (2016) Derakhshandeh Z, Gmyr R, Richa AW, Scheideler C, Strothmann T (2016) Universal shape formation for programmable matter. In: Proceedings of the 28th ACM Symposium on Parallelism in Algorithms and Architectures, ACM, New York, NY, USA, SPAA ’16, pp 289–299, DOI 10.1145/2935764.2935784, URL http://doi.acm.org/10.1145/2935764.2935784
  • Di Luna et al. (2017) Di Luna GA, Flocchini P, Santoro N, Viglietta G, Yamauchi Y (2017) Shape formation by programmable particles. arXiv preprint arXiv:170503538
  • Dixon et al. (2012) Dixon C, Winfield AF, Fisher M, Zeng C (2012) Towards temporal verification of swarm robotic systems. Robotics and Autonomous Systems 60(11):1429 – 1441, DOI http://dx.doi.org/10.1016/j.robot.2012.03.003, URL http://www.sciencedirect.com/science/article/pii/S0921889012000474, towards Autonomous Robotic Systems 2011
  • Engelen et al. (2011) Engelen S, Gill EKA, Verhoeven CJM (2011) Systems engineering challenges for satellite swarms. In: Proceedings of the 2011 IEEE Aerospace Conference, IEEE Computer Society, Washington, DC, USA, AERO ’11, pp 1–8, DOI 10.1109/AERO.2011.5747259, URL http://dx.doi.org/10.1109/AERO.2011.5747259
  • Flocchini et al. (2005) Flocchini P, Prencipe G, Santoro N, Widmayer P (2005) Gathering of asynchronous robots with limited visibility. Theoretical Computer Science 337(1):147 – 168, DOI https://doi.org/10.1016/j.tcs.2005.01.001, URL http://www.sciencedirect.com/science/article/pii/S0304397505000149
  • Flocchini et al. (2008) Flocchini P, Prencipe G, Santoro N, Widmayer P (2008) Arbitrary pattern formation by asynchronous, anonymous, oblivious robots. Theoretical Computer Science 407(1):412 – 447, DOI https://doi.org/10.1016/j.tcs.2008.07.026, URL http://www.sciencedirect.com/science/article/pii/S0304397508005379
  • Fox and Shamma (2015) Fox MJ, Shamma JS (2015) Probabilistic performance guarantees for distributed self-assembly. IEEE Transactions on Automatic Control 60(12):3180–3194, DOI 10.1109/TAC.2015.2418673
  • Fujinaga et al. (2015) Fujinaga N, Yamauchi Y, Ono H, Kijima S, Yamashita M (2015) Pattern formation by oblivious asynchronous mobile robots. SIAM Journal on Computing 44(3):740–785, DOI 10.1137/140958682, URL https://doi.org/10.1137/140958682, https://doi.org/10.1137/140958682
  • Gazi and Passino (2002) Gazi V, Passino KM (2002) A class of attraction/repulsion functions for stable swarm aggregations. In: Decision and Control, 2002, Proceedings of the 41st IEEE Conference on, vol 3, pp 2842–2847 vol.3, DOI 10.1109/CDC.2002.1184277
  • Gazi and Passino (2004) Gazi V, Passino KM (2004) Stability analysis of social foraging swarms. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 34(1):539–557, DOI 10.1109/TSMCB.2003.817077
  • Gjondrekaj et al. (2012) Gjondrekaj E, Loreti M, Pugliese R, Tiezzi F, Pinciroli C, Brambilla M, Birattari M, Dorigo M (2012) Towards a Formal Verification Methodology for Collective Robotic Systems, Springer Berlin Heidelberg, Berlin, Heidelberg, pp 54–70. DOI 10.1007/978-3-642-34281-3˙7, URL http://dx.doi.org/10.1007/978-3-642-34281-3_7
  • Gold et al. (2000) Gold TB, Archibald JK, Frost RL (2000) A utility approach to multi-agent coordination. In: Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065), vol 3, pp 2052–2057 vol.3, DOI 10.1109/ROBOT.2000.846331
  • Güzel et al. (2017) Güzel MS, Gezer EC, Ajabshir VB, Bostancı E (2017) An adaptive pattern formation approach for swarm robots. In: 2017 4th International Conference on Electrical and Electronic Engineering (ICEEE), pp 194–198, DOI 10.1109/ICEEE2.2017.7935818
  • Haghighat and Martinoli (2017) Haghighat B, Martinoli A (2017) Automatic synthesis of rulesets for programmable stochastic self-assembly of rotationally symmetric robotic modules. Swarm Intelligence 11(3):243–270, DOI 10.1007/s11721-017-0139-4, URL https://doi.org/10.1007/s11721-017-0139-4
  • Hasan et al. (2018) Hasan E, Al-Wahedi K, Jumah B, Dawoud DW, Dias J (2018) Circle Formation in Multi-robot Systems with Limited Visibility, Springer International Publishing, Cham, pp 323–336. DOI 10.1007/978-3-319-70833-1˙27, URL https://doi.org/10.1007/978-3-319-70833-1_27
  • Hou and Cheah (2009) Hou SP, Cheah CC (2009) Multiplicative potential energy function for swarm control. In: 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp 4363–4368, DOI 10.1109/IROS.2009.5354769
  • Izzo and Pettazzi (2005) Izzo D, Pettazzi L (2005) Equilibrium shaping: distributed motion planning for satellite swarm. In: Proc. 8th Intern. Symp. on Artificial Intelligence, Robotics and Automation in space
  • Izzo et al. (2014) Izzo D, Simões LF, de Croon GCHE (2014) An evolutionary robotics approach for the distributed control of satellite formations. Evolutionary Intelligence 7(2):107–118, DOI 10.1007/s12065-014-0111-9, URL http://dx.doi.org/10.1007/s12065-014-0111-9
  • Joordens and Jamshidi (2010) Joordens MA, Jamshidi M (2010) Consensus control for a system of underwater swarm robots. IEEE Systems Journal 4(1):65–73, DOI 10.1109/JSYST.2010.2040225
  • Kaelbling et al. (1996) Kaelbling LP, Littman ML, Moore AW (1996) Reinforcement learning: A survey. J Artif Int Res 4(1):237–285, URL http://dl.acm.org/citation.cfm?id=1622737.1622748
  • Khaledyan and de Queiroz (2017) Khaledyan M, de Queiroz M (2017) Formation maneuvering control of multiple nonholonomic robotic vehicles: Theory and experimentation. arXiv preprint arXiv:170607830
  • Áron Kisdi and Tatnall (2011) Áron Kisdi, Tatnall AR (2011) Future robotic exploration using honeybee search strategy: Example search for caves on mars. Acta Astronautica 68(11–12):1790 – 1799, DOI http://dx.doi.org/10.1016/j.actaastro.2011.01.013, URL http://www.sciencedirect.com/science/article/pii/S0094576511000245
  • Klavins (2002) Klavins E (2002) Automatic synthesis of controllers for distributed assembly and formation forming. In: Proceedings 2002 IEEE International Conference on Robotics and Automation (Cat. No.02CH37292), vol 3, pp 3296–3302, DOI 10.1109/ROBOT.2002.1013735
  • Klavins (2007) Klavins E (2007) Programmable self-assembly. IEEE Control Systems 27(4):43–56, DOI 10.1109/MCS.2007.384126
  • Konur et al. (2012) Konur S, Dixon C, Fisher M (2012) Analysing robot swarm behaviour via probabilistic model checking. Robotics and Autonomous Systems 60(2):199 – 213, DOI http://dx.doi.org/10.1016/j.robot.2011.10.005, URL http://www.sciencedirect.com/science/article/pii/S0921889011001916
  • Krishnanand and Ghose (2005) Krishnanand K, Ghose D (2005) Formations of minimalist mobile robots using local-templates and spatially distributed interactions. Robotics and Autonomous Systems 53(3):194 – 213, DOI https://doi.org/10.1016/j.robot.2005.09.006, URL http://www.sciencedirect.com/science/article/pii/S0921889005001351
  • Lerman et al. (2001) Lerman K, Galstyan A, Martinoli A, Ijspeert A (2001) A macroscopic analytical model of collaboration in distributed robotic systems. Artificial Life 7(4):375–393
  • Loncaric (1998) Loncaric S (1998) A survey of shape analysis techniques. Pattern Recognition 31(8):983–1001, DOI 10.1016/S0031-3203(97)00122-2, URL http://www.sciencedirect.com/science/article/pii/S0031320397001222
  • de Marina Peinado (2016) de Marina Peinado HJG (2016) Distributed formation control for autonomous robots. University of Groningen
  • McCallum (1996) McCallum AK (1996) Reinforcement learning with selective perception and hidden state. PhD thesis, aAI9618237
  • Morgan et al. (2015) Morgan D, Chung SJ, Hadaegh FY (2015) Swarm assignment and trajectory optimization using variable-swarm, distributed auction assignment and model predictive control. In: AIAA guidance, navigation, and control conference, p 0599
  • Navarro and Matía (2012) Navarro I, Matía F (2012) An introduction to swarm robotics. ISRN Robotics 2013
  • Oh et al. (2015) Oh KK, Park MC, Ahn HS (2015) A survey of multi-agent formation control. Automatica 53(Supplement C):424 – 440, DOI https://doi.org/10.1016/j.automatica.2014.10.022, URL http://www.sciencedirect.com/science/article/pii/S0005109814004038
  • Pereira and Hsu (2008) Pereira AR, Hsu L (2008) Adaptive formation control using artificial potentials for euler-lagrange agents. {IFAC} Proceedings Volumes 41(2):10788 – 10793, DOI http://dx.doi.org/10.3182/20080706-5-KR-1001.01829, URL http://www.sciencedirect.com/science/article/pii/S1474667016406981, 17th {IFAC} World Congress
  • Rubenstein et al. (2014) Rubenstein M, Cornejo A, Nagpal R (2014) Programmable self-assembly in a thousand-robot swarm. Science 345(6198):795–799, DOI 10.1126/science.1254295, URL http://science.sciencemag.org/content/345/6198/795, http://science.sciencemag.org/content/345/6198/795.full.pdf
  • Şahin et al. (2008) Şahin E, Girgin S, Bayindir L, Turgut AE (2008) Swarm Robotics, Springer Berlin Heidelberg, Berlin, Heidelberg, pp 87–100. DOI 10.1007/978-3-540-74089-6˙3, URL http://dx.doi.org/10.1007/978-3-540-74089-6_3
  • Saska et al. (2016) Saska M, Vonásek V, Chudoba J, Thomas J, Loianno G, Kumar V (2016) Swarm distribution and deployment for cooperative surveillance by micro-aerial vehicles. Journal of Intelligent & Robotic Systems pp 1–24, DOI 10.1007/s10846-016-0338-z, URL http://dx.doi.org/10.1007/s10846-016-0338-z
  • Scheper and de Croon (2016) Scheper KYW, de Croon GCHE (2016) Abstraction as a Mechanism to Cross the Reality Gap in Evolutionary Robotics, Springer International Publishing, Cham, pp 280–292. DOI 10.1007/978-3-319-43488-9˙25, URL http://dx.doi.org/10.1007/978-3-319-43488-9_25
  • Suzuki and Yamashita (1999) Suzuki I, Yamashita M (1999) Distributed anonymous mobile robots: Formation of geometric patterns. SIAM Journal on Computing 28(4):1347–1363, DOI 10.1137/S009753979628292X, URL https://doi.org/10.1137/S009753979628292X, https://doi.org/10.1137/S009753979628292X
  • Tanner (2004) Tanner HG (2004) On the controllability of nearest neighbor interconnections. In: 2004 43rd IEEE Conference on Decision and Control (CDC) (IEEE Cat. No.04CH37601), vol 3, pp 2467–2472 Vol.3, DOI 10.1109/CDC.2004.1428782
  • Van Steen (2010) Van Steen M (2010) Graph theory and Complex Networks: An introduction. Maarten van Steen
  • Verhoeven et al. (2011) Verhoeven C, Bentum M, Monna G, Rotteveel J, Guo J (2011) On the origin of satellite swarms. Acta Astronautica 68(7–8):1392 – 1395, DOI http://dx.doi.org/10.1016/j.actaastro.2010.10.002, URL http://www.sciencedirect.com/science/article/pii/S0094576510003814
  • Wang et al. (2017) Wang X, Zerr B, Thomas H, Clement B, Xie Z (2017) Pattern formation for a fleet of auvs based on optical sensor. In: OCEANS 2017 - Aberdeen, pp 1–9, DOI 10.1109/OCEANSE.2017.8084615
  • Winfield et al. (2005a) Winfield AF, Sa J, Fernández-Gago MC, Dixon C, Fisher M (2005a) On formal specification of emergent behaviours in swarm robotic systems. International Journal of Advanced Robotic Systems 2(4), DOI 10.5772/5769, URL http://arx.sagepub.com/content/2/4/39.abstract, http://arx.sagepub.com/content/2/4/39.full.pdf+html
  • Winfield et al. (2005b) Winfield AFT, Harper CJ, Nembrini J (2005b) Towards Dependable Swarms and a New Discipline of Swarm Engineering, Springer Berlin Heidelberg, Berlin, Heidelberg, pp 126–142. DOI 10.1007/978-3-540-30552-1˙11, URL http://dx.doi.org/10.1007/978-3-540-30552-1_11
  • Yamauchi and Yamashita (2013) Yamauchi Y, Yamashita M (2013) Pattern Formation by Mobile Robots with Limited Visibility, Springer International Publishing, Cham, pp 201–212. DOI 10.1007/978-3-319-03578-9˙17, URL https://doi.org/10.1007/978-3-319-03578-9_17
  • Yamauchi and Yamashita (2014) Yamauchi Y, Yamashita M (2014) Randomized pattern formation algorithm for asynchronous oblivious mobile robots. In: Kuhn F (ed) Distributed Computing, Springer Berlin Heidelberg, Berlin, Heidelberg, pp 137–151
  • Zahn and Roskies (1972) Zahn CT, Roskies RZ (1972) Fourier descriptors for plane closed curves. IEEE Transactions on Computers C-21(3):269–281, DOI 10.1109/TC.1972.5008949