Development of swarm behavior in artificial learning agents that adapt to different foraging environments

04/01/2020 ∙ by Andrea López-Incera, et al. ∙ Leopold-Franzens-Universität Innsbruck ∙ University of Konstanz

Collective behavior, and swarm formation in particular, has been studied from several perspectives within a large variety of fields, ranging from biology to physics. In this work, we apply Projective Simulation to model each individual as an artificial learning agent that interacts with its neighbors and surroundings in order to make decisions and learn from them. Within a reinforcement learning framework, we discuss one-dimensional learning scenarios where agents need to get to food resources to be rewarded. We observe how different types of collective motion emerge depending on the distance the agents need to travel to reach the resources. For instance, strongly aligned swarms emerge when the food source is placed far away from the region where agents are situated initially. In addition, we study the properties of the individual trajectories that occur within the different types of emergent collective dynamics. Agents trained to find distant resources exhibit individual trajectories with Lévy-like characteristics as a consequence of the collective motion, whereas agents trained to reach nearby resources present Brownian-like trajectories.

I Introduction

Collective behavior is a common but intriguing phenomenon in nature. Species as diverse as locusts and some families of fish and birds exhibit different types of collective motion in very different environments and situations. Although the general properties of swarms, schools and flocks have been widely studied (see e.g. (Zafeiris and Vicsek, 2012) for a review), the emergence of global, coordinated motion from the individual actions is still a subject of study. Different approaches, ranging from statistical physics to agent-based models, have led to new insights and descriptions of the phenomenon. Statistical physics models are very successful at describing macroscopic properties such as phase transitions and metastable states (Yates et al., 2009; Kolpas et al., 2007; Bode et al., 2010), but in order to apply the powerful tools of statistical mechanics, these models normally simplify the individuals to particles that interact according to certain rules dictated by the physical model adopted, as for instance the Ising-type interaction of the spins in a lattice. A different class of models is given by the so-called self-propelled particle (SPP) models (Vicsek et al., 1995; Czirók et al., 1999a, b; O'Loan and Evans, 1999), which enable higher complexity in descriptions at the individual level but still allow one to employ the tools of statistical physics. They describe individuals as particles that move with a constant velocity and interact with other individuals via fixed sets of rules that are externally imposed. In SPP models, the description of the interactions is not restricted to physically accepted first principles, but can include ad hoc rules based on specific experimental observations.

In this work, we follow a different approach and model the individuals as artificial learning agents. In particular, we apply Projective Simulation (PS) (Briegel and De las Cuevas, 2012), which is a model of agency that can incorporate learning processes via a reinforcement learning mechanism. The individuals are thus described as PS agents that interact with their surroundings, make decisions accordingly and learn from them based on rewards provided by the environment. This framework allows for a more detailed, realistic description in terms of the perceptual apparatus of the agent. One of the main differences with respect to previous models is that the interaction rules between agents are not imposed or fixed in advance, but emerge as the result of learning in a given task environment. Agent-based models of this type, which employ artificial intelligence to model behavior, have been gaining popularity in recent years. Artificial neural networks (ANN) have been used, for instance, in the context of navigation behaviors (Morales et al., 2005; Mueller et al., 2011), and reinforcement learning (RL) algorithms have been applied to model collective behavior in different scenarios, such as pedestrian movement (Martinez-Gil et al., 2017) or flocking (Shimada and Bentley, 2018; Durve et al., 2019).

In contrast to other learning models such as neural networks, PS provides a transparent, explicit structure that can be analyzed and interpreted. This feature is particularly useful in modeling collective behavior, since we can study the individual decision making processes, what the agents learn and why they learn it. This way, we can directly address the questions of how and why particular individual interactions arise that in turn lead to collective behaviors. Initial work by Ried et al. (Ried et al., 2019a), where the authors use PS to model the density-dependent swarm behavior of locusts, laid the foundations of the present work.

Since the interaction rules are developed by the agents themselves, the challenge is to design the environment and learning task that will give rise to the individual and, consequently, collective behavior. In previous works, the agents are directly rewarded for aligning themselves with the surrounding agents (Ried et al., 2019a) or for not losing neighbours (Durve et al., 2019). Instead of rewarding a specific behavior, in this work we set a survival task that the agents need to fulfill in order to get the reward, and then analyze the emergent behavioral dynamics.

As a starting hypothesis, we consider the need to forage as an evolutionary pressure and design a learning task that consists in finding a remote food source. Due to this particular survival task, our work relates to the investigation of foraging theories and optimal searching behavior.

There is a vast number of studies devoted to the analysis of foraging strategies in different types of environments, e.g., (Sinervo, 1997; Stephens et al., 2007; Pyke, 1984; Viswanathan et al., 2011). In the particular case of environments with sparsely distributed resources (e.g. patchy landscapes), there are two main candidates for the optimal search model: Lévy walks (Lévy, 1954; Shlesinger and Klafter, 1986; Viswanathan et al., 1999) and composite correlated random walks (CCRW) (Benhamou, 1992, 2007). Although the mathematical models behind them are fundamentally different, they have some common features that make the movement patterns hard to distinguish (Viswanathan et al., 1996; Sims et al., 2008; Benhamou, 2007; Edwards, 2011; Edwards et al., 2012). In broad terms, both models can produce trajectories that are a combination of short steps (with large turning angles in 2D), which are useful for exploring the patch area, and long, straight steps, which are efficient to travel the inter-patch distances. Even though both models have theoretical (Viswanathan et al., 1999; Benhamou, 1992) and experimental (e.g. (Humphries et al., 2012; Dragon et al., 2012)) support, it is not yet clear whether animal foraging patterns can be described and explained by such models or whether they are too complex to admit such simplifications.

Since our learning task is directly related to foraging strategies, we link the present work to the aforementioned studies by analyzing the individual trajectories the agents produce as a consequence of the behavior developed in the different learning contexts.

The paper is organized as follows: an introduction to Projective Simulation and a detailed description of the model and the learning setup are given in Sec. II. In Sec. III, we present different learning tasks and analyze the resulting learned behaviors. In Sec. IV, we study the emergent group dynamics and individual trajectories within the framework of search models to determine if they can be described as Lévy walks or composite correlated random walks. Finally, we summarize the results and conclude in Sec. V.

II The model and the learning setup

A wide range of models and techniques have been applied to the study of collective behavior. In this work, we apply Projective Simulation, a model for artificial agency (Briegel and De las Cuevas, 2012; Mautner et al., 2015; Makmal et al., 2016; Melnikov et al., 2017, 2018; Ried et al., 2019b). Each individual is an artificial agent that can perceive its surroundings, make decisions and perform actions. Within the PS model, the agent's decision making is integrated into a framework for reinforcement learning (RL) that allows one to design concrete scenarios and tasks that the individuals should solve and then study the resulting strategies developed by the agents (we remark that the notion of strategy employed throughout this work does not imply that the agents are able to plan; we use the word "strategy" to refer to the behavior the agents develop given a certain learning task). In addition, each agent's motor and sensory abilities can be modeled in a detailed, realistic way.

In our model of collective behavior, the interaction rules with other individuals are not fixed in advance; instead the agents develop them based on their previous experience and learning. The most natural interpretation of this approach is that it describes how a group of given individuals change their behavior over the course of their interactions, for example human children at play. However, our artificial learning agents can also be used to model simpler entities that do not exhibit learning in the sense of noticeable modifications of their responses over the course of a single individual’s lifetime, but only change their behavior over the course of several generations. In this case, a single simulated agent does not correspond to one particular individual, in one particular generation, but rather stands as an avatar for a generic individual throughout the entire evolution of the species. The evolutionary pressures driving behavioural changes over this time-scale can be easily encoded in a RL scenario, since the reward scheme can be designed in such a way that only the behaviors that happen to be beneficial under these pressures are rewarded. This allows us to directly test whether the evolutionary pressures are a possible causal explanation for the observed behavior or not.

Although other reinforcement learning algorithms may be used to model a learning agent, Projective Simulation is particularly suitable for the purpose of modeling collective behavior, since it provides a clear and transparent structure that gives direct access to the internal state of the agent, so that the deliberation process can be analyzed in an explicit way and can be related to the agent’s behavior. This analysis can help us gain new insight into how and why the individual interactions that lead to collective behaviors emerge.

II.1 Projective Simulation

Projective Simulation (PS) is a model for artificial agency that is based on the notion of episodic memory (Briegel and De las Cuevas, 2012). The agent interacts with its surroundings and receives some inputs called percepts, which trigger a deliberation process that leads to the agent performing an action on the environment.

In the PS model, the agent processes the percepts by means of an internal structure called episodic and compositional memory (ECM), whose basic units are called clips and represent an episode of the agent's experience. Mathematically, the ECM can be represented as a directed, weighted graph, where each node corresponds to a clip and each edge corresponds to a transition between two clips. All the edge weights are stored in the adjacency matrix of the graph, termed the $h$-matrix. For the purpose of this work, the most basic two-layered structure is sufficient to model simple agents. Percept-clips are situated in the first layer and are connected to the action-clips, which constitute the second layer (see Fig. 1). Let us define these components of the ECM more formally.

  • The percepts are mathematically defined as $N$-tuples $s = (s_1, \dots, s_N) \in \mathcal{S}$, where $\mathcal{S}$ is the Cartesian product $\mathcal{S} = \mathcal{S}_1 \times \mathcal{S}_2 \times \dots \times \mathcal{S}_N$. As can be seen from this mathematical definition, the percept has several categories, represented by $\mathcal{S}_i$. Each component of the tuple is denoted by $s_i \in \{1, \dots, |\mathcal{S}_i|\}$, where $|\mathcal{S}_i|$ is the number of possible states of $\mathcal{S}_i$. The total number of percepts is thus given by $|\mathcal{S}| = \prod_{i=1}^{N} |\mathcal{S}_i|$.

  • Analogously, the actions are defined as $a = (a_1, \dots, a_M) \in \mathcal{A}$, where $\mathcal{A} = \mathcal{A}_1 \times \dots \times \mathcal{A}_M$ and $a_i \in \{1, \dots, |\mathcal{A}_i|\}$, where $|\mathcal{A}_i|$ is the number of possible states of $\mathcal{A}_i$. The total number of actions is given by $|\mathcal{A}| = \prod_{i=1}^{M} |\mathcal{A}_i|$.

As an example, consider an agent that perceives both its internal state, with two possible values, and some visual input, which can also take two values. The percept space then contains four possible percepts, each a tuple combining one internal-state value with one visual-input value. The actions can be defined analogously.
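As an illustration of this Cartesian-product structure, the short Python snippet below enumerates the percept space used later in Sec. II.2 (two categories, front and back region, with five values each); the value labels are descriptive stand-ins, not the notation of the paper.

```python
from itertools import product

# Percept space as a Cartesian product of two categories (front and back region),
# each with five possible values. The labels below are illustrative stand-ins.
values = ("none", "low-receding", "high-receding", "low-approaching", "high-approaching")
percepts = list(product(values, values))   # each percept is a (front, back) tuple
print(len(percepts))                       # 25 percepts in total
```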

Figure 1 represents the structure of the ECM in our model, which consists of a total of 25 percepts and 2 actions (see Sec. II.2 for a detailed description).

Figure 1: Structure of the ECM, which consists of two layers, one for the percepts and one for the actions. Percepts and actions are connected by edges whose weight $h$ determines the transition probability from the given percept to each action (see Sec. II.2 for details on the model).

Let us introduce how the agent interacts with the environment and makes decisions via the ECM. When the agent receives a percept, the corresponding percept-clip inside the ECM is activated, starting a random walk that only ends when an action-clip is reached, which triggers a real action on the environment. The transition probability from a given percept-clip $s_i$ to an action-clip $a_j$ is determined by the corresponding edge weight $h(s_i, a_j)$ as

$$p(a_j | s_i) = \frac{h(s_i, a_j)}{\sum_k h(s_i, a_k)}, \qquad (1)$$

where the normalization is done over all possible edges connected to clip $s_i$. This process, starting with the presentation of a perceptual input that activates a percept-clip and finishing when the agent performs an action on the environment, is termed an (individual) interaction round.
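A minimal sketch of this deliberation step in Python is given below; the two-layered ECM is represented simply as a matrix of edge weights, and the percept and action indices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n_percepts, n_actions = 25, 2          # sizes of the ECM used in Sec. II.2
h = np.ones((n_percepts, n_actions))   # h-matrix, initialized to 1 (uniform policy)

def deliberate(percept_index):
    """Sample an action according to the transition probabilities of Eq. (1)."""
    weights = h[percept_index]
    probs = weights / weights.sum()    # normalize over the edges leaving this percept-clip
    return rng.choice(n_actions, p=probs)

action = deliberate(3)                 # the deliberation part of one interaction round
```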

The structure of the ECM allows one to easily model learning by just updating the $h$-matrix at the end of each interaction round. Specifically, reinforcement learning is implemented by the environment giving a reward to the agent every time that it performs the correct action. The reward increases the $h$-values, and thus the transition probabilities, of the successful percept-action pair (the $h$-matrix is initialized with all its elements being 1, so that the probability distribution of the actions is uniform for each percept). Hence, whenever the agent perceives the same percept again, it is more likely to reach the correct action. However, in the context of this work, we are setting a learning task in which the agent should perform a sequence of several actions to reach the goal and get the reward. If the reward were given only at the last interaction round, only the last percept-action pair would be rewarded. Thus, some additional mechanism is necessary in order to store a sequence of several percept-action pairs in the agent's memory. This mechanism is called glow and the matrix that stores the information about this sequence is denoted by $g$. The components $g(s_i, a_j)$, corresponding to the percept-action transitions $s_i \to a_j$, are initialized to zero and are updated at the end of every interaction round according to

$$g(s_i, a_j) \rightarrow (1 - \eta)\, g(s_i, a_j) + \delta_{(s_i, a_j), (s_t, a_t)}, \qquad (2)$$

where $(s_t, a_t)$ is the percept-action pair used in the current interaction round and $\eta$ is the glow parameter, which damps the intensity of the given percept-action memory. For $\eta$ close to one, the actions that are taken at interaction rounds in temporal vicinity to the rewarded action are more intensely remembered than the initial actions. If $\eta = 0$, all actions the agent performed until the rewarded interaction are equally remembered. The $h$-matrix is thereby updated in such a way that the percept-action pairs that are used more often to get to the reward are proportionally more rewarded than the pairs that were rarely used. Note that the agent is not able to distinguish an ordered sequence of actions, but this is not necessary for the purpose of this work.

In the context of our learning task, the agent receives a reward from the environment at the end of the interaction round at which it reaches a goal. Then, the learning is implemented by updating the $h$-matrix with the rule

$$h(s_i, a_j) \rightarrow h(s_i, a_j) + R\, g(s_i, a_j), \qquad (3)$$

where $R$ is the reward (only non-zero if the agent reached the goal at the given interaction round) and $g$ is the updated glow matrix (technically, the glow matrix is updated first and then, if the agent is rewarded, the $h$-matrix is updated).
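Extending the previous sketch, the glow and reward updates of Eqs. (2) and (3) could be implemented as follows; the exact bookkeeping (damping all glow values by $1-\eta$ and adding 1 to the edge just used) follows the standard PS glow mechanism and is our reading of the equations above.

```python
eta = 0.2                                   # glow parameter, as in Table 1
R = 1.0                                     # reward, as in Table 1
g = np.zeros_like(h)                        # glow matrix, initialized to zero

def update_after_round(percept_index, action_index, rewarded):
    """Apply Eq. (2) every round and Eq. (3) only when the agent reaches a food point."""
    global h
    g[:] = (1.0 - eta) * g                  # damp the memory of older percept-action pairs
    g[percept_index, action_index] += 1.0   # remember the pair used in this round
    if rewarded:
        h = h + R * g                       # reinforce all recently used pairs
```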

Since we model collective behavior, we consider a group of several agents, each of which has its own and independent ECM to process the surrounding information. Details on the specific learning task and the features of the agents are given in the following section.

II.2 Details of the model

We consider an ensemble of individuals that we model as PS learning agents, which possess the internal structure (ECM) and the learning capabilities described in section II.1. This description of the agents can be seen as a simplified model for species with low cognitive capacities and simple deliberation mechanisms, or just as a theoretical approach to study the optimal behavior that emerges under certain conditions.

With respect to the learning, we set up a concrete task and study the strategy the agents develop to fulfill it. In particular, we consider a one-dimensional circular world with sparse resources, which mimics patchy landscapes such as deserts, where organisms need to travel long distances to find food. Inspired by this type of environment, we model a task where agents need to reach a remote food source to get rewarded. The strategy the agents learn via the reinforcement learning mechanism does not necessarily imply that the individual organisms should be able to learn to develop it, but can also be interpreted as the optimal behavior that a species would exhibit under the given evolutionary pressures.

Let us proceed to detail the agents' motor and sensory abilities. The positions that the agents can occupy in the world are discretized and labeled from 1 up to the world size (the total number of positions). Several agents can occupy the same position. At each interaction round, the agent can decide between two actions: either it continues moving in the same direction or it turns around and moves in the opposite direction. The agents move at a fixed speed of 1 position per interaction round. For the remainder of this work, we consider the distance between two consecutive positions of the world to be our basic unit of length. Therefore, unless stated otherwise, all distances given in the following are measured in terms of this unit. We remark that, in contrast to other approaches where the actions are defined with respect to other individuals (for instance, in the self-propelled particle models (Czirók et al., 1999a; Vicsek et al., 1995), the particle changes its orientation at each time step to align itself with the average orientation of the neighboring particles), the actions our agents can perform are purely motor and only depend on the previous orientation of the agent.

Perception is structured as follows: a given agent, termed the focal agent, perceives the relative positions and orientations of other agents inside its visual range (a radius centered at the agent's position), termed its neighbors. The percept space (see Sec. II.1) is structured in the Cartesian product form $\mathcal{S} = \mathcal{S}_{\text{front}} \times \mathcal{S}_{\text{back}}$, where $\mathcal{S}_{\text{front}}$ refers to the region in front of the focal agent and $\mathcal{S}_{\text{back}}$ to the region at its back. More precisely, each percept contains the information of the orientation of the neighbors in each region with respect to the focal agent and whether the density of individuals in this region is high or low (see Fig. 2). Each category of percepts can take five values (25 percepts in total), which mean:

  • No agents in this region.

  • Low density, receding: there are fewer than 3 neighbors in this region and the majority of them are receding from the focal agent.

  • High density, receding: there are 3 or more neighbors in this region and the majority of them are receding from the focal agent.

  • Low density, approaching: there are fewer than 3 neighbors in this region and the majority of them are approaching the focal agent.

  • High density, approaching: there are 3 or more neighbors in this region and the majority of them are approaching the focal agent.

Figure 2: Graphical representation of the percepts' meaning. Only the front visual range (colored region) is considered, which corresponds to the values that the front category can take. The focal agent is represented with a larger arrow than the frontal neighbors. The agent can only see its neighbors inside the visual range, and it can distinguish whether the majority of them are receding (light blue) or approaching (dark blue) and whether there are fewer than three or three or more of them.

In the following discussions, we refer to the situation where the focal agent has the same orientation as the neighbors as a percept of positive flow (majority of neighbors are receding at the front and approaching at the back). If the focal agent is oriented against its neighbors (these are approaching at the front and receding at the back), we denote it as a percept of negative flow. Note that the agents can only perceive information about the neighboring agents inside their visual range, but they are not able to see any resource or landmark present in the surroundings. This situation can be found in realistic, natural environments where the distance between resources is large and the searcher has no additional input while moving from one patch to another. Furthermore, the important issue of body orientation is thereby taken into account in our model (Pyke, 2015).
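To make the perception model concrete, the sketch below shows one way the percept value of a single region (front or back) could be computed on the circular world; the encoding of the five values, the handling of co-located agents and the tie-breaking between receding and approaching neighbors are our own assumptions for illustration.

```python
def region_percept(focal_pos, focal_dir, others, region, visual_range=6, world_size=500):
    """Percept value for one region. others: list of (position, direction) of the
    other agents, with direction in {+1, -1}; region is 'front' or 'back'."""
    receding, approaching = 0, 0
    for pos, direction in others:
        # signed separation along the focal agent's heading, on the circular world
        delta = (pos - focal_pos) % world_size
        if delta > world_size // 2:
            delta -= world_size
        delta *= focal_dir
        if delta == 0 or abs(delta) > visual_range:
            continue                       # co-located agents are ignored here (assumption)
        in_front = delta > 0
        if (region == 'front') != in_front:
            continue                       # neighbor belongs to the other region
        # a neighbor recedes if it moves away from the focal agent's position
        moving_away = (direction == focal_dir) if in_front else (direction != focal_dir)
        if moving_away:
            receding += 1
        else:
            approaching += 1
    total = receding + approaching
    if total == 0:
        return 'none'
    majority = 'receding' if receding >= approaching else 'approaching'   # ties: arbitrary
    density = 'high' if total >= 3 else 'low'
    return f'{density}-{majority}'
```

With this convention, a focal agent surrounded by neighbors sharing its orientation perceives 'receding' in front and 'approaching' at the back, i.e. the percept of positive flow described above.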

The interactions between agents are assumed to be sequential, in the sense that one agent at a time receives a percept, deliberates and then takes its action before another agent is given its percept (in order to do so, agents are given a label at the beginning of the simulation to keep track of the interaction sequence, but they are placed at random positions in the world). There are two reasons for this choice. For one, in a group of real animals (or other entities), different individuals typically take action at slightly different times, with perfect synchronization being a remarkable and costly exception. The second argument in favor of sequential updating is that it ensures that a given agent's circumstances do not change from the time it receives its percept until the time when it acts. If the actions of all agents were applied simultaneously, a given focal agent would not be able to react to the actions of the other agents in the same round. Such a simplification would not allow us to take into account any sequential, time-resolved interactions between different agents of a group. In the real situation, while one focal agent is deliberating, other agents' actions may change its perceptual input. Therefore, an action that may have been appropriate at the beginning of the round would no longer be appropriate at this agent's turn.

Figure 3: Structure of the simulation. Each ensemble of agents is trained for a fixed number of trials, where each trial consists of 50 global interaction rounds (g.i.r.). At each g.i.r., the agents interact sequentially (see text for details).

The complete simulation has the structure displayed in Fig. 3, where:

  • With each ensemble of agents, we perform a simulation consisting of a certain number of trials during which the agents develop new behaviors to get the reward (RL mechanism). This process is denoted as the learning process or training from this point on.

  • Each trial consists of 50 global interaction rounds. At the beginning of each trial, all agents of the ensemble are placed in random positions within the initial region (see Fig. 4).

  • We define a global interaction round to be the sequential interaction of the ensemble, where agents take turns to perform their individual interaction round (perception-deliberation-action). Note that each agent perceives, decides and moves only once per global interaction round.

The learning task is defined as follows: at the beginning of each trial, all the agents are placed at random positions within the first few positions of the world (the initial region in Fig. 4), with orientations also randomized. Each agent has a fixed number of interaction rounds over the course of a trial to get to a food source, located at two points placed symmetrically on either side of the center of the initial region (Fig. 4). At each interaction round, the agent first evaluates its surroundings and gets the corresponding percept. Given the percept, it decides to perform one out of the two actions ("go" or "turn"). After a decision is made, it moves one position. If the final position of the agent at the end of an interaction round is a food point, the agent is rewarded ($R = 1$) and its ECM is updated according to the rules specified in Sec. II.1. Each agent can only be rewarded once per trial.

Figure 4: 1D environment (world). Agents are initialized randomly within the initial region. Food is located at two positions placed symmetrically on either side of C, the center of the initial region, both at the same distance from C; this distance characterizes each learning task.

We consider different learning scenarios by changing the distance at which food is positioned. However, note that a circular one-dimensional world admits a trivial strategy for reaching the food without any interactions, namely going straight in one direction until food is reached. Thus, in order to emulate the complexity that a more realistic two-dimensional scenario has in terms of degrees of freedom of the movement, we introduce a noise element that randomizes the orientation of each agent every fixed number of steps (5 in our simulations, see Table 1): the agent changes orientation with probability 1/2. Not all agents randomize their orientations at the same interaction round, which would lead to random global behavior. This randomization can also be interpreted biologically as a fidgeting behavior or even as a built-in behavior to escape predators (Humphries and Driver, 1970); protean movement has been observed in several species (Bilecenoğlu, 2005; Eifler and Eifler, 2014; Yager et al., 1990; Combes et al., 2012), and there exist empirical studies showing that unpredictable turns (Jones et al., 2011) and complex movement patterns (Richardson et al., 2018) decrease the risk of predation. If the memory of the organism is not very powerful, we can also consider that, at these randomization points, it forgets its previous trajectory and needs to rely on the neighbors' orientations in order to stabilize its trajectory. The agent can do so, since the randomization takes place right before the agent starts the interaction round.
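A minimal sketch of this randomization step, assuming that each agent's reorientation rounds are staggered by an individual offset so that not all agents randomize at once:

```python
import random

REORIENT_FREQ = 5      # "Reorient. freq." in Table 1

def maybe_randomize(direction, round_index, offset):
    """direction in {+1, -1}; offset staggers the agents' randomization rounds."""
    if (round_index + offset) % REORIENT_FREQ == 0 and random.random() < 0.5:
        return -direction
    return direction
```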

Under these conditions, we study how the agents get to the food when the only input information available to them is the orientation of the agents around them.

III Learned behavior in different scenarios

We consider different learning scenarios characterized by the distance at which the food is placed (see Fig. 4). We study how the dynamics that the agents develop in order to reach the food source change as this distance increases. In particular, we focus on two extreme scenarios: one where the resource is within the initial region in which the agents are initialized, and one where the resource is at a much larger distance. As a scale for this distance, we consider how far an agent can travel on average with a random walk, provided that it moves one position per interaction round. Hence, in the other extreme scenario the food is placed well beyond this random-walk range. The first situation mimics an environment with densely distributed resources, whereas the second one resembles a resource-scarce environment where a random walk is no longer a valid strategy for reaching food sources.

Table 1: Description of the parameters used in the learning simulations with PS.
  Agent parameters: visual range 6; reorientation frequency 5; glow 0.2; reward 1.
  Environment parameters: number of agents 60; world size 500; interaction rounds per trial 50; number of trials.
Figure 5: Learning curves for the different tasks and for non-interacting (n.i.) agents. The curves show the percentage of agents that reach the food source and obtain a reward at each trial. For each task, the average is taken over 20 (independent) ensembles of 60 agents each and the shaded area indicates the standard deviation. Zooming into the initial phase of the learning process, the inset figure shows faster learning when the food is placed closer than when it is placed at the largest distance. In the latter case, no agent is able to reach the food source in the first trial, and it takes the interacting agents approx. 200 trials to outperform the n.i. agents.

The parameters of the model that are used in all the learning processes are given in Table 1. We consider values of the food distance ranging from 2 to 21 and focus on the two extreme cases as the representative examples of resource-dense and resource-scarce environments, respectively. All agents start the learning process with a newly initialized $h$-matrix, so they perform each action ("go" or "turn") with equal probability. Figure 5 shows the learning curves for three different scenarios with increasing distance to the food. The learning processes are independent from each other, that is, the distance to the food does not change within one complete simulation. In this way, we can analyze the learned behaviors separately for each distance. The learning curve displays the percentage of agents that reach the food source and obtain a reward at each trial. As a baseline for comparison, we also set the same learning task for non-interacting (n.i.) agents (their visual range is set to zero, so they cannot see the neighbors). The n.i. agents learn to go straight almost deterministically: the probability for the action "go" at the end of the learning process is almost 1 for the percept with no agents in sight; the rest of the percepts are never encountered, so their initial values remain the same. Due to the periodic randomization of its orientation, such an agent performs a random walk with steps of length 5, which allows it to cover a distance of about 16 positions. It can be seen that the n.i. agents do not reach the efficiency rate of the interacting agents (see Figure 5) and only one out of three agents reaches the reward at each trial. Figure 5 shows that, for the smallest distance, the food source is so close (inside the initial region) that the agents get the reward in all the trials from the beginning. On the other hand, the tasks with more distant food show a learning process that takes more trials until the agents come up with a behavior that allows them to get to the reward. In particular, only a small fraction of the agents are able to reach the goal with the initial behavior (Brownian motion) in the intermediate scenario, and this fraction drops to almost zero for the most distant food source. Note that it takes more trials for the agents to learn how to get to the farthest point than to the intermediate one (see inset in Fig. 5). The interacting agents start outperforming the n.i. agents in the task with the most distant food at around trial 200, where they start to form aligned swarms, as one can also see from the increase in the alignment parameter at the same trial in Fig. 10 (see Sec. III.2.1 for more details).

III.1 Individual responses

The behavior the agents have learned at the end of the training can be studied by analyzing the final state of the agents' ECM, from which one obtains the final probabilities for each action depending on the percept the agents get from the environment (see Eq. (1)). These final probabilities are given in Fig. 6 for the two representative learning tasks.

(a)
(b)
(c) Ideal probabilities for alignment
(d) Ideal probabilities for cohesion
Figure 6: Learned behavior at the end of the training process. The final probabilities in the agents' ECM for the action "go" are shown for each of the 25 percepts (5 × 5 table). Tables (a) and (b) show the final probabilities learned in the scenarios with distant and nearby food, respectively. The average is taken over 20 ensembles (for each learning task) of 60 agents each. Background colors are given to easily identify the learned behavior, where blue denotes that the preferred action for that percept is "go" and orange denotes that it is "turn"; the darker the color, the higher the probability for that action. Figures (c) and (d) show what the tables would look like if the behavior were purely based on alignment (the agent aligns with its neighbors with probability 1) or cohesion (the agent goes towards the region with the higher density of neighbors with probability 1), respectively. See text for details.

The tables of figure 6 show the probability of taking the action "go" for each of the 25 percepts. We focus on the two learning tasks that represent the most distinctive behaviors we observe: distant food and nearby food.

Let us start with the distant-food case (Fig. 6 (a)), in which the food is located much further away than the distance reachable with a random walk. In this case, highly aligned swarms emerge as the optimal collective strategy for reaching the food (see also Figs. 9 and 10), since the orientations of the surrounding neighbors allow the focal agent to stabilize its orientation against the periodic randomization. The individual responses that lead to such collective behavior can be studied by looking at table (a): the diagonal corresponds to percepts with a clear reaction leading to alignment, i.e. to keep going when there is a positive flow of neighbors and to turn if there is a negative flow. More specifically, one can see that when the agent is in the middle of a swarm and aligned with it, the probability that it keeps going is 0.99 for dense swarms and 0.90 for sparse swarms. In the same situations, an agent that is not aligned turns around with probability 0.97 for dense swarms and 0.57 for sparse swarms. Outside the diagonal, one observes that the probability of turning is high when a high density of agents is approaching the focal individual from the front (last row) and the agents at the back are not approaching. We can also analyze the learned behavior at the back edge of the swarm, which is important to keep the cohesion of the swarm. When an agent is at the back of a dense swarm and aligned with it, the probability of keeping its orientation is 0.81. If, instead, the agent is oriented against the swarm, the probability of turning around to follow the swarm is 0.65. This behavior is less pronounced when the swarm is not so dense; in fact, when a low density of neighbors at the back is receding from the focal agent, the focal agent turns around to rejoin the swarm only with probability 0.4, which results in this agent leaving the swarm with higher probability. If the agent is alone (no neighbors in sight), it keeps going with probability 0.77.

Figure 7: Final probability of taking the action "go" depending on the learning task (increasing distance to the food source) for four significant percepts (see legend). The average is taken over the agents' ECMs of 20 independently trained ensembles (1200 agents) at the end of the learning process. Each ensemble performs one task per simulation (the distance to the food does not change during the learning process).

A very different table is observed for the nearby-food task (Fig. 6 (b)). In this task, the food source is located inside the initial region where the agents are placed at the beginning of the trials, so the agents perceive, in general, a high density of neighbors around them. For this reason, they rarely encounter the nine percepts encoding low density (the ones at the center of the table, with grey background in Table (b) of Fig. 6) throughout the interaction rounds they perform until they get the reward. The corresponding probabilities are the initialized ones, i.e. 1/2 for each action. For the remaining percepts, we observe that the agents have learned to go to the region with the higher density of neighbors, which leads to very cohesive swarms (see also Sec. III.2.2). Since the food source is placed inside the initialization region in this case (which is also within the region agents can cover with a random walk), there is a high probability that there are several agents already at the food source when an agent arrives there, so they learn to go to the regions with higher density of agents. This behavior can be observed, for instance, for percepts in the first column (high density at the back) and second, third and fourth row (low/no density at the front), where the agents turn around with high probability. In addition, we observe that there is a general bias towards continuing in the same direction, which can be seen for example in percepts with the same density in both regions (e.g. percepts at the corners of the table). The tendency to keep walking is always beneficial in one-dimensional environments to get to the food source (non-interacting agents learn to do so deterministically, as argued for Fig. 5). In general, we observe that, in order to find a nearby resource point, agents do not need to align with their neighbors because the food is close enough that they can reach it by performing a Brownian walk.

Figures 6 (c) and (d) show what the tables would look like if the agents had deterministically (with probability 1) learned just to align with the neighbors (c) or just to go to the region inside the visual range with higher density of neighbors (d). In these figures, percepts for which there is no pronounced optimal behavior have grey background.

In Fig. 7, we select four representative percepts that show the main differences in the individual behaviors and plot the average probability of taking the action "go" at the end of a wide range of different learning scenarios in which the distance to the food source is increasingly large. We observe that there are two clear regimes, with a transition that starts at the distance corresponding to the end of the initial region (see Fig. 4) where the agents are positioned at the beginning of each trial (see appendix A.1 for details on why the transition occurs there). The main difference between the regimes is that, when the food is placed near the initial positions of the agents, they learn to "join the crowd", whereas, if the food is placed farther away, they learn to align themselves and "go with the flow". More specifically, when the food lies within the initial region, the orientations of the surrounding neighbors do not play a role, but the agents learn to go to the region (front/back) with the higher number of neighbors, which leads to unaligned swarms with high cohesion. On the contrary, for the tasks with distant food, the agents tend to align with their neighbors. This difference in behavior can be observed, for instance, in the dark blue (squares) curve of figure 7, which corresponds to the percept "positive flow and higher density at the back". We observe that, for nearby food, the preferred action is "turn" (the probability of taking action "go" is low), since there are more neighbors at the back. However, for distant food, the agents tend to continue in the same direction, since there is a positive flow (neighbors have the same orientation as the focal agent). Analogously, the brown curve (triangles) shows the case where there is a negative flow and a higher density at the front, so agents trained to find nearby food have a high probability of going, whereas agents trained to find distant food have a high probability of turning.

In general, we observe that agents with the same motor and sensory abilities can develop very different behaviors in response to different reward schemes. Agents start with the same initial ECM in all the learning scenarios, but depending on the environmental circumstances, in our case the distance to food, some responses to sensory input happen to be more beneficial than others in the sense that they eventually lead the agent to get a reward. For instance, agents that happen to align with their neighbors are the ones that reach the reward when the food is far away, so this response is enhanced in that particular scenario, but not in the one with nearby food.

III.2 Collective dynamics

In this section, we study the properties of the collective motion that emerges from the learned individual responses analyzed in the previous section. We focus on two main properties of the swarms, namely alignment and cohesion. Figures 8 and 9 show the trajectories of the agents of one ensemble before any learning process (Fig. 8) and at the end of the learning processes with distant and nearby food (Fig. 9). One can see that the collective motion developed in the two scenarios differs greatly in terms of alignment and cohesion. Thus, we quantify and analyze these differences in the following.

Figure 8: Trajectories (position vs. time) of an ensemble of 60 agents in one trial prior to any learning process. The vertical axis displays the position of the agent in the world and the horizontal axis the interaction round (note that the trial consists of 50 rounds). Each line corresponds to the trajectory of one agent. However, some agents' trajectories overlap, which is indicated by the color intensity. The trajectory of one particular agent is highlighted for clarity.
(a) Trajectories after training with distant food
(b) Trajectories after training with nearby food
Figure 9: Trajectories of all agents of an ensemble in the last trial of the learning process for (a) distant and (b) nearby food. Ensembles of agents trained to find distant food form aligned swarms (a), whereas agents trained to find nearby food form cohesive, unaligned swarms (b). With the same number of interaction rounds, aligned swarms (a) cover larger distances than cohesive swarms (b). In addition, observe that the trajectories in panel (b) spread less than in Fig. 8.

III.2.1 Alignment

The emergence of aligned swarms as a strategy for reaching distant resources is studied by analyzing the order parameter, defined as

$$\phi = \frac{1}{N}\left|\sum_{i=1}^{N} u_i\right|, \qquad (4)$$

where $N$ is the total number of agents and $u_i \in \{+1, -1\}$ is the orientation of each agent (clockwise or counterclockwise). The order parameter, or global alignment parameter, goes from 0 to 1, where 0 means that the orientations of the agents average out and 1 means that all of them are aligned. In addition, we also evaluate the local alignment parameter, since the visual perception of the agent only depends on its local surroundings, and so does the action it takes. In this case, the order parameter is computed for each agent, considering only the orientations of its neighbors.
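A sketch of how both quantities could be computed is given below; whether the focal agent's own orientation enters its local alignment is not specified above, so we exclude it here, and the parameter values are those of Table 1.

```python
import numpy as np

def global_alignment(directions):
    """Eq. (4): 0 if the orientations (+1/-1) cancel out, 1 if all agents are aligned."""
    return abs(np.mean(directions))

def local_alignment(i, positions, directions, visual_range=6, world_size=500):
    """Same quantity restricted to the neighbors of agent i (focal agent excluded)."""
    positions = np.asarray(positions)
    d = np.abs(positions - positions[i])
    d = np.minimum(d, world_size - d)        # distance on the circular world
    mask = d <= visual_range
    mask[i] = False                          # exclude the focal agent itself
    if not mask.any():
        return np.nan                        # no neighbors: local alignment undefined
    return abs(np.mean(np.asarray(directions)[mask]))
```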

Figure 10: Evolution of the global alignment parameter through the learning processes with different distances to the food. At each trial, there is one data point that displays the average of the order parameter, first over all the (global) interaction rounds of the trial and then over 20 different ensembles of agents, where each ensemble learns the task independently. Shaded areas represent one standard deviation.

Figure 10 shows that agents that need to find nearby food do not align, whereas those whose task is to find distant resources learn to form strongly aligned swarms as a strategy for getting the reward, as can be seen from the increase in the order parameter over the course of the training. The inset in Fig. 10 shows that agents rewarded at the largest distance start to align with their neighbors from trial 200 onwards, which leads to the conclusion that increasing the alignment is the behavior that allows them to get to the reward (note that the agents start to be rewarded also from trial 200, as can be seen in the inset of Fig. 5). The large standard deviation in this case is due to the fact that, in some trials, agents split into two strongly aligned groups that move in opposite directions (see Fig. 24 (a) in appendix A.2 for details).

III.2.2 Cohesion

In this section, we study the cohesion and stability of the different types of swarms. In particular, we quantify the cohesion by means of the average number of neighbors (agents within the visual range of the focal agent),

$$N_{\text{avg}} = \frac{1}{N}\sum_{i=1}^{N} n_i, \qquad (5)$$

where $n_i$ is the number of neighbors of the $i$-th agent.
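A sketch of this cohesion measure on the circular world (visual range and world size as in Table 1):

```python
import numpy as np

def average_neighbors(positions, visual_range=6, world_size=500):
    """Eq. (5): mean number of other agents within each agent's visual range."""
    positions = np.asarray(positions)
    counts = []
    for p in positions:
        d = np.abs(positions - p)
        d = np.minimum(d, world_size - d)                    # circular distance
        counts.append(int(np.sum(d <= visual_range)) - 1)    # exclude the agent itself
    return float(np.mean(counts))
```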

Figure 11: Evolution of the average number of neighbors around each agent through the learning processes with different distances to the food. At each trial, there is one data point that displays the average of the number of neighbors, first over all the (global) interaction rounds of the trial and then over 20 different ensembles of agents, where each ensemble learns the task independently. Shaded areas represent one standard deviation.

Figure 11 shows the evolution of the average number of neighbors through the learning processes. In the training with distant food, we observe a decay in the average number of neighbors during the first 200 trials, due to the fact that agents start to learn to align locally (see appendix A.3 and Fig. 25 therein for details), but the global alignment is not yet high enough to entail an increase in the average number of neighbors. Therefore, as agents begin to move in straight lines for longer intervals (instead of the initial Brownian motion), they tend to leave the regions with a higher density of agents and the average number of neighbors drops. From trial 200 onwards, agents start to form aligned swarms to get to the food (the global alignment parameter increases, see inset of Fig. 10), which leads to an increase in the average number of neighbors (see inset of Fig. 11). In the training with nearby food, agents learn quickly (within the first 50 trials) to form cohesive swarms, so the average number of neighbors increases until a stable value of 36 neighbors is attained.

Up to this point, all the analyses have been done with trials of 50 interaction rounds. However, this is insufficient for assessing the stability of the swarm. For this purpose, we take the already trained ensembles and let them walk for longer trials (the agents do not learn anything new in these simulations, i.e. their ECMs remain unchanged), so that we can analyze how the cohesion of the different swarms evolves with time. We place the agents (one ensemble of 60 agents per simulation) in a world that is big enough that they cannot complete one cycle within one trial. This resembles infinite environments insofar as agents that leave the swarm have no possibility of rejoining it. This allows us to study the stability of the swarm cohesion and the conditions under which the swarm disperses.

(a) Ensemble trained with distant food
(b) Ensemble trained with nearby food
Figure 12: Trajectories of an ensemble of 60 agents, in a larger world, shown over 5000 interaction rounds. (a) Agents trained with distant food form a swarm that continuously loses members until it dissolves completely. (b) Agents trained with nearby food form a highly cohesive swarm for the entire trial. The centered inset of this plot shows the first 2500 rounds, with a re-scaled vertical axis to observe the movement of the swarm. Insets on the right zoom in to 20 interaction rounds so as to resolve individual trajectories.

Figure 12 shows the trajectories of ensembles of agents trained with different distances to the food. In the case of agents trained with distant food (Fig. 12 (a)), there is a continuous drop of agents from the swarm until the swarm completely dissolves. On the other hand, agents trained with nearby food (Fig. 12 (b)) present higher cohesion and no alignment (see inset of Fig. 12 (b)). Note that this strong cohesion makes individual trajectories spread less than the Brownian motion exhibited by agents prior to the training (see Fig. 8). The evolution of the average number of neighbors throughout the simulation is given in figure 13, where we compare the cohesion of ensembles trained with different distances. For the training with the most distant food, the agents leave the swarm continuously, so the average number of neighbors decreases slowly until the swarm is completely dissolved. For the other trainings, the individual responses are such that the average number of neighbors increases or decreases during the first 30 rounds until the swarm stabilizes and from then on stays at a stable value. The average number of neighbors is correlated with the swarm size, which we measure by the difference between the maximum and minimum world positions occupied by the agents (modulo the world size). As one can see in Fig. 12 (b), all agents remain within the swarm. If the swarm size increases, the average number of neighbors decreases, since the agents are distributed over a wider range of positions. The swarm stabilizes at a given size depending on the individual responses learned during the different trainings. For instance, some of the trained swarms stabilize at swarm sizes of approximately 9 positions, whereas others stabilize at larger swarm sizes (around 17 positions, see e.g. the inset of Fig. 12 (b)), which explains the lower value of the average number of neighbors observed for the latter in Fig. 13.

Figure 13: Evolution of the average number of neighbors throughout the trial of 5000 interaction rounds. The average is taken over 20 ensembles of 60 agents each, where for each ensemble the simulation is performed independently; shaded areas indicate one standard deviation.

III.2.3 Comparison between learning scenarios

Finally, we compare how the alignment and cohesion of the swarms change as a function of the distance at which the resource is placed during the training. Figure 14 shows the average local and global alignment parameters, together with the average number of neighbors (at the end of the training), as a function of the distance with which the ensembles were trained. We observe that the farther away the resource is placed, the more strongly the agents align with their neighbors (local alignment) in order to reach it. This is directly related to the individual responses analyzed in Fig. 7, where one can see that, for large distances, the agents react to positive and negative flow by aligning themselves with their neighbors. Specifically, the observed collective dynamics can be explained in terms of individual responses as follows. The probability of turning around when there is a negative flow and there are not a lot of neighbors (orange-diamonds curve in Fig. 7) becomes higher as the training distance increases. The change in the other individual alignment responses (in particular, the other curves in Fig. 7) is not so large in the large-distance region, which suggests that the increase in the local alignment and cohesion we observe there is mostly due to the strength of the tendency the agents have to turn around when there is a negative flow, even when there are not a lot of neighbors. In addition, the lower values of the global alignment parameter observed in the grey (circles) curve in Fig. 14 for the largest distances correspond to the behavior analyzed in Sec. III.2.1, where it is shown that strongly aligned swarms split into two groups in some of the trials (see also Fig. 24). With respect to the average number of neighbors, we observe that almost all the agents are within each other's visual range when the food is placed close to the initial region. As the distance increases, swarms become initially less cohesive, but for the largest distances they become strongly aligned and consequently once again more cohesive (see the discussion in Sec. III.2.2 and also figures 11 and 25 for details).

Figure 14: Average number of neighbors (in percentage) and global and local alignment parameters as a function of the distance to the point where food is placed during the training. Each point is the average of the corresponding parameter over all interaction rounds of one trial and over many trials; 20 already trained ensembles are considered.

III.3 Foraging efficiency

In this section, we study how efficient each type of collective motion is for the purpose of foraging. First, we perform a test where we evaluate how the trained ensembles explore the different world positions. For this test, we analyze which positions in the world are visited by which fraction of agents. The results are given in Fig. 15. We observe that, for positions within the initial region, agents trained with nearby food perform better than the others, since they do a random walk that allows them to explore all these positions exhaustively (as evidenced by the high percentages of agents that explore positions before the edge of the initial region in Fig. 15). On the other hand, agents trained with distant food perform worse when exploring nearby regions, since they form aligned swarms and move straight in one direction. This behavior prevents the agents that are initialized close to the edge of the initial region from exploring the positions inside it. The closer a position is to the edge of the initial region, the more agents visit it, because they pass through it when traveling within the swarm. Thus, we conclude that the motion of these swarms is not optimal for exploiting a small region of resources that are located close to each other (a patch).

Non-interacting (n.i.) agents perform slightly better at intermediate distances than agents trained with nearby food, since they typically travel five steps in a straight line before being randomly reoriented, thereby covering an expected total of 16 positions in one trial (see Sec. III). Both curves (grey diamonds and orange squares) show a faster decay in this region than the other two cases, which is due to the fact that agents do not walk straight for long distances in these two types of dynamics, since they do not stabilize themselves by aligning.

Agents trained with the most distant food reach the best performance at longer distances. In particular, their performance is always better than that of agents trained with an intermediate distance, showing that the strategy developed by the former, namely strong alignment, is the most efficient one for traveling long distances (from patch to patch). Agents trained with the intermediate distance do not align as strongly (see the local alignment curve in Fig. 14) and more agents leave the swarm before reaching the furthest positions (see also Fig. 24), which explains the lower performance at intermediate/long distances (the light blue curve (triangles) decreases more strongly in this region than the dark blue curve (circles)). Note that the maximum distance reached by agents is 56; this is simply due to the fact that each trial lasts 50 rounds and the initial positions all lie within the initial region (see Fig. 4).

Figure 15: Percentage of agents that visit the positions situated at the distance from C given on the horizontal axis (see Fig. 4). Since C is located at world position 6, a distance of e.g. 10 on the horizontal axis refers to world positions 16 and 496. The already trained ensembles walk for one trial of 50 interaction rounds. For each of the four trainings (see legend), the performance of 20 ensembles is considered.

In addition, we study the swarm velocity for the different types of collective motion. To do so, we compute the average net distance traveled per round. Considering that the swarm walks for a fixed number of rounds $T = 50$ (the maximum distance agents can travel is 50 because they move at a fixed speed of 1 position per round), we define the normalized swarm velocity as

$$v = \frac{1}{N T} \sum_{i=1}^{N} d_i, \qquad (6)$$

where $N$ is the number of agents and $d_i$ is the net distance traveled by the $i$-th agent from the initial position $x_i(0)$ to the final position $x_i(T)$ after $T$ interaction rounds, that is,

$$d_i = \min\big(\,|x_i(T) - x_i(0)|,\; L - |x_i(T) - x_i(0)|\,\big), \qquad (7)$$

where the argument of $x_i$ stands for the interaction round and $L$ is the world size.
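A sketch of this quantity, under the same reading of Eq. (7) as the shortest-arc distance on the circular world:

```python
import numpy as np

def swarm_velocity(initial_positions, final_positions, T=50, world_size=500):
    """Eqs. (6)-(7): average net distance per round; 1 means every agent went straight."""
    x0 = np.asarray(initial_positions)
    xT = np.asarray(final_positions)
    d = np.abs(xT - x0)
    d = np.minimum(d, world_size - d)    # net distance traveled on the circular world
    return float(np.mean(d)) / T
```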

Figure 16 displays the swarm velocity as a function of the distance at which food was placed during the learning process. Agents trained to find distant resources are able to cover a distance almost as large as the number of rounds for which they move. However, while the ensembles trained to find nearby resources form very cohesive swarms, they are less efficient in terms of net distance traveled per interaction round. We observe that the transition between the two regimes happens at the distance corresponding to the end of the initialization region, which is consistent with the transitions observed in figures 7 and 14 (see the discussion in appendix A.1 for more details).

Figure 16: Swarm velocity as a function of the training distance. Each point is the average over the agents of 20 independently trained ensembles that have performed 50 independent trials each.

IV Analysis of the trajectories

In this section, we analyze the individual trajectories that result from the different types of swarm dynamics. In order to gather enough statistics, we consider ensembles of agents that have been trained under various conditions, as described above, and let them walk for longer trials so that the individual trajectories are long enough to obtain reliable results. During this process, the agents do not learn anything new anymore; that is, the agents’ ECMs remain as they are at the end of the training. Thus, we study the trajectories that emerge from the behavior at the end of the learning process, which can be interpreted as the behavior developed on the level of a population in order to adapt to given evolutionary pressures. The individuals’ capacity for learning does not play a role in this analysis.

We focus on the two most representative types of swarms we have observed, i.e. the swarms that emerge from the training with close resources, characterized by strong cohesion, and the swarms that result from the training with distant resources, characterized by strong alignment. For easier readability, in the following we will refer to the former as cohesive swarms and to the latter as aligned swarms.

In the simulations for this analysis, we let each ensemble of agents perform interaction rounds (each agent moves one position per interaction round) in a world of size and analyze the individual trajectories. An example of such individual trajectories for the case of agents trained with is given in figure 17. We observe that some agents leave the swarm at certain points; however, due to the ’closed’ nature of our world model, they have the possibility of rejoining the swarm once it completes a cycle and starts a new turn around the world. Due to these environmental circumstances, the agents exhibit two movement modes: one when they are alone and one when they are inside the swarm. Figure 17 shows that agents exhibit directional persistence when they move within the swarm, since they have learned to align with their neighbors as a strategy for stabilizing their orientations. In contrast, trajectories become more tortuous once agents leave the swarm and walk on their own. Note that it is only possible for individuals to leave the swarm (the specific probabilities of doing so are given in Fig. 6 (a) and analyzed in Sec. III.1) because of the weaker cohesion exhibited by aligned swarms (see Sec. III.2.2). This bimodal behavior can also be found in nature (see e.g. collective motion and phase polyphenism in locusts (Ariel and Ayali, 2015; Pener and Simpson, 2009)), where individuals may benefit from collective alignment, for instance, to travel long distances efficiently, but move independently to better explore nearby resources (see Sec. III.3 for details on the exploration efficiency of the different collective dynamics).

Figure 17: Trajectories of one ensemble of 60 agents that were trained with . The world size is . Color intensity indicates the number of agents following the same trajectory, i.e. moving within the swarm. Some agents leave the swarm and then rejoin it when the swarm completes the cycle and starts a new turn. Only the first interaction rounds (of a total of ) are shown.

In the following sections, we characterize the trajectories and assess how well the agents’ movement patterns fit to well-known foraging models such as Lévy walks or composite correlated random walks.

IV.1 Theoretical foraging models

This work is directly related to foraging theory, since the task we set for the learning process is to find food in different environmental conditions. For this reason, we will analyze our data to determine whether the movement patterns that emerge from this learning process support any of the most prominent search models. For environments with scarce resources (e.g. patchy landscapes), these models are the Lévy walks (Viswanathan et al., 1999) and the composite correlated random walks (CCRW) (Benhamou, 1992).

In order to analyze the trajectories and determine which type of walk fits them best, the distribution of step lengths is studied, where a step length is defined as the distance between two consecutive locations of an organism. Intuitively, the optimal strategy for navigating a patchy landscape allows for both an exhaustive exploration inside patches and an efficient displacement between patches, employing some combination of short and long steps. Lévy walks have a distribution of step lengths in which short steps have a higher probability of occurrence but arbitrarily long steps can also occur due to its power-law (PL) tail. In two- and three-dimensional scenarios, the direction of motion is drawn from a uniform distribution from 0 to , which implies that Lévy walks do not consider directionality in the sense of correlation in direction between consecutive steps (Pyke, 2015). On the other hand, CCRW and their simpler version, composite random walks (CRW), consist of two modes, one intensive and one extensive, which are mathematically described by two different exponential distributions of the step lengths. The intensive mode is characterized by short steps (with large turning angles in 2D) to exploit the patch, whereas the extensive mode, whose distribution has a lower decay rate, is responsible for the straight, fast inter-patch displacement. CCRW in addition allow for correlations between the directions of successive steps.

Even though the models are conceptually different, the resulting trajectories may be difficult to distinguish (Benhamou, 2007; Plank and James, 2008; Plank and Codling, 2009), all the more so if the data are incomplete or come from experiments where animals are difficult to track. In recent years, many works have proposed techniques to uniquely identify Lévy walks (Reynolds, 2012; Humphries et al., 2013; Gautestad, 2012) and to differentiate between the two main models (Benhamou, 2007; Jansen et al., 2012; Auger-Méthé et al., 2015). For instance, some of the experiments that initially supported the hypothesis that animals perform Lévy walks (Viswanathan et al., 1996; Sims et al., 2008; de Jager et al., 2011) were later reanalyzed, leading to the conclusion that more sophisticated statistical techniques are, in general, needed (Edwards et al., 2007; Edwards, 2011; Edwards et al., 2012; Jansen et al., 2012). Apart from that, several studies relate different models of collective dynamics to the formation of Lévy walk patterns under certain conditions (Santos et al., 2009; Reynolds and Ouellette, 2016). For instance, it has been shown (Reynolds, 2013a) that Lévy walk movement patterns can arise as a consequence of the interaction between effective leaders and a small group of followers, where none of them has information about the resource.

In our study, we consider the three models we have already mentioned (PL, CCRW and CRW), together with Brownian motion (BW) as a baseline for comparison. Since our model is one-dimensional, a distribution of the step lengths is sufficient to model the trajectories we observe, and no additional distributions, such as turning angle distributions, are needed. In addition, the steps are unambiguously identified: a step has length if the agent has moved in the same direction for consecutive interaction rounds. Finally, since space in our model is discretized, we consider the discrete version of each model’s probability density function (PDF). More specifically, the PDFs we consider are the following (a short code sketch of these distributions is given after the list):

  1. Brownian motion (BW):

    (8)

    where is the decay rate and the minimum value a step length can have is, in our case, known to be , since agents move at a constant speed of one position per interaction round.

  2. Composite random walk (CRW):

    (9)

    where is the probability of taking the intensive mode, is its decay rate and is the decay rate of the extensive mode. In this case, again, the minimum step length is .

  3. Composite correlated random walk (CCRW):

    (10)
    (11)
    (12)
    (13)

    where and are the PDFs of the step lengths corresponding to the intensive and extensive mode, respectively. Denoting the mode the agent is currently in as and the mode to which the agent transitions as , is the transition probability from the intensive to the extensive mode and , from the extensive to the intensive mode. and are parameters of the model. The main difference between the CRW and the CCRW models is that, in the latter, the step lengths are correlated, i.e. the order of the sequence of step lengths, and thus the order in which the movement modes alternate, matters. The CCRW is modeled as a hidden Markov model (HMM) (see (Auger-Méthé et al., 2015; Zucchini and MacDonald, 2009)) with two modes, the intensive and the extensive. Figure 18 shows the details of the model and the notation for the transition probabilities between modes.

  4. Power-law (PL):

    (14)

    where the normalization factor is the Hurwitz zeta function (Clauset et al., 2009). The parameter gives rise to different regimes of motion: Lévy walks are characterized by a heavy-tailed distribution, with exponents , which produces superdiffusive trajectories, whereas corresponds to normal diffusion, as exhibited by Brownian walks. We note that the above distribution starts at , which is the shortest possible distance that our agents move in a straight line. The scale of this minimum step length is determined by the embodied structure of the organism and is typically considered to be one body length (Pyke, 2015). Some other works (e.g. (Clauset et al., 2009; Humphries et al., 2013)) consider a variant of the above distribution that only follows the PL form for steps longer than some threshold , for example when analysing experimental data that become increasingly noisy at short step-lengths. However, since the step lengths resulting from our simulations are natively discrete, the unbounded PL distribution given in eq. (14) seems appropriate. Moreover, if one were to introduce a lower bound , one would need to add more parameters in the model to account for the probabilities for all , which we consider an unnecessary complication. This is particularly relevant when it comes to comparing PL to BW, CRW or CCRW as models for fitting our data: since none of the other models include lower bounds, we achieve a more consistent comparison by a parsimonious approach that includes all step lengths in the PL model and thereby abstains from additional free parameters.
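
The short sketch below illustrates the discrete step-length distributions assumed for the BW, CRW and PL models. The exact functional forms (discrete exponentials supported on l = 1, 2, ... and a zeta-normalized power law) are our reading of eqs. (8), (9) and (14) and should be taken as an assumption.

import numpy as np
from scipy.special import zeta

def pmf_bw(l, lam):
    # Discrete exponential with decay rate lam, supported on l = 1, 2, ...
    return (1.0 - np.exp(-lam)) * np.exp(-lam * (l - 1))

def pmf_crw(l, p, lam_intensive, lam_extensive):
    # Mixture of an intensive (fast-decaying) and an extensive (slow-decaying) mode.
    return p * pmf_bw(l, lam_intensive) + (1 - p) * pmf_bw(l, lam_extensive)

def pmf_pl(l, mu):
    # Discrete power law starting at l = 1, normalized by the Riemann zeta function.
    return l ** (-mu) / zeta(mu)

l = np.arange(1, 2001)
for name, probs in [("BW", pmf_bw(l, 0.3)),
                    ("CRW", pmf_crw(l, 0.4, 0.3, 0.01)),
                    ("PL", pmf_pl(l, 1.6))]:
    print(name, probs[:3], probs.sum())   # first probabilities and (near-)unit total mass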

Figure 18: Hidden Markov model for the CCRW. There are two modes, the intensive and the extensive, with probability distributions given by and (see text for details). The probability of transition from the intensive (extensive) to the extensive (intensive) mode is given by (), where and are the probabilities of remaining in the intensive and extensive mode respectively. is the probability of starting in the intensive mode.
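
To make the role of the hidden modes in Figure 18 concrete, the sketch below samples a sequence of step lengths from a two-mode HMM of this kind. The parameter names (delta for the probability of starting in the intensive mode, g_ie and g_ei for the mode-switching probabilities, lam_i and lam_e for the decay rates) are illustrative labels rather than the notation of the figure.

import numpy as np

def sample_ccrw_steps(n_steps, delta, g_ie, g_ei, lam_i, lam_e, seed=None):
    rng = np.random.default_rng(seed)
    mode = 0 if rng.random() < delta else 1      # 0 = intensive, 1 = extensive
    steps = np.empty(n_steps, dtype=int)
    for t in range(n_steps):
        lam = lam_i if mode == 0 else lam_e
        # Discrete-exponential step (geometric with success probability 1 - exp(-lam)).
        steps[t] = rng.geometric(1.0 - np.exp(-lam))
        # Switch mode with the corresponding transition probability.
        if rng.random() < (g_ie if mode == 0 else g_ei):
            mode = 1 - mode
    return steps

print(sample_ccrw_steps(15, delta=0.8, g_ie=0.05, g_ei=0.3, lam_i=0.3, lam_e=0.01, seed=1))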

IV.2 Visual analysis

In this section, we study the general characteristics of the trajectories of both types of swarm dynamics. We start by analyzing how diffusive the individual trajectories are depending on whether the agents belong to an ensemble trained with (dynamics of aligned swarms) or (dynamics of cohesive swarms). More specifically, we analyze the mean squared displacement (MSD), defined as,

(15)

where is the reference (initial) position and is the position after a time has elapsed. In general, the MSD grows with the elapsed time as . Depending on the exponent , the diffusion is classified as normal diffusion (), subdiffusion () or superdiffusion (), the latter being called ballistic diffusion when . For instance, a Brownian particle undergoes normal diffusion, since its MSD grows linearly with time.
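
As a rough illustration, the MSD and the diffusion exponent can be estimated from a set of (unwrapped) one-dimensional trajectories as in the following sketch; the array layout and the log-log least-squares fit are our choices, not a description of the exact procedure used for Fig. 19.

import numpy as np

def msd(positions, lags):
    # positions: array of shape (n_agents, n_rounds) with unwrapped 1D positions.
    # MSD relative to the initial (reference) position, averaged over agents.
    x0 = positions[:, 0]
    return np.array([np.mean((positions[:, lag] - x0) ** 2) for lag in lags])

def diffusion_exponent(positions, lags):
    # Slope of log(MSD) versus log(lag), estimated by a least-squares fit.
    return np.polyfit(np.log(lags), np.log(msd(positions, lags)), 1)[0]

# Sanity check with an unbiased random walk: the exponent should be close to 1.
rng = np.random.default_rng(2)
walk = np.cumsum(rng.choice([-1, 1], size=(600, 4000)), axis=1)
lags = np.unique(np.logspace(0, 3, 25).astype(int))
print(diffusion_exponent(walk, lags))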

Figure 19: Log-log (base 2) plot of the MSD as a function of the time interval for two types of trajectories: trajectories performed by agents trained with (blue curve, circles) and by agents trained with (orange curve, triangles). We observe that the former present ballistic diffusion, whereas the latter exhibit close-to-normal diffusion. 600 individual trajectories (10 ensembles of 60 agents) are considered for each case.

Figure 19 shows that the dynamics of aligned swarms leads to superdiffusive individual trajectories (ballistic, with ), whereas the trajectories of agents that belong to cohesive swarms exhibit close-to-normal diffusion. The anomalous diffusion (superdiffusion) exhibited by the agents trained with (curve with blue circles in Fig. 19) favors the hypothesis that the swarm behavior may induce Lévy-like movement patterns, since Lévy walks are one of the most prominent models describing superdiffusive processes. However, CCRW can also produce superdiffusive trajectories (Benhamou, 1992, 2007). In contrast, agents trained with do not align with each other and the normal diffusion shown in Fig. 19 is indicative of Brownian motion.

The analysis presented above already shows a major difference between the two types of swarm dynamics, but it is in general not sufficient to determine which theoretical model (Lévy walks or CCRW) best fits the data from aligned swarms. According to (Benhamou, 2007), one possible way to distinguish between composite random walks and Lévy walks is to look at their survival distributions, i.e. the complement of the cumulative distribution function, which gives the fraction of steps longer than a given threshold. Lévy walks exhibit a linear relationship when this distribution is plotted on log-log axes, whereas CCRW exhibit a non-linear relation. Figure 20 compares the survival distributions of two trajectories, one from each type of swarm, to those predicted by the best-fitting models of each of the four classes. The maximum step length observed in the -trajectory is of the order of , whereas in the case of the -trajectory, it is one order of magnitude larger. The most prominent features one infers from these figures are that all models except PL seem to fit the data of the -trajectory, and that Brownian motion is clearly not a good model to describe the -trajectory. In addition, the survival distribution in Fig. 20 (a) is curved and seems to be better fit by the CCRW. However, when other trajectories of agents trained with are plotted in the same way, the data seem to follow the straight line of the PL more closely than the CCRW (see for example Fig. 29).
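
For reference, the empirical survival distribution shown in these plots can be computed directly from the observed step lengths; the sketch below is a minimal version of that computation with a hypothetical sample.

import numpy as np

def survival_distribution(step_lengths):
    # Fraction of steps strictly longer than each threshold l = 1, ..., l_max.
    thresholds = np.arange(1, step_lengths.max() + 1)
    frac = np.array([(step_lengths > l).mean() for l in thresholds])
    return thresholds, frac

steps = np.array([1, 1, 2, 1, 5, 3, 1, 12, 2, 1])   # hypothetical step lengths
thresholds, frac = survival_distribution(steps)
print(list(zip(thresholds.tolist(), frac.tolist())))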

While visual inspection may be an intuitive way of assessing model fit, and one that is easy to apply at small scales, it would be preferable to use a method that yields quantitative and objective, repeatable assessments of how well various models fit a given data set. Moreover, we generated 600 individual trajectories per type of swarm, in order to support statistically meaningful conclusions, and at this scale visual inspection quickly becomes infeasible. For this reason, we now turn to a more rigorous statistical analysis of the individual trajectories.

(a) Trajectory of an agent that belongs to an aligned swarm.
(b) Trajectory of an agent that belongs to a cohesive swarm.
Figure 20: Survival probability (percentage of step lengths larger than the corresponding value on the horizontal axis) as a function of the step length. Each panel depicts the data from the trajectory of one agent picked from (a) aligned swarms and (b) cohesive swarms, so that this figure represents the most frequently observed trajectory for each type of dynamics. The survival distributions of the four candidate models are also plotted. The distributions for each model are obtained considering the maximum likelihood estimation of the corresponding parameters (see Sec. IV.3 for details). The curve for the CCRW model is obtained by an analytic approximation of the probabilities of each step length, given the maximum likelihood estimation of its parameters. Since the order of the sequence of step lengths is not relevant for this plot, we estimate the probabilities of each step length as (see eq. (9)) with .

IV.3 Statistical analysis

In order to determine which of the mentioned models best fits our data, we perform the following three-step statistical analysis for each individual trajectory: (i) first, for each family of models, we obtain the PDF that most likely fits our data via maximum likelihood estimation (MLE) of the model parameters; (ii) then, we compare the four candidate models with one another by means of the Akaike information criterion (AIC) (Burnham and Anderson, 2004); and (iii) finally, we apply an absolute fit test to the best model. We repeat this analysis for agents trained with and , yielding a total of 600 individual trajectories per type of training (10 ensembles of aligned swarms and 10 of cohesive swarms, where each ensemble has 60 agents). The simulation of interaction rounds is performed for each ensemble independently. In order to carry out the statistical analysis, each individual trajectory is divided into steps, which are defined in our case as the distance the agent travels without turning. We obtain sample sizes that range from to steps for trajectories of agents trained with and from to steps in the case with .
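
The division of a trajectory into steps can be carried out as in the following sketch, which splits a one-dimensional sequence of (unwrapped) positions into runs of constant direction; the function and variable names are ours.

import numpy as np

def step_lengths(positions):
    # positions: 1D array of unwrapped positions, one entry per interaction round.
    moves = np.sign(np.diff(positions))                   # +1 or -1, speed is 1 per round
    turns = np.flatnonzero(moves[1:] != moves[:-1]) + 1   # indices where the direction changes
    return np.array([len(run) for run in np.split(moves, turns)])

# Example: 3 rounds to the right, 2 to the left, 4 to the right -> steps [3, 2, 4].
print(step_lengths(np.array([0, 1, 2, 3, 2, 1, 2, 3, 4, 5])))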

The following provides more detail on the analysis, starting with the MLE method, which consists in maximizing the likelihood of each model candidate with respect to its parameters. The likelihood function is generally defined as,

(16)

where is the sample size and is the PDF of the given model —that depends on the model parameters — evaluated at the data point . Details on the maximization process and the computation of the likelihood function in the case of CCRW, which is more complicated since consecutive step lengths are not sampled independently, are given in appendix B.1. In the following, we denote the values of the parameters that maximize the likelihood and the value of the maximum likelihood with hatted symbols.
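
As an example of this procedure for the simplest (BW) model, the negative log-likelihood of a discrete exponential can be minimized numerically, in the spirit of the scipy.optimize.minimize approach described in appendix B.1; the pmf form, the toy data and the optimizer settings below are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood_bw(params, steps):
    lam = params[0]
    if lam <= 0:
        return np.inf
    # log P(l) = log(1 - exp(-lam)) - lam * (l - 1), assuming l_min = 1.
    return -np.sum(np.log(1.0 - np.exp(-lam)) - lam * (steps - 1))

steps = np.array([1, 2, 1, 1, 3, 1, 2, 5, 1, 1])    # hypothetical step lengths
result = minimize(neg_log_likelihood_bw, x0=[0.5], args=(steps,), method="Nelder-Mead")
lam_hat, max_logL = result.x[0], -result.fun
print(lam_hat, max_logL)    # MLE decay rate and the maximized log-likelihood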

Table 2 shows the MLE parameters we have obtained for each model and for each swarm type. We observe that, in the case, the decay rates of the exponential distributions () are very small (approx. of the order of ) compared to the decay rates in the case (approx. of the order of ), which implies that the former allow longer steps to occur with higher probability. The decay rates of the intensive modes () are comparable to the BW decay rate of because they account for the shorter, more frequent steps, which occur in both types of dynamics (in the case, agents perform shorter steps when they leave the swarm and move on their own). Also note that the power-law coefficient in the case implies that the PL model is that of a Lévy walk.

Model   Parameter   Aligned swarms (trained with distant resources)   Cohesive swarms (trained with nearby resources)
BW                  0.083 ± 0.032            0.305 ± 0.052
CRW                 0.37 ± 0.15              0.6 ± 1.1
                    0.0126 ± 0.0069          0.302 ± 0.048
                    0.879 ± 0.040            0.196 ± 0.059
PL                  1.59 ± 0.10              1.657 ± 0.066
CCRW                0.23 ± 0.31              0.12 ± 0.13
                    0.37 ± 0.15              0.36 ± 0.21
                    0.0134 ± 0.0066          0.300 ± 0.047
                    0.839 ± 0.078            0.749 ± 0.060
                    0.013 ± 0.020            0.09 ± 0.26
Table 2: Average values of the MLE parameters for the different models. 600 trajectories are analyzed for each type of swarm.

Once the value of the maximum likelihood is obtained for each model, it is straightforward to compute its Akaike value,

(17)

where is the number of parameters of the model. The model with the lowest () is the best model (out of the ones that are compared) to fit the data (Burnham and Anderson, 2004). In order to compare the models in a normalized way, the Akaike weights are obtained from the Akaike values as,

(18)

where is the Akaike weight of the th model, and , with the Akaike value of the th model. The interpretation of is not straightforward but, as argued in (Symonds and Moussalli, 2011), "Akaike weights can be considered as analogous to the probability that a given model is the best approximating model". is the total number of models under comparison, so that the Akaike weights are normalized as . In appendix B.3, we present detailed tables with the results of this statistical analysis for three trajectories, two for training with and one with .
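
The combination of eqs. (17) and (18) amounts to the short computation sketched below; the maximized log-likelihood values in the example are hypothetical and only meant to illustrate the bookkeeping.

import numpy as np

def akaike_weights(log_likelihoods, n_params):
    # AIC = 2k - 2 log(L_hat); the weights follow from the AIC differences.
    aic = {m: 2 * n_params[m] - 2 * log_likelihoods[m] for m in log_likelihoods}
    aic_min = min(aic.values())
    rel = {m: np.exp(-0.5 * (aic[m] - aic_min)) for m in aic}
    norm = sum(rel.values())
    return {m: rel[m] / norm for m in rel}

# Hypothetical maximized log-likelihoods for one trajectory (not values from the paper).
logL = {"BW": -65265.0, "CRW": -65265.0, "PL": -72333.0, "CCRW": -65265.0}
k = {"BW": 2, "CRW": 4, "PL": 2, "CCRW": 6}
print(akaike_weights(logL, k))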

(a) Akaike weights of the 600 trajectories of agents trained with .
(b) Akaike weights of the 600 trajectories of agents trained with .
Figure 21: Violin plots that represent the Akaike weights obtained for each model for the trajectories of agents trained with (a) (aligned swarms) and (b) (cohesive swarms). 600 individual trajectories, per type of swarm, were analyzed for each plot. The central marker represents the median and the vertical lines indicate the range of values in the data sample (e.g. the PL model in figure (a) has extreme values of 0 and 1). Shaded regions form a smoothed histogram of the data (e.g. the majority of Akaike weights of the CCRW model in figure (a) have value 1, and there are no values between 0.2 and 0.8). See text for more details.

Figure 21 shows the Akaike weights obtained for each of the 600 trajectories analyzed for each type of swarm. In the case of the aligned swarms (figure 21 (a)), we observe that the BW model is discarded in comparison to the other models, since its Akaike weight is zero for all trajectories. of trajectories have an Akaike weight of 1 for the CCRW model and 0 for the rest of the models ( of trajectories have , which is to be expected since the MLE parameters of the CRW and CCRW models are roughly the same), whereas of trajectories have an Akaike weight of 1 for the PL model and 0 for the rest. This result is in agreement with previous works claiming that "selection pressures give CCRW Lévy walk characteristics" (Reynolds, 2013b). Therefore, the majority of individual trajectories are best fit by a CCRW with two exponential distributions whose means are and , which give the movement patterns Lévy-walk features. In addition, a considerable percentage of trajectories are indeed best fit by a power-law distribution with exponent , that is, a Lévy walk.

On the other hand, the cohesive swarms (figure 21 (b)) show high Akaike weights for all models except the PL, which implies that only the PL model can be discarded as a description of the observed movement patterns. of trajectories have Akaike weights , and for the BW, CRW and CCRW models, respectively. The remaining of trajectories have . However, note in Table 2 that the MLE parameters for the four models in fact specify particular limiting cases that correspond to very similar probability distributions, which indicates that the movement has essentially the same characteristics in all models (see also Fig. 20 (b)). In particular, the intensive and extensive modes in the CRW and CCRW models are of the same order, which implies that there is effectively one mode. Overall, the type of motion that the agents in these swarms exhibit has Brownian motion characteristics.

Finally, we study the goodness-of-fit (GOF) of the different models. For models that deal with i.i.d. variables (BW, CRW, PL), it is enough to perform a likelihood ratio test, whose p-value indicates how well the model fits the data. Within our framework, a low p-value, namely , means that the model can be rejected as a description of the observed data with a confidence of . The closer the p-value is to 1, the better the model fits the data. In the case of the CCRW, a more involved method is needed due to the correlation in the data. Specifically, we first compute the uniform pseudo-residuals (see (Zucchini and MacDonald, 2009)) and then perform a Kolmogorov-Smirnov (KS) test to check the mid-pseudo-residuals for uniformity. Details on the methods used in both GOF tests are given in Appendix B. Even though a visual inspection of figure 20 suggests that the CRW, PL and CCRW models fit the data reasonably well, the quantitative analysis gives p-values of for most of the trajectories fitted by the BW and PL models. Some trajectories fitted by the CRW model give better fits; e.g. the best p-values are and for a trajectory of an agent trained with and , respectively. In the CCRW case, we give the average value of the KS distance obtained over the 600 trajectories, which is and for the - and -trajectories, respectively (note that a perfect fit gives and the worst possible fit gives ). More details on the GOF tests and their results are given in appendix B.2.

A closer inspection reveals that this relatively poor fit is mostly due to irregularities in the tails of the observed distributions. However, more importantly, we note that the trajectories were in fact not drawn from a theoretical distribution chosen for its mathematical simplicity, but result from individual interactions of agents that have learned certain behaviors. In this regard, with respect to the sometimes low goodness-of-fit values, our simulations lead to similar challenges as the analysis of experimental data from real animals (see e.g. (Auger-Méthé et al., 2015)). Nonetheless, the above analysis does provide a more robust account of key features of the collective dynamics.

V Conclusions

We have studied the collective behavior of artificial learning agents, more precisely PS agents, that arises as they attempt to survive in foraging environments. More specifically, we have designed different foraging scenarios in one-dimensional worlds in which the resources are either near to or far from the region where the agents are initialized.

This ansatz differs from existing work in that PS agents allow for a complex, realistic description of the sensory (percepts) and motor (actions) abilities of each individual. In particular, agents can distinguish how other agents within visual range are oriented and whether the density of agents is high or low in the front and at the back of their visual area. Based on this information, agents can decide whether to continue moving in their current direction or to turn around and move in the opposite direction. Crucially, there are no fixed interaction rules, which is the main difference that sets our work apart from previous approaches, such as the self-propelled particle (SPP) models or other models from statistical physics. Instead, the interactions emerge as a result of the learning process agents perform within a framework of reinforcement learning. The rewards given as part of this learning process play a role analogous to evolutionary pressures in nature, by enhancing the behaviors that led the agent to be rewarded. Therefore, by varying the task and reward scheme and studying the resulting behaviors, our approach allows us to test different causal explanations for specific observed behaviors, in the sense of evolutionary pressures proposed to have led to these behaviors.

In this work, we have considered scenarios where the food is situated inside or far from the region where agents are initialized and we have observed that the initially identical agents develop very different individual responses —leading to different collective dynamics— depending on the distance they need to cover to reach the reward (food source). Agents learn to form strongly aligned swarms to get to distant food sources, whereas they learn to form cohesive (but weakly aligned) swarms when the distance to the food source is short.

Since we model each individual as an artificial learning agent, we are able not only to study the collective properties that arise from the given tasks, but also to analyze the individual responses that agents learn and that, in turn, lead to the swarm formation. Thus, we observe, for instance, that the tendency to align with the neighbors in the case increases with the density of neighbors surrounding the agent. In the case of a training with , we observe that the individuals tend to move toward the region with the higher number of neighbors, which leads to high cohesion at the collective level.

We note that the task faced by our artificial agents, of reaching a food source, is closely related to the behaviors studied in the context of foraging theory. For this reason, we compare the individual trajectories that result from the learning process to the principal theoretical models in that field. We show that most of the individual trajectories resulting from the training with distant resources —which leads to strongly aligned swarms— are best fitted by composite correlated random walks consisting of two modes, one intensive and one extensive, whose mean step lengths are and , respectively. A smaller fraction of these trajectories is best fitted by power-law distributions with exponents , that is, Lévy walks. The exponent of the power-law distribution we obtain is close to 2, which is the optimal Lévy walk for maximizing the rate of target encounters in environments with sparsely distributed, renewable resources (Viswanathan et al., 1999; Raposo et al., 2009; Wosniack et al., 2017). Moreover, our results are in agreement with the study of Reynolds (Reynolds, 2013b) that shows that animals can approximate Lévy walks by adopting a composite correlated random walk.

In contrast, agents that were trained to find nearby resources and follow the dynamics of cohesive swarms present normal-diffusive, Brownian-like trajectories that do not exhibit two movement modes but just one.

One crucial point of this analysis is that our simulated agents move in a multi-agent context and their movement patterns are therefore determined by the swarm dynamics they have developed through the learning process. In particular, we provide a new perspective and additional insight on the studies mentioned above regarding Lévy walks and CCRW, since the individual trajectories that are best fit by these two models arise from a collective motion with very specific features such as strong alignment and decaying cohesion. This, together with the fact that the individual responses emerge as a result of the learning process, provides an example of how Lévy-like trajectories can emerge from individual mechanisms that are not generated by a Lévy walk process. In this sense, our work provides an unusual example to consider within the emergentist versus evolutionary debate on Lévy walks (see e.g. (Pyke, 2015; Wosniack et al., 2017)).

To conclude, we have applied a model of artificial agency (PS) to different foraging scenarios within the framework of collective motion. We have shown that, without any prior hard-wired interaction rules, the same agents develop different individual responses and collective interactions, depending on the distance they need to travel to reach a food source. Agents form strongly aligned swarms to stabilize their trajectories and reach distant resources, whereas they form cohesive, unaligned swarms when the resources are near. In addition, we have shown that Lévy-like trajectories can be obtained from individual responses that do not have a simple theoretical model as the underlying process, but instead are generated and arise from the interplay of a fine-grained set of learned individual responses and the swarm behavior that emerges from them at a collective level.

This work provides a new framework for the study of collective behavior, which supports more detailed and realistic representations of individuals’ sensory and motor abilities and different types of environmental pressures. It would be interesting to apply this approach to the more complex collective behaviors that arise in two- and three-dimensional environments. Furthermore, the PS model allows for a variety of new scenarios to explore in the context of behavioral biology, since different reward schemes can easily be implemented and studied.

Acknowledgements

We would like to thank Oleksandr Chepizhko, Vladimir Palyulin, Gorka Muñoz Gil, Nicolás Tizón Escamilla and Fernando Peruani for helpful discussions. This work was supported in part by the Austrian Science Fund (FWF) through the SFB BeyondC F71. HJB was also supported by the Ministerium für Wissenschaft, Forschung und Kunst Baden Württemberg (AZ:33-7533-30-10/41/1).

References

Appendix A Additional analysis

In this section, we provide additional information about the dynamics presented in Section III of the main text.

A.1 Transition from cohesive to aligned dynamics

First, we analyze in detail why there is a transition at (see Figs. 7 and 14) from the regime where cohesive swarms emerge to the regime where aligned swarms emerge as a result of the learning processes. We attribute this phenomenon to the fact that the agents are initialized in a region of size (12 in our case), which means that a food source placed at is exactly at the edge of this region. Consider the case where the food is placed inside the initialization region: in this case, it is most likely that agents will find the food, which is the condition for being rewarded, while they are surrounded by many neighbors. Consequently, behaviors that entail approaching or staying with other agents are more likely to lead to rewards; effectively, agents learn to ’join the crowd’. However, if the food is placed outside the initial region, agents need to leave regions where the density of agents is high at the beginning of the trial, and they also need to stabilize their orientations, which is best achieved by aligning with their neighbors. We have tested this hypothesis by changing the initial region. Figures 22 and 23 show data analogous to Figures 7 and 14, but with agents initialized in the first positions of the world (half of the previous region). We observe that the transition in behavior happens at in this case, which is the edge of the initial region.

Figure 22: Final probability of taking the action "go" depending on the learning task (increasing distance to food source ) for four significant percepts (see legend). Average is taken over one ensemble consisting of 60 agents.
Figure 23: Average number of neighbors (as a fraction of the total ensemble size), global and local alignment parameter as a function of the distance to the point where food is placed during training. Each point is the average of the corresponding parameter over all interaction rounds () of one trial, and then over trials. One trained ensemble of 60 agents is considered for each value of .

A.2 Details on analysis of alignment

In this section, we elaborate on the splitting of the swarm that we observe in some of the trials for training with . In order to study this, we perform a simulation of 100 trials with ensembles of agents that are already trained with . Figure 24 (a) shows that, in some of these trials, almost all agents form one big swarm () that moves in one direction, with few agents moving away from it (grey histogram); we take the threshold for ’a single swarm’ to be that of the agents move in the same direction. In other trials, the agents form two swarms (), roughly of similar size, that travel in opposite directions (pink histogram). Locally, agents are strongly aligned, as can be seen in Fig. 14, where the average local alignment parameter reaches 0.9 for . For , the swarm behavior is similar to the one observed for (see Fig. 24 (b)), but the local alignment is not as strong, so more agents leave the swarm. For swarms trained with (Fig. 24 (c)), we observe that there is no splitting and agents do not move beyond the initial region.

(a) Agents trained with
(b) Agents trained with
(c) Agents trained with
Figure 24: Stacked bar graph showing the number of agents that are located at a given position at the end of one trial (at the 50th interaction round). The graph is centered in , which is the middle of the initial region (value 0 in the horizontal axis). Each data set (for each trial) is processed such that the majority of agents travel to the positive side of the horizontal axis. 100 trials of (already trained) ensembles of 60 agents are considered (one ensemble per trial). (a) Out of the 100 ensembles, 72 travel as one big swarm (grey) and 28 split into two subswarms that go in opposite directions (pink). In order to show that these are complementary subsets of the data, grey bars are stacked on top of pink bars. (b) Out of the 100 ensembles, 83 travel as one big swarm (grey) and 17 split in two subswarms that go in opposite directions (pink). (c) All ensembles are strongly cohesive and do not split. Agents do not travel beyond the initial region (marked in the horizontal axis).

A.3 Details on analysis of cohesion

In this section, we provide an additional plot (Fig. 25) of the evolution of the local alignment parameter through the learning process for . We observe that the increase of the local alignment parameter from trial 100 to trial 200 is the reason why the average number of neighbors decays at these same trials in Fig. 11 (see inset). At these trials, agents have not yet learned to form swarms, but some of them have learned to go straight and have started to learn to align with their neighbors. Thus, these agents are already able to move away from the initial region, where the rest of the agents are still performing a random walk. Consequently, these agents in particular have fewer neighbors, which reduces the overall average number of neighbors. For higher values of the local alignment parameter, as seen from trial 200 onwards, agents start to form strongly aligned swarms, which increases cohesion and consequently the number of neighbors .

Figure 25: Evolution of the local alignment parameter through the learning processes with . Each point is the average of the corresponding parameter over all interaction rounds () of one trial. 20 independently trained ensembles of 60 agents each are considered for the average.

Appendix B Statistical methods

The statistical methods we have applied in Sec. IV.3 of the main text are explained in detail in this section. The statistical analysis consists of three steps: (i) the maximum likelihood estimation (MLE) of the model parameters, for all models; (ii) the selection of the best model by means of the AIC; and (iii) the absolute fit test for the best model.

B.1 Likelihood functions and MLE

The expression of the likelihood function (16) can lead to computational underflows when the sample size is large (a product of many terms smaller than one yields values close to zero), so the usual procedure is to maximize its logarithm instead, which leads to the same result since the logarithm is a monotonic function. For i.i.d. variables, the log-likelihood function has the simple expression,

(B.1)

where is the sample size and is the PDF of the given model —that depends on the model parameters — evaluated at the data point .

The first three models have i.i.d. variables, so the computation of their log-likelihood functions is straightforward once the PDFs of each model are defined (eqs. (8), (9), (14)). However, the log-likelihood function of the CCRW model cannot be expressed as a sum of the logarithms of the PDFs evaluated at each data point. From eq. (16), the expression in the case of the CCRW can be written as,

(B.2)

where,

(B.3)
(B.4)
(B.5)
(B.6)

and and are given in eqs. (10) and (11) (note that they depend on and , respectively). Since the variables are not independent in this case, the log-likelihood function cannot be directly obtained with expression (B.1). In addition, function (B.2) cannot be directly computed due to underflow errors. To avoid this, we apply the techniques explained in chapter 3 of (Zucchini and MacDonald, 2009) (specifically, the algorithm for the computation of the log-likelihood function given in appendix A.1.3 of (Zucchini and MacDonald, 2009)).
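
A minimal sketch of this scaled computation for the two-mode case is shown below. The parameter names and the discrete-exponential emission probabilities are assumptions consistent with the model described in the main text, and the rescaling of the forward vector follows the general recipe of (Zucchini and MacDonald, 2009).

import numpy as np

def ccrw_log_likelihood(steps, delta, g_ie, g_ei, lam_i, lam_e):
    # Two-mode HMM: row-stochastic transition matrix between intensive/extensive modes.
    Gamma = np.array([[1.0 - g_ie, g_ie],
                      [g_ei, 1.0 - g_ei]])

    def emission(l):
        # Discrete-exponential step-length probabilities in each mode (l >= 1).
        return np.array([(1 - np.exp(-lam_i)) * np.exp(-lam_i * (l - 1)),
                         (1 - np.exp(-lam_e)) * np.exp(-lam_e * (l - 1))])

    phi = np.array([delta, 1.0 - delta]) * emission(steps[0])
    logL = np.log(phi.sum())
    phi /= phi.sum()                        # rescale to avoid numerical underflow
    for l in steps[1:]:
        phi = (phi @ Gamma) * emission(l)   # forward recursion
        logL += np.log(phi.sum())
        phi /= phi.sum()
    return logL

steps = np.array([1, 2, 1, 1, 40, 55, 38, 1, 2, 1])   # hypothetical step lengths
print(ccrw_log_likelihood(steps, delta=0.8, g_ie=0.05, g_ei=0.3, lam_i=0.3, lam_e=0.01))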

Once the log-likelihood functions are computed, the maximization (minimization of the negative log-likelihood function) with respect to the model parameters is performed using the Python function scipy.optimize.minimize. The MLE parameters obtained for each model are given in Table 2. The MLE of the minimum step length can be directly considered to be the observed one (Edwards et al., 2012) (in our case it is since agents move one position per interaction round).

B.2 Goodness-of-fit tests

In this work, we have performed two types of goodness-of-fit (GOF) tests: one for the models with i.i.d. variables (BW, CRW and PL) and a different one to account for the temporal autocorrelation of the CCRW model. For the BW, CRW and PL models, we apply a likelihood ratio test to compare the likelihood of the observed frequencies with the likelihood of the theoretical distribution that corresponds to the given model. More specifically, we compute the log-ratio (Clauset et al., 2009),

(B.7)

where is the sample size and , are the observed and theoretical frequencies of the th step length, respectively. Note that the theoretical frequency is just the probability (eq. (8), eq. (9) or eq. (14) depending on the analyzed model) of the th step length times the sample size .

Normally, likelihood ratios like the one above are used to compare two competing theoretical models, in which case a large absolute value of indicates that one model is clearly better than the other. In order to assess how much better it is, one asks how likely it is that a given absolute value of could have arisen purely from chance fluctuations if, in fact, both models were equally good. This is quantified by the p-value (App. C, eq. (C.6) of ref. (Clauset et al., 2009)). When one compares two theoretical models, finds a large and obtains a small corresponding p-value, this indicates that the value is unlikely to be a chance fluctuation, and one can therefore exclude one of the models with high confidence.

In our case, however, a good fit between the theoretical model and the observed frequencies manifests itself as a small and a correspondingly large . Small p-values, on the other hand, indicate that it is unlikely that the data were generated by the proposed model. One can therefore interpret as the probability with which we can rule out the proposed theoretical model. The p-values obtained in our analysis are given in Fig. 26.
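
The computation of this log-ratio can be sketched as follows. The function compares the multinomial likelihood of the observed step-length frequencies with that of the fitted model; the conversion of the ratio into a p-value (eq. (C.6) of (Clauset et al., 2009)) is not reproduced here, and the fitted pmf in the example is a hypothetical discrete exponential.

import numpy as np

def log_likelihood_ratio(steps, model_pmf):
    # R = sum over step lengths of n_l * [log(observed frequency) - log(model probability)].
    lengths, counts = np.unique(steps, return_counts=True)
    observed_freq = counts / steps.size
    return float(np.sum(counts * (np.log(observed_freq) - np.log(model_pmf(lengths)))))

pmf = lambda l: (1 - np.exp(-0.4)) * np.exp(-0.4 * (l - 1))   # fitted BW model, lam_hat = 0.4
steps = np.array([1, 2, 1, 1, 3, 1, 2, 5, 1, 1])              # hypothetical step lengths
print(log_likelihood_ratio(steps, pmf))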

Figure 26: Histograms of the p-values obtained for the BW, CRW and PL models in the and cases. In our goodness-of-fit test, p-values close to zero rule out the proposed theoretical model, while values close to 1 indicate compatibility with the model.

In the case of the CCRW model, one cannot directly perform a GOF test on the raw data points due to the autocorrelation present in the HMM. We circumvent this problem using pseudo-residuals, as described in (Zucchini and MacDonald, 2009). Given a continuous random variable and a function defined by the cumulative distribution function (CDF),

(B.8)

the pseudo-residual is obtained by sampling a value of , then taking the corresponding value of the function . If is sampled from some probability distribution and we take to be the CDF of that same distribution,

(B.9)

then one can show that the resulting probability distribution over the pseudo-residuals is in fact uniform (Zucchini and MacDonald, 2009). If, on the other hand, we take to be the CDF derived from some proposed theoretical distribution , then the pseudo-residuals will in general not be uniformly distributed. By testing whether the pseudo-residuals with respect to a given theoretical model are uniformly distributed, one can therefore test whether the model is a good fit for the data.

In order to accommodate discrete variables, one introduces so-called mid-pseudo-residuals,

(B.10)

where is obtained by sampling a value of and taking the corresponding , as above, while is the value of at the greatest possible realization that is strictly less than the sampled .

Our data consists of a time-series of step lengths , each of which gives rise to one mid-pseudo-residual . Therefore, the first step length is denoted and the last one , since is the sample size. In order to be consistent with notation in Sec. IV for step lengths, we use in the following the upper case to denote the random variable and the lower case to denote one realization of it.

Crucially, the probability distribution over step lengths at each time-step is different, since it is correlated with the lengths of preceding steps:

(B.11)
(B.12)

where the expression for the conditional probability ((Zucchini and MacDonald, 2009), (Chapter 5)) is in our case,

(B.13)

where , , and 1 are defined in eqs. (B.3), (B.4), (B.5) and (B.6) respectively and,

(B.14)
(B.15)

where and are the PDFs defined in eq. (10) and (11) and and are their corresponding CDFs, respectively. Note that, in this expression, the parameters of the model are fixed (MLE parameters). Again in this case, a rescaling is needed in order to avoid underflows in the computation (see algorithm in App. A.2.9 of ref. (Zucchini and MacDonald, 2009)).

In summary, we first compute the mid-pseudo-residual for each data point and then perform a GOF test on these residuals. Since the probability distribution of the mid-pseudo-residuals approaches that of a continuous variable, one can apply a Kolmogorov-Smirnov (KS) test to check for uniformity. The KS statistic computes the distance () between the CDF of the empirical data (in this case, the values ) and the CDF of the reference distribution (in this case, ). Therefore, a value of means that the data are distributed exactly as the reference distribution. The maximum KS distance is . One obtains one value of for each individual trajectory (we perform the analysis on 600 trajectories for each type of swarm dynamics). The average value of the KS distance that we have obtained is for the trajectories of agents trained with and for the ones of agents trained with . All the values of are displayed as histograms in Fig. 27.
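
In code, this final step can be sketched as follows; cdf_given_past stands in for the conditional CDF of eq. (B.13) evaluated at the MLE parameters, and the uniformity check uses scipy's one-sample KS test. The i.i.d. example at the end, where the conditional CDF ignores the past, is only meant to make the sketch runnable.

import numpy as np
from scipy.stats import kstest

def ks_distance_mid_pseudo_residuals(steps, cdf_given_past):
    # cdf_given_past(l, t): model CDF of step length l at time t, conditioned on
    # the preceding steps (a placeholder for eq. (B.13) with MLE parameters).
    residuals = []
    for t, l in enumerate(steps):
        upper = cdf_given_past(l, t)
        lower = cdf_given_past(l - 1, t) if l > 1 else 0.0
        residuals.append(0.5 * (upper + lower))      # mid-pseudo-residual
    result = kstest(residuals, "uniform")            # compare with U(0, 1)
    return result.statistic, result.pvalue

# Degenerate i.i.d. example with a discrete exponential (hypothetical lam = 0.4).
cdf = lambda l, t: 1.0 - np.exp(-0.4 * l)
steps = np.array([1, 2, 1, 1, 3, 1, 2, 5, 1, 1])
print(ks_distance_mid_pseudo_residuals(steps, cdf))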

Figure 27: Histograms of the distances obtained in the GOF test of the CCRW model, for (left) and (right) .

B.3 Tables

Examples of the results of the statistical analysis for individual trajectories are given in Tables 3, 4 and 5. The trajectories considered correspond to the ones displayed in figures 20 (b), 28 and 29, respectively. In addition, figures 28 and 29 provide the survival distributions of the trajectories that have the best goodness-of-fit parameter for the CCRW and the PL models, respectively.

Model   k   AIC         ΔAIC       Akaike weight   p-value
BW      2   130534.03   0          0.87            0.0018
CRW     4   130538.03   4          0.12            0.96
PL      2   144670.67   14136.64   0
CCRW    6   130542.03   8          0.01
Table 3: Results of the statistical analysis of the trajectory from Fig. 20 (b). This individual was chosen for achieving the closest fit to the BW and CRW models of all agents trained with .
Model   k   AIC        ΔAIC      Akaike weight   p-value
BW      2   87207.06   2104.71   0
CRW     4   85676.13   573.78    0
PL      2   94815.86   9713.51   0
CCRW    6   85102.35   0         1
Table 4: Results of the statistical analysis of the trajectory from Fig. 28. This individual was chosen for achieving the closest fit to the CCRW model of all agents trained with .
Model   k   AIC        ΔAIC       Akaike weight   p-value
BW      2   85615.24   28236.12   0
CRW     4   58473.79   1094.67    0
PL      2   57379.12   0          1
CCRW    6   58471.36   1092.24    0
Table 5: Results of the statistical analysis of the trajectory from Fig. 29. This individual was chosen for achieving the closest fit to the PL model of all agents trained with .
Figure 28: Survival probability (percentage of step lengths larger than the corresponding value on the horizontal axis) as a function of the step length. Trajectory of one agent trained with , which has an Akaike weight of 1 for the CCRW model. This individual was chosen for achieving the closest fit to the CCRW model of all agents trained with . The survival distributions of the four candidate models are also plotted. The distributions for each model are obtained considering the MLE parameters.
Figure 29: Survival probability (percentage of step lengths larger than the corresponding value on the horizontal axis) as a function of the step length. Trajectory of one agent trained with , which has an Akaike weight of 1 for the PL model. This individual was chosen for achieving the closest fit to the PL model of all agents trained with . The survival distributions of the four candidate models are also plotted. The distributions for each model are obtained considering the MLE parameters.