Adaptive Neighborhood Resizing for Stochastic Reachability in Multi-Agent Systems

05/21/2018 ∙ by Anna Lukina, et al. ∙ TU Wien

We present DAMPC, a distributed, adaptive-horizon and adaptive-neighborhood algorithm for solving the stochastic reachability problem in multi-agent systems, in particular flocking modeled as a Markov decision process. First, at each time step, every agent calls a centralized, adaptive-horizon model-predictive control (AMPC) algorithm to obtain an optimal solution for its local neighborhood. Second, the agents derive the flock-wide optimal solution through a sequence of consensus rounds. Third, the neighborhood is adaptively resized using a flock-wide, cost-based Lyapunov function V. This way DAMPC improves efficiency without compromising convergence. We evaluate DAMPC's performance using statistical model checking. Our results demonstrate that, compared to AMPC, DAMPC achieves considerable speed-up (two-fold in some cases) with only a slightly lower rate of convergence. The smaller average neighborhood size and lookahead horizon demonstrate the benefits of the DAMPC approach for stochastic reachability problems involving any controllable multi-agent system that possesses a cost function.







1 Introduction

V-formation in a flock of birds is a quintessential example of emergent behavior in a stochastic multi-agent system. V-formation brings numerous benefits to the flock. It is primarily known for being energy-efficient due to the upwash benefit a bird in the flock enjoys from its frontal neighbor. In addition, it offers each bird a clear frontal view, unobstructed by any flockmate. Moreover, its collective spatial flock mass can be intimidating to potential predators. It is therefore not surprising that interest in V-formation is on the rise in the aircraft industry [5].

Recent work on V-formation has shown that the problem can be viewed as one of optimal control, model-predictive control (MPC) in particular. In [13], we introduced adaptive-horizon MPC (AMPC), a highly effective control algorithm for multi-agent cyber-physical systems (CPS) modeled as a Markov decision process (MDP). Traditional MPC uses a fixed prediction horizon, i.e. number of steps to compute ahead, to determine the optimal, cost-minimizing control action. The downside of the fixed look-ahead is that the algorithm may get stuck in a local minimum. For a controllable MDP, AMPC chooses its prediction horizon dynamically, extending it out into the future until the cost function (shown in blue in Fig. 1) decreases sufficiently. This implicitly endows AMPC with a Lyapunov function (shown in red in Fig. 1), providing statistical guarantees of convergence to a goal state such as V-formation, even in the presence of adversarial agents. It should be noted that AMPC works in a centralized manner, with global knowledge of the state of the flock at its disposal.
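To make the horizon-adaptation idea concrete, here is a minimal Python sketch of one AMPC step: the prediction horizon is extended until the best plan found decreases the cost sufficiently. The names `cost` and `plan` are hypothetical stand-ins for the flock cost function and a PSO-based planner; this is an illustrative sketch, not the paper's implementation.

```python
def ampc_step(state, cost, plan, h_max, delta):
    """One AMPC step (sketch): extend the prediction horizon until the
    best plan found decreases the cost by at least delta."""
    base = cost(state)
    for h in range(1, h_max + 1):
        actions, end_state = plan(state, h)   # best h-step plan (e.g., via PSO)
        if cost(end_state) <= base - delta:   # sufficient decrease found
            return actions[0], h              # apply only the first action
    actions, _ = plan(state, h_max)           # fall back to the longest horizon
    return actions[0], h_max
```

Applying only the first action of the winning plan and replanning at the next step is the standard receding-horizon pattern that AMPC inherits from MPC.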

Figure 1: Left: Blue bars are the values of the cost function in every time step. Red dashed line is the cost-based Lyapunov function used for horizon and neighborhood adaptation. Black solid line is neighborhood resizing for the next step given the current cost. Right: Step-by-step evolution of the flock of seven birds bringing two separate formations together. Each color-slice is a configuration of the birds at a particular time step.

This paper introduces DAMPC, a distributed version of AMPC that extends it along several dimensions. First, at every time step, DAMPC runs a distributed consensus algorithm to determine the optimal action (acceleration) for every agent in the flock. In particular, each agent starts by computing the optimal actions for its local subflock. The subflocks then communicate in a sequence of consensus rounds to determine the optimal actions for the entire flock. Secondly, DAMPC features adaptive neighborhood resizing (black line in Fig. 1) in an effort to further improve the algorithm’s efficiency. In a similar way as for the prediction horizon in AMPC, neighborhood resizing utilizes the implicit Lyapunov function to guarantee eventual convergence to a minimum neighborhood size. DAMPC thus treats the neighborhood size as another controllable variable that can be dynamically adjusted for efficiency purposes. This leads to reduced communication and computation compared to the centralized solution, without sacrificing statistical guarantees of convergence like those offered by its centralized counterpart AMPC.

The proof of statistical global convergence is intricate. For example, consider the scenario shown in Fig. 1. DAMPC is decreasing the neighborhood size for all agents, as the system-wide cost function follows a decreasing trajectory. Suddenly and without warning, the flock begins to split into two, owing to an unsuitably low value of the neighborhood size, leading to an abrupt upward turn in the cost. DAMPC reacts accordingly and promptly, increasing its prediction horizon first and then the neighborhood size, until system stability is restored. DAMPC is guaranteed to recover in this way, for in the worst case the neighborhood size will be increased to the total number of birds in the flock. It can then again attempt to monotonically decrease the neighborhood size, but this time starting from a lower value of the cost, until V-formation is reached.

A smoother convergence scenario is shown in Fig. 3. In this case, the efficiency gains of adaptive neighborhood resizing are more evident, as the cost function follows an almost purely monotonically decreasing trajectory. A formal proof of global convergence of DAMPC with high probability is given in the body of the paper, and represents one of the paper's main results.

Apart from the novel adaptive-horizon adaptive-neighborhood distributed algorithm to synthesize a controller, and its verification using statistical model checking, we believe the work here is significant in a deeper way. The problem of synthesizing a sequence of control actions to drive a system to a desired state can be also viewed as a falsification problem, where one tries to find values for (adversarial) inputs that steer the system to a bad state.

These problems can be cast as constraint-satisfaction problems or as optimization problems. As in the case of V-formation, one has to deal with nonconvexity, and popular techniques, such as convex optimization, will not work. Our approach can be seen as a tool for solving such highly nonlinear optimization problems that encode systems with notions of time steps and spatially distributed agents. Our work demonstrates that a solution can be found efficiently by adaptively varying the time horizon and the spatial neighborhood. A main benefit of the adaptive scheme, apart from efficiency, is that it gives a path towards completeness. By allowing adaptation to consider longer time horizons and larger neighborhoods (possibly the entire flock), one can provide convergence guarantees that would otherwise be impossible (say, in fixed-horizon MPC).

The rest of the paper is organized as follows. Section 2 discusses related work. Section 3 describes the flocking model and the cost function. Section 4 defines the stochastic reachability problem. Section 5 introduces DAMPC, our main contribution. Section 6 provides proofs of our theoretical results. Section 7 presents our statistical evaluation of the algorithm. Section 8 offers concluding remarks and indicates directions for future research.

2 Related Work

In [8], the ARES algorithm generates plans incrementally, segment by segment, using adaptive horizons (AH) to find the next best step towards the global optimum. In particular, ARES calls a collection of particle swarm optimizers (PSO), each with its own AH. PSOs with the best results are cloned, while the others are restarted from the current level on the way to the goal (similar to importance splitting [6]). When the V-formation is achieved, the complete plan is composed of the best segments. The AHs are chosen such that the best PSOs can succeed in decreasing the objective cost by at least a predefined value, implicitly defining a Lyapunov function that guarantees global convergence.

The presence of an adversary able to disturb the state of the system at every time step (e.g., remove a bird) is investigated in [13]. In this case, planning is not sufficient. Instead, a controller (AMPC) that finds the best accelerations at every step has to be designed. AMPC calls only one PSO with a given horizon and uses the first flock-wide accelerations (from the sequence returned by PSO) as the next actions. Under the assumption that the flock is controllable, even if the returned accelerations are not optimal (in terms of the entire run), AMPC can correct for this in the future. The AH is again picked based on the distance of the system to the goal states. This results in a global AMPC that, unlike classical MPC (which uses a fixed horizon), is guaranteed to converge.

In [16], the problem of taking an arbitrary initial configuration of agents to a final configuration, where every pair of stationary “neighbors” is a fixed distance apart, is considered. They present centralized and distributed algorithms for this problem, both of which use MPC to determine the next action. The problem in [16] is related to our work. However, we consider nonconvex and nonlinear cost functions, which require overcoming local minima to ensure convergence. In contrast, [16] deals with convex functions, which do not suffer from problems introduced by the presence of multiple local minima. Furthermore, in the distributed control procedure of [16], each agent publishes the control value it locally computed, which is then used by other agents to calculate their own. A quadratic number of such steps is performed before each agent fixes its control input for the next time step. In our work, we limit this number to linear.

Other related work, including [3, 2, 15], focuses on distributed controllers for flight formation that operate in an environment where the multi-agent system is already in the desired formation and the distributed controller’s objective is to maintain formation in the presence of disturbances. A distinguishing feature of these approaches is the particular formation they are seeking to maintain, including a half-vee [3], a ring and a torus [2], and a leader-follower formation [15]. These works are specialized for capturing the dynamics of moving-wing aircraft. In contrast, we use DAMPC with dynamic neighborhood resizing to bring a flock from a random initial configuration to a stable V-formation.

Although DAMPC uses global consensus, our main focus is on adaptive neighborhood resizing and global convergence, and not on fault tolerance [12, 1].

3 Background on V-Formation

Dynamical model.

In the flocking model used, the state of each bird is given by four variables: a 2-dimensional vector denoting the position of the bird in 2D continuous space, and a 2-dimensional vector denoting the velocity of the bird. A flock state is the vector of the positions and velocities of all birds in the flock. The control actions of each bird are 2-dimensional accelerations.

Let x_i(t), v_i(t), and a_i(t) denote the position, velocity, and acceleration of the i-th bird at time t, respectively. Given an initial configuration inside a bounding box of a given size, the discrete-time behavior of bird i is given by Eq. 3:

v_i(t+1) = v_i(t) + a_i(t),    x_i(t+1) = x_i(t) + v_i(t+1).   (3)
In the simulations described in Section 7, the time interval between two successive updates of the positions and velocities of all birds in the flock equals one. The initial state is generated uniformly at random inside a bounding box. The accelerations are the output of the particle swarm optimization (PSO) algorithm [7], which samples particles uniformly at random, subject to fixed upper bounds on the norms of the velocities and accelerations.
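A minimal sketch of one per-bird dynamics update, assuming the position is advanced with the new velocity and that velocity and acceleration norms are clamped to fixed bounds (the exact bounding scheme used in the simulations may differ):

```python
import math

def clamp_norm(vec, bound):
    """Scale a 2-D vector down so its Euclidean norm does not exceed bound."""
    n = math.hypot(vec[0], vec[1])
    if n > bound:
        return (vec[0] * bound / n, vec[1] * bound / n)
    return vec

def step(x, v, a, v_max, a_max):
    """One discrete-time update per bird: v(t+1) = v(t) + a(t),
    x(t+1) = x(t) + v(t+1), with norms clamped to the given bounds."""
    a = clamp_norm(a, a_max)
    v_new = clamp_norm((v[0] + a[0], v[1] + a[1]), v_max)
    x_new = (x[0] + v_new[0], x[1] + v_new[1])
    return x_new, v_new
```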

Thus, PSO introduces uncertainty into our model. We chose this model for its simplicity: the cost function described in Section 3 is nonlinear, nonconvex, and nondifferentiable, which already makes reachability analysis extremely challenging. We propose an approximate algorithm to tackle reachability. In principle, more sophisticated models of flocking dynamics can be considered, but we leave those for future work and focus on the simplest one.

The problem of bringing a flock from an arbitrary configuration to a V-formation can be posed as a reachability question, where the goal is the set of states representing a V-formation. A key assumption is that the reachability goal can be specified as J(s) < φ, where J is a cost function that assigns a nonnegative real value to each state s, and φ is a small positive constant.

Cost Function.

In order to define the cost function, we recall the definitions of the metrics determining the cost of a state from [14] (see Appendix for details).

  • Clear View: CV is defined by accumulating the percentage of a bird's conic field of view that is blocked by other birds. Its minimum value is attained in a perfect V-formation, where all birds have an unobstructed view.

  • Velocity Matching: VM for a flock state is defined as the difference between the velocity of a given bird and the velocities of all other birds, summed over all birds in the flock. Its minimum value is attained in a perfect V-formation, where all birds have the same velocity.

  • Upwash Benefit: trailing upwash is generated near the wingtips of a bird, while downwash is generated near its center. An upwash measure is defined on the 2D space using a Gaussian-like model that peaks at the appropriate upwash and downwash regions. UB for a flock state is the sum of the individual birds' upwash-benefit metrics. It is minimized in a V-formation, where every bird except the leader enjoys maximum upwash from its frontal neighbor, while the leader, having no bird in front, receives none.

Given the above metrics, the overall objective function J is defined as a sum of squares of CV, VM, and UB, as follows:

J(s) = CV(s)^2 + VM(s)^2 + UB(s)^2.   (2)

A state s is considered to be a V-formation if J(s) < φ, for a small positive φ.
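Assuming the three metric values have already been computed (and normalized so that a perfect V-formation drives each of them to zero), the objective and the V-formation test can be sketched as:

```python
def flock_cost(cv, vm, ub):
    """Overall objective: sum of squares of the clear-view, velocity-matching,
    and upwash-benefit metrics (assumed normalized to zero at the optimum)."""
    return cv ** 2 + vm ** 2 + ub ** 2

def is_v_formation(cv, vm, ub, phi=1e-3):
    """A state counts as a V-formation when its cost is below a small phi."""
    return flock_cost(cv, vm, ub) < phi
```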

4 The Stochastic Reachability Problem

Given the stochasticity introduced by PSO, the V-formation problem can be formulated in terms of a reachability problem for the Markov chain induced by the composition of a Markov decision process (MDP) and a controller.

Definition 1

A Markov decision process (MDP) is a 5-tuple consisting of a set of states S, a set of actions A, a transition function T, where T(s, a, s′) is the probability of transitioning from state s to state s′ under action a, a cost function J, where J(s) is the cost associated with state s, and an initial state distribution I over S.

The MDP modeling a flock of birds is defined as follows. The set of states is R^{4B}, as each bird has a 2D position and a 2D velocity vector, and the flock contains B birds. The set of actions is R^{2B}, as each bird takes a 2D acceleration action and there are B birds. The cost function is defined by Eq. 2. The transition function is defined by Eq. 3. As the acceleration vector of each bird at each time step is a random variable, the successor state vector is also a random variable. The initial state distribution is a uniform distribution over a region of the state space where all birds have positions and velocities in a range defined by fixed lower and upper bounds.

Before we can define traces, or executions, of the MDP, we need to fix a controller, or strategy, that determines which action to use at any given state of the system. We focus on randomized strategies. A randomized strategy σ over an MDP is a function mapping each state to a probability distribution over the actions. That is, σ takes a state s and returns an action sampled from the probability distribution σ(s). Once we fix a strategy σ for an MDP M, we obtain a Markov chain; we refer to the underlying Markov chain induced by σ over M as M_σ. We use the terms strategy and controller interchangeably.

In the bird-flocking problem, a controller is a function that determines the accelerations of all the birds given their current positions and velocities. Once we fix a controller, we can iteratively use it to (probabilistically) select a sequence of flock accelerations. The goal is to generate a sequence of actions that takes the MDP from an initial state to a state whose cost is below the given threshold.

Definition 2

Let M be an MDP, and let G be the set of goal states of M. Our stochastic reachability problem is to design a controller for M such that the probability of the underlying Markov chain reaching a state in G within a given number of steps, starting from an initial state, is at least a given bound.

We approach the stochastic reachability problem by designing a controller and quantifying its probability of success in reaching the goal states. In [8], a stochastic reachability problem was solved by appropriately designing centralized controllers. In this paper, we design a distributed procedure with an adaptive horizon and adaptive neighborhood resizing, and evaluate its performance.

5 Adaptive-Neighborhood Distributed Control

In contrast to [8, 13], we consider a distributed setting with the following assumptions about the system model.

  1. Each bird is equipped with the means for communication. The communication radius of each bird changes its size adaptively. We measure the radius by the number of birds it covers, and refer to this set as the bird's local neighborhood, which includes the bird itself.

  2. All birds use the same algorithm to satisfy their local reachability goals, i.e., to bring the local cost of their neighborhood below the given threshold.

  3. The birds move in continuous space and change accelerations synchronously at discrete time points.

  4. After executing its local algorithm, each bird broadcasts the obtained solutions to its neighbors. This way, every bird receives solution proposals, which differ because each bird has its own local neighborhood. To reach consensus, each bird takes as its best action the one with the minimal cost among the received proposals. The solutions for the birds in the winning neighborhood are then fixed. The consensus rounds repeat until all birds in the flock have fixed solutions.

  5. At every time step, the value of the global cost function is computed for all birds in the flock and checked for improvement. The neighborhood of each bird is then resized based on this global check.

  6. The upwash modeled in Section 3 maintains connectivity of the flock throughout the computation, while our algorithm manages collision avoidance.

The main result of this paper is a distributed adaptive-neighborhood and adaptive-horizon model-predictive control algorithm we call DAMPC. At each time step, each bird runs AMPC to determine the best acceleration for itself and its neighbors (while ignoring the birds outside its neighborhood). The birds then exchange the computed accelerations with their neighbors, and the whole flock arrives at a consensus that assigns each bird a unique (fixed) acceleration value. Before consensus is reached, some of a bird's neighbors may already have fixed solutions (accelerations); these accelerations are not updated when that bird runs AMPC. A key idea of our algorithm is to adaptively resize the extent of a bird's neighborhood.
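The fix-the-winner consensus scheme can be sketched as follows. Here `solve_local` is a hypothetical stand-in for LocalAMPC, returning a proposal (actions for the bird's neighborhood) and its cost; in each round the lowest-cost proposal wins and fixes the actions of its whole neighborhood.

```python
def consensus_rounds(birds, neighbors, solve_local):
    """Repeat until every bird's action is fixed: unfixed birds solve for
    their neighborhood, the lowest-cost proposal wins, and the winning
    subflock's actions are fixed (sketch)."""
    fixed = {}
    rounds = 0
    while len(fixed) < len(birds):
        proposals = []
        for b in birds:
            if b not in fixed:
                actions, cost = solve_local(b, fixed)  # respects already-fixed actions
                proposals.append((cost, b, actions))
        cost, winner, actions = min(proposals)
        for b in neighbors(winner):                    # fix the winning subflock
            fixed.setdefault(b, actions[b])
        rounds += 1
    return fixed, rounds
```

Because at least the winner's neighborhood is fixed per round, the set of unfixed birds shrinks monotonically and the loop terminates.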

5.1 The Distributed AMPC Algorithm

Input : 
Output : ,
1 ; ; ; ; ; ;   // Init
2 while (  do
3        ;   // Initially no bird has a fixed solution
4        while  do // while not all birds have a fixed solution
5               ; // Birds without a fixed solution run LocalAMPC
6               for   do in parallel
7                      ; // Find nearest neighbors of
8                      ; ;
9               endfor
10              ; // Find the bird with the best solution
11               forall  // Fix ’s neighbors solution
12               do
13                      ;   // is the solution for bird
15               end forall
17        end while
18       ; ;   // First action and next state
19        if  then
20               ; ;   // Proceed to the next level
22        end if
23       ;   // Adjust the neighborhood size
25 end while
Algorithm 1 DAMPC

DAMPC (see Alg. 1) takes as input an MDP, a threshold defining the goal states, the maximum horizon length, the maximum number of time steps, the number of birds, and a scaling factor. It outputs a state in the set of goal states, together with the sequence of actions taking the flock from its initial state to that goal state.

The initialization step (Line 1) picks an initial state from the initial distribution, fixes the initial level as the cost of this state, sets an array of costs to infinite values, sets the initial time, and sets the number of birds to process.

The outer while loop (Lines 2-23) is active as long as the cost has not dropped below the threshold and time has not expired. In each time step, DAMPC first sets the sequences of accelerations to “not fixed yet” (nfy), and then iterates (Lines 4-17) until all birds fix their accelerations through global consensus. This happens as follows. First, all birds determine their neighborhood (subflock) and the cost decrement that will bring them to the next level (Lines 8-9). Second, they call LocalAMPC (see Section 5.2), which returns (Line 10): a sequence of actions for the subflock, which should decrease the subflock cost by the required decrement; the state of the subflock after executing the first action in the sequence; the state after executing the last action; and the cost in the last state. Third, they determine the subflock with the lowest cost as the winner (Line 12), and fix the acceleration sequences of all birds in this subflock (Lines 13-16). (This last step requires global consensus; more generally, the loop on Lines 4-17 requires birds to have global information at multiple places, which can be quite inefficient in practice. A more practical approach, given in [10], is based on a dynamic local (among neighbors) consensus, for fixed neighborhood graphs. Since we are adapting and changing neighborhood sizes, those results are not directly applicable. Nevertheless, a similar truly distributed approach could be used in place of the global consensus on Lines 4-17, but we preferred to experiment with the easier-to-implement global version, since adaptive neighborhood resizing is the main focus of our paper, and our goal was to evaluate whether neighborhood sizes really shrink and remain small as the flock converges to a goal state.)

After all acceleration sequences are fixed, i.e., the consensus loop has terminated, the first accelerations in these sequences are selected for the output (Line 18). The next state is set to the state of the flock after executing these first actions, and we also record the state reached after executing the last action in the sequences. If we found a path that eventually decreases the cost by the required decrement, we have reached the next level, and advance time (Lines 19-21). In that case, we decrease the neighborhood size, and increase it otherwise (Line 22).

Figure 2: Left: Last round of consensus for neighborhood size four where Bird 2 runs Local AMPC taking as an input for PSO fixed accelerations of Birds 3, 4, and 6 together with nfy value for Bird 1. Middle: Second consensus round during the next time step where the neighborhood size was reduced to three as a result of the decreasing cost at the previous time step. Right: Third consensus round during the same time step where Bird 7 is the only one whose acceleration has not been fixed yet and it simply has to compute the solution for its neighborhood given fixed accelerations of Birds 4, 5, and 6.

Fig. 2 illustrates DAMPC for two consecutive consensus rounds after neighborhood resizing. Bigger yellow circles represent birds that are running LocalAMPC. Smaller blue circles represent birds whose acceleration sequences are not completely fixed yet. Black squares mark birds with already fixed accelerations. Connecting lines are neighborhood relationship.

Working with a real CPS flock requires careful consideration of energy consumption. Our algorithm accounts for this by using the smallest neighborhood necessary when computing the next control inputs. Regarding deployment, we envision the following approach. Alg. 2 can be implemented as a local controller on each drone, with communication realized by broadcasting positions and the output of the algorithm to the other drones in the neighborhood through a shared memory. In this case, according to Alg. 1, a central agent will be needed to periodically compute the global cost and resize the neighborhood. Before deployment, we plan to use the OpenUAV simulator [11] to test DAMPC on the drone formation-control scenarios described in [9].

5.2 The Local AMPC Algorithm

LocalAMPC is a modified version of the AMPC algorithm [13], as shown in Alg. 2. Its input is an MDP, the current state of a subflock, a vector of acceleration sequences (one sequence for each bird in the subflock), a cost decrement to be achieved, a maximum horizon, and a scaling factor. In the input, some accelerations may not be fixed yet, that is, they have the value nfy.

Its output is a vector of acceleration sequences, one for each bird, that decreases the cost of the subflock the most; the state of the subflock after executing the first action in the sequence; the state after executing all actions; and the cost actually achieved by the subflock in the final state.

Input : , , , , ,
Output : , , ,
1 ; Inf; // Initialization
2 while  do
3        // Run PSO with local information and partial solution
4        PSO();
5        ; ; ; // increase horizon
7 end while
Algorithm 2 LocalAMPC

LocalAMPC first initializes (Line 1) the number of particles to be used by the particle swarm optimization algorithm (PSO), proportionally to the horizon of the input accelerations, the number of birds in the subflock, and the scaling factor. It then tries to decrement the cost of the subflock by at least the required amount, as long as the maximum horizon is not reached (Lines 2-6).

For this purpose, it calls PSO (Line 4) with an increasingly longer horizon and an increasingly larger number of particles. The idea is that the flock might first have to overcome a cost bump before it reaches a state where the cost decreases by the required amount. PSO extends the input sequences of fixed actions to the desired horizon with new actions that are most successful in decreasing the cost of the flock, and it computes from scratch the action sequences for the nfy entries. PSO also returns the states of the flock after applying the first and the last actions, respectively. Using this information, LocalAMPC computes the actual cost achieved by the flock.
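This retry loop can be sketched as follows, with `pso` as a hypothetical stand-in for the PSO call (returning the extended action sequences and the states after the first and last actions); the horizon and particle counts grow together, as in the description above.

```python
def local_ampc(state, fixed_actions, cost, pso, delta, h_max, p0):
    """LocalAMPC sketch: rerun PSO with a growing horizon and particle
    count until the subflock cost drops by at least delta or h_max is hit."""
    base = cost(state)
    h, particles = 1, p0
    best = None
    while h <= h_max:
        actions, first_state, last_state = pso(state, fixed_actions, h, particles)
        best = (actions, first_state, last_state, cost(last_state))
        if best[3] <= base - delta:   # sufficient decrease achieved
            break
        h += 1                        # lengthen horizon to overcome cost bumps
        particles += p0               # and search with more particles
    return best
```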

Lemma 1 (Local Convergence)

Given an MDP with a cost function and a nonempty set of target states, if the transition relation is controllable with actions for every (local) subset of agents, then there exists a finite (maximum) horizon such that LocalAMPC is able to find the best actions, decreasing the cost of a neighborhood of agents by at least a given amount.


In the input to LocalAMPC, the accelerations of some birds in may be fixed (for some horizon). As a consequence, the MDP may not be fully controllable within this horizon. Beyond this horizon, however, PSO is allowed to freely choose the accelerations, that is, the MDP is fully controllable again. The result now follows from convergence of AMPC (Theorem 1 from [13]).∎

5.3 Dynamic Neighborhood Resizing

The key feature of DAMPC is that it adaptively resizes neighborhoods. This is based on the following observation: as the agents are gradually converging towards a global optimal state, they can explore smaller neighborhoods when computing actions that will improve upon the current configuration.

Adaptation works on the look-ahead cost, which is the cost that is reachable at some future time. Line 20 of DAMPC is reached (and the level is incremented) whenever we are able to decrease this look-ahead cost. If the level is incremented, the neighborhood size k is decremented, and incremented otherwise, bounded from below by a minimum size k_min and from above by the flock size B:

k(t+1) = max(k(t) − 1, k_min) if the level was incremented, and k(t+1) = min(k(t) + 1, B) otherwise.
In Fig. 3 we depict a simulation-trace example, demonstrating how levels and neighborhood size are adapting to the current value of the cost function.

Figure 3: Left: Blue bars are the values of the cost function in every time step. Red dashed line in the value of the Lyapunov function serving as a threshold for the algorithm. Black solid line is resizing of the neighborhood for the next step given the current cost. Right: Step-by-step evolution of the flock from an arbitrary initial configuration in the left lower corner towards a V-formation in the right upper corner of the plot.
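The resizing policy described above can be sketched in a few lines; the bounds `k_min` and `k_max` below are illustrative placeholders for the minimum viable neighborhood size and the flock size.

```python
def resize_neighborhood(k, level_reached, k_min=3, k_max=7):
    """Shrink the neighborhood by one bird when the look-ahead cost reached
    the next level; grow it by one otherwise, staying within [k_min, k_max]."""
    return max(k - 1, k_min) if level_reached else min(k + 1, k_max)
```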

6 Convergence and Stability

Since we are solving a nonlinear, nonconvex optimization problem, the cost itself may not decrease monotonically. However, the look-ahead cost, the cost of some future reachable state, monotonically decreases. These costs are stored in the level variables ℓ_0, ℓ_1, ℓ_2, … of Algorithm DAMPC, and they define a Lyapunov function V:

V(t) = ℓ_t,   (4)

where the levels decrease by at least a minimum Δ > 0, that is, ℓ_{t+1} ≤ ℓ_t − Δ.

Lemma 2

The function V defined by (4) is a valid Lyapunov function, i.e., it is positive-definite and monotonically decreases until the system reaches its goal state.


Note that the cost function is positive by definition, and since each level equals the cost of some reachable state, V is nonnegative. Line 19 of Algorithm DAMPC guarantees that V is monotonically decreasing by at least the minimum decrement.∎

Lemma 3 (Global Consensus)

Given Assumptions 1-6 in Section 5, all agents in the system will fix their actions in a finite number of consensus rounds.


During the first consensus round, each agent in the system runs LocalAMPC for its own neighborhood of the current size. Due to Lemma 1, there exists a horizon for which a solution, i.e., a set of action (acceleration) sequences, will be found for all agents in the considered neighborhood. Consequently, at the end of the round, the solutions for at least all the agents in the winning neighborhood, i.e., the neighborhood of the agent that proposed the globally best solution, will be fixed. During the next rounds the procedure recurses. Hence, the set of agents with nfy values decreases monotonically with every consensus round.∎

Global consensus is thus reached by the system within a finite number of communication rounds. However, to achieve the global optimization goal, we must also prove that the consensus value converges to the desired property.

Definition 3

Let {X_t} be a sequence of random vector-variables and let X be a random or non-random vector. Then X_t converges with probability one to X if P(lim_{t→∞} X_t = X) = 1.

Lemma 4 (Max-neighborhood convergence)

If DAMPC is run with the neighborhood size held constant at the size of the whole flock, then it behaves identically to centralized AMPC.


If DAMPC uses the entire flock as each bird's neighborhood, then it behaves like centralized AMPC, because the accelerations of all birds are fixed in the first consensus round.∎

Theorem 6.1 (Global Convergence)

Let M be an MDP with a positive and continuous cost function and a nonempty set of target states G. If there exist a finite horizon and a finite number of execution steps such that centralized AMPC is able to find a sequence of actions that brings M from an initial state to a state in G, then DAMPC is also able to do so, with probability one.


We illustrate the proof with our flocking example. Note that the theorem is valid in the general formulation above, because as the global Lyapunov function approaches zero, the local dynamical thresholds will not allow neighborhood solutions to diverge significantly from the state obtained as a result of repeated consensus rounds. Owing to Lemma 1, after the first consensus round, Alg. 2 finds a sequence of best accelerations for the birds in the winning subflock, decreasing their cost by the required amount. In the next consensus round, the birds outside this subflock have to adjust the accelerations for their own subflocks, while keeping the accelerations of the neighbors in the winning subflock fixed to the already committed solutions. If a bird fails to decrease the cost of its subflock by the required amount within the current prediction horizon, it can explore a longer horizon, up to the maximum. This allows PSO to compute accelerations for the birds in its subflock over the extended horizon interval, decreasing the cost of the subflock by the required amount. Hence, the entire flock decreases its cost by the required decrement (this defines the Lyapunov function in Eq. 4), ensuring convergence to a global optimum. If the maximum horizon is reached before the cost of the flock has decreased by the required amount, the size of the neighborhood is increased by one, eventually reaching the size of the whole flock. Consequently, using Theorem 1 in [13], there exists a horizon that ensures global convergence. For this choice of horizon, and for the maximum neighborhood size, the cost is guaranteed to decrease by the required amount, and we are bound to proceed to the next level in DAMPC. The Lyapunov function on levels guarantees that we have no indefinite switching between the “decreasing neighborhood size” and “increasing neighborhood size” phases, and we converge (see Fig. 1).∎

Fig. 1 illustrates the proof of global convergence of our algorithm, where we overcome a local minimum by gradually adapting the neighborhood size to proceed to the next level defined by the Lyapunov function. In the plot on the right, the birds start from an arbitrary initial state near the origin and eventually reach a V-formation. At an intermediate point, however, the flock starts to drift away from a V-formation, but our algorithm is able to bring it back. Let us see how this is reflected in the changing cost and neighborhood sizes. In the plot on the left, the cost starts very high (blue lines) but mostly decreases over the initial time steps. When an unexpected rise in the cost value occurs (corresponding to the divergence on the right), our algorithm adaptively increases the horizon first, and eventually the neighborhood size, which grows back to the full flock, to overcome the divergence from V-formation and maintain the Lyapunov property of the red function. Note that the neighborhood size eventually decreases to three, the minimum needed to maintain a V-formation.

The result presented in [13], applied to our distributed model, together with Theorem 6.1, ensures the validity of the following corollary.

Corollary 1 (Global Stability)

Assume the set of target states has been reached and one of the following perturbations of the system dynamics is applied: (a) the best next action is chosen with probability zero (crash failure); (b) an agent is displaced (sensor noise); (c) an action of a player with an opposing objective is performed. Then, applying Algorithm 1, the system converges with probability one from the disturbed state back to a target state.

7 Experimental Results

We comprehensively evaluated DAMPC to compute statistical estimates of the success rate of reaching a V-formation from an arbitrary initial state in a finite number of steps. We considered flocks of 5, 7, and 9 birds. The specific reachability problem we addressed is as follows: given the flock MDP and the randomized strategy of Alg. 1, estimate the probability of reaching a state whose cost is below the given threshold, starting from an arbitrary initial state, in the underlying Markov chain induced by the strategy on the MDP.

Since the exact solution to this stochastic reachability problem is intractable (infinite, continuous state and action spaces), we solve it approximately using statistical model checking (SMC). In particular, as the probability of reaching a V-formation under our algorithm is relatively high, we can safely employ the additive-error Monte-Carlo approximation scheme from [4]. This scheme performs a number of i.i.d. executions (each up to a maximum time horizon), determines for each execution whether it reaches a V-formation, and returns the mean of the resulting random variables. Following [4], the number of executions is fixed using Bernstein's inequality so that the mean approximates the true probability with a given additive error and confidence. In particular, we are interested in a Bernoulli random variable returning 1 if the cost drops below the threshold and 0 otherwise. In this case, we can use the Chernoff-Hoeffding instantiation of Bernstein's inequality [4] and fix the proportionality constant accordingly. Executing the algorithm the resulting number of times for each flock size gives us the desired confidence ratio and additive error.

We used the following parameters: the number of birds, the cost threshold, the maximum horizon, and the number of particles in PSO; DAMPC is allowed to run for a bounded maximum number of steps. The initial configurations are generated independently, uniformly at random, subject to constraints on the initial positions and velocities. To perform the SMC evaluation of DAMPC, and to compare it with the centralized AMPC from [13], we implemented the above experiments for both algorithms in C and ran them on a 2x Intel Xeon E5-2660 octa-core, 2.2 GHz, 64 GB platform.

Our experimental results are given in Table 1. We used three different ways of computing the average number of neighbors for successful runs. Assuming a successful run converges after a certain number of steps, we (1) compute the average over the steps up to convergence, reported as "for good runs until convergence"; (2) extend the partial run into a full maximum-length run and compute the average over all steps, reported as "for good runs over steps"; or (3) average over the steps after convergence, reported as "for good runs after convergence", to illustrate global stability.

Number of Birds | DAMPC: 5, 7, 9 | AMPC: 5, 7, 9
Success rate
Avg. convergence duration
Avg. horizon
Avg. execution time (sec)
Avg. neighborhood size:
  for good runs until convergence
  for good runs over steps
  for good runs after convergence
  for bad runs
Table 1: Comparison of DAMPC and AMPC [13].

We obtain a high success rate for 5 and 7 birds, which does not drop significantly for 9 birds. The average convergence duration, horizon, and neighborhood size all increase monotonically as we consider more birds, as one would expect. The average neighborhood size is smaller than the number of birds, indicating that we improve over AMPC [13], where all birds need to be considered when synthesizing the next action. We also observe that the average number of neighbors for good runs until convergence is larger than for bad runs, except for 5 birds. The reason is that in some bad runs the cost drops quickly to a small value, resulting in a small neighborhood size, but gets stuck in a local minimum (e.g., the flock separates into two groups) due to the limitations imposed by fixing the algorithm's parameters. The neighborhood size remains small for the rest of the run, leading to a smaller average.

Finally, compared to the centralized AMPC [13], DAMPC is faster (e.g., two times faster for 5 birds). Our algorithm takes fewer steps to converge, and its average horizon is smaller. The smaller horizon and neighborhood sizes allow PSO to speed up its computation.

8 Conclusions

We introduced DAMPC, a distributed adaptive-neighborhood, adaptive-horizon model-predictive control algorithm, that synthesizes actions for a controllable Markov decision process (MDP), such that the MDP eventually reaches a state with cost close to zero, provided that the MDP has such a state.

The main contribution of DAMPC is that it adaptively resizes an agent’s local neighborhood, while still managing to converge to a goal state with high probability. Initially, when the cost value is large, the neighborhood of an agent is the entire multi-agent system. As the cost decreases, however, the neighborhood is resized to smaller values. Eventually, when the system reaches a goal state, the neighborhood size remains around a pre-defined minimal value.
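The resizing policy described above can be sketched as a single update rule: shrink the neighborhood after a successful (cost-decreasing) level transition, and grow it back toward the full flock otherwise. The constant and function below are our own illustration of this policy, not the paper's implementation:

```c
#define MIN_NEIGHBORS 3 /* minimum needed to maintain a V-formation */

/* One adaptive step of the neighborhood size k for a flock of b agents:
 * shrink when the last level transition decreased the cost by the
 * required amount, otherwise grow back toward the full flock. */
int resize_neighborhood(int k, int b, int cost_decreased)
{
    if (cost_decreased)
        return k > MIN_NEIGHBORS ? k - 1 : MIN_NEIGHBORS;
    return k < b ? k + 1 : b;
}
```

Clamping at both ends mirrors the behavior reported in the experiments: the size never exceeds the flock and, once the goal is reached, settles near the pre-defined minimum.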

This is a remarkable result showing that the local information needed to converge is strongly related to a cost-based Lyapunov function evaluated over a global system state. While our experiments were restricted to V-formation in bird flocks, our approach applies to reachability problems for any collection of entities that seek convergence from an arbitrary initial state to a desired goal state, where a notion of distance to it can be suitably defined.


  • [1] Dolev, D., Lynch, N.A., Pinter, S.S., Stark, E.W., Weihl, W.E.: Reaching approximate agreement in the presence of faults. Journal of the ACM (JACM) 33(3), 499–516 (1986)
  • [2] D’Andrea, R., Dullerud, G.E.: Distributed control design for spatially interconnected systems. IEEE Transactions on Automatic Control 48(9) (2003)
  • [3] Fowler, J.M., D’Andrea, R.: Distributed control of close formation flight. In: Proceedings of 41st IEEE Conference on Decision and Control (Dec 2002)
  • [4] Grosu, R., Peled, D., Ramakrishnan, C.R., Smolka, S.A., Stoller, S.D., Yang, J.: Using statistical model checking for measuring systems. In: Proceedings of the International Symposium Leveraging Applications of Formal Methods, Verification and Validation. LNCS, vol. 8803, pp. 223–238. Springer (2014)
  • [5] Johnsson, J.: Boeing copies flying geese to save fuel (2017)
  • [6] Kahn, H., Harris, T.E.: Estimation of particle transmission by random sampling. National Bureau of Standards applied mathematics series 12, 27–30 (1951)
  • [7] Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of 1995 IEEE International Conference on Neural Networks. pp. 1942–1948 (1995)
  • [8] Lukina, A., Esterle, L., Hirsch, C., Bartocci, E., Yang, J., Tiwari, A., Smolka, S.A., Grosu, R.: ARES: adaptive receding-horizon synthesis of optimal plans. In: Tools and Algorithms for the Construction and Analysis of Systems - 23rd International Conference, TACAS 2017. LNCS, vol. 10206, pp. 286–302 (2017)
  • [9] Lukina, A., Kumar, A., Schmittle, M., Singh, A., Das, J., Rees, S., van Buskirk, C.P., Sztipanovits, J., Grosu, R., Kumar, V.: Formation control and persistent monitoring in the openuav swarm simulator on the NSF CPS-VO. In: Proceedings of the 9th ACM/IEEE International Conference on Cyber-Physical Systems, ICCPS 2018. pp. 353–354. IEEE / ACM (2018)
  • [10] Saber, R.O., Murray, R.M.: Consensus protocols for networks of dynamic agents (2003)
  • [11] Schmittle, M., Lukina, A., Vacek, L., Das, J., van Buskirk, C.P., Rees, S., Sztipanovits, J., Grosu, R., Kumar, V.: Openuav: a UAV testbed for the CPS and robotics community. In: Proceedings of the 9th ACM/IEEE International Conference on Cyber-Physical Systems, ICCPS 2018. pp. 130–139. IEEE / ACM (2018)
  • [12] Su, L., Vaidya, N.H.: Fault-tolerant multi-agent optimization: optimal iterative distributed algorithms. In: Proceedings of the 2016 ACM Symposium on Principles of Distributed Computing. pp. 425–434. ACM (2016)
  • [13] Tiwari, A., Smolka, S.A., Esterle, L., Lukina, A., Yang, J., Grosu, R.: Attacking the V: on the resiliency of adaptive-horizon MPC. In: Automated Technology for Verification and Analysis - 15th International Symposium, ATVA 2017. LNCS, vol. 10482, pp. 446–462. Springer (2017)
  • [14] Yang, J., Grosu, R., Smolka, S.A., Tiwari, A.: Love thy neighbor: V-formation as a problem of model predictive control (extended abstract). In: Proceedings of CONCUR 2016, 27th International Conference on Concurrency Theory (Aug 2016)
  • [15] Ye, D., Zhang, J., Sun, Z.: Extended state observer–based finite-time controller design for coupled spacecraft formation with actuator saturation. Advances in Mechanical Engineering 9(4), 1–13 (2017)
  • [16] Zhan, J., Li, X.: Flocking of multi-agent systems via model predictive control based on position-only measurements. IEEE Trans. Industrial Informatics 9(1), 377–385 (2013)


Figure 4: Illustration of the clear view, velocity matching, and upwash benefit metrics. Left: A bird's view is partially blocked by two birds ahead of it, reducing its clear-view metric. Middle: A flock with unaligned bird velocities has a positive velocity-matching metric, which is zero when the velocities of all birds are aligned. Right: Illustration of the (right-wing) upwash benefit a bird receives from a bird ahead, depending on how it is positioned behind that bird. Note that a bird's downwash region is directly behind it.

Clear View.

This metric is defined by accumulating the percentage of a viewing cone that is blocked by other birds. The blockage computation involves the birds' wing span and the horizontal and vertical distances between pairs of birds, measured w.r.t. the viewing bird's direction; only birds in front of the viewing bird contribute. The clear-view metric of a flock state is the sum of the clear-view metrics of all birds. Its minimum value is attained in a perfect V-formation, where all birds have an unobstructed view.

Velocity Matching.

The velocity-matching metric of a flock state is defined as the difference between the velocity of each bird and those of all other birds, summed over all birds in the flock. Its minimum value is attained in a perfect V-formation, where all birds have the same velocity.

Upwash Benefit.

The trailing upwash is generated near the wingtips of a bird, while the downwash is generated directly behind it. An upwash measure is defined on the 2D space using a Gaussian-like model: one component separates the upwash and downwash regions, a smoothing function blends the transition between them, and the Gaussian means are chosen to maximize the upwash benefit behind and to the side of a bird and to minimize it directly behind. A bird's upwash-benefit metric is derived from the upwash it receives, and the metric for a flock state is the sum over all birds. In a V-formation, every bird except the leader attains the minimum upwash-benefit metric, while the leader, receiving no upwash, contributes the formation's residual upwash-benefit value.