I. Introduction
Reinforcement Learning is concerned with maximizing rewards from an environment through repeated interactions and trial and error. Such methods often rely on various approximations of the Bellman equation and include value function approximation, policy gradient methods, and more [1]. The Evolutionary Computation community, on the other hand, has developed a suite of methods for black box optimization and heuristic search [2]. Such methods have been used, for instance, to optimize the structure of neural networks for vision tasks [3]. Recently, Salimans et al. showed that a particular variant of evolutionary computation methods, termed Evolution Strategies (ES), is a fast and scalable alternative to other reinforcement learning approaches, solving the difficult humanoid MuJoCo task in 10 minutes [4]. The authors argue that ES has several benefits over other reinforcement learning methods: 1) the need to backpropagate gradients through a policy is avoided, which opens up a wider class of policy parameterizations; 2) ES methods are massively parallelizable, which allows for scaling up learning to larger, more complex problems; 3) ES often finds policies which are more robust than those found by other reinforcement learning methods; and 4) ES is better at assigning credit to changes in the policy over longer timescales, which enables solving tasks with longer time horizons and sparse rewards. In this work we leverage all four of these advantages by using ES to solve a problem with: 1) a more complex and interpretable policy architecture which allows for safety considerations; 2) a large-scale simulated environment with many interacting elements; 3) multiple sources of stochasticity, including variations in initial conditions, disturbances, etc.; and 4) sparse rewards which only occur at the very end of a long episode.
A common critique of evolutionary computation algorithms is a lack of convergence analysis or guarantees. Of course, for problems with non-differentiable and non-convex objective functions, analysis will always be difficult. Nevertheless, we show that the Evolution Strategies algorithm proposed by [4] is a special case of a class of model-based stochastic search methods known as Gradient-Based Adaptive Stochastic Search (GASS) [5]. This class of methods generalizes many stochastic search methods such as the well-known Cross Entropy Method (CEM) [6], CMA-ES [7], etc. By casting a non-differentiable, non-convex optimization problem as a gradient descent problem, one can arrive at nice asymptotic convergence properties and known convergence rates [8].
With more confidence in the convergence of Evolution Strategies, we demonstrate how ES can be used to efficiently solve both cooperative and competitive large-scale multi-agent problems. Many approaches to solving multi-agent problems rely on hand-designed and hand-tuned algorithms (see [9] for a review). One such example, distributed Model Predictive Control, relies on independent MPC controllers on each agent with some level of coordination between them [10, 11]. These controllers require hand-designed dynamics models, cost functions, feedback gains, etc., and thus demand expert domain knowledge. Additionally, scaling these methods up to more complex problems continues to be an issue. Evolutionary algorithms have also been tried as a solution to multi-agent problems, usually with smaller, simpler environments and policies of low complexity [12, 13]. Recently, a hybrid approach combining MPC with a genetic algorithm that evolves the cost function of a hand-tuned MPC controller was demonstrated for a UAV swarm combat scenario [14]. In this work we demonstrate the effectiveness of our approach on two complex multi-agent UAV swarm combat scenarios: one where a team of fixed-wing aircraft must attack a well-defended base, and one where two teams of agents go head to head to defeat each other. Such scenarios have been previously considered in simulated environments with less fidelity and complexity [15, 14]. We leverage the computational efficiency and flexibility of the recently developed SCRIMMAGE multi-agent simulator for our experiments (Figure 1) [16]. We compare the performance of ES against the Cross Entropy Method. We also show, for the competitive scenario, how the policy learns over time to coordinate a strategy in response to an enemy learning to do the same. We make our code freely available for use (https://github.com/ddfan/swarm_evolve).
II. Problem Formulation
We can pose our problem as the non-differentiable, non-convex optimization problem

x* = arg max_{x ∈ X} J(x),   (1)

where X, a nonempty compact set, is the space of solutions, and J : X → R is a non-differentiable, non-convex real-valued objective function. x could be any combination of decision variables of our problem, including neural network weights, PID gains, hardware design parameters, etc., which affect the outcome of the returns J. For reinforcement learning problems, x usually represents the parameters of the policy, and J is an implicit function of the sequential application of the policy to the environment. We first review how this problem can be solved using Gradient-Based Adaptive Stochastic Search methods and then show how the ES algorithm is a special case of these methods.
II-A Gradient-Based Adaptive Stochastic Search
The goal of model-based stochastic search methods is to cast the non-differentiable optimization problem (1) as a differentiable one by specifying a probabilistic model (hence "model-based") from which to sample [8]. Let this model be f(x; θ), where θ ∈ Θ is a parameter which defines the probability distribution (e.g. for Gaussian distributions, the distribution is fully parameterized by the mean and variance). Then the expectation of J(x) over the distribution f(x; θ) will always be less than or equal to the optimal value of J, i.e.

∫_X J(x) f(x; θ) dx ≤ J(x*).   (2)
The idea of Gradient-Based Adaptive Stochastic Search (GASS) is that one can perform a search in the space of parameters θ of the distribution, rather than in the space of x, for a distribution which maximizes the expectation in (2):

θ* = arg max_{θ ∈ Θ} ∫_X J(x) f(x; θ) dx.   (3)
Maximizing this expectation corresponds to finding a distribution which is maximally concentrated around the optimal x*. However, unlike (1), this objective function can now be made continuous and differentiable with respect to θ. With some assumptions on the form of the distribution, the gradient with respect to θ can be pushed inside the expectation.
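To make this concrete, the smoothed objective in (3) can be ascended using only samples of J: for a Gaussian model, ∇_μ E[J(x)] = E[J(x)(x − μ)/σ²]. The following is a minimal illustrative sketch on a toy quadratic (our own example, not the algorithm of [8]; the objective, step size, and sample count are arbitrary):

```python
import numpy as np

def smoothed_objective_grad(J, mu, sigma, n_samples, rng):
    """Monte Carlo estimate of grad_mu E_{x ~ N(mu, sigma^2 I)}[J(x)]
    via the score function: grad_mu log f(x; mu) = (x - mu) / sigma^2."""
    eps = rng.standard_normal((n_samples, mu.size))       # x = mu + sigma * eps
    returns = np.array([J(mu + sigma * e) for e in eps])
    return (returns[:, None] * eps).mean(axis=0) / sigma

# Toy example: maximize J(x) = -||x - 3||^2 without ever differentiating J.
rng = np.random.default_rng(0)
mu, sigma = np.zeros(5), 0.5
for _ in range(300):
    grad = smoothed_objective_grad(lambda x: -np.sum((x - 3.0) ** 2),
                                   mu, sigma, 200, rng)
    mu = mu + 0.05 * grad
# mu drifts toward the optimum at 3 using only sampled returns.
```

Note that J enters only through its sampled values, which is what makes the approach applicable to non-differentiable returns.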
The GASS algorithm presented by [8] is applicable to the exponential family of probability densities:

f(x; θ) = exp(θᵀ T(x) − φ(θ)),   (4)

where φ(θ) is the normalizing term, and T(x) is the vector of sufficient statistics. Since we are concerned with showing the connection with ES, which uses parameter perturbations sampled with Gaussian noise, we assume that f(x; θ) is Gaussian. Furthermore, since we are concerned with learning a large number of parameters (i.e. weights in a neural network), we assume an independent Gaussian distribution over each parameter. Then the distribution is parameterized by θ = (μ, σ), where μ and σ are vectors of the means and standard deviations corresponding to the distribution of each parameter, respectively.
We present the GASS algorithm for this specific set of probability models (Algorithm 1), although the analysis for convergence holds for the more general exponential family of distributions. For each iteration k, the GASS algorithm involves drawing N_k samples of parameters x_k^i ~ f(x; θ_k), i = 1, …, N_k. These parameters are then used to sample the return function J(x_k^i). The returns are fed through a shaping function S(·) and then used to calculate an update on the model parameters θ_{k+1}.
The shaping function S(·) is required to be nondecreasing and bounded from above and below for bounded inputs, with the lower bound away from 0. Additionally, the set of optima of S(J(x)) must be a nonempty subset of the set of optima of the original problem (1). The shaping function can be used to adjust the exploration/exploitation trade-off or to help deal with outliers when sampling. The original analysis of GASS assumes a more general form of S_k(·), where S can change at each iteration. For simplicity we assume here that it is deterministic and unchanging per iteration. GASS can be considered a second-order gradient method and requires estimating the variance of the sampled parameters:
V_k = (1/(N_k − 1)) Σ_{i=1}^{N_k} T(x_k^i) T(x_k^i)ᵀ − T̄_k T̄_kᵀ,  where T̄_k = (1/N_k) Σ_{i=1}^{N_k} T(x_k^i).   (5)
In practice, if the size n of the parameter space is large, as is the case in neural networks, this variance matrix will be quadratic in n and will be costly to compute. In our work we approximate V_k with independent calculations of the variance on the parameters of each independent Gaussian. With a slight abuse of notation, consider x as a scalar element of the parameter vector. We then have, for each scalar element, a 2 × 2 variance matrix:
V_k = (1/(N_k − 1)) Σ_{i=1}^{N_k} T(x_k^i) T(x_k^i)ᵀ − T̄_k T̄_kᵀ,  with T(x) = (x, x²)ᵀ for the scalar element x.   (6)
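This per-parameter approximation can be sketched in a few lines of numpy (our own illustration): for each scalar parameter we form the sufficient statistics T(x) = (x, x²) and estimate a 2 × 2 sample covariance, rather than one joint covariance over all parameters:

```python
import numpy as np

def per_parameter_variances(samples):
    """samples: (N, n) array holding N draws of an n-dimensional parameter
    vector. Returns an (n, 2, 2) array: for each scalar parameter, the
    sample covariance of its sufficient statistics T(x) = (x, x^2)."""
    N, _ = samples.shape
    T = np.stack([samples, samples ** 2], axis=-1)   # (N, n, 2)
    centered = T - T.mean(axis=0)                    # (N, n, 2)
    # Accumulate the 2x2 outer products over the N samples, per parameter.
    return np.einsum('inj,ink->njk', centered, centered) / (N - 1)

rng = np.random.default_rng(1)
V = per_parameter_variances(rng.normal(size=(500, 3)))
# n independent 2x2 blocks instead of one joint covariance matrix.
```

The cost is linear in n, which is what makes the variance terms tractable for neural-network-sized parameter vectors.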
Theorem 1 shows that GASS produces a sequence of θ_k that converges to a limit set which specifies a set of distributions that maximize (3). Distributions in this set will specify how to choose x to ultimately maximize (1). As with most non-convex optimization algorithms, we are not guaranteed to arrive at the global maximum, but using probabilistic models and a careful choice of the shaping function should help avoid early convergence to suboptimal local maxima. The proof relies on casting the update rule in the form of a generalized Robbins-Monro algorithm (see [8], Thms. 1 and 2). Theorem 1 also specifies convergence rates in terms of the number of iterations k, the number of samples per iteration N_k, and the learning rate α_k. In practice, Theorem 1 implies the need to carefully balance the increase in the number of samples per iteration and the decrease in the learning rate as iterations progress.
Assumption 1

The learning rate α_k → 0 as k → ∞, and Σ_{k=0}^∞ α_k = ∞.

The sample size N_k increases to ∞ as k → ∞; also, α_k and N_k jointly satisfy α_k / √N_k = O(k^{−β}) for some β > 1.

S(J(x)) is bounded on X.

If θ* is a local maximum of (3), the Hessian of the objective in (3) is continuous and symmetric negative definite in a neighborhood of θ*.
II-B Evolution Strategies
We now review the ES algorithm proposed by [4] and show that it is a first-order approximation of the GASS algorithm. The ES algorithm consists of the same two phases as GASS: 1) randomly perturb parameters with noise sampled from a Gaussian distribution; 2) calculate returns and calculate an update to the parameters. The algorithm is outlined in Algorithm 2. Once returns are calculated, they are sent through a function S(·) which performs fitness shaping [17]. Salimans et al. used a rank transformation function for S(·), which they argue reduced the influence of outliers at each iteration and helped to avoid local optima.
It is clear that the ES algorithm is a sub-case of the GASS algorithm where the distribution over the variance is a point distribution, i.e. σ is held fixed. We can also recover the ES algorithm by ignoring the variance terms in Algorithm 1. Instead of the normalizing term used by GASS, ES normalizes by the number of samples N_k. The small stabilizing constant in GASS becomes the variance term σ in the ES algorithm. The update rule in Algorithm 2 involves multiplying the scaled returns by the noise, which corresponds exactly to the term S(J(x_k^i)) T(x_k^i) in Algorithm 1.
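The two phases above can be sketched compactly (our illustration in the spirit of Algorithm 2, with centered-rank fitness shaping as described by [4]; the toy objective and all constants are ours):

```python
import numpy as np

def centered_ranks(returns):
    """Rank transformation: map raw returns to ranks rescaled to
    [-0.5, 0.5], bounding the influence of outlier returns."""
    ranks = np.empty(len(returns), dtype=float)
    ranks[np.argsort(returns)] = np.arange(len(returns))
    return ranks / (len(returns) - 1) - 0.5

def es_step(J, theta, sigma, n_samples, lr, rng):
    """One ES update: perturb, evaluate, shape the returns, and move the
    mean along the noise directions weighted by shaped returns."""
    eps = rng.standard_normal((n_samples, theta.size))
    shaped = centered_ranks(np.array([J(theta + sigma * e) for e in eps]))
    return theta + lr * (shaped[:, None] * eps).mean(axis=0) / sigma

# Toy example: optimize a quadratic with fixed sigma.
rng = np.random.default_rng(0)
theta = np.zeros(10)
for _ in range(200):
    theta = es_step(lambda x: -np.sum((x - 1.0) ** 2), theta, 0.1, 100, 0.05, rng)
```

Each inner evaluation of J is independent, which is what makes the algorithm embarrassingly parallel across workers.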
We see that ES enjoys the same asymptotic convergence rates offered by the analysis of GASS. While GASS is a second-order method and ES is only a first-order method, in practice ES uses approximate second-order gradient descent methods which adapt the learning rate in order to speed up and stabilize learning. Examples of these methods include ADAM, RMSProp, SGD with momentum, etc., which have been shown to perform very well for neural networks. Therefore we can treat ES as a first-order approximation of the full second-order variance updates which GASS uses. In our experiments we use ADAM [18] to adapt the learning rate for each parameter. As similarly reported in [4], when using adaptive learning rates we found little additional improvement from adapting the variance of the sampling distribution. We hypothesize that a first-order method with adaptive learning rates is sufficient for achieving good performance when optimizing neural networks. For other types of policy parameterizations, however, the full second-order treatment of GASS may be more useful. It is also possible to mix and match which parameters require a full variance update and which can be updated with a first-order approximate method. We use the rank transformation function for S(·) and keep σ constant.
II-C Learning Structured Policies for Multi-Agent Problems
Now that we are more confident about the convergence of the ES/GASS method, we show how ES can be used to optimize a complex policy in a large-scale multi-agent environment. We use the SCRIMMAGE multi-agent simulation environment [16], as it allows us to quickly simulate complex multi-agent scenarios in parallel. We populate our simulation with 6DoF fixed-wing aircraft and quadcopters, with dynamics models having 10 and 12 states, respectively. These dynamics models allow for full ranges of motion within realistic operating regimes. Stochastic disturbances in the form of wind and control noise are modeled as additive Gaussian noise. Ground and midair collisions can occur, which result in the aircraft being destroyed. We also incorporate a weapons module which allows for targeting and firing at an enemy within a fixed cone projecting from the aircraft's nose. The probability of a hit depends on the distance to the target and the total area presented by the target to the attacker. This area is based on the wireframe model of the aircraft and its relative pose. For more details, see our code and the SCRIMMAGE simulator documentation.
We consider the case where each agent uses its own policy to compute its own controls, but where the parameters of the policies are the same for all agents. This allows each agent to control itself in a decentralized manner, while allowing for beneficial group behaviors to emerge. Furthermore, we assume that friendly agents can communicate to share states with each other (see Figure 2). Because we have a large number of agents (up to 50 per team), to keep communication costs lower we only allow agents to share information locally, i.e. agents close to each other have access to each other's states. In our experiments we allow each agent to sense the states of the closest 5 friendly agents, for a total of five incoming state messages.
Additionally, each agent is equipped with sensors to detect enemy agents. Full state observability is not available here; instead we assume that sensors are capable of sensing an enemy's relative position and velocity. In our experiments we assumed that each agent is able to sense the nearest 5 enemies, for a total of 5 × 7 = 35 dimensions of enemy data (7 states per enemy: relative xyz position, distance, and relative xyz velocities). The sensors also provide information about the relative headings and distances of the home and enemy bases. With the addition of the agent's own state, the policy's observation input is a fixed-dimensional vector. These input states are fed into the agent's policy: a neural network with 3 fully connected layers of sizes 200, 200, and 50, which outputs 3 numbers representing a desired relative heading. Each agent's neural network has more than 70,000 parameters. Each agent uses the same neural network parameters as its teammates, but since each agent encounters a different observation at each timestep, the output of each agent's neural network policy will be unique. It may also be possible to learn unique policies for each agent; we leave this for future work.
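As a sanity check on the parameter count: the layer sizes come from the text, but the observation dimension below is a hypothetical placeholder (the paper's exact input size depends on the per-agent state dimensions):

```python
# Three fully connected layers (200, 200, 50) and a 3-dimensional heading
# output, as described in the text. obs_dim is a hypothetical placeholder.
obs_dim = 100
sizes = [obs_dim, 200, 200, 50, 3]
n_params = sum(a * b + b for a, b in zip(sizes[:-1], sizes[1:]))  # weights + biases
# With obs_dim = 100 this gives 70,603 parameters, consistent with the
# "more than 70,000 parameters" figure in the text.
```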
With safety being a large concern in UAV flight, we design the policy to take safety and control considerations into account. The relative heading output from the neural network policy is intended to be used by a PID controller to track the heading. The PID controller provides low-level control commands to the aircraft (thrust, aileron, elevator, rudder). However, to prevent cases where the neural network policy guides the aircraft into crashing into the ground or allies, etc., we override the neural network heading with an avoidance heading if the aircraft is about to collide with something. This helps to focus the learning process on how to intelligently interact with the environment and allies rather than learning how to avoid obvious mistakes. Furthermore, by designing the policy in a structured and interpretable way, it will be easier to take the learned policy directly from simulation into the real world. Since the neural network component of the policy does not produce low-level commands, it is invariant to different low-level controllers, dynamics, PID gains, etc. This aids in learning more transferable policies for real-world applications.
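The override structure described above can be sketched as follows; the three callables are placeholders for the components named in the text (the neural network heading, the collision check, and the avoidance heading), not actual SCRIMMAGE APIs:

```python
import numpy as np

def structured_policy(obs, nn_heading, collision_imminent, avoidance_heading):
    """Learned behavior with a hand-designed safety override: the network
    proposes a relative heading, but an avoidance rule takes precedence
    whenever a collision is imminent."""
    if collision_imminent(obs):
        return avoidance_heading(obs)   # safety override
    return nn_heading(obs)              # learned behavior

# The returned heading would then be tracked by a PID controller that
# produces the low-level commands (thrust, aileron, elevator, rudder).
```

Because the learned component only ever emits a heading, swapping the low-level controller or PID gains does not invalidate the learned policy.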
III. Experiments
We consider two scenarios: a base attack scenario where a team of 50 fixed-wing aircraft must attack an enemy base defended by 20 quadcopters, and a team competitive task where two teams concurrently learn to defeat each other. In both tasks we use the following reward function:
(7) 
The reward function encourages air-to-air combat, as well as suicide attacks against the enemy base (e.g. a swarm of cheap, disposable drones carrying payloads). The last term encourages the aircraft to move towards the enemy during the initial phases of learning.
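The exact coefficients of (7) are not reproduced here, but a reward with the three components described above might look like the following (a purely hypothetical sketch; the weights and function names are ours, not the paper's):

```python
def reward(n_enemy_kills, n_base_hits, dist_to_enemy_base,
           w_kill=1.0, w_base=10.0, w_dist=1e-3):
    """Hypothetical reward with the three components described in the text:
    air-to-air kills, suicide hits on the enemy base, and a small shaping
    term pulling agents toward the enemy. All weights are illustrative."""
    return (w_kill * n_enemy_kills
            + w_base * n_base_hits
            - w_dist * dist_to_enemy_base)
```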
III-A Base Attack Task
Figure 4: Scores per iteration for the base attack task. Top: scores earned by perturbed policies during training; these scores are on average lower because they result from policies parameterized by randomly perturbed values. Bottom: scores earned during the course of training by the updated policy parameters. The red curve is the Evolution Strategies algorithm; blue is the Cross Entropy Method. The bold line is the median; shaded areas are 25/75 quartile bounds.
In this scenario a team of 50 fixed-wing aircraft must attack an enemy base defended by 20 quadcopters (Figure 3). The quadcopters use a handcrafted policy: in the absence of an enemy, they spread themselves out evenly to cover the base; in the presence of an enemy, they target the closest enemy, match that enemy's altitude, and fire repeatedly. Each training iteration used a fixed number of perturbations, simulation time step, and total episode length. Initial positions of both teams were randomized in a fixed area at opposite ends of the arena. Training took two days with full parallelization on a machine equipped with a Xeon Phi CPU (244 threads).
We found that over the course of training the fixed-wing team learned a policy where they quickly form a V-formation and approach the base. Some aircraft suicide-attack the enemy base while others begin dogfighting (see supplementary video: https://goo.gl/dWvQi7). We also compared our implementation of the ES method against the well-known Cross Entropy Method (CEM). CEM performs significantly worse than ES (Figure 4). We hypothesize that this is because CEM throws out a significant fraction of sampled parameters and therefore obtains a worse estimate of the gradient of (3). Comparison against full second-order methods such as CMA-ES or the full second-order GASS algorithm is unrealistic due to the large number of parameters in the neural network and the prohibitive computational cost of computing the covariances of those parameters.
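For contrast with the ES update, a minimal Cross Entropy Method sketch is shown below (our own illustration; the elite fraction and other constants are arbitrary). CEM refits the Gaussian to only the top-scoring samples, discarding the information carried by the remaining returns, which is the behavior we conjecture hurts its gradient estimate:

```python
import numpy as np

def cem_step(J, mu, sigma, n_samples, elite_frac, rng):
    """One Cross Entropy Method update: sample, keep only the elite
    fraction of samples by return, and refit the Gaussian to the elites."""
    samples = mu + sigma * rng.standard_normal((n_samples, mu.size))
    returns = np.array([J(s) for s in samples])
    n_elite = max(1, int(elite_frac * n_samples))
    elites = samples[np.argsort(returns)[-n_elite:]]
    # A small noise floor keeps the distribution from collapsing prematurely.
    return elites.mean(axis=0), elites.std(axis=0) + 0.05

# Toy example on a quadratic objective.
rng = np.random.default_rng(0)
mu, sigma = np.zeros(5), np.ones(5)
for _ in range(40):
    mu, sigma = cem_step(lambda x: -np.sum((x - 2.0) ** 2), mu, sigma, 100, 0.2, rng)
# Everything outside the elite set is discarded, unlike the ES update,
# which averages information from every sampled return.
```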
III-B Two Team Competitive Match
The second scenario we consider is one where two teams, each equipped with their own unique policies for their agents, learn concurrently to defeat their opponent (Figure 5). At each iteration, multiple simulations are spawned, each with a different random perturbation and with each team receiving a different perturbation. The updates for each policy are calculated based on the scores received from playing against the opponent's perturbed policies. The result is that each team learns to defeat a wide range of opponent behaviors at each iteration. We observed that the behavior of the two teams quickly approached a Nash equilibrium, where both sides try to defeat the maximum number of opponent aircraft in order to prevent higher-scoring suicide attacks (see supplementary video). The end result is a stalemate, with both teams annihilating each other and ending with tied scores (Figure 6). We hypothesize that more varied behavior could be learned by having each team compete against some past enemy team behaviors, or by building a library of policies from which to select, as frequently discussed in the evolutionary computation community [19].
IV. Conclusion
We have shown that Evolution Strategies are applicable for learning policies with many thousands of parameters for a wide range of complex tasks in both the competitive and cooperative multi-agent setting. By showing the connection between ES and more well-understood model-based stochastic search methods, we are able to gain insight into future algorithm design. Future work will include experiments with optimizing mixed parameterizations, e.g. optimizing both neural network weights and PID gains. In this case, the second-order treatment of non-neural-network parameters may be more beneficial, since the behavior of the system may be more sensitive to perturbations of those parameters. Another direction of investigation could be optimizing unique policies for each agent in the team. Yet another direction would be comparing other evolutionary computation strategies for training neural networks, including methods which use a more diverse population [20] or more genetic-algorithm-style heuristics [21].
References
 [1] Y. Li, "Deep Reinforcement Learning: An Overview," ArXiv e-prints, Jan. 2017.
 [2] K. Stanley and B. Bryant, "Real-time neuroevolution in the NERO video game," IEEE Transactions on Evolutionary Computation, 2005. [Online]. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1545941
 [3] O. J. Coleman, “Evolving Neural Networks for Visual Processing,” Thesis, 2010.
 [4] T. Salimans, J. Ho, X. Chen, S. Sidor, and I. Sutskever, "Evolution Strategies as a Scalable Alternative to Reinforcement Learning," ArXiv e-prints, Mar. 2017.
 [5] J. Hu, "Model-based stochastic search methods," in Handbook of Simulation Optimization. Springer, 2015, pp. 319–340.
 [6] S. Mannor, R. Rubinstein, and Y. Gat, "The cross entropy method for fast policy search," in Machine Learning: International Workshop Then Conference, vol. 20, no. 2, 2003, Conference Proceedings, p. 512.
 [7] N. Hansen, “The CMA evolution strategy: A tutorial,” CoRR, vol. abs/1604.00772, 2016. [Online]. Available: http://arxiv.org/abs/1604.00772
 [8] E. Zhou and J. Hu, "Gradient-based adaptive stochastic search for non-differentiable optimization," IEEE Transactions on Automatic Control, vol. 59, no. 7, pp. 1818–1832, 2014.
 [9] L. Panait and S. Luke, "Cooperative multi-agent learning: The state of the art," Autonomous Agents and Multi-Agent Systems, vol. 11, no. 3, pp. 387–434, 2005. [Online]. Available: http://link.springer.com/10.1007/s10458-005-2631-2
 [10] J. B. Rawlings and B. T. Stewart, “Coordinating multiple optimizationbased controllers: New opportunities and challenges,” Journal of Process Control, vol. 18, no. 9, pp. 839–845, 2008.
 [11] W. AlGherwi, H. Budman, and A. Elkamel, “Robust distributed model predictive control: A review and recent developments,” The Canadian Journal of Chemical Engineering, vol. 89, no. 5, pp. 1176–1190, 2011. [Online]. Available: http://doi.wiley.com/10.1002/cjce.20555
 [12] G. B. Lamont, J. N. Slear, and K. Melendez, "UAV swarm mission planning and routing using multi-objective evolutionary algorithms," in IEEE Symposium on Computational Intelligence in Multi-Criteria Decision Making, 2007, Conference Proceedings, pp. 10–20.
 [13] A. R. Yu, B. B. Thompson, and R. J. Marks, "Competitive evolution of tactical multi-swarm dynamics," IEEE Transactions on Systems, Man, and Cybernetics Part A: Systems and Humans, vol. 43, no. 3, pp. 563–569, 2013.
 [14] D. D. Fan, E. Theodorou, and J. Reeder, "Evolving cost functions for model predictive control of multi-agent UAV combat swarms," in Proceedings of the Genetic and Evolutionary Computation Conference Companion, ser. GECCO '17. New York, NY, USA: ACM, 2017, pp. 55–56. [Online]. Available: http://doi.acm.org/10.1145/3067695.3076019
 [15] U. Gaerther, "UAV swarm tactics: an agent-based simulation and Markov process analysis," 2015. [Online]. Available: https://calhoun.nps.edu/handle/10945/34665
 [16] K. J. DeMarco. (2018) SCRIMMAGE multi-agent robotics simulator. [Online]. Available: http://www.scrimmagesim.org/
 [17] D. Wierstra, T. Schaul, T. Glasmachers, Y. Sun, J. Peters, and J. Schmidhuber, “Natural evolution strategies.” Journal of Machine Learning Research, vol. 15, no. 1, pp. 949–980, 2014.
 [18] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
 [19] K. O. Stanley and R. Miikkulainen, "Competitive coevolution through evolutionary complexification," Journal of Artificial Intelligence Research, vol. 21, pp. 63–100, 2004.
 [20] E. Conti, V. Madhavan, F. Petroski Such, J. Lehman, K. O. Stanley, and J. Clune, "Improving Exploration in Evolution Strategies for Deep Reinforcement Learning via a Population of Novelty-Seeking Agents," ArXiv e-prints, Dec. 2017.
 [21] F. Petroski Such, V. Madhavan, E. Conti, J. Lehman, K. O. Stanley, and J. Clune, "Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning," ArXiv e-prints, Dec. 2017.