I Introduction
Robotic swarms are gradually leaving academic laboratories, e.g. [1, 2], in favor of industrial settings, as in the case of [3], to reach even less structured and more dynamic environments, like agricultural lands and construction sites [4]. This presents new challenges that robots have to face, which come either from unexpected and/or unmodeled phenomena or from changing environmental conditions. These issues become even more pronounced when the robots are deployed in the field for long-term applications, like persistent environmental surveillance [5] or plant growth monitoring [6]. Therefore, a way of encoding survivability [7], i.e., the ability to remain alive (in a robotic sense), is needed now more than ever. In this paper, we introduce a method which can be used to ensure survivability, and which makes use of optimization tools that only minimally influence the task the robots are asked to perform.
Several solutions have been proposed in order to make robots robust to unknown or changing environmental conditions and to ensure their applicability to unstructured or even hazardous environments [8]. Moreover, in order to let the robots survive for as long as possible, some of the proposed methods entail scheduled periodic maintenance [9], or path optimization with the aim of maximizing the time spent in the field and minimizing the consumed energy [10]. Other methods employ a power-dependent multi-objective optimization to ensure that the robots execute their task while maintaining a desired energy reserve, as done in [11]. In both cases, careful parameter tuning is required in order to prevent situations in which the robots trade survivability for shorter-term rewards.
As a matter of fact, goal-oriented control strategies may not be ideal for long-term applications, where robustness to changing environmental conditions is required. Indeed, control policies obtained using optimal control strategies are characterized by a fragility related to the precise model assumptions [12]. These are likely to be violated during the long time horizons over which the robots are deployed in the field. For this reason, in this paper, we consider a constraint-oriented approach for multi-robot systems, where the survivability of the swarm is enforced as a constraint on the robots' task, encoded by the nominal input to the robots, $\hat u$. The control input is, then, synthesized, at each point in time, by solving the optimization problem
(1) $\min_u\ \|u - \hat u\|^2$
(2) $\text{s.t.}\ c_{\mathrm{surv}}(x, u) \ge 0$
where $c_{\mathrm{surv}}$ is the survivability constraint, which is also a function of the robots' state $x$ [7].
It is informative to note that ecological studies have shown that the constraints imposed by an environment strongly determine the behaviors developed by animals living in it [13]. Inspired by this concept, we ask whether robotic swarms can be controlled using constraints only. What this entails is that robots are programmed to do nothing, subject to task and survivability constraints. This is formalized by the following optimization problem
(3) $\min_u\ \|u\|^2$
(4) $\text{s.t.}\ c_{\mathrm{task}}(x, u) \ge 0$
(5) $c_{\mathrm{surv}}(x, u) \ge 0$
where $c_{\mathrm{task}}$ encodes the task constraint, which is equivalent to executing the nominal input $\hat u$ in (1). Thus, what we could call a robot-ecological formulation [7] naturally lends itself to be implemented using optimization-based control techniques.
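To make the "programmed to do nothing, subject to constraints" idea concrete, here is a minimal numeric sketch (ours, not from the paper; the helper name `min_norm_input` is hypothetical): with a single affine constraint $a^T u \ge b$, the minimum-norm input is zero when the constraint is already satisfied, and the projection of zero onto the constraint boundary otherwise.

```python
def min_norm_input(a, b):
    """Minimum-norm u solving: min ||u||^2  s.t.  a . u >= b.
    If u = 0 already satisfies the constraint (b <= 0), "do nothing";
    otherwise the optimum is the projection of 0 onto the half-space."""
    if b <= 0.0:
        return [0.0] * len(a)
    n2 = sum(ai * ai for ai in a)
    return [b * ai / n2 for ai in a]

u = min_norm_input([1.0, 0.0], 2.0)   # constraint: first input component >= 2
```

Note how the robot moves only when the constraint forces it to, which is exactly the flavor of the constraint-only formulation above.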
In this paper, we first give sufficient conditions for turning certain classes of multi-robot tasks into constraints within an optimization problem. Then, we present a systematic way of doing so. Finally, we propose an effective task prioritization technique obtained by combining hard and soft constraints, such as the survivability and task execution constraints in (3).
The remainder of the paper is organized as follows. In Section II, we briefly recall the control techniques that will be used to synthesize the constraint-driven coordinated control policies for multi-robot systems. Then, we introduce an optimization program for executing a minimum-energy gradient flow, which will be used, in Section III, to formalize the constraint-based control of multi-robot systems. Moreover, we show how to achieve decentralized finite-time minimization algorithms using the presented approach. Two applications are presented in Section IV, namely formation control and coverage control. Section V reports the results of experiments executed with a team of ground mobile robots implementing the proposed controller to execute a long-term environmental monitoring task.
II Constraint-Based Control Design
In this section, we review the concepts related to control Lyapunov functions and control barrier functions, which will then be used to formulate optimization problems whose solutions correspond to the execution of decentralized coordinated controllers for multi-robot systems.
II-A Control Lyapunov and Control Barrier Functions
In order to design controllers that allow the execution of multi-robot tasks, we make use of control Lyapunov functions and, in particular, we resort to methods from finite-time stability theory of dynamical systems.
Consider the dynamical system in control affine form
(6) $\dot x = f(x) + g(x)\,u$
with $x \in \mathbb{R}^n$, $u \in \mathbb{R}^m$, and $f$ and $g$ locally Lipschitz continuous vector fields. One of the results we will use is given by the following theorem.
Theorem 1 (Based on Theorem 4.3 in [14]).
Given the dynamical system (6) and a continuous, positive definite function $V$, a continuous controller $u$ such that
$L_fV(x) + L_gV(x)\,u(x) \le -\gamma\,V(x)^{\alpha},$
where $L_fV$ and $L_gV$ denote the Lie derivatives of $V$ in the directions of $f$ and $g$, respectively, and $\gamma > 0$, $\alpha \in (0,1)$, renders the origin finite-time stable.
Moreover, an upper bound for the settling time $T$ is given by
$T \le \dfrac{V(x_0)^{1-\alpha}}{\gamma\,(1-\alpha)},$
where $V(x_0)$ is the value of $V$ at time 0.
Proof.
Similar to Theorem 4.3 in [14]. ∎
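As a quick numeric sanity check of the finite-time bound (a toy simulation of ours, not part of the paper), consider $\dot x = u$ with $V(x) = x^2$ and the controller $u = -(\gamma/2)\,\mathrm{sign}(x)$, which yields $\dot V = -\gamma V^{1/2}$, i.e. $\alpha = 1/2$:

```python
import math

# Toy check of Theorem 1: xdot = u, V(x) = x^2, and the controller
# u = -(gamma/2)*sign(x) gives Vdot = 2*x*u = -gamma*|x| = -gamma*V**0.5,
# i.e. alpha = 1/2, so the origin should be reached in finite time.
gamma, alpha = 2.0, 0.5
x, t, dt = 1.0, 0.0, 1e-3
while abs(x) > dt and t < 5.0:
    u = -(gamma / 2.0) * math.copysign(1.0, x)
    x += u * dt
    t += dt

# Settling-time bound from the theorem: T <= V(x0)^(1-alpha)/(gamma*(1-alpha))
bound = (1.0 ** 2) ** (1 - alpha) / (gamma * (1 - alpha))   # = 1.0 here
```

The simulated settling time stays below the theoretical bound, as expected.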
To enforce constraints, such as survivability or task execution for multi-robot systems, we employ control barrier functions. These, as will be shown, are suitable for synthesizing constraints that can be encoded in terms of set-membership. Conceptually similar to control Lyapunov functions, control barrier functions were introduced in [15] with the objective of ensuring safety in a provably correct way. In this context, ensuring safety means ensuring the forward invariance of a set $\mathcal{C}$, in which we want to confine the state $x$. Without loss of generality, here we consider forward complete systems, for which the solution $x(t)$ of (6) exists for all $t \ge 0$.
Suppose we can find a continuously differentiable function $h : \mathbb{R}^n \to \mathbb{R}$, such that the safe set $\mathcal{C}$ can be defined as the zero-superlevel set of $h$, i.e.,
(7) $\mathcal{C} = \{x \in \mathbb{R}^n : h(x) \ge 0\}$
and
(8) $\partial\mathcal{C} = \{x \in \mathbb{R}^n : h(x) = 0\}$
(9) $\mathrm{Int}(\mathcal{C}) = \{x \in \mathbb{R}^n : h(x) > 0\}$
where $\partial\mathcal{C}$ and $\mathrm{Int}(\mathcal{C})$ denote the boundary and the interior of $\mathcal{C}$, respectively. If the following condition is satisfied
(10) $\sup_u \big\{L_fh(x) + L_gh(x)\,u\big\} \ge -\gamma(h(x)) \quad \forall x \in \mathbb{R}^n$
with $\gamma$ an extended class $\mathcal{K}$ function [16], then $h$ is called a (zeroing) control barrier function. Conditions to ensure forward invariance of the set $\mathcal{C}$ are given in the following theorem.
Theorem 2 (Safe set forward invariance [17]).
Given the set $\mathcal{C}$ defined in (7)-(9) and a control barrier function $h$, any Lipschitz continuous controller $u(x)$ such that $L_fh(x) + L_gh(x)\,u(x) \ge -\gamma(h(x))$ for all $x$ renders the set $\mathcal{C}$ forward invariant.
Proof.
See [17]. ∎
The existence of the control barrier function also ensures the asymptotic stability of the set $\mathcal{C}$, as shown in the following theorem.
Theorem 3 (Safe set asymptotic stability).
Under the assumptions of Theorem 2, the set $\mathcal{C}$ is also asymptotically stable, i.e., solutions starting outside $\mathcal{C}$ asymptotically converge to it.
Proof (based on [17]).
Let
(11) $V(x) = \max\{0, -h(x)\}$
be a control Lyapunov candidate function. Thus, $V(x) = 0$ for $x \in \mathcal{C}$ and $V(x) > 0$ for $x \notin \mathcal{C}$. Moreover,
(12) $\dot V(x) = 0 \quad \text{for } x \in \mathrm{Int}(\mathcal{C})$
(13) $\dot V(x) = -\dot h(x) \quad \text{for } x \notin \mathcal{C}$
Furthermore, since $h$ is continuously differentiable, $V$ is continuously differentiable as well. Then, by hypothesis (10), $\dot V(x) = 0$ for $x \in \mathcal{C}$ and $\dot V(x) < 0$ for $x \notin \mathcal{C}$. By Theorem 2, $\mathcal{C}$ is forward invariant. Moreover, $\mathcal{C}$ is closed, since it is the inverse image of the closed set $[0, \infty)$ under the continuous map $h$. Therefore, by Theorem 2.8 in [18], the system (6) is uniformly globally asymptotically stable with respect to the set $\mathcal{C}$. Thus, there exists a class $\mathcal{KL}$ function $\beta$ [16] such that, given any initial state $x_0$, the solution satisfies $\|x(t)\|_{\mathcal{C}} \le \beta(\|x_0\|_{\mathcal{C}}, t)$, where $\|x\|_{\mathcal{C}} = \mathrm{dist}(x, \mathcal{C})$. Hence, as $t \to \infty$, $x(t) \to \mathcal{C}$. ∎
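A one-dimensional toy example (ours, not from the paper) illustrates both forward invariance and the minimally invasive filtering that control barrier functions enable: with $h(x) = x$ and $\gamma(h) = h$, the CBF condition for $\dot x = u$ reduces to $u \ge -x$, and the QP solution is a simple clamp on the nominal input.

```python
# Safe set C = {x >= 0}, barrier h(x) = x, gamma(h) = h. For xdot = u the
# CBF condition reads u >= -h(x); the minimally invasive filter returns
# the closest input to the nominal u_hat that satisfies it (a clamp in 1-D).
def cbf_filter(x, u_hat):
    return max(u_hat, -x)

x, dt = 1.0, 1e-3
for _ in range(5000):
    x += cbf_filter(x, -1.0) * dt   # nominal input pushes towards x < 0
    assert x >= 0.0                  # forward invariance of C
```

The nominal input would drive the state negative; the filtered input lets it approach the boundary of the safe set without ever crossing it.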
II-B Minimum-Energy Gradient Flow
In this section we consider the problem of minimizing a cost function $c(x)$. We present a method to reformulate the classic gradient flow algorithm using the tools introduced in Section II-A. This allows us to synthesize a constrained optimization program that is equivalent to the minimization of the cost $c$.
Consider the single integrator dynamical system $\dot x = u$, where $x, u \in \mathbb{R}^n$ are the state and the control input, respectively. Assume that the objective consists in minimizing a cost $c(x)$, where $c : \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function. Applying gradient flow algorithms, the problem
(14) $\min_x\ c(x)$
can be directly minimized by choosing
(15) $u = -\nabla c(x)$
In fact, with this choice of input, applying the chain rule leads to:
(16) $\dot c = \nabla c(x)^T \dot x = -\|\nabla c(x)\|^2 \le 0$
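A minimal numeric rendition of the gradient flow (15) (Euler integration, our toy quadratic cost) confirms that the cost never increases along the flow:

```python
# Euler integration of the gradient flow xdot = -grad c(x) for the toy
# quadratic cost c(x) = 0.5*||x - x_star||^2; the cost never increases.
x_star = [2.0, -1.0]
x = [0.0, 0.0]
dt = 0.01

def cost(x):
    return 0.5 * sum((xi - si) ** 2 for xi, si in zip(x, x_star))

prev = cost(x)
for _ in range(2000):
    grad = [xi - si for xi, si in zip(x, x_star)]       # grad c(x)
    x = [xi - dt * gi for xi, gi in zip(x, grad)]       # x <- x - dt*grad
    assert cost(x) <= prev + 1e-12
    prev = cost(x)
```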
We now show that the minimization problem (14) can be formulated as a minimum-energy problem that achieves the same objective of minimizing the cost $c$.
To this end, let us define the barrier function
(17) $h(x) = -c(x)$
and its zero-superlevel set, i.e. the safe set,
(18) $\mathcal{C} = \{x \in \mathbb{R}^n : h(x) \ge 0\}$
By Theorems 2 and 3, the differential constraint
(19) $\dot h(x) = -\nabla c(x)^T u \ge -\gamma(h(x))$
where $\gamma$ is an extended class $\mathcal{K}$ function, ensures that the set $\mathcal{C}$ is forward invariant when $x_0 \in \mathcal{C}$ and asymptotically stable when $x_0 \notin \mathcal{C}$, $x_0$ being the value of the state at time 0.
Observation 4.
We can now introduce the following optimization problem:
(21) $\min_u\ \|u\|^2 \quad \text{s.t.}\ -\nabla c(x)^T u \ge -\gamma(h(x))$
which solves the problem in (14), as shown in the following proposition.
Proposition 5.
The control input $u^*$, solution of the optimization problem (21), drives the state of the system $\dot x = u$ to a stationary point of the cost $c$.
Proof.
The KKT conditions for the problem in (21) are
(22) $2u^* + \lambda^*\nabla c(x) = 0,\quad \nabla c(x)^T u^* - \gamma(h(x)) \le 0,\quad \lambda^* \ge 0,\quad \lambda^*\big(\nabla c(x)^T u^* - \gamma(h(x))\big) = 0$
where $u^*$ and $\lambda^*$ are primal and dual optimal points [19]. First of all, we note that, if the constraint is inactive, then $\lambda^* = 0$ by the fourth equation in (22). Therefore, from the first equation in (22), $u^* = 0$. This is equivalent to $\nabla c(x)^T u^* = 0 \le \gamma(h(x))$ and, since $\gamma$ is an extended class $\mathcal{K}$ function, this implies that $h(x) \ge 0$. In case the constraint is active, from the third and fourth equations in (22), one has $\lambda^* = -2\gamma(h(x))/\|\nabla c(x)\|^2 \ge 0$, and therefore, $u^* = \frac{\gamma(h(x))}{\|\nabla c(x)\|^2}\nabla c(x)$. Since $\gamma(h(x)) \le 0$ in this case and $c$ is continuously differentiable, one can show that $u^*$ is continuous. Thus, we can unify the two cases, $\nabla c(x) = 0$ and $\nabla c(x) \ne 0$, and write the expression of the optimal $u^*$ as follows:
(23) $u^*(x) = \begin{cases} 0 & \text{if } \nabla c(x) = 0 \\ \dfrac{\gamma(h(x))}{\|\nabla c(x)\|^2}\,\nabla c(x) & \text{otherwise} \end{cases}$
With this expression of the input $u^*$, the evolution in time of the cost $c$ is given by
(24) $\dot c = \nabla c(x)^T u^* = \gamma(h(x)) = \gamma(-c(x)) \quad \text{whenever } \nabla c(x) \ne 0$
So,
(25) $\dot c < 0 \quad \text{whenever } c(x) > 0 \text{ and } \nabla c(x) \ne 0$
Hence, as $t \to \infty$, $x(t)$ converges to a point $\bar x$ such that $\nabla c(\bar x) = 0$, i.e., a stationary point of $c$. ∎
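The minimum-energy solution discussed above can be exercised numerically. The following toy sketch (ours, assuming $h = -c$ and $\gamma(s) = s$) shows the minimum-energy input reproducing the decay $\dot c = \gamma(-c)$ for the scalar cost $c(x) = x^2$:

```python
# Toy check of the minimum-energy input (assuming h = -c, gamma(s) = s):
#   u*(x) = 0 if grad c(x) = 0, else (gamma(-c)/||grad c||^2) * grad c(x).
# For c(x) = x^2 this gives u* = -x/2, hence cdot = -c (exponential decay)
# even though the input has minimum norm at every instant.
def u_star(x):
    g = 2.0 * x                       # grad c(x)
    if g == 0.0:
        return 0.0
    return (-(x * x) / (g * g)) * g   # gamma(-c) = -c = -x^2

x, dt = 1.0, 1e-3
for _ in range(4000):
    x += u_star(x) * dt
final_cost = x * x
```

After 4 simulated seconds the state sits near $e^{-2}$, matching the exponential decay predicted by $\dot c = -c$.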
Corollary 6.
Proof.
In summary, we saw that the expression for $u^*$ given in (23) solves the initial optimization problem (14), which can be equivalently solved by following the gradient flow of the cost $c$ using (15).
We now illustrate that, besides the advantages related to long-term autonomy applications discussed in Section I, the formulation in (21) can be used to design decentralized cost minimization algorithms that are faster than gradient descent. In the optimization literature, there are plenty of methods that can be employed to improve the convergence speed of gradient flow algorithms (see, e.g., [19]). Nevertheless, second order methods, such as Newton's method or conjugate gradient, suffer from their centralized nature. Only in some cases can this issue be partially mitigated by resorting to distributed optimization techniques, such as [21, 22]. The above-mentioned methods are all suitable for minimizing a cost function. However, we insist on having a constrained optimization formulation, where we encode cost minimization as a constraint, because of the flexibility and robustness properties discussed in Section I, useful for long-term robot autonomy applications.
The following proposition shows that, using the formulation in (21), it is possible to minimize the cost $c$ and not only be faster than gradient descent, but actually reach a stationary point in finite time.
Proposition 7.
Given the dynamical system $\dot x = u$ and the objective of minimizing the cost function $c(x)$, the solution of the optimization problem
(27) $\min_u\ \|u\|^2 \quad \text{s.t.}\ -\nabla c(x)^T u \ge -\gamma(h(x))$
where $h$ is given by (17), and $\gamma(h) = \gamma_0\,\mathrm{sign}(h)\,|h|^{\alpha}$, $\gamma_0 > 0$, $\alpha \in (0,1)$, will drive the state $x$ to a stationary point of the cost $c$ in finite time.
Proof.
Similarly to what has been done in Proposition 5, it can be shown that
(28) $\dot c = \gamma(-c(x)) = -\gamma_0\,\mathrm{sign}(c(x))\,|c(x)|^{\alpha}$
Thus, by Theorem 1, we conclude that
(29) $c(x(t)) = 0 \quad \forall t \ge T$
and
(30) $T \le \dfrac{c(x_0)^{1-\alpha}}{\gamma_0\,(1-\alpha)}$
Hence, the state reaches the set $\{x : c(x) = 0\}$, with $c \ge 0$, in finite time. Indeed, as shown in [23], such a $\gamma$ renders $h$ a finite-time convergence control barrier function for the system characterized by single integrator dynamics, $\dot x = u$. ∎
Using the results derived in this section, the next section presents a procedure to synthesize decentralized optimization problems whose solutions result in coordinated control of multi-robot systems.
III Constraint-Based Control of Multi-Robot Systems
Local, scalable, safe and emergent are four essential features that decentralized multi-robot coordinated control algorithms should possess [24]. Many algorithms that satisfy these properties have been developed for applications ranging from social behavior mimicking [25] and formation assembly [26] to area patrolling [27]. In [24], the authors analyze the common features among these algorithms and discuss their decentralized implementations in robotic applications. In this section, we apply the results derived in Section II with the aim of obtaining constrained optimization problems equivalent to the decentralized execution of multi-robot tasks.
Consider a collection of $N$ robots, whose positions are denoted by $x_i \in \mathbb{R}^d$, where $d = 2$ for planar robots and $d = 3$ in the case of aerial robots. Assume each robot is equipped with an omnidirectional range sensor that allows it to measure the relative positions of neighboring robots, namely robot $i$ is able to measure $x_j - x_i$ when robot $j$ is within its sensing range. These interactions among the robots are described by a graph $G = (V, E)$, where $V = \{1, \dots, N\}$ is the set of vertices of the graph, representing the robots, and $E \subseteq V \times V$ is the set of edges between the robots, encoding adjacency relationships. If $(i, j) \in E$, then robot $i$ can measure robot $j$'s position. For the purposes of this paper, we assume that the graph is undirected, namely $(i, j) \in E \Leftrightarrow (j, i) \in E$. In order to obtain decentralized algorithms, we want each robot to act only based on local information, by which we mean the relative positions of its neighbors. By construction, this leads to inherently scalable coordinated control algorithms.
Denoting the ensemble state of the robotic swarm by $x = [x_1^T, \dots, x_N^T]^T$, a general expression for the cost that leads to decentralized control laws is given by
(31) $c(x) = \frac{1}{2}\sum_{i=1}^{N}\sum_{j \in \mathcal{N}_i} w_{ij}\big(\|x_i - x_j\|^2\big)$
where $\mathcal{N}_i$ is the neighborhood set of robot $i$, and $w_{ij} = w_{ji}$ is a symmetric, pairwise cost between robots $i$ and $j$. We assume that $w_{ij} \ge 0$, so that $c(x) \ge 0$. Assuming we can directly control the velocity of robot $i$, $\dot x_i = u_i$, we can employ a gradient descent flow policy like (15) to minimize $c$, obtaining
(32) $u_i = -\nabla_{x_i} c(x) = -\sum_{j \in \mathcal{N}_i} 2\,w_{ij}'\big(\|x_i - x_j\|^2\big)\,(x_i - x_j)$
This is nothing but a weighted consensus protocol, and it is decentralized insofar as the input $u_i$ only depends on robot $i$'s neighbors. The construction shown in Section II can then be applied to minimize the cost given in (31) by formulating the following minimum-energy problem:
(33) $\min_u\ \|u\|^2 \quad \text{s.t.}\ -\nabla c(x)^T u \ge -\gamma(h(x))$
where $u = [u_1^T, \dots, u_N^T]^T$ is the vector of robots' inputs, $h = -c$ as in (17), and single integrator dynamics, $\dot x_i = u_i$, are assumed for each robot. Solving (33) leads to the accomplishment of the task, by which we mean that a stationary point of the cost $c$ has been reached. As explicitly shown by (23) in Proposition 5, a minimum-energy formulation, initially introduced in [15], allows the robots to move towards lower values of the cost until the task is accomplished ($\nabla c(x) = 0$).
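As a small self-contained check (ours, not from the paper), the weighted consensus protocol (32) with unit weights reduces to plain consensus, under which all states converge to the average of the initial conditions:

```python
# Consensus protocol u_i = -sum_{j in N_i} (x_i - x_j) on the line graph
# 0-1-2-3: each robot uses only its neighbours' relative states, yet all
# states converge to the average of the initial conditions.
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x = [0.0, 1.0, 4.0, 9.0]
avg = sum(x) / len(x)                 # consensus value (preserved quantity)
dt = 0.01
for _ in range(5000):
    u = [-sum(x[i] - x[j] for j in neighbours[i]) for i in range(4)]
    x = [xi + dt * ui for xi, ui in zip(x, u)]
```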
The following proposition gives the expressions of the optimization problems whose solutions lead to a decentralized minimization of the cost in (31).
Proposition 8 (Constraintdriven decentralized task execution).
Given the pairwise cost function defined in (31), a collection of $N$ robots, characterized by single integrator dynamics, minimizes $c$ in a decentralized fashion if each robot $i$ executes the control input $u_i$, solution of the following optimization problem:
(34) $\min_{u_i}\ \|u_i\|^2 \quad \text{s.t.}\ -\nabla_{x_i} c(x)^T u_i \ge -\gamma(h_i(x))$
where $h_i = -c_i$, with $c_i(x) = \frac{1}{2}\sum_{j \in \mathcal{N}_i} w_{ij}\big(\|x_i - x_j\|^2\big)$ the local cost of robot $i$, and $\gamma$ is an extended class $\mathcal{K}$ function, superadditive, i.e. $\gamma(h_1 + h_2) \ge \gamma(h_1) + \gamma(h_2)$. If $\gamma(h) = \gamma_0\,\mathrm{sign}(h)\,|h|^{\alpha}$, $\gamma_0 > 0$, $\alpha \in (0,1)$, a stationary point of the cost $c$ is reached in finite time, with the upper bound on the settling time given in Theorem 1.
Proof.
Proposition 5 ensures that, by imposing the global constraint (19), constructed using the whole state vector $x$, the cost $c$ is decreasing towards a stationary point. We want to show that, by imposing only local constraints (i.e., such that robot $i$ only needs information about its neighbors), the multi-robot system is able to enforce the global constraint and, hence, to minimize the cost in a decentralized fashion.
We proceed by summing up the constraints for each robot, obtaining:
(35) $-\sum_{i=1}^{N} \nabla_{x_i} c(x)^T u_i \ge -\sum_{i=1}^{N} \gamma(h_i(x))$
(36) $\phantom{-\sum_{i=1}^{N} \nabla_{x_i} c(x)^T u_i} \ge -\gamma\Big(\sum_{i=1}^{N} h_i(x)\Big)$
where we used the superadditivity property of $\gamma$, and we set $h = \sum_{i=1}^{N} h_i$. Moreover, since the graph $G$, which encodes the neighboring relations between the robots, is undirected, we have that $\sum_{i=1}^{N} \nabla_{x_i} c(x)^T u_i = \nabla c(x)^T u$. Thus,
(37) $-\nabla c(x)^T u \ge -\gamma(h(x))$
where $h = -c$, $u = [u_1^T, \dots, u_N^T]^T$, and $\gamma$ is an extended class $\mathcal{K}$ function. Hence, by Proposition 5, $x$ will converge to a stationary point of $c$.
Finally, we note that a class $\mathcal{K}$ function $\gamma$ that is convex for $h \ge 0$ is, since $\gamma(0) = 0$, also superadditive for $h \ge 0$. Applying Proposition 7, the statement holds. ∎
The structure of the cost function $c$ in (31), even though quite specific, allows us to encode a rich set of multi-robot tasks by carefully choosing the weights $w_{ij}$ as a function of the state $x$. The following section shows two variations on the cost function which allow a multi-robot system to perform formation control, i.e., assembling particular shapes, and coverage control, consisting in spreading the robotic swarm out in the environment in an optimal way.
IV Applications
In this section we recall the expressions of the cost $c$ for two specific multi-robot tasks: formation control and coverage control.
IV-A Formation Control
In formation control applications, the robots are asked to assemble a predefined shape, specified in terms of inter-agent distances. In order to frame this problem as a cost minimization problem, let $c$ be the formation error
(38) $c(x) = \frac{1}{2}\sum_{i=1}^{N}\sum_{j \in \mathcal{N}_i}\big(\|x_i - x_j\|^2 - d_{ij}^2\big)^2$
where $d_{ij}$ is the desired distance between robots $i$ and $j$. $c(x)$ measures how far the robots are from assembling the desired formation characterized by the relative distances $d_{ij}$; $c(x) = 0$ corresponds to the robots forming the desired shape. Note that, as $c$ is a sum of squares, $c(x) \ge 0$, as required in order for Proposition 8 to hold.
The gradient then evaluates to
(39) $\nabla_{x_i} c(x) = \sum_{j \in \mathcal{N}_i} w_{ij}(x)\,(x_i - x_j), \quad w_{ij}(x) = 4\big(\|x_i - x_j\|^2 - d_{ij}^2\big)$
This can be interpreted as follows: if the distance between robots $i$ and $j$ is smaller than $d_{ij}$, then the weight $w_{ij}$ is negative, and the robots experience a repelling effect. Conversely, if the two robots are further than $d_{ij}$ apart, the positive weight will attract one towards the other. The special case in which $d_{ij} = 0$ corresponds to the well-known consensus problem.
The expression of the gradient in (39) is decentralized, as robot $i$ has to compute relative distances only with respect to its neighboring robots.
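The decentralized formation controller can be sketched numerically as follows (our toy example, not the paper's implementation; constant gradient factors are absorbed into the step size). Three robots are asked to form an equilateral triangle with unit side length:

```python
import math

# Decentralized formation control with the weights from the text:
# u_i = -sum_j w_ij (x_i - x_j), w_ij = ||x_i - x_j||^2 - d_ij^2
# (constant gradient factors absorbed into the step size).
pos = [[0.0, 0.0], [0.9, 0.1], [0.4, 1.2]]
pairs = [(0, 1), (0, 2), (1, 2)]      # complete graph, d_ij = 1 for all
dt = 0.02
for _ in range(4000):
    u = [[0.0, 0.0] for _ in pos]
    for i, j in pairs:
        diff = [pos[i][k] - pos[j][k] for k in (0, 1)]
        w = diff[0] ** 2 + diff[1] ** 2 - 1.0
        for k in (0, 1):
            u[i][k] -= w * diff[k]
            u[j][k] += w * diff[k]
    pos = [[p[k] + dt * ui[k] for k in (0, 1)] for p, ui in zip(pos, u)]

def side(i, j):
    return math.dist(pos[i], pos[j])
```

Each robot only ever touches relative positions of its neighbors, reflecting the decentralized structure of (39).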
IV-B Coverage Control
In coverage control, the task given to the robots is that of covering a domain $D$. Given a coverage performance measure, the robots should spread over the domain in an optimal way. As shown in [28], each robot should be in charge only of a subset of the domain that, more specifically, is its Voronoi cell, defined as $V_i = \{q \in D : \|q - x_i\| \le \|q - x_j\|\ \forall j \ne i\}$.
Let us introduce a measure of how badly the domain is being covered:
(40) $c(x) = \sum_{i=1}^{N} \|x_i - \rho_i(x)\|^2$
where $\rho_i(x)$ denotes the centroid of the Voronoi cell $V_i$. This form is just a reformulation of the locational cost originally introduced in [27]. Taking the derivative of $c$ with respect to $x_i$, required in the optimization problem (34), one obtains:
(41) $\dfrac{\partial c}{\partial x_i}(x) = 2\,(x_i - \rho_i(x))^T\Big(I - \dfrac{\partial \rho_i}{\partial x_i}(x)\Big) - \sum_{j \ne i} 2\,(x_j - \rho_j(x))^T \dfrac{\partial \rho_j}{\partial x_i}(x)$
where $I$ is the identity matrix. Note that, even if $\partial c/\partial x_i$ virtually depends on the entire ensemble state $x$ of the robotic swarm, in order to compute it, robot $i$ only requires information from the robots with which it shares part of the boundary of its Voronoi cell. In the Appendix we show how the formulation presented in this paper also allows an exact decentralized implementation of the coverage control with time-varying density functions introduced in [29].
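A discrete-grid version of this coverage behavior is Lloyd's algorithm; the following sketch (ours; the continuous integrals are replaced by sums over grid points) shows the sampled locational cost decreasing monotonically:

```python
import math

# Discrete Lloyd iteration: assign each grid point to its closest robot
# (a discrete Voronoi partition), then move every robot to the centroid
# of its cell. The sampled locational cost never increases.
grid = [(i / 9.0, j / 9.0) for i in range(10) for j in range(10)]
robots = [(0.05, 0.05), (0.1, 0.9), (0.8, 0.2)]

def cost(robots):
    return sum(min(math.dist(q, r) ** 2 for r in robots) for q in grid)

initial = cost(robots)
prev = initial
for _ in range(20):
    cells = {k: [] for k in range(len(robots))}
    for q in grid:
        k = min(range(len(robots)), key=lambda k: math.dist(q, robots[k]))
        cells[k].append(q)
    robots = [tuple(sum(c) / len(cells[k]) for c in zip(*cells[k]))
              for k in range(len(robots))]
    assert cost(robots) <= prev + 1e-9
    prev = cost(robots)
```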
We deployed the optimization-based control algorithms with the expressions of the costs derived in Sections IV-A and IV-B on a real multi-robot system and, in the next section, we show the experimental results. Moreover, the constraint-driven formulation of Section III is used to achieve long-term environmental monitoring, where the robots are tasked with covering a domain over a time horizon which is much longer than their battery life, and during which the robots also have to avoid collisions with obstacles moving around in the domain.
V Experiments
The coordinated control approach presented in this paper has been tested on the Robotarium [1], a remotely accessible swarm robotics testbed. The Robotarium is populated by small-scale differential-drive robots which can be programmed by uploading code scripts via a web interface.
Throughout the paper, we assumed we can directly control the velocity of the robots, by modeling them using single integrator dynamics. However, a differential-drive robot can be more accurately modeled using unicycle dynamics:
(42) $\dot x = v\cos\theta, \qquad \dot y = v\sin\theta, \qquad \dot\theta = \omega$
where $(x, y)$ and $\theta$ are the robot's position and orientation in the plane, respectively, and $v$ and $\omega$ are the linear and angular velocity inputs, respectively. Nevertheless, in [30], it is shown that it is possible to derive a near-identity diffeomorphism that can be used to partially feedback linearize the system (42). This way, the unicycle can be abstracted as a single integrator. This is realized through the invertible map
(43) $\begin{bmatrix} v \\ \omega \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1/l \end{bmatrix} R(\theta)^T\, \dot{\tilde p}$
where $R(\theta)$ is the matrix that rotates vectors in $\mathbb{R}^2$ counterclockwise by an angle $\theta$, and $\dot{\tilde p}$ is the velocity in the plane of a point $\tilde p$ located in front of the unicycle at a distance $l$ from its center. This method is used to control the robots on the Robotarium, by calculating the linear and angular velocities of robot $i$, $v_i$ and $\omega_i$, from the control input $u_i$ obtained by solving the optimization problems derived in Section III.
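A numeric sketch of this abstraction (ours; the look-ahead distance and the goal are arbitrary choices): by commanding the point a distance $l$ ahead of the wheel axis, the unicycle tracks a simple single-integrator go-to-goal law.

```python
import math

# Near-identity abstraction: control the point p located a distance l
# ahead of the unicycle, so that pdot = u yields
#   v = cos(th)*u_x + sin(th)*u_y,   w = (-sin(th)*u_x + cos(th)*u_y)/l.
def si_to_uni(u, th, l):
    v = math.cos(th) * u[0] + math.sin(th) * u[1]
    w = (-math.sin(th) * u[0] + math.cos(th) * u[1]) / l
    return v, w

g, l = (1.0, 1.0), 0.05          # goal for the look-ahead point
x, y, th, dt = 0.0, 0.0, 0.0, 1e-3
for _ in range(10000):
    p = (x + l * math.cos(th), y + l * math.sin(th))
    u = (g[0] - p[0], g[1] - p[1])       # single-integrator law pdot = g - p
    v, w = si_to_uni(u, th, l)
    x += v * math.cos(th) * dt
    y += v * math.sin(th) * dt
    th += w * dt

p = (x + l * math.cos(th), y + l * math.sin(th))
```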
Regarding the implementation of the optimization program (34) needed for the execution of the tasks presented in the previous section, the function $\gamma$ has been chosen to be $\gamma(h) = h^3$, which is an extended class $\mathcal{K}$ function, convex for $h \ge 0$. This implies it is also superadditive for $h \ge 0$, as required by the hypotheses in Proposition 8.
V-A Formation and Coverage Control
The optimization problem (34) has been implemented with the specific expressions of $c$ and $\nabla_{x_i} c$ given in (38) and (39) in order to achieve formation control, as explained in Section IV-A. A sequence of snapshots recorded during the experiments on the Robotarium is shown in Fig. 1: six robots are asked to assemble a hexagon specified through the inter-agent distances $d_{ij}$ in (38). The edges corresponding to distances that are maintained are projected down onto the testbed and depicted as black lines in Figs. 1(a) to 1(d).
Similarly, coverage control has been implemented using the constraint-based optimization (34) and the expressions of $c$ and $\partial c/\partial x_i$ in (40) and (41). The results are shown in Fig. 2. Six robots are asked to spread over a rectangular domain. The Voronoi partition of the domain is projected on the testbed. As a result of the optimization program, the robots move towards the centroids of their respective Voronoi cells, represented as gray circles in Figs. 2(a) to 2(d).
V-B Combining and Prioritizing Tasks
In this section, we present the application of the proposed constraint-driven coordinated control to long-term environmental monitoring.
The setup of the experiment is as follows. Six robots are asked to monitor an area by performing coverage control. While executing this task, the robots must not run out of energy and must not collide with two dynamic obstacles, embodied by two additional robots moving in the environment. In order to do so, we define constraints that allow the robots to always keep enough residual energy in their batteries and to be always a minimum distance apart from the obstacles.
To accomplish the first goal, we use a method similar to the one developed in [31]. Assuming that the domain is endowed with charging stations, i.e. locations where the robots can recharge their batteries, let us define the following barrier function:
(44) $h_E(x_i, E_i) = E_i - E_{\min} - k\big(\|x_i - x_{c,i}\| - \Delta_c\big)$
where $x_i$ is the position of robot $i$, $E_i$ is the energy in its battery, $E_{\min}$ is the minimum residual energy we want the robots to keep, $x_{c,i}$ is the location of the charging station dedicated to robot $i$, $\Delta_c$ is the minimum distance from the charging station at which the robots can recharge their batteries (typical behavior of wireless charging technologies), and $k$ is a constant such that $k\big(\|x_i - x_{c,i}\| - \Delta_c\big)$ upper-bounds the energy required to reach the charging station. We refer to [31] for a rigorous analysis.
As far as obstacle avoidance is concerned, we define, for each obstacle, the following barrier function, which ensures collision-free operations in multi-robot systems [32]:
(45) $h_o(x_i) = \|x_i - x_o\|^2 - \Delta_o^2$
where $x_o$ is the position of the obstacle and $\Delta_o$ is the minimum distance we want the robots to maintain from the obstacle.
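The corresponding safety filter can be sketched as follows (our toy example; `filter_u` is a hypothetical helper implementing the half-space projection, with $\gamma(h) = h$). A robot heading straight at the obstacle is deflected and never enters the keep-out disc:

```python
# Safety filter for the obstacle CBF h(x) = ||x - x_o||^2 - D^2 with
# xdot = u: the condition reads a . u >= b, a = 2(x - x_o), b = -gamma(h);
# when the nominal input violates it, project onto the half-space.
def filter_u(x, x_o, D, u_hat):
    a = [2.0 * (x[k] - x_o[k]) for k in (0, 1)]
    h = sum((x[k] - x_o[k]) ** 2 for k in (0, 1)) - D * D
    b = -h                                   # gamma(h) = h
    s = sum(ak * uk for ak, uk in zip(a, u_hat))
    if s >= b:
        return list(u_hat)
    n2 = sum(ak * ak for ak in a)
    return [uk + (b - s) * ak / n2 for uk, ak in zip(u_hat, a)]

# Robot heading straight at an obstacle at the origin (keep-out radius 0.3)
x, dt = [1.0, 0.02], 1e-3
for _ in range(5000):
    u = filter_u(x, [0.0, 0.0], 0.3, [-1.0, 0.0])
    x = [xk + uk * dt for xk, uk in zip(x, u)]
dist2 = x[0] ** 2 + x[1] ** 2
```

With the linear $\gamma$ the robot brakes conservatively far from the obstacle; a steeper extended class $\mathcal{K}$ function would delay the intervention.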
Combining the energy constraint and the obstacle constraint together with the coverage task constraint, the following optimization problem can be formulated:
(46) $\min_{u_i,\,\delta_i}\ \|u_i\|^2 + \kappa\,\delta_i^2 \quad \text{s.t.}\ -\nabla_{x_i} c(x)^T u_i \ge -\gamma(h_i(x)) - \delta_i,\ \ \dot h_E(x_i, E_i) \ge -\gamma(h_E(x_i, E_i)),\ \ \dot h_o(x_i) \ge -\gamma(h_o(x_i))$
The variable $\delta_i$ in the coverage constraint acts as a relaxation parameter, which allows the constraints related to energy and collisions to be fulfilled. This translates to trading task execution for survivability. Consequently, this formulation allows task prioritization obtained by combining hard and soft constraints.
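The effect of the slack variable can be illustrated with a scalar toy problem (ours, not the paper's experiment; a brute-force grid search stands in for the QP solver, and the weight `kappa` penalizing the slack is an assumption): when the hard survivability constraint conflicts with the soft task constraint, the optimizer satisfies the hard one exactly and absorbs the conflict in the slack.

```python
# Scalar toy prioritization: hard survivability constraint u >= 1
# ("head to the charger"), soft task constraint -u >= 1 - delta relaxed
# by a slack delta >= 0 penalised with weight kappa.
def solve(kappa, grid=2001):
    best = None
    for i in range(grid):
        u = -2.0 + 4.0 * i / (grid - 1)
        if u < 1.0:                   # hard constraint: never relaxed
            continue
        delta = max(0.0, 1.0 + u)     # smallest slack making -u >= 1 - delta
        cost = u * u + kappa * delta * delta
        if best is None or cost < best[0]:
            best = (cost, u, delta)
    return best

cost, u, delta = solve(kappa=10.0)
```

The optimum sits exactly on the hard constraint boundary, with the task constraint fully relaxed, which is the prioritization behavior described above.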
The results of the long-term environmental monitoring experiment are shown in Fig. 3. The Voronoi partition generated by the robots is projected down onto the testbed, as in Fig. 2. Arranged vertically along the left edge of the domain, there are six charging stations, depicted as blue circles that turn yellow when the robots are charging. The two robots circled in red move in the environment acting solely as obstacles. The sequence of snapshots shows the robots starting to perform coverage (Fig. 3(a)), two robots avoiding the obstacles (top right and bottom left in Fig. 3(b)), and two robots recharging their batteries (Fig. 3(c)). In Fig. 3(d) the robots have reached a configuration corresponding to a local minimum of the locational cost (40). A video of the experiments can be found online [33].
Due to the limited amount of time that each experiment submitted to the Robotarium is allowed to last, we simulate the battery dynamics in such a way that the robots experience multiple charging cycles during the course of a single experiment. Fig. 4 shows the energy levels of the robots employed to perform coverage. The minimum desired energy level, $E_{\min}$, and the value corresponding to a fully charged battery, $E_{\max}$, are depicted as thick black lines. Enforcing the energy constraint using (44) allows the robots to keep their energy levels always above $E_{\min}$.
We have shown how the constraint-driven control formulation can be used to build a minimum-energy optimization problem, whose constraints encode both the task that the robots are asked to perform and the survivability specifications, thus enabling the robust deployment of robots for long-term applications.
VI Conclusions
In this paper we presented a reformulation of optimization-based multi-robot tasks in terms of constrained optimization. Identifying a multi-robot task with a cost function that needs to be minimized, we leverage control barrier functions to synthesize decentralized optimization-based controllers that achieve the desired goal. The advantages of this approach include the flexibility of encoding several multi-robot tasks and the ease of combining them with different types of constraints. We showed how this flexibility can be used to enforce robot survivability and achieve long-term robot autonomy, where robustness and resilience are indispensable properties that robots have to possess. A systematic way of formulating the optimization problems for each agent of a robotic swarm is derived. Its effectiveness is demonstrated through a series of experiments using a team of ground mobile robots, culminating in a long-term environmental monitoring application.
Appendix
The formulation presented in this paper also allows an exact decentralized implementation of coverage control with time-varying density functions. In [29], the authors show that the control law
(47) $u = \Big(I - \dfrac{\partial \rho}{\partial x}(x,t)\Big)^{-1}\Big(\kappa\big(\rho(x,t) - x\big) + \dfrac{\partial \rho}{\partial t}(x,t)\Big)$
with $\kappa > 0$ and $\rho = [\rho_1^T, \dots, \rho_N^T]^T$ the stacked vector of Voronoi cell centroids, minimizes the locational cost
(48) $c(x,t) = \sum_{i=1}^{N} \int_{V_i} \|q - x_i\|^2\,\phi(q,t)\,dq$
where $\phi : D \times \mathbb{R}_{\ge 0} \to \mathbb{R}_{>0}$ is a time-varying density function, which specifies the importance of the point $q \in D$ at time $t$. The cost in (48) is equivalent to the one defined in (40) when the centroids are calculated by weighting the points in the domain according to the value of the density function associated with them, as shown in [27].
Acknowledgment
The authors would like to thank Professor Jonathan N. Pauli for helpful discussions about ecology.
References
 [1] D. Pickem, P. Glotfelter, L. Wang, M. Mote, A. Ames, E. Feron, and M. Egerstedt, “The robotarium: A remotely accessible swarm robotics research testbed,” in 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017, pp. 1699–1706.
 [2] M. Rubenstein, C. Ahler, and R. Nagpal, “Kilobot: A low cost scalable robot system for collective behaviors,” in 2012 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2012, pp. 3293–3298.
 [3] R. D’Andrea, “Guest editorial: A revolution in the warehouse: A retrospective on kiva systems and the grand challenges ahead,” IEEE Transactions on Automation Science and Engineering, vol. 9, no. 4, pp. 638–639, 2012.
 [4] L. E. Parker, “Multiple mobile robot systems,” in Springer Handbook of Robotics. Springer, 2008, pp. 921–941.
 [5] E. Stump and N. Michael, “Multi-robot persistent surveillance planning as a vehicle routing problem,” in 2011 IEEE International Conference on Automation Science and Engineering. IEEE, 2011, pp. 569–575.
 [6] A. English, P. Ross, D. Ball, and P. Corke, “Vision based guidance for robot navigation in agriculture,” in 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014, pp. 1693–1698.
 [7] M. Egerstedt, J. N. Pauli, G. Notomista, and S. Hutchinson, “Robot ecology: Constraint-based control design for long duration autonomy,” Annual Reviews in Control, 2018, to appear.
 [8] R. C. Arkin and G. Vachtsevanos, “Techniques for robot survivability,” in Proc. 3rd International Symposium on Robotics and Manufacturing, Vancouver, BC, 1990, pp. 383–388.
 [9] S. Mishra, S. Rodriguez, M. Morales, and N. M. Amato, “Battery-constrained coverage,” in Automation Science and Engineering (CASE), 2016 IEEE International Conference on. IEEE, 2016, pp. 695–700.

 [10] S. Martin and P. Corke, “Long-term exploration & tours for energy constrained robots with online proprioceptive traversability estimation,” in 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014, pp. 5778–5785.
 [11] J. Derenick, N. Michael, and V. Kumar, “Energy-aware coverage control with docking for robot teams,” in 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2011, pp. 3667–3672.
 [12] M. E. Csete and J. C. Doyle, “Reverse engineering of biological complexity,” Science, vol. 295, no. 5560, pp. 1664–1669, 2002.
 [13] R. E. Ricklefs, The economy of nature. Macmillan, 2008.
 [14] S. P. Bhat and D. S. Bernstein, “Finite-time stability of continuous autonomous systems,” SIAM Journal on Control and Optimization, vol. 38, no. 3, pp. 751–766, 2000.
 [15] A. D. Ames, J. W. Grizzle, and P. Tabuada, “Control barrier function based quadratic programs with application to adaptive cruise control,” in Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on. IEEE, 2014, pp. 6271–6278.
 [16] C. M. Kellett, “A compendium of comparison function results,” Mathematics of Control, Signals, and Systems, vol. 26, no. 3, pp. 339–374, 2014.
 [17] X. Xu, P. Tabuada, J. W. Grizzle, and A. D. Ames, “Robustness of control barrier functions for safety critical control,” IFAC-PapersOnLine, vol. 48, no. 27, pp. 54–61, 2015.
 [18] Y. Lin, E. D. Sontag, and Y. Wang, “A smooth converse lyapunov theorem for robust stability,” SIAM Journal on Control and Optimization, vol. 34, no. 1, pp. 124–160, 1996.
 [19] S. Boyd and L. Vandenberghe, Convex optimization. Cambridge university press, 2004.
 [20] H. K. Khalil, Nonlinear Systems. Prentice-Hall, New Jersey, 1996.
 [21] E. Wei, A. Ozdaglar, and A. Jadbabaie, “A distributed newton method for network utility maximization–i: Algorithm,” IEEE Transactions on Automatic Control, vol. 58, no. 9, pp. 2162–2175, 2013.

 [22] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, et al., “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.
 [23] A. Li, L. Wang, P. Pierpaoli, and M. Egerstedt, “Formally correct composition of coordinated behaviors using control barrier certificates,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018, to appear.
 [24] J. Cortés and M. Egerstedt, “Coordinated control of multi-robot systems: A survey,” SICE Journal of Control, Measurement, and System Integration, vol. 10, no. 6, pp. 495–503, 2017.
 [25] C. W. Reynolds, “Flocks, herds and schools: A distributed behavioral model,” in ACM SIGGRAPH computer graphics, vol. 21, no. 4. ACM, 1987, pp. 25–34.
 [26] M. Egerstedt and X. Hu, “Formation constrained multiagent control,” IEEE transactions on robotics and automation, vol. 17, no. 6, pp. 947–951, 2001.
 [27] J. Cortes, S. Martinez, T. Karatas, and F. Bullo, “Coverage control for mobile sensing networks,” IEEE Transactions on robotics and Automation, vol. 20, no. 2, pp. 243–255, 2004.
 [28] S. Lloyd, “Least squares quantization in PCM,” IEEE Transactions on Information Theory, vol. 28, no. 2, pp. 129–137, 1982.
 [29] S. G. Lee, Y. Diaz-Mercado, and M. Egerstedt, “Multi-robot control using time-varying density functions,” IEEE Transactions on Robotics, vol. 31, no. 2, pp. 489–493, 2015.
 [30] R. Olfati-Saber, “Near-identity diffeomorphisms and exponential ε-tracking and ε-stabilization of first-order nonholonomic SE(2) vehicles,” in Proceedings of the 2002 American Control Conference, vol. 6. IEEE, 2002, pp. 4690–4695.
 [31] G. Notomista, S. F. Ruf, and M. Egerstedt, “Persistification of robotic tasks using control barrier functions,” IEEE Robotics and Automation Letters, vol. 3, no. 2, pp. 758–763, 2018.
 [32] L. Wang, A. D. Ames, and M. Egerstedt, “Safety barrier certificates for collision-free multi-robot systems,” IEEE Transactions on Robotics, vol. 33, no. 3, pp. 661–674, 2017.
 [33] G. Notomista. Notomista, Egerstedt - Constraint Driven Coordinated Control of Multi Robot Systems. YouTube. [Online]. Available: https://youtu.be/hOTe4ieOrI