I. Introduction
Particle Swarm Optimization (PSO) is a well-developed swarm intelligence method that iteratively optimizes a linear or nonlinear objective function by trying to improve a candidate solution with respect to a given measure of quality. Motivated by a simplified social model, the algorithm was first introduced by Kennedy and Eberhart in [1], where a preliminary analysis of the convergence of PSO was also provided. Since the PSO algorithm requires only elementary mathematical operations and is computationally efficient in terms of both memory and speed, it solves many optimization problems quite efficiently, particularly certain nonlinear, nonconvex optimization problems. Consequently, PSO has been applied widely across disciplines ranging from computer science, engineering, and biology to mathematics and economics [2, 3]. Several applications, including evolving neural networks and reactive power and voltage control, are reviewed in [4].
The mechanism of the PSO algorithm can be briefly explained as follows. The algorithm searches the solution space of an objective function by updating individual solution vectors called particles. At initialization, each particle is randomly assigned a position in the solution space and a velocity. Each particle remembers its previous best value and the corresponding previous best position. In addition, every particle in the swarm knows the global best value among all particles and the corresponding global best position. At every iteration, the velocity of each particle is updated so that the particle is stochastically guided toward both its own previous best position and the global best position.
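The update rule just described can be sketched in a few lines. The following Python snippet is an illustrative implementation of the standard PSO iteration, not the code evaluated in this report; the inertia and acceleration coefficients (`w`, `c1`, `c2`) and the search ranges are common textbook choices rather than values taken from this report.

```python
import numpy as np

def pso(f, dim, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over R^dim with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = rng.uniform(-1, 1, (n_particles, dim))   # particle velocities
    pbest = x.copy()                             # previous best positions
    pbest_val = np.apply_along_axis(f, 1, x)     # previous best values
    g = pbest[np.argmin(pbest_val)]              # global best position
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # velocity is stochastically guided by personal and global bests
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)]
    return g, pbest_val.min()

best_x, best_val = pso(lambda z: np.sum(z**2), dim=3)  # sphere function
```

Each particle's velocity mixes its momentum, a pull toward its own best position, and a pull toward the swarm's best position, with the random factors `r1` and `r2` providing the stochastic guidance described above.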
As the PSO algorithm has come into more extensive use, more research effort has been devoted to its refinement. To improve the efficiency of the PSO algorithm, the selection of parameters becomes crucial. References [5, 6] study the relationship between convergence rate and parameter selection, while [7] focuses on the impact of the inertia weight and maximum velocity of PSO, equipping the algorithm with an adaptive inertia weight to guarantee the convergence rate near a minimum. On the other hand, some variations of PSO have been proposed to improve various aspects of the algorithm, not limited to efficiency. In particular, [8] presents a simple variation that adds a standard selection mechanism from evolutionary computation to improve performance. The authors in [9, 10, 11] extend PSO to multiobjective optimization by augmenting the objective functions. More recently, a new simple-structure variation of PSO was proposed in [12] to improve convergence. Unlike the standard PSO, in this algorithm the particles communicate not only via the objective function but also via a new variable named "quantizer"; evaluated on some standard test functions from the literature, it displays better convergence than the standard PSO.

All of the above PSO variants rely either on sophisticated mathematical techniques or on nature-inspired structures to improve their performance, and they lack a fundamental understanding of how these algorithms work for general problems. Thus, to address this issue, we need to look at swarm intelligence algorithm design from a new perspective, since the traditional approach of looking to network systems appearing in nature for inspiration does not provide a satisfactory answer. In particular, new algorithms need to be robust to the practical uncertainty of distributed network implementation with communication constraints. Furthermore, due to the real-time implementation requirements of many network optimization systems in harsh or even adversarial environments, these new algorithms need faster (or even finite-time) convergence properties compared with existing algorithms. Last but not least, these new algorithms need the capability of dealing with dynamical systems and control problems instead of only static optimization problems. In particular, it is desirable to use these new algorithms to modify (control) the dynamic behavior of engineered network systems, due to the inherent similarity between swarm optimization in computational intelligence [13] and cooperative networks in control theory [14, 15, 16, 17, 18, 19, 20].
Multiagent Coordination Optimization (MCO) algorithms are inspired by swarm intelligence and by consensus protocols for multiagent coordination in [21, 22, 23, 24]. Unlike the standard PSO, this new optimization technique is based not only on swarm intelligence [13], which simulates bio-inspired behavior, but also on cooperative control of autonomous agents. Similar to PSO, the MCO algorithm starts with a set of random solutions for agents, which can communicate with each other. The agents then move through the solution space based on the evaluation of their cost functional and on neighbor-to-neighbor rules like those of multiagent consensus protocols [21, 22, 23, 24, 25, 26]. By adding a distributed control term and gradient-based adaptation, we hope that the convergence of MCO can be accelerated and its convergence time improved compared with existing techniques. Moreover, this new algorithm is more suitable for distributed and parallel computation for solving large-scale physical network optimization problems by means of high-performance computing facilities.
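The neighbor-to-neighbor rules mentioned above can be illustrated by the simplest consensus protocol, in which each agent repeatedly moves toward its neighbors' values, i.e., x(k+1) = x(k) - eps * L x(k) for a small step size eps. The sketch below is a generic Python illustration of that protocol on a path graph of four agents, not part of the MCO algorithm itself.

```python
import numpy as np

# Laplacian of the undirected path graph on 4 nodes: 1-2-3-4
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])

x = np.array([4.0, 0.0, -2.0, 6.0])   # initial scalar states of the agents
eps = 0.2                              # step size; stable since eps * lambda_max(L) < 2
for _ in range(200):
    x = x - eps * L @ x               # each agent moves toward its neighbors

# the states converge to the average of the initial states (here 2.0)
```

Because the graph is connected and the Laplacian is symmetric, the iteration preserves the average of the states and drives all agents to it; MCO couples a similar consensus-type term with an optimization objective.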
In this report, we first implement MCO in a parallel computing fashion by introducing MATLAB® built-in parallel computing functions into MCO. Then we rigorously analyze the global convergence of MCO by means of semistability theory [21, 27]. Besides sharing global optimal solutions with the PSO algorithm, the MCO algorithm incorporates cooperative swarm behavior of multiple agents into the update formula by sharing velocity and position information between neighbors to improve its performance. A numerical evaluation of the parallel MCO algorithm is provided by running the proposed algorithm on supercomputers at the High Performance Computing Center at Texas Tech University. In particular, the optimal solutions and running times are compared with those of PSO and serial MCO, respectively, on several benchmark functions from the literature. Based on the simulation results, the parallel MCO not only performs well compared with PSO on many nonlinear, nonconvex optimization problems, but is also highly efficient in terms of computational time.
This report is organized as follows. In Section II, some notions and notation from graph theory are introduced. In Section III, the realization of the parallel MCO algorithm in the MATLAB environment is described in detail. The convergence results are developed in Section IV. The numerical evaluation of the parallel MCO algorithm is then presented in Section V. Finally, Section VI concludes the report.
II. Mathematical Preliminaries
Graph theory is a powerful tool for investigating the topological changes of large-scale network systems. In this report, we use graph-related notation to describe our network-topology-based MCO algorithm. More specifically, let $\mathcal{G}(t)=(\mathcal{V},\mathcal{E}(t),\mathcal{A}(t))$ denote a node-fixed dynamic directed graph (or node-fixed dynamic digraph) with the set of vertices $\mathcal{V}=\{v_1,\dots,v_q\}$ and the set of edges $\mathcal{E}(t)\subseteq\mathcal{V}\times\mathcal{V}$. The time-varying matrix $\mathcal{A}(t)=[a_{ij}(t)]$ with nonnegative adjacency elements $a_{ij}(t)$ serves as the weighted adjacency matrix. The node index of $\mathcal{G}(t)$ is denoted by the finite index set $\mathcal{N}=\{1,\dots,q\}$. An edge of $\mathcal{G}(t)$ is denoted by $(v_i,v_j)$, and the adjacency elements associated with the edges are positive, i.e., $(v_j,v_i)\in\mathcal{E}(t)$ if and only if $a_{ij}(t)>0$. We assume $(v_i,v_i)\notin\mathcal{E}(t)$ and $a_{ii}(t)=0$ for all $i\in\mathcal{N}$. The set of neighbors of the node $v_i$ is denoted by $\mathcal{N}_i(t)=\{v_j\in\mathcal{V}:(v_j,v_i)\in\mathcal{E}(t),\ j=1,\dots,|\mathcal{N}|,\ j\neq i\}$, where $|\mathcal{N}|$ denotes the cardinality of $\mathcal{N}$. The degree matrix of a node-fixed dynamic digraph $\mathcal{G}(t)$ is defined as

$$\Delta(t)=\operatorname{diag}[d_1(t),\dots,d_q(t)], \qquad (1)$$

where

$$d_i(t)=\sum_{j=1}^{q}a_{ij}(t), \qquad i\in\mathcal{N}.$$

The Laplacian matrix of the node-fixed dynamic digraph $\mathcal{G}(t)$ is defined by

$$L(t)=\Delta(t)-\mathcal{A}(t). \qquad (2)$$
If $\mathcal{A}(t)=\mathcal{A}^{\mathrm{T}}(t)$, then $\mathcal{G}(t)$ is called a node-fixed dynamic undirected graph (or simply node-fixed dynamic graph). If there is a path from any node to any other node in a node-fixed dynamic digraph, then we call the dynamic digraph strongly connected. Analogously, if there is a path from any node to any other node in a node-fixed dynamic graph, then we call the dynamic graph connected. From now on, we use the short notations $\mathcal{G}$, $\mathcal{A}$, and $L$ to denote $\mathcal{G}(t)$, $\mathcal{A}(t)$, and $L(t)$, respectively. The following result, due to Proposition 1 of [28], is a property of the eigenvalue distribution of a Laplacian matrix.
Lemma II.1 ([28])
Consider the Laplacian matrix $L$ for a node-fixed dynamic digraph or graph with the index set $\mathcal{N}$, $|\mathcal{N}|=q$. Let $\lambda\in\operatorname{spec}(L)$, where $\operatorname{spec}(L)$ denotes the spectrum of $L$. Then for every $\lambda\neq 0$,

$$|\arg(\lambda)|\leq\frac{\pi}{2}-\frac{\pi}{q}, \qquad (3)$$

where $\arg(\lambda)$ denotes the argument of $\lambda\in\mathbb{C}$, and $\mathbb{C}$ denotes the set of complex numbers.
A direct consequence of Lemma II.1 is that $\operatorname{Re}\lambda\geq 0$ for every $\lambda\in\operatorname{spec}(L)$, where $\operatorname{Re}\lambda$ denotes the real part of $\lambda$. Moreover, if $\mathcal{G}$ is an undirected graph, then every $\lambda\in\operatorname{spec}(L)$ is real and $\lambda\geq 0$.
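These definitions and spectral properties are easy to check numerically. The following Python snippet (an illustrative example with an arbitrarily chosen weighted digraph) forms the Laplacian as the degree matrix minus the adjacency matrix and inspects its eigenvalues.

```python
import numpy as np

# weighted adjacency matrix of a small digraph (arbitrary example)
A = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 2.0],
              [1.0, 0.0, 0.0]])
D = np.diag(A.sum(axis=1))    # degree matrix: row sums on the diagonal
L = D - A                     # Laplacian matrix

eigs = np.linalg.eigvals(L)
# the row sums of L are zero, so 0 is always an eigenvalue,
# and every eigenvalue has nonnegative real part
```

This matches the statement above: the spectrum contains 0 (with eigenvector the all-ones vector), and the remaining eigenvalues lie in the closed right half-plane.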
III. Parallel Multiagent Coordination Optimization Algorithm
III-A. Multiagent Coordination Optimization with Node-Fixed Dynamic Graph Topology
The MCO algorithm with static graph topology, proposed in [29] to solve a given optimization problem $\min_{x\in\mathbb{R}^d}f(x)$, can be described in vector form as follows:

$$v(n+1)=v(n)-\eta(L\otimes I_d)x(n)-\mu(L\otimes I_d)v(n)+\kappa\big(\mathbf{1}_q\otimes p(n)-x(n)\big), \qquad (4)$$

$$x(n+1)=x(n)+v(n+1), \qquad (5)$$

$$p(n+1)=\operatorname*{arg\,min}_{z\in\{x_1(n+1),\dots,x_q(n+1),p(n)\}}f(z), \qquad (6)$$

where $n\in\{0,1,2,\dots\}$, $x(n)=[x_1^{\mathrm{T}}(n),\dots,x_q^{\mathrm{T}}(n)]^{\mathrm{T}}$, $v(n)=[v_1^{\mathrm{T}}(n),\dots,v_q^{\mathrm{T}}(n)]^{\mathrm{T}}$, $v_i(n)\in\mathbb{R}^d$ and $x_i(n)\in\mathbb{R}^d$ are the velocity and position of particle $i$ at iteration $n$, respectively, $p(n)\in\mathbb{R}^d$ is the position of the global best value that the swarm of particles has achieved so far, and $\eta$, $\mu$, and $\kappa$ are three scalar random coefficients which are usually selected from uniform distributions over prescribed ranges. In this report, we allow node-fixed dynamic graph topology in MCO, so that $L$ in (4) becomes $L(n)$. A natural question arising from (4)-(6) is the following: can we always guarantee the convergence of (4)-(6) for a given optimization problem? Here convergence means that all the limits $\lim_{n\to\infty}v_i(n)$, $\lim_{n\to\infty}x_i(n)$, and $\lim_{n\to\infty}p(n)$ exist for every $i\in\mathcal{N}$. This report tries to answer this question by giving some sufficient conditions that guarantee the convergence of (4)-(6).

III-B. Parallel Implementation of MCO
In this section, a parallel implementation of the MCO algorithm is introduced, which is described as Algorithm 1 in the MATLAB language format. The command `matlabpool` opens or closes a pool of MATLAB sessions for parallel computation, and enables the parallel language features within the MATLAB language (e.g., `parfor`) by starting a parallel job that connects this MATLAB client with a number of labs.
The command `parfor` executes a code loop in parallel. Part of the loop body is executed on the MATLAB client (where the `parfor` is issued) and part is executed in parallel on MATLAB workers. The necessary data on which the loop operates is sent from the client to the workers, where most of the computation happens, and the results are sent back to the client and pieced together. In Algorithm 1, the `parfor` command is used for the loop of the update formula over all particles. Since the update formula needs the neighbors' information, two temporary variables are introduced to store the global position and velocity information, respectively, and $L$ is the Laplacian matrix of the communication topology for MCO.
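To clarify the data flow that makes the per-particle loop parallelizable, the following Python sketch performs one MCO-style iteration. It is an illustration under stated assumptions, not the report's MATLAB code: the coefficient values are arbitrary, and an inertia-like damping factor `w` (not part of the update (4)-(6)) is included only to keep the toy iteration numerically well behaved.

```python
import numpy as np

def mco_step(f, x, v, p, L, eta=0.05, mu=0.05, kappa=0.4, w=0.9, rng=None):
    """One MCO-style iteration: neighbor coupling through the Laplacian L
    plus attraction toward the global best position p.

    x, v : (q, d) arrays of particle positions and velocities
    p    : (d,) global best position found so far
    L    : (q, q) graph Laplacian of the communication topology
    The coefficients and the damping factor w are illustrative choices.
    """
    if rng is None:
        rng = np.random.default_rng()
    x_snap, v_snap = x.copy(), v.copy()   # snapshots: each particle must read
                                          # its neighbors' pre-update values
    r = rng.random(len(x))                # one random coefficient per particle
    for i in range(len(x)):               # the parallelizable per-particle loop
        coupling = -eta * (L[i] @ x_snap) - mu * (L[i] @ v_snap)
        v[i] = w * v[i] + coupling + kappa * r[i] * (p - x_snap[i])
        x[i] = x[i] + v[i]
    vals = np.apply_along_axis(f, 1, x)
    if vals.min() < f(p):                 # update the global best
        p = x[np.argmin(vals)].copy()
    return x, v, p
```

The snapshot copies play the role of the temporary global position and velocity variables discussed above: each particle reads only pre-update neighbor states, so the loop iterations are independent and can be distributed across workers.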
IV. Convergence Analysis
In this section, we present some theoretical results on the global convergence of the iterative process in Algorithm 1. In particular, we view the randomized MCO algorithm as a discrete-time switched linear system and then use semistability theory to rigorously show its global convergence. To proceed with the presentation, let $\mathbb{R}$ denote the set of real numbers.
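The switched-system viewpoint can be previewed with a toy example (unrelated to the report's specific matrices): iterating x(k+1) = A_sigma(k) x(k) with randomly switched row-stochastic averaging matrices. Every constant vector is an equilibrium, so the trajectory converges to one point in a continuum of equilibria; this is precisely the situation that semistability, rather than asymptotic stability, describes.

```python
import numpy as np

rng = np.random.default_rng(1)

# two row-stochastic averaging matrices on 3 agents; each fixes span{1}
A1 = np.array([[0.5, 0.5, 0.0],
               [0.5, 0.5, 0.0],
               [0.0, 0.5, 0.5]])
A2 = np.array([[0.5, 0.0, 0.5],
               [0.0, 0.5, 0.5],
               [0.5, 0.0, 0.5]])

x = np.array([3.0, -1.0, 5.0])
for _ in range(300):
    A = A1 if rng.random() < 0.5 else A2   # random switching signal
    x = A @ x

# x converges to c * [1, 1, 1], where c depends on the switching path
```

The limit is an equilibrium that depends on the initial condition and the realized switching sequence, which is exactly the kind of convergence statement that semistability theory is designed to certify.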
Lemma IV.1
Let $q$ and $d$ be positive integers. For every $i\in\{1,\dots,q\}$, let $E_i$ denote the $d\times qd$ block matrix whose $i$th block-column is $I_d$ and whose remaining block-elements are all zero matrices, i.e., $E_i=[0_{d\times d},\dots,0_{d\times d},I_d,0_{d\times d},\dots,0_{d\times d}]$, where $I_d$ denotes the $d\times d$ identity matrix and $0_{d\times d}$ denotes the $d\times d$ zero matrix. Define $A_i=J_{q\times 1}\otimes E_i$ for every $i\in\{1,\dots,q\}$, where $\otimes$ denotes the Kronecker product and $J_{q\times 1}$ denotes the $q\times 1$ matrix whose entries are all ones. Then the following statements hold:

i)
For every $i\in\{1,\dots,q\}$, $A_i$ is an idempotent matrix, i.e., $A_i^2=A_i$, and $\operatorname{rank}(A_i)=d$, where $\operatorname{rank}(A_i)$ denotes the rank of $A_i$.

ii)
For any $i,j\in\{1,\dots,q\}$, $A_iA_j=A_j$. In particular, $\ker(I_{qd}-A_i)=\operatorname{span}\{\mathbf{1}_q\otimes e_1,\dots,\mathbf{1}_q\otimes e_d\}$ and $A_i(\mathbf{1}_q\otimes z)=\mathbf{1}_q\otimes z$ for every $i\in\{1,\dots,q\}$ and every $z\in\mathbb{R}^d$, where $e_k$ denotes the $k$th standard basis vector in $\mathbb{R}^d$, $\ker(\cdot)$ denotes the kernel of a matrix, and $\operatorname{span}(\cdot)$ denotes the span of a set of vectors.

, , and for every and every .
Proof:
i) First note that, by Fact 7.4.3 of [30, p. 445], for every $i\in\{1,\dots,q\}$. Now it follows from Fact 7.4.20 of [30, p. 446] that

$$A_i^2=A_i, \qquad (7)$$

which shows that $A_i$ is idempotent.
Next, it follows from (7) that for every . By Sylvester's inequality, we have , and hence, for every . On the other hand, since , it follows from Fact 2.10.17 of [30, p. 127] that , which implies that for every . Thus, for every .
ii) It follows from (7) that for every $i$ and every $j$,
$A_iA_j=A_j$. Since, by i), $\operatorname{rank}(A_j)=d$ for every $j$, it follows from Corollary 2.5.5 of [30, p. 105] that the defect of $I_{qd}-A_j$ equals $d$ for every $j$, where the defect of a matrix is the dimension of its kernel. Noting that $\mathbf{1}_q\otimes e_1,\dots,\mathbf{1}_q\otimes e_d$ are linearly independent, it follows that $\ker(I_{qd}-A_j)=\operatorname{span}\{\mathbf{1}_q\otimes e_1,\dots,\mathbf{1}_q\otimes e_d\}$ for every $j$.
iii) For every $i$ and every $j$, and . Finally, by Fact 7.4.3 of [30, p. 445], for every .
Next, we use some graph notions to state a result on the rank of certain matrices related to the matrix form of the iterative process in Algorithm 1.
Lemma IV.2
Define a (possibly infinite) series of matrices , , , as follows:
(8) 
where , , denotes the Laplacian matrix of a node-fixed dynamic digraph , and is defined in Lemma IV.1.

If and , then and for every , , where .

If , then and for every , .

If and , then and for every , .
Proof:
First, it follows from (8) that , , where and .
) If and , then and in can be chosen arbitrarily in and , respectively. Thus, and can be represented as and , where . In this case, and for every , . By Corollary 2.5.5 of [30, p. 105], for every , .
) We consider two cases on .
Case 1. If and , then substituting and into yields
(9) 
where is defined in Lemma IV.1. Since, by ) of Lemma IV.1, for every and every , it follows from (9) that can be represented as , where . Furthermore, it follows from ) of Lemma IV.1 that for every . Thus, for every , , which implies that for every , . Therefore, in this case .
Case 2. If and , then we claim that . To see this, it follows from Lemma II.1 that for any , . Furthermore, note that . Thus, if , then . Now, substituting and into yields
(10) 
Note that , . Premultiplying on both sides of (10) yields , . Since for every , it follows that , , where denotes the determinant. Hence, , .
Let . Noting that , it follows from Fact 7.4.22 of [30, p. 446] that for every , and hence, . Next, letting , it follows that