A Particle Swarm Based Algorithm for Functional Distributed Constraint Optimization Problems

09/13/2019 ∙ by Moumita Choudhury, et al. ∙ University of Dhaka

Distributed Constraint Optimization Problems (DCOPs) are a widely studied constraint handling framework. The objective of a DCOP algorithm is to optimize a global objective function that can be described as the aggregation of a number of distributed constraint cost functions. In a DCOP, each of these functions is defined over a set of discrete variables. However, in many applications, such as target tracking or sleep scheduling in sensor networks, continuous valued variables are more suitable than discrete ones. Considering this, Functional DCOPs (F-DCOPs) have been proposed, which can explicitly model problems containing continuous variables. Nevertheless, the state-of-the-art F-DCOP approaches incur onerous memory or computation overhead. To address this issue, we propose a new F-DCOP algorithm, namely Particle Swarm Based F-DCOP (PFD), which is inspired by the meta-heuristic Particle Swarm Optimization (PSO). Although PSO has been successfully applied to many continuous optimization problems, its potential has not been utilized in F-DCOPs. To be exact, PFD devises a distributed method of solution construction while significantly reducing the computation and memory requirements. Moreover, we theoretically prove that PFD is an anytime algorithm. Finally, our empirical results indicate that PFD outperforms the state-of-the-art approaches in terms of solution quality and computation overhead.


Introduction

Distributed Constraint Optimization Problems (DCOPs) are an important constraint handling framework of multi-agent systems in which multiple agents communicate with each other in order to optimize a global objective. The global objective is defined as the aggregation of cost functions (i.e. constraints) among the agents. Each cost function is defined over a set of variables controlled by the corresponding agents. DCOPs have been widely applied to a number of multi-agent coordination problems, including multi-agent task scheduling [16], sensor networks [5], and multi-robot coordination [20].

Over the years, a number of algorithms have been proposed to solve DCOPs, including both exact and non-exact algorithms. Exact algorithms, such as ADOPT [12], DPOP [13] and PT-FB [10], are designed to provide the globally optimal solution of a given DCOP. However, exact algorithms suffer from exponential memory requirements and/or exponential computational cost as the system grows. On the contrary, non-exact algorithms, such as DSA [22], MGM & MGM2 [11], Max-Sum [6], CoCoA [18], and ACO_DCOP [3], trade some solution quality for scalability.

In general, DCOPs assume that the participating agents' variables are discrete. Nevertheless, many real-world applications (e.g. target tracking sensor orientation [7], sleep scheduling of wireless sensors [9]) are best modelled with continuous variables. Therefore, for discrete DCOP algorithms to be applicable to such problems, the continuous domains of the variables must be discretized. However, the discretization must be coarse enough for the problem to remain tractable, yet sufficiently fine to find high-quality solutions [15]. To overcome this issue, Stranders et al. proposed a continuous version of DCOP, later referred to as Functional DCOP (F-DCOP) [8]. There are two main differences between F-DCOPs and DCOPs. Firstly, instead of discrete decision variables, an F-DCOP has continuous variables that can take any value within a range. Secondly, the constraint functions are represented in functional form in an F-DCOP rather than in the tabular form of a DCOP.
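To see why the discretization granularity matters, the following sketch (in Python; the quadratic cost and grid sizes are hypothetical, not taken from the paper) compares a coarse and a fine grid against the continuous optimum of a single constraint:

```python
# Illustrative only: coarse discretization can miss the continuous optimum.
def f(x):
    return (x - 3.7) ** 2  # continuous minimum at x = 3.7, cost 0

def discretized_min(lo, hi, points):
    # Evaluate f on an evenly spaced grid, as a discretized DCOP would.
    step = (hi - lo) / (points - 1)
    return min(f(lo + i * step) for i in range(points))

coarse = discretized_min(-10, 10, 5)    # grid: -10, -5, 0, 5, 10
fine = discretized_min(-10, 10, 201)    # grid spacing 0.1
print(coarse, fine)  # the fine grid gets much closer to the true minimum 0
```

With 5 points the best grid value (x = 5) still costs 1.69, while the fine grid is near-optimal, illustrating the tractability-versus-quality tension described above.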

To cope with this modification of the DCOP formulation, Continuous Max-Sum (CMS) has been proposed as an extension of the discrete Max-Sum [15]. However, CMS approximates the constraint utility functions as piece-wise linear functions, which is often not applicable in practice, since only a handful of real-life applications deal purely with piece-wise linear functions. To address this limiting assumption of CMS, Hybrid Max-Sum (HCMS) has been proposed, in which continuous non-linear optimization methods are combined with the discrete Max-Sum algorithm [19]. However, continuous optimization methods such as gradient-based optimization require derivative calculations, and thus they are not suitable for non-differentiable optimization problems. The latest contribution in this field is due to Hoang et al. (2019), who propose one exact algorithm, Exact Functional DPOP (EF-DPOP), and two approximate versions of DPOP, Approximate Functional DPOP (AF-DPOP) and Clustered AF-DPOP (CAF-DPOP), for solving F-DCOPs [8]. The key limitation of these algorithms is that both AF-DPOP and CAF-DPOP incur exponential memory and computation overhead, even though the latter cuts the communication cost by providing a bound on message size.

Against this background, we propose a Particle Swarm Optimization based F-DCOP algorithm (PFD). Particle Swarm Optimization (PSO) [4] is a stochastic optimization technique inspired by the social metaphor of bird flocking that has been successfully applied to many optimization problems, such as function minimization [14], neural network training [21] and power-system stabilizer design [1]. However, to the best of our knowledge, no previous work has incorporated PSO in DCOPs or F-DCOPs. In PFD, agents cooperatively maintain a set of particles, where each particle represents a candidate solution, and iteratively improve the solutions over time. Since PSO requires only primitive mathematical operators, such as addition and multiplication, it is computationally less expensive (both in memory and speed) than gradient-based optimization methods. Specifically, we empirically show that PFD not only finds solutions of better quality by exploring a large search space, but is also computationally less expensive, both in terms of memory and computation cost, than the existing F-DCOP solvers.

Background and Problem Formulation

In this section, we formulate the problem and discuss the background necessary to understand our proposed method. We first describe the general DCOP framework and then move on to the F-DCOP framework, which is the main problem we want to solve. We then discuss the centralized PSO algorithm and the challenges that remain in incorporating PSO into the F-DCOP framework.

Distributed Constraint Optimization Problem

A DCOP can be defined as a tuple ⟨A, X, D, F, α⟩ [12] where,

  • A is a set of agents {a_1, a_2, …, a_n}.

  • X is a set of discrete variables {x_1, x_2, …, x_m}, where each variable x_j is controlled by one of the agents a_i ∈ A.

  • D is a set of discrete domains {D_1, D_2, …, D_m}, where each D_j corresponds to the domain of variable x_j.

  • F is a set of cost functions {f_1, f_2, …, f_l}, where each f_i ∈ F is defined over a subset x^i = {x_{i_1}, x_{i_2}, …, x_{i_k}} of the variables in X, and the cost of f_i is defined for every possible value assignment of x^i, that is, f_i : D_{i_1} × D_{i_2} × … × D_{i_k} → ℝ. The cost functions can be of any arity, but for simplicity we assume binary constraints throughout the paper.

  • α : X → A is a variable-to-agent mapping function which assigns the control of each variable x_j ∈ X to an agent α(x_j). Each agent can hold several variables. However, for ease of understanding, in this paper we assume each agent controls only one variable.

The solution of a DCOP is an assignment X* that minimizes the sum of the cost functions, as shown in Equation 1.

X* = argmin_X Σ_{f_i ∈ F} f_i(x^i)    (1)
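As an illustration, the objective in Equation 1 can be evaluated by summing tabular binary constraint costs over a complete assignment; the toy instance below is hypothetical and only sketches the aggregation:

```python
from itertools import product

# Hypothetical DCOP: binary constraints given as (var_a, var_b, cost_table).
constraints = [
    ("x1", "x2", {(0, 0): 4, (0, 1): 1, (1, 0): 2, (1, 1): 5}),
    ("x2", "x3", {(0, 0): 3, (0, 1): 6, (1, 0): 0, (1, 1): 2}),
]

def global_cost(assignment):
    """Equation 1: aggregate the cost of every constraint under `assignment`."""
    return sum(table[(assignment[a], assignment[b])]
               for a, b, table in constraints)

# Exhaustively search the binary domains for the minimizing assignment X*.
best = min((dict(zip(("x1", "x2", "x3"), vals))
            for vals in product((0, 1), repeat=3)),
           key=global_cost)
print(best, global_cost(best))  # {'x1': 0, 'x2': 1, 'x3': 0} 1
```

Exhaustive search is only feasible for this tiny instance; DCOP algorithms exist precisely because the joint assignment space grows exponentially.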

Functional Distributed Constraint Optimization Problem

(a) Constraint Graph

(b) Cost Functions
Figure 1: Example of an F-DCOP

Similar to the DCOP formulation, an F-DCOP can be defined as a tuple ⟨A, X, D, F, α⟩. In an F-DCOP, A, F and α are the same as defined in a DCOP. Nonetheless, the set of variables X and the set of domains D are defined as follows -

  • X is the set of continuous variables {x_1, x_2, …, x_m} that are controlled by the agents in A.

  • D is a set of continuous domains {D_1, D_2, …, D_m}, where each variable x_j can take any value in the range D_j = [LB_j, UB_j].

As discussed in the previous section, a notable difference between F-DCOPs and DCOPs lies in the representation of the cost functions. In a DCOP, the cost functions are conventionally represented in tabular form, while in an F-DCOP each constraint is represented in the form of a function [8]. However, the goal remains the same as depicted in Equation 1. Figure 1 presents an example of an F-DCOP, where Figure 1a shows the constraint graph with four variables, each controlled by an agent. Each edge in Figure 1a stands for a constraint function, and the definition of each function is shown in Figure 1b. In this particular example, each variable can take values from the range [-10, 10].

Particle Swarm Optimization

PSO is a population-based optimization technique inspired by the movement of a bird flock or a fish school. In PSO, each individual of the population is called a particle. PSO solves the problem by moving the particles in a multi-dimensional search space, adjusting each particle's position and velocity. As shown in Algorithm 1, each particle is initially assigned a random position and velocity. A fitness function is defined to evaluate the position of each particle. For simplicity, we use the terms optimization and minimization interchangeably throughout the paper. In each iteration, the movement of a particle is guided by both its personal best position found so far in the search space and the global best position found by the entire swarm (Algorithm 1). The combination of the personal best and the global best position ensures that when a better position is found through the search process, the particles move closer to it and explore the surrounding search space more thoroughly, considering it a potential solution. The personal best position of each particle and the global best position of the entire population are updated when necessary (Algorithm 1). Over the last couple of decades, several versions of PSO have been developed. The standard PSO has a tendency to converge to a suboptimal solution, since the velocity component of the global best particle tends to zero after some iterations. Consequently, the global best position stops moving, and the swarm behavior of all other particles leads them to follow it. To cope with this premature convergence of standard PSO, Guaranteed Convergence PSO (GCPSO) has been proposed, which provides a convergence guarantee to a local optimum [17]. To obtain a similar convergence behavior in F-DCOPs, we adopt GCPSO in our proposed method.

1 Generate an n-dimensional population of particles
2 Initialize the position and velocity of each particle
3 while termination condition not met do
4        for each particle do
5               calculate current velocity and position
6               if current position is better than personal best then
7                      update personal best
8               if current position is better than global best then
9                      update global best
Algorithm 1 Particle Swarm Optimization
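A minimal centralized sketch of Algorithm 1 follows; the parameter defaults (w, c1, c2, rho and the success/failure thresholds) are illustrative assumptions, not the paper's settings, and the special update for the global best particle mirrors the GCPSO behavior described above:

```python
import random

def gcpso_minimize(f, dim, lo, hi, particles=20, iters=200,
                   w=0.7, c1=1.5, c2=1.5, rho=1.0, eps_s=5, eps_f=5):
    """Minimize f over [lo, hi]^dim with a GCPSO-style particle swarm."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    V = [[0.0] * dim for _ in range(particles)]
    pbest = [x[:] for x in X]
    pcost = [f(x) for x in X]
    g = min(range(particles), key=lambda k: pcost[k])
    gbest, gcost = pbest[g][:], pcost[g]
    succ = fail = 0
    for _ in range(iters):
        improved = False
        for k in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                if k == g:
                    # Global best particle: search a region of diameter rho
                    # around gbest instead of the standard velocity update.
                    V[k][d] = -X[k][d] + gbest[d] + w * V[k][d] + rho * (1 - 2 * r2)
                else:
                    # Standard PSO velocity update from pbest and gbest.
                    V[k][d] = (w * V[k][d]
                               + c1 * r1 * (pbest[k][d] - X[k][d])
                               + c2 * r2 * (gbest[d] - X[k][d]))
                # Position update, clamped to the continuous domain.
                X[k][d] = min(hi, max(lo, X[k][d] + V[k][d]))
            cost = f(X[k])
            if cost < pcost[k]:
                pbest[k], pcost[k] = X[k][:], cost
            if cost < gcost:
                gbest, gcost, g = X[k][:], cost, k
                improved = True
        # Adapt rho from consecutive successes/failures, as in GCPSO.
        succ, fail = (succ + 1, 0) if improved else (0, fail + 1)
        if succ > eps_s:
            rho *= 2.0
        elif fail > eps_f:
            rho *= 0.5
    return gbest, gcost
```

For example, `gcpso_minimize(lambda x: sum(v * v for v in x), dim=2, lo=-10.0, hi=10.0)` drives the swarm toward the origin, the minimum of the sphere function.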

Challenges

The following challenges must be addressed to develop an anytime F-DCOP algorithm that adapts the guaranteed convergence PSO:

  • Particles and Fitness Representation: We need to define a representation for the particles in which each particle represents a solution of the F-DCOP. Moreover, a distributed method for calculating the fitness of each particle needs to be devised.

  • Creating the Population: In centralized optimization problems, creating the initial population is a trivial task. In an F-DCOP, however, different agents control different variables; hence, a method needs to be devised to cooperatively generate the initial population.

  • Evaluation: Centralized PSO deals with an n-dimensional optimization task. In an F-DCOP, each agent holds k variables and is responsible for solving a k-dimensional optimization task, while the global objective remains an n-dimensional optimization problem.

  • Maintaining the Anytime Property: To maintain the anytime property in an F-DCOP model, we need to identify the global best particle and the personal best position of each particle. A distributed method needs to be devised to notify all the agents when a new global best particle or personal best position is found. Finally, a coordination method is needed among the agents to update the positions and velocities considering the current best position.

In the following section, we devise a novel method to apply PSO to F-DCOPs while maintaining the balance between local and global benefit.

Proposed Method

PFD is a PSO-based iterative algorithm consisting of three phases: Initialization, Evaluation and Update. In the Initialization phase, a pseudo-tree is constructed, the initial population is created, and the parameters are initialized. In the Evaluation phase, the agents calculate the fitness function for each particle in a distributed way. The Update phase keeps track of the best solution found so far, propagates this information to the agents, and updates the assignments accordingly. The detailed algorithm can be found in Algorithm 2.

(a) BFS pseudo-tree

(b) Ordered arrangement
Figure 2: Pseudo tree construction and ordered arrangement

Initialization starts with ordering the agents into a Breadth First Search (BFS) pseudo-tree [2]. The pseudo-tree defines a message passing order that is used in the Evaluation and Update phases. In the ordered arrangement, agents at lower depths have higher priorities than agents at higher depths, and ties are broken in alphabetical order. Figures 2(a) and 2(b) illustrate the BFS pseudo-tree and the ordered arrangement, where the topmost agent is the root and the arrows represent the message passing direction. From this point, for an agent a_i, we refer to its set of neighbors, and to its sets of higher-priority and lower-priority neighbors, which are determined by this ordering.
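The BFS ordering and neighbor partitioning described above can be sketched as follows; the example graph and agent names are hypothetical:

```python
from collections import deque

def bfs_order(graph, root):
    """Return agents in BFS order from the root; lower depth means higher
    priority, and ties are broken alphabetically via sorted neighbor visits."""
    seen = {root}
    order = [root]
    q = deque([root])
    while q:
        u = q.popleft()
        for v in sorted(graph[u]):
            if v not in seen:
                seen.add(v)
                order.append(v)
                q.append(v)
    return order

def split_neighbors(graph, order, agent):
    """Partition an agent's neighbors into higher- and lower-priority sets."""
    rank = {a: i for i, a in enumerate(order)}
    higher = {v for v in graph[agent] if rank[v] < rank[agent]}
    return higher, set(graph[agent]) - higher

# Hypothetical four-agent constraint graph.
graph = {"a1": {"a2", "a3"}, "a2": {"a1", "a4"}, "a3": {"a1"}, "a4": {"a2"}}
order = bfs_order(graph, "a1")
print(order)                                # ['a1', 'a2', 'a3', 'a4']
print(split_neighbors(graph, order, "a2"))  # ({'a1'}, {'a4'})
```

Messages in the Evaluation phase then flow from higher-priority to lower-priority neighbors and costs flow back up, following this ordering.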

At the beginning of the algorithm, PFD takes input from the users to initialize all the parameters, including the number of particles K. The parameter values depend on the experiments; the settings used in our experiments are discussed later in the text. We define P as the set of K particles maintained cooperatively by the agents, where each agent holds the component(s) of each particle corresponding to the variable(s) it controls. Each particle has a velocity attribute and a position attribute: the velocity attribute defines the movement direction, and the position attribute holds the values of the variables the agent controls. Each agent a_i then executes Init (Algorithm 2) and, for each particle P_k, initializes its velocity component V_k.a_i to 0 and its position component X_k.a_i to a random value from its domain D_i. Here, V_k.a_i and X_k.a_i denote the velocity and position components of particle P_k set by agent a_i. For the example of Figure 2(b), let us assume the number of particles is K = 2, so the set of particles is P = {P_1, P_2}. After selecting the values of its variable, each agent shares the particle set P with its lower-priority neighbors.

1 Construct BFS pseudo-tree
2 Initialize parameters and generate the set of K particles, P
3 Function Init():
4        for each particle P_k ∈ P do
5               V_k.a_i ← 0
6               X_k.a_i ← a random value from D_i
7        Send P to the lower-priority neighbors of a_i
8 for each agent a_i ∈ A do
9        Init()
10 while termination condition not met, each agent a_i do
11        for each P received from a higher-priority neighbor a_j do
12               for each particle P_k ∈ P do
13                      add the constraint cost between X_k.a_i and X_k.a_j to the local fitness of P_k
14               Send P to the lower-priority neighbors of a_i
15        Wait until cost messages are received from all lower-priority neighbors of a_i
16        if a_i is not the root then
17               for each particle P_k ∈ P do
18                      aggregate the received costs into the local fitness of P_k
19               Send the aggregated fitness to a higher-priority neighbor of a_i
20        else
21               Update()
22        Wait until P_pbest and P_gbest are received from a higher-priority neighbor
23        Calculate #success and #failure according to equations 7, 8
24        for each particle P_k ∈ P do
25               if P_k is the global best particle then
26                      calculate V_k.a_i and X_k.a_i according to equations 3, 5
27               else
28                      calculate V_k.a_i and X_k.a_i according to equations 4, 5
29        Send P, P_pbest and P_gbest to the lower-priority neighbors of a_i
30 Function Update():
31        for each particle P_k ∈ P do
32               if fitness(P_k) < fitness(P_k.pbest) then
33                      P_k.pbest ← X_k
34               if fitness(P_k) < fitness(P_gbest) then
35                      P_gbest ← X_k
36        Send P_pbest and P_gbest to the lower-priority neighbors of the root
Algorithm 2 Particle Swarm F-DCOP

The Evaluation phase of PFD calculates the fitness of each particle P_k using the fitness function shown in Equation 2, where X_k represents the complete assignment of the variables in particle P_k.

fitness(X_k) = Σ_{f_i ∈ F} f_i(X_k)    (2)

This phase starts after an agent receives the value assignments from all of its higher-priority neighbors. Each agent a_i is responsible for calculating the constraint costs associated with each of its higher-priority neighbors (Algorithm 2). We define the local fitness of each particle P_k at agent a_i as the sum of these costs. When an agent a_i receives the value assignments X_k.a_j from a higher-priority neighbor a_j, it calculates the constraint cost between them and adds it to the local fitness, which is then propagated toward the root. Additionally, each agent except the leaf agents needs to pass the constraint costs calculated by its lower-priority neighbors upward in the pseudo-tree (Algorithm 2).

For the example shown in Figure 1, each leaf agent calculates the constraint costs shared with its higher-priority neighbors and sends the resulting fitness values upward; each intermediate agent adds its own constraint costs to the fitness values received from its lower-priority neighbors and passes the aggregate toward the root.
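This upward cost propagation can be sketched as a bottom-up aggregation over the pseudo-tree; the tree shape and the per-agent local fitness values below are hypothetical:

```python
# Toy upward aggregation: each agent's local fitness per particle is summed
# along the tree so the root ends up with the global fitness of each particle.
tree = {"a1": ["a2", "a3"], "a2": ["a4"], "a3": [], "a4": []}

# Local fitness contribution of each agent for two particles (illustrative).
local = {"a1": [0.0, 0.0], "a2": [3.0, 1.0], "a3": [2.0, 4.0], "a4": [1.0, 2.0]}

def aggregate(agent):
    """Return the per-particle fitness for the subtree rooted at `agent`."""
    totals = list(local[agent])
    for child in tree[agent]:
        for k, v in enumerate(aggregate(child)):
            totals[k] += v
    return totals

print(aggregate("a1"))  # → [6.0, 7.0], the global fitness of each particle
```

In PFD this recursion is realized by messages: each agent waits for its lower-priority neighbors' cost messages, adds its own costs, and forwards the sums up the tree.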

The Update phase consists of two parts: the P_pbest and P_gbest update, and the variable update. We define P_pbest to be the personal best position achieved so far by each particle and P_gbest to be the global best position among all the particles. Since each agent calculates and passes the cost of the constraints up the pseudo-tree, the fitness of all the particles propagates to the root. The root agent then sums, for each particle P_k, the fitness values received from its lower-priority neighbors. The root agent then checks and updates P_pbest for each particle and P_gbest for the swarm, and sends the new values to its lower-priority neighbors (Algorithm 2). When an agent receives the P_pbest and P_gbest of the previous iteration, it updates the velocity component and the position component of each particle. To adapt the guaranteed convergence method to PFD, two types of update equations for the velocity component are defined. If the particle is the current global best particle, the update equation is defined as follows:

V_k.a_i(t+1) = −X_k.a_i(t) + P_gbest.a_i(t) + ω V_k.a_i(t) + ρ(t)(1 − 2r_2)    (3)

For all other particles, the velocity update equation is defined as follows:

V_k.a_i(t+1) = ω V_k.a_i(t) + c_1 r_1 (P_k.pbest.a_i(t) − X_k.a_i(t)) + c_2 r_2 (P_gbest.a_i(t) − X_k.a_i(t))    (4)

The position component update equation is same for all the particles which is defined in the following equation:

X_k.a_i(t+1) = X_k.a_i(t) + V_k.a_i(t+1)    (5)

In equations 3, 4, and 5, V_k.a_i(t) and X_k.a_i(t) refer to the velocity and position components controlled by agent a_i for particle P_k in iteration t. Here, an iteration refers to a complete round of the Evaluation and Update phases (Algorithm 2). ω is the inertia weight, which defines the influence of the current velocity on the updated velocity; r_1 and r_2 are two random values in [0, 1]; and c_1, c_2 are two constants. The combination of c_1 and c_2 defines the magnitude of the influence that the personal best and the global best have on the updated particle position. In equation 3, ρ is used to explore a random area near the position of the global best particle. To be precise, ρ defines the diameter of the area that the particle explores. The value of ρ is adjusted according to equation 6.

ρ(t+1) = 2ρ(t),    if #success > ϵ_s
ρ(t+1) = 0.5ρ(t),  if #failure > ϵ_f
ρ(t+1) = ρ(t),     otherwise    (6)

In equation 6, #success and #failure are the counts of consecutive successes and failures, respectively. A success occurs when the global best particle updates its personal best position; similarly, a failure occurs when the position of the global best particle remains unchanged. The parameters ϵ_s and ϵ_f are the thresholds on #success and #failure. The following equations define #success and #failure.

#success(t+1) = #success(t) + 1 if fitness(P_gbest(t)) < fitness(P_gbest(t−1)), and 0 otherwise    (7)
#failure(t+1) = #failure(t) + 1 if fitness(P_gbest(t)) ≥ fitness(P_gbest(t−1)), and 0 otherwise    (8)

In equations 7 and 8, P_gbest(t) denotes the global best particle of iteration t. Each agent calculates #success and #failure according to equations 7 and 8 after receiving P_pbest and P_gbest from its higher-priority neighbors (Algorithm 2).

Consider the root agent in Figure 2. When it receives the fitness values from all of its lower-priority neighbors, it is ready to calculate P_pbest and P_gbest. Based on the updated fitness values, it constructs the new P_pbest and P_gbest and notifies its lower-priority neighbors. Each agent then calculates #success and #failure and updates its velocity and position values based on equations 3, 4, and 5.
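Assuming the standard GCPSO forms of equations 3-5, the per-agent update of a single velocity/position component can be sketched as follows (parameter defaults are illustrative, not the paper's settings):

```python
import random

def update_component(v, x, pbest, gbest, is_global_best,
                     w=0.7, c1=1.5, c2=1.5, rho=1.0):
    """Update one velocity/position component held by an agent.
    v, x: current velocity and position components of one particle;
    pbest, gbest: personal and global best values of that component."""
    r1, r2 = random.random(), random.random()
    if is_global_best:
        # Equation 3: resample within a region of diameter rho around gbest.
        v_new = -x + gbest + w * v + rho * (1 - 2 * r2)
    else:
        # Equation 4: standard PSO velocity update toward pbest and gbest.
        v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    # Equation 5: position update.
    return v_new, x + v_new
```

When a particle already sits at both its personal and global best, the non-best update leaves it in place (the attraction terms vanish), which is exactly the stagnation that the equation 3 branch is designed to break.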

Theoretical Analysis

In this section, we first prove that PFD is an anytime algorithm, that is, its solution quality improves monotonically and never degrades over time. We then provide a theoretical complexity analysis in terms of communication, computation and memory.

Lemma 1: At iteration t + l, the root is aware of the P_pbest and P_gbest up to iteration t, where l is the longest path between the root and any node in the pseudo-tree. (For the theoretical analysis, an iteration refers to a communication step, in which agents directly communicate only with their neighbors.)

To prove this lemma, it is sufficient to show that at iteration t + l the root agent has enough information to calculate P_pbest and P_gbest up to iteration t, that is, the root agent knows the fitness of each particle. To calculate the fitness of each particle according to equation 2, the root agent needs the cost messages from all other agents. A cost message from an agent at distance d from the root needs d iterations to reach the root. By induction, it takes l iterations for the cost messages calculated at iteration t by the agents at distance l from the root to reach the root.

Lemma 2: At iteration t + 2l, each agent is aware of the P_pbest and P_gbest up to iteration t.

In PFD, for any agent a_i, the value message path length and the cost message path length from the root are the same. So it takes d iterations for P_pbest and P_gbest to reach an agent at distance d from the root. Using Lemma 1 and the above claim, it takes at most 2l iterations for the P_pbest and P_gbest of iteration t to reach the agents at distance l from the root.

Proposition 1: PFD is an anytime algorithm.

By Lemma 2, at iterations t + 2l and t′ + 2l, with t < t′, each agent is aware of the P_pbest and P_gbest up to iterations t and t′, respectively. Let us assume fitness(P_gbest(t′)) > fitness(P_gbest(t)). But P_gbest and P_pbest only get updated when a better solution is found. Therefore, using proof by contradiction, fitness(P_gbest(t′)) ≤ fitness(P_gbest(t)), that is, the solution quality improves monotonically as the number of iterations increases. Thus we prove that PFD is an anytime algorithm.

Complexity Analysis

We define n as the total number of agents and |N_i| as the total number of neighbors of an agent a_i. In PFD, during the Initialization and Update phases an agent sends messages only to its lower-priority neighbors, that is, O(|N_i|) messages. Additionally, during the Evaluation phase an agent sends O(|N_i|) messages. Hence, after one complete round of the Initialization, Evaluation and Update phases, an agent has sent O(|N_i|) messages. In the worst case, the graph is complete, where |N_i| = n − 1. Therefore, the total number of messages sent by an agent is O(n) in the worst case.

The size of each message can be calculated as the size of each particle's components multiplied by the number of particles. If the total number of particles is K, the total message size for an agent at each iteration is O(nK) in the worst case.

During an iteration, an agent only needs to calculate V_k.a_i and X_k.a_i for each particle P_k. Hence, the total computation complexity per agent during an iteration is O(K), where K is the number of particles.

Experimental Results

In this section, we empirically evaluate the quality of the solutions produced by PFD against HCMS and AF-DPOP on two types of graphs: random graphs and random trees. CMS is not used in the comparison because it only works with piecewise linear functions, which are rarely applicable in real-world applications. Although Hoang et al. proposed three versions of Functional DPOP, we only compare with AF-DPOP here, since AF-DPOP is reported to provide the best solutions among the approximate algorithms proposed in their work. For the experimental performance evaluation, binary quadratic functions of the form ax² + bxy + cy² are used. Note that, although we choose binary quadratic functions for evaluation, PFD is broadly applicable to other classes of problems. The experiments are carried out on a laptop with an Intel Core i5-6200U 2.3 GHz processor and 8 GB of RAM. The detailed experimental settings are described below.
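Assuming the binary quadratic form f(x, y) = ax² + bxy + cy², random cost functions for such experiments can be generated as in this sketch (the coefficient range is an assumption, not the paper's setting):

```python
import random

def random_binary_quadratic(coeff_lo=-5.0, coeff_hi=5.0):
    """Return a random binary quadratic cost f(x, y) = a*x^2 + b*x*y + c*y^2.
    The coefficient range is an illustrative assumption."""
    a = random.uniform(coeff_lo, coeff_hi)
    b = random.uniform(coeff_lo, coeff_hi)
    c = random.uniform(coeff_lo, coeff_hi)
    return lambda x, y: a * x * x + b * x * y + c * y * y

random.seed(7)
f = random_binary_quadratic()
print(f(1.0, 2.0))  # cost of one joint assignment of the two variables
```

One such function would be attached to each edge of the generated constraint graph, so that each benchmark instance pairs a random topology with random functional constraints.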

Figure 3: Solution Cost Comparison of PFD and the competing algorithms varying number of agents (sparse graphs)
Figure 4: Solution Cost Comparison of PFD and the competing algorithms with iterations (sparse graphs)
Figure 5: Solution Cost Comparison of PFD and the competing algorithms varying number of agents (scale-free graphs)
Figure 6: Solution Cost Comparison of PFD and the competing algorithms varying number of agents (dense graphs)

Random Graphs: For random graphs we use three settings: sparse, dense and scale-free. Figure 3 shows the comparison of average costs on the Erdős–Rényi topology with the sparse setting (edge probability 0.2), varying the number of agents. We choose the coefficients of the cost functions randomly within a fixed range and set the domain of each agent to a fixed interval. For our proposed algorithm PFD, we keep the parameter settings identical across all experiments. For both HCMS and AF-DPOP we choose the number of discrete points to be 3; the discrete points are chosen randomly within the domain range. The averages are taken over 50 randomly generated problems. Figure 3 shows that PFD performs better than both HCMS and AF-DPOP on average. Notably, the performance of HCMS varies significantly, which results in a high standard deviation. The reason behind the high standard deviation is that the performance of HCMS on cyclic graphs depends on the initial discretization of the agents' domains. For larger numbers of agents, AF-DPOP ran out of memory; thus, we omit its results in those cases.

Figure 4 shows the comparison between PFD and HCMS in the sparse graph setting with an increasing number of iterations. We set the number of agents to 50, and the other settings are the same as in the above experiment. Moreover, we stop both algorithms after 500 iterations. HCMS initially performs slightly better than PFD up to 50 iterations, since the particles of PFD start from random positions and require a few iterations to move towards the best position. However, PFD outperforms HCMS afterwards, and the improvement rate of PFD is steadier than that of HCMS. Note that, for 50 agents, AF-DPOP ran out of memory in our setting; thus, we omit its result here.

To compare with the performance of AF-DPOP on larger graphs, we use scale-free graphs. Figure 5 shows the average cost comparison between the three algorithms with an increasing number of agents. PFD shows performance comparable to HCMS up to 30 agents and outperforms HCMS afterwards. Both PFD and HCMS outperform AF-DPOP. The large standard deviation of HCMS accounts for its performance being comparable with PFD for smaller numbers of agents.

We choose dense graphs as our final random graph setting. Figure 6 shows the comparison between PFD and HCMS on the Erdős–Rényi topology with the dense setting (edge probability 0.6). PFD shows comparatively better performance than HCMS. Note that AF-DPOP is not used on dense graphs due to its huge computation overhead.

Random Trees: We use the random tree configuration in our last experimental setting, since the memory requirement of AF-DPOP is smaller on trees. The experimental configuration is similar to the random graph settings. Figure 7 shows the comparison between PFD and the competing algorithms on random trees. The closest competitor of PFD in this setting is HCMS. On average, PFD outperforms HCMS, which in turn outperforms AF-DPOP. When the number of agents is 50, PFD outperforms AF-DPOP by a significant margin.

Figure 7: Solution Cost Comparison of PFD and the competing algorithms varying number of agents (random trees)

Conclusions

In order to model many real-world problems, continuous valued variables are more suitable than discrete valued variables. The F-DCOP framework is a variant of the DCOP framework that can model such problems effectively. To solve F-DCOPs, we propose an anytime algorithm called PFD that is inspired by the Particle Swarm Optimization (PSO) technique. To be precise, PFD devises a new method to calculate and propagate the best particle information across all the agents, which influences the swarm to move towards a better solution. We also theoretically prove that our proposed algorithm PFD is anytime. Moreover, the guaranteed convergence version of PSO is tailored into PFD, which ensures its convergence to a local optimum. We empirically evaluate our algorithm in a number of settings and compare the results with the state-of-the-art algorithms HCMS and AF-DPOP. In all of the settings, PFD markedly outperforms its counterparts in terms of solution quality. In the future, we would like to further investigate the potential of PFD in various F-DCOP applications. We also want to explore whether PFD can be extended to multi-objective F-DCOP settings.

References

  • [1] M. Abido (2002) Optimal design of power-system stabilizers using particle swarm optimization. IEEE Transactions on Energy Conversion 17(3), pp. 406–413.
  • [2] Z. Chen, Z. He, and C. He (2017) An improved DPOP algorithm based on breadth first search pseudo-tree for distributed constraint optimization. Applied Intelligence 47(3), pp. 607–623.
  • [3] Z. Chen, T. Wu, Y. Deng, and C. Zhang (2018) An ant-based algorithm to solve distributed constraint optimization problems. In Thirty-Second AAAI Conference on Artificial Intelligence.
  • [4] R. Eberhart and J. Kennedy (1995) Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Vol. 4, pp. 1942–1948.
  • [5] A. Farinelli, A. Rogers, and N. R. Jennings (2014) Agent-based decentralised coordination for sensor networks using the max-sum algorithm. Autonomous Agents and Multi-Agent Systems 28(3), pp. 337–380.
  • [6] A. Farinelli, A. Rogers, A. Petcu, and N. R. Jennings (2008) Decentralised coordination of low-power embedded devices using the max-sum algorithm. In Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, Volume 2, pp. 639–646.
  • [7] S. Fitzpatrick and L. Meertens (2003) Distributed coordination through anarchic optimization. In Distributed Sensor Networks: A Multiagent Perspective. Kluwer Academic, Dordrecht.
  • [8] K. D. Hoang, W. Yeoh, M. Yokoo, and Z. Rabinovich (2019) New algorithms for functional distributed constraint optimization problems. arXiv preprint arXiv:1905.13275.
  • [9] C. Hsin and M. Liu (2004) Network coverage using low duty-cycled sensors: random & coordinated sleep algorithms. In Proceedings of the 3rd International Symposium on Information Processing in Sensor Networks, pp. 433–442.
  • [10] O. Litov and A. Meisels (2017) Forward bounding on pseudo-trees for DCOPs and ADCOPs. Artificial Intelligence 252, pp. 83–99.
  • [11] R. T. Maheswaran, J. P. Pearce, and M. Tambe (2004) Distributed algorithms for DCOP: a graphical-game-based approach. In ISCA PDCS, pp. 432–439.
  • [12] P. J. Modi, W. Shen, M. Tambe, and M. Yokoo (2005) ADOPT: asynchronous distributed constraint optimization with quality guarantees. Artificial Intelligence 161(1–2), pp. 149–180.
  • [13] A. Petcu and B. Faltings (2005) A scalable method for multiagent constraint optimization. In IJCAI.
  • [14] Y. Shi and R. C. Eberhart (1999) Empirical study of particle swarm optimization. In Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), Vol. 3, pp. 1945–1950.
  • [15] R. Stranders, A. Farinelli, A. Rogers, and N. R. Jennings (2009) Decentralised coordination of continuously valued control parameters using the max-sum algorithm. In Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems, Volume 1, pp. 601–608.
  • [16] E. Sultanik, P. J. Modi, and W. C. Regli (2007) On modeling multiagent task scheduling as a distributed constraint optimization problem. In IJCAI, pp. 1531–1536.
  • [17] F. van den Bergh and A. P. Engelbrecht (2002) A new locally convergent particle swarm optimiser. In IEEE International Conference on Systems, Man and Cybernetics, Vol. 3.
  • [18] C. J. van Leeuwen and P. Pawelczak (2017) CoCoA: a non-iterative approach to a local search (A)DCOP solver. In AAAI.
  • [19] T. Voice, R. Stranders, A. Rogers, and N. R. Jennings (2010) A hybrid continuous max-sum algorithm for decentralised coordination. In ECAI, pp. 61–66.
  • [20] H. Yedidsion and R. Zivan (2016) Applying DCOP_MST to a team of mobile robots with directional sensing abilities (extended abstract). In AAMAS.
  • [21] J. Zhang, J. Zhang, T. Lok, and M. R. Lyu (2007) A hybrid particle swarm optimization–back-propagation algorithm for feedforward neural network training. Applied Mathematics and Computation 185(2), pp. 1026–1037.
  • [22] W. Zhang, G. Wang, Z. Xing, and L. Wittenburg (2005) Distributed stochastic search and distributed breakout: properties, comparison and applications to constraint optimization problems in sensor networks. Artificial Intelligence 161(1–2), pp. 55–87.