Distributed constraint optimization problems (DCOPs) [Modi et al.2005, Fioretto et al.2018] are a fundamental framework in multi-agent systems where agents coordinate their decisions to optimize a global objective. DCOPs have been adopted to model many real-world problems including radio frequency allocation [Monteiro et al.2012], smart grids [Fioretto et al.2017] and distributed scheduling [Maheswaran et al.2004b, Li et al.2016].
Most complete algorithms for DCOPs employ either distributed search [Hirayama and Yokoo1997, Modi et al.2005, Gershman et al.2009, Yeoh et al.2010] or inference [Petcu and Faltings2005b, Vinyals et al.2011] to optimally solve DCOPs. However, since DCOPs are NP-hard, complete algorithms cannot scale up due to their exponential overheads. Thus, incomplete algorithms [Maheswaran et al.2004a, Zhang et al.2005, Okamoto et al.2016, Rogers et al.2011, Zivan and Peled2012, Chen et al.2018, Ottens et al.2017, Fioretto et al.2016] have been proposed to trade optimality for smaller computational efforts.
Unfortunately, DCOPs fail to capture the ubiquitous asymmetric structure [Burke et al.2007, Maheswaran et al.2004b, Ramchurn et al.2011] since each constrained agent shares the same payoffs. PEAV [Maheswaran et al.2004b] attempts to capture asymmetric costs by introducing mirror variables whose consistency is enforced by hard constraints. However, PEAV suffers from scalability problems since the number of variables increases significantly. Moreover, many classical DCOP algorithms perform poorly when applied to this formulation due to the presence of hard constraints [Grinshpoun et al.2013]. On the other hand, ADCOPs [Grinshpoun et al.2013] are an alternative framework that captures asymmetry by explicitly defining the exact payoff for each participant of a constraint without introducing any extra variables, and they have been intensively investigated in recent years.
Solving ADCOPs involves evaluating and aggregating the payoff of each constrained agent, which is challenging in asymmetric settings due to privacy concerns. SyncABB and ATWB [Grinshpoun et al.2013] are asymmetric adaptations of SyncBB [Hirayama and Yokoo1997] and AFB [Gershman et al.2009], using a one-phase strategy to aggregate the individual costs. That is, the algorithms systematically check each side of a constraint before reaching a complete assignment. Besides, AsymPT-FB [Litov and Meisels2017] is the first tree-based algorithm for ADCOPs, which uses forward bounding to compute lower bounds and back bounding to achieve a one-phase check. Recently, PT-ISABB [Deng et al.2019] was proposed to improve tree-based search by implementing a non-local elimination version of ADPOP [Petcu and Faltings2005a] to provide much tighter lower bounds. However, since it relies on an exhaustive search to guarantee optimality, the algorithm still suffers from exponential communication overheads. On the other hand, although complete inference algorithms (e.g., DPOP [Petcu and Faltings2005b]) require only a linear number of messages to solve DCOPs, they cannot be directly applied to ADCOPs without PEAV because they require complete knowledge of each constraint to facilitate variable elimination. Accordingly, the parents would have to transfer their private cost functions to their children, which leaks at least half of the privacy.
In this paper, we adapt DPOP for solving ADCOPs for the first time by deferring the eliminations of variables. Specifically, we contribute to the state-of-the-art in the following aspects.
We propose AsymDPOP, the first complete inference-based algorithm to solve ADCOPs, by generalizing non-local elimination [Deng et al.2019]. That is, instead of eliminating variables at their parents, we postpone the eliminations until their highest neighbors in the pseudo tree. In other words, an agent in our algorithm may be responsible for eliminating several variables.
We theoretically analyze the complexity of our algorithm and show that the space complexity of an agent is exponential not only in the number of its separators but also in the number of its non-eliminated descendants.
We scale up our algorithm by introducing a table-set propagation scheme to reduce the memory consumption and a mini-batch scheme to reduce the number of operations when performing eliminations. Our empirical evaluation indicates that our proposed algorithm significantly outperforms the state-of-the-art, as well as the vanilla DPOP with PEAV formulation.
In this section we introduce the preliminaries including DCOPs, ADCOPs, pseudo tree, DPOP and non-local elimination.
2.1 Distributed Constraint Optimization Problems
A distributed constraint optimization problem [Modi et al.2005] is defined by a tuple $\langle A,X,D,F\rangle$ where
$A=\{a_1,\dots,a_q\}$ is the set of agents
$X=\{x_1,\dots,x_n\}$ is the set of variables
$D=\{D_1,\dots,D_n\}$ is the set of domains. Variable $x_i$ takes values from $D_i$
$F=\{f_1,\dots,f_m\}$ is the set of constraint functions. Each function $f_i:D_{i_1}\times\dots\times D_{i_k}\to\mathbb{R}_{\geq 0}$ specifies the cost assigned to each combination of $x_{i_1},\dots,x_{i_k}$.
For the sake of simplicity, we assume that each agent controls a variable (and thus the terms "agent" and "variable" can be used interchangeably) and all constraint functions are binary (i.e., $f_{ij}:D_i\times D_j\to\mathbb{R}_{\geq 0}$). A solution to a DCOP is an assignment to all the variables such that the total cost is minimized. That is, $X^*=\operatorname*{arg\,min}_{d\in D_1\times\dots\times D_n}\sum_{f_{ij}\in F}f_{ij}(d_i,d_j)$.
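As a toy illustration of the objective above, the following sketch (the variable names and cost tables are our own, not from the paper) evaluates the total cost of an assignment and brute-forces the optimal solution of a two-variable DCOP:

```python
# Hypothetical toy DCOP: variables indexed 0 and 1, domain {0, 1},
# one symmetric binary constraint stored as a dict cost table.

def total_cost(assignment, constraints):
    """Sum the shared cost of every binary constraint under `assignment`."""
    return sum(table[(assignment[i], assignment[j])]
               for (i, j), table in constraints.items())

constraints = {(0, 1): {(0, 0): 5, (0, 1): 2, (1, 0): 3, (1, 1): 4}}

# Brute force the optimal solution (feasible only for tiny instances;
# DCOP algorithms exist precisely to avoid this enumeration).
best = min(((a, b) for a in (0, 1) for b in (0, 1)),
           key=lambda d: total_cost({0: d[0], 1: d[1]}, constraints))
```

Here `best` is `(0, 1)` with cost 2; complete DCOP algorithms recover the same optimum without centralizing the cost tables.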
2.2 Asymmetric Distributed Constraint Optimization Problems
While DCOPs assume an equal payoff for each participant of a constraint, asymmetric distributed constraint optimization problems (ADCOPs) [Grinshpoun et al.2013] explicitly define the exact payoff for each constrained agent. In other words, a constraint function
$f_{ij}:D_i\times D_j\to\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}$ in an ADCOP specifies a cost vector for each possible combination of involved variables, and the goal is to find a solution which minimizes the aggregated cost. An ADCOP can be visualized by a constraint graph where the vertices denote variables and the edges denote constraints. Fig. 1 presents an ADCOP with two variables and a constraint. Besides, for the constraint between $x_i$ and $x_j$, we denote the private functions for $x_i$ and $x_j$ as $f_{ij}$ and $f_{ji}$, respectively.
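To make the asymmetric objective concrete, here is a minimal sketch (the cost tables are invented for illustration): each side of the constraint holds its own private table, and the global objective sums both payoffs.

```python
# Hypothetical private cost tables for one asymmetric constraint
# between x_i and x_j (domain {0, 1} for both variables).
f_ij = {(0, 0): 1, (0, 1): 4, (1, 0): 2, (1, 1): 0}  # x_i's side, keyed (d_i, d_j)
f_ji = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 6}  # x_j's side, keyed (d_j, d_i)

def aggregated_cost(di, dj):
    # The ADCOP objective aggregates the payoffs of both participants.
    return f_ij[(di, dj)] + f_ji[(dj, di)]

best = min(((a, b) for a in (0, 1) for b in (0, 1)),
           key=lambda d: aggregated_cost(*d))
```

Note that a symmetric DCOP would force both tables to coincide; here they differ, which is exactly the asymmetry the formulation captures.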
2.3 Pseudo Tree
A pseudo tree [Freuder and Quinn1985] is an ordered arrangement of a constraint graph in which different branches are independent. A pseudo tree can be generated by a depth-first traversal of a constraint graph, which categorizes constraints into tree edges and pseudo edges (i.e., non-tree edges). The neighbors of an agent $a_i$ are therefore categorized into its parent $P_i$, pseudo parents $PP_i$, children $C_i$ and pseudo children $PC_i$ according to their positions in the pseudo tree and the types of edges they connect through. We also denote the set containing its parent and pseudo parents as $AP_i=\{P_i\}\cup PP_i$, and its descendants as $Desc_i$. Besides, we denote its separators, i.e., the set of ancestors which are constrained with $a_i$ and its descendants, as $Sep_i$ [Petcu and Faltings2006]. Finally, we denote $a_i$'s interface descendants, i.e., the set of its descendants which are constrained with its ancestors, as $ID_i$. Fig. 2 presents a pseudo tree in which the dotted edge is a pseudo edge.
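The depth-first construction can be sketched as a DFS that records, for every agent, its tree parent and any pseudo parents (back edges to ancestors). The 4-agent graph below is an assumed toy topology for illustration, not necessarily the paper's Fig. 2.

```python
def build_pseudo_tree(edges, root):
    """DFS a constraint graph, classifying each constraint as a tree edge
    (parent relation) or a pseudo edge (back edge to a pseudo parent)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    parent = {root: None}
    pseudo_parents = {v: set() for v in adj}
    on_path = []  # ancestors of the node currently being expanded

    def dfs(u):
        on_path.append(u)
        for v in sorted(adj[u]):
            if v not in parent:            # unvisited neighbor: tree edge
                parent[v] = u
                dfs(v)
            elif v in on_path and v != parent[u]:
                pseudo_parents[u].add(v)   # back edge: v is a pseudo parent
        on_path.pop()

    dfs(root)
    return parent, pseudo_parents

# Toy graph: chain 1-2 with children 3 and 4 under 2, plus constraint 1-4.
parent, pp = build_pseudo_tree([(1, 2), (2, 3), (2, 4), (1, 4)], root=1)
```

In this toy graph the edge between 1 and 4 becomes a pseudo edge, so agent 1 is a pseudo parent of agent 4.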
2.4 DPOP and Non-local Elimination
DPOP [Petcu and Faltings2005b] is a complete inference-based algorithm for DCOPs based on bucket elimination [Dechter1999]. Given a pseudo tree, it performs a bottom-up utility propagation phase to eliminate variables and a top-down value propagation phase to assign the optimal value to each variable. More specifically, in the utility propagation phase, after receiving the utility tables from its children, an agent $a_i$ eliminates its variable from the joint utility table by computing the optimal utility for each possible assignment to $Sep_i$, and sends the projected utility table to its parent. In the value propagation phase, $a_i$ computes the optimal assignment for its variable given the assignments received from its parent, and propagates the joint assignment to its children. Although DPOP requires only a linear number of messages to solve a DCOP, its memory consumption is exponential in the induced width. Thus, several tradeoffs including ADPOP [Petcu and Faltings2005a], MB-DPOP [Petcu and Faltings2007] and ODPOP [Petcu and Faltings2006] have been proposed to improve its scalability.
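The projection step of the utility propagation phase can be sketched on dict-based utility tables (the representation is our own, not the paper's):

```python
def eliminate(table, dims, var):
    """Project `var` out of a utility table: for every assignment of the
    remaining dimensions keep the minimal utility over `var`'s values.

    `table` maps value tuples (ordered as in `dims`) to costs."""
    idx = dims.index(var)
    out = {}
    for values, cost in table.items():
        key = values[:idx] + values[idx + 1:]
        out[key] = min(out.get(key, float("inf")), cost)
    return out, [d for d in dims if d != var]

# Joint table over (x1, x2); eliminating x2 yields the message sent
# upward: one optimal cost per assignment of the separator {x1}.
joint = {(0, 0): 3, (0, 1): 1, (1, 0): 0, (1, 1): 5}
msg, msg_dims = eliminate(joint, ["x1", "x2"], "x2")
```

After the call, `msg` keeps only the best cost over `x2` for each value of `x1`, which is exactly the utility table a DPOP agent would forward to its parent.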
However, DPOP cannot be directly applied to asymmetric settings as it requires the total knowledge of each constraint to perform variable elimination optimally and locally. PT-ISABB [Deng et al.2019] applies (A)DPOP to solving ADCOPs by performing variable elimination only on a subset of constraints to build look-up tables for lower bounds, and uses a tree-based search to guarantee optimality. The algorithm further reinforces the bounds by a non-local elimination scheme. That is, instead of performing elimination locally, the elimination of a variable is postponed to its parent so as to include the private function enforced on the parent's side and increase the integrity of the utility table.
3 Asymmetric DPOP
The existing complete algorithms for ADCOPs use complete search to exhaust the search space, which makes them unsuitable for large-scale applications. In fact, as shown in our experimental results, these search-based algorithms can only solve problems with fewer than 20 agents. Hence, in this section, we propose AsymDPOP, a complete, privacy-protecting and scalable inference-based algorithm for ADCOPs built upon generalized non-local elimination. An execution example can be found in the appendix (https://arxiv.org/abs/1905.11828).
3.1 Utility Propagation Phase
In DPOP, a variable can be eliminated locally without loss of completeness after its agent has received all the utility messages from its children, since all functions that involve the variable have been aggregated. However, the conclusion does not hold for ADCOPs. Taking Fig. 2 as an example, a variable cannot be eliminated locally since the private functions enforced on the side of its (pseudo) parents are not given. Thus, a local elimination w.r.t. only the locally known functions would lead to an overestimation bias and offer no guarantee on completeness. A naïve solution would be that the (pseudo) parents transfer their private functions to their children, which would lead to an unacceptable privacy loss.
Inspired by non-local elimination, we consider an alternative privacy-protecting approach to aggregate constraint functions. That is, instead of deferring the elimination of a variable to its parent, we postpone it to its highest (pseudo) parent. In this way, all the functions involving the variable have been aggregated by the time it is eliminated, and the variable can be eliminated from the utility table optimally. Note that a utility table is a summation of the utility tables from children and local constraints. As a result, although a utility table which contains a variable's private functions is propagated to its ancestors without elimination, the ancestors can hardly infer the exact payoffs of these private functions. We refer to the bottom-up utility propagation phase as generalized non-local elimination (GNLE).
Before diving into the details of GNLE, let us first consider the following definitions.
Definition 1 (Dimensions).
$dims(\cdot)$ is a function which returns the set of dimensions of a utility table.
Definition 2 (Slice).
Let $T$ be a utility table and $v$ be a set of variable-value pairs. The slice of $T$ over $v$, denoted $T_{\downarrow v}$, is the utility table obtained from $T$ by fixing each variable appearing in $v$ to its paired value.
Definition 3 (Join [Vinyals et al.2011]).
Let $T_1,T_2$ be two utility tables. The join of $T_1$ and $T_2$, denoted $T=T_1\otimes T_2$, is the utility table over $dims(T_1)\cup dims(T_2)$ such that $T(d)=T_1(d_{\downarrow dims(T_1)})+T_2(d_{\downarrow dims(T_2)})$ for every assignment $d$ to $dims(T_1)\cup dims(T_2)$.
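On the same dict-based representation as before, the join of Definition 3 can be sketched as follows (the helper names and domain encoding are ours):

```python
from itertools import product

def join(t1, dims1, t2, dims2, domains):
    """Join two utility tables: the result ranges over the ordered union
    of their dimensions and adds the costs of the matching slices."""
    dims = list(dict.fromkeys(dims1 + dims2))  # ordered union of dimensions
    out = {}
    for values in product(*(domains[d] for d in dims)):
        asg = dict(zip(dims, values))
        out[values] = (t1[tuple(asg[d] for d in dims1)]
                       + t2[tuple(asg[d] for d in dims2)])
    return out, dims

t1 = {(0,): 1, (1,): 2}                            # table over (x1,)
t2 = {(0, 0): 0, (0, 1): 3, (1, 0): 1, (1, 1): 0}  # table over (x1, x2)
joined, jdims = join(t1, ["x1"], t2, ["x1", "x2"],
                     {"x1": [0, 1], "x2": [0, 1]})
```

Each entry of `joined` sums the costs of the two operands under the corresponding slice, matching the additive semantics of $\otimes$.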
Algorithm 1 presents the sketch of GNLE. The algorithm begins with leaf agents sending their utility tables to their parents via UTIL messages (line 2-3). When an agent $a_i$ receives a UTIL message from a child $a_c$, it joins its private functions w.r.t. its (pseudo) children in branch $a_c$ (line 5-6), and eliminates from the utility table all the variables in that branch whose highest (pseudo) parent is $a_i$ (line 7). We denote this set of elimination variables in branch $a_c$ as $EV_i^c$.
Then $a_i$ joins the eliminated utility table with its running utility table. It is worth mentioning that computing the set of elimination variables does not require agents to exchange their relative positions in the pseudo tree. Specifically, each variable is associated with a counter which is initially set to the number of its parents and pseudo parents. Whenever one of its (pseudo) parents receives a UTIL message containing the variable, the counter decreases, and the variable is added to the set of elimination variables as soon as its counter reaches zero.
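The counter mechanism can be sketched as follows (a simplified single-process view of our own; in the algorithm, each (pseudo) parent maintains the counters of the variables below it):

```python
def update_counters(counters, received_dims):
    """Decrement the counter of every variable carried by a UTIL message;
    a variable becomes eliminable once all its (pseudo) parents saw it."""
    ready = []
    for var in received_dims:
        counters[var] -= 1
        if counters[var] == 0:   # every (pseudo) parent has seen it
            ready.append(var)
    return ready

# Suppose x4 has one parent and one pseudo parent: its counter starts at 2.
counters = {"x4": 2}
first = update_counters(counters, ["x4"])   # first (pseudo) parent sees x4
second = update_counters(counters, ["x4"])  # second one sees it: eliminable
```

After the second message carrying `x4`, the variable enters the elimination set of the receiving agent, with no positional information ever exchanged.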
After receiving all the UTIL messages from its children, $a_i$ propagates the utility table to its parent if it is not the root (line 12). Otherwise, the value propagation phase starts (line 10).
3.2 Value Propagation Phase
In contrast to the one in vanilla DPOP, which determines the optimal assignment locally for each variable, the value propagation phase in AsymDPOP must be specialized to accommodate non-local elimination. Specifically, since a variable is eliminated at its highest (pseudo) parent, that agent is responsible for selecting the optimal assignment for the variable. Thus, the value messages in our algorithm contain not only the assignments for ancestors, but also the assignments for descendants. Algorithm 2 presents the sketch of the value propagation phase.
The phase is initiated by the root agent selecting its optimal assignment (line 14). Given the determined assignments either received from its parent (line 16) or computed locally (line 15), an agent selects the optimal assignments for the variables eliminated in each branch by a joint optimization over them (line 17-20), and propagates these assignments together with the determined assignments to its children (line 21-22). The algorithm terminates when each leaf agent receives a VALUE message.
3.3 Complexity Analysis
Theorem 1. The size of a UTIL message produced by an agent is exponential in the number of its separators and its interface descendants.
We prove the theorem by showing that a UTIL message produced by an agent $a_i$ contains exactly the dimensions of $Sep_i$ and $ID_i$. The UTIL message must contain the dimensions of $ID_i$ since $a_i$ is not the highest (pseudo) parent of any variable in $ID_i$. On the other hand, the UTIL message cannot contain the dimensions of $Desc_i\setminus ID_i$ since for each variable in $Desc_i\setminus ID_i$ there must exist an agent in the subtree rooted at $a_i$ which is its highest (pseudo) parent, and thus the variable is eliminated at that agent (line 7). Finally, the UTIL message contains the dimensions of $Sep_i$ according to [Petcu and Faltings2005b] (Theorem 1). Thus, the size of the UTIL message is exponential in $|Sep_i|+|ID_i|$ and the theorem is concluded. ∎
As shown in Section 3.3, AsymDPOP suffers from serious scalability problems in both memory and computation. In this section, we propose two tradeoffs which make AsymDPOP a practical algorithm.
4.1 Table-Set Propagation Scheme: A Tradeoff between Memory and Privacy
The utility propagation phase of AsymDPOP could be problematic due to unacceptable memory consumption when the pseudo tree is poor. Consider the pseudo tree shown in Fig. 3. Since every agent is constrained with the root agent, according to GNLE all the variables can only be eliminated at the root agent, which incurs a memory consumption exponential in the number of agents due to the join operations performed at each agent. Besides, a large utility table would also incur unacceptable computation overheads due to the join operations and the elimination operation (line 5-7).
We notice that utility tables are divisible before performing eliminations. Thus, instead of propagating a joint high-dimension utility table to a parent, we propagate a set of small utility tables. In other words, we wish to avoid the unnecessary join operations (i.e., line 1 and line 7 when no elimination is performed) which could cause a dimension increase during the utility propagation phase. On the other hand, completely discarding join operations would incur privacy leakages. For example, if an agent chooses to propagate its private functions to its parent without performing any join operation, then the parent would directly know the agent's private functions. Thus, we propose to compromise between the memory consumption and the privacy by a parameter controlling the maximal number of dimensions of each local utility table. We refer to this tradeoff as the table-set propagation scheme (TSPS).
Specifically, when an agent sends a UTIL message to its parent, it first joins its private functions w.r.t. its (pseudo) parents with the received utility tables whose dimensions contain the dimensions of these private functions. Notice that this step does not incur a dimension increase, and can reduce the number of utility tables. Finally, it propagates the set of utility tables to its parent.
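The bookkeeping behind TSPS can be sketched as follows (the parameter name `max_dims` and the list-of-dimension-sets view are our own): a private function is folded into an existing table only if the result stays within the dimension limit; otherwise it starts a new table in the propagated set.

```python
def add_to_table_set(tables, func_dims, max_dims):
    """`tables` is a list of dimension sets, one per utility table in the
    set to be propagated; `func_dims` are the dimensions of a private
    function to be absorbed."""
    for i, dims in enumerate(tables):
        merged = dims | func_dims
        if len(merged) <= max_dims:   # join without breaching the limit
            tables[i] = merged
            return tables
    tables.append(set(func_dims))     # keep it as a separate small table
    return tables

tables = [{"x1", "x2"}]
add_to_table_set(tables, {"x1", "x2"}, max_dims=2)  # absorbed in place
add_to_table_set(tables, {"x1", "x3"}, max_dims=2)  # needs 3 dims: new table
```

A larger limit produces fewer, bigger tables (less privacy leakage, more memory); the limit of 2 shown here keeps every table binary.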
Consider the pseudo tree shown in Fig. 3 again. With a small dimension limit, an agent would propagate a set of low-dimension utility tables to its parent. Since no variable can be eliminated at a non-root agent, it is unnecessary to perform the join operation in line 7, and each agent simply passes the set of utility tables upward. It can be concluded that TSPS in the example requires much less space than GNLE. Formally, we have the following theorem.
Theorem 2. The size of each utility table of an agent in TSPS is exponential in at most the maximum of the dimension limit and the number of the agent's separators and interface descendants.
According to Theorem 1, the dimensions of each utility table received from a child are a subset of the agent's separators and interface descendants. Since TSPS omits the join operation in line 7 whenever possible, the received utility tables are propagated without a dimension increase, so their sizes are bounded accordingly.
Besides, since the local utility is partitioned into utility tables according to the dimension limit, the size of each local utility table is bounded by the limit as well.
Thus the theorem holds. ∎
4.2 Mini-batch Elimination Scheme: A Tradeoff between Time and Space
TSPS factorizes a big utility table into a set of smaller utility tables, which allows us to reduce the computational effort of eliminations by a decrease-and-conquer strategy. Taking Fig. 3 as an example, to perform the elimination, the root agent in GNLE has no choice but to optimize a big utility table over all the variables (line 7), which requires a number of operations exponential in the number of agents. Instead, combining with TSPS, we could exploit the structure of each small utility table by arranging the min operators among them to reduce the computational complexity. That is, instead of performing a single joint minimization over the join of all the utility tables, we push each min operator inward to the utility tables that actually contain the corresponding variable, which can be solved recursively from the innermost operator to the outermost one with a much lower overall complexity. In other words, we reduce the computational complexity by exploiting the independence among utility tables to avoid unnecessary traverses.
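The saving can be checked on a 3-variable chain (the cost tables below are invented for illustration): minimizing the fully joined table walks $O(d^3)$ entries, while pushing the inner min operator into the only table that mentions $x_3$ walks $O(d^2)$ entries and yields the same result.

```python
# Chain of binary tables f12(x1, x2) and f23(x2, x3), domain {0, 1}.
f12 = {(a, b): a + 2 * b for a in (0, 1) for b in (0, 1)}
f23 = {(b, c): 3 * b + (1 - c) for b in (0, 1) for c in (0, 1)}

# Joint minimization over the full space: d^3 cost evaluations.
joint = {a: min(f12[(a, b)] + f23[(b, c)]
                for b in (0, 1) for c in (0, 1))
         for a in (0, 1)}

# Nested minimization min_{x2}(f12 + min_{x3} f23): d^2 evaluations,
# since x3 appears only in f23 and can be eliminated there first.
msg = {b: min(f23[(b, c)] for c in (0, 1)) for b in (0, 1)}
nested = {a: min(f12[(a, b)] + msg[b] for b in (0, 1)) for a in (0, 1)}
```

Both dictionaries agree on every value of $x_1$, confirming that rearranging the min operators preserves optimality while avoiding the joint traversal.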
However, completely distributing the min operators into every single variable could incur high memory consumption, as a min operator may implicitly join several utility tables into a big and indivisible table. Although the problem can be alleviated by carefully arranging the min operators, it is not easy to find the optimal sequence of eliminations in practice. Consider the utility tables shown as a factor graph in Fig. 4, where square nodes represent utility tables, circle nodes represent variables, and the red circles represent the variables to be eliminated. Obviously, no matter how the elimination sequence is arranged, a 3-ary utility table must appear when eliminating either of the two variables alone. Instead, if we jointly optimize both of them, the maximal number of dimensions is 2.
We thus overcome the problem by introducing a parameter which specifies the minimal number of variables optimized in a min operator (i.e., the size of a batch), and refer to this tradeoff as the mini-batch elimination scheme (MBES). Specifically, when performing an elimination operation, we first divide the elimination variables into several groups whose variables share at least one common utility table. For each variable group, we divide the variables into batches whose sizes are at least the given batch size whenever possible. For each batch, we perform optimization on the functions that are related to the batch and replace these functions with the results. The process terminates when all the variable groups are exhausted.
Note that dividing variables into disjoint variable groups in the first step is crucial, since optimizing independent variables jointly is equivalent to optimizing them individually. Taking Fig. 4 for example, if we put two independent variables into one batch, a 4-ary utility table would still appear even though they are jointly optimized.
5 Experimental Results
We empirically evaluate the performance of AsymDPOP and the state-of-the-art complete algorithms for ADCOPs, including PT-ISABB, AsymPT-FB, SyncABB, ATWB and the vanilla DPOP with the PEAV formulation (PEAV_DPOP), in terms of the number of basic operations, network load and privacy leakage. For inference-based algorithms, we consider the maximal number of dimensions during the utility propagation phase as an additional metric. Specifically, we use NCLOs [Netzer et al.2012] to measure the hardware-independent runtime, in which the logic operations in inference-based algorithms are accesses to the utility tables while the ones in search-based algorithms are constraint checks. For the network load, we measure the size of the information exchanged during an execution. Finally, we use entropy [Brito et al.2009] to quantify the privacy loss [Litov and Meisels2017, Grinshpoun et al.2013]. For each experiment, we generate 50 random instances and report the medians as the results.
In our first experiment, we consider ADCOPs with a domain size of 3, a density of 0.25 and the agent number varying from 8 to 24. Fig. 5 presents the experimental results under different agent numbers. It can be concluded that, compared to the search-based solvers, our AsymDPOP algorithms exhibit a great superiority in terms of network load. That is due to the fact that search-based algorithms explicitly exhaust the search space by message-passing, which is quite expensive especially when the agent number is large. In contrast, our proposed algorithms incur little communication overhead since they follow an inference protocol and only require a linear number of messages. On the other hand, although PEAV_DPOP also uses the inference protocol, it still suffers from a severe scalability problem and can only solve problems with fewer than 12 agents. The phenomenon is due to the mirror variables introduced by the PEAV formulation: a UTIL message in PEAV_DPOP contains the dimensions of mirror variables, which significantly increases the memory consumption.
Fig. 6 presents the performance comparison under different densities. Specifically, we consider ADCOPs with 8 agents, a domain size of 8 and the density varying from 0.25 to 1. It can be concluded from Fig. 6(a) that AsymDPOP with TSPS (i.e., with a small dimension limit) incurs significantly less communication overhead when solving dense problems, which demonstrates the merit of avoiding unnecessary join operations. Besides, it is interesting to find that, compared to the one of AsymDPOP without TSPS (i.e., with an unbounded dimension limit), the network load of AsymDPOP with TSPS increases much more slowly. That is due to the fact that as the density increases, eliminations are more likely to happen at the top of a pseudo tree. On the other hand, since unnecessary join operations are avoided in TSPS, eliminations are the major source of the dimension increase. As a result, agents propagate low-dimension utility tables most of the time. For NCLOs, search-based algorithms like AsymPT-FB and PT-ISABB outperform the AsymDPOP algorithms when solving dense problems due to their effective pruning.
To demonstrate the effects of different batch sizes, we benchmark MBES with different batch sizes when combined with TSPS under different dimension limits on the configuration used in the first experiment. Specifically, we measure the maximal number of dimensions generated in the utility propagation phase, including the intermediate results of MBES. Fig. 7 presents the experimental results. It can be seen that AsymDPOP with a small batch size significantly reduces the NCLOs but produces larger intermediate results, which indicates the necessity of the tradeoff. Besides, the performance of MBES significantly degenerates when combined with TSPS under a larger dimension limit. That is because the utility tables contain more dimensions in that scenario, and a utility table would be traversed more frequently when performing eliminations.
In the last experiment, we consider the privacy loss of different algorithms when solving asymmetric MaxDCSPs with different tightness. In particular, we consider asymmetric MaxDCSPs with 10 agents, a domain size of 10, a density of 0.4 and the tightness varying from 0.1 to 0.8. Fig. 8 presents the results. It can be concluded from the figure that as the tightness grows, the search-based algorithms leak more privacy while the inference-based algorithms leak less privacy. That is due to the fact that search-based algorithms rely on a direct disclosure mechanism to aggregate the private costs. Thus, search-based algorithms leak more privacy when solving problems with high tightness as they need to traverse larger proportions of the search space. In contrast, inference-based algorithms accumulate utility through the pseudo tree, and an agent could infer the private costs of its (pseudo) child only when the utility table involving both of them is a binary table which is not a result of eliminations or contains zero entries. Thus, AsymDPOP with the smallest dimension limit leaks almost a half of the privacy. On the other hand, since the number of prohibited combinations grows as the tightness increases, AsymDPOP with a larger dimension limit incurs a much lower privacy loss when solving high-tightness problems.
In this paper we present AsymDPOP, the first complete inference algorithm for ADCOPs. The algorithm incorporates three ingredients: generalized non-local elimination which facilitates the aggregation of utility in an asymmetric environment, table-set propagation scheme which reduces the memory consumption and mini-batch elimination scheme which reduces the operations in the utility propagation phase. We theoretically show its complexity and our empirical evaluation demonstrates its superiorities over the state-of-the-art, as well as the vanilla DPOP with PEAV formulation.
We would like to thank the anonymous reviewers for their valuable comments and helpful suggestions. This work is supported by the Chongqing Research Program of Basic Research and Frontier Technology under Grant No.: cstc2017jcyjAX0030, the Fundamental Research Funds for the Central Universities under Grant No.: 2018CDXYJSJ0026, the National Natural Science Foundation of China under Grant No.: 51608070 and the Graduate Research and Innovation Foundation of Chongqing, China under Grant No.: CYS17023.
- [Brito et al.2009] Ismel Brito, Amnon Meisels, Pedro Meseguer, and Roie Zivan. Distributed constraint satisfaction with partially known constraints. Constraints, 14(2):199–234, 2009.
- [Burke et al.2007] David A Burke, Kenneth N Brown, Mustafa Dogru, and Ben Lowe. Supply chain coordination through distributed constraint optimization. In The 9th International Workshop on DCR, 2007.
- [Chen et al.2018] Ziyu Chen, Yanchen Deng, Tengfei Wu, and Zhongshi He. A class of iterative refined max-sum algorithms via non-consecutive value propagation strategies. Autonomous Agents and Multi-Agent Systems, 32(6):822–860, 2018.
- [Dechter1999] Rina Dechter. Bucket elimination: A unifying framework for reasoning. Artificial Intelligence, 113(1-2):41–85, 1999.
- [Deng et al.2019] Yanchen Deng, Ziyu Chen, Dingding Chen, Xingqiong Jiang, and Qiang Li. PT-ISABB: A hybrid tree-based complete algorithm to solve asymmetric distributed constraint optimization problems. In Proceedings of 18th International Conference on Autonomous Agents and Multiagent Systems, pages 1506–1514, 2019.
- [Fioretto et al.2016] Ferdinando Fioretto, William Yeoh, and Enrico Pontelli. A dynamic programming-based MCMC framework for solving DCOPs with GPUs. In Proceedings of 22nd International Conference on Principles and Practice of Constraint Programming, pages 813–831, 2016.
- [Fioretto et al.2017] Ferdinando Fioretto, William Yeoh, Enrico Pontelli, Ye Ma, and Satishkumar J Ranade. A distributed constraint optimization (DCOP) approach to the economic dispatch with demand response. In Proceedings of the 16th International Conference on Autonomous Agents and MultiAgent Systems, pages 999–1007, 2017.
- [Fioretto et al.2018] Ferdinando Fioretto, Enrico Pontelli, and William Yeoh. Distributed constraint optimization problems and applications: A survey. Journal of Artificial Intelligence Research, 61:623–698, 2018.
- [Freuder and Quinn1985] Eugene C. Freuder and Michael J. Quinn. Taking advantage of stable sets of variables in constraint satisfaction problems. In Proceedings of the 9th International Joint Conference on Artificial Intelligence, pages 1076–1078, 1985.
- [Gershman et al.2009] Amir Gershman, Amnon Meisels, and Roie Zivan. Asynchronous forward bounding for distributed COPs. Journal of Artificial Intelligence Research, 34:61–88, 2009.
- [Grinshpoun et al.2013] Tal Grinshpoun, Alon Grubshtein, Roie Zivan, Arnon Netzer, and Amnon Meisels. Asymmetric distributed constraint optimization problems. Journal of Artificial Intelligence Research, 47:613–647, 2013.
- [Hirayama and Yokoo1997] Katsutoshi Hirayama and Makoto Yokoo. Distributed partial constraint satisfaction problem. In International Conference on Principles and Practice of Constraint Programming, pages 222–236, 1997.
- [Li et al.2016] Shijie Li, Rudy R. Negenborn, and Gabriël Lodewijks. Distributed constraint optimization for addressing vessel rotation planning problems. Engineering Applications of Artificial Intelligence, 48:159–172, 2016.
- [Litov and Meisels2017] Omer Litov and Amnon Meisels. Forward bounding on pseudo-trees for DCOPs and ADCOPs. Artificial Intelligence, 252:83–99, 2017.
- [Maheswaran et al.2004a] Rajiv T Maheswaran, Jonathan P Pearce, and Milind Tambe. Distributed algorithms for DCOP: A Graphical-Game-Based approach. In Proceedings of ISCA PDCS, pages 432–439, 2004.
- [Maheswaran et al.2004b] Rajiv T. Maheswaran, Milind Tambe, Emma Bowring, Jonathan P. Pearce, and Pradeep Varakantham. Taking DCOP to the real world: Efficient complete solutions for distributed multi-event scheduling. In Proceedings of the 3rd International Conference on Autonomous Agents and Multiagent Systems, pages 310–317, 2004.
- [Modi et al.2005] Pragnesh Jay Modi, Wei-Min Shen, Milind Tambe, and Makoto Yokoo. Adopt: asynchronous distributed constraint optimization with quality guarantees. Artificial Intelligence, 161(1-2):149–180, 2005.
- [Monteiro et al.2012] Tânia L Monteiro, Guy Pujolle, Marcelo E Pellenz, Manoel C Penna, and Richard Demo Souza. A multi-agent approach to optimal channel assignment in WLANs. In Wireless Communications and Networking Conference, pages 2637–2642, 2012.
- [Netzer et al.2012] Arnon Netzer, Alon Grubshtein, and Amnon Meisels. Concurrent forward bounding for distributed constraint optimization problems. Artificial Intelligence, 193:186–216, 2012.
- [Okamoto et al.2016] Steven Okamoto, Roie Zivan, and Aviv Nahon. Distributed breakout: Beyond satisfaction. In Proceedings of the 25th International Joint Conference on Artificial Intelligence, pages 447–453, 2016.
- [Ottens et al.2017] Brammert Ottens, Christos Dimitrakakis, and Boi Faltings. DUCT: An upper confidence bound approach to distributed constraint optimization problems. ACM Transactions on Intelligent Systems and Technology, 8(5):69, 2017.
- [Petcu and Faltings2005a] Adrian Petcu and Boi Faltings. Approximations in distributed optimization. In International Conference on Principles and Practice of Constraint Programming, pages 802–806, 2005.
- [Petcu and Faltings2005b] Adrian Petcu and Boi Faltings. A scalable method for multiagent constraint optimization. In Proceedings of the 19th International Joint Conference on Artificial Intelligence, pages 266–271, 2005.
- [Petcu and Faltings2006] Adrian Petcu and Boi Faltings. ODPOP: An algorithm for open/distributed constraint optimization. In Proceedings of the 21st National Conference on Artificial Intelligence, pages 703–708, 2006.
- [Petcu and Faltings2007] Adrian Petcu and Boi Faltings. MB-DPOP: A new memory-bounded algorithm for distributed optimization. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, pages 1452–1457, 2007.
- [Ramchurn et al.2011] Sarvapali D Ramchurn, Perukrishnen Vytelingum, Alex Rogers, and Nick Jennings. Agent-based control for decentralised demand side management in the smart grid. In The 10th International Conference on Autonomous Agents and Multiagent Systems, pages 5–12, 2011.
- [Rogers et al.2011] Alex Rogers, Alessandro Farinelli, Ruben Stranders, and Nicholas R Jennings. Bounded approximate decentralised coordination via the max-sum algorithm. Artificial Intelligence, 175(2):730–759, 2011.
- [Vinyals et al.2011] Meritxell Vinyals, Juan A Rodriguez-Aguilar, and Jesús Cerquides. Constructing a unifying theory of dynamic programming DCOP algorithms via the generalized distributive law. Autonomous Agents and Multi-Agent Systems, 22(3):439–464, 2011.
- [Yeoh et al.2010] William Yeoh, Ariel Felner, and Sven Koenig. BnB-ADOPT: An asynchronous branch-and-bound DCOP algorithm. Journal of Artificial Intelligence Research, 38:85–133, 2010.
- [Zhang et al.2005] Weixiong Zhang, Guandong Wang, Zhao Xing, and Lars Wittenburg. Distributed stochastic search and distributed breakout: properties, comparison and applications to constraint optimization problems in sensor networks. Artificial Intelligence, 161(1-2):55–87, 2005.
- [Zivan and Peled2012] Roie Zivan and Hilla Peled. Max/min-sum distributed constraint optimization through value propagation on an alternating DAG. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems, pages 265–272, 2012.
Appendix A AsymDPOP
A.1 An Example for AsymDPOP
Fig.2 presents a pseudo tree. For better understanding, we take one agent as an example to explain the concepts in a pseudo tree. Since is the only ancestor constrained with via a tree edge, we have , and . Similarly, since and are the descendants constrained with via tree edges, we have , , . Particularly, since is constrained with , we have .
AsymDPOP begins with the leaf agents ( and ) sending their utility tables ( and ) to their parent , where and .
When receives the UTIL message from its child (assume ’s UTIL message has arrived earlier), it stores the received utility table (i.e., ), and joins its private function to update the table (i.e., ). According to the definition, the set contains all the variables whose highest (pseudo) parent in branch is the current agent. Here, is given by
Thus, eliminates variable from the utility table . After that, joins the eliminated result to the running utility table ( has been initialized to ). Similarly, upon receipt of the UTIL message from , saves the received utility table (i.e., ), then updates the utility table by joining the private function (i.e., ). Since , joins to the running utility table directly without any elimination. Since has received all the UTIL messages from its children, it propagates the utility table to its parent . Here, we have
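Conceptually, each UTIL step above is a join of cost tables followed by a min-elimination of the variables whose highest (pseudo) parent is the current agent. The following is a minimal sketch of these two operations, modeling a utility table as a list of variable names plus a NumPy array (function and variable names here are illustrative, not the paper's notation):

```python
import numpy as np

def align(vs, u, out_vars):
    """Reorder/expand table u over variables vs to the axis order out_vars."""
    order = [vs.index(v) for v in out_vars if v in vs]  # permute existing axes
    u = np.transpose(u, order)
    present = [v for v in out_vars if v in vs]
    # Insert singleton axes for absent variables so addition broadcasts.
    shape = [u.shape[present.index(v)] if v in vs else 1 for v in out_vars]
    return u.reshape(shape)

def join(t1, t2):
    """Join two utility tables by summing costs over the union of their variables."""
    (vs1, u1), (vs2, u2) = t1, t2
    out_vars = vs1 + [v for v in vs2 if v not in vs1]
    return out_vars, align(vs1, u1, out_vars) + align(vs2, u2, out_vars)

def eliminate(t, var):
    """Eliminate var by minimizing it out (cost-minimization convention)."""
    vs, u = t
    ax = vs.index(var)
    return [v for v in vs if v != var], u.min(axis=ax)
```

For instance, joining a unary table over x with a binary table over (x, y) and then eliminating y yields, for each value of x, the best achievable cost over y.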
When receives the UTIL message from , it saves the received utility table (i.e., ) and joins its private functions and to update (i.e., ). After eliminating and (), joins the eliminated result into its running utility table ( has been initialized to ). Here, we have
Since is the root agent and receives all the UTIL messages from its children, it selects the optimal assignment for itself and the optimal assignments and for the eliminated variables and . That is,
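The root's decision step amounts to an arg-min over the fully aggregated utility table, which simultaneously fixes the root's own value and the values of the variables eliminated at the root. A hedged sketch (the function name and the variable-to-index encoding are our assumptions):

```python
import numpy as np

def choose_optimal(vs, u):
    """Return the joint assignment (variable name -> domain index) minimizing
    the aggregated utility table u, whose axes are named by vs."""
    idx = np.unravel_index(int(np.argmin(u)), u.shape)
    return dict(zip(vs, (int(i) for i in idx)))
```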
Then propagates a VALUE message including the optimal assignment to its child , where the optimal assignment is .
When receiving the VALUE message from , assigns the value for itself. Since and , selects the optimal assignment for by performing optimization over with the determined assignment of , that is
Then propagates the optimal assignment and to and , respectively. Once and receive the VALUE messages, they assign for themselves. Since all leaf agents have received VALUE messages, the algorithm terminates.
Appendix B AsymDPOP with TSPS and MBES
B.1 Pseudo Code for AsymDPOP with TSPS and MBES
Fig.11 and Fig.12 present the sketch of AsymDPOP with TSPS and MBES, which consists of two phases: the utility set propagation phase and the value propagation phase. Different from the utility propagation phase in AsymDPOP, the utility set propagation phase applies TSPS to trade off between memory and privacy by propagating multiple small utility tables, and utilizes MBES to trade off between time and space via sequential optimizations.
The utility set propagation phase begins with leaf agents sending their utility tables to their parents via UTILSET messages (line 2-3). Note that the utility tables are obtained from the Function PartitionF. That is, leaf agents partition their local utilities into multiple smaller utility tables according to (line 32-41). If there are any residual functions, adds them to the utility tables (line 42-43).
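Since the pseudocode of PartitionF is not reproduced here, the following is only a greedy sketch of the idea: group local cost functions so that the joined table of each group stays within a dimension limit (we call the limit `k`; the function and parameter names are our assumptions):

```python
def partition_functions(functions, k):
    """Greedy sketch of PartitionF: group constraint functions so that the
    scope (set of variables) of each group has at most k dimensions.
    functions: name -> scope (set of variable names); k: dimension limit."""
    groups = []  # each entry: (joint scope, list of member function names)
    for name, scope in functions.items():
        for joint, members in groups:
            if len(joint | scope) <= k:  # fits without exceeding the bound
                joint |= scope           # in-place union keeps the tuple valid
                members.append(name)
                break
        else:                            # no group can absorb it: open a new one
            groups.append((set(scope), [name]))
    return groups
```

Each resulting group is then joined into one small utility table, so an agent propagates several low-dimensional tables instead of a single high-dimensional one.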
When receiving the UTILSET message from its child , stores the received utility tables (i.e., ), then joins its private functions w.r.t. its (pseudo) children in branch into the utility tables (line 4-7). Note that joining its private functions w.r.t. its children does not increase the number of dimensions and should be applied only to the related utility tables. Then the Function Elimination is used to implement sequential eliminations on the utility tables with MBES (line 8, 26-30). In the Function Elimination, the variables in are first divided into several groups () such that the variables in each group share at least one common utility table (line 26). Then, for the variables in each group, partitions them into several sets (or batches) with the Function PartitionEV according to (line 27, 49-58). After obtaining the eliminated variable sets (), traverses the sets to optimize the utility tables () (line 28-30). In detail, for each set, MBES performs an optimization over the utility tables related to the eliminated variables in the set and replaces these utility tables with the optimized utility table.
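The batch-wise optimization of MBES can be illustrated on a single table: instead of eliminating all variables in one pass, they are minimized out in batches of bounded size, one optimization per batch. A hedged sketch (the batch bound `b` and the function name are our assumptions):

```python
import numpy as np

def mbes_eliminate(vs, u, elim_vars, b):
    """Minimize out elim_vars from table u (axes named by vs) in batches of
    at most b variables at a time, performing one optimization per batch."""
    vs = list(vs)
    for i in range(0, len(elim_vars), b):
        batch = elim_vars[i:i + b]
        axes = tuple(vs.index(v) for v in batch)
        u = u.min(axis=axes)                    # optimize this batch away
        vs = [v for v in vs if v not in batch]  # drop the eliminated axes
    return vs, u
```

With `b = 1` this degenerates to eliminating the variables one by one, trading extra sequential passes for never materializing a larger intermediate table.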
When receiving all the UTILSET messages from its children, adds its local utility tables computed by the Function PartitionF into the optimized utility tables, then sends these utility tables to its parent if it is not the root agent (line 14-17). Otherwise, the value propagation phase starts (line 11-12).
The value propagation phase is roughly the same as the one in AsymDPOP. The root agent selects the optimal assignments for itself and the eliminated variables () belonging to each branch . Since the utility set propagation uses TSPS, there may be more than one utility table. Therefore, before selecting the optimal assignments for branch (line 23), joins all the tables in with the assignments received from its parent (line 18) or computed locally (line 11-12). After that, sends a VALUE message to its child (line 25). The algorithm terminates when each leaf agent receives a VALUE message.
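Fixing the assignments received from the parent before the branch arg-min amounts to slicing the joined table at the given values. A minimal sketch of this conditioning step (names and the index-based value encoding are our assumptions):

```python
import numpy as np

def condition(vs, u, assignment):
    """Fix the variables in assignment (name -> domain index) by slicing the
    utility table u; returns the remaining free variables and the sub-table."""
    index = tuple(assignment.get(v, slice(None)) for v in vs)
    free = [v for v in vs if v not in assignment]
    return free, u[index]
```

The agent then takes an arg-min of the conditioned sub-table over the free variables of the branch.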
B.2 An Example of AsymDPOP with TSPS and MBES
For the convenience of further explanation, we denote the joint utility table as a function of three variables and . The indices of the function are sorted in alphabetical order.
We take Fig.13 as an example to demonstrate Algorithm 3, and suppose that and . The algorithm begins with the leaf agent sending a UTILSET message with a utility table to its parent . Here, is computed by the Function PartitionF. Specifically, groups its private functions related to into a function set , since the dimension of its local utility table is three, which satisfies . Thus, obtains a 3-ary utility table by joining the functions in the function set (i.e., ).
When receives the UTILSET message from its child , it saves the utility table () and updates it by joining its private functions and (i.e., ). Since is not the highest (pseudo) parent of any of its descendants (), it does not need to perform elimination. Furthermore, since has received all the UTILSET messages from its children, it sends the utility tables to its parent . Here, the utility table is computed by the Function PartitionF through dealing with the residual function .
Upon receipt of the UTILSET message from , similarly, saves the utility tables and updates them by joining its private functions (i.e., ). Since is not the highest (pseudo) parent of or any other descendants, there is no elimination at . Because the residual functions and cannot be joined into the utility tables without increasing the number of dimensions, handles them with the Function PartitionF and gets a utility table . After that, it sends the utility tables to its parent .
Once receives the UTILSET message from , it also saves the utility tables first, and joins the relevant private functions into the corresponding utility tables (i.e., ). Since is the highest (pseudo) parent of and , that is , the eliminations are performed by the Function Elimination. In the Function Elimination, first partitions the eliminated variables and into a group, as they share a common utility table . Since , divides these variables into two batches and eliminates them from the utility tables one by one. After eliminating (assume is eliminated first), gets the utility tables . Then obtains the new utility tables by eliminating (i.e., ). After that, updates the utility table by a join operation over the utility tables (i.e., ), and sends the updated utility table to its parent .
When receives the UTILSET message from , it saves the utility table, and then joins its private functions and into the utility table (i.e., ). Since and the variables and are both related to the utility table , they are partitioned into a single group. But since , still eliminates the variables and from the utility table one at a time, and obtains the eliminated utility table . Since it is the root agent and has received all the UTILSET messages from its children, chooses the optimal value for itself based on the utility table .
After that, the value propagation phase starts. selects the optimal values and for and , then propagates the assignment to its child . When receiving the VALUE message from , selects the optimal assignments for and , and sends the assignment to . Upon receipt of the VALUE message from , assigns for itself and sends the assignments received from to its child , which performs just like . The algorithm terminates when the leaf agent receives the VALUE message and assigns for itself.
Appendix C Experiment Results
C.1 The Experimental Configuration
C.1.1 The Experiment with Different Agent Numbers
Problem type: Random ADCOPs
Agent numbers: [8, 24]
Domain size: 3
C.1.2 The Experiment with Different Densities
Problem type: Random ADCOPs
Agent numbers: 8
Density: [0.25, 1.0]
Domain size: 8
C.1.3 The Experiment with Different Domain Sizes
Problem type: Random ADCOPs
Agent numbers: 8
Domain size: [4, 14]
C.1.4 The Experiment with Different Tightness
Problem type: ADCSPs
Agent numbers: 10
Domain size: 10
Tightness: [0.1, 0.8]