1. Introduction
Distributed constraint optimization problems (DCOPs) are a fundamental framework for coordinated and cooperative multiagent systems. DCOPs have been successfully applied to model many real-world problems including sensor networks Zhang et al. (2005), meeting scheduling Sultanik et al. (2007), the smart grid Fioretto et al. (2017), etc.
Incomplete algorithms for DCOPs Maheswaran et al. (2004); Zhang et al. (2005); Pearce and Tambe (2007); Farinelli et al. (2008); Nguyen et al. (2019); Chen et al. (2018) aim to find a good solution with acceptable overhead, while complete algorithms focus on finding the optimal one by employing either search or inference to systematically explore the entire solution space. SBB Hirayama and Yokoo (1997), AFB Gershman et al. (2009), PT-FB Litov and Meisels (2017), ADOPT Modi et al. (2005) and its variants Yeoh et al. (2009a, b, 2010); Gutierrez et al. (2011); Gutierrez and Meseguer (2012) are typical search-based algorithms that employ distributed backtracking search to exhaust the search space. However, these algorithms incur a prohibitively large number of messages and can only solve problems with a handful of variables.
On the other hand, inference-based complete algorithms like DPOP Petcu and Faltings (2005b) perform dynamic programming on a pseudo tree and only require a linear number of messages. However, the memory consumption of DPOP is exponential in the induced width Dechter et al. (2003), which makes it inapplicable to memory-limited scenarios where the optimal solution is desired Duan et al. (2018); Li et al. (2016). Therefore, a number of algorithms Petcu and Faltings (2005a, 2007a, 2007b, 2006); Brito and Meseguer (2010) were proposed to trade either solution quality or message number for smaller memory consumption. Among these algorithms, MB-DPOP Petcu and Faltings (2007b) iteratively performs memory-bounded utility propagation to guarantee optimality. Specifically, given the dimension limit k, the algorithm first identifies high-width areas (clusters) and cycle-cut nodes Dechter et al. (2003) to make the maximal dimension of the utility tables propagated within clusters no greater than k. For each cluster, the cluster root is responsible for iteratively enumerating all the instantiations of its cycle-cut nodes, and nodes in the cluster perform memory-bounded inferences by conditioning utility tables on these instantiations. Once the instantiations are exhausted, the cluster root propagates the resulting utility table to its parent.
However, a key limitation of MB-DPOP is its inability to exploit the structure of a problem. As a result, MB-DPOP suffers from severe redundancy in memory-bounded inference. First, since each cluster root enumerates instantiations for all its cycle-cut nodes without considering the independence of cycle-cut nodes in different branches, each branch in a cluster has to perform redundant inferences whenever there are cycle-cut nodes unrelated to that branch. Also, these non-concurrent instantiations severely degrade the parallelism among branches. Second, agents in a cluster use heuristics to determine cycle-cut nodes locally, which can result in a large number of cycle-cut nodes. Finally, MB-DPOP ignores the validity of previous inference results, so a branch has to perform inference even if those results are compatible with the current instantiations.
In this paper, we aim to improve the scalability of MB-DPOP by exploiting the structure of a problem. More specifically, our contributions are as follows.

By exploiting the independence among the cycle-cut nodes in different branches, we propose a distributed enumeration mechanism where a cluster root only enumerates the cycle-cut nodes in its separators, and these instantiations are augmented with branch-specific cycle-cut nodes dynamically along the propagation. Accordingly, each branch can perform memory-bounded inferences asynchronously and the number of non-concurrent instantiations can be reduced significantly.

We propose an iterative selection mechanism to determine cycle-cut nodes by taking both their effectiveness and their positions in a pseudo tree into consideration. Concretely, rather than choosing the highest/lowest separators as cycle-cut nodes, we tend to choose the nodes that cover a maximal number of active nodes in a cluster and break ties according to their relative positions. Moreover, we propose a caching mechanism that exploits the historical inference results compatible with the current instantiations to further avoid unnecessary utility propagation.

We theoretically show that the message number of our algorithm is no more than that of MB-DPOP. Our empirical evaluation confirms the superiority of our algorithm over the state-of-the-art on various benchmarks.
The rest of this paper is organized as follows. Section 2 gives the background, including DCOPs, pseudo trees, DPOP and MB-DPOP. Section 3 gives the motivation and describes the details of our proposed algorithm. The experiments are presented in Section 4. Section 5 concludes this paper and discusses future research.
2. Background
In this section, we introduce the preliminaries, including DCOPs, pseudo trees, DPOP and MB-DPOP.
2.1. Distributed Constraint Optimization Problems
A distributed constraint optimization problem Petcu and Faltings (2005b) can be represented by a tuple ⟨A, X, D, F⟩ where

A = {a_1, ..., a_n} is a set of agents;

X = {x_1, ..., x_m} is a set of variables;

D = {D_1, ..., D_m} is a set of domains that are finite and discrete, each variable x_i taking a value assignment from D_i;

F = {f_1, ..., f_q} is a set of constraint functions, each function denoting the non-negative cost for each assignment combination of the variables in its scope.
For the sake of simplicity, we assume that each agent controls a single variable and all constraint functions are binary (i.e., f_ij: D_i × D_j → R≥0). Accordingly, the terms "agent" and "variable" can be used interchangeably. A solution to a DCOP is an assignment to all variables that minimizes the total cost. That is,

X* = argmin_X Σ_{f_ij ∈ F} f_ij(x_i, x_j).
A DCOP can be visualized by a constraint graph, as shown in Fig. 1, where the nodes represent the agents and the edges represent the constraints.
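To make the definition concrete, the following sketch evaluates the cost of an assignment and finds the optimum by brute force. The three-variable problem, its variable names, and its costs are illustrative assumptions, not the problem of Fig. 1:

```python
from itertools import product

# Hypothetical DCOP: three variables with domain {0, 1} and two binary cost functions.
domains = {"x1": [0, 1], "x2": [0, 1], "x3": [0, 1]}
constraints = {
    ("x1", "x2"): lambda a, b: 1 if a == b else 3,
    ("x2", "x3"): lambda a, b: 2 * a + b,
}

def cost(assignment):
    """Total non-negative cost of a complete assignment."""
    return sum(f(assignment[i], assignment[j])
               for (i, j), f in constraints.items())

# Brute-force optimum: the argmin over all assignment combinations.
names = list(domains)
best = min((dict(zip(names, values)) for values in product(*domains.values())),
           key=cost)
print(best, cost(best))
```

Of course, this centralized enumeration is exponential in the number of variables; the point of the algorithms below is to organize this computation among distributed agents.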
2.2. Pseudo Tree
A pseudo tree Freuder and Quinn (1985) is a partial ordering among agents and can be generated by a depth-first search (DFS) traversal of a constraint graph, where different branches are independent of each other. Given a constraint graph and its spanning tree, the edges in the spanning tree are tree edges and the other edges are pseudo edges (i.e., non-tree edges). According to the relative positions in a pseudo tree, the neighbors of an agent a_i connected by tree edges are categorized into its parent P(a_i) and its children C(a_i), while the ones connected by pseudo edges are denoted as pseudo parents PP(a_i) and pseudo children PC(a_i). We denote its parent together with its pseudo parents as AP(a_i), and its descendants as Desc(a_i). Finally, the separators Sep_i Petcu and Faltings (2006) of a_i refer to the ancestors which are constrained with a_i or its descendants. Fig. 2 gives a pseudo tree derived from the constraint graph in Fig. 1, where tree edges and pseudo edges are denoted by solid and dotted lines, respectively.
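The construction above can be sketched as a plain DFS that classifies edges. The sketch is sequential for clarity (a distributed DFS would pass tokens between agents), and the adjacency map is an illustrative graph, not the one in Fig. 1:

```python
# Illustrative constraint graph as an adjacency map (not the graph of Fig. 1).
graph = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}

def build_pseudo_tree(graph, root):
    """DFS traversal: returns the parent map (tree edges) and the pseudo edges."""
    parent, pseudo, visited = {root: None}, set(), set()

    def dfs(node):
        visited.add(node)
        for nbr in sorted(graph[node]):
            if nbr not in visited:
                parent[nbr] = node                    # tree edge
                dfs(nbr)
            elif parent.get(node) != nbr:
                pseudo.add(frozenset((node, nbr)))    # back edge -> pseudo edge

    dfs(root)
    return parent, pseudo

parent, pseudo = build_pseudo_tree(graph, 0)
print(parent, pseudo)
```

Note how the back edge between nodes 0 and 2 becomes a pseudo edge: both endpoints end up on the same branch, which is exactly why branches of a pseudo tree are mutually independent.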
2.3. DPOP
DPOP Petcu and Faltings (2005b) is an inference-based complete algorithm for DCOPs, which implements bucket elimination in a distributed manner Dechter (1999). It performs two phases of propagation on a pseudo tree via tree edges: a UTIL propagation phase that eliminates dimensions from the bottom up, and a VALUE propagation phase that assigns the optimal value to each variable in the reverse, top-down direction. More specifically, in the UTIL propagation phase, an agent a_i collects the utility tables from its children and joins them with its local utility table, then computes the optimal utilities for all possible assignments of its separator to eliminate its own dimension from the joint utility table. Then, it sends the projected utility table to its parent. In the VALUE propagation phase, once a_i receives the assignments from its parent, it plugs them into the joint utility table obtained in the UTIL propagation phase to get its optimal assignment, and sends the extended joint assignment to its children. Although DPOP only requires a linear number of messages to solve a DCOP, its space complexity is exponential in the induced width of the pseudo tree.
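The core UTIL step, joining utility tables and projecting out the agent's own dimension, can be sketched as follows. Tables are modeled as a (dimensions, dict) pair; the binary domain, table contents, and variable names are illustrative assumptions, and this is a single-agent step rather than the full message-passing protocol:

```python
from itertools import product

DOMAIN = (0, 1)  # illustrative binary domains

def join(t1, t2):
    """Join two utility tables by summing costs on agreeing assignments."""
    dims1, data1 = t1
    dims2, data2 = t2
    dims = tuple(sorted(set(dims1) | set(dims2)))
    data = {}
    for values in product(DOMAIN, repeat=len(dims)):
        asg = dict(zip(dims, values))
        data[values] = (data1[tuple(asg[d] for d in dims1)]
                        + data2[tuple(asg[d] for d in dims2)])
    return dims, data

def project_out(table, dim):
    """Eliminate `dim` by keeping the minimal cost over its values."""
    dims, data = table
    keep = tuple(d for d in dims if d != dim)
    out = {}
    for values, cost in data.items():
        key = tuple(v for d, v in zip(dims, values) if d != dim)
        out[key] = min(out.get(key, float("inf")), cost)
    return keep, out

# Agent x2 joins its constraint with x1 and a child's table over (x2, x3),
# then eliminates its own dimension before sending the result to its parent x1.
f12 = (("x1", "x2"), {(a, b): 1 if a == b else 3
                      for a, b in product(DOMAIN, repeat=2)})
f23 = (("x2", "x3"), {(a, b): 2 * a + b
                      for a, b in product(DOMAIN, repeat=2)})
util_to_parent = project_out(join(f12, f23), "x2")
print(util_to_parent)
```

The projected table carries the dimensions of x2's separator; its size, exponential in the separator size, is exactly where DPOP's memory blow-up comes from.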
2.4. MB-DPOP
MB-DPOP Petcu and Faltings (2007b) attempts to improve the scalability of DPOP by trading message number for smaller memory consumption. Given the dimension limit k, MB-DPOP starts with a labeling phase to identify the areas with induced width Dechter et al. (2003) higher than k (i.e., clusters) and the corresponding cycle-cut (CC) nodes. Each cluster is bounded at the top by the lowest node in the tree that has separators of size k or less; such a node is called the cluster root (CR) node. For each cluster, CC nodes are determined such that the cluster has width no greater than k once they are removed.
In more detail, the CC nodes are selected and then aggregated in a bottom-up fashion. That is, given the lists of CC nodes selected by its children, an agent first determines whether its width exceeds k. If so, it needs to choose additional CC nodes via a heuristic function to enforce the memory limit. Then, it propagates all the CC nodes to its parent. Otherwise, if its width does not exceed k and the lists received from its children are all empty, the agent labels itself as a normal node and propagates utility as in DPOP.
During the UTIL propagation phase, normal nodes (i.e., the nodes whose width is no greater than k) perform canonical utility propagation, while the other nodes in each cluster perform memory-bounded inferences. Specifically, each cluster root (CR) enumerates instantiations of its CC nodes and propagates them iteratively to the other nodes in the cluster, and these nodes perform memory-bounded inferences by conditioning their utility tables on the received instantiation. The cluster root eliminates its dimension and propagates the utility table to its parent after exhausting all the combinations. Finally, a VALUE propagation phase starts. Different from DPOP, which only requires a single round of value propagation, MB-DPOP requires additional utility propagation to re-derive the utilities corresponding to the assignments of CC nodes in order to get the optimal values, since non-CC nodes in a cluster only cache the utility table for the latest instantiation of CC nodes.
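Conditioning is what keeps the tables within the memory bound: the dimensions of instantiated CC nodes are fixed to their current values instead of being carried along. A minimal sketch (the table layout, dimension names, and costs are illustrative assumptions):

```python
def condition(dims, data, instantiation):
    """Slice a utility table: fix the dimensions assigned in `instantiation`,
    keeping only the entries consistent with it and dropping those dimensions."""
    keep = tuple(d for d in dims if d not in instantiation)
    out = {}
    for values, cost in data.items():
        asg = dict(zip(dims, values))
        if all(asg[d] == v for d, v in instantiation.items() if d in asg):
            out[tuple(asg[d] for d in keep)] = cost
    return keep, out

# A 3-dimensional table conditioned on the CC-node assignment {x3: 1}
dims = ("x1", "x2", "x3")
data = {(a, b, c): a + 2 * b + 4 * c
        for a in (0, 1) for b in (0, 1) for c in (0, 1)}
conditioned = condition(dims, data, {"x3": 1})
print(conditioned)
```

The conditioned table has one dimension fewer; conditioning on enough CC nodes keeps every propagated table within the k-dimension budget at the price of repeating the propagation per instantiation.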
3. Proposed Method
In this section, we present our proposed RMB-DPOP. We begin with the motivation, and then present the details and the theoretical analysis of our algorithm.
3.1. Motivation
As stated earlier, MB-DPOP suffers from many redundancies in memory-bounded inference due to its inability to exploit the problem structure in both instantiation enumeration and the selection of CC nodes. Consider the problem in Fig. 2, which contains a single cluster given the dimension limit. Since each agent in the cluster selects its CC nodes only with local knowledge, MB-DPOP would select a large number of CC nodes and significantly increase the number of instantiations. In fact, choosing the CC nodes at the highest level already yields several CC nodes, and it is even worse with the lowest heuristic, which results in 9 CC nodes in this case. Alternatively, a smaller, well-chosen set of CC nodes could still guarantee the memory budget. Besides, the cluster root has to enumerate all instantiations of the selected CC nodes, which results in a large number of non-concurrent instantiations and redundant inferences. In fact, we could exploit the independence between different branches by generating instantiations that only contain the CC nodes common to them. In this way, the branches can operate asynchronously and the number of non-concurrent instantiations is significantly reduced. In addition, all the bounded inference results are disposable in MB-DPOP, which also leads to redundant inferences. In fact, some inference results received from children in previous iterations are compatible with the current instantiation, since each branch performs memory-bounded inference by conditioning on only a subset of all the cycle-cut nodes of a cluster. Thus, it is unnecessary to perform a memory-bounded inference when the assignments of the corresponding CC nodes do not change.
Therefore, to take the structure of a problem into consideration, we propose a novel algorithm named RMB-DPOP which incorporates a distributed enumeration mechanism to reduce the non-concurrent instantiations, an iterative selection mechanism to reduce the number of CC nodes, and a caching mechanism to avoid unnecessary inferences. Algorithm 1 presents the sketch of RMB-DPOP. (We omit the details of the value propagation phase due to its similarity to the one in MB-DPOP. The source code is available at https://github.com/czy920/RMBDPOP.)
3.2. Distributed Enumeration Mechanism
The distributed enumeration mechanism (DEM) is adopted in each cluster to perform asynchronous memory-bounded inference by factorizing the instantiations. More specifically, since different branches in a pseudo tree are independent, each CC node inside a cluster is only related to a subproblem. Hence, instead of having the cluster root enumerate all the instantiations of CC nodes, we only generate instantiations for the CC nodes in the separators of the cluster root and dynamically augment these instantiations with branch-specific CC nodes. In the following, we present the details of the mechanism.
When the Labeling phase finishes, a CR node starts the iterative memory-bounded UTIL propagation by instantiating, for each branch, the CC nodes of that branch that lie in its separator (line 25). When a CC node receives an instantiation from its parent, it augments the instantiation with the first assignment from its own domain and propagates the extended instantiation to its children in the cluster (lines 22-24, 46-50). Once it has received all the utilities from its children, it updates the cache, replaces its own assignment with the next value in its domain, and then propagates the new instantiation (lines 29-33). Once a traversal of its domain yields a complete bounded inference result for the received instantiation, it sends the result to its parent via a BOUNDED_UTIL message (line 35).
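The division of labor between the cluster root and the branch CC nodes can be sketched with two generators. The domain, the separator CC node x2, and the branch CC node x5 are illustrative assumptions:

```python
from itertools import product

DOMAIN = (0, 1, 2)  # illustrative domain size 3

def root_instantiations(sep_cc):
    """Cluster root: enumerate only the CC nodes in its own separator."""
    for values in product(DOMAIN, repeat=len(sep_cc)):
        yield dict(zip(sep_cc, values))

def augment(inst, branch_cc):
    """A branch CC node extends the received instantiation with its own value."""
    for v in DOMAIN:
        yield {**inst, branch_cc: v}

# Root enumerates {x2}; branch CC node x5 augments each instantiation locally.
insts = list(root_instantiations(["x2"]))
count = sum(1 for inst in insts for full in augment(inst, "x5"))
print(len(insts), count)
```

Only the 3 root-level instantiations are non-concurrent across branches; the remaining enumeration over x5 happens locally inside one branch, so a sibling branch without x5 never waits on it.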
Next, we theoretically show its superiority over MB-DPOP in terms of message number. For a cluster root, we distinguish between the CC nodes it enumerates (i.e., those in its separator) and the remaining CC nodes inside the cluster.
Lemma
For an agent a_i in a cluster, the number of instantiations it receives is exponential in the number of CC nodes enumerated by the CR node plus the number of CC nodes on the path from the CR node to a_i.
Proof.
According to line 25, the number of non-concurrent instantiations sent from the CR node is exponential in the number of CC nodes it enumerates. Besides, each non-concurrent instantiation is augmented by the CC nodes along the path from the CR node to a_i (lines 23, 32). Therefore, the number of instantiations a_i receives is exponential in the number of enumerated CC nodes plus the number of CC nodes on that path. ∎
Theorem
Given the same CC lists for each node in each cluster, the maximal message number of RMB-DPOP is no more than that of MB-DPOP.
Proof.
It suffices to prove the theorem by analyzing the total number of instantiations received by each agent in a cluster, since an agent must respond with a bounded utility table to its parent after receiving an instantiation. Without loss of generality, we assume that each variable has a domain of the same size d. In MB-DPOP, each agent in a cluster receives a number of instantiations exponential in the total number of CC nodes of the cluster. Whereas, from Lemma 1, the number of instantiations sent to an agent in RMB-DPOP is exponential only in the number of CC nodes enumerated by the CR node plus the number of CC nodes on the path from the CR node to that agent, which is no greater than the total number of CC nodes.
Consequently, RMB-DPOP propagates no more instantiations than MB-DPOP. Only when a cluster has no CC nodes inside it (i.e., all CC nodes lie in the CR node's separator) are the instantiations for each agent in RMB-DPOP equivalent to those in MB-DPOP. Thus, the theorem holds. ∎
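As a concrete illustration of the theorem (with a hypothetical cluster, not the one in Fig. 2): suppose a cluster has 4 CC nodes with domain size 3, of which only 2 lie in the CR node's separator, and one branch-specific CC node lies on the path to a given agent. The instantiation counts then compare as follows:

```python
d = 3                # domain size (illustrative)
total_cc = 4         # all CC nodes of the cluster
enumerated_cc = 2    # CC nodes in the CR node's separator
path_cc = 1          # branch-specific CC nodes on the path to the agent

mbdpop = d ** total_cc                     # MB-DPOP: full central enumeration
rmbdpop = d ** (enumerated_cc + path_cc)   # RMB-DPOP: distributed enumeration
print(mbdpop, rmbdpop)
```

Since enumerated_cc + path_cc can never exceed total_cc, the RMB-DPOP count is never larger, and the two coincide exactly when every CC node lies in the CR node's separator.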
3.3. Iterative Selection Mechanism
Instead of selecting CC nodes based on local knowledge as MB-DPOP does, which would result in a large number of CC nodes, we choose CC nodes by taking both their effectiveness and their relative positions into consideration through an iterative selection mechanism (ISM). Specifically, in a cluster we measure the effectiveness of a node by the number of active nodes it covers. Here, an active node is one whose width is still greater than k given the already selected CC nodes. Besides, to facilitate the DEM, we tend to select nodes in different branches of a cluster. Therefore, we propose to break ties among the nodes with the same effectiveness by their positions in the pseudo tree. Algorithm 2 gives the sketch of the Labeling phase.
In more detail, a CC node is selected through two phases of message-passing. In the first phase, the effectiveness of each CC node candidate is aggregated in a bottom-up fashion via SEP_INFO messages. Specifically, each agent maintains a data structure to record the effectiveness of the candidates. When receiving a SEP_INFO message from a child, it merges the child's effectiveness records into its own.
If the agent itself is an active node (i.e., it satisfies the condition in line 38), it increases the effectiveness of each CC node candidate by 1 (lines 39-43). Then it removes from its records all the CC candidates whose effectiveness is suboptimal in its descendants (lines 45-52), since they cannot produce the highest effectiveness. The phase ends when the cluster root has received the SEP_INFO messages from all its children in the cluster.
In the second phase, the cluster root chooses the CC node with the maximal effectiveness (line 21) and propagates it into the cluster via ALLOCATION messages. According to Lemma 3.1 and Theorem 3.2, our algorithm can take advantage of the CC nodes inside the cluster through the DEM. Therefore, we propose to break ties according to the height of the candidates when choosing a CC node, i.e., we tend to choose the lowest CC node since it is more likely to be inside the cluster. The phase ends after each cluster leaf (CL) starts a new phase of effectiveness propagation (line 33). The Labeling phase terminates when there are no active nodes (i.e., the condition in line 17 is satisfied).
It is worth noting that our selection mechanism only incurs a modest number of extra messages. Specifically, to determine a CC node in a cluster, agents need to propagate bottom-up SEP_INFO messages and top-down ALLOCATION messages via tree edges, which requires a number of messages linear in the number of nodes in the cluster. Thus, the total number of messages exchanged in the Labeling phase of a cluster is linear in the cluster size times the number of selected CC nodes, and the overall message overhead is linear in the total number of agents times the number of CC nodes.
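Viewed centrally, the ISM is a greedy loop: pick the candidate covering the most still-active nodes, break ties by the lowest (deepest) position, and repeat until no node exceeds the limit. The sketch below is a simplified centralized model of this loop; the coverage sets, depths, widths, and the assumption that selecting a covering candidate lowers each covered width by one are all illustrative:

```python
def select_cc_nodes(coverage, depth, width, limit):
    """Greedy CC selection: repeatedly pick the candidate covering the most
    still-active nodes, breaking ties by the lowest (deepest) position.
    coverage[c] = nodes whose width candidate c reduces; width[n] = current
    width of node n (illustrative model of the distributed aggregation)."""
    width = dict(width)
    selected = []
    while any(w > limit for w in width.values()):
        active = {n for n, w in width.items() if w > limit}
        # effectiveness = number of active nodes a candidate covers
        best = max((c for c in coverage if c not in selected),
                   key=lambda c: (len(coverage[c] & active), depth[c]))
        if not coverage[best] & active:
            break  # no remaining candidate helps (sketch guard)
        selected.append(best)
        for n in coverage[best] & active:
            width[n] -= 1  # simplified: selecting best lowers these widths by 1
    return selected

coverage = {"A": {"n1", "n2"}, "B": {"n2", "n3", "n4"}, "C": {"n4"}}
depth = {"A": 1, "B": 2, "C": 3}
width = {"n1": 3, "n2": 4, "n3": 4, "n4": 3}
selected = select_cc_nodes(coverage, depth, width, limit=3)
print(selected)
```

Here candidate B covers both active nodes (n2 and n3) and is selected alone, whereas a purely local heuristic could have picked A and C as well.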
Table 1. Trace of the first round of effectiveness aggregation: steps 1-6 with the SEP_INFO message sent at each step (no message at step 6).
3.4. Caching Mechanism
The caching mechanism attempts to reduce unnecessary inferences by exploiting the historical results when they are compatible with the current instantiations. To do this, before an agent propagates an instantiation to a child, it projects the instantiation onto that child's separator and stores the projected one. When the agent receives a new instantiation, for each child it checks whether the new instantiation is compatible with the cached one associated with that child. If so, the result cached in the previous iteration is still valid and there is no need to perform a memory-bounded inference. Otherwise, the result from the child is no longer valid, and the agent propagates the (augmented) instantiation to the child to initiate a new memory-bounded inference.
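The compatibility check amounts to comparing the new instantiation with each child's cached projection. A minimal sketch, in which the child names, variables, and cached values are illustrative assumptions:

```python
def compatible(new_inst, cached_proj):
    """A cached result stays valid iff the new instantiation agrees with the
    cached projection on every variable the child's result depends on."""
    return all(new_inst.get(var) == val for var, val in cached_proj.items())

# Per-child cache: the projection of the last instantiation sent to that child.
cache = {"child1": {"x3": 0}, "child2": {"x5": 1, "x7": 0}}
new_inst = {"x3": 0, "x5": 1, "x7": 1}

# Only children whose cached projection became stale need a new inference.
to_reinfer = [c for c, proj in cache.items() if not compatible(new_inst, proj)]
print(to_reinfer)
```

In this example child1's result is reused because x3 is unchanged, while child2 must re-infer since x7 flipped; the saving grows with the number of CC nodes a child's subtree does not depend on.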
3.5. Execution Example
For better understanding, we take Fig. 2 as an example to illustrate our algorithm. Given the dimension limit, there is only one cluster. The Labeling phase begins with the CL nodes, which send SEP_INFO messages to their parents, and Table 1 presents the trace of effectiveness aggregation in the first round in chronological order.
It can be seen that one node has the highest effectiveness, so we choose it as a CC node. Then a top-down phase is initiated to announce the selection to the cluster. These two phases are performed alternately until all the nodes in the cluster have a width within the limit. The final CC nodes for each agent are listed in Table 2.
Table 3. Trace of the first round of instantiation propagation: steps 1-5 with the INSTANTIATION messages sent at each step.
Then, the DEM begins with the CR node, which sends the first instantiation of its enumerated CC nodes to its children in the cluster. When receiving an instantiation, a CC node appends its own assignment to the instantiation. Table 3 gives the trace of the first round of instantiation propagation.
4. Experimental Evaluation
In this section, we compare our proposed RMB-DPOP with the state-of-the-art on various benchmarks, and present an ablation study to demonstrate the effectiveness of each mechanism.
4.1. Experimental Configuration
We empirically evaluate RMB-DPOP, PT-FB, DPOP and MB-DPOP on two types of problems, i.e., random DCOPs and scale-free networks Barabási and Albert (1999). In the first configuration, we consider random DCOPs with a graph density of 0.2, a domain size of 3 and an agent number varying from 18 to 34. The second configuration consists of DCOPs with 20 agents, a graph density of 0.2 and a domain size varying from 3 to 6. In addition, we present the ratio of the problems successfully solved within a limited time on the second configuration with the graph density set to 0.5. In the third configuration, we consider scale-free networks generated by the Barabási-Albert model, where we set the agent number to 26 and the domain size to 3, fix one Barabási-Albert parameter to 10, and vary the other from 2 to 10.
In our experiments, we use the message number and the network load (i.e., the size of the total information exchanged) to measure communication overheads. Also, we use wall-clock time to measure the runtime. For each experiment, we generate 50 random instances and report the median over all the instances. The experiments are conducted on an i7-7820X workstation with 32GB of memory, and we set the timeout to 30 minutes for each algorithm. To demonstrate the effects of different dimension limits, we benchmark RMB-DPOP with k varying from 3 to 9. Finally, for fairness, we set the maximal number of dimensions for DPOP to 9.
4.2. Experimental Results
Fig. 3 presents the experimental results under different agent numbers. It can be seen from the figure that PT-FB cannot solve the problems with more than 22 agents. That is due to the fact that search-based solvers need to explicitly exhaust the solution space by message-passing, which is quite expensive when solving problems with a large agent number. Similarly, given the memory budget, DPOP also fails to scale up to the problems with more than 22 agents. On the other hand, the scalability of the memory-bounded inference solvers depends on the size of k. For example, MB-DPOP with a small k can only scale up to the problems with 26 agents, while it can solve the ones with 30 agents given a larger k. This is because a large k leads to fewer cycle-cut nodes and can significantly reduce the number of memory-bounded inferences. Among these memory-bounded inference algorithms, given the same k, RMB-DPOP substantially outperforms MB-DPOP in both communication overheads and runtime. Besides, except for the largest k, RMB-DPOP can solve problems with a larger agent number than MB-DPOP under the same k, which demonstrates that our proposed mechanisms improve the scalability of MB-DPOP. It is worth noting that the network load of RMB-DPOP with a smaller k is less than that of MB-DPOP with a larger k when solving the problems with 30 agents, which indicates the merit of our proposed ISM.
Fig. 4 presents the results when solving the problems with different graph densities. It can be concluded from the figure that DPOP can only solve the problems with a density of 0.1 under this configuration. Besides, given the same k, RMB-DPOP outperforms MB-DPOP on all the metrics. Moreover, the gaps between RMB-DPOP and MB-DPOP widen as the density grows, which demonstrates the potential of RMB-DPOP for reducing redundant inferences. Besides, it is noteworthy that RMB-DPOP with a stricter memory budget can still outperform MB-DPOP with a relatively large k in terms of network load. For example, the network load of RMB-DPOP with a small k is even less than that of MB-DPOP with a large k when solving dense problems. The same phenomenon also appears when solving the problems with a density of 0.3. These phenomena indicate that the redundant inference in MB-DPOP grows quickly as the graph density grows, while our proposed algorithm can effectively reduce unnecessary inferences. Fig. 5 shows the ratio of the problems successfully solved within different time limits on this configuration with the graph density of 0.5. It can be seen that RMB-DPOP solves over 90% of the problems in 15 minutes, while the success rate of MB-DPOP with the same k is less than 80%. Besides, RMB-DPOP solves 60% of the problems in 6 minutes, while MB-DPOP needs another 3 minutes (i.e., 9 minutes) to reach that rate, which again demonstrates the great superiority of the proposed algorithm.
Fig. 6 shows the performance comparison when solving scale-free network problems under different Barabási-Albert parameters. PT-FB still cannot scale up due to the prohibitively large search space. It is worth mentioning that DPOP fails to solve all these problems, which demonstrates its poor scalability under memory-limited scenarios: the pseudo trees of these scale-free network problems have an induced width greater than 9 once the graphs become dense enough. On the other hand, our algorithm exhibits a great advantage over MB-DPOP on all the metrics. The results show that RMB-DPOP with the largest k successfully solves all the problems and, except for that largest k, RMB-DPOP can solve denser problems than MB-DPOP under the same k. Although the scalability of RMB-DPOP with a small k seems to be the same as that of MB-DPOP with the same k, RMB-DPOP incurs less network load, and on the densest problems its runtime is close to that of MB-DPOP with a larger k. This also demonstrates the scalability of our algorithm.
Fig. 7 presents an ablation study on the second configuration to demonstrate the effectiveness of each mechanism. It can be seen that the performance of MB-DPOP can be improved by each single mechanism when solving dense problems, and can be further enhanced by combining DEM and ISM. That is because ISM tends to choose the CC nodes inside clusters, and DEM can effectively exploit these CC nodes to reduce the non-concurrent instantiations. Without DEM and ISM, the contribution of the caching mechanism is quite limited, but the combination of all the mechanisms achieves the best performance.
5. Conclusions
MB-DPOP suffers from severe redundancy in memory-bounded inference due to its inability to exploit the structure of a problem. In this paper, we propose a novel algorithm named RMB-DPOP which incorporates three mechanisms to reduce the redundancy in memory-bounded inference. First, we propose a distributed enumeration mechanism that makes use of the independence among different branches to reduce the number of non-concurrent instantiations. Second, we propose an iterative selection mechanism to refine the cycle-cut node selection, which aims to make each cycle-cut node cover a maximal number of the nodes whose width exceeds the limit in a cluster. Finally, a caching mechanism that exploits the historical inference results is introduced to further avoid unnecessary inferences. We theoretically prove that the distributed enumeration mechanism reduces the message number whenever there is at least one cycle-cut node inside a cluster. Our experimental evaluations demonstrate the superiority of RMB-DPOP.
We note that our proposed mechanisms can be adapted to other algorithms as well. In more detail, the caching mechanism could be applied to any iterative process with recurrent combinations. Moreover, the selection mechanism could be used in other memory-bounded inference algorithms like ADPOP Petcu and Faltings (2005a) and HS-CAI Chen et al. (2019) to choose more appropriate variables to approximate or decimate. Also, this mechanism is highly customizable when combined with other algorithms. For example, we could easily implement different heuristics by changing the definition of effectiveness for each candidate. Therefore, we envisage that these mechanisms not only advance the development of MB-DPOP, but also contribute to the algorithmic design for DCOPs.
We would like to thank the anonymous reviewers for their valuable comments and helpful suggestions. This work is partially supported by the Chongqing Research Program of Basic Research and Frontier Technology under Grant No.: cstc2017jcyjAX0030, the National Natural Science Foundation of China under Grant No.: 51608070 and the Graduate Research and Innovation Foundation of Chongqing, China under Grant No.: CYS18047.
References
 Barabási and Albert (1999) Albert-László Barabási and Réka Albert. 1999. Emergence of Scaling in Random Networks. Science 286, 5439 (1999), 509–512.
 Brito and Meseguer (2010) Ismel Brito and Pedro Meseguer. 2010. Improving DPOP with function filtering. In AAMAS. 141–148.
 Chen et al. (2019) Dingding Chen, Yanchen Deng, Ziyu Chen, Wenxin Zhang, and Zhongshi He. 2019. HS-CAI: A Hybrid DCOP Algorithm via Combining Search with Context-based Inference. CoRR abs/1911.12716 (2019). arXiv:1911.12716
 Chen et al. (2018) Ziyu Chen, Yanchen Deng, Tengfei Wu, and Zhongshi He. 2018. A class of iterative refined Max-sum algorithms via non-consecutive value propagation strategies. Autonomous Agents and Multi-Agent Systems 32, 6 (2018), 822–860.
 Dechter (1999) Rina Dechter. 1999. Bucket elimination: A unifying framework for reasoning. Artificial Intelligence 113, 1–2 (1999), 41–85.
 Dechter et al. (2003) Rina Dechter et al. 2003. Constraint Processing. Morgan Kaufmann.
 Duan et al. (2018) Peibo Duan, Changsheng Zhang, Guoqiang Mao, and Bin Zhang. 2018. Applying Distributed Constraint Optimization Approach to the User Association Problem in Heterogeneous Networks. IEEE Trans. Cybernetics 48, 6 (2018), 1696–1707.
 Farinelli et al. (2008) Alessandro Farinelli, Alex Rogers, Adrian Petcu, and Nicholas R Jennings. 2008. Decentralised coordination of low-power embedded devices using the max-sum algorithm. In AAMAS. 639–646.
 Fioretto et al. (2017) Ferdinando Fioretto, William Yeoh, Enrico Pontelli, Ye Ma, and Satishkumar J Ranade. 2017. A distributed constraint optimization (DCOP) approach to the economic dispatch with demand response. In AAMAS. 999–1007.
 Freuder and Quinn (1985) Eugene C. Freuder and Michael J. Quinn. 1985. Taking Advantage of Stable Sets of Variables in Constraint Satisfaction Problems. In IJCAI. 1076–1078.
 Gershman et al. (2009) Amir Gershman, Amnon Meisels, and Roie Zivan. 2009. Asynchronous forward bounding for distributed COPs. Journal of Artificial Intelligence Research 34 (2009), 61–88.
 Gutierrez and Meseguer (2012) Patricia Gutierrez and Pedro Meseguer. 2012. Removing redundant messages in n-ary BnB-ADOPT. Journal of Artificial Intelligence Research 45 (2012), 287–304.
 Gutierrez et al. (2011) Patricia Gutierrez, Pedro Meseguer, and William Yeoh. 2011. Generalizing ADOPT and BnB-ADOPT. In IJCAI. 554–559.
 Hirayama and Yokoo (1997) Katsutoshi Hirayama and Makoto Yokoo. 1997. Distributed partial constraint satisfaction problem. In CP. 222–236.
 Li et al. (2016) Shijie Li, Rudy R. Negenborn, and Gabriël Lodewijks. 2016. Distributed constraint optimization for addressing vessel rotation planning problems. Engineering Applications of Artificial Intelligence 48 (2016), 159–172.
 Litov and Meisels (2017) Omer Litov and Amnon Meisels. 2017. Forward bounding on pseudo-trees for DCOPs and ADCOPs. Artificial Intelligence 252 (2017), 83–99.
 Maheswaran et al. (2004) Rajiv T Maheswaran, Jonathan P Pearce, and Milind Tambe. 2004. Distributed Algorithms for DCOP: A Graphical-Game-Based Approach. In ISCA PDCS. 432–439.
 Modi et al. (2005) Pragnesh Jay Modi, Wei-Min Shen, Milind Tambe, and Makoto Yokoo. 2005. ADOPT: Asynchronous distributed constraint optimization with quality guarantees. Artificial Intelligence 161, 1–2 (2005), 149–180.
 Nguyen et al. (2019) Duc Thien Nguyen, William Yeoh, Hoong Chuin Lau, and Roie Zivan. 2019. Distributed Gibbs: A Linear-Space Sampling-Based DCOP Algorithm. Journal of Artificial Intelligence Research 64 (2019), 705–748.
 Pearce and Tambe (2007) Jonathan P Pearce and Milind Tambe. 2007. Quality Guarantees on k-Optimal Solutions for Distributed Constraint Optimization Problems. In IJCAI. 1446–1451.
 Petcu and Faltings (2005a) Adrian Petcu and Boi Faltings. 2005a. Approximations in distributed optimization. In CP. 802–806.
 Petcu and Faltings (2005b) Adrian Petcu and Boi Faltings. 2005b. A scalable method for multiagent constraint optimization. In IJCAI. 266–271.
 Petcu and Faltings (2006) Adrian Petcu and Boi Faltings. 2006. ODPOP: An algorithm for open/distributed constraint optimization. In AAAI. 703–708.
 Petcu and Faltings (2007a) Adrian Petcu and Boi Faltings. 2007a. A hybrid of inference and local search for distributed combinatorial optimization. In IAT. 342–348.
 Petcu and Faltings (2007b) Adrian Petcu and Boi Faltings. 2007b. MB-DPOP: A New Memory-Bounded Algorithm for Distributed Optimization. In IJCAI. 1452–1457.
 Sultanik et al. (2007) Evan Sultanik, Pragnesh Jay Modi, and William C Regli. 2007. On Modeling Multiagent Task Scheduling as a Distributed Constraint Optimization Problem.. In IJCAI. 1531–1536.
 Yeoh et al. (2010) William Yeoh, Ariel Felner, and Sven Koenig. 2010. BnB-ADOPT: An asynchronous branch-and-bound DCOP algorithm. Journal of Artificial Intelligence Research 38 (2010), 85–133.
 Yeoh et al. (2009a) William Yeoh, Xiaoxun Sun, and Sven Koenig. 2009a. Trading off solution quality for faster computation in DCOP search algorithms. In IJCAI. 354–360.
 Yeoh et al. (2009b) William Yeoh, Pradeep Varakantham, and Sven Koenig. 2009b. Caching schemes for DCOP search algorithms. In AAMAS. 609–616.
 Zhang et al. (2005) Weixiong Zhang, Guandong Wang, Zhao Xing, and Lars Wittenburg. 2005. Distributed stochastic search and distributed breakout: Properties, comparison and applications to constraint optimization problems in sensor networks. Artificial Intelligence 161, 1–2 (2005), 55–87.