Introduction
Distributed Constraint Optimization Problems (DCOPs) [14, 8] are an elegant model for representing Multi-Agent Systems (MAS) where agents coordinate with each other to optimize a global objective. Due to their ability to capture essential MAS aspects, DCOPs can formalize various real-world applications such as sensor networks [6], task scheduling [18, 10], smart grids [9] and so on.
Incomplete algorithms for DCOPs [28, 17, 7, 21] aim to rapidly find solutions at the cost of sacrificing optimality. On the contrary, complete algorithms guarantee the optimal solution and can generally be classified into inference-based and search-based algorithms. DPOP [22] and Action-GDL [26] are typical inference-based complete algorithms which employ a dynamic programming paradigm to solve DCOPs. However, they require a linear number of messages of exponential size with respect to the induced width. Accordingly, O-DPOP [24] and MB-DPOP [25] were proposed to trade the message number for smaller memory consumption by iteratively propagating dimension-limited utilities together with the corresponding contexts. That is, they iteratively perform a context-based inference to solve DCOPs optimally when the memory budget is limited.

Search-based complete algorithms like SBB [14], AFB [12], PT-FB [16], ADOPT [19] and its variants [27, 13] perform distributed backtrack searches to exhaust the search space. They have a linear size of messages but an exponential number of messages. Furthermore, these algorithms only use local knowledge to update the lower bounds, which has a trivial effect on pruning and makes them infeasible for solving large-scale problems. Subsequently, PT-ISABB [5], DJAO [15] and ADOPT-BDP [1] attempted to hybridize search with inference, where an approximated inference is used to construct the initial lower bounds for the search process. More specifically, PT-ISABB and ADOPT-BDP use ADPOP [23] to establish the lower bounds, while DJAO employs a function filtering technique [2] to obtain them. Here, ADPOP is an approximate version of DPOP that drops the exceeding dimensions to keep each message size below the memory limit. However, given a limited memory budget, the lower bounds obtained in a one-shot preprocessing phase are still inefficient for pruning, since such bounds cannot be tightened by taking the running contexts into account. That is, the existing hybrid algorithms use only a context-free approximated inference as a one-shot phase to construct the initial lower bounds.
In this paper, we investigate the possibility of combining search with contextbased inference to solve DCOPs efficiently. Specifically, our main contributions are listed as follows:

We propose a novel complete DCOP algorithm, called HS-CAI, that hybridizes search with context-based inference: the search adopts a tree-based SBB to find the optimal solution and provides contexts for the inference, while the inference iteratively performs utility propagation for these contexts to construct tight lower bounds that speed up the search process.

We introduce a context evaluation mechanism to extract context patterns for the inference from the contexts derived from the search process, so as to further reduce the number of context-based inferences.

We theoretically show the completeness of HS-CAI and prove that the lower bounds produced by the context-based inference are at least as tight as the ones established by the context-free approximated inference under the same memory budget. Moreover, the experimental results demonstrate that HS-CAI outperforms state-of-the-art complete DCOP algorithms.
Background
In this section, we expound the preliminaries including DCOPs, pseudo trees, MB-DPOP and O-DPOP.
Distributed Constraint Optimization Problems
A distributed constraint optimization problem [19] can be defined by a tuple ⟨A, X, D, F⟩ where

A = {a_1, …, a_m} is a set of agents.

X = {x_1, …, x_n} is a set of variables.

D = {D_1, …, D_n} is a set of finite, discrete domains. Each variable x_i takes a value in D_i.

F is a set of constraint functions. Each function specifies the non-negative cost for each assignment combination of the variables in its scope.
For the sake of simplicity, we assume that each agent holds exactly one variable (and thus the terms agent and variable can be used interchangeably) and that all constraints are binary (i.e., f_{ij}: D_i × D_j → R_{≥0}). A solution to a DCOP is an assignment to all the variables such that the total cost is minimized. That is,

X* = argmin_X Σ_{f_{ij} ∈ F} f_{ij}(x_i, x_j)
A DCOP can be represented by a constraint graph where a vertex denotes a variable and an edge denotes a constraint. Fig. 1(a) presents a DCOP with five variables and seven constraints. For simplicity, the domain size of each variable is four and all constraints are identical, as shown in Fig. 1(b).
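As a concrete illustration, the global objective over such a constraint graph can be sketched in a few lines of Python. The variable names, edge set and cost table below are illustrative placeholders, not the exact instance of Fig. 1:

```python
from itertools import product

variables = ["x1", "x2", "x3", "x4", "x5"]
domain = [0, 1, 2, 3]                       # domain size four for every variable

# Seven binary constraints as edges of the constraint graph; every edge
# shares one identical cost table, mirroring Fig. 1(b).
edges = [("x1", "x2"), ("x1", "x3"), ("x2", "x3"), ("x2", "x4"),
         ("x3", "x4"), ("x3", "x5"), ("x4", "x5")]

def cost(vi, vj):
    # placeholder cost table: |vi - vj| plus a penalty of 1 for equal values
    return abs(vi - vj) + (vi == vj)

def total_cost(assignment):
    """Global objective: the sum of all binary constraint costs."""
    return sum(cost(assignment[i], assignment[j]) for i, j in edges)

# Brute-force optimum, feasible only at this toy size (4^5 = 1024 assignments).
best = min((dict(zip(variables, vals)) for vals in product(domain, repeat=5)),
           key=total_cost)
```

Centralized brute force is only viable here because the instance is tiny; the algorithms discussed in this paper distribute this minimization across agents.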
Pseudo Tree
A pseudo tree is a depth-first search (DFS) arrangement [11, 4] of a constraint graph with the property that different branches are independent; it categorizes the constraints into tree edges and pseudo edges (i.e., non-tree edges). Thus, the neighbors of an agent can be classified into its parent, children, pseudo parents and pseudo children based on their positions in the pseudo tree and the types of edges connecting them with that agent. For clarity, we denote all the (pseudo) parents of an agent as its ancestors of interest, and the set of ancestors that share constraints with the agent and its descendants as its separator [22]. Fig. 1(c) presents a possible pseudo tree derived from Fig. 1(a).
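The DFS arrangement and the tree/pseudo edge classification can be sketched as follows. This is a minimal illustration with our own function and variable names, not code from the paper:

```python
def build_pseudo_tree(graph, root):
    """DFS arrangement of a constraint graph: returns the parent map and
    classifies each edge as a tree edge or a pseudo (back) edge."""
    parent = {root: None}
    tree_edges, pseudo_edges, visited = set(), set(), set()

    def dfs(u):
        visited.add(u)
        for v in graph[u]:
            if v not in visited:
                parent[v] = u                      # first discovery: tree edge
                tree_edges.add(frozenset((u, v)))
                dfs(v)
            elif v != parent[u]:                   # back edge to a visited ancestor
                pseudo_edges.add(frozenset((u, v)))

    dfs(root)
    pseudo_edges -= tree_edges                     # defensive: never double-count
    return parent, tree_edges, pseudo_edges
```

For a triangle graph rooted at one vertex, two of the three constraints become tree edges and the remaining one becomes a pseudo edge, which matches the property that non-tree neighbors lie on the same root-to-leaf path.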
MB-DPOP and O-DPOP
MB-DPOP and O-DPOP apply an iterative context-based utility propagation to aggregate the optimal global utility. Specifically, MB-DPOP first uses the cycle-cut idea [4] on a pseudo tree to determine cycle-cut variables and groups these variables into clusters. Within a cluster, MB-DPOP performs a bounded-memory exploration; everywhere else, the utility propagation from DPOP applies. Specifically, agents in each cluster propagate memory-bounded utilities for all the contexts of the cycle-cut variables. As for O-DPOP, each agent adopts an incremental, best-first fashion to propagate context-based utilities. Specifically, an agent repeatedly asks its children for their suggested context-based utilities until it can calculate a suggested utility for its parent during the utility propagation phase. The phase finishes once the root agent has received enough utilities to determine the optimal global utility.
Proposed Method
In this section, we present a novel complete DCOP algorithm which interleaves search with inference.
Motivation
It can be concluded that search can exploit bounds to prune the solution space, but the pruning efficiency is closely related to the tightness of those bounds. However, most search-based complete algorithms can only use local knowledge to compute the initial lower bounds, which leads to inefficient pruning. On the other hand, inference-based complete algorithms can aggregate the global utility promptly, but their memory consumption is exponential in the induced width. Therefore, it is natural to combine search and inference by performing memory-bounded inferences to construct efficient lower bounds for search. Unfortunately, the existing hybrid algorithms only perform an approximated inference to construct one-shot bounds in the preprocessing phase, which leads to inefficient pruning given a limited memory budget. In fact, the bounds can be tightened by context-based inference. That is, instead of dropping a set of dimensions, named approximated dimensions, to stay below the memory budget, we explicitly consider the running-context assignments to a subset of the approximated dimensions (i.e., the context patterns) and compute tight bounds w.r.t. these context patterns. Here, we refer to such an assigned subset of the approximated dimensions as decimated dimensions.
More specifically, we aim to combine the advantages of search and context-based inference to optimally solve DCOPs. Different from the existing hybrid methods, we compute tight bounds for the running contexts by performing context-based inference for the context patterns chosen from the contexts derived from the search process.
Proposed Algorithm
Now, we present our proposed algorithm which consists of a preprocessing phase and a hybrid phase.
Preprocessing Phase
The preprocessing phase performs a bottom-up dimension and utility propagation to accumulate the approximated dimensions and to establish the initial lower bounds for search based on the propagated utilities. Accordingly, we employ a tailored version of ADPOP with the dimension limit to propagate the approximated dimensions and incomplete utilities (we omit the pseudo code of this phase due to the limited space). Particularly, during the propagation process, each agent a_i selects the dimensions of its highest ancestors to approximate so as to keep the dimension size of each outgoing utility below the limit. Then, the approximated dimensions dims_i (i.e., the dimensions approximated by a_i and its descendants) and the utility util_i sent from a_i to its parent can be computed as follows:
dims_i = (⋃_{a_c ∈ C(a_i)} dims_c) ∪ dims_i^{drop} (1)

util_i = drop_{dims_i^{drop}} ( local_i ⊗ (⊗_{a_c ∈ C(a_i)} util_c) ) (2)

Here, dims_c and util_c are the approximated dimensions and utility received from the child a_c, respectively, dims_i^{drop} denotes the dimensions a_i selects to drop, and local_i denotes the combination of the constraints between a_i and its (pseudo) parents, i.e.,

local_i = ⊗_{a_j ∈ AP(a_i)} f_{ij} (3)
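The drop operator in Eq. (2) can be illustrated as a min-projection over the dropped dimensions, which is what makes the propagated utilities lower bounds on the exact ones. A sketch with hypothetical names, storing a utility as an explicit table keyed by value tuples:

```python
def drop_dims(util, dims, drop):
    """Project the dropped dimensions out of a utility table by minimization.

    util: dict mapping a value tuple (aligned with `dims`) to a cost.
    dims: ordered list of dimension names of `util`.
    drop: set of dimension names to remove.
    The result lower-bounds the exact utility for every completion of the
    dropped dimensions, in the spirit of the ADPOP-style approximation.
    """
    keep = [d for d in dims if d not in drop]
    out = {}
    for assign, cost in util.items():
        key = tuple(v for d, v in zip(dims, assign) if d not in drop)
        out[key] = min(out.get(key, float("inf")), cost)  # keep the cheapest completion
    return keep, out
```

Because every entry of the projected table is a minimum over all completions, joining such tables up the pseudo tree preserves the lower-bound property used by the search part.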
Taking Fig. 1(c) for example, if we set , the dimensions dropped by are . Thus, the approximated dimensions and utility sent from to are and , respectively.
Hybrid Phase
The hybrid phase consists of a search part and a context-based inference part. The search part uses a variant of SBB on a pseudo tree (i.e., a simplified version of NCBB [3]) to expand any feasible partial assignments and provides contexts for the inference part. By using such contexts, the inference part propagates context-based utilities iteratively to produce tight lower bounds for the search part.
Traditionally, context-based inference is implemented by considering the assignments to the approximated dimensions (that is, the decimated dimensions are equal to the approximated dimensions). For example, MB-DPOP performs an iterative context-based inference by considering each assignment combination of the cycle-cut variables. However, this approach is not a good choice for our case due to the following facts. First, the number of assignments of the cycle-cut variables is exponential, which would incur unacceptable traffic overheads. Moreover, the propagated utilities w.r.t. a specific combination may go out of date, since another inference is required as soon as the assignments change, which is very common in the search process. Therefore, it is unnecessary to perform an inference for each assignment combination of the approximated dimensions.
Therefore, we seek to reduce the number of context-based inferences by making the propagated context-based utilities compatible with more contexts. Specifically, for an agent a_i, we consider the decimated dimensions dims_i^{dec} ⊆ dims_i. Then, the specific assignments to dims_i^{dec} serve as a context pattern, and the propagated utilities will cover all the partial assignments with the same context pattern. That is, a_i's context pattern can be defined by:
ctxt_i = {(x_j, d_j) ∈ Cpa_i | x_j ∈ dims_i^{dec}} (4)

where Cpa_i refers to a_i's received current partial assignment, and d_j is the current assignment of x_j.
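Eq. (4) amounts to restricting the current partial assignment to the decimated dimensions. A sketch, with hypothetical names, of the pattern extraction and the compatibility test it induces:

```python
def context_pattern(cpa, decimated_dims):
    """Eq. (4): keep from the current partial assignment exactly the
    assignments to the decimated dimensions."""
    return {x: v for x, v in cpa.items() if x in decimated_dims}

def compatible(cpa, pattern):
    """A partial assignment is compatible with a pattern iff it agrees on
    every assignment the pattern fixes; the context-based utilities
    propagated for the pattern remain valid for all such assignments."""
    return all(cpa.get(x) == v for x, v in pattern.items())
```

The smaller the set of decimated dimensions, the more partial assignments a single pattern covers, at the price of looser bounds, which is exactly the trade-off discussed next.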
Taking agent in Fig. 1(c) as an example, given the limit , we have . Assume that . Thus, only the assignment of will be considered and (i.e., ) will still be dropped during the context-based inference part. Further, assume that . Then, covers four contexts (, , and ). Therefore, it only needs to perform inference for rather than for the four contexts above in this case.
Selecting a context pattern is challenging as it offers a trade-off between the tightness of lower bounds and the number of compatible partial assignments. Thus, a good context pattern should comply with the following requirements. First, it should be compatible with more contexts so as to avoid unnecessary context-based inferences; in other words, its assignments should be unlikely to change in the short term. Second, it should result in tight lower bounds. Therefore, we propose a context evaluation mechanism to select the context pattern according to the frequency of each assignment of each dimension in the approximated dimensions. In more detail, we consider the context pattern consisting of the assignments whose frequency is greater than a specified threshold . Given , we have where refers to the frequency of for . With an appropriate threshold, the assignments in the context pattern are unlikely to change in the short term. On the other hand, if a partial assignment is hard to prune, then more assignments will be included in the context pattern, which guarantees the lower bound tightness.
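The frequency-based selection described above can be sketched as follows. This is a simplification of Algorithm 1 with hypothetical names; here an assignment enters the pattern when its consecutive-occurrence count exceeds the threshold times the number of observed partial assignments:

```python
from collections import defaultdict

class ContextEvaluator:
    """Frequency-based context evaluation (a sketch, not the exact
    counter handling of Algorithm 1)."""

    def __init__(self, approx_dims, threshold):
        self.approx_dims = approx_dims       # the approximated dimensions
        self.threshold = threshold           # the specified threshold
        self.count = defaultdict(int)        # consecutive occurrences per dimension
        self.last = {}                       # last seen assignment per dimension
        self.steps = 0                       # number of observed partial assignments

    def observe(self, cpa):
        """Update counters from one received current partial assignment."""
        self.steps += 1
        for x in self.approx_dims:
            if x in cpa:
                if self.last.get(x) == cpa[x]:
                    self.count[x] += 1       # stable assignment: keep counting
                else:
                    self.count[x] = 1        # assignment changed: restart the run
                    self.last[x] = cpa[x]

    def pattern(self):
        """Assignments whose frequency exceeds the threshold form the pattern."""
        return {x: self.last[x] for x in self.approx_dims
                if self.count[x] > self.threshold * self.steps}
```

With this scheme, a dimension whose assignment keeps flipping during the search never enters the pattern, so its inference results are not repeatedly invalidated.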
In addition, we introduce the variable to ensure that the descendants of perform an inference for only one context pattern at a time. Once has found or received a context pattern, is set to false to stop the context evaluation. And when the context pattern is incompatible with , is set to true to indicate that can find a new context pattern.
Next, we will detail how to implement the context evaluation mechanism and the context-based inference part. Algorithm 1 presents a sketch of these procedures. We omit the search part since it is analogous to the tree-based branch-and-bound search algorithm PT-ISABB.
After the preprocessing phase (line 1), the root agent starts the context evaluation by initializing with true (lines 2-3). Besides, it also starts the search part via CPA messages carrying its first assignment to its children (lines 4-5). Upon receipt of a CPA message, first holds the previous , and stores the received and (line 6). Afterwards, it initializes the lower bounds for each child according to its received utilities (lines 7, 23-27). Concretely, the lower bound for is established by the context-based utility compatible with received from (lines 24-25). Otherwise, the bound is computed by the utility received from in the preprocessing phase (lines 26-27). Next, updates based on and the previous one (lines 8, 28-32). Specifically, for each dimension in , clears its counter if its assignment differs from its previous one (lines 29-30). Otherwise, increases that counter (lines 31-32). Then, finds a context pattern if the pattern for the context-based inference has not been determined (lines 9-10). After finding one, it allocates and sends the allocated patterns via CTXT messages to the children who need to execute the context-based inference (i.e., ) (lines 11-15, 33-35). Here, , the allocated pattern for , is a partition of based on 's approximated dimensions (line 33).
When receiving a CTXT message, allocates the received pattern if there is any child who needs to perform the context-based inference (lines 17, 33-35). Otherwise, it sends its context-based utility to its parent (lines 18-19, 36-40). Here, its context-based utility is computed by the following steps. First, it joins its local utility with the context-based utilities from and the utilities from the other children (line 36). Next, it applies to fix the values of the partial dimensions in so as to improve the completeness of the utility (line 38). Finally, it drops dimensions of the utility to stay below the limit (line 39). After all the context-based utilities from have arrived, sends its context-based utility to its parent if it is not the starter of the context-based inference (lines 21-22).
Considering in Fig. 1(c), assume that the context pattern has not been determined and . Given , we have and after receives a CPA with . Since the approximated dimensions for its child are , sends a CTXT message with the context pattern to . When receiving the pattern , sends the context-based utility to . Then, uses to compute the lower bound for after receiving a CPA message with or .
Theoretical Results
In this section, we first prove the effectiveness of the context-based inference in HS-CAI, and further establish the completeness of HS-CAI. Finally, we give a complexity analysis of the proposed method.
Lower Bound Tightness
Lemma 1.
For a given , the lower bound of for produced by the context-based utility () is at least as tight as the one established by the utility (). That is, , where .
Proof.
Directly from the pseudo code, , the dimensions dropped by in the context-based inference part (line 37), can be defined by:

where is the context pattern for 's context-based inference, and is the dimensions dropped by in the preprocessing phase. Since has received from , we have (lines 34-35, 20-21). Thus, is established.
Next, we will prove Lemma 1 by induction.
Base case. ’s children are leaf agents. For each child , we have

where is the assignment of in . The equality from the third to the fourth step holds since . Thus, we have proved the base case.
Inductive hypothesis. Assume that the lemma holds for all of ’s children. Next, we show that the lemma holds for as well. For each , we have
where are ’s children who need to perform the context-based inference. Thus, Lemma 1 is proved. ∎
Correctness
Lemma 2.
Given the optimal solution , , the cost of the subtree rooted at is no less than the lower bound . That is, .
Proof.
Since we have proved in Lemma 1 that the lower bounds constructed by the context-based inference part are at least as tight as the ones established by the preprocessing phase, it is sufficient to show that .
Next, we will prove Lemma 2 by induction as well.
Base case. ’s children are leaf agents. For each child , we have

where is the assignment of in , and is the dimensions dropped by in the context-based inference part. Thus, the base case is proved.
Inductive hypothesis. Assume the lemma holds for all . Next, we prove that the lemma also holds for . For each child , we have
Thus, the lemma is proved. ∎
Theorem 1.
HSCAI is complete.
Complexity
When performing a context-based inference, an agent needs to store the context-based utilities and the utilities received from all its children. Thus, the overall space complexity is where , and is the maximum dimension limit. Since a CTXTUTIL message only contains a context-based utility, its size is . A CPA message is composed of the assignment of each agent and a context evaluation flag; thus, the size of a CPA message is . Other messages like CTXT only carry an assignment combination of the approximated dimensions and thus require only space.
The preprocessing phase of HS-CAI only requires messages, since only utility propagations are performed. For the hybrid phase, the message number of the search part grows exponentially in the agent number, the same as in search-based complete algorithms, while the message number of the context-based inference part is proportional to the number of context patterns selected by the context evaluation mechanism.
Empirical Evaluation
In this section, we first investigate the effect of the parameter of the context evaluation mechanism on HS-CAI. Then, we present experimental comparisons of HS-CAI with state-of-the-art complete DCOP algorithms.
Configuration and Metrics
We empirically evaluate the performance of HS-CAI and state-of-the-art complete DCOP algorithms including PT-FB, DPOP and MB-DPOP on random DCOPs. Besides, we consider HS-CAI without the context-based inference part, denoted HS-AI, and HS-CAI without the context evaluation mechanism, denoted HS-CAI(M). Here, HS-AI is actually a variant of PT-ISABB in DCOP settings. All evaluated algorithms are implemented in DCOPSovler (https://github.com/czy920/DCOPSovler), our own DCOP simulator. Besides, we relate the parameter in HS-CAI to both and , where and is the height of a pseudo tree. Therefore, we set . Moreover, we choose and as the low and high memory budgets for MB-DPOP, HS-AI, HS-CAI(M) and HS-CAI. In our experiments, we use the message number and the network load (i.e., the size of the total information exchanged) to measure the communication overheads, and NCLOs [20] to measure the hardware-independent runtime, where the logical operations in the inference and the search are accesses to utilities and constraint checks, respectively. For each experiment, we generate 50 random instances and report the average over all instances.
Parameter Tuning
First, we examine the effect of different threshold values on the performance of HS-CAI to verify the effectiveness of the context evaluation mechanism. Specifically, we consider DCOPs with 22 agents and a domain size of 3. The graph density varies from 0.2 to 0.6 and the threshold varies from 0.05 to 0.65. Here, we do not show the experimental results for thresholds greater than 0.65, since larger values lead to exactly the same results as 0.65. Fig. 2 presents the network load of HS-CAI(M) and HS-CAI with different thresholds. The average induced widths in this experiment are 8-16. It can be seen from the figure that HS-CAI requires much less network load than HS-CAI(M). That is because HS-CAI performs inference only for the context patterns selected by the context evaluation mechanism rather than for all the contexts, as HS-CAI(M) does.
Besides, given the memory budget limit , it can be observed that the network load of HS-CAI does not decrease monotonically as the threshold increases. This is due to the fact that increasing the threshold, which leads to larger , can decrease the number of context-based inferences but also loosens the tightness of the lower bounds to some degree. As mentioned above, the context pattern selection offers a trade-off between the tightness of lower bounds and the number of compatible partial assignments. Moreover, it can be seen that the best threshold value is close to 0.25 for HS-CAI() while the one for HS-CAI() is near 0.45. Thus, we set the threshold to 0.25 for HS-CAI() and 0.45 for HS-CAI() in the following comparison experiments.
Performance Comparisons
Fig. 4 gives the experimental results under different agent numbers in the sparse configuration, where we set the graph density to 0.25 and the domain size to 3, and vary the agent number from 22 to 32. Here, the average induced widths are 9-17. It can be seen from Fig. 4(a) and (b) that although the hybrid complete algorithms (e.g., HS-AI and HS-CAI) and PT-FB all use a search strategy to find the optimal solution, HS-AI and HS-CAI are superior to PT-FB in terms of the network load and the message number. This is because the lower bounds in PT-FB, which only consider the constraints related to the assigned agents, cannot result in effective pruning. Also, given a fixed , HS-CAI requires fewer messages than HS-AI since the lower bounds produced by the context-based inference are tighter than the ones established by the context-free approximated inference. Besides, it can be seen that HS-CAI() can solve larger problems than HS-AI() and the inference-based complete algorithms like DPOP and MB-DPOP, which demonstrates the superiority of hybridizing search with context-based inference when the memory budget is relatively low.
Although inference requires larger messages than search, it can be observed from Fig. 4(b) that HS-CAI incurs less network load than HS-AI, which indicates that HS-CAI can find the optimal solution with fewer messages owing to the effective pruning and the context evaluation mechanism. Moreover, we can see from Fig. 4(c) that when solving problems with 28 agents, HS-CAI() requires fewer NCLOs than HS-AI() in spite of the exponential computation overheads incurred by the iterative inferences. This is because HS-CAI() provides tight lower bounds that speed up the search and thereby greatly reduce the constraint checks when solving large-scale problems under a limited memory budget.
Besides, we consider DCOPs with a domain size of 3 and a graph density of 0.6 as the dense configuration. The agent number varies from 14 to 24 and the average induced widths are 8-18. Fig. 4 presents the performance comparison. It can be seen from Fig. 4(a) that DPOP and MB-DPOP cannot solve problems with more than 20 agents due to the large induced widths. Furthermore, since the inference-based complete algorithms have to perform inference over the entire solution space, they require many more NCLOs than the other competitors, as Fig. 4(c) shows. Additionally, although both perform context-based inference, it can be seen from Fig. 4(b) and (c) that HS-CAI exhibits great superiority over MB-DPOP in terms of the network load and NCLOs. That is because HS-CAI only performs inference for the context patterns extracted by the context evaluation mechanism, while MB-DPOP needs to iteratively perform inference for all the contexts of the cycle-cut variables. As for HS-CAI with different , it can be seen from Fig. 4(a) and (c) that HS-CAI() requires fewer messages but more NCLOs than HS-CAI(). That is because HS-CAI() produces tighter lower bounds but incurs more computation overheads than HS-CAI().
Conclusion
By analyzing the feasibility of hybridizing search and inference, we propose a complete DCOP algorithm named HS-CAI, which combines search with context-based inference for the first time. Different from the existing hybrid complete algorithms, HS-CAI constructs tight lower bounds to speed up the search by executing context-based inference iteratively. Meanwhile, HS-CAI only needs to perform inference for a part of the contexts obtained from the search process by means of a context evaluation mechanism, which reduces the huge traffic overheads incurred by iterative context-based inferences. We theoretically prove that the context-based inference can produce tighter lower bounds than the context-free approximated inference under the same memory budget. Moreover, the experimental results show that HS-CAI can find the optimal solution faster and with less traffic overhead than the state-of-the-art algorithms.
In the future, we will devote ourselves to further accelerating the search process by arranging the search space with the inference results. In addition, we will also work on reducing the overheads caused by context-based utility propagation.
Acknowledgments
This work is funded by the Chongqing Research Program of Basic Research and Frontier Technology (No.:cstc2017jcyjAX0030), Fundamental Research Funds for the Central Universities (No.:2019CDXYJSJ0021) and Graduate Research and Innovation Foundation of Chongqing (No.:CYS17023).
References
 [1] (2008) A memory-bounded hybrid approach to distributed constraint optimization. In Proceedings of the 10th International Workshop on DCR, pp. 37-51.
 [2] (2010) Improving DPOP with function filtering. In Proceedings of the 9th AAMAS, pp. 141-148.
 [3] (2006) No-commitment branch and bound search for distributed constraint optimization. In Proceedings of the 5th AAMAS, pp. 1427-1429.
 [4] (2003) Constraint processing. Morgan Kaufmann.
 [5] (2019) PT-ISABB: a hybrid tree-based complete algorithm to solve asymmetric distributed constraint optimization problems. In Proceedings of the 18th AAMAS, pp. 1506-1514.
 [6] (2014) Agent-based decentralised coordination for sensor networks using the max-sum algorithm. Autonomous Agents and Multi-Agent Systems 28(3), pp. 337-380.
 [7] (2008) Decentralised coordination of low-power embedded devices using the max-sum algorithm. In Proceedings of the 7th AAMAS, Vol. 2, pp. 639-646.
 [8] (2018) Distributed constraint optimization problems and applications: a survey. Journal of Artificial Intelligence Research 61, pp. 623-698.
 [9] (2017) A distributed constraint optimization (DCOP) approach to the economic dispatch with demand response. In Proceedings of the 16th AAMAS, pp. 999-1007.
 [10] (2017) A multiagent system approach to scheduling devices in smart homes. In Proceedings of the 16th AAMAS, pp. 981-989.
 [11] (1985) Taking advantage of stable sets of variables in constraint satisfaction problems. In Proceedings of the 9th IJCAI, Vol. 85, pp. 1076-1078.
 [12] (2009) Asynchronous forward bounding for distributed COPs. Journal of Artificial Intelligence Research 34, pp. 61-88.
 [13] (2011) Generalizing ADOPT and BnB-ADOPT. In Proceedings of the 22nd IJCAI, pp. 554-559.
 [14] (1997) Distributed partial constraint satisfaction problem. In International Conference on Principles and Practice of Constraint Programming, pp. 222-236.
 [15] (2014) DJAO: a communication-constrained DCOP algorithm that combines features of ADOPT and Action-GDL. In Proceedings of the 28th AAAI, pp. 2680-2687.
 [16] (2017) Forward bounding on pseudo-trees for DCOPs and ADCOPs. Artificial Intelligence 252, pp. 83-99.
 [17] (2006) A family of graphical-game-based algorithms for distributed constraint optimization problems. In Coordination of Large-Scale Multiagent Systems, pp. 127-146.
 [18] (2004) Taking DCOP to the real world: efficient complete solutions for distributed multi-event scheduling. In Proceedings of the 3rd AAMAS, Vol. 1, pp. 310-317.
 [19] (2005) ADOPT: asynchronous distributed constraint optimization with quality guarantees. Artificial Intelligence 161(1-2), pp. 149-180.
 [20] (2012) Concurrent forward bounding for distributed constraint optimization problems. Artificial Intelligence 193, pp. 186-216.
 [21] (2017) DUCT: an upper confidence bound approach to distributed constraint optimization problems. ACM Transactions on Intelligent Systems and Technology 8(5), pp. 69.
 [22] (2005) A scalable method for multiagent constraint optimization. In Proceedings of the 19th IJCAI, pp. 266-271.
 [23] (2005) Approximations in distributed optimization. In International Conference on Principles and Practice of Constraint Programming, pp. 802-806.
 [24] (2006) O-DPOP: an algorithm for open/distributed constraint optimization. In Proceedings of the 21st AAAI, pp. 703-708.
 [25] (2007) MB-DPOP: a new memory-bounded algorithm for distributed optimization. In Proceedings of the 20th IJCAI, pp. 1452-1457.
 [26] (2009) Generalizing DPOP: Action-GDL, a new complete algorithm for DCOPs. In Proceedings of the 8th AAMAS, pp. 1239-1240.
 [27] (2010) BnB-ADOPT: an asynchronous branch-and-bound DCOP algorithm. Journal of Artificial Intelligence Research 38, pp. 85-133.
 [28] (2005) Distributed stochastic search and distributed breakout: properties, comparison and applications to constraint optimization problems in sensor networks. Artificial Intelligence 161(1-2), pp. 55-87.