HS-CAI: A Hybrid DCOP Algorithm via Combining Search with Context-based Inference

11/28/2019 · Dingding Chen et al. · Chongqing University, Nanyang Technological University, NetEase, Inc.

Search and inference are two main strategies for optimally solving Distributed Constraint Optimization Problems (DCOPs). Recently, several algorithms were proposed to combine their advantages. Unfortunately, such algorithms only use an approximated inference as a one-shot preprocessing phase to construct the initial lower bounds, which leads to inefficient pruning under a limited memory budget. On the other hand, iterative inference algorithms (e.g., MB-DPOP) perform a context-based complete inference for all possible contexts but suffer from tremendous traffic overheads. In this paper, (i) hybridizing search with context-based inference, we propose a complete algorithm for DCOPs, named HS-CAI, where the inference utilizes the contexts derived from the search process to establish tight lower bounds while the search uses such bounds for efficient pruning, thereby reducing the contexts the inference must consider. Furthermore, (ii) we introduce a context evaluation mechanism to select the context patterns for the inference so as to further reduce the overheads incurred by iterative inferences. Finally, (iii) we prove the correctness of our algorithm, and the experimental results demonstrate its superiority over the state-of-the-art.


Introduction

Distributed Constraint Optimization Problems (DCOPs) [14, 8] are an elegant model for representing Multi-Agent Systems (MAS) where agents coordinate with each other to optimize a global objective. Due to their ability to capture essential MAS aspects, DCOPs can formalize various real-world applications such as sensor networks [6], task scheduling [18, 10], smart grids [9] and so on.

Incomplete algorithms for DCOPs [28, 17, 7, 21] aim to rapidly find solutions at the cost of sacrificing optimality. On the contrary, complete algorithms guarantee the optimal solution and can generally be classified into inference-based and search-based algorithms. DPOP [22] and Action_GDL [26] are typical inference-based complete algorithms which employ a dynamic programming paradigm to solve DCOPs. However, they require a linear number of messages of exponential size with respect to the induced width. Accordingly, ODPOP [24] and MB-DPOP [25] were proposed to trade a larger message number for smaller memory consumption by iteratively propagating dimension-limited utilities with their corresponding contexts. That is, they iteratively perform a context-based inference to solve DCOPs optimally when the memory budget is limited.

Search-based complete algorithms like SBB [14], AFB [12], PT-FB [16], ADOPT [19] and its variants [27, 13] perform distributed backtracking search to exhaust the search space. Their messages are of linear size but exponential in number. Furthermore, these algorithms only use local knowledge to update the lower bounds, which has a trivial effect on pruning and makes them infeasible for solving large-scale problems. PT-ISABB [5], DJAO [15] and ADOPT-BDP [1] then attempted to hybridize search with inference, using an approximated inference to construct the initial lower bounds for the search process. More specifically, PT-ISABB and ADOPT-BDP use ADPOP [23] to establish the lower bounds, while DJAO employs a function filtering technique [2] to obtain them. Here, ADPOP is an approximate version of DPOP that drops exceeding dimensions to keep each message size below the memory limit. However, given a limited memory budget, the lower bounds obtained in a one-shot preprocessing phase are still inefficient for pruning since they cannot be tightened by taking the running contexts into account. That is, the existing hybrid algorithms use only a context-free approximated inference as a one-shot phase to construct the initial lower bounds.

In this paper, we investigate the possibility of combining search with context-based inference to solve DCOPs efficiently. Specifically, our main contributions are listed as follows:

  • We propose a novel complete DCOP algorithm by hybridizing search with context-based inference, called HS-CAI, where the search adopts a tree-based SBB to find the optimal solution and provides contexts for the inference, while the inference iteratively performs utility propagation for these contexts to construct tight lower bounds that speed up the search process.

  • We introduce a context evaluation mechanism that extracts context patterns for the inference from the contexts derived from the search process, so as to further reduce the number of context-based inferences.

  • We theoretically show the completeness of HS-CAI and prove that the lower bounds produced by the context-based inference are at least as tight as the ones established by the context-free approximated inference under the same memory budget. Moreover, the experimental results demonstrate that HS-CAI outperforms state-of-the-art complete DCOP algorithms.

(a) Constraint graph
(b) Constraint matrix
(c) Pseudo tree
Figure 1: An example of a DCOP and its pseudo tree

Background

In this section, we expound the preliminaries including DCOPs, pseudo trees, MB-DPOP and ODPOP.

Distributed Constraint Optimization Problems

A distributed constraint optimization problem [19] can be defined by a tuple $\langle A, X, D, F \rangle$ where

  • $A = \{a_1, \dots, a_n\}$ is a set of agents.

  • $X = \{x_1, \dots, x_m\}$ is a set of variables.

  • $D = \{D_1, \dots, D_m\}$ is a set of finite, discrete domains. Each variable $x_i$ takes a value in $D_i$.

  • $F$ is a set of constraint functions. Each function specifies the non-negative cost for each assignment combination of the variables it involves.

For the sake of simplicity, we assume that each agent holds exactly one variable (and thus the terms agent and variable can be used interchangeably) and all constraints are binary (i.e., $f_{ij}: D_i \times D_j \to \mathbb{R}_{\geq 0}$). A solution to a DCOP is an assignment to all the variables such that the total cost is minimized. That is,

$X^* = \operatorname*{arg\,min}_{X} \sum_{f_{ij} \in F} f_{ij}(x_i, x_j)$
A DCOP can be represented by a constraint graph where a vertex denotes a variable and an edge denotes a constraint. Fig. 1(a) presents a DCOP with five variables and seven constraints. For simplicity, the domain size of each variable is four and all constraints are identical, as shown in Fig. 1(b).
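To make the objective concrete, here is a minimal, centralized brute-force sketch (ours, not part of any DCOP algorithm in this paper) that evaluates the total cost of every assignment of a small binary DCOP; all names are illustrative.

```python
from itertools import product

def solve_brute_force(variables, domains, constraints):
    """Exhaustively find the assignment minimizing the total cost.

    variables:   list of variable names, e.g. ["x1", ..., "x5"]
    domains:     dict mapping each variable to its list of values
    constraints: dict mapping a pair (xi, xj) to a cost table
                 table[(di, dj)] -> non-negative cost
    """
    best_cost, best_assignment = float("inf"), None
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        cost = sum(table[(assignment[xi], assignment[xj])]
                   for (xi, xj), table in constraints.items())
        if cost < best_cost:
            best_cost, best_assignment = cost, assignment
    return best_assignment, best_cost

# Toy instance: two variables, one constraint preferring unequal values.
domains = {"x1": [0, 1], "x2": [0, 1]}
constraints = {("x1", "x2"): {(0, 0): 2, (0, 1): 0, (1, 0): 0, (1, 1): 2}}
print(solve_brute_force(["x1", "x2"], domains, constraints))
```

Of course, such enumeration is exponential in the number of variables, which is exactly why the distributed complete algorithms discussed in this paper rely on pruning and dynamic programming instead.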

Pseudo Tree

A depth-first search [11, 4] arrangement of a constraint graph is a pseudo tree with the property that different branches are independent; it categorizes the constraints into tree edges and pseudo edges (i.e., non-tree edges). Thus, the neighbors of an agent $a_i$ can be classified into its parent $P(a_i)$, children $C(a_i)$, pseudo parents $PP(a_i)$ and pseudo children $PC(a_i)$, based on their positions in the pseudo tree and the types of edges connecting them with $a_i$. For clarity, we denote all the (pseudo) parents of $a_i$ as $AP(a_i) = PP(a_i) \cup \{P(a_i)\}$, and the set of ancestors who share constraints with $a_i$ or its descendants as its separator $Sep_i$ [22]. Fig. 1(c) presents a possible pseudo tree derived from Fig. 1(a).
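The following sketch shows how such an arrangement can be obtained, assuming a connected constraint graph given as an adjacency map; a back edge from an agent to an ancestor on the current DFS path is recorded as a pseudo edge. The names are illustrative.

```python
def build_pseudo_tree(neighbors, root):
    """DFS arrangement of a constraint graph.

    neighbors: dict mapping each agent to an iterable of its neighbors.
    Returns parent pointers (tree edges) and pseudo parents (back edges
    from an agent to a higher ancestor it is constrained with).
    """
    parent = {root: None}
    pseudo_parents = {a: [] for a in neighbors}
    on_path, visited = [], set()

    def dfs(agent):
        visited.add(agent)
        on_path.append(agent)
        for nb in neighbors[agent]:
            if nb not in visited:
                parent[nb] = agent                 # tree edge
                dfs(nb)
            elif nb != parent.get(agent) and nb in on_path:
                pseudo_parents[agent].append(nb)   # pseudo (back) edge
        on_path.pop()

    dfs(root)
    return parent, pseudo_parents
```

Children and pseudo children can then be derived by inverting the parent and pseudo-parent maps.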

MB-DPOP and ODPOP

MB-DPOP and ODPOP apply an iterative context-based utility propagation to aggregate the optimal global utility. Specifically, MB-DPOP first uses the cycle-cuts idea [4] on a pseudo tree to determine cycle-cut variables and groups these variables into clusters. Within a cluster, MB-DPOP performs a bounded-memory exploration; everywhere else, the utility propagation from DPOP applies. That is, agents in each cluster propagate memory-bounded utilities for all the contexts of the cycle-cut variables. As for ODPOP, each agent propagates context-based utilities in an incremental, best-first fashion. Specifically, an agent repeatedly asks its children for their suggested context-based utilities until it can compute a suggested utility for its parent during the utility propagation phase. The phase finishes once the root agent has received enough utilities to determine the optimal global utility.

Proposed Method

In this section, we present a novel complete DCOP algorithm which interleaves search with context-based inference.

Motivation

Search can exploit bounds to prune the solution space, but the pruning efficiency is closely related to the tightness of those bounds. However, most search-based complete algorithms can only use local knowledge to compute the initial lower bounds, which leads to inefficient pruning. On the other hand, inference-based complete algorithms can aggregate the global utility promptly, but their memory consumption is exponential in the induced width. Therefore, it is natural to combine search and inference by performing memory-bounded inference to construct efficient lower bounds for the search. Unfortunately, the existing hybrid algorithms only perform an approximated inference to construct one-shot bounds in a preprocessing phase, which leads to inefficient pruning given a limited memory budget. In fact, the bounds can be tightened by a context-based inference. That is, instead of dropping a set of dimensions (named approximated dimensions) to stay below the memory budget, we explicitly consider the running-context assignments to a subset of the approximated dimensions (i.e., the context patterns) and compute tight bounds w.r.t. these context patterns. Here, we refer to such an assigned subset of the approximated dimensions as the decimated dimensions.

More specifically, we aim to combine the advantages of search and context-based inference to optimally solve DCOPs. Different from the existing hybrid methods, we compute tight bounds for the running contexts by performing the context-based inference for the context patterns chosen from the contexts derived from the search process.

Proposed Algorithm

Now, we present our proposed algorithm which consists of a preprocessing phase and a hybrid phase.

Preprocessing Phase

This phase performs a bottom-up dimension and utility propagation to accumulate the approximated dimensions and to establish, from the propagated utilities, the initial lower bounds for the search. Accordingly, we employ a tailored version of ADPOP with the dimension limit $k$ to propagate the approximated dimensions and incomplete utilities (we omit the pseudo code of this phase due to the limited space). In particular, during the propagation process, each agent $a_i$ selects a set $selD_i$ containing the dimensions of its highest ancestors to drop, so as to keep the dimension size of each outgoing utility below $k$. Then, the approximated dimensions $apxD_i$ (i.e., the dimensions approximated by $a_i$ and its descendants) and the utility $util_i$ sent from $a_i$ to its parent are computed as follows:

$apxD_i = \big(\bigcup_{a_c \in C(a_i)} apxD_c\big) \cup selD_i$   (1)
$util_i = \min_{\{x_i\} \cup selD_i} \big( local_i + \sum_{a_c \in C(a_i)} util_c \big)$   (2)

Here, $apxD_c$ and $util_c$ are the approximated dimensions and the utility received from child $a_c \in C(a_i)$, respectively, and $local_i$ denotes the combination of the constraints between $a_i$ and its (pseudo) parents, i.e.,

$local_i = \sum_{a_j \in AP(a_i)} f_{ij}$   (3)

Taking Fig. 1(c) for example, if we set , the dimensions dropped by are . Thus, the approximated dimensions and utility sent from to are and , respectively.
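As a concrete illustration of Eqs. (1)-(3), the following is a minimal sketch (ours, not the authors' implementation) of the bounded utility propagation, assuming utilities are numpy arrays with one axis per dimension and that `depth` maps each dimension to its level in the pseudo tree; the function names are illustrative.

```python
import numpy as np

def join(u1, dims1, u2, dims2):
    """Add two utility tables whose axes are labeled by variable names."""
    all_dims = list(dims1) + [d for d in dims2 if d not in dims1]

    def expand(u, dims):
        # Permute the existing axes into all_dims order, then insert
        # singleton axes for the missing dimensions so addition broadcasts.
        order = [dims.index(d) for d in all_dims if d in dims]
        shape = [u.shape[dims.index(d)] if d in dims else 1 for d in all_dims]
        return np.transpose(u, order).reshape(shape)

    return expand(u1, dims1) + expand(u2, dims2), all_dims

def drop_to_limit(util, dims, depth, k):
    """ADPOP-style approximation: min out the dimensions of the highest
    ancestors until at most k dimensions remain."""
    dims, dropped = list(dims), []
    while len(dims) > k:
        highest = min(dims, key=lambda d: depth[d])  # smallest depth = highest
        util = util.min(axis=dims.index(highest))
        dims.remove(highest)
        dropped.append(highest)
    return util, dims, dropped
```

An agent would join $local_i$ with the utilities received from its children via `join` and then apply `drop_to_limit` before sending the result to its parent.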

Hybrid Phase

This phase consists of a search part and a context-based inference part. The search part uses a variant of SBB on a pseudo tree (i.e., a simplified version of NCBB [3]) to expand feasible partial assignments and provides contexts for the inference part. Using such contexts, the inference part iteratively propagates context-based utilities to produce tight lower bounds for the search part.

Traditionally, context-based inference is implemented by considering the assignments to all the approximated dimensions (that is, the decimated dimensions equal the approximated dimensions). For example, MB-DPOP performs an iterative context-based inference by considering each assignment combination of the cycle-cut variables. However, this approach is not a good choice for our case, for the following reasons. First, the number of assignment combinations of the cycle-cut variables is exponential, which would incur unacceptable traffic overheads. Moreover, the utilities propagated for a specific combination may quickly go out of date, since another inference is required as soon as the assignments change, which is very common during search. Therefore, it is unnecessary to perform an inference for each assignment combination of the approximated dimensions.

Instead, we reduce the number of context-based inferences by making the propagated context-based utilities compatible with more contexts. Specifically, for agent $a_i$, we consider the decimated dimensions $decD_i \subseteq apxD_i$. Then, the specific assignments to $decD_i$ serve as a context pattern, and the propagated utilities cover all the partial assignments with the same context pattern. That is, $a_i$'s context pattern can be defined by:

$pat_i = \{(x_j, d_j) \in Cpa_i \mid x_j \in decD_i\}$   (4)

where $Cpa_i$ refers to $a_i$'s received current partial assignment, and $d_j$ is the current assignment of $x_j$.

Taking an agent in Fig. 1(c) as an example, given the limit $k$, only the assignments to the decimated dimensions are considered, while the remaining approximated dimensions are still dropped during the context-based inference part. If the resulting pattern covers four contexts, then inference only needs to be performed once for that pattern rather than once for each of the four contexts.
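A minimal sketch of Eq. (4) and of the induced compatibility test, assuming a partial assignment is a dict from variables to values; names are illustrative.

```python
def context_pattern(cpa, decimated_dims):
    """Eq. (4): project the current partial assignment onto the decimated dimensions."""
    return {x: v for x, v in cpa.items() if x in decimated_dims}

def compatible(pattern, cpa):
    """A pattern covers a context iff they agree on every dimension in the pattern."""
    return all(cpa.get(x) == v for x, v in pattern.items())
```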

Selecting a context pattern is challenging as it involves a trade-off between the tightness of the lower bounds and the number of compatible partial assignments. A good context pattern should therefore comply with the following requirements. First, it should be compatible with many contexts so as to avoid unnecessary context-based inferences; in other words, its assignments should be unlikely to change in the short term. Second, it should result in tight lower bounds. Accordingly, we propose a context evaluation mechanism that selects the context pattern according to the frequency of the assignment of each dimension in the approximated dimensions. In more detail, the context pattern consists of the assignments whose frequency is greater than a specified threshold $t$; that is, an assignment $(x_j, d_j)$ is included in the pattern if $freq(x_j, d_j) > t$, where $freq(x_j, d_j)$ refers to the frequency of assignment $d_j$ for $x_j$. With an appropriate $t$, the assignments in the context pattern are unlikely to change in the short term. On the other hand, if a partial assignment is hard to prune, more of its assignments will be included in the context pattern, which guarantees the lower bound tightness.

In addition, we introduce a flag variable to ensure that the descendants of an agent perform an inference for only one context pattern at a time. Once an agent has found or received a context pattern, the flag is set to false to stop the context evaluation. And when the context pattern becomes incompatible with the current partial assignment, the flag is set to true to indicate that a new context pattern can be found.
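The following sketch puts the counters and the flag together, under our reading that a dimension's counter tracks how many consecutive CPA messages kept its assignment unchanged, and that an assignment joins the pattern once its counter exceeds the threshold $t$; the class and its names are illustrative.

```python
class ContextEvaluator:
    """Tracks assignment stability and proposes a context pattern."""

    def __init__(self, approx_dims, threshold):
        self.approx_dims = approx_dims
        self.threshold = threshold
        self.counters = {x: 0 for x in approx_dims}
        self.last_seen = {}
        self.active = True  # corresponds to the flag variable in the text

    def update(self, cpa):
        """Reset a counter when its assignment changes, else increase it."""
        for x in self.approx_dims:
            if x in cpa:
                if self.last_seen.get(x) == cpa[x]:
                    self.counters[x] += 1
                else:
                    self.counters[x] = 0
                self.last_seen[x] = cpa[x]

    def find_pattern(self, cpa):
        """Pattern = stable assignments whose counters exceed the threshold."""
        if not self.active:
            return None
        pattern = {x: cpa[x] for x in self.approx_dims
                   if x in cpa and self.counters[x] > self.threshold}
        if pattern:
            self.active = False  # stop evaluating until the pattern expires
        return pattern or None
```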

Next, we detail how to implement the context evaluation mechanism and the context-based inference part. Algorithm 1 presents a sketch of these procedures. We omit the search part since it is analogous to the tree-based branch-and-bound search algorithm PT-ISABB.

After the preprocessing phase (line 1), the root agent starts the context evaluation by initializing the flag to true (lines 2-3). Besides, it also starts the search part via CPA messages carrying its first assignment to its children (lines 4-5). Upon receipt of a CPA message, an agent $a_i$ first holds the previous partial assignment, and stores the received assignment and flag (line 6). Afterwards, it initializes the lower bounds for each child according to its received utilities (lines 7, 23-27). Concretely, the lower bound for a child is established from the context-based utility received from that child if it is compatible with the current partial assignment (lines 24-25). Otherwise, the bound is computed from the utility received from that child in the preprocessing phase (lines 26-27). Next, $a_i$ updates its frequency counters based on the received partial assignment and the previous one (lines 8, 28-32). Specifically, for each dimension in the approximated dimensions, $a_i$ clears its counter if its assignment differs from the previous one (lines 29-30); otherwise, it increases that counter (lines 31-32). Then, $a_i$ tries to find a context pattern if the pattern for the context-based inference has not yet been determined (lines 9-10). After finding one, it allocates the pattern and sends the allocated patterns via CTXT messages to the children who need to execute the context-based inference (lines 11-15, 33-35). Here, the pattern allocated for a child is the partition of $a_i$'s pattern determined by that child's approximated dimensions (line 33).

When receiving a CTXT message, $a_i$ allocates the received pattern if any child needs to perform the context-based inference (lines 17, 33-35). Otherwise, it sends its context-based utility to its parent (lines 18-19, 36-40). Here, the context-based utility is computed by the following steps. First, $a_i$ joins its local utility with the context-based utilities from the children performing the inference and the preprocessing utilities from the other children (line 36). Next, it applies the context pattern to fix the values of the corresponding dimensions so as to improve the completeness of the utility (line 38). Finally, it drops dimensions of the utility to stay below the limit $k$ (line 39). After all the context-based utilities from its children have arrived, $a_i$ sends its context-based utility to its parent if it is not the starter of the context-based inference (lines 21-22).
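As a sketch of the lower-bound initialization described above (lines 23-27), assuming the numpy utility representation from the earlier sketches and that assignment values are domain indices; the helper names are ours.

```python
def compatible(pattern, cpa):
    """A cached pattern can be reused only if it agrees with the current Cpa."""
    return all(cpa.get(x) == v for x, v in pattern.items())

def init_lower_bound(child, cpa, ctxt_utils, pre_utils, patterns):
    """Prefer the compatible context-based utility, else fall back to the
    preprocessing utility (our reading of lines 23-27 of Algorithm 1)."""
    pattern = patterns.get(child)
    if pattern is not None and compatible(pattern, cpa):
        util, dims = ctxt_utils[child]   # tight, context-based bound
    else:
        util, dims = pre_utils[child]    # fallback: preprocessing bound
    # Condition the utility on the dimensions already assigned in the Cpa,
    # then take the minimum over the rest to obtain a lower bound.
    for x in [d for d in dims if d in cpa]:
        axis = dims.index(x)
        util = util.take(cpa[x], axis=axis)
        dims = dims[:axis] + dims[axis + 1:]
    return util.min() if dims else float(util)
```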

Considering in Fig. 1(c), assume that the context pattern has not been determined and . Given , we have and after receives a CPA with . Since the approximated dimensions for its child are , sends a CTXT message with the context pattern to . When receiving the pattern , sends the context-based utility to . Then, uses to compute the lower bound for after receiving the CPA message with or .

Theoretical Results

In this section, we first prove the effectiveness of the context-based inference on HS-CAI, and further establish the completeness of HS-CAI. Finally, we give the complexity analysis of the proposed method.

Lower Bound Tightness

Lemma 1.

For a given partial assignment, the lower bound that an agent holds for a child produced from the context-based utility is at least as tight as (i.e., no less than) the one established from the utility of the preprocessing phase.

Proof.

Directly from the pseudo code, the set of dimensions dropped by an agent in the context-based inference part (line 37) equals the set it dropped in the preprocessing phase minus the dimensions fixed by the context pattern of its context-based inference. Since the agent has received the context pattern from its parent (lines 34-35, 20-21), the set of dimensions dropped in the inference part is a subset of the set dropped in the preprocessing phase.

Next, we will prove Lemma 1 by induction.

Base case. ’s children are leaf agents. For each child , we have

where is the assignment of in . The equation in the third to the fourth step holds since . Thus, we have proved the basis.

Inductive hypothesis. Assume that the lemma holds for all ’s children. Next, we are going to show the lemma holds for as well. For each , we have

where are ’s children who need to perform the context-based inference. Thus, Lemma 1 is proved. ∎

Correctness

Lemma 2.

Given the optimal solution, for each agent, the cost incurred by the sub-tree rooted at that agent under the optimal solution is no less than the corresponding lower bound.

Proof.

Since Lemma 1 shows that the lower bounds constructed by the context-based inference part are at least as tight as the ones established by the preprocessing phase, it suffices to show that the cost under the optimal solution is no less than the lower bound produced by the context-based inference.

Next, we will prove Lemma 2 by induction as well.

Base case. ’s children are leaf agents. For each child , we have

where is the assignment of in , and is the dimensions dropped by in the context-based inference part. Thus, the basis is proved.

Inductive hypothesis. Assume the lemma holds for all . Next, we will prove the lemma also holds for . For each child , we have

Thus, the lemma is proved. ∎

Theorem 1.

HS-CAI is complete.

Proof.

Immediately from Lemma 2, the optimal solution will not be pruned in HS-CAI. Furthermore, it has been shown for PT-ISABB [5] that each agent will not receive two identical partial assignments in the search part, and the termination of HS-CAI relies on its search part. Thus, HS-CAI is complete. ∎

Complexity

When performing a context-based inference, an agent needs to store the context-based utilities and the preprocessing utilities received from all its children. Thus, the overall space complexity is $O(d^k)$, where $d = \max_{x_i \in X} |D_i|$ and $k$ is the maximum dimension limit. Since a CTXTUTIL message only contains a context-based utility, its size is $O(d^k)$ as well. A CPA message is composed of the assignment of each agent and a context evaluation flag, so its size is $O(|A|)$. Other messages like CTXT only carry an assignment combination of the approximated dimensions and thus require only $O(|A|)$ space.

The preprocessing phase in HS-CAI only requires a linear number of messages, since a single bottom-up utility propagation is performed. For the hybrid phase, the message number of the search part grows exponentially with the agent number, the same as in search-based complete algorithms, while the message number of the context-based inference part is proportional to the number of context patterns selected by the context evaluation mechanism.

Figure 2: Network load of varying $t$ on different densities
Figure 3: Performance comparison under different numbers of agents on the sparse configuration: (a) Message Number, (b) Network Load, (c) NCLOs
Figure 4: Performance comparison under different numbers of agents on the dense configuration: (a) Message Number, (b) Network Load, (c) NCLOs

Empirical Evaluation

In this section, we first investigate the effect of the threshold $t$ of the context evaluation mechanism on HS-CAI. Then, we present experimental comparisons of HS-CAI with state-of-the-art complete DCOP algorithms.

Configuration and Metrics

We empirically evaluate the performance of HS-CAI and state-of-the-art complete DCOP algorithms including PT-FB, DPOP and MB-DPOP on random DCOPs. Besides, we consider HS-CAI without the context-based inference part as HS-AI, and HS-CAI without the context evaluation mechanism as HS-CAI(-M). Here, HS-AI is actually a variant of PT-ISABB in DCOP settings. All evaluated algorithms are implemented in DCOPSovler (https://github.com/czy920/DCOPSovler), a DCOP simulator we developed. The threshold $t$ in HS-CAI is set in relation to both the agent number and the height of the pseudo tree. Moreover, we choose a low and a high memory budget ($k$) for MB-DPOP, HS-AI, HS-CAI(-M) and HS-CAI. In our experiments, we use the message number and the network load (i.e., the size of the total information exchanged) to measure the communication overheads, and NCLOs [20] to measure the hardware-independent runtime, where the logical operations in the inference and the search are accesses to utilities and constraint checks, respectively. For each experiment, we generate 50 random instances and report the average over all instances.

Parameter Tuning

First, we examine the effect of different $t$ on the performance of HS-CAI to verify the effectiveness of the context evaluation mechanism. Specifically, we consider DCOPs with 22 agents and a domain size of 3. The graph density varies from 0.2 to 0.6 and $t$ varies from 0.05 to 0.65. We do not show the results for $t$ greater than 0.65 since larger values lead to exactly the same results as 0.65. Fig. 2 presents the network load of HS-CAI(-M) and HS-CAI with different $t$. The average induced widths in this experiment are 8-16. It can be seen from the figure that HS-CAI requires much less network load than HS-CAI(-M). That is because HS-CAI performs inference only for the context patterns selected by the context evaluation mechanism, rather than for all the contexts as HS-CAI(-M) does.

Besides, given a memory budget limit, it can be observed that the network load of HS-CAI does not decrease monotonically as $t$ increases. This is due to the fact that increasing $t$ shrinks the context patterns, which decreases the number of context-based inferences but also loosens the lower bounds to some degree. Exactly as mentioned above, the context pattern selection offers a trade-off between the tightness of the lower bounds and the number of compatible partial assignments. Moreover, it can be seen that the best value of $t$ is close to 0.25 for HS-CAI with the low memory budget, while the one for HS-CAI with the high memory budget is near 0.45. Thus, we set $t$ to 0.25 and 0.45, respectively, in the following comparison experiments.

Performance Comparisons

Fig. 3 gives the experimental results under different agent numbers on the sparse configuration, where we set the graph density to 0.25 and the domain size to 3, and vary the agent number from 22 to 32. Here, the average induced widths are 9-17. It can be seen from Fig. 3(a) and (b) that although the hybrid complete algorithms (e.g., HS-AI and HS-CAI) and PT-FB all use the search strategy to find the optimal solution, HS-AI and HS-CAI are superior to PT-FB in terms of the network load and message number. This is because the lower bounds in PT-FB, which consider only the constraints related to the assigned agents, cannot produce effective pruning. Also, given a fixed $k$, HS-CAI requires fewer messages than HS-AI since the lower bounds produced by the context-based inference are tighter than the ones established by the context-free approximated inference. Besides, it can be seen that HS-CAI with the low memory budget can solve larger problems than HS-AI with the same budget and than inference-based complete algorithms like DPOP and MB-DPOP, which demonstrates the superiority of hybridizing search with context-based inference when the memory budget is relatively low.

Although inference requires larger messages than search, it can be observed from Fig. 3(b) that HS-CAI incurs less network load than HS-AI, which indicates that HS-CAI finds the optimal solution with fewer messages owing to the effective pruning and the context evaluation mechanism. Moreover, we can see from Fig. 3(c) that when the agent number reaches 28, HS-CAI with the low memory budget requires fewer NCLOs than HS-AI with the same budget, in spite of the computation overheads incurred by the iterative inferences. This is because HS-CAI can provide tight lower bounds to speed up the search and thereby greatly reduce the constraint checks when solving large-scale problems under a limited memory budget.

Besides, we consider DCOPs with a domain size of 3 and a graph density of 0.6 as the dense configuration. The agent number varies from 14 to 24 and the average induced widths are 8-18. Fig. 4 presents the performance comparison. It can be seen from Fig. 4(a) that DPOP and MB-DPOP cannot solve problems with more than 20 agents due to the large induced widths. Furthermore, since the inference-based complete algorithms have to perform inference over the entire solution space, they require many more NCLOs than the other competitors, as Fig. 4(c) shows. Additionally, although both perform context-based inference, it can be seen from Fig. 4(b) and (c) that HS-CAI exhibits great superiority over MB-DPOP in terms of the network load and NCLOs. That is because HS-CAI only performs inference for the context patterns extracted by the context evaluation mechanism, while MB-DPOP iteratively performs inference for all the contexts of the cycle-cut variables. As for HS-CAI with different $k$, it can be seen from Fig. 4(a) and (c) that HS-CAI with the high memory budget requires fewer messages but more NCLOs than with the low budget. That is because the high budget produces tighter lower bounds but incurs more computation overheads.

Conclusion

By analyzing the feasibility of hybridizing search and inference, we propose a complete DCOP algorithm named HS-CAI, which combines search with context-based inference for the first time. Different from the existing hybrid complete algorithms, HS-CAI constructs tight lower bounds to speed up the search by executing context-based inference iteratively. Meanwhile, by means of a context evaluation mechanism, HS-CAI only needs to perform inference for a part of the contexts obtained from the search process, which reduces the huge traffic overheads incurred by iterative context-based inferences. We theoretically prove that the context-based inference produces lower bounds at least as tight as those of the context-free approximated inference under the same memory budget. Moreover, the experimental results show that HS-CAI finds the optimal solution faster and with lower traffic overheads than the state-of-the-art.

In the future, we will work on further accelerating the search process by arranging the search space according to the inference results. In addition, we will also work on reducing the overheads caused by the context-based utility propagation.

Acknowledgments

This work is funded by the Chongqing Research Program of Basic Research and Frontier Technology (No.:cstc2017jcyjAX0030), Fundamental Research Funds for the Central Universities (No.:2019CDXYJSJ0021) and Graduate Research and Innovation Foundation of Chongqing (No.:CYS17023).

References

  • [1] J. Atlas, M. Warner, and K. Decker (2008) A memory bounded hybrid approach to distributed constraint optimization. In Proceedings of the 10th International Workshop on DCR, pp. 37–51.
  • [2] I. Brito and P. Meseguer (2010) Improving DPOP with function filtering. In Proceedings of the 9th AAMAS, pp. 141–148.
  • [3] A. Chechetka and K. Sycara (2006) No-commitment branch and bound search for distributed constraint optimization. In Proceedings of the 5th AAMAS, pp. 1427–1429.
  • [4] R. Dechter, D. Cohen, et al. (2003) Constraint processing. Morgan Kaufmann.
  • [5] Y. Deng, Z. Chen, D. Chen, X. Jiang, and Q. Li (2019) PT-ISABB: a hybrid tree-based complete algorithm to solve asymmetric distributed constraint optimization problems. In Proceedings of the 18th AAMAS, pp. 1506–1514.
  • [6] A. Farinelli, A. Rogers, and N. R. Jennings (2014) Agent-based decentralised coordination for sensor networks using the max-sum algorithm. Autonomous Agents and Multi-Agent Systems 28(3), pp. 337–380.
  • [7] A. Farinelli, A. Rogers, A. Petcu, and N. R. Jennings (2008) Decentralised coordination of low-power embedded devices using the max-sum algorithm. In Proceedings of the 7th AAMAS, Vol. 2, pp. 639–646.
  • [8] F. Fioretto, E. Pontelli, and W. Yeoh (2018) Distributed constraint optimization problems and applications: a survey. Journal of Artificial Intelligence Research 61, pp. 623–698.
  • [9] F. Fioretto, W. Yeoh, E. Pontelli, Y. Ma, and S. J. Ranade (2017) A distributed constraint optimization (DCOP) approach to the economic dispatch with demand response. In Proceedings of the 16th AAMAS, pp. 999–1007.
  • [10] F. Fioretto, W. Yeoh, and E. Pontelli (2017) A multiagent system approach to scheduling devices in smart homes. In Proceedings of the 16th AAMAS, pp. 981–989.
  • [11] E. C. Freuder and M. J. Quinn (1985) Taking advantage of stable sets of variables in constraint satisfaction problems. In Proceedings of the 9th IJCAI, Vol. 85, pp. 1076–1078.
  • [12] A. Gershman, A. Meisels, and R. Zivan (2009) Asynchronous forward bounding for distributed COPs. Journal of Artificial Intelligence Research 34, pp. 61–88.
  • [13] P. Gutierrez, P. Meseguer, and W. Yeoh (2011) Generalizing ADOPT and BnB-ADOPT. In Proceedings of the 22nd IJCAI, pp. 554–559.
  • [14] K. Hirayama and M. Yokoo (1997) Distributed partial constraint satisfaction problem. In International Conference on Principles and Practice of Constraint Programming, pp. 222–236.
  • [15] Y. Kim and V. Lesser (2014) DJAO: a communication-constrained DCOP algorithm that combines features of ADOPT and Action-GDL. In Proceedings of the 28th AAAI, pp. 2680–2687.
  • [16] O. Litov and A. Meisels (2017) Forward bounding on pseudo-trees for DCOPs and ADCOPs. Artificial Intelligence 252, pp. 83–99.
  • [17] R. T. Maheswaran, J. P. Pearce, and M. Tambe (2006) A family of graphical-game-based algorithms for distributed constraint optimization problems. In Coordination of Large-Scale Multiagent Systems, pp. 127–146.
  • [18] R. T. Maheswaran, M. Tambe, E. Bowring, J. P. Pearce, and P. Varakantham (2004) Taking DCOP to the real world: efficient complete solutions for distributed multi-event scheduling. In Proceedings of the 3rd AAMAS, Vol. 1, pp. 310–317.
  • [19] P. J. Modi, W. Shen, M. Tambe, and M. Yokoo (2005) ADOPT: asynchronous distributed constraint optimization with quality guarantees. Artificial Intelligence 161(1-2), pp. 149–180.
  • [20] A. Netzer, A. Grubshtein, and A. Meisels (2012) Concurrent forward bounding for distributed constraint optimization problems. Artificial Intelligence 193, pp. 186–216.
  • [21] B. Ottens, C. Dimitrakakis, and B. Faltings (2017) DUCT: an upper confidence bound approach to distributed constraint optimization problems. ACM Transactions on Intelligent Systems and Technology 8(5), pp. 69.
  • [22] A. Petcu and B. Faltings (2005) A scalable method for multiagent constraint optimization. In Proceedings of the 19th IJCAI, pp. 266–271.
  • [23] A. Petcu and B. Faltings (2005) Approximations in distributed optimization. In International Conference on Principles and Practice of Constraint Programming, pp. 802–806.
  • [24] A. Petcu and B. Faltings (2006) ODPOP: an algorithm for open/distributed constraint optimization. In Proceedings of the 21st AAAI, pp. 703–708.
  • [25] A. Petcu and B. Faltings (2007) MB-DPOP: a new memory-bounded algorithm for distributed optimization. In Proceedings of the 20th IJCAI, pp. 1452–1457.
  • [26] M. Vinyals, J. A. Rodriguez-Aguilar, and J. Cerquides (2009) Generalizing DPOP: Action-GDL, a new complete algorithm for DCOPs. In Proceedings of the 8th AAMAS, pp. 1239–1240.
  • [27] W. Yeoh, A. Felner, and S. Koenig (2010) BnB-ADOPT: an asynchronous branch-and-bound DCOP algorithm. Journal of Artificial Intelligence Research 38, pp. 85–133.
  • [28] W. Zhang, G. Wang, Z. Xing, and L. Wittenburg (2005) Distributed stochastic search and distributed breakout: properties, comparison and applications to constraint optimization problems in sensor networks. Artificial Intelligence 161(1-2), pp. 55–87.