PT-ISABB: A Hybrid Tree-based Complete Algorithm to Solve Asymmetric Distributed Constraint Optimization Problems

02/16/2019 ∙ by Yanchen Deng, et al. ∙ Chongqing University 0

Asymmetric Distributed Constraint Optimization Problems (ADCOPs) have emerged as an important formalism in the multi-agent community due to their ability to capture personal preferences. However, the existing search-based complete algorithms for ADCOPs can only use local knowledge to compute lower bounds, which leads to inefficient pruning and prohibits them from solving large-scale problems. On the other hand, inference-based complete algorithms (e.g., DPOP) for Distributed Constraint Optimization Problems (DCOPs) require only a linear number of messages, but they cannot be directly applied to ADCOPs due to a privacy concern. Therefore, in this paper, we consider the possibility of combining inference and search to effectively solve ADCOPs at an acceptable loss of privacy. Specifically, we propose a hybrid complete algorithm called PT-ISABB, which uses a tailored inference algorithm to provide tight lower bounds and a tree-based complete search algorithm to exhaust the search space. We prove the correctness of our algorithm, and the experimental results demonstrate its superiority over other state-of-the-art complete algorithms.


1. Introduction

Distributed Constraint Optimization Problems (DCOPs) Yeoh and Yokoo (2012) are a fundamental framework in multi-agent systems where agents cooperate with each other to optimize a global objective. DCOPs have been successfully deployed in many real world applications including smart grids Fioretto et al. (2017), radio frequency allocation Monteiro et al. (2012), task scheduling Sultanik et al. (2007), etc.

Algorithms for DCOPs can generally be classified into two categories, i.e., complete algorithms and incomplete algorithms. Search-based complete algorithms like SBB

Hirayama and Yokoo (1997), AFB Gershman et al. (2009), ConFB Netzer et al. (2012), ADOPT Modi et al. (2005) and its variants Yeoh et al. (2010); Gutierrez et al. (2011) perform distributed searches to exhaust the search space, while inference-based complete algorithms including Action-GDL Vinyals et al. (2009), DPOP Petcu and Faltings (2005b) and its variants Petcu and Faltings (2006, 2007) use dynamic programming to optimally solve problems. In contrast, incomplete algorithms including local search Zhang et al. (2005); Maheswaran et al. (2004a); Okamoto et al. (2016), GDL-based algorithms Farinelli et al. (2008); Rogers et al. (2011); Zivan and Peled (2012); Chen et al. (2017) and sampling-based algorithms Ottens et al. (2017); Fioretto et al. (2016) trade optimality for small computational efforts.

Asymmetric Distributed Constraint Optimization Problems (ADCOPs) Grinshpoun et al. (2013) are a notable extension to DCOPs, which can capture ubiquitous asymmetric structures in real-world scenarios Maheswaran et al. (2004b); Ramchurn et al. (2011); Burke et al. (2007). That is, a constraint in an ADCOP explicitly defines the exact payoff for each participant instead of assuming equal payoffs for constrained agents. Solving ADCOPs is more challenging since algorithms must evaluate and aggregate the payoff for each participant of a constraint. ATWB and SABB Grinshpoun et al. (2013) are asymmetric versions of AFB and SBB based on a one-phase strategy in which the algorithms systematically check each side of the constraints before reaching a full assignment. Besides, AsymPT-FB Litov and Meisels (2017) is another search-based complete algorithm for ADCOPs, which implements a variation of forward bounding on a pseudo tree. However, to the best of our knowledge, there is no asymmetric adaptation of inference-based complete algorithms for DCOPs (e.g., DPOP). That is partially because these algorithms require the total knowledge of each constraint to perform variable elimination optimally. In other words, parent agents must surrender their private constraints to eliminate their children's variables, which is unacceptable in an asymmetric scenario.

In this paper, we investigate the possibility of combining both inference and search to efficiently solve ADCOPs at an acceptable loss of privacy. Specifically, our main contributions are listed as follows.

  • We propose a hybrid tree-based complete algorithm for ADCOPs, called PT-ISABB. (The source code is available at https://github.com/czy920/DCOPSovlerAlgorithm_PTISABB.) The algorithm first uses a tailored version of ADPOP Petcu and Faltings (2005a) to solve a subset of constraints, and the inference results stored in agents serve as look-up tables for tight lower bounds. Then, a variant of SABB is implemented on a pseudo tree to guarantee optimality.

  • We theoretically show the completeness of our proposed algorithm. Moreover, we also prove that the lower bounds in PT-ISABB are at least as tight as the ones in AsymPT-FB when its maximal dimension limit .

  • We empirically evaluate our algorithm on various benchmarks. Our study shows that PT-ISABB requires significantly fewer messages and lower NCLOs than state-of-the-art search-based complete algorithms including AsymPT-FB. The experimental results also indicate that our proposed algorithm leaks less privacy than AsymPT-FB when solving complex problems.

2. Background

In this section, we review preliminaries including ADCOPs, pseudo trees, DPOP and ADPOP.

2.1. Asymmetric Distributed Constraint Optimization Problems

An asymmetric distributed constraint optimization problem can be defined by a tuple in which

  • is a set of agents

  • is a set of variables

  • is a set of finite and discrete domains. Each variable takes a value from

  • is a set of constraints. Each constraint defines a set of non-negative costs for every possible value combination of the set of variables it is involved in

For the sake of simplicity, we assume that each agent controls exactly one variable and all constraints are binary; therefore, the terms agent and variable can be used interchangeably. Besides, for the constraint between and , we denote the private cost functions for and as and , respectively. Note that in the asymmetric setting, the two sides' costs are not necessarily equal. A solution to an ADCOP is the assignment to all the variables with the minimal aggregated cost. An ADCOP can be visualized by a constraint graph in which the vertices denote the variables and the edges denote the constraints between agents. Fig. 1 (a) visualizes an ADCOP with four agents and four constraints.
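Since the paper's symbols were lost in extraction, the asymmetric cost structure can be illustrated with a minimal, hypothetical sketch (all names, domains, and cost values are illustrative, not from the paper): each side of a binary constraint holds its own private cost table, and a solution minimizes the sum over both sides.

```python
# A minimal sketch of an ADCOP with one binary asymmetric constraint.
from itertools import product

domains = {"x1": [0, 1], "x2": [0, 1]}

# For an edge (i, j), each side holds its own private cost function:
# f12[(v1, v2)] is x1's cost, f21[(v2, v1)] is x2's cost; in general they differ.
f12 = {(0, 0): 2, (0, 1): 5, (1, 0): 1, (1, 1): 3}   # x1's private side
f21 = {(0, 0): 4, (0, 1): 0, (1, 0): 2, (1, 1): 6}   # x2's private side

def solution_cost(assignment):
    """Aggregate both participants' private costs for the constraint."""
    v1, v2 = assignment["x1"], assignment["x2"]
    return f12[(v1, v2)] + f21[(v2, v1)]

# Exhaustive search over all full assignments (fine at toy scale).
best = min(
    (dict(zip(domains, vals)) for vals in product(*domains.values())),
    key=solution_cost,
)
```

Note how the optimum depends on both private tables: neither agent can determine it from its own side alone, which is exactly what makes ADCOPs harder than DCOPs.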

Figure 1. An example of a constraint graph and pseudo tree

2.2. Pseudo Tree

A pseudo tree is an ordered arrangement of a constraint graph in which agents in different branches are independent, so search can be performed in parallel on these independent branches. A pseudo tree can be generated by a depth-first traversal of the constraint graph, which categorizes the constraints into tree edges and pseudo edges. For an agent , we denote its parent as , i.e., the ancestor connected to through a tree edge; its pseudo parents as , i.e., the set of ancestors connected to through pseudo edges; and its children and pseudo children as and , i.e., the sets of descendants connected to via tree edges and pseudo edges, respectively. For the sake of clarity, we denote all the parents of as . We also denote its separator, i.e., the set of ancestors that are constrained with or its descendants, as Petcu and Faltings (2006). Fig. 1 (b) gives a possible pseudo tree derived from Fig. 1 (a).
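The depth-first construction described above can be sketched as follows; the example graph, the root choice, and the adjacency-list representation are illustrative assumptions, not the paper's exact figure. In an undirected DFS, every non-tree edge connects a node to one of its ancestors, which is what makes the tree/pseudo edge classification well defined.

```python
# Build a pseudo tree from a constraint graph (adjacency lists) by DFS.
def build_pseudo_tree(graph, root):
    parent = {root: None}                      # tree-edge parent of each agent
    pseudo_parents = {v: set() for v in graph} # ancestors reached via pseudo edges
    visited, on_stack = set(), set()           # on_stack tracks current ancestors

    def dfs(v):
        visited.add(v)
        on_stack.add(v)
        for u in graph[v]:
            if u not in visited:
                parent[u] = v                  # tree edge v -> u
                dfs(u)
            elif u != parent[v] and u in on_stack:
                pseudo_parents[v].add(u)       # back edge to an ancestor
        on_stack.remove(v)

    dfs(root)
    return parent, pseudo_parents

# A 4-agent, 4-constraint example (hypothetical, in the spirit of Fig. 1):
graph = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2], 4: [2]}
parent, pseudo_parents = build_pseudo_tree(graph, root=1)
# edge (1, 3) becomes a pseudo edge; all other edges are tree edges
```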

2.3. DPOP and ADPOP

DPOP is an important inference-based algorithm that performs dynamic programming on a pseudo tree, starting with a phase of utility propagation. In this phase, each agent joins the utilities received from its children with its local utility, eliminates its own dimension by calculating the optimal utility for each assignment combination of its separator, and propagates the reduced utility to its parent. After that, a value propagation phase starts from the root agent. In this phase, each agent chooses its optimal assignment according to the utilities calculated in the previous phase and the assignments received from its parent, and broadcasts the extended assignments to its children. The algorithm terminates when all agents have chosen their optimal assignments.
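As a rough illustration of the two phases, consider a two-variable chain with a single symmetric (DCOP-style) constraint stored as a table; the variable names and the numpy-array representation are assumptions for the sketch, not the paper's notation.

```python
# A toy DPOP run on the chain x1 - x2 (x1 is the root, x2 the leaf).
import numpy as np

# Cost table of the single constraint: axis 0 indexes x2, axis 1 indexes x1.
f = np.array([[3, 1],
              [0, 2]])

# UTIL phase: the leaf x2 eliminates its own dimension, keeping the optimal
# cost for every value of its separator {x1}, and sends the reduced table up.
util_to_parent = f.min(axis=0)       # one entry per value of x1
best_child_value = f.argmin(axis=0)  # remembered to decode the VALUE phase

# VALUE phase: the root picks its optimal value, then the child follows.
x1 = int(util_to_parent.argmin())
x2 = int(best_child_value[x1])
```

The two messages here (one UTIL up, one VALUE down) are why DPOP needs only a linear number of messages overall.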

Although DPOP requires only a linear number of messages to solve a DCOP, its memory consumption is exponential in the induced width, which prohibits it from solving more complex problems. Thus, Petcu et al. proposed ADPOP, an approximate version of DPOP that allows a desired trade-off between solution quality and computational complexity. Specifically, ADPOP imposes a limit on the maximum number of dimensions in each message. When the number of dimensions in an outgoing message exceeds the limit, the algorithm drops a set of dimensions to stay below the limit. That is, the algorithm computes an upper bound and a lower bound by applying a maximal/minimal projection on these dimensions. During the value propagation phase, agents can make decisions according to the highest utilities in either the upper bounds or the lower bounds.
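The projection step can be sketched as follows, assuming utilities are stored as numpy arrays with one axis per variable (an illustrative representation): a minimal projection over a dropped dimension yields a lower bound on the true utility, and a maximal projection yields an upper bound.

```python
# A sketch of ADPOP's bound computation via dimension dropping.
import numpy as np

def project_out(util, axis, how):
    """Eliminate one dimension optimistically (min) or pessimistically (max)."""
    return util.min(axis=axis) if how == "min" else util.max(axis=axis)

# A 3-dimensional utility exceeding a hypothetical limit of 2 dimensions:
util = np.arange(8).reshape(2, 2, 2)
lower = project_out(util, axis=0, how="min")   # lower bound after dropping axis 0
upper = project_out(util, axis=0, how="max")   # upper bound after dropping axis 0
assert (lower <= upper).all()                  # bounds bracket the true utility
```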

3. Proposed Method

In this section, we present our proposed PT-ISABB, a two-phase hybrid complete algorithm for ADCOPs. We begin with a motivation, and then present the details of the inference phase and the search phase, respectively.

3.1. Motivation

The existing search-based complete algorithms for ADCOPs can only use local knowledge to compute lower bounds, which leads to inefficient pruning. More specifically, unassigned agents report the best local costs under the given partial assignments to compute lower bounds. Taking Fig. 1 as an example, in AsymPT-FB agent can receive LB_reports from and . As a consequence, can only be aware of the lower bounds of and and has no knowledge of the remaining constraints (i.e., the constraints between and , and ). On the other hand, inference algorithms like DPOP are able to aggregate and propagate the global utility, but they are not applicable to ADCOPs due to a privacy concern. For example, needs to know both and to optimally eliminate , which violates the privacy of and . Thus, to overcome these pathologies, we propose a novel hybrid scheme that combines both inference and search to solve ADCOPs. Specifically, the scheme consists of the following phases.

  • Inference phase: performing a bottom-up utility propagation with respect to a subset of constraints to build look-up tables for lower bounds

  • Search phase: using a tree-based complete search algorithm for ADCOPs to exhaust the search space and guarantee optimality

In this paper, we propose a tailored version of ADPOP for the inference phase to avoid the severe privacy loss and exponential memory consumption of DPOP. Furthermore, we implement SABB on a pseudo tree for the search phase and propose an algorithm called PT-ISABB. Although both algorithms operate on pseudo trees, ours improves on AsymPT-FB in two ways. When its maximal dimension limit , the lower bounds in our algorithm are at least as tight as the ones in AsymPT-FB (see Property 4.1 for details). Moreover, PT-ISABB avoids performing forward bounding, which is expensive, during the search phase.

3.2. Inference Phase

Figure 2. Pseudo code of inference phase

Fig. 2 gives a sketch of the inference phase of PT-ISABB. The phase begins with the leaf agents, which send their local utilities to their parents via UTIL messages (line 1 - 3). In particular, if the number of dimensions in the utility exceeds the limit (line 11), we drop the dimensions of the highest ancestors by a minimal projection to stay below the limit (line 12 - 13). Here, denotes the combination of the constraints between agent and its parent and pseudo parents enforced on its side, i.e.,

Note that in our algorithm we do not require the parent agents to disclose their private functions to perform inference exactly. In this way, a local utility table involves only the functions of that agent, and the privacy of its parent and pseudo parents is therefore preserved. On the other hand, however, ignoring the private functions of parents and pseudo parents leads to severe inconsistencies when performing variable elimination. In other words, we actually trade lower bound tightness for privacy. We alleviate the problem by performing non-local elimination, which is elaborated as follows.

When receives a UTIL message from its child , it joins the utility from with its corresponding private constraint function and then eliminates 's dimension to obtain a more complete utility (line 4). Compared to DPOP and ADPOP, the elimination of each variable is thus postponed to its parent in the pseudo tree. Taking Fig. 1 (b) as an example, the UTIL message from to is given by

if the maximal dimension limit , and the elimination of is actually performed by . That is,

Then, initiates the search phase after receiving all UTIL messages from its children if it is the root agent (line 6 - 8). Otherwise, it propagates the joint utility to its parent (line 9 - 10).
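A minimal sketch of the non-local elimination step (the array representation, shapes, and values are illustrative assumptions): the child sends its one-sided utility without eliminating its own variable; the parent joins that utility with its own private cost function and only then performs the minimizing projection, so the parent's side is never disclosed to the child.

```python
# Non-local elimination: the parent, not the child, eliminates the child's
# variable, after adding its own private side of the constraint.
import numpy as np

def non_local_eliminate(child_util, parent_private):
    """Join the child's one-sided utility with the parent's private costs,
    then eliminate the child's dimension by a minimizing projection."""
    joint = child_util + parent_private   # both shaped (|D_child|, |D_parent|)
    return joint.min(axis=0)              # look-up table indexed by parent's value

child_util = np.array([[1, 4], [3, 0]])       # child's side, sent via UTIL
parent_private = np.array([[2, 1], [0, 5]])   # parent's side, never disclosed
table = non_local_eliminate(child_util, parent_private)
```

Because both sides of the constraint enter the table before elimination, the resulting bounds are consistent with the full asymmetric costs on that edge, unlike a purely local elimination by the child.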

3.3. Search Phase

Figure 3. Pseudo code of search phase (message passing)
Figure 4. Pseudo code of search phase (auxiliary functions)

The phase performs a branch-and-bound search on a pseudo tree to exhaust the search space. Specifically, each branching agent decomposes the problem into several subproblems, and each of its children solves a subproblem in parallel. To detect and discard suboptimal solutions, each agent maintains an upper bound for its subproblem and a lower bound for each value in its domain. Therefore, each agent needs to maintain the following data structures.

  • records the assignment currently being explored in the subtree rooted at . The data structure is necessary because asynchronous search is carried out in parallel in sub-trees based on different possible values of .

  • is the cost for between and its parent and pseudo parents under the current partial assignment (), which is initially set to the cost enforced in side. That is,

    (1)
  • is the lower bound of child for , which is initially set to the utility under and . That is,

    (2)

    where is a slice to under , i.e.,

    When receives a BACKTRACK message from , it replaces the initial lower bound with the actual cost reported by (or if is infeasible for given ).

  • is the lower bound for , i.e.,

    (3)
  • is the set of assignments for which has received all BACKTRACK messages from its children, and is initially set to . Particularly, if is a leaf agent.

  • is the best cost explored under , which is given by

    (4)

    Particularly, if , .

  • is the optimal assignment to its subproblem under when and is initially set to . Particularly, is the optimal solution if is the root agent, where .
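The data structures above can be summarized in a hypothetical per-agent record (the paper's symbols were lost in extraction, so all field names here are illustrative, not the paper's notation):

```python
# A sketch of the per-agent state maintained during the search phase.
from dataclasses import dataclass, field

@dataclass
class AgentState:
    cpa: dict = field(default_factory=dict)          # current partial assignment being explored
    local_cost: dict = field(default_factory=dict)   # per-value cost w.r.t. (pseudo) parents
    child_lb: dict = field(default_factory=dict)     # (child, value) -> lower bound (from UTIL tables,
                                                     # replaced by actual costs on BACKTRACK)
    ub: float = float("inf")                         # upper bound for the subproblem
    complete: set = field(default_factory=set)       # values with all BACKTRACKs received
    best: dict = field(default_factory=dict)         # best subproblem assignment found so far

    def lb(self, value, children):
        """Lower bound for `value`: own cost plus the children's bound entries."""
        return self.local_cost.get(value, 0) + sum(
            self.child_lb.get((c, value), 0) for c in children)

# Example: bounds for value 0 combine the agent's cost and two children's tables.
s = AgentState(local_cost={0: 3}, child_lb={("c1", 0): 2, ("c2", 0): 4})
```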

Fig. 3 and Fig. 4 give the pseudo code of the search phase for PT-ISABB. The phase begins with the root agent sending the first element in its domain to its children (line 1 - 4). When an agent receives a CPA message from its parent, it first stores the partial assignment and upper bound and then finds the first feasible assignment (line 5 - 7), i.e., the first assignment such that (line 57 - 60). If such an assignment exists, sends COST_REQ messages to its parent and pseudo parents to request the private costs of the other side for (line 8 - 10, line 13). Otherwise, it sends a BACKTRACK message with an infinite cost and an empty subproblem assignment (line 64 - 65) to its parent to announce that the given is infeasible (line 11 - 12).

When receives a COST message for , it adds the other-side cost to (line 14). After receiving all the COST messages for from its parent and pseudo parents, is able to determine whether it should continue to explore . If is a leaf agent, it just updates the current upper bound and switches to the next feasible assignment (line 17 - 19), since the search space no longer needs to be expanded. If such an assignment exists, requests costs for it (line 20 - 21). Otherwise, it backtracks to its parent by reporting the best cost and the best subproblem assignment explored under (line 22 - 23, line 66 - 68). If is not a leaf agent and the current lower bound for is still less than its upper bound, it expands the search space by sending CPA messages to the children that are going to explore (line 25 - 27). Each message contains an extended partial assignment (line 61) and an upper bound, which is the remainder after deducting the cost incurred by and the lower bounds of the other children from ’s upper bound (line 62). Otherwise, is proven to be suboptimal and the agent switches to the next feasible assignment (line 28 - 30). If such an assignment exists, requests costs for it (line 31 - 32). A backtrack takes place if all children have exhausted ’s domain (line 33 - 34).

When receives a BACKTRACK message for from a child , it updates the corresponding lower bound with the actual cost reported by if and the assignment is feasible (otherwise ), and merges the best assignments from (line 35). If has received all the BACKTRACK messages for from its children, it marks as complete and updates the current upper bound for its subproblem (line 36 - 38). also needs to determine the next assignment for to explore (line 39). If exists and has received all the COST messages from its parent and pseudo parents, it informs to explore by sending a CPA message (line 40 - 42). Otherwise, requests costs for if it has not been done (line 43 - 44). If does not exist and all children have exhausted ’s domain, informs its children to terminate and terminates itself if it is the root agent (line 45 - 48). Otherwise, it backtracks to its parent (line 49 - 50).
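The core pruning test used throughout the search phase — skip any value whose lower bound already reaches the current upper bound — can be sketched as follows (the function names and the flat cost model are illustrative simplifications, not the paper's pseudo code):

```python
# Feasibility scan over an agent's domain under branch-and-bound pruning.
def next_feasible(domain, start, lower_bound, ub):
    """Return the first value at index >= `start` whose lower bound is
    strictly below the current upper bound, or None if none remains."""
    for idx in range(start, len(domain)):
        v = domain[idx]
        if lower_bound(v) < ub:
            return v        # worth exploring: could still beat the best known
    return None             # domain exhausted -> the agent must backtrack

# Example: per-value lower bounds, e.g. from the inference phase's look-up
# tables, against an upper bound of 7.
lb_table = {0: 9, 1: 7, 2: 4, 3: 8}
v = next_feasible([0, 1, 2, 3], 0, lb_table.__getitem__, 7)
```

Tighter look-up tables from the inference phase make `lower_bound(v)` larger, so more values fail the test and fewer CPA/COST messages are ever sent — which is exactly the source of PT-ISABB's savings over forward bounding.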

4. Theoretical Results

4.1. Correctness

In this section, we first prove the termination and optimality, and further establish the completeness of PT-ISABB.

Lemma

PT-ISABB will terminate after a finite number of iterations.

Proof.

Directly from the pseudo code, the inference phase will terminate since it requires only a linear number of messages. Thus, to prove termination, it is enough to show that the same partial assignment cannot be explored twice in the search phase, i.e., that an agent will not receive two identical s. Obviously, the claim holds for the root agent since it does not receive any CPA message. For an agent and a given from its parent, it will send several CPA messages to each child. Since each of them contains a different assignment of the agent (line 29, line 39, line 57 - 60), the s sent to a child are all different. Termination is hereby guaranteed. ∎

Lemma

For an agent and a given , the cost incurred by any assignment to the subtree rooted at with the assignment is no less than the corresponding lower bound .

Proof.

The lemma is trivial for a leaf since is set to the cost of , which is obviously no greater than the cost of the feasible assignment. We now focus on non-leaf agents. Recall that will replace the original lower bound with the actual cost reported by after receiving a BACKTRACK message for from (line 35). Thus, to prove the lemma, it is sufficient to show that the initial lower bound is no greater than the actual cost of , where is the assignment to the subtree rooted at .

Consider the induction basis, i.e., ’s children are leaves. For each child , we have

where is the assignment to in or . The equality in the second-to-last step holds when the maximal dimension limit . Thus, the lemma holds for the basis.

Assume that the lemma holds for all . Next, we are going to show the lemma holds for as well. For each child , we have

which establishes the lemma. ∎

Lemma

For an agent and a given , any assignment to the subtree rooted at with cost greater than cannot be a part of a solution with cost less than the global upper bound.

Proof.

We will prove recursively, by showing that for a partial assignment to the subtree rooted at with , any partial assignment to the subtree rooted at will have where . Note that could be either an upper bound from via a CPA message (line 5) or a result of updating the upper bound locally (line 18, line 38). cannot backtrack by reporting in the latter case since there must exist a better partial assignment whose cost is . If is received from , according to line 62, we have

Thus, necessarily means that any partial assignment will have . ∎

Theorem

PT-ISABB is complete.

Proof.

Immediately from Lemma 4.1, Lemma 4.2 and Lemma 4.3, the algorithm will terminate and all pruned assignments are suboptimal. Thus, PT-ISABB is complete. ∎

4.2. Lower bound tightness

Property 4.1.

For an agent and a given , the initial lower bound of for is at least as tight as the one in AsymPT-FB when the maximal dimension limit .

Proof.

In AsymPT-FB, the lower bound for after receiving all the LB_Reports from the subtree rooted at is given by the sum of the best single-side local costs of ’s descendants under . That is,

where is the set of the descendants of . For the sake of clarity, we denote the vector of and its descendant variables as . Next, we will show . Since , the inference phase does not drop any dimension. Thus, we have

Since and , the right-hand side of the inequality in the last step can be further reduced. That is,

which concludes the property. ∎

4.3. Complexity

Since an agent stores and for each child, the overall space complexity in the worst case (i.e., ) is , where . Since it contains all the dimensions of and itself, the size of a UTIL message from is when . A CPA message consists of the assignment of each agent and an upper bound; thus, its size is . Other messages, including COST_REQ, COST, BACKTRACK and TERMINATE, carry several scalars and thus require only space.

Unlike standard DPOP/ADPOP, PT-ISABB requires only messages in the inference phase since it has no value propagation phase. Like any other search-based complete algorithm, the number of messages in the search phase grows exponentially with the number of agents.

5. Experimental Results

We empirically compare PT-ISABB with state-of-the-art search-based complete algorithms for ADCOPs, including SABB, ATWB and AsymPT-FB, on three configurations. To demonstrate the real power of non-local elimination, we also consider SABB on a pseudo tree (PT-SABB) and the local elimination version of PT-ISABB (PT-ISABB, local) with . In the first ADCOP configuration, we set the graph density to 0.25 and the domain size to 3, and vary the agent number from 8 to 18. The second configuration is ADCOPs with 8 agents and a domain size of 8; the graph density varies from 0.25 to 1. In the last configuration, we consider asymmetric MaxDCSPs with 10 agents, a domain size of 10 and a graph density of 0.4, with the tightness varying from 0.1 to 0.8. For each setting, we generate 50 random instances and average the results over all instances. In our experiments, we use the number of non-concurrent logical operations (NCLOs) Netzer et al. (2012) to evaluate hardware-independent runtime, in which the logical operations in the inference phase are accesses to utility tables, and for the search phase and the other competitors they are constraint checks. We also use the message number and the size of total information exchanged to measure network load. For asymmetric MaxDCSPs, we use entropy Brito et al. (2009) to quantify privacy loss Litov and Meisels (2017); Grinshpoun et al. (2013). The experiments are conducted on an i7-7820x workstation with 32GB of memory, and for each algorithm we set the timeout to 2 minutes.


Figure 5. Performance comparison under different agent numbers

Figure 6. Performance comparison under different graph densities

Fig. 5 gives the performance comparison under different agent numbers; the average induced widths in these experiments range from 1 to 6.84. It can be seen from the figure that all the algorithms suffer from exponential overheads as the agent number grows. Among them, our proposed PT-ISABB requires significantly fewer messages and lower NCLOs than the other competitors, which demonstrates the superiority of the hybrid execution of inference and search. On the other hand, although PT-ISABB (, local) employs a complete one-sided inference to construct the initial lower bounds, it is still inferior to PT-ISABB with , which demonstrates the necessity of non-local elimination. Besides, it is worth noting that PT-ISABB requires far fewer messages than AsymPT-FB even when the maximal dimension limit is small (e.g., ). That is because PT-ISABB does not rely on forward bounding, which is expensive in message-passing, to compute lower bounds. Moreover, this phenomenon also indicates that our algorithm can produce tighter lower bounds even when the memory budget is relatively low.

Fig. 6 gives the results under different graph densities; the average induced widths here range from 1 to 6. Note that in this configuration the size of the search space does not change, and the complexity is reflected in the topologies. It can be concluded from the figure that all the tree-based algorithms exhibit great superiority when the graph density is low, and the advantages vanish as the density grows. That is because those algorithms can effectively parallelize the search on sparse problems. Dense problems, on the other hand, usually result in pseudo trees with low branching factors, making the tree-based algorithms require more messages than SABB. Even so, our proposed PT-ISABB with large still outperforms SABB when the problems are fully connected, which demonstrates the necessity of tighter lower bounds. Additionally, the figure also indicates that PT-ISABB with different performs similarly on sparse problems, but the performances vary considerably on dense problems. That is because the induced width of a pseudo tree is relatively small when solving a sparse problem, and thus only a small set of dimensions is dropped during the inference phase. Besides, it can be seen from the figure that although both PT-ISABB (, local) and PT-ISABB () perform complete inferences, the non-local elimination version requires lower NCLOs and fewer messages in most of the settings. That is because the non-local elimination version provides tighter lower bounds, which result in more efficient pruning, and thus the algorithm incurs fewer constraint checks and messages in the search phase.


Figure 7. Performance comparison under different tightness
Table 1. The size of total information exchanged of each algorithm under different graph densities (in KB)

Table 1 presents the size of total information exchanged by each algorithm under different densities. It can be seen from the table that all the non-local elimination versions of PT-ISABB exhibit great advantages over the other search-based competitors, except that PT-ISABB () is slightly inferior to AsymPT-FB when solving the fully-connected problems. This phenomenon indicates that although a message in the inference phase is generally larger than one in the search phase, the algorithms can still greatly reduce network traffic since the effective pruning in the search phase greatly reduces the message number. Besides, it is interesting to find that a large dimension limit (e.g., ) does not necessarily result in the smallest traffic. That is because the size of a message in the inference phase is exponential in the minimum of the induced width and the dimension limit. It should also be noted that although PT-SABB requires more messages than ATWB and SABB when solving fully-connected problems according to Fig. 6, it still incurs much smaller traffic, due to the fact that the last agent in ATWB and SABB needs to broadcast the complete solution to all other agents once a new solution is constructed. In contrast, agents in PT-SABB only report the assignments of their descendants, which are subsets of the complete solution, to their parents via BACKTRACK messages.

Fig. 7 presents the results on asymmetric MaxDCSPs with different tightness; the average induced width is 3.92. This configuration neither increases the search space nor affects the topologies, but instead increases the difficulty of pruning. All the algorithms except ATWB produce few messages when solving problems with low tightness. That is because on these problems the algorithms can find low upper bounds very quickly and prune most of the search space. As the tightness grows, the number of prohibited combinations increases and the algorithms can no longer find low upper bounds promptly. As a result, the algorithms require much more search effort to exhaust the search space. Since they cannot exploit topologies to accelerate the search process, SABB and ATWB perform poorly and can only solve the problems with tightness up to 0.6. On the other hand, the tree-based algorithms divide a problem into several smaller subproblems at each branching agent and search the subproblems in parallel. Thus, those algorithms exhibit better performance and solve all the problems. Among them, our proposed PT-ISABB with incurs much smaller overheads, which demonstrates the effectiveness of the inference phase in computing tighter lower bounds. In other words, although PT-ISABB only guarantees to produce lower bounds as tight as those of AsymPT-FB when , according to Property 4.1, it requires less memory to compute such lower bounds in practice. Besides, it can be seen from the figure that PT-SABB incurs smaller communication overheads than AsymPT-FB when solving the problems with low tightness, which demonstrates that forward bounding is expensive in message-passing. Additionally, it can be concluded that PT-ISABB with large requires many more NCLOs than the other competitors when solving problems with low tightness. That is no surprise, since inference on problems with large domain sizes is more expensive, and a search-based algorithm can actually find a feasible solution very quickly on these problems even if the lower bounds are poor.

Figure 8. Privacy losses under different tightness

Fig. 8 gives the privacy losses under different tightness. Privacy loss in PT-ISABB comes from both the inference phase and the search phase. Specifically, since variable elimination is performed by parents (i.e., line 4 of the inference phase), parents can easily figure out which pairs of assignments are feasible with respect to the constraints enforced on the children's side from the zero entries of the utilities received from children. Thus, the inference phase causes at most a half privacy loss on each tree edge in the worst case, which is still better than leaking at least half of the private costs if we directly used DPOP to solve the problems. Besides, the direct disclosure mechanism of the search phase, in which agents request their (pseudo) parents to expose the private costs before expanding the search space, also leads to privacy loss. However, this loss can be much reduced by effective pruning. It can be seen that our proposed algorithm leaks more privacy than the other competitors when solving problems with low tightness. That is no surprise, because these problems usually have feasible solutions, and hence most of the entries in a utility from a child are zero. That being said, PT-ISABB with leaks less privacy than the other competitors when solving problems with high tightness. The reason is twofold: parents can no longer infer the feasible assignment pairs as the tightness grows, and the inference phase produces tight lower bounds which lead to effective pruning in the search phase. Besides, it is worth mentioning that the local elimination version of PT-ISABB performs better in terms of privacy preservation when solving problems with low tightness. That is because variables are already eliminated before the utilities are sent to parents. As a result, parents can only know the best utilities they can achieve, but cannot figure out the corresponding assignments of their children.

6. Conclusion

It is known that DPOP/ADPOP for DCOPs cannot be directly applied to ADCOPs due to a privacy concern. In this paper, we apply ADPOP to solving ADCOPs for the first time by combining it with a tree-based variation of SABB, and present a two-phase complete algorithm called PT-ISABB. In the inference phase, a non-local elimination version of ADPOP is performed to solve a subset of the constraints and build look-up tables for tighter lower bounds. In the search phase, a tree-based variation of SABB exhausts the search space. The experimental results show that our algorithm is clearly superior to state-of-the-art search-based algorithms, as well as to the local elimination version of PT-ISABB. Moreover, our algorithm leaks less privacy when solving complex problems.

The authors would like to thank the anonymous referees for their valuable comments and helpful suggestions. This work is supported by the Chongqing Research Program of Basic Research and Frontier Technology under Grant No. cstc2017jcyjAX0030, the Fundamental Research Funds for the Central Universities under Grant No. 2018CDXYJSJ0026, the National Natural Science Foundation of China under Grant No. 51608070, and the Graduate Research and Innovation Foundation of Chongqing, China under Grant No. CYS18047.

References

  • Brito et al. (2009) Ismel Brito, Amnon Meisels, Pedro Meseguer, and Roie Zivan. 2009. Distributed constraint satisfaction with partially known constraints. Constraints 14, 2 (2009), 199–234.
  • Burke et al. (2007) David A Burke, Kenneth N Brown, Mustafa Dogru, and Ben Lowe. 2007. Supply chain coordination through distributed constraint optimization. In The 9th International Workshop on DCR.
  • Chen et al. (2017) Ziyu Chen, Yanchen Deng, and Tengfei Wu. 2017. An Iterative Refined Max-sum_AD Algorithm via Single-side Value Propagation and Local Search. In Proc. of the 16th Conference on AAMAS. 195–202.
  • Farinelli et al. (2008) Alessandro Farinelli, Alex Rogers, Adrian Petcu, and Nicholas R Jennings. 2008. Decentralised coordination of low-power embedded devices using the max-sum algorithm. In Proc. of the 7th AAMAS. 639–646.
  • Fioretto et al. (2016) Ferdinando Fioretto, William Yeoh, and Enrico Pontelli. 2016. A dynamic programming-based MCMC framework for solving DCOPs with GPUs. In International Conference on Principles and Practice of Constraint Programming. Springer, 813–831.
  • Fioretto et al. (2017) Ferdinando Fioretto, William Yeoh, Enrico Pontelli, Ye Ma, and Satishkumar J Ranade. 2017. A Distributed Constraint Optimization (DCOP) Approach to the Economic Dispatch with Demand Response. In Proc. of the 16th Conference on AAMAS. 999–1007.
  • Gershman et al. (2009) Amir Gershman, Amnon Meisels, and Roie Zivan. 2009. Asynchronous forward bounding for distributed COPs. Journal of Artificial Intelligence Research 34 (2009), 61–88.
  • Grinshpoun et al. (2013) Tal Grinshpoun, Alon Grubshtein, Roie Zivan, Arnon Netzer, and Amnon Meisels. 2013. Asymmetric Distributed Constraint Optimization Problems. Journal of Artificial Intelligence Research 47 (2013), 613–647.
  • Gutierrez et al. (2011) Patricia Gutierrez, Pedro Meseguer, and William Yeoh. 2011. Generalizing ADOPT and BnB-ADOPT. In Proc. of the 22nd IJCAI. 554–559.
  • Hirayama and Yokoo (1997) Katsutoshi Hirayama and Makoto Yokoo. 1997. Distributed partial constraint satisfaction problem. In International Conference on Principles and Practice of Constraint Programming. 222–236.
  • Litov and Meisels (2017) Omer Litov and Amnon Meisels. 2017. Forward bounding on pseudo-trees for DCOPs and ADCOPs. Artificial Intelligence 252 (2017), 83–99.
  • Maheswaran et al. (2004a) Rajiv T Maheswaran, Jonathan P Pearce, and Milind Tambe. 2004a. Distributed Algorithms for DCOP: A Graphical-Game-Based Approach.. In Proceeding of ISCA PDCS’04. 432–439.
  • Maheswaran et al. (2004b) Rajiv T. Maheswaran, Milind Tambe, Emma Bowring, Jonathan P. Pearce, and Pradeep Varakantham. 2004b. Taking DCOP to the Real World: Efficient Complete Solutions for Distributed Multi-Event Scheduling. In Proc. of the 3rd AAMAS. 310–317.
  • Modi et al. (2005) Pragnesh Jay Modi, Wei-Min Shen, Milind Tambe, and Makoto Yokoo. 2005. ADOPT: Asynchronous distributed constraint optimization with quality guarantees. Artificial Intelligence 161, 1-2 (2005), 149–180.
  • Monteiro et al. (2012) Tânia L Monteiro, Guy Pujolle, Marcelo E Pellenz, Manoel C Penna, and Richard Demo Souza. 2012. A multi-agent approach to optimal channel assignment in wlans. In Wireless Communications and Networking Conference (WCNC). 2637–2642.
  • Netzer et al. (2012) Arnon Netzer, Alon Grubshtein, and Amnon Meisels. 2012. Concurrent forward bounding for distributed constraint optimization problems. Artificial Intelligence 193 (2012), 186–216.
  • Okamoto et al. (2016) Steven Okamoto, Roie Zivan, and Aviv Nahon. 2016. Distributed breakout: beyond satisfaction. In Proc. of the 25th IJCAI. 447–453.
  • Ottens et al. (2017) Brammert Ottens, Christos Dimitrakakis, and Boi Faltings. 2017. DUCT: An Upper Confidence Bound Approach to Distributed Constraint Optimization Problems. ACM Transactions on Intelligent Systems and Technology (TIST) 8, 5 (2017), 69.
  • Petcu and Faltings (2005a) Adrian Petcu and Boi Faltings. 2005a. Approximations in distributed optimization. In International Conference on Principles and Practice of Constraint Programming. 802–806.
  • Petcu and Faltings (2005b) Adrian Petcu and Boi Faltings. 2005b. A Scalable Method for Multiagent Constraint Optimization. In Proc. of the 19th IJCAI. 266–271.
  • Petcu and Faltings (2006) Adrian Petcu and Boi Faltings. 2006. ODPOP: an algorithm for open/distributed constraint optimization. In Proc. of the 21st AAAI. 703–708.
  • Petcu and Faltings (2007) Adrian Petcu and Boi Faltings. 2007. MB-DPOP: a new memory-bounded algorithm for distributed optimization. In Proc. of the 20th IJCAI. 1452–1457.
  • Ramchurn et al. (2011) Sarvapali D Ramchurn, Perukrishnen Vytelingum, Alex Rogers, and Nick Jennings. 2011. Agent-based control for decentralised demand side management in the smart grid. In Proc. of the 10th AAMAS. 5–12.
  • Rogers et al. (2011) Alex Rogers, Alessandro Farinelli, Ruben Stranders, and Nicholas R Jennings. 2011. Bounded approximate decentralised coordination via the max-sum algorithm. Artificial Intelligence 175, 2 (2011), 730–759.
  • Sultanik et al. (2007) Evan A Sultanik, Pragnesh Jay Modi, and William C Regli. 2007. On modeling multiagent task scheduling as a distributed constraint optimization problem. In Proc. of the 20th IJCAI. 1531–1536.
  • Vinyals et al. (2009) Meritxell Vinyals, Juan A Rodriguez-Aguilar, and Jesús Cerquides. 2009. Generalizing DPOP: Action-GDL, a new complete algorithm for DCOPs. In Proc. of The 8th AAMAS. 1239–1240.
  • Yeoh et al. (2010) William Yeoh, Ariel Felner, and Sven Koenig. 2010. BnB-ADOPT: An asynchronous branch-and-bound DCOP algorithm. Journal of Artificial Intelligence Research 38 (2010), 85–133.
  • Yeoh and Yokoo (2012) William Yeoh and Makoto Yokoo. 2012. Distributed problem solving. AI Magazine 33, 3 (2012), 53.
  • Zhang et al. (2005) Weixiong Zhang, Guandong Wang, Zhao Xing, and Lars Wittenburg. 2005. Distributed stochastic search and distributed breakout: properties, comparison and applications to constraint optimization problems in sensor networks. Artificial Intelligence 161, 1-2 (2005), 55–87.
  • Zivan and Peled (2012) Roie Zivan and Hilla Peled. 2012. Max/min-sum distributed constraint optimization through value propagation on an alternating DAG. In Proc. of the 11th AAMAS. 265–272.