The Complexity of the Distributed Constraint Satisfaction Problem

07/27/2020
by   Silvia Butti, et al.
Universitat Pompeu Fabra

We study the complexity of the Distributed Constraint Satisfaction Problem (DCSP) on a synchronous, anonymous network from a theoretical standpoint. In this setting, variables and constraints are controlled by agents which communicate with each other by sending messages through fixed communication channels. Our results endorse the well-known fact from classical CSPs that the complexity of fixed-template computational problems depends on the template's invariance under certain operations. Specifically, we show that DCSP(Γ) is polynomial-time tractable if and only if Γ is invariant under symmetric polymorphisms of all arities. Otherwise, there are no algorithms that solve DCSP(Γ) in finite time. We also show that the same condition holds for the search variant of DCSP. Collaterally, our results unveil a feature of a process's neighbourhood in a distributed network, its iterated degree, which plays a major role in the analysis. We explore this notion, establishing a tight connection with the basic linear programming relaxation of a CSP.


1 Introduction

The Constraint Satisfaction Problem (CSP) consists of a collection of variables and a collection of constraints where each constraint specifies the valid combinations of values that can be taken simultaneously by the variables in its scope. The goal is to decide if there exists an assignment of the elements of a domain to the variables which satisfies all constraints. The CSP is a very rich mathematical framework that is widely used both as a fruitful paradigm for theoretical research, and as a powerful tool for applications in AI, such as scheduling and planning [Rossi2006, krokhin2017constraint].

While, in its full generality, the CSP is known to be NP-complete, applying specific restrictions on the instances can yield tractable subclasses of the problem. One of the most studied approaches consists in requiring that, in each constraint, the set of allowed combinations of values be drawn from a prescribed set Γ, usually called the constraint language or the template. Thanks to the proof of the CSP dichotomy conjecture obtained separately in [Bulatov2017] and [Zhuk2017], which culminated a decades-long research program, it is possible to determine the complexity (P or NP-complete) of each family CSP(Γ) of CSPs obtained by fixing Γ. This proof confirmed that the complexity of the constraint satisfaction problem is deeply tied to certain algebraic properties of the constraint language. Specifically, it depends on whether or not the constraint language is invariant under certain operations known as its polymorphisms. The polymorphisms of a constraint language enforce a symmetry on the space of solutions of a CSP instance that can possibly be exploited by an algorithm. This connection with algebra is also present in our work.

We study the computational complexity of the distributed counterpart of the CSP, known as the DCSP. This was introduced by Yokoo et al. [yokoo1992distributed] as a formal framework for the study of cooperative distributed problem solving. In particular, we consider a deterministic, synchronous, anonymous network of agents controlling variables and constraints, and we study the complexity of message passing algorithms on this network. A number of practical applications can be encoded in the DCSP model, for instance resource allocation tasks in wireless networks, routing, networking, and mobile technologies (see for instance [duffy2013decentralized, bejar2001distributed]).

We notice that this framework is general enough to encompass some simple Graph Neural Network (GNN) architectures that update the feature vector of each node by combining it with the feature vectors of its neighbours (see for example [morris2019weisfeiler, grohe2020word2vec]). GNNs have a wide range of applications, including molecule classification and image classification (see [GNNs2018] for example). Recently, GNNs have been deployed to solve CSPs [GroheGNNCSP2019]. However, whereas in all variants of GNNs the computation is limited to a reduced number of operations over feature vectors, in the DCSP model the computation at each node is governed by an arbitrary algorithm.

While there are a variety of well-performing distributed algorithms for constraint satisfaction and optimisation (see for instance [yokoo2000algorithms, meisels2008distributed, Fioretto2018]), the theoretical aspects of distributed complexity are to date not well understood. In this paper we initiate the study of the complexity of the DCSP parametrized by the constraint language, obtaining a complete characterization of its complexity. More specifically, building on the connection between the CSP and algebra, we show that for any finite constraint language Γ, the decision problem for DCSP(Γ) is tractable whenever Γ is invariant under symmetric polymorphisms of all arities, where an operation is symmetric if its result does not depend on the order of its arguments. Otherwise, there are no message passing algorithms that solve DCSP(Γ). Collaterally, we show that the same holds for the search problem for DCSP(Γ).

Our work begins with the identification of a feature of the nodes in a distributed network, their iterated degree, which plays a major role in how messages are transmitted in the network. The iterated degree is an extension of a similar concept introduced in the study of the graph isomorphism problem, which turns out to have a variety of alternative characterizations in terms of fractional isomorphisms, the Weisfeiler-Leman test, and definability with counting logics (see [grohe2020word2vec]). It turns out that, due to the network anonymity, in every distributed algorithm all equivalent agents (with respect to iterated degree) must necessarily behave identically at each round. A similar phenomenon has been observed independently in the context of GNNs in [morris2019weisfeiler, XuHowPowerful], leading to further study in [Barcelo2020logical].

We use this fact to show that, in the absence of symmetric polymorphisms of some arity in Γ, it is always possible to construct two instances of DCSP(Γ), one satisfiable and the other unsatisfiable, that cannot be distinguished by any message passing algorithm in an anonymous network.

On the other hand, invariance under symmetric polymorphisms is connected with the basic linear programming relaxation of a CSP instance. More precisely, if Γ has symmetric polymorphisms of all arities then one can decide the satisfiability of every instance of CSP(Γ) by checking whether its basic linear programming relaxation is feasible (see for instance [Barto2017]). Whereas it is not clear how to directly use this fact to obtain a distributed algorithm for DCSP(Γ), it can be applied to establish a structure theorem that unveils a simple yet surprising structure in the solution space of every satisfiable instance in CSP(Γ): it must contain a solution that assigns the same value to all variables that have the same iterated degree.

The proof of the structure theorem uses the weighted majority algorithm, a well-known weight update procedure that is widely used in optimisation methods and machine learning techniques (see [Arora2012]). The structure theorem is key in the proof of the positive results, as it allows us to run an adapted variant of the consistency algorithm of [kozik2018solving] that overcomes the absence of unique identifiers for the variables by using instead their iterated degree.

Finally, we turn our focus to the iterated degree and show how some of its alternative characterizations, first proposed in the graph setting, can be lifted to CSPs. While the methods are simple adaptations of existing techniques, these results provide a new perspective by drawing a parallel between the basic linear programming relaxation of a CSP instance, regarded here as a fractional homomorphism, and fractional isomorphism (an alternative embodiment of the iterated degree), which allows us to reprove the structure theorem using only linear algebra.

This paper is organised as follows. In Section 2 we introduce some definitions and technical concepts about the DCSP model. In Section 3 we present the basic LP relaxation for CSPs and we show its connection to the symmetry on the solution space, culminating in the proof of the structure theorem. Section 4 is dedicated to the proof of the dichotomy theorem for the complexity of DCSP, with the hardness results in Section 4.1 and the details of the distributed algorithm for tractable languages in Section 4.2. In Section 5 we present the connection of our work with fractional isomorphisms in graphs through a purely algebraic approach. Finally, in the Appendix we add some technicalities and provide detailed proofs for all the claims made throughout the paper.

2 Preliminaries

Constraint Satisfaction Problems.

An instance of the Constraint Satisfaction Problem (CSP) is a triple I = (X, D, C) where X is a set of variables, D is a finite set called the domain, and C is a set of constraints. A constraint is a pair (R, s) where, for some positive integer k, R is a relation over D of arity k and s is a tuple of k variables, known as the scope of the constraint. We use ar(·) to denote the arity of a relation, tuple, or constraint, and we write y ∈ s for any variable y appearing in the scope s.

An assignment φ : X → D is said to be satisfying if for all constraints (R, s) ∈ C we have φ(s) ∈ R, where φ is applied to s coordinate-wise. Usually we denote the number of variables by n and the number of constraints by m.

Let Γ be a set of relations over some finite domain D, and let CSP(Γ) denote the set of CSP instances with all constraint relations lying in Γ. In this context, Γ is known as the constraint language. Throughout this paper we assume that Γ is always finite. Then, the decision problem for CSP(Γ) is the problem of deciding whether a satisfying assignment exists for an instance I ∈ CSP(Γ). The search problem for CSP(Γ) is the problem of deciding whether a satisfying assignment exists and, if it does, finding one such assignment.
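To fix the notation operationally, here is a minimal Python sketch (the class and method names are our own, purely illustrative, not taken from the paper) of a CSP instance and of the satisfaction check described above.

```python
from itertools import product

class Instance:
    """A CSP instance: variables, a finite domain, and constraints (relation, scope),
    where a relation is a set of tuples over the domain."""
    def __init__(self, variables, domain, constraints):
        self.variables = list(variables)
        self.domain = list(domain)
        self.constraints = list(constraints)  # list of (relation, scope) pairs

    def satisfies(self, phi):
        """Check whether the assignment phi (a dict variable -> value) satisfies
        every constraint coordinate-wise."""
        return all(tuple(phi[x] for x in scope) in relation
                   for relation, scope in self.constraints)

    def brute_force_solve(self):
        """Decide satisfiability by exhaustive search (exponential; for illustration only)."""
        for values in product(self.domain, repeat=len(self.variables)):
            phi = dict(zip(self.variables, values))
            if self.satisfies(phi):
                return phi
        return None

# Example: 2-colouring a triangle with the disequality template is unsatisfiable.
NEQ = {(0, 1), (1, 0)}
triangle = Instance(['x', 'y', 'z'], [0, 1],
                    [(NEQ, ('x', 'y')), (NEQ, ('y', 'z')), (NEQ, ('x', 'z'))])
print(triangle.brute_force_solve())  # None
```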

The Distributed Model.

We consider the DCSP model of [yokoo1992distributed] with some small modifications. The basic idea is to assign the task of solving a constraint satisfaction problem to a multi-agent system. In the original model, which assumes that all constraints are binary [yokoo1998distributed, yokoo2000algorithms], each variable is controlled by an agent, and two agents can communicate with one another if and only if they share a constraint. Here we deviate slightly from the original model to allow for non-binary constraints, and we assume that both variables and constraints are controlled by distributed agents in the network. An instance of the Distributed Constraint Satisfaction Problem is a tuple (X, D, C, A, α), where X, D, and C are as in the classical CSP, A is a finite set of agents, and α : X ∪ C → A is a surjective function which assigns the control of each variable x and each constraint C to the agents α(x) and α(C), respectively. For the purpose of this paper, we assume that there are exactly |X| + |C| agents, and therefore each agent controls exactly one variable or one constraint. The decision and search problems for DCSP are defined analogously to the CSP; we denote the decision problem by DCSP(Γ) and refer to its search variant as the search problem for DCSP(Γ).

Distributed Networks and Message Passing.

We now present some fundamental concepts relating to the message-passing paradigm for distributed networks. For a general introduction to distributed algorithms, we refer the reader to [fokkink2013distributed]. A distributed system consists of a finite set of nodes or processes, which are connected through communication channels to form a network. Any process in the network can perform events of three kinds: send, receive and internal. Send and receive events are self-explanatory, as they denote the sending or receiving of a message over a communication channel. Any kind of local computation performed at the process level, as well as state changes and decisions, is classified as an internal event.

We assume a fully synchronous communication model, meaning that a send event at one process and the corresponding receive event at another process can be considered de facto as a single event, with no time delay. As a whole, a synchronous system proceeds in rounds, where at each round a process can perform some internal computation and then send messages to and receive messages from its neighbours. A round needs to terminate at every process before the next round begins. Note that while for simplicity we assume a synchronous network, all our algorithms can be adapted to asynchronous systems by applying simple synchronizers (see for example [Awerbuch1985complexity]).

We make the fundamental assumption that the network is anonymous, meaning that variables, constraints and agents do not have IDs. For practical purposes, we still refer to variables and constraints by names (such as x or C); however, these cannot be communicated through the channels. The assumption of anonymity can have various practical justifications: the processes may actually lack the hardware to have an ID, or they may be unable to reveal their ID due to security or privacy concerns. For instance, the basic architecture of GNNs is anonymous. This is a very desirable property, as it allows GNNs to be deployed in networks other than those in which they were trained.

We assume that all the processes run locally the same deterministic algorithm; therefore IDs cannot be created and deadlocks cannot be broken by, for instance, flipping a random coin. Hence, the lack of IDs makes the processes essentially indistinguishable from one another, except, as we will see later, for the structure of their neighbourhood in the network.

Leader election is a procedure by which the processes in a network select a single process to be the leader in a distributed way. If a leader can be elected, then all the information about the instance can be gathered to the leader, who can then solve the CSP locally. It is a well-known result that there does not exist a terminating deterministic algorithm for electing a leader in an anonymous ring [Alguin1980Local]. Therefore, the assumptions of anonymity and determinism ensure that the DCSP model is intrinsically different from the (centralised) CSP framework, and open up the way for establishing novel, non-trivial complexity results.

The encoding of a DCSP instance into the message passing framework is straightforward. The processes correspond to the agents of the network, and there is a labelled communication channel between a variable agent α(x) and a constraint agent α(C) if and only if x appears in the scope of C. More formally, the factor graph [Fioretto2018] of a CSP instance I = (X, D, C) is the undirected bipartite graph with vertex set X ∪ C and an edge between x ∈ X and C ∈ C whenever x appears in the scope of C. The edge between a variable x and a constraint C = (R, s) has a label (R, i) for every i such that s[i] = x, where for a tuple t, t[i] denotes the i-th entry of t. (For mathematical clarity we label edges with the relation itself; however, in algorithmic applications, every relation can be substituted with a corresponding symbol.) Then, the message passing network corresponds to the factor graph where every node (variable or constraint) is replaced by its associated agent and every edge by a communication channel with the same labels. Note that between any two nodes there is at most one channel. Unless explicitly stated otherwise, we only consider instances whose factor graph consists of a unique connected component. It is easy to prove (see Remark A.1 in the appendix) that in the case that all relations are binary, the original model where only variables are controlled by agents is equivalent to our model.
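As an illustration of this encoding, the following sketch (helper names ours) builds the labelled factor graph from the hypothetical Instance class introduced earlier; constraint nodes are tagged with their index and each edge carries a stand-in for the label (R, i).

```python
def factor_graph(instance):
    """Return adjacency as a dict: node -> set of (neighbour, label) pairs.
    Variable nodes are the variables themselves; constraint nodes are ('C', index)."""
    adj = {x: set() for x in instance.variables}
    for c_idx, (relation, scope) in enumerate(instance.constraints):
        node = ('C', c_idx)
        adj[node] = set()
        for i, x in enumerate(scope):
            label = (frozenset(relation), i)   # stands in for the label (R, i)
            adj[x].add((node, label))
            adj[node].add((x, label))
    return adj
```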

At the start of an algorithm, a process only has access to very limited information. All processes know the total number of variables in the CSP instance, the total number of constraints, the labels of the communication channels that they are incident to in the network, and naturally whether they are controlling a variable or a constraint. During a run of the algorithm a process can acquire further knowledge from the messages that it receives from its neighbours. We assume that at any time each process is in one of a set of states, a subset of which are terminating states. When it enters a terminating state, a process performs no more send or internal events, and all receive events are disregarded. The local algorithm is then a deterministic function which determines the process' next state and the messages it will send to its neighbours. The output of such a function only depends on the process' current knowledge, on its state, and on the global time. We allow processes to send different messages through different channels. However, since processes can only distinguish the channels based on their labels, identical messages must be sent through channels with identical labels. Note that the power of the model would not decrease if only one message were allowed to be passed through all the channels, since a process can simulate sending a separate message through each channel by tagging each message with the label of the desired channel and concatenating them into a unique string. This, however, comes at the cost of increased message size. Moreover, if a process needs to broadcast multiple messages, these can be concatenated into one. We say that an algorithm terminates when all processes are in a terminating state.
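The synchronous, anonymous model just described can be emulated by a simple round-based simulator. The sketch below is our own schematic rendering (function names and the exact interface are assumptions, not the paper's formalism): every process runs the same deterministic local step on its state, the labels of its channels, and the messages received in the previous round.

```python
def run_synchronous(adj, local_step, init_state, max_rounds=1000):
    """adj: node -> set of (neighbour, label); all processes run the same local_step.
    local_step(state, labels, inbox, round) -> (new_state, {label: message}, terminated)."""
    states = {v: init_state([l for _, l in adj[v]]) for v in adj}
    inboxes = {v: [] for v in adj}
    done = {v: False for v in adj}
    for rnd in range(max_rounds):
        outgoing = {}
        for v in adj:
            if done[v]:
                continue
            labels = [l for _, l in adj[v]]
            states[v], msgs, done[v] = local_step(states[v], labels, inboxes[v], rnd)
            outgoing[v] = msgs
        # Deliver messages: identical labels receive identical messages, as in the model.
        inboxes = {v: [] for v in adj}
        for v, msgs in outgoing.items():
            for (u, label) in adj[v]:
                if label in msgs:
                    inboxes[u].append((label, msgs[label]))
        if all(done.values()):
            break
    return states
```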

We say that a distributed algorithm solves DCSP(Γ) if, given any instance I of DCSP(Γ), the algorithm terminates and the terminating state of every process correctly states that I is satisfiable if it is, and that it is not satisfiable otherwise. Similarly, an algorithm solves the search problem for DCSP(Γ) if it solves the decision problem and, in the satisfiable case, the terminating state of every variable agent α(x) contains a value φ(x) such that φ is a satisfying assignment.

In terms of algorithmic complexity, there are a number of measures that can be of interest. Time complexity, which is our primary concern, corresponds to the amount of time required for the algorithm to terminate. This is closely related to the number of rounds of the algorithm, which is another measure that we are concerned with. Message complexity and bit complexity measure the total number of messages and bits exchanged respectively. These can be bounded easily from the maximum size of a message.

Iterated Degree and Degree Sequence.

We present a number of concepts from graph theory that carry over to CSPs. Their adaptation to DCSPs is straightforward in all cases. Consider the labelled factor graph of an instance I described in the previous paragraph. In what follows it will be convenient to allow instances with a disconnected factor graph. Let v be a node of the factor graph and denote its neighbourhood by N(v). The (zeroth) degree, denoted δ_0(v), of a node in the factor graph is simply a symbol that distinguishes variables from constraints: we set δ_0(x) to one fixed symbol for all x ∈ X and δ_0(C) to another for all C ∈ C. The iterated degree δ_k(v) of a node v (we remark that the notions of degree and iterated degree are well-defined concepts in graph theory; we borrow this terminology to refer to the analogous concepts in CSPs) is defined inductively: δ_k(v) is the multiset of pairs (ℓ, δ_{k-1}(u)) where u ranges over N(v) and ℓ is the set of labels of the edge between v and u. We write u ≈_k v if δ_k(u) = δ_k(v), and simply u ≈ v if δ_k(u) = δ_k(v) for all k. In this case, we say that u and v are iterated degree equivalent. We show in the Appendix (see Proposition A.2) that as k increases, the partition induced by ≈_k gets more refined, and indeed it reaches a fixed point: there is some k_0, bounded by the number of nodes of the factor graph, such that ≈_{k_0} coincides with ≈.
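A centralized rendition of the iterated degree computation is essentially colour refinement on the labelled factor graph. The sketch below is ours (it assumes the adjacency produced by the factor_graph helper above, with constraint nodes tagged ('C', index)); it refines node colours until the partition stabilises.

```python
def iterated_degrees(adj):
    """adj: node -> set of (neighbour, label). Returns a dict mapping each node to a
    canonical colour representing its iterated degree at the fixed point."""
    # Zeroth degree: one symbol for constraint nodes, another for variable nodes.
    colour = {v: ('Cons' if isinstance(v, tuple) and v and v[0] == 'C' else 'Var')
              for v in adj}
    while True:
        # New signature: old colour plus the multiset of (edge label, neighbour colour).
        signature = {v: (colour[v],
                         tuple(sorted((repr(l), colour[u]) for u, l in adj[v])))
                     for v in adj}
        # Rename signatures canonically so the encoding stays small.
        canon = {sig: i for i, sig in enumerate(sorted(set(signature.values()), key=repr))}
        new_colour = {v: canon[signature[v]] for v in adj}
        if len(set(new_colour.values())) == len(set(colour.values())):
            return new_colour   # partition stable: fixed point reached
        colour = new_colour
```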

The notion of iterated degree is strikingly relevant in our work as it captures what it means for two processes in a network to be indistinguishable. This implies that no distributed algorithm can differentiate between two iterated degree equivalent nodes, as we illustrate in the following result.

Proposition 1.

Let I be an instance of DCSP(Γ) whose factor graph is not necessarily connected, and consider two variables x, y ∈ X. Then, x ≈ y if and only if any terminating decision algorithm over I outputs the same decision at α(x) and α(y). Furthermore, if x ≈ y and I is satisfiable, then any terminating search algorithm outputs the same values at α(x) and α(y).

The following is a direct consequence of Proposition 1. We say that two instances I and I' have the same iterated degree sequence if there exists a bijection between the nodes of I and the nodes of I' such that for every k and every node v of I, the iterated degree of v in I is equal to the iterated degree of its image in I'. We note that in this case, if we construct the (disconnected) instance containing all the variables and constraints in I and I' as well as their corresponding agents, then every node is iterated degree equivalent to its image. Hence the result below follows.

Corollary 2.

Let I and I' have the same iterated degree sequence. Then, on inputs I and I', any terminating decision algorithm will report the same decision.

Polymorphisms.

Let R be a k-ary relation over a finite domain D. An m-ary polymorphism of R is an operation f : D^m → D such that the coordinate-wise application of f to any list of m tuples from R gives a tuple in R. More precisely, for any t^1, …, t^m ∈ R, we have that f(t^1, …, t^m) ∈ R, where f is applied coordinate-wise. We say that f is a polymorphism of a constraint language Γ if it is a polymorphism of all relations R ∈ Γ. Equivalently, we say that Γ is invariant under f. The set of polymorphisms of a constraint language Γ will be denoted by Pol(Γ).
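The definition can be checked directly by brute force; the following small sketch (function name ours) does exactly that and illustrates it on a toy relation.

```python
from itertools import product

def is_polymorphism(f, m, relation):
    """Check whether the m-ary operation f is a polymorphism of `relation`
    (a set of equal-length tuples): applying f coordinate-wise to any m tuples
    of the relation must yield a tuple of the relation."""
    arity = len(next(iter(relation)))
    for rows in product(relation, repeat=m):          # every choice of m tuples of R
        image = tuple(f(*(row[i] for row in rows))    # apply f coordinate-wise
                      for i in range(arity))
        if image not in relation:
            return False
    return True

# Example: binary min is a polymorphism of the "less-or-equal" relation on {0,1,2}.
LEQ = {(a, b) for a in range(3) for b in range(3) if a <= b}
print(is_polymorphism(min, 2, LEQ))   # True
```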

There is a particular construction of a CSP instance that is closely related to the clone of polymorphisms of the corresponding constraint language. Let Γ be a constraint language over a finite domain D. For any positive integer m, the indicator problem of order m for Γ is the instance IP_m(Γ) ∈ CSP(Γ) whose set of variables is D^m and which contains, for every relation R ∈ Γ and for every list of tuples t^1, …, t^m ∈ R, the constraint (R, s) where s[i] = (t^1[i], …, t^m[i]) for every i ≤ ar(R). It follows easily that, for every assignment f : D^m → D, f satisfies IP_m(Γ) if and only if f is an m-ary polymorphism of Γ.
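The indicator problem can be built mechanically. The sketch below is ours (reusing the hypothetical Instance class); its satisfying assignments, viewed as maps from D^m to D, are precisely the m-ary polymorphisms of the language.

```python
from itertools import product

def indicator_problem(gamma, domain, m):
    """gamma: iterable of relations (sets of tuples over `domain`).
    Returns an Instance whose variables are the tuples of domain^m; a satisfying
    assignment is exactly an m-ary polymorphism of gamma."""
    variables = list(product(domain, repeat=m))
    constraints = []
    for relation in gamma:
        arity = len(next(iter(relation)))
        for rows in product(relation, repeat=m):                  # m tuples t^1, ..., t^m of R
            scope = tuple(tuple(rows[j][i] for j in range(m))     # scope[i] = (t^1[i], ..., t^m[i])
                          for i in range(arity))
            constraints.append((relation, scope))
    return Instance(variables, list(domain), constraints)
```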

An m-ary operation f is said to be symmetric if for all a_1, …, a_m ∈ D and for all permutations σ of {1, …, m} we have that f(a_1, …, a_m) = f(a_{σ(1)}, …, a_{σ(m)}). As anticipated in the introduction, invariance under symmetric polymorphisms plays a crucial role in the proof of our main theorem.

Now, our work unveils a novel structure in the space of solutions of a CSP instance that is deeply connected to the symmetry of its polymorphisms. In particular, Γ containing symmetric polymorphisms of all arities is equivalent to the fact that every satisfiable instance of CSP(Γ) admits a satisfying assignment that is constant on the classes of the partition induced by ≈. This is the main result of the next section.

3 Basic Linear Programming relaxation

For any CSP instance I = (X, D, C) there is an LP relaxation (usually called the basic LP relaxation, see for example [Kun2012]), denoted BLP(I), which is defined as follows. It has a variable p_x(a) for each x ∈ X and a ∈ D, and a variable p_C(t) for each C ∈ C and t ∈ R, where R is the constraint relation of C. All variables must take values in the range [0, 1]. The value of p_x(a) is interpreted as the probability that x is assigned to a. Similarly, the value of p_C(t) is interpreted as the probability that the scope of C is assigned component-wise to the tuple t. In this paper we only deal with a feasibility problem (that is, there is no objective function). The variables are restricted by the following equations:

(1)   Σ_{a ∈ D} p_x(a) = 1   for every x ∈ X
(2)   Σ_{t ∈ R(C), t[i] = a} p_C(t) = p_{s(C)[i]}(a)   for every C ∈ C, every i ≤ ar(C), and every a ∈ D,

where we denote the relation and scope of a constraint C by R(C) and s(C) respectively. We say that BLP decides CSP(Γ) if every instance I ∈ CSP(Γ) is satisfiable whenever BLP(I) is feasible. We will use the following well-known result (which for the reader's convenience we prove in the Appendix).

Theorem 3 (see [Kun2012]).

If Γ has symmetric polymorphisms of all arities, then BLP decides CSP(Γ). Moreover, given any feasible solution p of BLP(I), if I is satisfiable then it has a solution φ such that, for all x, y ∈ X with p_x(a) = p_y(a) for all a ∈ D, we have φ(x) = φ(y).
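For concreteness, the feasibility of BLP(I) can be checked with an off-the-shelf LP solver. The sketch below is our own illustration (the variable layout, helper names and the use of scipy.optimize.linprog are assumptions, not the paper's implementation); it reuses the hypothetical Instance class from the earlier sketch.

```python
import numpy as np
from scipy.optimize import linprog

def blp_feasible(instance):
    """Return True iff the basic LP relaxation of `instance` is feasible."""
    X, D, Cs = instance.variables, instance.domain, instance.constraints
    # Index the LP variables: first p_x(a), then p_C(t) for each constraint C and tuple t in R.
    var_index = {('v', x, a): i for i, (x, a) in enumerate((x, a) for x in X for a in D)}
    offset = len(var_index)
    for c_idx, (R, scope) in enumerate(Cs):
        for t in R:
            var_index[('c', c_idx, t)] = offset
            offset += 1
    A_eq, b_eq = [], []
    for x in X:                                   # equation (1): sum_a p_x(a) = 1
        row = np.zeros(offset)
        for a in D:
            row[var_index[('v', x, a)]] = 1.0
        A_eq.append(row); b_eq.append(1.0)
    for c_idx, (R, scope) in enumerate(Cs):       # equation (2): marginals agree
        for i, x in enumerate(scope):
            for a in D:
                row = np.zeros(offset)
                for t in R:
                    if t[i] == a:
                        row[var_index[('c', c_idx, t)]] = 1.0
                row[var_index[('v', x, a)]] = -1.0
                A_eq.append(row); b_eq.append(0.0)
    res = linprog(c=np.zeros(offset), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0.0, 1.0)] * offset, method='highs')
    return res.success
```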

The following theorem reveals a useful structure inside the solutions of the BLP.

Theorem 4.

Let I be an instance of CSP(Γ) such that BLP(I) is feasible. Then, BLP(I) has a feasible solution p such that for every x, y ∈ X with x ≈ y and every a ∈ D, p_x(a) = p_y(a).

Proof.

(Sketch) We start by rewriting the program in the form

(3)   A z ≥ b,   0 ≤ z ≤ 1,

by replacing every equality c = d by the two inequalities c ≥ d and −c ≥ −d.

The rows of A correspond to the inequalities of (3) and its columns to the LP variables. The main idea of the proof is to apply the Multiplicative Weight Update (MWU) algorithm, a well-known weight update procedure that is widely used in optimisation methods and machine learning techniques. MWU has a number of variants; the one that is relevant to our paper is described in Algorithm 1. This variant assumes that there is a feasible solution. The algorithm requires the existence of an oracle which, given a probability vector p over the rows (i.e., a vector with non-negative entries such that the sum of all its entries is 1), outputs a vector z which is a solution to the weaker problem

(4)   p^T A z ≥ p^T b,   0 ≤ z ≤ 1,

if one exists, or correctly states that no such vectors exist otherwise.

Initialisation: Fix ε ≤ 1/2 and let w^(1) be a vector with one entry per row of A, whose entries, called weights, are initially set to 1.
for t = 1, …, T do
       Compute the probability vector p^(t) = w^(t)/Φ^(t), where Φ^(t) = Σ_i w_i^(t)
       Let z^(t) be a solution satisfying (4) with p = p^(t) given by oracle O
       Compute the losses ℓ^(t) = (A z^(t) − b)/ρ, for a suitable scaling constant ρ
       Compute the new weights w_i^(t+1) = w_i^(t)(1 − ε ℓ_i^(t))
end for
return z̄ = (1/T) Σ_{t=1}^{T} z^(t)
Algorithm 1 Multiplicative Weight Update

Under some technical conditions that provide an upper bound on the number of rounds necessary to achieve a given approximation (see [Arora2012]), it follows that the MWU algorithm converges, as the number of rounds T grows, to a solution of (3).

Now consider the oracle O that, given a probability vector p, returns the 0-1 vector z where, for every column j, z_j = 1 if the j-th entry of p^T A is positive and z_j = 0 otherwise. Since z maximizes p^T A z under the restriction 0 ≤ z ≤ 1, it follows that z satisfies (4).

We note that ≈ induces in a natural way an equivalence relation on the variables of BLP(I) (namely, p_x(a) is equivalent to p_y(a) whenever x ≈ y, and similarly for the variables p_C(t)), which can be extended to an equivalence relation on the set of columns of A. Similarly, ≈ induces in a natural way an equivalence relation on the rows of A. Then our goal is to show that the positions of equivalent entries in the output z̄ are identical. This is done by showing by induction the more general fact that at each iteration of the algorithm the positions of all equivalent entries in z^(t) are identical, and that for each of the row-indexed vectors (w^(t), p^(t), and ℓ^(t)) the positions of all equivalent entries are identical as well. ∎
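A compact numerical sketch of this loop, in the spirit of [Arora2012], is given below for concreteness. The step size, iteration count, scaling constant and sign-based oracle are assumptions made here for illustration, not constants taken from the paper.

```python
import numpy as np

def mwu_feasibility(A, b, rounds=2000, eps=0.1):
    """Approximately find z in [0,1]^n with A z >= b, assuming such a z exists.
    Oracle: given a probability vector p over rows, maximise (p^T A) z over the box
    by setting z_j = 1 exactly when (p^T A)_j > 0."""
    m, n = A.shape
    w = np.ones(m)                          # one weight per inequality (row)
    rho = np.abs(A).sum(axis=1).max() + np.abs(b).max()   # crude width bound for scaling
    zs = []
    for _ in range(rounds):
        p = w / w.sum()                     # probability vector over the rows
        z = (p @ A > 0).astype(float)       # oracle's box-optimal response
        loss = (A @ z - b) / rho            # rows satisfied with slack lose weight ...
        w = w * (1 - eps * loss)            # ... violated rows gain weight
        zs.append(z)
    return np.mean(zs, axis=0)              # the average iterate approximately satisfies A z >= b
```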

We note here that Theorem 4 can be alternatively proved using the connection between iterated degree and linear algebra introduced in Section 5.

We conclude the section by proving the following theorem on the structure of the solution space of CSP instances.

Theorem 5.

Let Γ be a finite constraint language. The following are equivalent:

  1. Γ has symmetric polymorphisms of all arities.

  2. For every satisfiable instance I ∈ CSP(Γ) there exists a satisfying assignment φ such that for all pairs of variables x, y ∈ X, if x ≈ y then φ(x) = φ(y).

Proof.

(1) ⇒ (2). Let I be a satisfiable instance of CSP(Γ), where Γ has symmetric polymorphisms of all arities. Consider the solution p of BLP(I) given by Theorem 4 and note that it satisfies p_x(a) = p_y(a) for all x ≈ y and all a ∈ D. Then, by Theorem 3, I has a solution φ which satisfies φ(x) = φ(y) for all x ≈ y.

(2) ⇒ (1). Let Γ satisfy (2) and let m ≥ 1. We shall prove that Γ has a symmetric polymorphism of arity m. Let IP_m(Γ) be the indicator problem of order m. Recall that every solution to IP_m(Γ) corresponds to an m-ary polymorphism of Γ, and hence the indicator problem is always satisfiable since, for instance, the projection to the first coordinate is a polymorphism of Γ. Let f be a solution of the indicator problem which satisfies condition (2). It is easy to show by induction on k that for every tuple (a_1, …, a_m) ∈ D^m, every permutation σ of {1, …, m}, and every k, δ_k((a_1, …, a_m)) = δ_k((a_{σ(1)}, …, a_{σ(m)})), which implies that (a_1, …, a_m) ≈ (a_{σ(1)}, …, a_{σ(m)}). We conclude that f is symmetric as required. ∎

4 The Complexity of DCSP

The primary goal of this section is to prove the main theorem of this paper, namely, the dichotomy theorem for the tractability of DCSP(Γ), which we now state.

Theorem 6.

DCSP(Γ) is solvable in polynomial time if and only if Γ has symmetric polymorphisms of all arities. Otherwise, DCSP(Γ) cannot be solved in finite time.

We show hardness of constraint languages that do not have symmetric polymorphisms of all arities in Section 4.1 and tractability of the remaining languages in Section 4.2. In addition, using standard methods it is easy to extend the decision algorithm (see the appendix) so that it also provides a solution. Hence we have:

Theorem 7.

The search problem for DCSP(Γ) is solvable in polynomial time if and only if Γ has symmetric polymorphisms of all arities. Otherwise, it cannot be solved in finite time.

4.1 Intractable Languages

In this section we focus on intractable languages, that is, the hardness part of Theorem 6.

Theorem 8.

Let Γ be a constraint language on a finite domain D. If Pol(Γ) does not contain symmetric operations of all arities, then there is no algorithm that solves DCSP(Γ) in finite time.

Schematically, the proof goes as follows. Assume that Γ does not have symmetric polymorphisms of some arity m. We shall use the relation pp-defined by the indicator problem of order m and show that, using it as the constraint relation, there always exist two instances, one satisfiable and the other unsatisfiable, that are indistinguishable locally, that is, they have the same iterated degree sequence. Therefore, any algorithm will return the same output on both instances, meaning that one of these outputs is wrong. Before embarking on the proof we state the following combinatorial lemma, which will be needed in the proof.

Lemma 9.

Let q and r be positive integers. Then, for every sufficiently large multiple n of r, there exists a collection S of q-element subsets of {1, …, n} satisfying the following properties:

  (a) S contains every q-element subset of {1, …, r};

  (b) every element of {1, …, n} appears in the same number of sets of S.

Proof of Theorem 8.

Assume that Γ does not contain symmetric polymorphisms of arity m. Fix an arbitrary order t^1, …, t^N on the tuples of D^m, where N = |D|^m, and consider the relation R of arity N defined as

R = { (f(t^1), …, f(t^N)) : f is an m-ary polymorphism of Γ }.

That is, R is the set of solutions of the indicator problem of order m, listed according to the fixed order on D^m. It follows easily (see Remark A.6 in the appendix) that if DCSP(Γ ∪ {R}) is not solvable in finite time then neither is DCSP(Γ).

Partition D^m into equivalence classes where two tuples a and b are related, denoted a ∼ b, if there exists some permutation σ on {1, …, m} such that a_i = b_{σ(i)} for every i. We shall use L to refer to the collection of classes and ℓ(a) to refer to the class of tuple a.

For every class ℓ ∈ L, define n_ℓ to be the number of tuples in ℓ. Then we can choose an integer large enough so that, for every ℓ ∈ L, it is a multiple of n_ℓ and it satisfies the hypotheses of Lemma 9 with q = n_ℓ and a suitable value of r.

We are now ready to construct two instances I_1 and I_2 of DCSP(Γ ∪ {R}), which are indistinguishable with respect to their iterated degree sequence, but differ with regard to satisfiability. The two instances have the same set of variables, defined to be the disjoint union X = ⋃_{ℓ ∈ L} X_ℓ, where X_ℓ is a set of variables associated with the class ℓ whose size is the integer chosen above.

We start by constructing the constraints of the unsatisfiable instance I_1, which we will do in two stages. First, for every class ℓ ∈ L, let B_ℓ be the collection of sets of cardinality n_ℓ given by Lemma 9, as before with q = n_ℓ and a suitable value of r. Note that each set in B_ℓ defines naturally a subset of X_ℓ, so we shall abuse notation and assume that B_ℓ is a collection of subsets of X_ℓ.

To simplify notation it will be convenient to use B as a shorthand for the indexed family (B_ℓ)_{ℓ ∈ L}. Now let S = (S_ℓ)_{ℓ ∈ L} be an indexed family satisfying S_ℓ ∈ B_ℓ for every ℓ ∈ L. We associate to S the constraint (R, s) where the scope s is constructed as follows. Before defining s we need some preparation. Recall that every coordinate of R, and hence of s, is associated to a tuple of D^m, so we can talk of the class to which each coordinate belongs. In particular, there are n_ℓ coordinates in s of class ℓ. Hence, by fixing some arbitrary ordering, we can use 1, …, n_ℓ to refer to the coordinates in s of class ℓ. Then, informally, S_ℓ describes which variables from X_ℓ to use in order to fill the coordinates of class ℓ. Formally, for every ℓ ∈ L and each j ≤ n_ℓ, the j-th coordinate of class ℓ in s is assigned the j-th element of S_ℓ in increasing order.

We add such a constraint for each of the possible choices of S. Therefore, after the first stage we have exactly ∏_{ℓ ∈ L} |B_ℓ| constraints.

In the second stage we add more constraints, which will yield the particular symmetry of I_1. Note that every permutation σ on {1, …, m} induces a permutation σ̂ on the coordinates of R in a natural way. Specifically, if coordinate i of R is associated to tuple t^i, then σ̂(i) is the coordinate associated to the tuple (t^i_{σ(1)}, …, t^i_{σ(m)}). Then, in the second stage, for each permutation σ on {1, …, m} and for every constraint (R, s) added in the first stage, we add the constraint (R, s′) where s′[i] = s[σ̂(i)] for every i. Therefore, after the second stage we have the required total number of constraints.

We now turn to I_2. The constraints are constructed in a similar way, but instead of using the family B_ℓ in the first stage, we use a different family B′_ℓ. In particular, for each class ℓ, B′_ℓ is obtained by partitioning X_ℓ into n_ℓ blocks of consecutive elements, so that each block has exactly |X_ℓ|/n_ℓ elements; then B′_ℓ contains the sets that can be obtained by selecting one element from each block. The second stage is done exactly as in I_1.

Claim 1.

I_1 and I_2 have the same iterated degree sequence.

Proof of Claim 1 (Sketch). Let ℓ ∈ L. First, we observe that in both instances every variable of X_ℓ participates in the same number of constraints and, because of the operation done in the second stage, the positions of the scope in which a variable of X_ℓ participates distribute evenly among the positions associated to ℓ. Using this fact it is very easy to prove by induction that I_1 and I_2 have the same iterated degree sequence. More specifically, it is shown that all constraints in I_1 and I_2 have the same iterated degree and that, for every class ℓ, all variables in X_ℓ also have the same iterated degree in I_1 and I_2.

Claim 2.

Instance I_1 is unsatisfiable.

Proof of Claim 2. Assume by contradiction that I_1 has a satisfying assignment φ. For each class ℓ, consider the values given by φ to the first r variables in X_ℓ. Since there are sufficiently many of them, it follows by the pigeon-hole principle that at least n_ℓ of these variables are assigned by φ to the same value of D. Let S_ℓ be a subset of X_ℓ containing n_ℓ of these variables (we know that this subset belongs to B_ℓ by condition (a) of Lemma 9). Now consider the constraint in I_1 associated to S = (S_ℓ)_{ℓ ∈ L}, which belongs to I_1 by construction. If φ is a solution to I_1, then the restriction of φ to the scope of this constraint corresponds to an m-ary polymorphism of Γ. But φ assigns the same value to the coordinates associated to any two related tuples t ∼ t′, which implies that this polymorphism is symmetric, a contradiction.

Claim 3.

Instance I_2 is satisfiable.

Proof of Claim 3. Let f be any m-ary polymorphism of Γ (for example the projection operation defined as f(a_1, …, a_m) = a_1). We shall construct a solution φ of I_2 in the following way. Recall that in the definition of I_2 we have partitioned each set X_ℓ into consecutive blocks. In the first stage, all the elements of a given block are placed in the same coordinate of the scope. So, if t^i is the tuple associated to coordinate i, and hence to the block placed at coordinate i, then we only need all variables in that block to be assigned to f(t^i) in order to satisfy all constraints added in the first stage. This assignment also satisfies the constraints added in the second stage because, if f is an m-ary polymorphism of Γ, then for every permutation σ on {1, …, m} the operation f_σ defined as f_σ(a_1, …, a_m) = f(a_{σ(1)}, …, a_{σ(m)}) is also a polymorphism of Γ.

To sum up, we have two instances I_1 and I_2, the latter of which is satisfiable while the former is not, which have the same iterated degree sequence. It follows from Corollary 2 that any distributed algorithm will give the same output on both instances, meaning that no algorithm can solve DCSP(Γ ∪ {R}). From Remark A.6 it then follows that there are also no algorithms that solve DCSP(Γ). ∎

4.2 Tractable Languages

In this section we turn our attention to the tractable case. In particular we shall show the following:

Theorem 10.

Let Γ be a constraint language that is invariant under symmetric polymorphisms of all arities. Then there is an algorithm that solves DCSP(Γ) whose total running time, number of rounds, and maximum message size are each polynomial in n and m, where n and m are the number of variables and constraints, respectively, of the input instance.

Note that this implies the "if" part of Theorem 6. The algorithm is composed of two phases. In the first phase, a distributed version of the colour refinement algorithm allows every process to calculate its iterated degree. Then, thanks to Theorem 5, we can use the iterated degree of a variable as its ID for the second phase, implying that a distributed, adapted version of the consistency algorithm of [kozik2018solving], in which messages are tagged with a process's iterated degree, solves the decision problem for DCSP(Γ).

Distributed Colour Refinement.

Let I be an instance of DCSP(Γ) with n variables and m constraints. There is a very natural way to calculate an agent's iterated degree in a distributed way, both for variables and for constraints. This is a mere adaptation of the well-known 1-dimensional Weisfeiler-Leman algorithm, also known as colour refinement (see for example [grohe2017color, grohe2020word2vec]). The algorithm proceeds in rounds. At round 0, each agent α(v), for v ∈ X ∪ C, computes δ_0(v) and broadcasts it to all its neighbours. At round k, each agent knows the (k−1)-th degrees of its neighbours, which it received in the previous round, uses them to compute δ_k(v), and broadcasts δ_k(v) to its neighbours. If k is at least the number of nodes of the factor graph (see Proposition A.2 in the Appendix), then for every pair of nodes u, v satisfying δ_k(u) = δ_k(v) we have that u ≈ v, which implies that we can essentially regard the iterated degree as a common unique ID for all variables that are iterated degree equivalent. Then in O(n + m) rounds each agent can compute δ(v), where we use δ as a shorthand for δ_{n+m}.

Although this is not necessary to achieve a polynomial running time, we can reduce the size required to encode δ_k(v) with the following variation of the distributed algorithm introduced above. After computing the k-th degree and before proceeding to compute the (k+1)-th degree, all agents broadcast their k-th degree to their neighbours. At the next round, every agent broadcasts all the degrees received (removing repetitions) to its neighbours, so that after a number of rounds bounded by the diameter of the factor graph every agent has received a complete list of the k-th degrees of all nodes. Every agent orders all degrees (this can easily be done in such a way that all agents produce the same order) and takes the rank of its own degree in this order as the new encoding of δ_k(v); it then proceeds to send out this new encoding and to calculate δ_{k+1}(v) accordingly.
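The rank-compression step can be illustrated by a small helper (ours): once an agent holds the complete collection of current degrees, it replaces its own degree by its position in an ordering that every agent computes identically.

```python
def rank_encoding(own_degree, all_degrees):
    """all_degrees: the complete collection of k-th degrees gathered from the network.
    Every agent sorts the distinct degrees the same way (here: by their textual
    representation) and uses the rank of its own degree as its compressed encoding."""
    ordered = sorted(set(map(repr, all_degrees)))
    return ordered.index(repr(own_degree))
```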

In this way, each degree is encoded with a number of bits that is logarithmic in n + m. Note that the total number of rounds of this algorithm remains polynomial in n + m and that, provided every set of degrees is stored as an ordered array, the cost of each computation done locally by an agent at a given round is bounded above by the size of the largest message sent.

As we will see, the price of an increase in the number of rounds is compensated by the effect of the compressed encoding on both the time complexity and the size of the messages passed.

The Distributed Consistency Algorithm.

It is well known that if a constraint language has symmetric operations of all arities then it satisfies the so-called bounded width property (see [Barto2017]). We avoid introducing the definition of bounded width as it is not needed for our results, and refer the reader to [Barto2017] for reference. It has been shown in [kozik2018solving] that if Γ has symmetric operations of all arities and an instance I ∈ CSP(Γ) satisfies a certain combinatorial consistency condition, then I has a solution. Instead of stating literally the result in [kozik2018solving], we shall state a weaker version that uses a different notion of consistency, more suitable for the model of distributed computation introduced in this paper.

A set system S is a subset of X × D. We shall use S(x) to denote the set {a ∈ D : (x, a) ∈ S}.

A walk of length k (in instance I) is any sequence p = x_0, C_1, x_1, …, C_k, x_k where x_0, …, x_k are variables, C_1, …, C_k are constraints, and x_{i−1} and x_i appear in the scope of C_i for every i ≤ k. Note that walks are precisely the walks in the factor graph (in the standard graph-theoretic sense) starting and finishing at a variable.

Let S be a set system, p = x_0, C_1, x_1, …, C_k, x_k be a walk, and B ⊆ S(x_0), where x_0 is the starting node of p. The propagation of B via p under S, denoted B^p, is the subset of S(x_k) defined inductively on the length of p as follows (a procedural sketch of a single propagation step is given after the list below). If k = 0 then B^p = B. Otherwise, write p = p′, C_k, x_k where p′ is a walk of length k − 1 ending at x_{k−1}, and let B′ = B^{p′}. Then we define B^p to contain all a ∈ S(x_k) such that there exist b ∈ B′ and a tuple t in the relation R of C_k = (R, s) such that, for every i ≤ ar(C_k), t satisfies the following conditions:

  1. t[i] ∈ S(s[i]),

  2. if s[i] = x_{k−1} then t[i] = b, and

  3. if s[i] = x_k then t[i] = a.
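The following sketch reflects our reading of the inductive definition above (S is a dict mapping variables to candidate sets, and constraints follow the (relation, scope) convention of the earlier sketches); it computes one propagation step across a single constraint of the walk.

```python
def propagate_step(B_prev, constraint, x_prev, x_next, S):
    """One inductive step of the propagation B^p: the values reachable at x_next."""
    relation, scope = constraint
    reachable = set()
    for a in S[x_next]:
        found = False
        for b in B_prev:
            for t in relation:
                ok = True
                for i, y in enumerate(scope):
                    if t[i] not in S[y]:           # condition (1)
                        ok = False; break
                    if y == x_prev and t[i] != b:  # condition (2)
                        ok = False; break
                    if y == x_next and t[i] != a:  # condition (3)
                        ok = False; break
                if ok:
                    found = True
                    break
            if found:
                break
        if found:
            reachable.add(a)
    return reachable
```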

We are now ready to state the result from [kozik2018solving] that we shall use.

Theorem 11 (follows from [kozik2018solving]).

Let I be an instance of CSP(Γ) where Γ has bounded width, and let S be a set system such that S(x) ≠ ∅ for every x ∈ X and such that, for every walk p starting and finishing at the same variable x and for every a ∈ S(x), a belongs to {a}^p. Then I is satisfiable.

Our goal is to design a distributed algorithm that either correctly determines that an instance is unsatisfiable, or produces a set system verifying the conditions of Theorem 11. This is not possible in general, due to the fact that agents are anonymous and hence a hypothetical algorithm that would generate a walk in a distributed way would be unable to determine whether its initial and end nodes are the same. However, thanks to the structure established by Theorem 5, this difficulty can be overcome when Γ has symmetric polymorphisms of all arities because, essentially, the iterated degree of a node can act as its unique identifier. To make this intuition precise we will need to introduce a few more definitions.

We say that a pair (x, a) ∈ X × D is ≈-supported if for every walk p starting at x and finishing at a node y with x ≈ y, we have that {a}^p contains a.

Remark 12.

We note that if (x, a) is not ≈-supported and p = x_0, C_1, x_1, …, C_k, x_k (with x_0 = x) is a walk of minimal length among all walks witnessing that (x, a) is not ≈-supported, then k ≤ n · 2^{|D|}. Indeed, if we let B_i = {a}^{p_i}, where p_i is the prefix of p ending at x_i, then we have that (x_i, B_i) ≠ (x_j, B_j) for every i < j, since otherwise the shorter walk obtained by removing the portion of p between x_i and x_j would contradict the minimality of p. Since there are at most n choices for each x_i and at most 2^{|D|} choices for each B_i, the bound follows.

We say that a set system S is safe if for every solution φ satisfying φ(y) = φ(z) whenever y ≈ z, we have φ(x) ∈ S(x) for every x ∈ X.

Then, we have

Lemma 13.

Let S be a safe set system and let (x, a) be a pair that is not ≈-supported. Then S \ {(x, a)} is safe.

Proof.

Let φ be any solution satisfying φ(y) = φ(z) for every y, z with y ≈ z, and let p = x_0, C_1, x_1, …, C_k, x_k (with x_0 = x) be any walk in I witnessing that (x, a) is not ≈-supported (i.e., p starts at x, finishes at a node x_k with x ≈ x_k, and a ∉ {a}^p). Since S is safe we have that φ(x_i) ∈ S(x_i) for every i. It remains to see that φ(x) ≠ a, so that the safety condition remains unaltered when (x, a) is removed. First, it follows easily by induction that φ(x_i) ∈ {φ(x)}^{p_i} for every i, where p_i is the prefix of p ending at x_i. Then, since x ≈ x_k, φ(x_k) = φ(x), and a ∉ {a}^p, it follows that φ(x) ≠ a. ∎

Our distributed consistency algorithm (that is, the second phase of the algorithm) works as follows. Every variable agent α(x) maintains a set S(x) ⊆ D in such a way that the set system S = {(x, a) : a ∈ S(x)} is guaranteed to be safe at all times. The sets S(x) are modified by an iterative process. We shall use S_j(x) to denote the content of S(x) at the j-th iteration, where an iteration is a loop of a fixed number of consecutive rounds. The exact number of rounds per iteration will be made precise later; for now we mention that it is linear in the number of variables. Initially, S_0(x) is set to D for every x. At iteration j ≥ 1, S_j(x) is obtained by removing from S_{j−1}(x) all the elements a such that (x, a) is not ≈-supported. Then, in at most n · |D| iterations we obtain a fixed point S_∞.

The key observation is that, when Γ has symmetric polymorphisms of all arities, the satisfiability of I can be determined from S_∞. Indeed, if S_∞(x) = ∅ for some x, then we can conclude from the fact that S_∞ is safe and from Theorem 5 that I has no solution. Otherwise, S_∞ satisfies the conditions of Theorem 11 and, hence, I is satisfiable.
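In centralized form, and ignoring the distributed implementation discussed next, the outer loop can be sketched as follows (our own sketch; is_supported stands for a hypothetical test of the walk condition, e.g. built on propagate_step above).

```python
def consistency_fixed_point(instance, is_supported):
    """Iteratively discard values that are not supported until a fixed point is reached.
    is_supported(x, a, S) is assumed to test the walk condition for the pair (x, a)."""
    S = {x: set(instance.domain) for x in instance.variables}
    changed = True
    while changed:
        changed = False
        for x in instance.variables:
            bad = {a for a in S[x] if not is_supported(x, a, S)}
            if bad:
                S[x] -= bad
                changed = True
    if any(not S[x] for x in instance.variables):
        return 'unsatisfiable', S
    # When Gamma has symmetric polymorphisms of all arities, Theorem 11 applies here.
    return 'possibly satisfiable', S
```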

It remains to see how to compute S_j from S_{j−1}. In an initial preparation step for every iteration, every variable agent α(x) sends S_{j−1}(x) to all its neighbours. To compute S_j the algorithm then proceeds in rounds. All the messages sent are sets containing triplets of the form (a, B, d) where a ∈ D, B ⊆ D, and d is the iterated degree of some variable. It follows from the fact that there are at most n possibilities for the iterated degree of a variable that the size of each message is polynomial in n.

The agents controlling variables and constraints alternate. That is, variables perform internal and send events at even rounds and receive messages at odd rounds, while constraints perform internal and send events at odd rounds and receive messages at even rounds.

More specifically, in round 0 of iteration j, every variable agent α(x) sends to its neighbours the message containing all triplets of the form (a, {a}, δ(x)) with a ∈ S_{j−1}(x). At round 2r for r ≥ 1, α(x) computes the union M = M_1 ∪ ⋯ ∪ M_s, where M_1, …, M_s are the messages it received at the end of round 2r − 1. Subsequently, for every triplet (a, B, d) ∈ M with d = δ(x) and a ∉ B, it marks a as 'not ≈-supported'. Finally, it sends the message M to all its neighbours. This computation can be done in time proportional to the total size of the messages received, provided that each message is stored as an ordered array.

In round 2r + 1, every constraint agent α(C), where C = (R, s), computes from the messages received from its neighbours in the previous round, for each neighbouring variable x, the set M_x which contains, for every variable y in the scope of C and every triplet (a, B, d) in the message received from α(y), the triplet (a, B′, d) where B′ is the propagation of B from y to x via C. Finally, it sends to each neighbour α(x) the corresponding message M_x. Note that while α(C) does not know the address of α(x) specifically, knowing the label of the channel that connects them is sufficient to calculate M_x correctly and send the message accordingly. Moreover, for given B, y and x, α(C) can compute B′ efficiently, as α(C) knows both R and the sets S_{j−1}(z) for the variables z in its scope. Hence, since the arity of the relations is fixed (as Γ is fixed), the total running time at each iteration of a constraint agent is polynomial in n and m.

Now it is immediate to show by induction that, for every r, every variable x and every constraint C incident to x, the message sent by α(x) to α(C) at the end of round 2r consists precisely of the triplets (a, {a}^p, δ(y)) where p ranges over the walks of length r that start at some variable y and finish at x and a ∈ S_{j−1}(y), and that the message sent by α(C) to α(x) at the end of round 2r + 1 admits an analogous description in terms of walks whose last constraint is C.

By Remark 12, a number of rounds linear in n suffices to identify all elements of S_{j−1}(x) that are not ≈-supported. Hence, after this fixed number of rounds every variable agent α(x) computes S_j(x) by removing all the elements of S_{j−1}(x) that are marked as 'not ≈-supported'. If S_j(x) = ∅, then α(x) initiates a wave, which is propagated by all its neighbours, broadcasting that an inconsistency was detected. In this case, in at most n + m additional rounds all agents can correctly declare that I is unsatisfiable. Otherwise, the computation of S_{j+1} proceeds as detailed above.

To sum up, the distributed consistency algorithm consists of at most n · |D| iterations, each consisting of a number of rounds linear in n, where the total running time for internal events at a given round and the maximum size of each message transmitted are both polynomial in n and m. This completes the proof of Theorem 10.

5 Equitable Partitions, Fractional Isomorphism and the Basic Linear Programming Relaxation

In this section we lift the well-known link between iterated degree, fractional isomorphisms, and equitable partitions of graphs (see [Scheinerman2011fractional] for example) to CSPs. This provides a new perspective on some of the results seen in the previous sections and connects these concepts with the basic linear programming relaxation for CSPs. As a byproduct, we shall show how to reprove the main result of Section 3 using exclusively simple linear algebra. We start by giving some definitions.

A matrix M of non-negative real numbers is said to be left (resp. right) stochastic if all its columns (resp. rows) sum to 1. Note that, unlike in the classical definition of a singly stochastic matrix, we do not require M to be square. On the other hand, a doubly stochastic matrix is a square matrix that is both left and right stochastic. A permutation matrix is a doubly stochastic matrix where all entries are 0s and 1s.

It will be convenient to assume that the indices of the rows and columns of a matrix are arbitrary sets. Let M be a matrix with rows indexed by I and columns indexed by J, and consider two subsets I′ ⊆ I and J′ ⊆ J. The restriction of M to I′ and J′, denoted by M[I′, J′], is the matrix obtained by removing from M all the rows that do not belong to I′ and all the columns that do not belong to J′.

A matrix M is decomposable if its set of indices can be partitioned into sets K_1, …, K_r with r ≥ 2 such that M[i, j] = 0 whenever i and j belong to different sets. In this case we can write M = M_1 ⊕ ⋯ ⊕ M_r, where M_h is the restriction of M to the rows and columns in K_h. Otherwise, M is said to be indecomposable. Finally, denote by 1 the matrix whose entries are all ones.

Let G be a bipartite multigraph with bipartition V_1 ∪ V_2 where every edge is labelled with a symbol from a fixed finite set Λ. We can represent G as a matrix M with rows indexed by V_1, columns indexed by V_2 and entries in P(Λ), where P(Λ) denotes the power set of Λ: for any u ∈ V_1 and v ∈ V_2, M[u, v] is the set of labels of the edges joining u and v. For our purposes it will be practical to associate with M a number of 0-1 matrices. In particular, for every λ ∈ Λ we define M_λ as follows: if