# On Regularity of Max-CSPs and Min-CSPs

We study the approximability of regular constraint satisfaction problems, i.e., CSPs where each variable in an instance has the same number of occurrences. In particular, we show that for any CSP Λ, the existence of an α-approximation algorithm for unweighted regular Max-CSP Λ implies the existence of an (α − o(1))-approximation algorithm for weighted Max-CSP Λ in which regularity of the instances is not imposed. We also give an analogous result for Min-CSPs, and therefore show that, up to arbitrarily small error, it is sufficient to conduct the study of approximability of CSPs only on regular unweighted instances.


## 1 Introduction

This work studies the approximability of regular constraint satisfaction problems (CSPs), where we interpret regularity to mean that each variable appears the same number of times in the constraints of an instance. Since regular CSPs are a subclass of CSPs, approximating their optimal values is no harder than approximating the values of general CSPs. In this work we show that approximating the values of regular CSPs is also essentially no easier, i.e., we show that an approximation algorithm for regular instances of a particular CSP induces an approximation algorithm applicable to possibly non-regular instances. Therefore, we show that imposing regularity has almost no effect on the approximability of CSPs, and in particular, if one is not interested in additive factors in approximation ratios, the study of approximability may be conducted solely on regular instances.

In order to make the result more general, we revisit the previously studied question of weights vs. no weights for CSPs [10, 16] in the context of approximation. In particular, we show that it is sufficient to have an approximation algorithm for regular unweighted instances in order to construct an approximation algorithm applicable to possibly weighted instances of CSPs without the regularity restriction. To do so, we use a result from [16] which shows that weighted versions of CSPs have essentially the same (up to additive error) approximation ratios as their unweighted counterparts. We reprove this result here for the sake of completeness.

We organize the paper as follows. In Section 1.1, we give an informal definition of constraint satisfaction problems, and introduce decision and optimization versions of these problems. In Section 1.2, we discuss approximation of CSPs and highlight some breakthrough results. Motivated by this discussion, we introduce regular CSPs in Section 1.3, and state the new results proved in this work. Then, in Section 1.4 we compare the results of this paper with prior work. In Sections 2 and 3 we formalize the discussion given in Section 1. In particular, in Section 2 we fix the notation, and discuss the difference between weighted and unweighted CSPs. In Section 3 we describe our reductions and prove the results. Finally, in Section 4 we discuss possible applications of ideas and theorems introduced in this paper, and mention some open questions.

### 1.1 Constraint Satisfaction Problems

Constraint satisfaction problems (CSPs) represent one of the most fundamental classes of problems studied in complexity theory. Each CSP is described by a collection of predicates, which are used in instances of these problems as constraints on tuples of variables. Probably the best known CSP is 3-Sat, in which the constraints are given as disjunctive clauses on at most three literals, where a literal is either a variable or its negation. A basic problem is to determine whether we can satisfy all the constraints of a given CSP instance.

This problem is very well understood, due to Schaefer’s dichotomy theorem for CSPs on Boolean domains [21] and more recent proofs of a dichotomy theorem on general domains by Bulatov [8] and Zhuk [23].

In this work we focus on optimization variants of CSPs, in which we are interested in finding an assignment which maximizes/minimizes the number of constraints satisfied. Depending on the optimization variant, we refer to these problems as either Max-CSPs or Min-CSPs. A typical problem in this setting is Max-Cut, which has Boolean constraints of the form x_i ≠ x_j. Many optimization CSPs are intractable, and in this case we typically resort to approximation algorithms in order to estimate their optimal values. The strength of an approximation algorithm is expressed through its approximation ratio α, which measures the quality of a solution produced by the algorithm by comparing it to the optimal one (by convention, we assume in this work that approximation algorithms for Max-CSPs always have α ≤ 1, while for Min-CSPs α ≥ 1). In the study of approximation algorithms, we are typically interested in finding algorithms with the value of α as close to 1 as possible. We are also interested in studying which values of α are not feasible, in which case we talk about inapproximability.

### 1.2 Some Important Results on Approximability of CSPs

On the algorithmic side, semidefinite programming (SDP) has been a very fruitful tool for approximating optimal values of CSPs. The first approximation algorithm based on SDP dates back to the work of Goemans and Williamson, who devised a 0.878-approximation algorithm for the Max-Cut problem [11]. Ideas from this work have been very influential for subsequent research on approximation algorithms, and we highlight the SDP-based approximation algorithms for Max-3-Sat [15] and Max-2-Sat [5, 18].

On the hardness of approximation side, the celebrated PCP theorem [3, 4], combined with the usual assumption that P ≠ NP, provided a strong starting point used in many results showing impossibility of approximation. The highlight result using this starting point, along with the parallel repetition theorem of Raz [20] and long codes [7], comes from Håstad, who gave optimal inapproximability results for the Max-E3-Sat and Max-E3-Lin problems [12] (in Max-E3-Sat, constraints are clauses of width 3, while in Max-E3-Lin, constraints are linear equations over ℤ₂; we use the abbreviation E3 to denote that each constraint has width exactly 3, so Max-E3-Sat allows only clauses of width 3, while Max-3-Sat allows widths 1 and 2 as well). Recently, Siu On Chan gave optimal (up to a constant factor) inapproximability results for Max-CSPs in which the arity of the predicates is larger than the size of the domain [9].

Even though the PCP theorem was used with great success over the years, researchers still faced seemingly insurmountable difficulties in pursuit of sharp inapproximability results for many fundamental problems such as Max-2-Sat, approximate graph coloring, and minimum vertex cover. More precisely, the starting point of almost all reductions was the Label Cover problem [1], which was constructed by combining the PCP theorem with the parallel repetition theorem of Raz [20]. In order to overcome these difficulties, Khot introduced a modification of Label Cover called Unique Label Cover [17] and conjectured it to be NP-hard. This conjecture is known as the Unique Games Conjecture (UGC), and it quickly became the central problem in the hardness of approximation area, especially since its validity implies optimality of many already known approximation algorithms. Of special importance among UGC-based results is the one from Raghavendra [19], which shows that a certain version of semidefinite programming relaxation is optimal for all constraint satisfaction problems. Therefore, in case the UGC is shown to be true, this work would end the quest for optimal approximation algorithms for CSPs.

However, with the validity of the UGC still in question, there is an incentive to derive strong inapproximability results relying on other (weaker) assumptions, most preferably only on P ≠ NP. Furthermore, while Raghavendra’s result shows how to optimally approximate CSPs, it does not give us a suitable way to compute numerical values of optimal approximation ratios; this question remains open for almost all CSPs, even very simple ones.

### 1.3 New Results for Regular CSPs

In order to facilitate further study, it can be valuable to ask whether some additional properties of instances can be assumed when studying approximability of CSPs. In this work we address this topic by studying regular instances of Max-CSPs and Min-CSPs, i.e. instances in which each variable occurs the same number of times in the constraints. In particular, we prove the following results for poly-time approximation algorithms.

###### Theorem 1.

If there is an α-approximation algorithm for unweighted regular instances of Max-CSP Λ, then for every ε > 0 there is an (α − ε)-approximation algorithm for the weighted Max-CSP Λ.

###### Theorem 2.

If there is an α-approximation algorithm for unweighted regular instances of Min-CSP Λ, then for every ε > 0 there is an (α + ε)-approximation algorithm for the weighted Min-CSP Λ.

The proofs of Theorems 1 and 2 are based on a deterministic reduction introduced in Theorem 11. We also give a randomized reduction for Max-CSPs in order to prove the following theorem.

###### Theorem 3.

Given a Max-CSP Λ and ε > 0, it is sufficient to have an α-approximation algorithm for unweighted regular instances of bounded degree in order to obtain an (α − ε) randomized approximation algorithm for the weighted Max-CSP Λ, with success probability approaching 1 as a function of the number n of variables appearing in an instance.

The details of the randomized reduction can be found in Theorem 10. The randomized reduction also works for Min-CSPs, although with a substantially larger degree requirement, which makes it less efficient than even the deterministic one. For this reason we do not discuss the randomized reduction for Min-CSPs.

In the theorems above, instead of a constant ε we can choose ε = o(1) as a function of the number of variables n, to obtain an (α − o(1))-approximation in poly-time for Max-CSPs (or an (α + o(1))-approximation for Min-CSPs).

### 1.4 Prior Work

Both the randomized and the deterministic reductions introduced in this paper are based on a construction introduced by Trevisan [22], which was used to show hardness of approximating the values of bounded-degree instances of the Max-3-Sat problem. The reduction of Trevisan outputs instances in which each variable has some fixed degree D in expectation, and an argument relying on Chernoff’s bound then shows that with high probability the degrees of the variables do not exceed this expectation by much. Our deterministic reduction comes from a derandomization of the aforementioned result, while in the randomized reduction we reuse the mentioned result of Trevisan [22] and show in our argument that with high probability all degrees are close to the expected degree D.

In order to make our reductions applicable to the weighted setting, in this work we also show that the approximability of weighted Max-CSPs (or weighted Min-CSPs) is essentially the same (up to an additive loss in the approximation ratio) as the approximability of their unweighted versions. Let us remark that the same result was already proved in [16, Lemma 3.11] by relying on some results that appeared in [10]. We reprove this fact here for the sake of completeness.

## 2 Preliminaries

We consider constraint satisfaction problems given by the following definition.

###### Definition 4.

A constraint satisfaction problem (CSP) over a language Λ is a finite collection of predicates P_r : {0,1}^{ar(P_r)} → {0,1}.

For a predicate P we use ar(P) to denote its arity. We are interested in solving instances of CSPs, which are defined as follows.

###### Definition 5.

An instance F of a CSP Λ consists of a set of variables {x_1, …, x_n} taking values in {0,1} and a set of constraints {C_1, …, C_m}, where each constraint C_r is a pair (P_r, S_r), with P_r ∈ Λ being a predicate of arity ar(P_r), and S_r being an ordered tuple containing ar(P_r) distinct variables which we call a scope.

Sometimes when working with Boolean CSPs, the definition of an instance allows applying constraints to literals instead of variables. However, Definition 5 is more general, since we can always extend the family of predicates belonging to a CSP Λ to create a CSP Λ′, such that each instance of Λ in the sense of Definition 5 can be represented as an instance of Λ′ in which we allow constraints over variables and their negations, and vice-versa. In particular, we can create Λ′ by taking every P_r of Λ, considering all I ∈ {0,1}^{ar(P_r)}, and adding to Λ′ the predicates P^I_r defined as

  P^I_r(x_1, …, x_{ar(P_r)}) = P_r(x_1 + I_1, x_2 + I_2, …, x_{ar(P_r)} + I_{ar(P_r)}),

where I_j is the j-th element of the tuple I, and the addition takes place over ℤ₂.
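As a concrete illustration, the closure Λ′ can be generated mechanically. The following Python sketch (function and predicate names are ours, purely illustrative) builds all shifted predicates P^I for a Boolean predicate:

```python
from itertools import product

def close_under_negations(P, arity):
    """Given a Boolean predicate P: {0,1}^arity -> bool, return the family
    of shifted predicates P^I(x) = P(x XOR I) for all I in {0,1}^arity."""
    family = []
    for I in product((0, 1), repeat=arity):
        # bind I and P via default arguments so each closure keeps its own shift
        family.append(lambda xs, I=I, P=P: P(tuple(x ^ i for x, i in zip(xs, I))))
    return family

# Example: OR on two variables; the shift I = (0, 1) yields x1 OR (NOT x2).
OR2 = lambda xs: bool(xs[0] or xs[1])
fam = close_under_negations(OR2, 2)
```

Each of the 2^{ar(P)} shifts corresponds to one pattern of negated positions, which is exactly how constraints over literals are simulated by constraints over variables.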

The degree d_i of a variable x_i is defined as the number of times x_i is mentioned in the constraints, or formally

  d_i = ∑_{r=1}^{m} 𝟙_{S_r}(x_i),   (1)

where 𝟙_{S_r} is the indicator function of the scope S_r. Instances in which all variables have the same degree are called regular.
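The degree computation of equation (1) and the regularity check are straightforward to express in code; the following sketch (illustrative names, with scopes given as tuples of variable indices) mirrors the definition:

```python
from collections import Counter

def degrees(n_vars, scopes):
    """Degree d_i = number of scopes in which variable i occurs,
    following equation (1); each scope contains distinct variables."""
    d = Counter()
    for S in scopes:
        for v in S:
            d[v] += 1
    return [d[i] for i in range(n_vars)]

def is_regular(n_vars, scopes):
    """An instance is regular when all variables share the same degree."""
    return len(set(degrees(n_vars, scopes))) <= 1

# A 4-variable instance where every variable appears in exactly 2 scopes.
scopes = [(0, 1), (1, 2), (2, 3), (3, 0)]
```

For the example above, every variable has degree 2, so the instance is regular.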

Max/Min-CSP problems frequently appear in a setting in which the constraints of an instance are assigned non-negative weights, which are typically used to encapsulate the significance of each constraint. Let us now give the definition of these problems.

###### Definition 6.

A weighted instance F of a CSP Λ is an instance of Λ where each constraint C_r has a weight w_r ≥ 0, and ∑_{r=1}^{m} w_r = 1.

Obviously, unweighted instances can be seen as weighted instances in which each constraint has weight 1/m.

Let us denote by a function χ : {x_1, …, x_n} → {0,1} an assignment to the variables of a CSP instance F. We interpret χ(S_r) as the coordinate-wise action of χ on S_r. Given χ, we define the value of F under χ as

  Val_χ(F) = ∑_{r=1}^{m} w_r P_r(χ(S_r)).   (2)

We also define the optimal value of F in the case of Max-CSP to be

  Opt(F) = max_χ Val_χ(F).   (3)

In the minimization version, the correct definition of the optimal value has “min” instead of “max” in the previous expression. Typically, the aim is to find a solution with value close to the optimal one. In the case of Max-CSPs, an α-approximation algorithm is an algorithm which in polynomial time finds an assignment χ such that

  Val_χ(F) ≥ α · Opt(F).

For Min-CSPs, the correct definition has “≤” instead of “≥” in the previous inequality.

While introducing weights allows a convenient representation of CSPs, the hardness of approximation essentially does not change, as shown in [16, Lemma 3.11]. We reprove these results here, starting with the following lemma.

###### Lemma 7.

Consider a weighted instance F of a Max-CSP (or Min-CSP) Λ. Then, for each ε > 0 there is a poly-time algorithm which outputs an unweighted instance G of the same CSP over the same variables as in F such that

  Val_ζ(G) − ε ≤ Val_ζ(F) ≤ Val_ζ(G) + ε,   (4)

where ζ is any assignment to the variables of F (or G). Furthermore, the size of the instance G is polynomial in the size of F and 1/ε.

###### Proof.

Let F be an instance with constraints C_1, …, C_m and respective weights w_1, …, w_m. We fix q = ⌈m/ε⌉, and construct G by creating ℓ_r copies of each constraint C_r, where ℓ_1, …, ℓ_m are chosen such that

  ∑_r ℓ_r = q,   ℓ_r/q ∈ (w_r − 1/q, w_r + 1/q).   (5)

We can find such ℓ_r by first setting ℓ_r = ⌊w_r q⌋, and then incrementing some of the ℓ_r to obtain ∑_r ℓ_r = q.

For any given assignment to the variables, the contribution towards the value of each constraint C_r in F is at most 1/q different from the contribution of its replicated constraints in G. Finally, since we have m constraints and m/q ≤ ε, the main claim

  Val_ζ(G) − ε ≤ Val_ζ(F) ≤ Val_ζ(G) + ε   (6)

of the lemma follows. ∎
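The replication step in this proof can be sketched in a few lines. The rounding below follows the floor-then-increment scheme described above, with q = ⌈m/ε⌉ as one sufficient choice (the helper name is ours):

```python
import math

def unweight(weights, eps):
    """Replicate constraint r exactly l_r times so that sum(l_r) = q and
    l_r/q is within 1/q of w_r, as in the proof of Lemma 7. Taking
    q = ceil(m/eps) keeps the total additive error below eps."""
    m = len(weights)
    q = math.ceil(m / eps)
    ls = [math.floor(w * q) for w in weights]
    # flooring leaves a deficit strictly smaller than m, so each constraint
    # is incremented at most once while restoring sum(ls) == q
    deficit = q - sum(ls)
    for r in range(deficit):
        ls[r % m] += 1
    return ls, q

weights = [0.5, 0.3, 0.2]
ls, q = unweight(weights, 0.01)
```

The output instance has q constraints in total, which is polynomial in m and 1/ε, matching the size bound in the lemma.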

By relying on this lemma, we can show that weights do not affect approximability of Max/Min-CSPs, as long as we allow for additive loss in approximation ratio. We first prove this claim for Max-CSPs.

###### Theorem 8.

Consider a Max-CSP Λ and assume that we can approximate the optimal value of unweighted instances within a multiplicative factor α. Then, for every δ > 0, weighted instances of the Max-CSP Λ can be approximated within a constant α − δ.

###### Proof.

Without loss of generality let us assume that Λ does not contain a predicate which evaluates to 0 under all assignments, since we can remove each constraint with such a predicate from an instance and rescale the weights; this does not affect approximability in the discussion that follows, since the ratio between the values under any two assignments remains the same.

Now, let us fix a weighted instance F of the CSP Λ, and consider a random assignment χ in which each variable takes value 0 or 1 with probability 1/2, independently. Then, the expected value of F under this random assignment is

  E_χ[∑_{r=1}^{m} w_r P_r(χ(S_r))] = ∑_{r=1}^{m} w_r E_χ[P_r(χ(S_r))].

The value E_χ[P_r(χ(S_r))] depends only on the properties of the predicate P_r. Furthermore, by our assumption P_r is not identically 0, and therefore E_χ[P_r(χ(S_r))] > 0. Thus, since the P_r are picked from a finite collection of predicates of Λ, there is a γ > 0 such that E_χ[P_r(χ(S_r))] ≥ γ for every r. Therefore, we have that

  E_χ[∑_{r=1}^{m} w_r P_r(χ(S_r))] = ∑_{r=1}^{m} w_r E_χ[P_r(χ(S_r))] ≥ ∑_{r=1}^{m} w_r γ = γ.

Hence, under the randomized assignment, the instance F has a value of at least γ in expectation. By an averaging argument, we have that Opt(F) ≥ γ. Now, consider the algorithm from Lemma 7 with parameter ε = γδ/2, which takes our instance F and outputs an unweighted instance G. We can apply the α-approximation algorithm on G to obtain some assignment ζ for which

  Val_ζ(G)/Opt(G) ≥ α.

Then, since Opt(F) ≥ γ, for the same assignment we have

  Val_ζ(F)/Opt(F) ≥ (Val_ζ(G) − ε)/(Opt(G) + ε)
                 ≥ ((Val_ζ(G) − ε)/Opt(G)) · (1 − ε/γ)
                 ≥ Val_ζ(G)/Opt(G) − ε/Opt(G) − ε·Val_ζ(G)/(γ·Opt(G)) + ε²/(γ·Opt(G))
                 ≥ α − ε/γ − ε/γ ≥ α − δ,

which proves the statement of the theorem. ∎
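The constant γ from this proof can be computed by brute-force enumeration for any concrete predicate family; below is a minimal sketch (our own helper names, with Boolean example predicates):

```python
from itertools import product

def satisfaction_probability(P, arity, q=2):
    """E[P(chi(S))] under a uniformly random assignment over a domain
    of size q (q = 2 for the Boolean CSPs considered here)."""
    points = list(product(range(q), repeat=arity))
    return sum(1 for xs in points if P(xs)) / len(points)

def gamma(predicates):
    """gamma = minimum satisfaction probability over the finite predicate
    family; positive as long as no predicate is identically zero."""
    return min(satisfaction_probability(P, k, q) for P, k, q in predicates)

# Example family: binary OR and inequality (the Max-Cut predicate).
family = [(lambda xs: xs[0] or xs[1], 2, 2),
          (lambda xs: xs[0] != xs[1], 2, 2)]
```

For this example family, OR is satisfied with probability 3/4 and inequality with probability 1/2, so γ = 1/2, and the averaging argument gives Opt(F) ≥ 1/2 for any weighted instance over these predicates.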

The argument from the previous theorem does not work for Min-CSPs, since in this case Opt(F) can be arbitrarily small. An analogous claim for Min-CSPs was already proved in [16, Lemma 3.11] by using scaling techniques [14, 10]. For the sake of completeness, we give here a somewhat more detailed proof of this claim, using essentially the same techniques.

###### Theorem 9.

Consider a Min-CSP Λ, and assume we can approximate the optimal value of unweighted instances within a multiplicative factor α. Then, for every δ > 0, weighted instances of the Min-CSP Λ can be approximated within a constant α + δ.

###### Proof.

Consider the decision version of the CSP Λ, in which we ask whether there is an assignment to the variables such that all the constraints of an instance are falsified. By Schaefer’s dichotomy theorem [21], the problem of deciding whether there is an assignment which falsifies all the constraints is either NP-hard or in P. If solving this problem is NP-hard, then both weighted and unweighted versions of Min-CSP Λ are obviously NP-hard to approximate within any constant. Therefore, without loss of generality, we assume that deciding whether all constraints can be falsified is in P for Λ.

Hence, given an instance F of the Min-CSP Λ, we can check in polynomial time whether Opt(F) = 0. In case Opt(F) = 0, we have found an optimal assignment, so it only remains to consider the case Opt(F) > 0.

Without loss of generality let us assume that the weights of the constraints are sorted in descending order, i.e., w_1 ≥ w_2 ≥ … ≥ w_m. We can find in polynomial time the largest k such that there is an assignment falsifying the constraints C_1, …, C_k.

For thusly chosen k, at least one of C_1, …, C_{k+1} will be satisfied in any assignment, so we have that Opt(F) ≥ w_{k+1}. Also, since there is an assignment falsifying the first k constraints, we have that Opt(F) ≤ ∑_{r=k+1}^{m} w_r ≤ m·w_{k+1}.

Let us partition the constraints into the following three groups:

• light: constraints with weight smaller than (δ/m)·w_{k+1}.

• medium: constraints with weight between (δ/m)·w_{k+1} and m²·w_{k+1}.

• heavy: constraints with weight larger than m²·w_{k+1}.

Then, we create an instance F′ by adding the medium and heavy constraints from F. Furthermore, we scale down the weights of the heavy constraints to m²·w_{k+1} in F′. Finally, we normalize the weights to total weight 1 by multiplying them by some factor c. Note that Opt(F′) > 0, since F′ still has (although with different weights) the constraints C_1, …, C_{k+1}. Hence, in a completely analogous manner to Theorem 8, we can use the algorithm from Lemma 7 to get an assignment ζ which gives us a good approximation of the optimal value for F′. Finding this approximation can be performed in polynomial time, because the ratio between the largest and the smallest weight in F′ is polynomial in m and 1/δ.

Let us now see how well ζ approximates the optimal value of F. We have the following two properties:

• Property A: Opt(F′) ≤ c·Opt(F). This holds since the optimal assignments of both F and F′ falsify all heavy constraints.

• Property B: If an assignment does not satisfy any heavy constraint, then Val_ζ(F) ≤ Val_ζ(F′)/c + δ·Opt(F). This statement holds since if we do not satisfy the heavy (scaled-down) constraints, then the only difference comes from the light constraints, which have a total weight of at most δ·w_{k+1} ≤ δ·Opt(F).

Finally, consider our approximating assignment ζ to the instance F′. This assignment certainly does not satisfy any heavy constraint, since otherwise we would have Val_ζ(F′) ≥ c·m²·w_{k+1} while Opt(F′) ≤ c·m·w_{k+1}, so the approximation ratio would be at least m, which can not happen for sufficiently large m, since ζ achieves a constant factor approximation. Therefore, by using properties A and B for ζ we have

  Val_ζ(F) ≤ Val_ζ(F′)/c + δ·Opt(F) ≤ (α + δ′)·Opt(F′)/c + δ·Opt(F) ≤ (α + δ′ + δ)·Opt(F),

where δ′ is the additive loss incurred by Lemma 7. Since δ′ + δ can be made smaller than any prescribed constant, our theorem holds. ∎

## 3 Reduction

We now prove the theorem which shows the existence of a randomized algorithm that can be used for proving Theorem 3. We remark that this theorem uses a reduction that appeared in [22], and that the main difference comes from the fact that we need to create instances in which the degrees of the variables are uniform, while bounded degree was sufficient in [22]. Additional complexity lies in the fact that we prove the theorem for any Max-CSP, and therefore account for predicates of different arities, while [22] considered Max-3-Sat with predicates of arity 3.

Let us now give an overview of the proof. We start in the same way as [22], by creating d_i copies of each variable x_i of degree d_i in the starting instance F. Then, in order to create a regular instance, we repeatedly sample constraints of the starting instance, and create a constraint in the new instance by replacing each variable x_i occurring in the scope by one of its copies chosen uniformly at random. Such a procedure outputs an instance in which every variable has the same degree in expectation. However, with small probability it can still happen that the deviations from this degree are large. For that reason, we repeat this procedure several times until the degrees of the variables are close to the expected value, and otherwise our algorithm fails. In case our algorithm does not fail, we slightly update the resulting instance to ensure that each variable has the same degree. More precisely, in case the expected degree of each variable is D, we replace variables with degree higher than (1+β)D in the scopes of some constraints with new dummy variables, where β is small. We also create new constraints in order to make sure that each variable has degree exactly (1+β)D. The final step in our construction consists in making sure that the newly introduced dummy variables also have degree (1+β)D. We then show that with very high probability these updates changed or added only a small number of constraints, so our regular instance "looks like" the random one. The last part of the proof shows that an assignment to the regular instance can be used to construct an assignment which satisfies a similar fraction of constraints of the starting instance. The idea is the same as in [22]; namely, the fraction of copies of x_i with a given value gives us the probability that the variable x_i should take that value, and this is used in a randomized algorithm which converts an assignment to the copies into an assignment to the original variables. This algorithm can be derandomized, and we show that this conversion does not incur a large change in the value of the instance.

A formal statement and a proof are given below.

###### Theorem 10.

Consider an unweighted instance F of a Max-CSP Λ and let ε > 0. Then, there is a randomized algorithm which outputs a regular instance G of the Max-CSP Λ such that with high probability over the choices made in the randomized algorithm, the following two statements hold:

• [a] For any assignment ζ to the variables of G, there is an algorithm which runs in polynomial time and finds an assignment χ to the variables of F such that

  Val_χ(F) ≥ Val_ζ(G) − ε.

• [b] The optimal value of F is upper bounded by Opt(G) + ε.

Furthermore, the runtime of the randomized algorithm is polynomial in terms of the size of F and 1/ε, and the degree of the variables in G is bounded in terms of 1/ε and Λ.

###### Proof.

We begin by describing a randomized procedure that creates the regular instance G. The instance F contains constraints C_1, …, C_m, and each constraint C_r is applied to some tuple S_r of variables. First, for each variable x_i from F with degree d_i, we create d_i new variables x_i^1, …, x_i^{d_i}. Then we fix D, which will be suitably chosen later, and create an instance G′ with mD constraints over the variables x_i^j by repeating the following procedure mD times:

• Pick a constraint C_r from F uniformly at random.

• For each variable x_i appearing in S_r, pick a variable from the set {x_i^1, …, x_i^{d_i}} uniformly at random.

• Add a constraint to G′ which constrains the variables picked in the previous step by the predicate P_r of the constraint C_r. Furthermore, each picked copy x_i^j appears at the same position in the new tuple as the variable x_i appears in the tuple S_r.
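One trial of this sampling procedure can be sketched as follows (an illustrative representation of ours: a constraint is a pair of a predicate name and a scope of variable indices, and the copy x_i^j is named by the pair (i, j)):

```python
import random

def sample_instance(constraints, n_vars, D, rng=random):
    """One trial of the randomized construction: draw m*D constraints of F
    uniformly with replacement and replace each variable in the scope by a
    uniformly random one of its d_i copies. Every copy then lands in a
    scope with probability 1/m per draw, so its expected degree is D."""
    m = len(constraints)
    deg = [0] * n_vars
    for _, scope in constraints:
        for v in scope:
            deg[v] += 1
    new_constraints = []
    for _ in range(m * D):
        pred, scope = rng.choice(constraints)
        # positions are preserved: copy (v, j) sits where v sat in the scope
        new_scope = tuple((v, rng.randrange(deg[v])) for v in scope)
        new_constraints.append((pred, new_scope))
    return new_constraints

# Triangle-shaped toy instance: three binary constraints over three variables.
constraints = [("P0", (0, 1)), ("P1", (1, 2)), ("P2", (2, 0))]
G_prime = sample_instance(constraints, 3, D=4, rng=random.Random(0))
```

In the toy instance each variable has degree 2, so each has 2 copies, and the trial produces mD = 12 constraints.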

Since each variable x_i^j is picked at each step with probability 1/m, every variable in G′ has degree D in expectation. However, some variables can have larger (or smaller) degree than D. For that reason, we will update the instance by changing and adding some constraints to create an instance G in which each variable has degree (1+β)D, where β is a number that will be fixed later. By Chernoff’s bound (Lemma 12), the probability that a variable appears in more than (1+β)D constraints in G′ is bounded by

  P[deg(x_i^j) ≥ (1+β)D] ≤ 2e^{−β²D/4}.   (7)

If a variable x_i^j appears t > (1+β)D times, we replace x_i^j by some new variable in t − (1+β)D constraints. The expected number of times we need to do this for a fixed x_i^j is at most

  ∑_{t=(1+β)D+1}^{mD} Pr[deg(x_i^j) ≥ t]·(t − (1+β)D) ≤ 2 ∑_{t=(1+β)D}^{∞} exp(−(t/D − 1)²·D/4)·(t − (1+β)D) = 2 ∑_{t=(1+β)D}^{∞} exp(−(t − D)²/(4D))·(t − (1+β)D).

By introducing c = t − (1+β)D, we can further simplify this expression as

  2 ∑_{c=0}^{∞} exp(−(βD + c)²/(4D))·c ≤ 2 ∑_{c=0}^{∞} exp(−(βD + c)²/(4(D + c/β)))·c = 2 ∑_{c=0}^{∞} exp(−(β/4)·(βD + c))·c ≤ 2 ∫_0^∞ exp(−(β/4)·(βD + c))·c dc.

We can show that this expression is smaller than 1 for D sufficiently large in terms of β. Therefore, for such D, we replace each variable at most once in expectation. Finally, by Markov’s inequality we conclude that with probability at least 7/8 we need to replace at most 8mW variables in G′, where W is the average arity of the constraints in F. If we need to replace more than 8mW variables, the construction of G fails. Otherwise, the degree of each variable is at most (1+β)D.
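As a sanity check of this tail estimate, the series 2·∑_{c≥0} exp(−(β/4)(βD + c))·c can be evaluated both directly and via the closed form ∑_c c·x^c = x/(1−x)². The sketch below uses example values β = 0.5 and D = 200 of our own choosing to confirm that the quantity drops below 1:

```python
import math

def tail_bound(beta, D, terms=100000):
    """Directly evaluate 2 * sum_{c>=0} exp(-(beta/4)(beta*D + c)) * c,
    the bound on the expected number of replacements per variable."""
    return 2 * sum(math.exp(-(beta / 4) * (beta * D + c)) * c
                   for c in range(terms))

def closed_form(beta, D):
    """Same quantity via the geometric series sum_c c*x^c = x/(1-x)^2
    with x = exp(-beta/4); the prefactor is exp(-beta^2 * D / 4)."""
    x = math.exp(-beta / 4)
    return 2 * math.exp(-(beta ** 2) * D / 4) * x / (1 - x) ** 2

beta, D = 0.5, 200
```

Since the closed form decays as exp(−β²D/4), taking D to be a large enough multiple of 1/β² indeed pushes the expected number of replacements below 1.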

Let us now consider the variables x_i^j such that deg(x_i^j) < (1+β)D. For each such variable we create new constraints, in order to make sure that each variable has degree (1+β)D. Note that in this process we might need to add some additional dummy variables, which can have degree different from (1+β)D. We will handle these variables later. Let us now estimate how many constraints we need to add. In order to do that, we first find an upper bound on the probability that the average arity of the constraints in G′ is smaller than (1−β)W. In particular, if we denote by W_max the maximal arity of the constraints in F, an application of Hoeffding’s inequality (Lemma 14) gives

  Pr[|(1/(mD))·∑_{C′_i ∈ G′} ar(P′_i) − W| ≥ βW] ≤ 2exp(−β²m²D²W²/(mD·W_max²)) = 2exp(−β²mDW²/W_max²).

Observe that for sufficiently large mD this value is smaller than 1/8. If the average arity of the constraints in G′ is smaller than (1−β)W, the construction of G fails. Otherwise, we add at most 2βmWD constraints. After this stage of the algorithm, each variable x_i^j will have degree exactly (1+β)D.

By a union bound, the probability that the construction of G fails is at most 1/4. We actually try to construct G multiple times, and therefore the probability that all constructions of G fail drops exponentially in the number of trials. In case all trials of constructing G are unsuccessful, we stop the execution of the algorithm and report failure. Otherwise, we proceed in the manner described below.

In the process of ensuring that the degree of each variable is exactly (1+β)D, we introduced some additional dummy variables. At each step of updating G we only need to keep the additional dummy variables that were used fewer than (1+β)D times, because the largest arity of the constraints in G is W_max. Therefore, we can assume that after this process we are left with only a bounded number of dummy variables with degree smaller than (1+β)D, and to ensure that they have degree (1+β)D we need to use each of them a bounded number of additional times. If we pick D such that (1+β)D is coprime with the arity a of some predicate of Λ (which we can do for all sufficiently small β, by our choice of D, which we discuss later), we can introduce a few more dummy variables and add constraints with a predicate of arity a, assigning the scopes so that the degree of each dummy variable becomes exactly (1+β)D. In particular, we can assign the scopes iteratively, by adding the variables with the smallest current degree to the scope at each step. Since the dummy variables start with small degree, we can always assign distinct variables to the scope, and the iterative procedure finishes with all dummy variables having degree (1+β)D after a bounded number of steps.
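The greedy scope-filling step can be sketched with a heap keyed by remaining degree deficit. This is a simplified illustration of ours; it assumes, as arranged in the proof via the coprimality of (1+β)D and the arity, that the deficits can be scheduled into scopes of distinct variables:

```python
import heapq

def balance_degrees(var_deficits, arity):
    """Greedy scope assignment from the degree-balancing step: repeatedly
    build a scope of `arity` distinct variables, always taking those with
    the largest remaining deficit (equivalently, the smallest current
    degree). Assumes the deficits admit a schedule with `arity` candidates
    available at every step, as arranged in the proof."""
    heap = [(-d, v) for v, d in var_deficits.items() if d > 0]
    heapq.heapify(heap)
    scopes = []
    while heap:
        picked = [heapq.heappop(heap) for _ in range(arity)]
        scopes.append(tuple(v for _, v in picked))
        for d, v in picked:
            if d + 1 < 0:  # variable still needs more occurrences
                heapq.heappush(heap, (d + 1, v))
    return scopes

deficits = {"y1": 2, "y2": 2, "y3": 2}  # three dummies, each needing 2 more
scopes = balance_degrees(deficits, arity=3)
```

On the toy input, two arity-3 scopes over the three dummy variables bring every deficit to zero.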

In summary, we have introduced or changed at most the following numbers of constraints:

• 8mW constraints changed by replacing variables with new dummy variables, in order to make sure that no variable has degree bigger than (1+β)D.

• 2βmWD constraints added in order to make sure that no variable has degree smaller than (1+β)D.

• 3(1+β)D new constraints introduced to ensure that the additionally added dummy variables have degree (1+β)D.

In particular, the number of constraints that we did not change is at least mD − B, and the number of new or changed constraints is at most

  B := 8mW + 2βmWD + 3(1+β)D.

This concludes the description of G. Let us now prove statement [a].

Let ζ be an assignment to the variables of G. Then, consider a randomized assignment χ̄ to the variables of F, in which the variables x_i get their values independently, and the probability of x_i getting a value v is proportional to the number of copies x_i^1, …, x_i^{d_i} getting the value v under ζ. Let us denote by ρ_ζ the expected value of F under the random χ̄.

Consider now one trial in the process of creating G′ from F. At each step, we pick a constraint C_r, which is satisfied by χ̄ with some probability p_r. Furthermore, the respective constraint created in G′ is satisfied by ζ with probability p_r as well. Therefore, our algorithm will create an instance G′ that under ζ satisfies a ρ_ζ fraction of constraints in expectation. By Chernoff’s bound (Corollary 13), the probability that the fraction of satisfied constraints in G′ is bigger than ρ_ζ + ε/2 decays exponentially in mD. Therefore, if we pick D to be a sufficiently large multiple of W/ε², we have that

  Pr[Val_ζ(G′) ≥ ρ_ζ + ε/2] ≤ 2^{−2mW}.

Since G′ has roughly mW variables, there are at most 2^{mW} possible assignments ζ. Hence, by a union bound, the probability that there is an assignment ζ which satisfies more than a ρ_ζ + ε/2 fraction of the constraints in G′ is at most 2^{−mW}. Furthermore, since a trial succeeds with probability at least 3/4, the probability that no assignment satisfies more than a ρ_ζ + ε/2 fraction of the constraints in G′, conditioned on the trial being successful, is at least 1 − (4/3)·2^{−mW}.

Finally, when switching from G′ to G, in the worst case we can have at most a B/(mD) larger fraction of satisfied constraints, and hence

  Val_χ̄(F) + ε/2 ≥ Val_ζ(G′) ≥ Val_ζ(G) − B/(mD).

By choosing β = ε/(8W), and for sufficiently big D and m, we have that B/(mD) ≤ ε/2, and therefore

  Val_χ̄(F) ≥ Val_ζ(G) − ε.

This inequality holds for the randomized assignment χ̄ in expectation. However, using the method of conditional probabilities we can efficiently and deterministically find an assignment χ such that Val_χ(F) ≥ Val_χ̄(F). This proves that [a] holds with high probability over the random choices in the algorithm that creates G.
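The conversion from ζ back to an assignment on the original variables can be sketched as randomized rounding over the copies; the paper derandomizes this step via the method of conditional probabilities, and the names below are illustrative:

```python
import random
from collections import Counter

def round_back(zeta, copies, rng=random):
    """Randomized conversion of an assignment zeta on the copies x_i^j into
    an assignment chi on the original variables: chi(x_i) = v with
    probability proportional to the number of copies of x_i valued v."""
    chi = {}
    for i, copy_names in copies.items():
        counts = Counter(zeta[c] for c in copy_names)
        values, weights = zip(*counts.items())
        chi[i] = rng.choices(values, weights=weights)[0]
    return chi

# Variable 0 has two copies (both assigned 1); variable 1 has one copy (0).
copies = {0: [(0, 0), (0, 1)], 1: [(1, 0)]}
zeta = {(0, 0): 1, (0, 1): 1, (1, 0): 0}
chi = round_back(zeta, copies, rng=random.Random(1))
```

When all copies of a variable agree, the rounding is deterministic, which is why the toy example above always recovers chi = {0: 1, 1: 0}.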

The statement [b] can be proved in a similar way. In particular, let us fix the assignment to the variables of under which the optimal value is attained. Then, we construct an assignment for , by setting to have the same values as the corresponding under , and we assign the values to the remaining variables arbitrarily. Throughout randomized construction of in one trial, each constraint is satisfied with probability , and therefore the expected fraction of satisfied constraints in is . The probability (over the random choices made in a trial) that is by Chernoff’s bound (Corollary 13) at most

 2exp(−mDε216).

Therefore, for sufficiently large mD, this probability will be suitably small. Moreover, by noting that changing or adding constraints when switching from G to G′ cannot impact the optimal value by more than B/(mD), we see that the statement [b] holds with high probability over the random choices in the algorithm that creates G′. By the union bound the statements [a] and [b] both hold with high probability, which concludes the proof of this theorem. ∎

Let us now prove Theorem 1. For that reason, let us suppose that regular instances of Max-CSP Λ can be approximated within some fixed approximation ratio α, and let us consider an arbitrary (possibly not regular) instance F of Λ. Then, we construct G′ with the probabilistic algorithm described in the previous theorem, apply the approximation algorithm to find an assignment ζ, and from ζ, using the algorithm from Theorem 10 [a], we can find an assignment ¯χ to the instance F satisfying Val¯χ(F) ≥ Valζ(G′) − ε, which by part [b] is at least α·Opt(F) − 2ε.

Now, we have that Opt(F) ≥ c for some fixed c > 0 which depends only on Λ (as in the proof of Theorem 8, w.l.o.g. we assume that the instance does not contain predicates which evaluate to 0 under all assignments). Therefore, by choosing ε sufficiently small in terms of c, the claim of Theorem 1 follows.

Note that using an analog of Theorem 10 for Min-CSPs to prove Theorem 2 would require a much smaller choice of ε, and therefore the instance produced by the reduction would be prohibitively large. We give a deterministic reduction instead, which works for both Max-CSPs and Min-CSPs, and which creates a regular instance of degree D, where D is determined by the maximal degree of a variable in F. The reduction is given as the following theorem.

###### Theorem 11.

Consider a Max-CSP (or Min-CSP) Λ and let ε > 0. Then, there is a reduction which takes as input an instance F of the Max-CSP (or Min-CSP) Λ and outputs a regular instance G of the Max-CSP (or Min-CSP) Λ such that the following holds

• Opt(G) ≥ Opt(F) (or Opt(G) ≤ Opt(F) for Min-CSP)

• Let ζ be an assignment to the variables of G. Then, there is an algorithm which finds an assignment χ to the variables of F such that Valχ(F) ≥ Valζ(G) − ε (or Valχ(F) ≤ Valζ(G) + ε for Min-CSP)

Furthermore, the runtime of the reduction and of the algorithm from [b] is polynomial in the size of F and in 1/ε.

###### Proof.

We prove this theorem for a Min-CSP Λ. The proof for Max-CSP is analogous.

We begin by describing how a regular instance G is constructed. We start from F, which has variables x_1, …, x_n with degrees d_1, …, d_n, and constraints C_1, …, C_m. For each variable x_i from F we create N·d_i/D copies y_i^1, y_i^2, … in G, where N is the number of blocks introduced below and the parameters are chosen so that these counts are integral. The constraints of G are constructed as follows. We begin by creating N copies of F which we call blocks. Then, we go through the blocks and at each scope we replace the variables by their corresponding copies. In particular, each x_i can be replaced only by the variables y_i^j. Since x_i appears d_i times in F, it will get replaced N·d_i times in total, and in order to impose regularity we use each copy exactly D times. Therefore, the degree of all variables is D, and the instance G is regular.

Actually, we will be a bit more careful when replacing the variables x_i by their copies y_i^j. The idea is that each block should resemble F as much as possible, and therefore we want to avoid replacing x_i by two different copies in the same block. Let us call a block good if each variable x_i is replaced by a single copy in that block. Our aim is to maximize the number of good blocks, which we do greedily by creating a good block at each step of the algorithm as long as we can, after which each variable occurrence is replaced by any of its available copies. Let us remark that at all stages of our algorithm we still make sure that each copy is not used more than D times, and that each x_i is replaced only by its own copies.

This finishes our description of G. Before we prove the claims of the theorem, let us find a lower bound on the number of good blocks created. In our greedy algorithm, a good block can be created as long as for each variable x_i we can find a copy y_i^j which was used at most D − d_i times. Therefore, each copy can be used for creating at least ⌊D/d_i⌋ good blocks, and a counting argument over the copies of each variable bounds the number of good blocks from below. Hence, by choosing the parameters appropriately, we conclude that there are at least N − D good blocks.

We construct G as described above with N ≥ D/ε. It is straightforward to verify [a], and hence let us now prove the claim [b]. For a given assignment ζ of the variables in G, let us consider a good block with the smallest value, and let us denote the value of this block by v. We define an assignment χ to be the assignment of ζ on this block. We note that we can do this since the copies of the variables are unique in every good block. We have that Valχ(F) = v, and therefore, since v is the minimal value over the good blocks, we have

 Valζ(G) ≥ (1/N)((N−D)v) ≥ v − D/N ≥ Valχ(F) − ε,

which finishes the proof of [b]. ∎
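As a hedged illustration of the construction in the proof above, the following Python sketch builds the N blocks and allocates copies greedily, preferring good blocks. It assumes constraints are given as tuples of variable indices, that every variable occurs at least once, and that N is chosen so that N·d_i is divisible by the maximal degree D; all function and variable names are illustrative, not the paper's.

```python
def regularize(constraints, n, N):
    """Sketch of the block construction: output constraints of G over copy
    variables (i, j), meaning the j-th copy of variable i, with every copy
    used exactly D times (D = maximal degree in the input instance F)."""
    deg = [0] * n                      # degree of each variable in F
    for scope in constraints:
        for v in scope:
            deg[v] += 1
    D = max(deg)                       # target degree of the regular instance G
    assert all(N * deg[i] % D == 0 for i in range(n)), "pick N so that D | N*d_i"
    copies = [N * deg[i] // D for i in range(n)]
    used = [[0] * copies[i] for i in range(n)]   # occurrences consumed per copy
    out = []
    for _ in range(N):
        # Try to build a *good* block: one fixed copy per variable.
        choice = {}
        for i in range(n):
            j = next((j for j in range(copies[i]) if used[i][j] + deg[i] <= D), None)
            if j is None:
                break
            choice[i] = j
        if len(choice) == n:           # good block found
            for i in range(n):
                used[i][choice[i]] += deg[i]
            for scope in constraints:
                out.append(tuple((v, choice[v]) for v in scope))
        else:                          # bad block: use any copy with spare capacity
            for scope in constraints:
                out.append(tuple(_take(used, v, D) for v in scope))
    return out, D

def _take(used, v, D):
    # Consume one occurrence of some not-yet-full copy of variable v.
    j = next(j for j in range(len(used[v])) if used[v][j] < D)
    used[v][j] += 1
    return (v, j)
```

A capacity count shows the fallback branch always finds a copy: the total capacity of the copies of x_i equals N·d_i, exactly the number of occurrences to be placed.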

Let us now show how this result can be used to prove Theorem 2. Hence, let us fix δ > 0, and starting from an instance F of a Min-CSP Λ with m constraints, we apply the algorithm from the previous theorem with ε = δ/m to get a regular instance G. Then, we use the approximation algorithm to get an assignment χ to the variables of G, and then by the algorithm from point [b] of Theorem 11 we obtain an assignment χ for F.

In case Opt(F) = 0, by claim [a] of Theorem 11 we have that Opt(G) = 0 as well. Therefore, since χ gives us an α-approximation of Opt(G), we have that Valχ(G) = 0. Finally, by claim [b] of Theorem 11 we have that Valχ(F) ≤ Valχ(G) + ε = δ/m < 1/m (assuming w.l.o.g. δ < 1), which, since the values of F are integer multiples of 1/m, is only possible if Valχ(F) = 0.

It remains to consider the case when Opt(F) > 0, i.e. Opt(F) ≥ 1/m. In that case we have

 Valχ(F)/Opt(F) ≤ (Valχ(G)+ε)/Opt(F) ≤ Valχ(G)/Opt(F) + ε/Opt(F) ≤ Valχ(G)/Opt(G) + (δ/m)/(1/m) ≤ α + δ, (8)

which finishes the proof of Theorem 2.

## 4 Conclusion and Some Open Questions

In this paper we introduced a reduction which shows how approximation algorithms working on regular unweighted instances of optimization CSPs can be converted (with an arbitrarily small loss in approximation ratio) into approximation algorithms for weighted CSPs in which regularity is not imposed. One interesting question is whether this result can be used to obtain better approximation algorithms for specific CSPs. Also, the aim of quantifying what makes these problems hard is interesting in its own right, and therefore it would be valuable to analyze whether some additional structure of CSP instances can always be assumed when studying their inapproximability.

It is not uncommon that reductions showing hardness of approximation output instances which satisfy some form of regularity. This work shows that we cannot hope to obtain stronger inapproximability results by considering irregular instances of CSPs. However, for many other problems it is still not known whether regular instances might be easier to approximate; answering this question could facilitate the search for optimal algorithms. One family of problems for which this is an especially interesting topic, due to their generality and applicability, is defined as “Max Ones” in [16].

On the other hand, let us remark that using irregular instances can also be instrumental for showing strong hardness results for certain problems, as recently shown in [6], which treated some cardinality constrained CSPs, i.e. a variant of CSPs where we also prescribe the number of zeros/ones in admissible assignments. Hence, it would be interesting to explore whether we can obtain better hardness results by considering more irregular/asymmetric instances for problems for which a satisfactory understanding of approximability is lacking.

## Acknowledgments

I am indebted to Per Austrin for pointing out the reduction in [22] to me. I also thank Johan Håstad for numerous useful comments which significantly improved the quality of presentation of this work.

## References

• [1] S. Arora, L. Babai, J. Stern, and Z. Sweedyk, The hardness of approximate optima in lattices, codes, and systems of linear equations, J. Comput. Syst. Sci., 54 (1997), pp. 317–331.
• [2] S. Arora and B. Barak, Computational Complexity - A Modern Approach, Cambridge University Press, 2009.
• [3] S. Arora, C. Lund, R. Motwani, M. Sudan, and M. Szegedy, Proof verification and hardness of approximation problems, in 33rd Annual Symposium on Foundations of Computer Science, Pittsburgh, Pennsylvania, USA, 24-27 October 1992, 1992, pp. 14–23.
• [4] S. Arora and S. Safra, Probabilistic checking of proofs; A new characterization of NP, in 33rd Annual Symposium on Foundations of Computer Science, Pittsburgh, Pennsylvania, USA, 24-27 October 1992, 1992, pp. 2–13.
• [5] P. Austrin, Balanced max 2-sat might not be the hardest, in Proceedings of the 39th Annual ACM Symposium on Theory of Computing, San Diego, California, USA, June 11-13, 2007, 2007, pp. 189–197.

• [6] P. Austrin and A. Stankovic, Global cardinality constraints make approximating some max-2-csps harder, in Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2019, September 20-22, 2019, Massachusetts Institute of Technology, Cambridge, MA, USA, 2019, pp. 24:1–24:17.

• [7] M. Bellare, O. Goldreich, and M. Sudan, Free bits, pcps, and nonapproximability-towards tight results, SIAM J. Comput., 27 (1998), pp. 804–915.
• [8] A. A. Bulatov, A dichotomy theorem for nonuniform csps, in 58th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2017, Berkeley, CA, USA, October 15-17, 2017, 2017, pp. 319–330.
• [9] S. O. Chan, Approximation resistance from pairwise-independent subgroups, J. ACM, 63 (2016), pp. 27:1–27:32.
• [10] P. Crescenzi, R. Silvestri, and L. Trevisan, To weight or not to weight: Where is the question?, in Fourth Israel Symposium on Theory of Computing and Systems, ISTCS 1996, Jerusalem, Israel, June 10-12, 1996, Proceedings, IEEE Computer Society, 1996, pp. 68–77.
• [11] M. X. Goemans and D. P. Williamson, .879-approximation algorithms for MAX CUT and MAX 2sat, in Proceedings of the Twenty-Sixth Annual ACM Symposium on Theory of Computing, 23-25 May 1994, Montréal, Québec, Canada, 1994, pp. 422–431.
• [12] J. Håstad, Some optimal inapproximability results, J. ACM, 48 (2001), pp. 798–859.
• [13] W. Hoeffding, Probability inequalities for sums of bounded random variables, Journal of the American Statistical Association, 58 (1963), pp. 13–30.
• [14] O. H. Ibarra and C. E. Kim, Fast approximation algorithms for the knapsack and sum of subset problems, J. ACM, 22 (1975), pp. 463–468.
• [15] H. J. Karloff and U. Zwick, A 7/8-approximation algorithm for MAX 3sat?, in 38th Annual Symposium on Foundations of Computer Science, FOCS ’97, Miami Beach, Florida, USA, October 19-22, 1997, 1997, pp. 406–415.
• [16] S. Khanna, M. Sudan, L. Trevisan, and D. P. Williamson, The approximability of constraint satisfaction problems, SIAM J. Comput., 30 (2000), pp. 1863–1920.
• [17] S. Khot, On the power of unique 2-prover 1-round games, in Proceedings on 34th Annual ACM Symposium on Theory of Computing, May 19-21, 2002, Montréal, Québec, Canada, 2002, pp. 767–775.
• [18] M. Lewin, D. Livnat, and U. Zwick, Improved rounding techniques for the MAX 2-sat and MAX DI-CUT problems, in Integer Programming and Combinatorial Optimization, 9th International IPCO Conference, Cambridge, MA, USA, May 27-29, 2002, Proceedings, 2002, pp. 67–82.
• [19] P. Raghavendra, Optimal algorithms and inapproximability results for every csp?, in Proceedings of the 40th Annual ACM Symposium on Theory of Computing, Victoria, British Columbia, Canada, May 17-20, 2008, 2008, pp. 245–254.
• [20] R. Raz, A parallel repetition theorem, SIAM J. Comput., 27 (1998), pp. 763–803.
• [21] T. J. Schaefer, The complexity of satisfiability problems, in Proceedings of the Tenth Annual ACM Symposium on Theory of Computing, STOC ’78, New York, NY, USA, 1978, ACM, pp. 216–226.
• [22] L. Trevisan, Non-approximability results for optimization problems on bounded degree instances, in Proceedings on 33rd Annual ACM Symposium on Theory of Computing, July 6-8, 2001, Heraklion, Crete, Greece, 2001, pp. 453–461.
• [23] D. Zhuk, A proof of CSP dichotomy conjecture, in 58th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2017, Berkeley, CA, USA, October 15-17, 2017, 2017, pp. 331–342.

## Appendix A Appendix

We state here concentration inequalities which bound the probability that a random variable deviates from its mean. While these bounds are widely known, the form in which they appear can vary, and therefore we fix below the versions which are used in this paper.

We use the following variant of Chernoff’s inequality.

###### Lemma 12.

Let X = X_1 + ⋯ + X_K, where X_1, …, X_K are mutually independent random variables with range [0,1]. Then

 P[|X − E[X]| ≥ δE[X]] ≤ 2e^(−E[X]δ²/4),  δ ∈ (0,1).

The proof of this lemma can be found in [2, Corollary A.15]. Sometimes it will be more convenient to use the following corollary of the previous lemma.

###### Corollary 13.

Let X = X_1 + ⋯ + X_K, where X_1, …, X_K are mutually independent random variables with range [0,1]. Then

 Pr[|X − E[X]| ≥ εK] ≤ 2e^(−ε²K/4).
###### Proof.

The proof follows by applying the inequality from Lemma 12 with δ = εK/E[X], and noting that E[X] ≤ K. ∎
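Spelled out, the substitution reads as follows (a sketch, assuming εK/E[X] ∈ (0,1) so that Lemma 12 applies):

```latex
\Pr\bigl[|X-\mathbb{E}[X]|\ge \varepsilon K\bigr]
  = \Pr\bigl[|X-\mathbb{E}[X]|\ge \delta\,\mathbb{E}[X]\bigr]
  \le 2e^{-\mathbb{E}[X]\delta^{2}/4}
  = 2e^{-\varepsilon^{2}K^{2}/(4\mathbb{E}[X])}
  \le 2e^{-\varepsilon^{2}K/4},
  \qquad \delta = \frac{\varepsilon K}{\mathbb{E}[X]},
```

where the final inequality uses E[X] ≤ K, which holds since each X_i has range [0,1].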

We also need a concentration bound for sums of random variables with range [0, b]. For that, we use the following variant of Hoeffding’s inequality [13].

###### Lemma 14.

Let X = X_1 + ⋯ + X_K, where X_1, …, X_K are independent random variables such that the range of each X_i is [0, b] for some b > 0. Then for t > 0 we have

 P[|X − E[X]| ≥ t] ≤ 2e^(−t²/(Kb²)).