1 Introduction
This work studies the approximability of regular constraint satisfaction problems (CSPs), where we interpret regularity to mean that each variable appears the same number of times in the constraints of an instance. Since regular CSPs are a subclass of CSPs, approximating their optimal values is not harder than approximating the values of general CSPs. In this work we show that approximating the values of regular CSPs is also essentially not easier, i.e., we show that an approximation algorithm for regular instances of a particular CSP induces an approximation algorithm applicable to possibly nonregular instances. Therefore, imposing regularity has almost no effect on the approximability of CSPs, and in particular, if one is not interested in additive factors in approximation ratios, the study of approximability may be conducted solely on regular instances.
In order to make the result more general, we revisit the previously studied question of weights vs. no weights for CSPs [10, 16] in the context of approximation. In particular, we show that it is sufficient to have an approximation algorithm for regular unweighted instances in order to construct an approximation algorithm applicable to possibly weighted instances of CSPs without the regularity restriction. In order to do so, we use a result from [16] which shows that weighted versions of CSPs have essentially the same (up to an additive error) approximation ratios as their unweighted counterparts. We reprove this result here for the sake of completeness.
We organize the paper as follows. In Section 1.1, we give an informal definition of constraint satisfaction problems, and introduce decision and optimization versions of these problems. In Section 1.2, we discuss approximation of CSPs and highlight some breakthrough results. Motivated by this discussion, we introduce regular CSPs in Section 1.3, and state the new results proved in this work. Then, in Section 1.4 we compare the results of this paper with prior work. In Sections 2 and 3 we formalize the discussion given in Section 1. In particular, in Section 2 we fix the notation, and discuss the difference between weighted and unweighted CSPs. In Section 3 we describe our reductions and prove the results. Finally, in Section 4 we discuss possible applications of ideas and theorems introduced in this paper, and mention some open questions.
1.1 Constraint Satisfaction Problems
Constraint satisfaction problems (CSPs) represent one of the most fundamental classes of problems studied in complexity theory. Each CSP is described by a collection of predicates, which are used in instances of these problems as constraints on tuples of variables. Probably the best known CSP is 3Sat, in which the constraints are given as disjunctive clauses on at most three literals, where a literal is either a variable or its negation. A basic problem is to determine whether we can satisfy all the constraints of a given CSP instance.
This problem is very well understood, due to Schaefer’s dichotomy theorem for CSPs on Boolean domains [21] and more recent proofs of a dichotomy theorem on general domains by Bulatov [8] and Zhuk [23].
In this work we focus on optimization variants of CSPs, in which we are interested in finding an assignment which maximizes/minimizes the number of constraints satisfied. Depending on the optimization variant, we refer to these problems as either MaxCSPs or MinCSPs. A typical problem in this setting is MaxCut, whose Boolean constraints have the form x ⊕ y, i.e., the two variables of a constraint must take different values. Many optimization CSPs are intractable, and in this case we typically resort to approximation algorithms in order to estimate their optimal values. The strength of an approximation algorithm is expressed through its approximation ratio α, which measures the quality of a solution produced by the algorithm by comparing it to the optimal one. (By convention we assume in this work that approximation algorithms for MaxCSPs always have α ≤ 1, while for MinCSPs α ≥ 1.) In the study of approximation algorithms, we are typically interested in finding algorithms with the value of α as close to 1 as possible. We are also interested in studying which values of α are not feasible, in which case we talk about inapproximability.
1.2 Some Important Results on Approximability of CSPs
On the algorithmic side, semidefinite programming (SDP) has been a very fruitful tool for approximating optimal values of CSPs. The first approximation algorithm based on SDP dates back to the work of Goemans and Williamson, who devised a 0.878-approximation algorithm for the MaxCut problem [11]. Ideas from this work have been very influential for subsequent research on approximation algorithms, and we highlight the 7/8-approximation algorithm for Max3Sat [15] and the 0.94-approximation algorithm for Max2Sat [5, 18].
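As a concrete illustration of the MaxCut problem and the notion of an optimal value, a tiny instance can be written as a CSP and brute-forced; the encoding below is a minimal sketch:

```python
from itertools import product

# A toy MaxCut instance viewed as a CSP: variables are graph vertices,
# and each edge (u, v) carries the Boolean constraint x_u != x_v.
edges = [(0, 1), (1, 2), (0, 2)]  # a triangle on 3 vertices
n_vars = 3

def value(assignment):
    """Number of satisfied (i.e., cut) edges under a 0/1 assignment."""
    return sum(assignment[u] != assignment[v] for u, v in edges)

# Brute-force the optimum over all 2^n assignments.
best = max(value(a) for a in product([0, 1], repeat=n_vars))
print(best)  # a triangle can never be fully cut: the optimum is 2 of 3 edges
```

Brute force is of course exponential; the point of approximation algorithms such as [11] is to get provably good assignments in polynomial time.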
On the hardness of approximation side, the celebrated PCP theorem [3, 4], combined with the usual assumption that P ≠ NP, provided a strong starting point used in many results showing impossibility of approximation. The highlight result using this starting point, along with the parallel repetition theorem of Raz [20] and long codes [7], comes from Håstad, who gave optimal inapproximability results for the MaxE3Sat and MaxE3Lin problems [12]. (In MaxE3Sat, constraints are clauses of width 3, while in MaxE3Lin, constraints are linear equations over Z_2 with exactly 3 variables each. We use the abbreviation E3 to denote that each constraint has width exactly 3; therefore, MaxE3Sat allows only clauses of width 3, while Max3Sat allows widths 1 and 2 as well.) Recently, Siu On Chan gave optimal (up to a constant factor) inapproximability results for MaxCSPs where the arity of the predicates is larger than the size of the domain [9].
Even though the PCP theorem was used with great success over the years, researchers still faced seemingly insurmountable difficulties in the pursuit of sharp inapproximability results for many fundamental problems such as Max2Sat, approximate graph coloring, and minimum vertex cover. More precisely, the starting point of almost all reductions was the Label Cover problem [1], which was constructed by combining the PCP theorem with the parallel repetition theorem of Raz [20]. In order to overcome these difficulties, Khot introduced a modification of Label Cover called Unique Label Cover [17] and conjectured it to be NP-hard. This conjecture is known as the Unique Games Conjecture (UGC), and it quickly became the central problem in the hardness of approximation area, especially since its validity implies the optimality of many already known approximation algorithms. Of special importance among UGC-based results is the one of Raghavendra [19], which shows that a certain semidefinite programming relaxation is optimal for all constraint satisfaction problems. Therefore, in case the UGC is shown to be true, this work would end the quest for optimal approximation algorithms for CSPs.
However, with the validity of the UGC still in question, there is an incentive to derive strong inapproximability results relying on other (weaker) assumptions, most preferably on P ≠ NP alone. Furthermore, while Raghavendra’s result shows how to optimally approximate CSPs, it does not give us a suitable way to compute numerical values of optimal approximation ratios; this question remains open for almost all CSPs, even very simple ones.
1.3 New Results for Regular CSPs
In order to facilitate further study, it is valuable to ask whether some additional properties of instances can be assumed when studying the approximability of CSPs. In this work we address this topic by studying regular instances of MaxCSPs and MinCSPs, i.e., instances in which each variable occurs the same number of times in the constraints. In particular, we prove the following results for polynomial-time approximation algorithms.
The proofs of Theorems 1 and 2 are based on a deterministic reduction introduced in Theorem 11. We also give a randomized reduction for MaxCSPs in order to prove the following theorem.
The details of the randomized reduction can be found in Theorem 10. The randomized reduction also works for MinCSPs, although with a substantially larger degree requirement, which makes it less efficient than even the deterministic one. For this reason we do not discuss the randomized reduction for MinCSPs.
In the theorems above, instead of a constant ε we can choose ε to be a suitable vanishing function of n, where n is the number of variables, to obtain in polynomial time an approximation ratio that is within an additive o(1) of the assumed ratio for MaxCSPs (and analogously for MinCSPs).
1.4 Prior Work
Both the randomized and the deterministic reductions introduced in this paper are based on a construction of Trevisan [22], which was used to show hardness of approximating the values of bounded-degree instances of the MaxSat problem. The reduction of Trevisan outputs instances in which every variable has the same degree in expectation, and an argument relying on Chernoff’s bound then shows that with high probability the degrees of the variables do not significantly exceed this expectation. Our deterministic reduction comes from a derandomization of the aforementioned result, while in the randomized reduction we reuse the mentioned construction of Trevisan [22] and show in our argument that with high probability the degrees fall within a narrow range around the expectation.
In order to make our reductions applicable to the weighted setting, in this work we also show that the approximability of weighted MaxCSPs (or weighted MinCSPs) is essentially the same (if we allow an additive loss in the approximation ratio) as the approximability of their unweighted versions. Let us remark that the same result was already proved in [16, Lemma 3.11] by relying on some results that appeared in [10]. We reprove this fact here for the sake of completeness.
2 Preliminaries
We consider constraint satisfaction problems given by the following definition.
For a predicate P we use ar(P) to denote its arity. We are interested in solving instances of CSPs, which are defined as follows.
Sometimes when working with Boolean CSPs, the definition of an instance allows applying constraints to literals instead of variables. However, Definition 5 is more general, since we can always extend the family of predicates belonging to a CSP Λ to create a CSP Λ′, such that instances of Λ with constraints over variables and their negations correspond to instances of Λ′ in the sense of Definition 5, and vice versa. In particular, we can create Λ′ by taking every predicate P of Λ, considering all shift vectors b ∈ {0, 1}^{ar(P)}, and adding to Λ′ the predicates P_b defined as
P_b(x) = P(x_1 + b_1, …, x_{ar(P)} + b_{ar(P)}),
where x_i is the i-th element of the tuple x, and addition takes place over Z_2.
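Representing a Boolean predicate as its set of accepted tuples, the construction of the shifted predicates P_b can be sketched as follows (the function name and representation are illustrative):

```python
from itertools import product

def shifted_predicates(P, arity):
    """Given a Boolean predicate P (a set of accepted 0/1 tuples), return
    all predicates P_b with P_b(x) = P(x XOR b), one per shift vector b.
    A tuple t is accepted by P_b exactly when t XOR b is accepted by P.
    Closing a CSP under these shifts lets constraints over literals be
    rewritten as constraints over plain variables."""
    return {b: {tuple(x ^ bi for x, bi in zip(t, b)) for t in P}
            for b in product([0, 1], repeat=arity)}

# Example: OR on two variables; the accepted tuples are all but (0, 0).
OR = {(0, 1), (1, 0), (1, 1)}
fam = shifted_predicates(OR, 2)
# Shifting by b = (1, 0) negates the first input: P_b(x1, x2) = (not x1) or x2.
print(sorted(fam[(1, 0)]))  # [(0, 0), (0, 1), (1, 1)]
```

Note that the shift by b = (0, …, 0) recovers P itself, so Λ′ indeed contains Λ.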
The degree deg(x) of a variable x is defined as the number of times x is mentioned in the constraints, or formally

deg(x) = Σ_{i=1}^{m} Σ_{j=1}^{ar(P_i)} 1[x_{i,j} = x],  (1)

where the i-th constraint applies the predicate P_i to the tuple of variables (x_{i,1}, …, x_{i,ar(P_i)}) and 1[·] is an indicator function. Instances in which all variables have the same degree are called regular.
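With constraints represented simply as tuples of variables (their scopes), the degree from (1) and the regularity check can be computed as in the following sketch:

```python
from collections import Counter

def degrees(constraints):
    """deg(x) = number of occurrences of x across all constraint scopes,
    matching definition (1).  A constraint is given here by its scope,
    a tuple of variable names."""
    deg = Counter()
    for scope in constraints:
        deg.update(scope)
    return deg

def is_regular(constraints):
    """An instance is regular when all variables share the same degree."""
    return len(set(degrees(constraints).values())) == 1

# Every variable below appears exactly twice, so the instance is regular.
cycle = [("x", "y"), ("y", "z"), ("z", "x")]
print(is_regular(cycle))                      # True
print(is_regular([("x", "y"), ("x", "z")]))   # False: deg(x)=2, deg(y)=deg(z)=1
```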
Max/MinCSP problems frequently appear in a setting in which the constraints of an instance are assigned nonnegative weights, which are typically used to encapsulate the significance of each constraint. Let us now give the definition of these problems.
Obviously, unweighted instances can be seen as weighted instances in which each constraint has weight 1.
Let us denote by α an assignment to the variables of a CSP instance I, i.e., a function mapping each variable to a value of the domain. For a tuple of variables x we interpret α(x) as the coordinatewise action of α on x. Given α, we define the value of α as

val_I(α) = (Σ_{i=1}^{m} w_i · P_i(α(x_i))) / (Σ_{i=1}^{m} w_i),  (2)

where w_i is the weight of the i-th constraint and x_i its tuple of variables. We also define the optimal value of I in the case of MaxCSP to be

Opt(I) = max_α val_I(α).  (3)

In the minimization version, the correct definition of the optimal value has “min” instead of “max” in the previous expression. Typically, the aim is to find a solution with value close to the optimal one. In the case of MaxCSP, a c-approximation algorithm is an algorithm which in polynomial time finds an assignment α such that

val_I(α) ≥ c · Opt(I).

For MinCSPs, the correct definition has “≤” instead of “≥” in the previous inequality.
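Assuming the value of an assignment is normalized by the total weight, as in (2), a minimal sketch of its computation (predicate and variable names are illustrative):

```python
def weighted_value(constraints, weights, assignment):
    """Value of an assignment as in (2): the weighted fraction of satisfied
    constraints.  Each constraint is a pair (predicate, scope); the predicate
    receives the assignment restricted to its scope, coordinatewise."""
    total = sum(weights)
    satisfied = sum(w for (pred, scope), w in zip(constraints, weights)
                    if pred(*(assignment[v] for v in scope)))
    return satisfied / total

xor = lambda a, b: a != b  # the MaxCut predicate
cons = [(xor, ("x", "y")), (xor, ("y", "z"))]
val = weighted_value(cons, [3.0, 1.0], {"x": 0, "y": 1, "z": 1})
print(val)  # first constraint (weight 3) satisfied, second not: 3/4
```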
While introducing weights allows a convenient representation of CSPs, the hardness of approximation essentially does not change, as shown in [16, Lemma 3.11]. We reprove these results here, starting with the following lemma.
By relying on this lemma, we can show that weights do not affect the approximability of Max/MinCSPs, as long as we allow for an additive loss in the approximation ratio. We first prove this claim for MaxCSPs.
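One standard way to trade weights for duplication (not necessarily the exact construction used in the proofs below) is to round each weight to an integer multiple of a small unit and repeat each constraint accordingly; the function name and the `precision` parameter are illustrative:

```python
def unweight(constraints, weights, precision=100):
    """Replace weights by duplication: round each weight to a multiple of
    1/precision of the total weight and repeat the constraint that many
    times.  Relative weights are preserved up to an additive error of
    about 1/precision, mirroring the additive loss in the approximation
    ratio incurred when passing to the unweighted setting."""
    total = sum(weights)
    out = []
    for c, w in zip(constraints, weights):
        out.extend([c] * round(precision * w / total))
    return out

cons = ["C1", "C2"]
dup = unweight(cons, [2.0, 6.0], precision=4)
print(dup)  # C1 repeated once, C2 three times: ['C1', 'C2', 'C2', 'C2']
```

The number of copies is polynomial in `precision`, which is why the additive loss can be made an arbitrarily small constant while keeping the instance polynomially large.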
The argument from the previous theorem does not work for MinCSPs, since in this case the optimal value can be arbitrarily small. The analogous claim for MinCSPs was already proved in [16, Lemma 3.11] using scaling techniques [14, 10]. For the sake of completeness, we give here a somewhat more detailed proof of this claim, using essentially the same techniques.
3 Reduction
We now prove the theorem which shows the existence of a randomized algorithm which can be used for proving Theorem 3. We remark that this theorem uses a reduction that appeared in [22], and that the main difference comes from the fact that we need to create instances in which the degrees of the variables are uniform, while bounded degree was sufficient in [22]. Additional complexity lies in the fact that we prove the theorem for any MaxCSP, and therefore have to account for predicates of different arities, while [22] considered MaxSat, in which all predicates are disjunctions.
Let us now give an overview of the proof. We start in the same way as [22], by creating, for each variable of the starting instance, a number of copies proportional to its degree. Then, in order to create a regular instance, we sample constraints of the starting instance, and create a constraint in the new instance by replacing each variable occurring in the scope with one of its copies chosen uniformly at random. Such a procedure outputs an instance in which every variable has the same degree in expectation. However, with small probability it can still happen that the deviations from this degree are large. For that reason, we repeat this procedure polynomially many times until the degrees of the variables are close to the expected value; otherwise our algorithm fails. In case our algorithm does not fail, we slightly update the resulting instance to ensure that each variable has the same degree. More precisely, if the expected degree of each variable is d, we replace variables whose degree exceeds d in the scopes of some constraints with new dummy variables. Then, we also create new constraints in order to make sure that each variable has degree exactly d, and the final step of the construction consists in making sure that the newly introduced dummy variables also have degree d. We then show that with very high probability these updates change/add only a small number of constraints, so our regular instance "looks like" the random one.

The last part of the proof shows that an assignment to the regular instance can be used to construct an assignment which satisfies a similar fraction of the constraints of the starting instance. The idea is the same as in [22]; namely, the fraction of copies of a variable x assigned a particular value gives us the probability that x should receive that value, and this is used in a randomized algorithm which converts an assignment of the copies into an assignment of the original variables. This algorithm can be derandomized, and we show that this conversion does not incur a large change in the value of the instance.
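The sampling step of this overview can be sketched as follows; the function and its parameters (`copies_per_unit`, `n_samples`) are illustrative, and the sketch omits the subsequent patching of deviating degrees:

```python
import random
from collections import Counter

def sample_step(constraints, copies_per_unit=3, n_samples=6000, seed=0):
    """Sketch of the sampling step: create copies of each variable in
    proportion to its degree, then resample constraints, replacing every
    occurrence of a variable by a uniformly random copy.  Each of the m
    constraints is chosen with probability 1/m, so a variable of degree d
    contributes n_samples * d / m occurrences spread over
    copies_per_unit * d copies: every copy has the same expected degree
    n_samples / (m * copies_per_unit)."""
    rng = random.Random(seed)
    deg = Counter(v for scope in constraints for v in scope)
    copies = {v: [(v, i) for i in range(copies_per_unit * d)]
              for v, d in deg.items()}
    sampled = [rng.choice(constraints) for _ in range(n_samples)]
    new = [tuple(rng.choice(copies[v]) for v in scope) for scope in sampled]
    return new, copies

cons = [("x", "y"), ("x", "z"), ("x", "w")]  # deg(x) = 3, the rest have degree 1
new, copies = sample_step(cons)
deg = Counter(v for scope in new for v in scope)
# x gets 9 copies, y/z/w get 3 each; every copy has expected degree
# 6000 / (3 * 3) ≈ 667, and the observed degrees concentrate around it.
```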
A formal statement and a proof are given below.
Let us now prove Theorem 1. For that reason, let us suppose that regular instances of a MaxCSP can be approximated within some fixed approximation ratio c, and let us consider an arbitrary (possibly not regular) instance I. Then, we construct a regular instance with the probabilistic algorithm described in the previous theorem, apply the approximation algorithm to find an assignment α′ for it, and from α′, using the algorithm from Theorem 10 [a], we can find an assignment α to the instance I whose value is within a small additive term of c times the optimum.
Now, the optimal value of I is bounded below by some fixed positive constant depending only on the CSP (as in the proof of Theorem 8, w.l.o.g. we assume that the instance does not contain predicates which evaluate to 0 under all assignments). Therefore, by choosing ε small enough relative to this constant, the claim of Theorem 1 follows.
Note that using an analog of Theorem 10 for MinCSPs to prove Theorem 2 would require a much larger degree, and therefore the instance produced by the reduction would be of superlinear size. We give a deterministic reduction instead, which works for both MaxCSPs and MinCSPs, and which creates a regular instance whose degree is bounded in terms of the maximal degree of a variable in I. The reduction is given as the following theorem.
Let us now show how this result can be used to prove Theorem 2. Hence, let us fix ε > 0, and starting from an instance I of a MinCSP, we apply the algorithm from the previous theorem to get a regular instance I′. Then, we use the approximation algorithm to get an assignment α′ to the variables of I′, and then by the algorithm from point [b] of Theorem 11 we obtain an assignment α for I.
In case the optimal value of I is positive, by claim [b] of Theorem 11 the optimal value of I′ is positive as well. Therefore, since α′ gives us a c-approximation of the optimum of I′, the value of α′ is within a factor c of that optimum. Finally, by claim [a] of Theorem 11, the value of α is close to the value of α′, which is only possible if α approximates the optimum of I within a ratio close to c.
It remains to consider the case when the optimal value of I equals zero. In that case the desired bound (8) follows directly, which finishes the proof of Theorem 2.
4 Conclusion and Some Open Questions
In this paper we introduced a reduction which shows how approximation algorithms working on regular unweighted instances of optimization CSPs can be converted (with an arbitrarily small loss in the approximation ratio) into approximation algorithms for weighted CSPs on which regularity is not imposed. One interesting question would be to see if we could use this result to obtain better approximation algorithms for particular CSPs. Also, the aim of quantifying what makes these problems hard is interesting in its own right, and therefore it would be valuable to analyze whether some additional structure of CSP instances can always be assumed when studying their inapproximability.
It is not uncommon that reductions showing hardness of approximation output instances which satisfy some form of regularity. This work shows that we cannot hope to obtain stronger inapproximability results for CSPs by considering irregular instances. However, for many other problems it is still not known whether regular instances might be easier to approximate; answering this question could facilitate the search for optimal algorithms. One family of problems for which this is an especially interesting topic, due to their generality and applicability, is defined as “Max Ones” in [16].
On the other side, let us remark that using irregular instances can also be instrumental in showing strong hardness results for certain problems, as recently shown in [6], which treated some cardinality-constrained CSPs, i.e., a variant of CSPs where we also prescribe the number of zeros/ones in admissible assignments. Hence, it would be interesting to explore whether we can obtain better hardness results by considering more irregular/asymmetric instances for problems for which a satisfactory understanding of approximability is lacking.
Acknowledgments
I am indebted to Per Austrin for pointing out the reduction in [22] to me. I also thank Johan Håstad for numerous useful comments which significantly improved the quality of presentation of this work.
References
 [1] S. Arora, L. Babai, J. Stern, and Z. Sweedyk, The hardness of approximate optima in lattices, codes, and systems of linear equations, J. Comput. Syst. Sci., 54 (1997), pp. 317–331.
 [2] S. Arora and B. Barak, Computational Complexity  A Modern Approach, Cambridge University Press, 2009.
 [3] S. Arora, C. Lund, R. Motwani, M. Sudan, and M. Szegedy, Proof verification and hardness of approximation problems, in 33rd Annual Symposium on Foundations of Computer Science, Pittsburgh, Pennsylvania, USA, 24–27 October 1992, 1992, pp. 14–23.
 [4] S. Arora and S. Safra, Probabilistic checking of proofs; A new characterization of NP, in 33rd Annual Symposium on Foundations of Computer Science, Pittsburgh, Pennsylvania, USA, 24–27 October 1992, 1992, pp. 2–13.

 [5] P. Austrin, Balanced max 2-sat might not be the hardest, in Proceedings of the 39th Annual ACM Symposium on Theory of Computing, San Diego, California, USA, June 11–13, 2007, 2007, pp. 189–197.

 [6] P. Austrin and A. Stankovic, Global cardinality constraints make approximating some max-2-csps harder, in Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2019, September 20–22, 2019, Massachusetts Institute of Technology, Cambridge, MA, USA, 2019, pp. 24:1–24:17.
 [7] M. Bellare, O. Goldreich, and M. Sudan, Free bits, PCPs, and nonapproximability – towards tight results, SIAM J. Comput., 27 (1998), pp. 804–915.
 [8] A. A. Bulatov, A dichotomy theorem for nonuniform csps, in 58th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2017, Berkeley, CA, USA, October 15–17, 2017, 2017, pp. 319–330.
 [9] S. O. Chan, Approximation resistance from pairwiseindependent subgroups, J. ACM, 63 (2016), pp. 27:1–27:32.
 [10] P. Crescenzi, R. Silvestri, and L. Trevisan, To weight or not to weight: Where is the question?, in Fourth Israel Symposium on Theory of Computing and Systems, ISTCS 1996, Jerusalem, Israel, June 10–12, 1996, Proceedings, IEEE Computer Society, 1996, pp. 68–77.
 [11] M. X. Goemans and D. P. Williamson, .879-approximation algorithms for MAX CUT and MAX 2sat, in Proceedings of the Twenty-Sixth Annual ACM Symposium on Theory of Computing, 23–25 May 1994, Montréal, Québec, Canada, 1994, pp. 422–431.
 [12] J. Håstad, Some optimal inapproximability results, J. ACM, 48 (2001), pp. 798–859.

 [13] W. Hoeffding, Probability inequalities for sums of bounded random variables, Journal of the American Statistical Association, 58 (1963), pp. 13–30.
 [14] O. H. Ibarra and C. E. Kim, Fast approximation algorithms for the knapsack and sum of subset problems, J. ACM, 22 (1975), pp. 463–468.
 [15] H. J. Karloff and U. Zwick, A 7/8-approximation algorithm for MAX 3sat?, in 38th Annual Symposium on Foundations of Computer Science, FOCS ’97, Miami Beach, Florida, USA, October 19–22, 1997, 1997, pp. 406–415.
 [16] S. Khanna, M. Sudan, L. Trevisan, and D. P. Williamson, The approximability of constraint satisfaction problems, SIAM J. Comput., 30 (2000), pp. 1863–1920.
 [17] S. Khot, On the power of unique 2-prover 1-round games, in Proceedings on 34th Annual ACM Symposium on Theory of Computing, May 19–21, 2002, Montréal, Québec, Canada, 2002, pp. 767–775.
 [18] M. Lewin, D. Livnat, and U. Zwick, Improved rounding techniques for the MAX 2sat and MAX DICUT problems, in Integer Programming and Combinatorial Optimization, 9th International IPCO Conference, Cambridge, MA, USA, May 27–29, 2002, Proceedings, 2002, pp. 67–82.
 [19] P. Raghavendra, Optimal algorithms and inapproximability results for every csp?, in Proceedings of the 40th Annual ACM Symposium on Theory of Computing, Victoria, British Columbia, Canada, May 17–20, 2008, 2008, pp. 245–254.
 [20] R. Raz, A parallel repetition theorem, SIAM J. Comput., 27 (1998), pp. 763–803.
 [21] T. J. Schaefer, The complexity of satisfiability problems, in Proceedings of the Tenth Annual ACM Symposium on Theory of Computing, STOC ’78, New York, NY, USA, 1978, ACM, pp. 216–226.
 [22] L. Trevisan, Non-approximability results for optimization problems on bounded degree instances, in Proceedings on 33rd Annual ACM Symposium on Theory of Computing, July 6–8, 2001, Heraklion, Crete, Greece, 2001, pp. 453–461.
 [23] D. Zhuk, A proof of CSP dichotomy conjecture, in 58th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2017, Berkeley, CA, USA, October 15–17, 2017, 2017, pp. 331–342.
Appendix A Appendix
We state here concentration inequalities which give bounds on the probability that a certain random variable deviates from its mean. While these bounds are widely known, the form in which they appear can vary, and therefore we fix below the versions which are used in this paper.
We use the following variant of Chernoff’s inequality.
A proof of this lemma can be found in [2, Corollary A.15]. Sometimes it will be more convenient to use the following corollary of the previous lemma.
We also need a concentration bound for sums of random variables with bounded range. For that, we use the following variant of Hoeffding’s inequality [13].
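As a quick sanity check of the kind of bound being used, the following sketch (with arbitrarily chosen parameters n = 100 and t = 10) compares the Hoeffding tail bound with the empirically observed deviation frequency for sums of independent uniform [0, 1] variables:

```python
import math
import random

def hoeffding_bound(n, t):
    """Hoeffding: P(|S - E[S]| >= t) <= 2 exp(-2 t^2 / n) for a sum S of
    n independent random variables, each taking values in [0, 1]."""
    return 2 * math.exp(-2 * t * t / n)

rng = random.Random(1)
n, t, trials = 100, 10.0, 2000
hits = sum(abs(sum(rng.random() for _ in range(n)) - n / 2) >= t
           for _ in range(trials))
empirical = hits / trials
# The bound here is 2e^{-2} ≈ 0.27; the empirical frequency is far smaller,
# since Hoeffding's inequality is not tight for uniform variables.
assert empirical <= hoeffding_bound(n, t)
```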