Minimizing functions of discrete variables represented as a sum of low-order terms is a ubiquitous problem occurring in many real-world applications. Understanding the complexity of different classes of such optimization problems is thus an important task. In the prominent VCSP framework (which stands for Valued Constraint Satisfaction Problem) a class is parameterized by a set Γ of cost functions that are allowed to appear as terms in the objective. The set Γ is usually called a language.
Different types of languages give rise to many interesting classes. A widely studied type is crisp languages, in which all functions are {0, ∞}-valued. They correspond to Constraint Satisfaction Problems (CSPs), whose goal is to decide whether a given instance has a feasible solution. Feder and Vardi conjectured that there exists a dichotomy for CSPs, i.e. every crisp language is either tractable or NP-hard. This conjecture was refined by Bulatov, Krokhin and Jeavons, who proposed a specific algebraic condition that should separate tractable languages from NP-hard ones. The conjecture was verified for many special cases [28, 5, 3, 2, 8], and was finally proved in full generality by Bulatov and Zhuk.
At the opposite end of the VCSP spectrum are the finite-valued CSPs, in which functions do not take infinite values. In such VCSPs the feasibility aspect is trivial, and one has to deal only with the optimization issue. One polynomial-time algorithm that solves tractable finite-valued CSPs is based on the so-called basic linear programming (BLP) relaxation; its applicability (also in the general-valued case) was fully characterized by Kolmogorov, Thapper and Živný. The complexity of finite-valued CSPs was completely classified by Thapper and Živný, who showed that all finite-valued CSPs not solvable by BLP are NP-hard.
The dichotomy is also known to hold for general-valued CSPs, i.e. when cost functions in Γ are allowed to take arbitrary values in ℚ ∪ {∞}. First, Kozik and Ochremiak showed that languages that do not satisfy a certain algebraic condition are NP-hard. Kolmogorov, Krokhin and Rolínek then proved that all other languages are tractable, assuming the (now established) dichotomy for crisp languages.
In this paper languages that satisfy this condition are called solvable. Since optimization problems encountered in practice often come without any guarantees, it is natural to ask what is the complexity of checking solvability of a given language Γ. We envisage that an efficient algorithm for this problem could help in theoretical investigations, and could also facilitate designing optimization approaches for tackling specific tasks.
Checking solvability of a given language is known as a meta-problem or a meta-question in the literature. Note that it can be solved in polynomial time for languages on a fixed domain D, since the solvability condition can be expressed by a linear program with |D|^(|D|^m) variables and a polynomial number of constraints, where m = 2 if the language is finite-valued and m = 4 otherwise. This naive solution, however, becomes very inefficient if D is a part of the input (which is what we assume in this paper).
The meta-problem above was studied by Thapper and Živný for finite-valued languages, and by Chen and Larose for crisp languages. In both cases it was shown to be NP-complete. We therefore focus on exponential-time algorithms. We obtain the following results for the problem of checking solvability of a given finite-valued language Γ:
An algorithm with complexity O*(c^|D|) for some constant c, where D is the domain of Γ and O*(·) hides a factor bounded by a fixed polynomial in the input size.
Assuming the Strong Exponential Time Hypothesis (SETH), we prove that for any constant ε > 0 the problem cannot be solved in O*((c − ε)^|D|) time.
We also present a few weaker results for general-valued languages (see Section 3).
Other related work
There is a vast literature devoted to exponential-time algorithms for various problems, both on the algorithmic side and on the hardness side. Hardness results usually assume one of the following two hypotheses [15, 16, 9].
Conjecture 1 (Exponential Time Hypothesis (ETH)).
Deciding satisfiability of a 3-CNF-SAT formula on n variables cannot be solved in 2^o(n) time.
Conjecture 2 (Strong Exponential Time Hypothesis (SETH)).
For any ε > 0 there exists an integer k such that deciding satisfiability of a k-CNF-SAT formula on n variables cannot be solved in O(2^((1−ε)n)) time.
Below we discuss some results specific to CSPs. Let (d, r)-CSP be the class of CSP problems on a d-element domain where each constraint involves at most r variables. The number of variables in an instance will be denoted as n. A trivial exhaustive search for a (d, r)-CSP instance runs in O*(d^n) time, where the O*(·) notation hides factors polynomial in the size of the input. For some (d, r)-CSP instances the complexity can be improved, and some important subclasses can even be solved in O*(2^n) time; for example, algorithms with such bounds were developed for the k-coloring problem. On the negative side, ETH is known to have the following implications:
The (d, 2)-CSP problem cannot be solved in time d^o(n).
The Graph Homomorphism problem cannot be solved in time 2^o(n log n). (This problem can be viewed as a special case of binary CSP, in which a single binary relation is applied to different pairs of variables.)
Recently, exponential-time algorithms for crisp NP-hard languages have been studied using algebraic techniques [17, 18, 24]. For example, it was shown that the following conditions are equivalent, assuming the (now proved) algebraic CSP dichotomy conjecture: (a) ETH fails; (b) there exists a finite crisp NP-hard language Γ that can be solved in subexponential time (i.e. all Γ-instances on n variables can be solved in 2^o(n) time); (c) all finite crisp NP-hard languages can be solved in subexponential time.
We denote ℚ̄ = ℚ ∪ {∞}, where ∞ is positive infinity. A function of the form f : D^n → ℚ̄ will be called a cost function over D of arity n. We will always assume that the set D is finite. The effective domain of f is the set dom f = {x ∈ D^n : f(x) < ∞}. Note that dom f can be viewed both as an n-ary relation over D and as a function D^n → {0, ∞}. We assume that f is represented as a list of pairs {(x, f(x)) : x ∈ dom f}. Accordingly, we define size(f) as the total size of this list, where the size of a rational number p/q (for integers p, q) is O(log|p| + log|q|).
A valued constraint satisfaction language Γ over domain D is a set of cost functions f : D^(n_f) → ℚ̄, where the arity n_f depends on f and may be different for different functions in Γ. The domain of Γ will be denoted as D_Γ. For a finite Γ we define size(Γ) = Σ_{f ∈ Γ} size(f).
Language Γ is called finite-valued if all functions f ∈ Γ take finite (rational) values. It is called crisp if all functions take only values in {0, ∞}. We denote by Feas(Γ) = {dom f : f ∈ Γ} the crisp language obtained from Γ in a natural way. Throughout the paper, for a subset B ⊆ D we use u_B to denote the unary function with dom u_B = B and u_B(a) = 0 for a ∈ B. (Domain D should always be clear from the context.) For a label a ∈ D we also write u_a for brevity.
An instance I of the valued constraint satisfaction problem (VCSP) is a function f_I : D^V → ℚ̄ given by
f_I(x) = Σ_{t ∈ T} f_t(x_{v(t,1)}, …, x_{v(t,n_t)}).
It is specified by a finite set of variables V, a finite set of terms T, cost functions f_t of arity n_t, and indices v(t, k) for t ∈ T, k ∈ [n_t]. A solution to I is a labeling x ∈ D^V with the minimum total value. The size of I is defined as the total size of its terms.
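For concreteness, an objective of this form can be minimized by exhaustive search over all labelings. The sketch below illustrates the definitions; the `(cost_function, scope)` representation of terms is our own encoding, not the paper's.

```python
import itertools

def vcsp_min(domain, variables, terms):
    """Brute-force minimizer for a VCSP instance.

    `terms` is a list of (cost_function, scope) pairs, where `scope` is a
    tuple of variable indices and `cost_function` maps a tuple of labels
    to a (possibly infinite) cost.
    """
    best_value, best_labeling = float("inf"), None
    for labeling in itertools.product(domain, repeat=len(variables)):
        value = sum(f(tuple(labeling[v] for v in scope)) for f, scope in terms)
        if value < best_value:
            best_value, best_labeling = value, labeling
    return best_value, best_labeling

# A toy instance over D = {0, 1}: a soft "cut" term plus unary preferences.
cut = lambda x: 0 if x[0] != x[1] else 1       # cost 1 for agreeing labels
prefer0 = lambda x: x[0]                        # cost 1 for label 1
value, labeling = vcsp_min([0, 1], range(2),
                           [(cut, (0, 1)), (prefer0, (0,)), (prefer0, (1,))])
```

The search visits |D|^|V| labelings, which matches the trivial O*(d^n) bound discussed earlier.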
The instance I is called a Γ-instance if all its terms f_t belong to Γ.
The set of all Γ-instances will be denoted as VCSP(Γ). A finite language Γ is called tractable if all instances in VCSP(Γ) can be solved in polynomial time, and it is NP-hard if the corresponding optimization problem is NP-hard. A long sequence of works culminating in the recent breakthrough papers [6, 32] has established that every finite language is either tractable or NP-hard.
2.1 Polymorphisms and cores
Let O_D denote the set of all operations g : D^m → D (of all arities m ≥ 1) and let O_D^(m) be its subset of m-ary operations. When D is clear from the context, we will sometimes write simply O and O^(m).
Any language Γ defined on D can be associated with a set of operations on D, known as the polymorphisms of Γ, which allow one to combine (often in a useful way) several feasible assignments into a new one.
An operation g ∈ O^(m) is a polymorphism of a cost function f if, for any x^1, …, x^m ∈ dom f, we have that g(x^1, …, x^m) ∈ dom f, where g is applied component-wise.
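This definition can be checked mechanically by brute force. The sketch below (with our own encoding of dom f as a set of tuples) illustrates it on two binary relations over {0, 1}:

```python
import itertools

def is_polymorphism(g, m, dom_f):
    """Check whether the m-ary operation g is a polymorphism of a cost
    function with effective domain dom_f (a set of tuples): applying g
    component-wise to any m feasible tuples must yield a feasible tuple."""
    for rows in itertools.product(dom_f, repeat=m):
        combined = tuple(g(*col) for col in zip(*rows))
        if combined not in dom_f:
            return False
    return True

# The relation "x <= y" on {0, 1} is preserved by min,
# while the disequality relation "x != y" is not.
leq = {(0, 0), (0, 1), (1, 1)}
neq = {(0, 1), (1, 0)}
assert is_polymorphism(min, 2, leq)
assert not is_polymorphism(min, 2, neq)   # min maps (0,1),(1,0) to (0,0)
```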
For any valued constraint language Γ over a set D, we denote by Pol(Γ) the set of all operations on D which are polymorphisms of every f ∈ Γ. We also let Pol^(m)(Γ) = Pol(Γ) ∩ O^(m).
Clearly, if g is a polymorphism of a cost function f, then g is also a polymorphism of dom f. For {0, ∞}-valued functions, which naturally correspond to relations, the notion of a polymorphism defined above coincides with the standard notion of a polymorphism for relations. Note that the projections (aka dictators), i.e. operations of the form g(x_1, …, x_m) = x_i, are polymorphisms of all valued constraint languages. Polymorphisms play the key role in the algebraic approach to the CSP, but, for VCSPs, more general constructs are necessary, which we now define.
An m-ary fractional operation ω on D is a probability distribution on O^(m). The support of ω is defined as supp(ω) = {g ∈ O^(m) : ω(g) > 0}.
For an operation g we will denote by χ_g the characteristic vector of g, i.e. the fractional operation with χ_g(g) = 1 and χ_g(h) = 0 for h ≠ g.
An m-ary fractional operation ω on D is said to be a fractional polymorphism of a cost function f if, for any x^1, …, x^m ∈ dom f, we have
Σ_{g ∈ O^(m)} ω(g) · f(g(x^1, …, x^m)) ≤ (1/m) · Σ_{i=1}^{m} f(x^i).
For a constraint language Γ, fPol^(m)(Γ) will denote the set of all m-ary fractional operations that are fractional polymorphisms of each function in Γ. Also, let fPol(Γ) = ∪_m fPol^(m)(Γ), and supp(Γ) = ∪_{ω ∈ fPol(Γ)} supp(ω).
(It is easy to check that supp(Γ) ⊆ Pol(Γ), and supp(Γ) = Pol(Γ) if Γ is crisp.)
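The defining inequality can again be verified by brute force. The sketch below (our own encoding; a fractional operation is a dict from operations to probabilities) checks the classic example that a submodular function on {0, 1}^2 admits the symmetric fractional polymorphism (1/2)·min + (1/2)·max:

```python
import itertools

def is_fractional_polymorphism(omega, m, f, dom_f):
    """Check the fractional-polymorphism inequality
        sum_g omega(g) * f(g(x^1, ..., x^m)) <= (1/m) * sum_i f(x^i)
    for all m-tuples of feasible tuples."""
    for rows in itertools.product(dom_f, repeat=m):
        lhs = sum(w * f(tuple(g(*col) for col in zip(*rows)))
                  for g, w in omega.items())
        rhs = sum(f(row) for row in rows) / m
        if lhs > rhs + 1e-9:
            return False
    return True

dom_f = [(0, 0), (0, 1), (1, 0), (1, 1)]
omega = {min: 0.5, max: 0.5}

# A submodular function: f(00) + f(11) <= f(01) + f(10).
f = lambda x: {(0, 0): 0, (0, 1): 2, (1, 0): 3, (1, 1): 1}[x]
assert is_fractional_polymorphism(omega, 2, f, dom_f)

# A supermodular function violates the inequality for the pair (01), (10).
f2 = lambda x: {(0, 0): 2, (0, 1): 0, (1, 0): 0, (1, 1): 2}[x]
assert not is_fractional_polymorphism(omega, 2, f2, dom_f)
```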
Next, we will need the notion of cores.
Language Γ on domain D is called a core if all unary operations in supp(Γ) are bijections. Subset B ⊆ D is called a core of Γ if B = g(D) for some unary operation g ∈ supp(Γ) and the language Γ[B] is a core, where Γ[B] is the language on domain B obtained by restricting each function in Γ to B.
The following facts are folklore knowledge. We do not know an explicit reference (at least in the case of general-valued languages), so we prove them in Appendix A for completeness.
Let B be a subset of D such that B = g(D) for some unary operation g ∈ supp(Γ).
(a) Set is a core of if and only if and .
(b) There exists vector such that for all . Furthermore, if is a core of then such can be chosen so that for all and .
(c) Let I be a Γ-instance on variables V. Then min_{x ∈ D^V} f_I(x) = min_{x ∈ B^V} f_I(x).
For a language we denote to be the set of subsets which are cores of , and to be set of operations such that (or equivalently such that ).
2.2 Dichotomy theorem
Several types of operations play a special role in the algebraic approach to (V)CSP.
An operation g ∈ O^(m) is called
idempotent if g(x, …, x) = x for all x ∈ D;
cyclic if m ≥ 2 and g(x_1, x_2, …, x_m) = g(x_2, …, x_m, x_1) for all x_1, …, x_m ∈ D;
symmetric if m ≥ 2 and g(x_1, …, x_m) = g(x_{π(1)}, …, x_{π(m)}) for all x_1, …, x_m ∈ D and any permutation π on [m];
Siggers if m = 4 and g(r, a, r, e) = g(a, r, e, a) for all a, r, e ∈ D.
A fractional operation ω is said to be idempotent/cyclic/symmetric if all operations in supp(ω) have the corresponding property.
Note, the Siggers operation is traditionally defined in the literature as an idempotent operation satisfying g(r, a, r, e) = g(a, r, e, a). Here we follow the terminology that does not require idempotency (such an operation has also been called quasi-Siggers).
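The Siggers identity can be tested mechanically. The sketch below (our own helper, not from the paper) checks it by enumerating all triples (a, r, e):

```python
import itertools

def is_siggers(g, domain):
    """Check the (non-idempotent) Siggers identity
    g(r, a, r, e) = g(a, r, e, a) for all a, r, e in the domain."""
    return all(g(r, a, r, e) == g(a, r, e, a)
               for a, r, e in itertools.product(domain, repeat=3))

# Any constant 4-ary operation trivially satisfies the identity,
# while projection onto the first coordinate does not (take r != a).
assert is_siggers(lambda *args: 0, [0, 1])
assert not is_siggers(lambda x1, x2, x3, x4: x1, [0, 1])
```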
We can now formulate the dichotomy theorem.
We will call languages satisfying the condition of Theorem 9 solvable. The following equivalent characterizations of solvability are either known or can easily be derived from previous work [29, 19, 22, 23] (see Appendix B):
Let Γ be a language. The following conditions are equivalent:
Γ admits a cyclic fractional polymorphism of some arity m ≥ 2.
supp(Γ) contains a Siggers operation.
Γ[B] is solvable for any core B of Γ.
Furthermore, a finite-valued language Γ is solvable if and only if it admits a symmetric fractional polymorphism of arity 2.
Note that checking solvability of a given language Γ is a decidable problem. Indeed, condition (c) can be tested by solving a linear program with |D|^(|D|^4) variables (one per 4-ary operation on D) and polynomially many constraints, where we maximize the total weight of Siggers operations subject to linear constraints expressing that ω is a fractional polymorphism of Γ of arity 4.
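For concreteness, this linear program can be sketched as follows (a reconstruction from the fractional-polymorphism definition above; the paper's exact formulation may differ in details):

```latex
\begin{aligned}
\max\ & \sum_{g \in \mathcal{O}^{(4)} \text{ Siggers}} \omega(g)\\
\text{s.t.}\ & \sum_{g \in \mathcal{O}^{(4)}} \omega(g)\, f\!\left(g(x^1,x^2,x^3,x^4)\right)
  \;\le\; \tfrac{1}{4}\sum_{i=1}^{4} f(x^i)
  \qquad \forall f \in \Gamma,\ x^1,\dots,x^4 \in \operatorname{dom} f,\\
& \sum_{g \in \mathcal{O}^{(4)}} \omega(g) = 1, \qquad \omega(g) \ge 0
  \qquad \forall g \in \mathcal{O}^{(4)}.
\end{aligned}
```

The optimum is positive if and only if some fractional polymorphism of arity 4 puts nonzero weight on a Siggers operation; the number of variables, |D|^(|D|^4), is what makes this naive approach impractical when D is part of the input.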
2.3 Basic LP relaxation
Symmetric operations are known to be closely related to LP-based algorithms for CSP-related problems. One algorithm in particular has been known to solve many VCSPs to optimality. This algorithm is based on the so-called basic LP relaxation, or BLP, defined as follows.
For a term t, let Δ_t be the set of probability distributions over labelings in D^(n_t), and let Δ denote the standard (|D| − 1)-dimensional simplex of distributions over D. The corners of Δ can be identified with elements of D. For a distribution μ_t and a variable v in the scope of term t, let μ_t[v] denote the marginal distribution of μ_t for v.
Given a VCSP instance I in the form (1), we define the value BLP(I) as the optimum of the basic LP relaxation of I.
If there are no feasible solutions then BLP(I) = ∞. The objective function and all constraints in this system are linear, therefore this is a linear program. Its size is polynomial in size(I), so BLP(I) can be found in time polynomial in size(I).
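A standard way to write the BLP relaxation (a sketch consistent with the marginal notation above; the paper's exact formulation may differ slightly) is:

```latex
\begin{aligned}
\mathrm{BLP}(I) \;=\; \min_{\mu}\ & \sum_{t \in T}\ \sum_{x \in \operatorname{dom} f_t} \mu_t(x)\, f_t(x)\\
\text{s.t.}\ & \sum_{x \,:\, x_k = a} \mu_t(x) \;=\; \mu_v(a)
  \qquad \forall t \in T,\ k \text{ with } v(t,k) = v,\ a \in D,\\
& \mu_t \in \Delta_t \ \ \forall t \in T, \qquad \mu_v \in \Delta \ \ \forall v \in V.
\end{aligned}
```

Each term carries a local distribution μ_t over its feasible tuples, each variable a distribution μ_v over labels, and the constraints force all local distributions to agree on their marginals.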
We say that BLP solves I if BLP(I) equals the minimum of f_I, and that BLP solves Γ if it solves all instances of VCSP(Γ). The following results are known.
Theorem 11 ().
(a) BLP solves Γ if and only if Γ admits a symmetric fractional polymorphism of every arity m ≥ 2. (b) If Γ is finite-valued then BLP solves Γ if and only if Γ admits a symmetric fractional polymorphism of arity 2 (i.e. if it is solvable).
The BLP relaxation also plays a key role for general-valued languages, as the following result shows. Recall that for a subset B ⊆ D, u_B is the unary function with dom u_B = B.
Consider instance with the set of variables and domain . For node denote . We define and to be the instances with variables and the following objective functions:
It is easy to see that for any . However, the BLP relaxations of these two instances may differ.
Theorem 13 ().
If is solvable and is a -instance then BLP solves .
If Γ is solvable and we know a core B of Γ, then an optimal solution for every Γ-instance can be found by using the standard self-reducibility method, in which we repeatedly add unary terms of the form u_a to the instance for different variables and labels a, and check whether this changes the optimal value of the BLP relaxation. A formal description of the method is given below. (The notations used there should be self-explanatory; in particular, I[B] denotes the instance obtained from I by restricting each term to domain B.)
(a) If LP-Probe returns a labeling then
(b) Suppose that is a -instance where is solvable and . Then LP-Probe.
Part (a) holds by construction, and part (b) can be derived from the following two facts (which hold under the preconditions of part (b)):
by Lemma 7(c).
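The self-reducibility idea can be illustrated as follows. In this toy sketch an exact brute-force minimizer plays the role of the BLP optimum (which is legitimate precisely when BLP solves the restricted instances), and a dictionary of fixed labels models the added unary terms u_a; all names here are our own.

```python
import itertools

def self_reduce(domain, variables, opt_value):
    """Recover an optimal labeling from a procedure that only returns
    optimal *values*: greedily fix each variable to a label that keeps
    the optimum unchanged (adding the unary term u_a at v)."""
    fixed = {}
    target = opt_value(fixed)
    for v in variables:
        for a in domain:
            trial = dict(fixed)
            trial[v] = a
            if opt_value(trial) == target:
                fixed = trial
                break
    return fixed

def opt_value(fixed, domain=(0, 1), n=3):
    """Toy stand-in for the BLP optimum: exact minimum of
    (x0 != x1) + (x1 != x2) + x0 subject to the fixed labels."""
    best = float("inf")
    for x in itertools.product(domain, repeat=n):
        if any(x[v] != a for v, a in fixed.items()):
            continue
        best = min(best, (x[0] != x[1]) + (x[1] != x[2]) + x[0])
    return best

labeling = self_reduce((0, 1), range(3), opt_value)   # fixes all three variables
```

With a polynomial number of value queries (at most |V|·|D|) this turns any exact value oracle into a solution-finding procedure, which is exactly the role LP-Probe plays above.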
2.4 Meta-questions and uniform algorithms
In the light of the previous discussion, it is natural to ask the following questions about a given language Γ: (i) Is Γ solvable? (ii) Is Γ a core? (iii) What is a core of Γ? Such questions are usually called meta-questions or meta-problems in the literature. For finite-valued languages their computational complexity has been studied previously.
Theorem 15 ().
Problems (i) and (ii) for -valued languages are NP-complete and co-NP-complete, respectively.
Theorem 16 ().
There is a polynomial-time algorithm that, given a core finite-valued language , either finds a binary idempotent symmetric fractional polymorphism of with , or asserts that none exists.
For crisp languages the following hardness results are known.
Theorem 17 ().
Deciding whether a given crisp language with a single binary relation is a core is a co-NP-complete problem. (Equivalently, testing whether a given directed graph has a non-bijective homomorphism onto itself is an NP-complete problem).
Theorem 18 ().
Deciding whether a given crisp language with binary relations is solvable is an NP-complete problem.
It is still an open question whether an analogue of Theorem 16 holds for crisp languages, i.e. whether solvability of a given core crisp language can be tested in polynomial time. However, it is known  that the answer would be positive assuming the existence of a certain uniform polynomial-time algorithm for CSPs.
Let C be a class of languages. A uniform polynomial-time algorithm for C is a polynomial-time algorithm that, for each input (Γ, I) with Γ ∈ C and I ∈ VCSP(Γ), computes a solution of I.
Theorem 20 ().
Suppose that there exists a uniform polynomial-time algorithm for the class of core crisp languages. Then there exists a polynomial-time algorithm that decides whether a given core crisp language is solvable (or equivalently admits a Siggers polymorphism).
Currently it is not known whether a uniform polynomial-time algorithm for core crisp languages exists. (Algorithms in [6, 32] assume that needed polymorphisms of the language are part of the input; furthermore, the worst-case bound on the runtime is exponential in ).
We remark that a wider range of meta-questions has been considered for crisp languages. In particular, the complexity of deciding whether a given Γ admits polymorphisms satisfying a given strong linear Maltsev condition, specified by a set of linear identities, has been studied. Examples of such identities are g(x, …, x) = x (meaning that g is idempotent), g(x_1, x_2, …, x_m) = g(x_2, …, x_m, x_1) (meaning that g is cyclic), and g(r, a, r, e) = g(a, r, e, a) (meaning that g is Siggers). We refer to the literature for further details.
3 Our results
Let be the set of -instances on variables with (for some fixed polynomial). We denote to be the running time of a procedure that computes for . Also, let be the combined running times of computing for instances during a call to LP-Probe for and some subset . Note, if is finite-valued then computing is a trivial problem, so and would be polynomial in .
In the results below D is always assumed to be the domain of language Γ. The size of D is denoted as d = |D|.
First, we consider the problem of computing a core of a given language Γ. A naive solution is to solve a linear program with |D|^|D| variables. We will present an alternative technique that runs more efficiently (in the case of finite-valued languages) but is allowed to output an incorrect result if Γ is not solvable. It will be convenient to introduce the following terminology: language Γ is a conditional core if either Γ is a core or Γ is not solvable. Similarly, set B ⊆ D is a conditional core of Γ if either B ∈ Cores(Γ) or Γ is not solvable. Note, if Γ is not solvable then any subset B ⊆ D is a conditional core of Γ.
To compute a conditional core of , we will use the following approach. Consider a pair where is a string of size that specifies set of candidate cores of . Formally, where for each . We assume that elements of can be efficiently enumerated, i.e. there exists a polynomial-time procedure for computing from and from . If is a set of subsets , we will denote
There exists an algorithm that for a given input does one of the following:
Produces a fractional polymorphism with and .
Asserts that there exists no vector with .
Asserts that one of the following holds: (i) is not solvable; (ii) .
It runs in time and uses space.
The algorithm in the theorem above is based on the ellipsoid method, which tests feasibility of a polytope using a polynomial number of calls to a separation oracle. In our case this oracle is implemented via one or more calls to LP-Probe for appropriate instances and candidate cores.
One possibility would be to use Theorem 21 with the set of all candidate cores. If the algorithm returns result (a) then we can take an operation g ∈ supp(ω) and call the algorithm recursively for the language Γ[g(D)] on a smaller domain. If we get result (b) or (c) then one can show that Γ is a conditional core, so we can stop. For finite-valued languages this approach would run in time exponential in |D|. We will pursue an alternative approach with an improved complexity.
This approach will use partitions P of the domain D. For such P we denote a corresponding set of unary operations compatible with P.
We say that P is a partition of Γ if this set (intersected with supp(Γ)) is non-empty. In particular, the partition of D into singletons is a partition of Γ, since the corresponding set contains the identity mapping. We say that P is a maximal partition of Γ if P is a partition of Γ and no coarser partition (i.e. one with fewer classes) has this property. Clearly, for any Γ there exists at least one P which is a maximal partition of Γ. By analogy with cores, we say that P is a conditional (maximal) partition of Γ if either P is a (maximal) partition of Γ or Γ is not solvable.
In the results below P is always assumed to be a partition of D.
(a) If is a maximal partition of then and .
(b) If is a partition of then .
There exists an algorithm that for a given input (Γ, P) does one of the following:
Asserts that P is a conditional partition of Γ.
Asserts that P is not a partition of Γ.
As before, the algorithm in Theorem 23 is based on the ellipsoid method. However, now we cannot use procedure LP-Probe to implement the separation oracle, since a candidate core is not available. Instead, we solve the BLP relaxation of a suitable instance and derive a separating hyperplane from a (fractional) optimal solution of the relaxation.
(1) A conditional maximal partition of can be computed in time. (2) Once such is found, a conditional core of can be computed using time and space. If then the algorithm also produces a fractional polymorphism such that , and contains operation with .
Testing solvability of a conditional core
Once we have found a conditional core B of Γ, we need to test whether the language Γ[B] is solvable. This problem is known to be solvable in polynomial time for finite-valued languages, see Theorem 16. That result can be extended as follows.
There exists an algorithm that for a given language Γ does one of the following:
Produces an idempotent fractional polymorphism ω certifying solvability of Γ:
ω has arity 2 and is symmetric, if Γ is finite-valued;
ω has arity 4 and contains a Siggers operation in the support, if Γ is not finite-valued. Furthermore, in each case the vector ω satisfies a polynomial bound on the size of its support.
Asserts that one of the following holds: (i) Γ is not solvable; (ii) Γ is not a core.
Its runtime is polynomial if Γ is finite-valued, and exponential in |D| otherwise.
Solvability of a given finite-valued language can be tested in time. If the answer is positive, the algorithm also returns a fractional polymorphism with and a symmetric idempotent fractional polymorphism where for some ; furthermore, for .
Let us fix a constant c. As Theorems 15, 17 and 18 state, testing whether a given language Γ is (i) solvable and (ii) a core are both NP-hard problems for c-valued languages. We now present additional hardness results under the Exponential Time Hypothesis (ETH) and the Strong Exponential Time Hypothesis (SETH) (see Conjectures 1 and 2). Note that for Theorem 27 we simply reuse known reductions. We say that a family of languages is r-bounded if each language in it satisfies a fixed polynomial size bound and all its cost functions have arity at most r.
Suppose that ETH holds. Then there exists a 2-bounded family of -valued languages
such that the following problems cannot be solved in time:
(a) Deciding whether language is solvable.
(b) Deciding whether language is a core.
Suppose that SETH holds. Then for any
there exists an -bounded family of -valued languages
such that the following problems cannot be solved in time:
(a) Deciding whether language is solvable (assuming the existence of a uniform polynomial-time algorithm for core crisp languages, in the case when ).
(b) Deciding whether language satisfies .
4.1 Ellipsoid method
Using the ellipsoid method, Grötschel, Lovász and Schrijver established a polynomial-time equivalence between linear optimization and separation problems for polytopes. We will need one implication of this result, namely that efficient separation implies efficient feasibility testing. A formal statement is given below.
Consider a family of instances where an instance can be described by a string of length over a fixed alphabet. Suppose that for each we have an integer and a finite set , where each element corresponds to a hyperplane . This hyperplane encodes linear inequality on vector . Let us denote and for a subset .
We make the following assumptions: (i) each can be described by a string of size ; (ii) vector can be computed from and in polynomial time (implying that , where the size of a vector in is the sum of sizes of its components); (iii) set can be constructed algorithmically from input . Note that quantities , , all depend on ; for brevity this dependence is not reflected in the notation.
Theorem 29 ([13, Lemma 6.5.15]).
Consider the following problems:
[Separation] Given an instance and a vector y, either decide that y belongs to the polytope, or find a separating hyperplane (a, b) of polynomially bounded size satisfying ⟨a, y⟩ > b and ⟨a, x⟩ ≤ b for all x in the polytope.
[Feasibility] Given an instance, decide whether the polytope is non-empty.
There exists an algorithm for solving [Feasibility] that makes a polynomial number of calls to the oracle for [Separation] plus a polynomial number of other operations.
Note that a (possibly inefficient) algorithm for solving [Separation] always exists: if y does not belong to the polytope then one possibility is to find a violated element of the defining set and return the corresponding hyperplane. (Its size is polynomial in the input by assumption.)
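This naive separation strategy can be sketched directly (our own toy encoding: each hyperplane is a pair (a, b) representing the inequality ⟨a, x⟩ ≤ b):

```python
def separation_oracle(hyperplanes, y):
    """Naive separation for the polytope {x : <a, x> <= b for all (a, b)}:
    scan the (finite) list of hyperplanes and return a violated one, or
    None if y is feasible. The ellipsoid method only needs such an oracle,
    never an explicit enumeration of all constraints at once."""
    for a, b in hyperplanes:
        if sum(ai * yi for ai, yi in zip(a, y)) > b:
            return (a, b)
    return None

# The unit square [0, 1]^2 described by four halfspaces.
H = [((1, 0), 1), ((-1, 0), 0), ((0, 1), 1), ((0, -1), 0)]
assert separation_oracle(H, (0.5, 0.5)) is None          # interior point
assert separation_oracle(H, (2.0, 0.5)) == ((1, 0), 1)   # violates x1 <= 1
```

In the constructions of this paper the scan is over exponentially many hyperplanes, which is why the efficient oracles are instead implemented via LP-Probe or the BLP relaxation.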
For some parts of the proof we will also need the following variation.
Consider the following problems:
[Separation+] Given instance and vector , either decide that , or find an element with (i.e. element with ).
[Feasibility+] Given instance , decide whether . If , find subset such that and .
There exists an algorithm for solving [Feasibility+] that makes a polynomial number of calls to the oracle for [Separation+] plus a polynomial number of other operations.
This result can be deduced from Theorem 29: the desired subset can be taken as the set of all elements returned by the oracle during the run of the algorithm.
4.2 Farkas lemma for fractional polymorphisms
Let us fix integer and sets with . These choices will be specified later (they will depend on the specific theorem that we will be proving). Let be the set of tuples such that is an -ary function in and . Note, can be viewed as a matrix of size :
For such we will write and . For an operation we denote , and for a cost function we denote .
Next, we define various hyperplanes in as follows:
For let be the hyperplane corresponding to the inequality
where we used the Iverson bracket notation: [φ] = 1 if φ is true, and [φ] = 0 otherwise.
For let be the hyperplane corresponding to the inequality .
Introduce a special element , and let be the hyperplane corresponding to the (unsatisfiable) inequality .
For a subset it will be convenient to denote . In other words, is the set of vectors satisfying
Suppose that . Then if and only if admits an -ary fractional polymorphism such that and .
If then it is possible to compute such in time (given and ) so that it additionally satisfies .
Introducing slack variables , we have if and only if the following system does not have a solution :