Recall that SAT and its restrictions cnf-SAT, k-cnf-SAT and 3-cnf-SAT are NP-complete, as shown in cook71 ; karp72 ; levin73 . The 1-3-SAT problem asks, given a collection of triples of literals over some variables, whether there exists a truth assignment to the variables such that each triple contains exactly one true literal and exactly two false literals.
Schaefer’s reduction given in schaefer78 transforms an instance of 3-cnf-SAT into a 1-3-SAT instance. A simple truth-table argument shows this reduction to be parsimonious, hence #1-3-SAT is complete for the class #P, while a parsimonious reduction from 1-3-SAT also shows the counting version of its positive restriction, #1-3-SAT⁺, to be complete for #P. We mention Toda’s result in toda91 implying that counting is at least as hard computationally as the Polynomial Hierarchy, which in a sense is our preoccupation here. A related result of Valiant and Vazirani valiant85 implies that detecting unique solutions is as hard as NP. The algorithm we present counts solutions exhaustively, making use of preprocessing and brute force on the resulting kernel.
The 1-K-SAT problem, a generalization of 1-3-SAT, asks, given a collection of tuples of size k over some variables, whether there exists a truth assignment to the variables such that each k-tuple contains exactly one true literal and k - 1 false literals.
The 1-K-SAT problem has been studied before under the name XSAT. In dahllof04 very strong upper bounds are given for this problem, including its counting version, while in bjorklund08 the same bound is given for both decision and counting, where n is the number of variables and m the number of clauses.
Gaussian Elimination has been used before in the context of boolean satisfiability. In soos10 the author uses this method for handling xor constraints. Other recent examples of Gaussian elimination used in exact algorithms or kernelization may indeed be found in the literature wahlstrom13 ; giannopoulou16 .
Hence the idea that constraints implying this type of exclusivity can be formulated in terms of equations, and therefore processed using Gaussian Elimination, is not new, and the intuition behind it is straightforward.
We mention the influential paper by Dell and van Melkebeek dell14 together with a continuation of their study by Jansen and Pieterse jansen16 ; jansen17 . It is shown in these papers that, under the assumption that coNP ⊄ NP/poly, there cannot exist a significantly smaller kernelization of various problems, of which exact satisfiability is one. We shall use these results directly in our current approach.
We begin our investigation by showing how a 1-3-SAT instance can be turned into an integer programming version 0-1-IP instance with fewer variables. The number of variables in the 0-1-IP instance is at most two-thirds of the number of variables in the 1-3-SAT instance. We achieve this by a straightforward preprocessing of the 1-3-SAT instance using Gauss-Jordan elimination.
We are then able to count the solutions of the 1-3-SAT instance by performing a brute-force search on the 0-1-IP instance. This method gives interesting upper bounds on 1-3-SAT, and the associated counting problem, though without a further analysis, the bounds thus obtained may not be the strongest upper bounds found in the literature for these problems.
Our method shows how instances become easier to solve as the clauses-to-variables ratio grows. For random k-cnf-SAT the ratio of clauses to variables has been studied intensively; for example, ding15 gives a proof that a random formula with density below a certain threshold is satisfiable with high probability, while above the threshold it is unsatisfiable with high probability.
The ratio plays a similar role in our treatment of 1-3-SAT. Another important observation is that in our case this ratio cannot go below 1/3, up to uniqueness of clauses, at the expense of polynomial-time pre-processing of the problem instance.
We note that, by reduction from 3-cnf-SAT, the restriction of 1-3-SAT to instances in which the number of clauses does not exceed the number of variables remains NP-complete. Hence we restrict our attention to these instances.
Our preprocessing induces a certain type of “order” on the variables, such that some of the non-satisfying assignments can be omitted from our solution search. We thereby manage to dissect the 1-K-SAT instance and obtain a “core” of variables on which the search can be performed. For a treatment of Parameterized Complexity the reader is directed to downey16 .
After a brief consideration of the notation used in Section 3, we define in Section 4 the problems 1-3-SAT and 1-3-SAT⁺ and the associated counting problems #1-3-SAT and #1-3-SAT⁺.
We elaborate on the relationship between the number of clauses and the number of variables in 1-3-SAT. We give a proof that 1-3-SAT is NP-complete via reduction from 3-cnf-SAT, and a proof that 1-3-SAT⁺ is NP-complete via reduction from 1-3-SAT.
We conclude by remarking that, due to this chain of reductions, the restriction of 1-3-SAT⁺ to instances with more variables than clauses is also NP-complete, since these kinds of instances encode the 3-cnf-SAT problem. We hence restrict our treatment of 1-3-SAT⁺ to these instances.
Section 5 presents our method of reducing a 1-3-SAT instance to an instance of 0/1 Integer Programming with Equality only. This results in a 0/1 I.P.E. instance with at most two thirds the number of variables found in the 1-3-SAT instance.
Our method develops the approach sketched by Jansen and Pieterse in an introductory paragraph of jansen17 . Jansen and Pieterse are not primarily interested in reducing the number of variables, only the number of constraints, and they do not tackle the associated counting problem.
The method consists of encoding a 1-3-SAT instance into a system of linear equations and performing Gaussian Elimination on this system.
Linear algebraic methods show that the resulting matrix can be rearranged into an r × r diagonal submatrix of “independent” columns, where r is the rank of the system, to which is appended a submatrix containing the rest of the columns together with the result column; these correspond roughly to the 0/1 I.P.E. instance we have in mind. We further know the values in the independent submatrix can be scaled to 1.
The most pessimistic scenario complexity-wise is when the number of input clauses, or the rank of the resulting system, is a third of the number of variables, from which we obtain our complexity upper bounds.
One may contrast this case with that of a full-rank system matrix, for which Gaussian Elimination alone suffices to find a solution. Further to this, we explain how to solve the 0/1 I.P.E. problem in order to recover the number of solutions of the 1-3-SAT problem.
Section 6 outlines the method of substitution, well known to be equivalent to Gaussian Elimination. Section 7 gives a worked example of this algorithm. Sections 8 and 9 are concerned with an analysis of the algorithm's complexity and with its correctness proof, respectively.
Section 10 outlines the implications for Computational Complexity, giving an argument that the existence of the kernel found in the previous sections implies the existence of a non-trivial kernel for the more general 1-3-SAT problem.
We write, as usual, a dedicated symbol to signify a polynomial-time reduction.
We denote boolean variables by the usual letters, and the true and false constants in the usual way. For any SAT formula, we write that it belongs to SAT if it is satisfiable, and that it does not otherwise. We reserve a fixed notation for a truth assignment to a variable.
We write for the set of formulas in -CNF with variables and unique clauses. We also write to specify concretely such a formula, where shall denote the sets of variables and clauses of . We write .
We will make use of the following properties of a given map :
scalability: for constant .
For a given tuple we let denote the element .
For given linear constraints for some and , we let be the result of substituting uniformly the expression of constraint for variable in constraint . This is to be performed in restricted circumstances.
4 Exact Satisfiability
One-in-three satisfiability arose in the late seventies as an elaboration relating to Schaefer’s Dichotomy Theorem schaefer78 . It is proved there that, under certain assumptions, boolean satisfiability problems are either in P or they are NP-complete.
The counting versions of satisfiability problems were introduced in valiant79 , and it is known in general that counting is in some sense strictly harder than the corresponding decision problem.
This is due to the fact that, for example, producing the number of satisfying assignments of a formula in 2-CNF is complete for #P, while the corresponding decision problem is known to be in P valiant79 . We thus restrict our attention to 1-3-SAT and, more precisely, 1-3-SAT⁺ formulas.
Definition 4.1 (1-3-Sat).
1-3-SAT is the problem of determining whether a formula is satisfiable, where the formula comprises a collection of triples of literals
such that, for any clause, exactly one of its literals is allowed to be true in a satisfying assignment; no clause may contain repeated literals or a literal together with its negation, and every variable of the formula appears in at least one clause.
In the restricted case that all literals occur positively, we denote the problem as 1-3-SAT⁺.
In the extended case that we are required to produce the number of satisfying assignments, these problems will be denoted #1-3-SAT and #1-3-SAT⁺.
The 1-3-SAT formula is satisfiable by the assignment and for . The 1-3-SAT formula is not satisfiable.
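The exactly-one semantics of Definition 4.1 can be checked mechanically. Below is a minimal brute-force counter, assuming a DIMACS-style convention (ours, not the paper's) in which a literal is a signed integer: +i for variable i and -i for its negation.

```python
from itertools import product

def count_one_in_three(n_vars, clauses):
    """Count assignments in which each clause has exactly one true literal.

    A literal is a nonzero integer: +i for variable i, -i for its negation
    (a convention assumed here, not taken from the paper).
    """
    count = 0
    for bits in product([False, True], repeat=n_vars):
        value = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)
        if all(sum(value(l) for l in clause) == 1 for clause in clauses):
            count += 1
    return count

# A single positive triple over x1, x2, x3: one satisfying assignment
# per choice of the unique true variable.
print(count_one_in_three(3, [(1, 2, 3)]))  # -> 3
```

The same routine serves as a correctness oracle for the faster procedures developed later.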
Up to uniqueness of clauses and variable naming the set determines one 1-3-SAT formula and this formula is trivially satisfiable.
Consider the formula which has variables and clauses, hence belongs to the set and it is satisfiable, trivially, by any assignment that makes each clause evaluate to true.
Now take any clause of the formula. We claim there is no other clause sharing a variable with it, for otherwise, taking a variable in their intersection, the number of variables used by these clauses would decrease by one. Since the clauses do not overlap in variables, our uniqueness claim must hold: the clauses partition the set of variables. ∎
For 1-3-SAT, the sets of formulas with fewer clauses than a third of the number of variables are empty.
Schaefer gives a polynomial-time parsimonious reduction from 3-cnf-SAT to 1-3-SAT, hence showing that 1-3-SAT is NP-complete and its counting version #1-3-SAT is #P-complete.
Proposition 4.1 (Schaefer, schaefer78 ).
1-3-SAT is NP-complete.
The proof is by reduction from 3-cnf-SAT. For each clause we create three 1-3-SAT clauses over fresh auxiliary variables. Hence we obtain a 1-3-SAT instance whose numbers of variables and clauses are linear in the numbers of variables and clauses of the 3-cnf-SAT instance. ∎
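For concreteness, the sketch below implements the textbook gadget found in the literature for this reduction; it may differ from Schaefer's own construction, and it is satisfiability-preserving but not parsimonious, so it witnesses only the NP-completeness claim, not the counting one.

```python
def three_sat_to_one_in_three(clauses, n_vars):
    """Map each 3-cnf clause (x v y v z) to three one-in-three triples.

    Uses the textbook gadget (an assumption; not necessarily the
    reduction used in the paper): for clause (x, y, z) with fresh
    variables a, b, c, d, emit the triples
        (-x, a, b), (y, b, c), (-z, c, d).
    Literals are signed integers; fresh variables get new indices.
    """
    out = []
    next_var = n_vars
    for (x, y, z) in clauses:
        a, b, c, d = next_var + 1, next_var + 2, next_var + 3, next_var + 4
        next_var += 4
        out.extend([(-x, a, b), (y, b, c), (-z, c, d)])
    return out, next_var

triples, total_vars = three_sat_to_one_in_three([(1, 2, 3)], 3)
print(len(triples), total_vars)  # -> 3 7
```

One checks by enumeration that the gadget admits a one-in-three solution exactly when the original clause is satisfied, but some satisfying assignments extend in more than one way, which is why it is not parsimonious.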
The following statement is given in garey02 . For the sake of completeness, we provide a proof by a parsimonious reduction from 1-3-SAT.
Proposition 4.2 (Garey and Johnson garey02 ).
1-3-SAT⁺ is NP-complete.
We construct an instance of 1-3-SAT⁺ from an instance of 1-3-SAT. Every clause of the 1-3-SAT instance containing no negation is added unchanged to the 1-3-SAT⁺ instance.
For every clause containing one negation, we add to the 1-3-SAT⁺ instance two clauses over one fresh variable.
For a clause containing two negations we add two fresh variables and three clauses.
For a clause containing three negations we add three fresh variables and four clauses.
We obtain a 1-3-SAT⁺ formula with at most 3m more clauses and at most 3m more variables, for initial numbers of clauses and variables m and n respectively.
Our reduction is parsimonious, for it is verifiable by truth table that the number of satisfying assignments of each original 1-3-SAT clause equals the number of satisfying assignments of the corresponding collection of 1-3-SAT⁺ clauses. ∎
For if an instance of 3-cnf-SAT is reduced to an instance of 1-3-SAT then our reduction entails and .
We analyze the further reduction to the instance of 1-3-SAT⁺. Partition the clauses of the 1-3-SAT instance into the collections containing no negation, one negation, two negations and three negations respectively.
Our reduction implies and . Then, .
5 Rank of a Formula
A rank function is used as a measure of “independence” for members of a certain set. The dual notion of nullity is defined as the complement of the rank.
A rank function obeys the following
Definition 5.2 (rank and nullity).
For a 1-3-SAT formula define the system of linear equations as follows:
for any clause, add to the system the equation stating that the values of its three literals sum to 1, a negated literal contributing one minus the value of its variable;
Define the rank and nullity of as and .
If formula is clear from context, we also use the shorthand and .
is a rank function with respect to sets of 1-3-SAT triples.
For any 1-3-SAT instance transformed into a linear system we observe the following:
Such an equation has a 0/1 solution if and only if exactly one of the three literal values is equal to 1 and the other two are equal to 0.
A formula is satisfiable if and only if the associated system has at least one solution over {0,1}.
A formula has as many satisfying assignments as the number of solutions of the associated system over {0,1}.
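These observations can be made concrete. The sketch below, with function names and literal conventions of our own choosing, encodes each triple as an equation whose literal values sum to 1 and reduces the system by Gauss-Jordan elimination over the rationals.

```python
from fractions import Fraction

def formula_to_system(n_vars, clauses):
    """Encode each triple as a linear equation over the rationals.

    "Exactly one of three literals is true" means the literal values sum
    to 1; a negated literal -i contributes (1 - x_i), so its constant is
    moved to the right-hand side. Conventions are ours, not the paper's.
    """
    rows = []
    for clause in clauses:
        row = [Fraction(0)] * n_vars
        rhs = Fraction(1)
        for lit in clause:
            if lit > 0:
                row[lit - 1] += 1
            else:
                row[-lit - 1] -= 1
                rhs -= 1
        rows.append(row + [rhs])
    return rows

def gauss_jordan(rows):
    """Bring the augmented matrix to reduced row echelon form in place;
    return (rows, rank)."""
    rank, cols = 0, len(rows[0]) - 1
    for col in range(cols):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        rows[rank] = [v / rows[rank][col] for v in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                f = rows[r][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rows, rank

# Clauses (x1, x2, x3) and (x3, x4, x5): rank 2, hence nullity 5 - 2 = 3.
rows, rank = gauss_jordan(formula_to_system(5, [(1, 2, 3), (3, 4, 5)]))
print(rank, 5 - rank)  # -> 2 3
```

Exact rational arithmetic avoids the floating-point pivoting issues that would otherwise corrupt the rank computation.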
We define the binary integer programming problem with equality here and show briefly that 1-3-SAT is reducible to a “smaller” instance of this problem.
Definition 5.3 (-integer programming with equality).
The 0-1-IP problem is defined as follows. Given a family of finite tuples with each for some fixed , and given a sequence , decide whether there exists a tuple such that
0-1-IP is solvable in time O(2^k · m), where m is the number of 0-1-IP tuples and k is the size of the tuples.
The bound is obtained by applying an exhaustive search. ∎
Let be a 1-3-SAT formula, then and .
Follows from the observation that is a rank function. ∎
Consider a 1-3-SAT formula and suppose and . The satisfiability of is decidable in .
The result of performing Gauss-Jordan Elimination on
yields, after a suitable re-arrangement of column vectors, the reduced echelon form
Now consider the following structure, obtained from the given dependencies above through ignoring the zero entries
This induces an instance of 0-1-IP which can be solved as follows
We note that the length of the sequences equals the nullity of the system; hence the brute-force procedure has to enumerate all 0/1 sequences of that length. Furthermore, each such sequence is tested twice against all of the constraints, resulting in the claimed time complexity.
To see the algorithm is correct, we give a proof that considers when the counter is incremented. Suppose for all some is not a solution to either or . In this case, the counter is not incremented and we claim does not induce a solution to the 1-3-SAT formula . For in this case is not a solution to the system and hence by Corollary 5.1 cannot be a satisfying solution to . In effect, the counter is not incremented as we have not seen an additional satisfying solution.
Now suppose for all some is a solution to either or . In this case, the counter is incremented and we claim is indeed a solution to the 1-3-SAT formula .
For if is a solution to all th rows constraint then satisfies the constraint giving the satisfying assignment for all variables corresponding to variables in the diagonal matrix, and for variables corresponding to column for which , and for variables corresponding to column for which .
Similarly, if is a solution to all th rows constraint then satisfies the constraint giving the satisfying assignment if corresponds to the diagonal variable , for all variables corresponding to all other variables in the diagonal matrix, and for variables corresponding to column for which , and for variables corresponding to column for which . ∎
By the pre-processing of the problem instance using Gaussian Elimination, shown above, 1-3-SAT is reduced in polynomial time to 0-1-IP. ∎
1-3-SAT is solvable in time O(2^(n - r)), up to polynomial factors, for a formula of rank r and nullity n - r.
There are r-many equations to satisfy by any assignment, and there are (n - r)-many variables to search through exhaustively in order to solve the 0-1-IP problem, which in turn solves the 1-3-SAT problem. ∎
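The kernel-then-search idea of this section can be sketched as follows: reduce the system, then enumerate only the non-pivot (free) variables and accept an assignment when every determined pivot value lands in {0, 1}. Representation choices below are ours, not the paper's exact algorithm.

```python
from fractions import Fraction
from itertools import product

def count_solutions(rows):
    """Count 0/1 solutions of A x = b, rows given as [a_1, ..., a_n, b].

    Gauss-Jordan first, then brute force over the free variables only:
    the pivot variables are determined and merely checked for 0/1 values.
    """
    rows = [[Fraction(v) for v in row] for row in rows]
    n = len(rows[0]) - 1
    pivots, rank = [], 0
    for col in range(n):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        rows[rank] = [v / rows[rank][col] for v in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                f = rows[r][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[rank])]
        pivots.append(col)
        rank += 1
    if any(row[-1] for row in rows[rank:]):  # inconsistent system
        return 0
    free = [c for c in range(n) if c not in pivots]
    count = 0
    for bits in product([0, 1], repeat=len(free)):
        x = dict(zip(free, bits))
        ok = True
        for r in range(rank):
            # pivot value is determined by the free variables in row r
            val = rows[r][-1] - sum(rows[r][c] * x[c] for c in free)
            if val not in (0, 1):
                ok = False
                break
            x[pivots[r]] = val
        count += ok
    return count

# x1+x2+x3 = 1 and x3+x4+x5 = 1 over {0,1}: five satisfying assignments.
print(count_solutions([[1, 1, 1, 0, 0, 1], [0, 0, 1, 1, 1, 1]]))  # -> 5
```

The search space is 2^(n - r) rather than 2^n, which is the source of the upper bound above.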
for any instance and .
6 The Method of Substitution
The substitution algorithm is depicted in Fig. 1; we give a brief textual explanation of it below.
1. Let be the lowest, middle and highest labeled variable in clause . These values are distinct.
2. Represent clause in normal form as .
3. Sort the formula in ascending order of .
1. Initialize ,
2. For each ,
3. Initialize ,
4. For each clause with such that is found in the variables of , or is found in the variables of , do
5. Perform the substitution , and normalize the result.
6. Decrement variable . Continue step 4.
7. Decrement variable . Continue step 2.
We remark an essentially cubic halting time of the substitution algorithm, which intuitively corresponds to the cubic halting time of Gaussian Elimination, an equivalent method.
Substitution halts in cubic time for any formula.
Denote by or by , when clear from context, the structure thus obtained, denote by and the rank and nullity thus induced, and denote by and the sets of independent, and dependent variables generated through our process.
We remark the operator is idempotent.
Each clause is read, and each read clause is compared with every other clause in search of a common variable; if such a variable is found, a replacement is performed.
Suppose there exists a clause such that .
Consider the case . Let variable be in this set difference. It cannot be the case that since this means the procedure missed a mandatory substitution of , which the second iteration picked up.
Therefore . In this case, is the result of a chain of substitutions ending with clause . An induction on this chain shows the procedure missed a mandatory substitution of an -variable, which the second iteration picked up.
Consider the case . Let variable be in this set difference. It cannot be the case that , since this means the second iteration introduced a new variable in clause , the result of a substitution of which the first iteration missed.
Therefore . In this case, is a result of a chain of substitutions ending with clause . An induction on this chain of substitutions shows the second iteration of the procedure introduced a new variable in clause , the result of a substitution of an -variable, which the first iteration missed. ∎
As a consequence, any set of formulas is closed under substitution.
7 An example
Consider the 1-3-SAT formula
We outline the meaning of the rows and columns within our tabular format.
The formula is represented in tabular format. Sort according to .
The formula is encoded as below. Use a tabular data structure for the algorithm, initialized to empty.
Substitution phase: operate on the data structure (steps 3 and 4).
We obtain the following partial result (step 5).
Rearrange the tabular structure.
Note the result of the computation:
Hence the rank and nullity of the formula are and .
A brute-force search on the set of dependent variables yields the desired result for the 1-3-SAT formula.
After the substitution process is finished, each of the clauses is expressed in terms of independent variables, that is, variables which cannot be expressed in terms of other variables. We also record, for each constraint induced by the substitution method, the number of variables it contains, excluding its head variable.
8 Algorithm Analysis
We maximize the number of substitutions performed at each step. Hence, at the first step we encounter two substitutions, at the second we encounter three, while at every subsequent step we must assume there exist two variables which we can substitute in terms of previously found variables, which indicates that the Fibonacci recurrence describes our process.
The largest number of expansions determined by running substitution on the collection of clauses, is
Definition 8.1 (Representation).
The size of a representation for a given instance of 1-3-SAT expressed by substitution as is given by the formula
The size of the resulting representation associated to formulas treated by Remark 8.1 converges asymptotically to .
The bound is given by an analysis of the growth of the Fibonacci sequence. It is well known that the growth rate of the sequence converges to the golden ratio (1 + √5)/2 ≈ 1.618. ∎
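The convergence of the Fibonacci growth rate to the golden ratio, used in the proof above, is confirmed by a few lines of computation:

```python
# Ratio of consecutive Fibonacci numbers approaches the golden ratio.
a, b = 1, 1
for _ in range(40):
    a, b = b, a + b
print(round(b / a, 6))  # -> 1.618034
```

Forty iterations already agree with (1 + √5)/2 to well beyond six decimal places.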
Contrast the scenario in Remark 8.1, to the case in which there are no substitutions induced, i.e. .
The size of the resulting representation associated to formulas treated by Remark 8.3 is .
In this case we have independent variables, for a value of of . ∎
Any 1-3-SAT formula admits a representation with size for
The size of any representation is bounded above by for
implies and therefore
9 Adequacy Proof
Let be a 1-3-SAT formula and let be the resulting structure obtained by performing substitution on . Then, and .
It suffices to show that .
Suppose for a contradiction this is not the case. We have that .
That is, that the dependent variables of the system of equations exceed in number the dependent variables obtained through our substitution algorithm.
We let . What this means is there exist variables such that for .
Take any such variable in this list and perform another substitution, so as to decrease the difference by one. The existence of the list hence contradicts the statement of Remark 6.2. ∎
Proposition 10.1 (Schroeppel and Shamir schroeppel81 ).
#1-3-SAT can be solved in time O(2^(n/2)) and space O(2^(n/4)).
Proposition 10.2 (Schroeppel and Shamir schroeppel81 ).
#0-1-IP can be solved in time O(2^(n/2)) and space O(2^(n/4)), for n the number of variables.
#1-3-SAT can be solved in time O(2^(n/3)) and space O(2^(n/6)).
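The time bound of Proposition 10.2 can be illustrated by a meet-in-the-middle counter for 0/1 linear systems. This sketch (ours, not the paper's) attains O(2^(n/2)) time with O(2^(n/2)) space; Schroeppel and Shamir's priority-queue construction, omitted here, lowers the space to O(2^(n/4)).

```python
from collections import Counter
from itertools import product

def count_mitm(rows):
    """Count 0/1 solutions of A x = b by meet-in-the-middle.

    Each row is [a_1, ..., a_n, b]. We tabulate partial left-hand sides
    over the first half of the variables, then look up, for each right
    half, the vector that completes every row's sum exactly to b.
    """
    n = len(rows[0]) - 1
    half = n // 2
    left = Counter()
    for bits in product((0, 1), repeat=half):
        key = tuple(sum(a * x for a, x in zip(row[:half], bits))
                    for row in rows)
        left[key] += 1
    total = 0
    for bits in product((0, 1), repeat=n - half):
        need = tuple(row[-1] - sum(a * x for a, x in zip(row[half:], bits))
                     for row in rows)
        total += left[need]
    return total

# Same two-clause system as before: five solutions, found in 2^(n/2) steps.
print(count_mitm([[1, 1, 1, 0, 0, 1], [0, 0, 1, 1, 1, 1]]))  # -> 5
```

Applied to the 0-1-IP kernel with at most 2n/3 variables, this kind of halving is what yields the O(2^(n/3)) bound of the corollary above.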
Dell and van Melkebeek dell14 give a rigorous treatment of the concept of “sparsification”. In their framework, an oracle communication protocol for a language is a communication protocol between two players.
The first player is given the input and is only allowed to run in time polynomial in the length of . The second player is computationally unbounded, without initial access to . At the end of communication, the first player should be able to decide membership in . The cost of the protocol is the length in bits of the communication from the first player to the second.
Therefore, if the first player is able to reduce, in polynomial time, the problem instance significantly, the cost of communicating the “kernel” to the second player would also decrease, hence providing us with a very natural formal account for the notion of sparsification.
Jansen and Pieterse in jansen16 state and give a procedure by which any instance of Exact Satisfiability with unbounded clause length can be reduced to an equivalent instance of the same problem with only n + 1 clauses, for n the number of variables.
The concern regarding the number of clauses in 1-3-SAT can be addressed as we have done above. We observe that, for any instance of 3-cnf-SAT, the chain of polynomial-time parsimonious reductions through instances of 1-3-SAT and 1-3-SAT⁺ respectively implies that in both resulting instances the variables outnumber the clauses.
What is also claimed in jansen16 is that, assuming coNP ⊄ NP/poly, no polynomial-time algorithm can in general transform an instance of Exact Satisfiability on n variables to a significantly smaller equivalent instance, i.e. an instance encoded using O(n^(2-ε)) bits for any ε > 0.
We believe it is already transparent that we have in fact obtained a significantly smaller kernel above, i.e. we transform parsimoniously an instance of n variables into a “compressed” instance of 0-1-IP with at most 2n/3 variables.
Definition 10.1 (Constraint Satisfaction Problem).
A csp is a triple where
is a set of variables,
is the discrete domain the variables may range over, and
is a set of constraints.
Every constraint pairs a subset of the variables with a relation on the domain. An evaluation of the variables is a function from the variables to the domain. An evaluation satisfies a constraint if the values it assigns to the constraint’s variables satisfy the constraint’s relation.
The following are constraint satisfaction problems:
In what follows we switch between notations and write a csp in a more general form, with a problem written as , with instances such that and a string representation of and .
Definition 10.2 (Kernelization).
Let be two parameterized decision problems, i.e. for some finite alphabet .
A kernelization for the problem parameterized by is a polynomial time reduction of an instance to an instance such that:
if and only if ,
Definition 10.3 (Encoding).
An encoding of a problem is a bijection such that for any we have .
A non-trivial kernel for 3-cnf-SAT is a kernelization of this problem transforming any instance to an instance of an arbitrary NP-complete csp , such that and with for an encoding of and some .
Remark 10.2 (Dell and van Melkebeek dell14 ).
3-cnf-SAT admits a trivial kernel of size O(n^3), since an instance on n variables has at most O(n^3) distinct clauses.
Lemma 10.1 (Dell and van Melkebeek dell14 ).
If 3-cnf-SAT admits a non-trivial kernel, then coNP ⊆ NP/poly and the Polynomial Hierarchy collapses to its third level.
A non-trivial kernel for 1-3-SAT is a kernelization of this problem transforming any instance to an instance of an arbitrary NP-complete csp , such that and with for an encoding of and some .
Remark 10.3 (Jansen and Pieterse jansen16 ).
1-3-SAT admits a kernel with and .
The following statement is given in jansen16 . The authors elaborate on the results of dell14 to analyze combinatorial problems from the perspective of sparsification, and give several arguments that non-trivial kernels for such problems would entail a collapse of the Polynomial Hierarchy to its third level.
It is essential to note here that this line of reasoning was used by researchers studying sparsification with the intention of proving lower bounds on the existence of kernels, while the results presented by us are slightly more optimistic.
Lemma 10.2 (Jansen and Pieterse jansen16 ).
If 1-3-SAT admits a non-trivial kernel, then coNP ⊆ NP/poly.
If 1-3-SAT⁺ admits a non-trivial kernel, then 1-3-SAT admits a non-trivial kernel.
Let an instance of 1-3-SAT be given. By Schaefer’s results it follows that it can be parsimoniously reduced in polynomial time to a 1-3-SAT⁺ formula whose numbers of variables and clauses are linearly bounded in those of the original instance.
Assuming 1-3-SAT admits a non-trivial kernel, this implies 1-3-SAT admits a non-trivial kernel, and therefore through Lemma 10.1 .
To spell this out, suppose we have non-trivial kernel for the problem 1-3-SAT, with and . We observe using the reduction from 1-3-SAT, and therefore and, we obtain via the reduction the existence of a non-trivial kernel for 1-3-SAT, that is with . ∎
The following result is essentially a restatement of Corollary 5.3.
1-3-SAT admits a non-trivial kernel.
This follows from Lemma 5.3. The first player preprocesses the input in polynomial time using Substitution, and passes the result to the second player, which makes use of its unbounded resources to provide a solution to this kernel.
It remains to show the cost of this computation is bounded non-trivially, i.e. for .
This requirement follows from Lemma 5.3, for the instance of 0-1-IP to which we reduce has at most 2n/3 variables and at most n clauses.
We store the resulting instance of 0-1-IP in a matrix with polynomially bounded entries, whose entry in row i and column j holds the coefficient of variable j in constraint i, to which we add the result column.
From Remark 8.5 we obtain that the bit representation of this kernel is indeed of non-trivial size, for some non-negative constant. ∎
We have shown the mechanism through which a 1-3-SAT instance can be transformed into an instance of the integer programming problem 0-1-IP with at most two-thirds as many variables as the 1-3-SAT instance.
This was done by a straightforward preprocessing of the 1-3-SAT instance using the method of Substitution.
We manage to count satisfying assignments to the 1-3-SAT instance through a type of brute-force search on the 0-1-IP instance.
The method we have presented, in the shape of Gaussian Elimination, gives interesting upper bounds on 1-3-SAT, and shows how instances become easier to solve as the clauses-to-variables ratio grows.
An essential observation here is that in this case the ratio cannot go below 1/3, up to uniqueness of clauses. This can easily be checked in polynomial time.
By reduction from 3-cnf-SAT, the restriction of 1-3-SAT to instances in which the number of clauses does not exceed the number of variables remains NP-complete.
Our contribution is in pointing out how the method of Substitution together with a type of brute-force approach suffice to find, constructively, a non-trivial kernel for 1-3-SAT.
The most important question in Theoretical Computer Science remains open.
Foremost thanks are due to Igor Potapov for his support and benevolence shown towards this project.
Most of the ideas presented here have crystallized while the author was studying with Rod Downey at Victoria University of Wellington, in the New Zealand winter of 2010.
This work would have been much harder to write without the kind hospitality of Gernot Salzer at TU Wien in 2013. There I met and discussed with experts in the field such as Miki Hermann from École Polytechnique.
I was fortunate enough to attend at TU Wien the outstanding exposition in Computational Complexity delivered by Reinhard Pichler.
I am very much indebted to Noam Greenberg for supervising my Master of Science Dissertation in the year of 2012, one hundred years after the birth of Alan Turing.
I thank Asher Kach, Dan Turetzky and David Diamondstone for many useful thoughts on Computability, Complexity and Model Theory.
I have also found useful Dillon Mayhew’s insights in Combinatorics, and Cristian Calude’s research on Algorithmic Information Theory.
Exceptional logicians such as Rob Goldblatt, Max Cresswell and Ed Mares have also supervised various projects in which I was involved.
Western Australia is also in my thoughts, and I thank Mark Reynolds and Tim French for teaching me to think, and act under pressure.
Special acknowledgments are given to my colleague Reino Niskanen for useful comments and proof reading an initial compressed version of this manuscript.
Bucharest, June 2019
S. A. Cook, The complexity of theorem-proving procedures, in: Proceedings of the third annual ACM symposium on Theory of computing, ACM, 1971, pp. 151–158.
- (2) R. M. Karp, Reducibility among combinatorial problems, in: Complexity of computer computations, Springer, 1972, pp. 85–103.
- (3) L. A. Levin, Universal sequential search problems, Problemy Peredachi Informatsii 9 (3) (1973) 115–116.
- (4) T. J. Schaefer, The complexity of satisfiability problems, in: Proceedings of the tenth annual ACM symposium on Theory of computing, ACM, 1978, pp. 216–226.
- (5) S. Toda, PP is as hard as the polynomial-time hierarchy, SIAM Journal on Computing 20 (5) (1991) 865–877.
- (6) L. G. Valiant, V. V. Vazirani, NP is as easy as detecting unique solutions, in: Proceedings of the seventeenth annual ACM symposium on Theory of computing, ACM, 1985, pp. 458–463.
- (7) V. Dahllöf, P. Jonsson, R. Beigel, Algorithms for four variants of the exact satisfiability problem, Theoretical Computer Science 320 (2-3) (2004) 373–394.
- (8) A. Björklund, T. Husfeldt, Exact algorithms for exact satisfiability and number of perfect matchings, Algorithmica 52 (2) (2008) 226–249.
- (9) M. Soos, Enhanced Gaussian elimination in DPLL-based SAT solvers, in: POS@SAT, 2010, pp. 2–14.
- (10) M. Wahlström, Abusing the Tutte matrix: an algebraic instance compression for the K-set-cycle problem, arXiv preprint arXiv:1301.1517.
- (11) A. C. Giannopoulou, D. Lokshtanov, S. Saurabh, O. Suchý, Tree Deletion Set has a polynomial kernel but no OPT^O(1) approximation, SIAM Journal on Discrete Mathematics 30 (3) (2016) 1371–1384.
- (12) H. Dell, D. van Melkebeek, Satisfiability allows no nontrivial sparsification unless the polynomial-time hierarchy collapses, Journal of the ACM 61 (4) (2014) 23.
- (13) B. M. Jansen, A. Pieterse, Optimal sparsification for some binary CSPs using low-degree polynomials, arXiv preprint arXiv:1606.03233.
- (14) B. M. Jansen, A. Pieterse, Sparsification upper and lower bounds for graph problems and not-all-equal SAT, Algorithmica 79 (1) (2017) 3–28.
- (15) J. Ding, A. Sly, N. Sun, Proof of the satisfiability conjecture for large k, in: Proceedings of the forty-seventh annual ACM symposium on Theory of computing, ACM, 2015, pp. 59–68.
- (16) R. G. Downey, M. R. Fellows, Fundamentals of parameterized complexity, Vol. 201, Springer, 2016.
- (17) L. G. Valiant, The complexity of computing the permanent, Theoretical computer science 8 (2) (1979) 189–201.
- (18) M. R. Garey, D. S. Johnson, Computers and intractability, W.H. Freeman, New York, 1979.
- (19) R. Schroeppel, A. Shamir, A T = O(2^(n/2)), S = O(2^(n/4)) algorithm for certain NP-complete problems, SIAM Journal on Computing 10 (3) (1981) 456–464.