1 Introduction
Recall that SAT and its restrictions CNF-SAT, k-CNF-SAT and 3-CNF-SAT are NP-complete, as shown in cook71 ; karp72 ; levin73 . The 1-in-3-SAT problem asks, given a collection of triples of literals over some variables, whether there exists a truth assignment to the variables such that each triple contains exactly one true literal and exactly two false literals.
Schaefer's reduction given in schaefer78 transforms an instance of 3-CNF-SAT into a 1-in-3-SAT instance. A simple truth-table argument shows this reduction to be parsimonious, hence the counting problem #1-in-3-SAT is complete for the class #P, while a parsimonious reduction from 1-in-3-SAT likewise shows the counting version of its positive variant, #1-in-3-SAT+, to be complete for #P. We mention Toda's result in toda91 , implying that counting is computationally at least as hard as the entire polynomial hierarchy, which in a sense is our preoccupation here. A related result of Valiant and Vazirani valiant85 implies that detecting unique solutions is as hard as NP. The algorithm we present counts solutions exhaustively, making use of preprocessing and brute force on the resulting kernel.
The 1-in-k-SAT problem, a generalization of 1-in-3-SAT, asks, given a collection of tuples of size k over some variables, whether there exists a truth assignment to the variables such that each tuple contains exactly one true literal and k - 1 false literals.
The 1-in-k-SAT problem has been studied before under the name of XSAT. In dahllof04 very strong upper bounds, exponential in the number of variables n, are given for this problem and for its counting version, while in bjorklund08 a single bound of the same form is given for both decision and counting, where n is the number of variables and m the number of clauses.
Gaussian elimination has been used before in the context of Boolean satisfiability. In soos10 the author uses this method for handling xor types of constraints. Other recent examples of Gaussian elimination used in exact algorithms or kernelization may be found in the literature wahlstrom13 ; giannopoulou16 .
Hence the idea that constraints expressing this type of exclusivity can be formulated in terms of linear equations, and therefore processed using Gaussian elimination, is not new, and the intuition behind it is straightforward.
We mention the influential paper by Dell and van Melkebeek dell14 together with a continuation of their study by Jansen and Pieterse jansen16 ; jansen17 . It is shown in these papers that, under the assumption that the polynomial hierarchy does not collapse, there cannot exist a significantly small kernelization of various problems, of which exact satisfiability is one. We shall use these results directly in our current approach.
We begin our investigation by showing how a 1-in-3-SAT instance can be turned into a 0/1 integer programming (0/1-IP) instance with fewer variables. The number of variables in the 0/1-IP instance is at most two-thirds of the number of variables in the 1-in-3-SAT instance. We achieve this by a straightforward preprocessing of the 1-in-3-SAT instance using Gauss-Jordan elimination.
We are then able to count the solutions of the 1-in-3-SAT instance by performing a brute-force search on the 0/1-IP instance. This method gives interesting upper bounds on 1-in-3-SAT and the associated counting problem, though without further analysis the bounds thus obtained may not be the strongest upper bounds found in the literature for these problems.
Our method shows how instances become easier to solve as the clauses-to-variables ratio varies. For random k-CNF-SAT the ratio of clauses to variables has been studied intensively; for example ding15
gives a proof that a random formula with density below a certain threshold is with high probability satisfiable, while above the threshold it is with high probability unsatisfiable.
The ratio plays a similar role in our treatment of 1-in-3-SAT. Another important observation is that in our case this ratio cannot go below 1/3, up to uniqueness of clauses, at the expense of polynomial-time preprocessing of the problem instance.
We note that, by reduction from 3-CNF-SAT, the restriction of 1-in-3-SAT to instances in which the number of clauses does not exceed the number of variables remains NP-complete. Hence we restrict our attention to these instances.
Our preprocessing induces a certain type of "order" on the variables, such that some of the non-satisfying assignments can be omitted by our solution search. We therefore manage to dissect the 1-in-k-SAT instance and obtain a "core" of variables on which the search can be performed. For a treatment of Parameterized Complexity the reader is directed to downey16 .
2 Outline
After a brief consideration of the notation used in Section 3, we define in Section 4 the problems 1-in-3-SAT and its positive variant 1-in-3-SAT+, together with the associated counting problems #1-in-3-SAT and #1-in-3-SAT+.
We elaborate on the relationship between the number of clauses and the number of variables in 1-in-3-SAT. We give a proof that 1-in-3-SAT is NP-complete via reduction from 3-CNF-SAT, and a proof that 1-in-3-SAT+ is NP-complete via reduction from 1-in-3-SAT.
We conclude by remarking that, due to this chain of reductions, the restriction of 1-in-3-SAT+ to instances with more variables than clauses is also NP-complete, since these kinds of instances encode the 3-CNF-SAT problem. We hence restrict our treatment of 1-in-3-SAT+ to these instances.
Section 5 presents our method of reducing a 1-in-3-SAT instance to an instance of 0/1 Integer Programming with Equality only. This results in a 0/1 I.P.E. instance with at most two-thirds the number of variables found in the 1-in-3-SAT instance.
Our method elaborates on the method sketched by Jansen and Pieterse in an introductory paragraph of jansen17 . Jansen and Pieterse are not primarily interested in reducing the number of variables, only in reducing the number of constraints. They do not tackle the associated counting problem.
The method consists of encoding a 1-in-3-SAT instance into a system of linear equations and performing Gaussian elimination on this system.
Linear-algebraic methods show the resulting matrix can be rearranged into an r x r diagonal submatrix of "independent" columns, where r is the rank of the system, to which is appended a submatrix containing the rest of the columns and the result column; together these correspond roughly to the 0/1 I.P.E. instance we have in mind. We further know the entries of the independent submatrix can be scaled to 1.
The most pessimistic scenario complexity-wise is when the number of input clauses, or rather the rank of the resulting system, is a third of the number of variables, r = n/3, from which we obtain our complexity upper bounds.
To this case one may wish to contrast the case of the system matrix being of full rank, for which Gaussian elimination alone suffices to find a solution. Further to this, we explain how to solve the 0/1 I.P.E. problem in order to recover the number of solutions to the 1-in-3-SAT problem.
Section 6 outlines the method of substitution, well known to be equivalent to Gaussian elimination. Section 7 gives a worked example of this algorithm. Sections 8 and 9 are concerned with an analysis of the algorithm's complexity and a correctness proof, respectively.
Section 10 outlines the implications for Computational Complexity, giving an argument that the existence of the 1-in-3-SAT+ kernel found in the previous sections implies the existence of a nontrivial kernel for the more general 1-in-3-SAT problem.
3 Notation
We write A ≤p B, as usual, to signify a polynomial-time reduction from problem A to problem B.
We denote Boolean variables by x1, x2, x3, and so on. Denote the true and false constants by 1 and 0 respectively. For any SAT formula φ, write φ ∈ SAT if φ is satisfiable and write φ ∉ SAT otherwise. Reserve the notation v(x) for a truth assignment to the variable x.
We write Φ(n, m) for the set of formulas in CNF with n variables and m unique clauses. We also write φ(V, C) to specify such a formula concretely, where V and C shall denote the sets of variables and clauses of φ, with |V| = n and |C| = m.
We will make use of the following properties of a given map f:

subadditivity: f(A ∪ B) ≤ f(A) + f(B);

scalability: f(cA) ≤ c f(A) for a constant c.

For a given tuple t = (t1, …, tk) we let ti denote the element at position i.
For given linear constraints ci : xi = ei, we let cj[ei/xi] be the result of substituting uniformly the expression ei of constraint ci for the variable xi in constraint cj. This is to be performed in restricted circumstances.
4 Exact Satisfiability
One-in-three satisfiability arose in the late seventies as an elaboration relating to Schaefer's Dichotomy Theorem schaefer78 . It is proved there that, under certain assumptions, Boolean satisfiability problems are either in P or they are NP-complete.
The counting versions of satisfiability problems were introduced in valiant79 , and it is known in general that counting is in some sense strictly harder than the corresponding decision problem.
This is due to the fact that, for example, producing the number of satisfying assignments of a formula in 2-CNF is complete for #P, while the corresponding decision problem is known to be in P valiant79 . We thus restrict our attention to 1-in-3-SAT and, more precisely, 1-in-3-SAT+ formulas.
Definition 4.1 (1-in-3-SAT).
1-in-3-SAT is defined as determining whether a formula φ is satisfiable, where the formula comprises a collection of triples of literals
over the variables x1, …, xn, such that for any clause exactly one of its three literals is allowed to be true in an assignment, no clause may contain repeated literals or a literal and its negation, and every variable in φ appears in at least one clause.
In the restricted case that every literal is positive, we denote the problem as 1-in-3-SAT+.
In the extended case that we are required to produce the number of satisfying assignments, these problems will be denoted #1-in-3-SAT and #1-in-3-SAT+.
Example 4.1.
The 1-in-3-SAT formula {(x1, x2, x3)} is satisfiable by the assignment v(x1) = 1 and v(xi) = 0 for i ∈ {2, 3}. The 1-in-3-SAT formula {(x1, x2, x3), (x1, x2, x4), (x1, x3, x4), (x2, x3, x4)} is not satisfiable.
Lemma 4.1.
Up to uniqueness of clauses and variable naming, the set Φ(n, n/3) determines exactly one 1-in-3-SAT formula, and this formula is trivially satisfiable.
Proof.
Consider the formula φ = {(x1, x2, x3), (x4, x5, x6), …, (x(n-2), x(n-1), xn)}, which has n variables and n/3 clauses, hence belongs to the set Φ(n, n/3), and it is satisfiable, trivially, by any assignment that makes each clause evaluate to true.
Now take any clause c of φ. We claim there is no other clause c′ sharing a variable with c, for otherwise let x be in their intersection and we can see the number of variables used by the clauses reduces by one variable, to at most n - 1. Now, since the clauses of φ do not overlap in variables, we can see that our uniqueness claim must hold, since the clauses of φ form a partition of the set of variables. ∎
Remark 4.1.
For 1-in-3-SAT, the sets Φ(n, m) with m < n/3 are empty.
Schaefer gives a polynomial-time parsimonious reduction from 3-CNF-SAT to 1-in-3-SAT, hence showing that 1-in-3-SAT and its counting version #1-in-3-SAT are NP-complete and #P-complete respectively.
Proposition 4.1 (Schaefer, schaefer78 ).
1-in-3-SAT is NP-complete.
Proof.
Proof by reduction from 3-CNF-SAT. For each clause (l1 ∨ l2 ∨ l3) create three 1-in-3-SAT clauses over l1, l2, l3 and a constant number of fresh variables. Hence, from an instance of 3-CNF-SAT with n variables and m clauses, we obtain a 1-in-3-SAT instance with 3m clauses and with a number of variables exceeding n by a term linear in m. ∎
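As a sanity check, the reduction step can be exercised on a single clause. The gadget below is one textbook variant, with hypothetical fresh-variable names a, b, c, d of our choosing, not necessarily the clauses used in the proof above: the 3-CNF clause (x ∨ y ∨ z) maps to the triples R(¬x, a, b), R(b, y, c), R(c, d, ¬z), and brute force confirms that the triples are simultaneously 1-in-3 satisfiable exactly when the clause is satisfiable. Note that this particular variant preserves satisfiability but not necessarily solution counts.

```python
from itertools import product

# A literal is a pair (variable, is_positive).
def exactly_one(triple, assign):
    """True iff exactly one literal of the triple is true under assign."""
    return sum(assign[v] if pos else 1 - assign[v] for v, pos in triple) == 1

def gadget(x, y, z):
    """Hypothetical gadget: (x | y | z) -> R(~x,a,b), R(b,y,c), R(c,d,~z)."""
    return [((x, False), ('a', True), ('b', True)),
            (('b', True), (y, True), ('c', True)),
            (('c', True), ('d', True), (z, False))]

def clause_sat_via_gadget(vx, vy, vz):
    """Does some extension to the fresh variables satisfy all three triples?"""
    triples = gadget('x', 'y', 'z')
    for a, b, c, d in product((0, 1), repeat=4):
        assign = {'x': vx, 'y': vy, 'z': vz, 'a': a, 'b': b, 'c': c, 'd': d}
        if all(exactly_one(t, assign) for t in triples):
            return True
    return False

# The gadget agrees with the disjunction on all 8 assignments to x, y, z.
for vx, vy, vz in product((0, 1), repeat=3):
    assert clause_sat_via_gadget(vx, vy, vz) == bool(vx or vy or vz)
```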
The following statement is given in garey02 . For the sake of completeness, we provide a proof by a parsimonious reduction from 1-in-3-SAT.
Proposition 4.2 (Garey and Johnson, garey02 ).
1-in-3-SAT+ is NP-complete.
Proof.
Construct an instance φ+ of 1-in-3-SAT+ from an instance φ of 1-in-3-SAT as follows. Add every clause of the 1-in-3-SAT instance with no negation to the 1-in-3-SAT+ instance unchanged.
For every clause containing one negation, add to the 1-in-3-SAT+ instance two clauses over the clause's variables and one fresh variable.
For a clause containing two negations we add two fresh variables and three clauses.
For a clause containing three negations we add three fresh variables and four clauses.
We obtain a 1-in-3-SAT+ formula with at most 3m more clauses and at most 3m more variables, for initial numbers of clauses and variables m and n respectively.
Our reduction is parsimonious, for it is verifiable by truth table that the number of satisfying assignments to each 1-in-3-SAT clause is the same as the number of satisfying assignments to the corresponding collection of 1-in-3-SAT+ clauses. ∎
Remark 4.2.
If an instance of 3-CNF-SAT with n variables and m clauses is reduced to an instance φ of 1-in-3-SAT, then our reduction bounds the numbers of variables and clauses of φ linearly in n and m.
We analyze the further reduction to the instance φ+ of 1-in-3-SAT+. Let C0, C1, C2, C3 be the collections of clauses in φ containing, respectively, no negation, one negation, two negations and three negations.
Our reduction adds at most |C1| + 2|C2| + 3|C3| fresh variables and at most the same number of extra clauses. Then the variables of φ+ outnumber its clauses.
5 Rank of a Formula
A rank function is used as a measure of "independence" for members of a certain set. The dual notion of nullity is defined as the complement of the rank, that is, the number of elements not accounted for by it.
Definition 5.1.
A rank function ρ on subsets of a finite set obeys the following:

1. ρ(A) ≤ |A|,

2. A ⊆ B implies ρ(A) ≤ ρ(B),

3. ρ(A ∪ B) + ρ(A ∩ B) ≤ ρ(A) + ρ(B).
Definition 5.2 (rank and nullity).
For a 1-in-3-SAT formula φ define the system of linear equations S(φ) as follows:

for any clause (l1, l2, l3) of φ, add to S(φ) the equation l1 + l2 + l3 = 1, where a positive literal x contributes the term x and a negated literal ¬x contributes the term 1 - x;
Define the rank and nullity of φ as rk(φ) = rank(S(φ)) and nl(φ) = n - rk(φ).
If the formula φ is clear from context, we also use the shorthands rk and nl.
Remark 5.1.
rk is a rank function with respect to sets of 1-in-3-SAT triples.
Lemma 5.1.
For any 1-in-3-SAT instance φ transformed into a linear system S(φ) we observe the following:

an equation l1 + l2 + l3 = 1 has a solution over {0, 1} if and only if exactly one of l1, l2, l3 is equal to 1 and the other two are equal to 0.
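The translation of Definition 5.2 can be illustrated directly. In the sketch below (function names are ours), a clause is mapped to a linear equation, with a positive literal contributing +x and a negated literal contributing 1 - x, the constant being moved to the right-hand side; brute force confirms the equation holds over {0, 1} exactly when the clause has exactly one true literal.

```python
from itertools import product

def clause_to_equation(clause):
    """Map a triple of literals to (coefficients, rhs): a positive literal x
    contributes +x, a negated one contributes 1 - x, so each negation moves
    a constant 1 to the right-hand side."""
    coeffs, rhs = {}, 1
    for var, pos in clause:
        if pos:
            coeffs[var] = 1
        else:
            coeffs[var] = -1
            rhs -= 1
    return coeffs, rhs

def satisfies(clause, assign):
    """Exactly one literal of the clause is true under assign."""
    return sum(assign[v] if pos else 1 - assign[v] for v, pos in clause) == 1

# R(x, ~y, z) becomes x - y + z = 0; both sides agree on all of {0,1}^3.
clause = [('x', True), ('y', False), ('z', True)]
coeffs, rhs = clause_to_equation(clause)
for bits in product((0, 1), repeat=3):
    assign = dict(zip('xyz', bits))
    lhs = sum(c * assign[v] for v, c in coeffs.items())
    assert (lhs == rhs) == satisfies(clause, assign)
```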
Proposition 5.1.
For any formula φ we have φ ∈ SAT if and only if S(φ) has at least one solution over {0, 1}.
Corollary 5.1.
A formula φ has as many satisfying assignments as the number of solutions of S(φ) over {0, 1}.
We define the binary integer programming problem with equality here and show briefly that 1-in-3-SAT is reducible to a "smaller" instance of this problem.
Definition 5.3 (integer programming with equality).
The 0/1-IP problem is defined as follows. Given a family of m finite tuples (ai1, …, aik), with each aij an integer, for some fixed k, and given a sequence b1, …, bm of integers, decide whether there exists a tuple x ∈ {0, 1}^k such that ai1 x1 + ⋯ + aik xk = bi for every i.
Remark 5.2.
0/1-IP is solvable in time O(2^k · mk), where m is the number of 0/1-IP tuples and k is the size of the tuples.
Proof.
The bound is obtained by applying an exhaustive search over {0, 1}^k, checking each candidate against all m constraints. ∎
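The exhaustive search behind the remark can be sketched in a few lines (the representation of tuples as coefficient rows is an assumption of ours):

```python
from itertools import product

def count_01ip(A, b):
    """Count x in {0,1}^k with A x = b by exhaustive search: O(2^k * m * k)
    for m rows of size k (the counting version; decision just tests > 0)."""
    k = len(A[0])
    return sum(
        all(sum(row[j] * x[j] for j in range(k)) == rhs
            for row, rhs in zip(A, b))
        for x in product((0, 1), repeat=k)
    )

# x1 + x2 + x3 = 1 has exactly the three unit solutions.
assert count_01ip([[1, 1, 1]], [1]) == 3
```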
Lemma 5.2.
Let φ ∈ Φ(n, m) be a 1-in-3-SAT formula; then rk(φ) ≤ m and nl(φ) ≥ n - m.
Proof.
Follows from the observation that rk is a rank function. ∎
Lemma 5.3.
Consider a 1-in-3-SAT formula φ ∈ Φ(n, m) and suppose rk(φ) = r and nl(φ) = n - r. The satisfiability of φ is decidable in time O(2^{n-r} · poly(n, m)).
Proof.
The result of performing Gauss-Jordan elimination on S(φ)
yields, after a suitable rearrangement of column vectors, the reduced echelon form (I_r | D | b), where I_r is the r x r identity matrix, D is an r x (n - r) matrix, and b is the result column.
Now consider the following structure, obtained from the given dependencies through ignoring the zero entries: writing x = (x1, …, xr) for the dependent variables and y = (y1, …, y(n-r)) for the independent ones, row i reads xi = bi - Di · y, where Di is the ith row of D.
This induces an instance of 0/1-IP which can be solved as follows: enumerate all y ∈ {0, 1}^{n-r}, and for each row i test whether y satisfies Di · y = bi (so that xi = 0) or Di · y = bi - 1 (so that xi = 1); if every row passes one of its two tests, increment a counter.
We note the length of the enumerated sequences is n - r, hence the brute-force procedure has to enumerate the 2^{n-r} members of {0, 1}^{n-r}. Furthermore, each such sequence is tested twice against each of the r constraints, resulting in the claimed time complexity of O(2^{n-r} · poly(n, m)).
To see the algorithm is correct, we give a proof that considers when the counter is incremented. Suppose for some row i the sequence y is a solution to neither Di · y = bi nor Di · y = bi - 1. In this case the counter is not incremented, and we claim y does not induce a solution to the 1-in-3-SAT formula φ. For in this case the induced value xi = bi - Di · y lies outside {0, 1}, hence (x, y) is not a 0/1 solution to the system S(φ) and, by Corollary 5.1, cannot correspond to a satisfying assignment of φ. In effect, the counter is not incremented as we have not seen an additional satisfying solution.
Now suppose for every row i the sequence y is a solution to either Di · y = bi or Di · y = bi - 1. In this case the counter is incremented, and we claim y does induce a solution to the 1-in-3-SAT formula φ.
For if y is a solution to the ith row's constraint Di · y = bi, then setting xi = 0 satisfies the constraint xi + Di · y = bi, giving the satisfying value for the variable corresponding to that row of the diagonal matrix.
Similarly, if y is a solution to the ith row's constraint Di · y = bi - 1, then setting xi = 1 satisfies the constraint xi + Di · y = bi. Hence every row of the system is satisfied by a 0/1 assignment, which by Corollary 5.1 yields a satisfying assignment of φ. ∎
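The counting procedure of the proof can be sketched end to end, assuming exact rational arithmetic for the elimination (all names are ours): Gauss-Jordan elimination over the rationals, followed by enumeration of the nullity-many free variables only, testing for each pivot row whether the induced dependent value lands in {0, 1}.

```python
from fractions import Fraction
from itertools import product

def count_via_elimination(rows):
    """Count 0/1 solutions of a linear system given as (coefficients, rhs)
    pairs: Gauss-Jordan over the rationals, then brute force over the
    nullity-many free columns only."""
    m, n = len(rows), len(rows[0][0])
    M = [[Fraction(c) for c in cs] + [Fraction(r)] for cs, r in rows]
    pivots = []                                   # (row, column) pairs
    r = 0
    for col in range(n):
        piv = next((i for i in range(r, m) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [v / M[r][col] for v in M[r]]      # scale the pivot to 1
        for i in range(m):
            if i != r and M[i][col] != 0:
                M[i] = [a - M[i][col] * p for a, p in zip(M[i], M[r])]
        pivots.append((r, col))
        r += 1
    if any(M[i][n] != 0 for i in range(r, m)):    # 0 = nonzero: inconsistent
        return 0
    pivot_cols = {c for _, c in pivots}
    free = [c for c in range(n) if c not in pivot_cols]
    count = 0
    for bits in product((0, 1), repeat=len(free)):
        y = dict(zip(free, bits))
        # each dependent value must land in {0, 1}: the "tested twice" check
        if all((M[i][n] - sum(M[i][c] * y[c] for c in free)) in (0, 1)
               for i, _ in pivots):
            count += 1
    return count

# The system of phi = {(x1,x2,x3), (x3,x4,x5)} has rank 2 and nullity 3,
# so only 2^3 candidates are enumerated instead of 2^5.
assert count_via_elimination([([1, 1, 1, 0, 0], 1), ([0, 0, 1, 1, 1], 1)]) == 5
```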
Corollary 5.2.
1-in-3-SAT ≤p 0/1-IP.
Proof.
By the preprocessing of the problem instance using Gaussian elimination, shown above, 1-in-3-SAT is reduced in polynomial time to 0/1-IP. ∎
Theorem 5.1.
1-in-3-SAT is solvable in time O(2^{n-r} · poly(n, m)) for a formula of rank r and nullity n - r.
Proof.
There are r equations to satisfy by any assignment, and there are n - r variables to search through exhaustively in order to solve the 0/1-IP problem, which in turn solves the 1-in-3-SAT problem. ∎
Corollary 5.3.
#1-in-3-SAT is solvable in time O(2^{2n/3} · poly(n, m)) for any instance φ ∈ Φ(n, m) with n/3 ≤ m ≤ n.
6 The Method of Substitution
The substitution algorithm is depicted below in Fig. 1. We give a brief textual explanation of the algorithm below.

Preprocessing phase

1. Let lo(c), mid(c) and hi(c) be the lowest, middle and highest labeled variables in clause c. These values are distinct.

2. Represent clause c in normal form as hi(c) = 1 - lo(c) - mid(c).

3. Sort the formula in ascending order of hi(c).

Substitution phase

1. Initialize i := m,

2. For each clause ci,

3. Initialize j := i - 1,

4. For each clause cj with j < i such that hi(ci) is found on the left-hand side of cj, or is found among the right-hand-side variables of cj, do

5. Perform the substitution cj[1 - lo(ci) - mid(ci) / hi(ci)], and normalize the result.

6. Decrement variable j. Continue at step 4.

7. Decrement variable i. Continue at step 2.
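A minimal sketch of the substitution phase, assuming each clause is already encoded as an equation (the data layout and names are ours, not the paper's): each step solves one equation for its highest-labeled variable and substitutes the resulting expression into every other equation containing it, exactly as Gauss-Jordan elimination would.

```python
from fractions import Fraction

def substitution(eqs, variables):
    """Sketch of the substitution phase: each clause is an equation given as
    ({var: coeff}, rhs).  Every step solves one equation for its
    highest-labeled variable and substitutes the result into all other
    equations containing that variable."""
    work = [({v: Fraction(c) for v, c in cs.items()}, Fraction(r))
            for cs, r in eqs]
    solved = {}                 # dependent var -> (expression, constant)
    for i, (cs, r) in enumerate(work):
        cs = {v: c for v, c in cs.items() if c != 0}
        if not cs:              # equation became redundant
            continue
        x = max(cs)             # highest-labeled variable of the clause
        coef = cs.pop(x)
        expr = {v: -c / coef for v, c in cs.items()}
        const = r / coef        # so that x = const + sum(expr[v] * v)
        solved[x] = (expr, const)
        for j, (cj, rj) in enumerate(work):
            if j != i and cj.get(x, 0) != 0:
                a = cj.pop(x)
                for v, c in expr.items():
                    cj[v] = cj.get(v, 0) + a * c
                work[j] = (cj, rj - a * const)
    independent = [v for v in variables if v not in solved]
    return solved, independent

# phi = {(x1,x2,x3), (x3,x4,x5)}: x3 and x5 become dependent, rank 2.
eqs = [({1: 1, 2: 1, 3: 1}, 1), ({3: 1, 4: 1, 5: 1}, 1)]
solved, indep = substitution(eqs, [1, 2, 3, 4, 5])
assert sorted(solved) == [3, 5] and indep == [1, 2, 4]
```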
We remark an essentially cubic halting time for the substitution algorithm, which intuitively corresponds to the cubic halting time of Gaussian elimination, an equivalent method.
Remark 6.1.
Substitution halts in time O(n^3) for any formula φ ∈ Φ(n, m).
Denote by Sub(φ), or by Sub when clear from context, the structure thus obtained; denote by rk(Sub) and nl(Sub) the rank and nullity thus induced; and denote by I(Sub) and D(Sub) the sets of independent and dependent variables generated through our process.
We remark that the operator Sub is idempotent.
Remark 6.2.
Sub(Sub(φ)) = Sub(φ).
Proof.
Each clause c is read, and each read clause is compared with every other clause c′, in search of a common variable x; if this variable is found, a replacement is performed on c′.
Suppose there exists a clause c on which Sub(Sub(φ)) and Sub(φ) differ.
Consider the case of a variable x occurring in c after the first application but not after the second. It cannot be the case that x was eliminated directly from c, since this would mean the procedure missed a mandatory substitution of x, which the second iteration picked up.
Therefore x was eliminated indirectly. In this case, c is the result of a chain of substitutions ending with the clause defining x. An induction on this chain shows the procedure missed a mandatory substitution of a variable, which the second iteration picked up, a contradiction.
Consider the case of a variable x occurring in c after the second application but not after the first. It cannot be the case that x was introduced directly into c, since this would mean the second iteration introduced a new variable in clause c as the result of a substitution which the first iteration missed.
Therefore x was introduced indirectly. In this case, c is the result of a chain of substitutions ending with a clause containing x. An induction on this chain of substitutions shows the second iteration of the procedure introduced a new variable in clause c as the result of a substitution which the first iteration missed, again a contradiction. ∎
As a consequence, any set Φ(n, m) of formulas is closed under substitution.
Remark 6.3.
Sub(Φ(n, m)) ⊆ Φ(n, m).
7 An example
Consider a 1-in-3-SAT formula φ given as a list of clauses.
The formula is first represented in tabular format, one row per clause in the normal form of Section 6, with columns recording lo(c), mid(c) and hi(c); the rows are sorted according to hi(c).
The substitution phase then operates on this tabular data structure, initialized from the encoded formula: each step rewrites a row in terms of lower-labeled variables and normalizes the result, yielding a partial result after each substitution.
Finally the tabular structure is rearranged, separating the dependent from the independent variables, and the result of the computation can be read off:
the rank and nullity of the formula are the numbers of dependent and independent variables, respectively.
A brute-force search on the set of independent variables then yields the desired result for the 1-in-3-SAT formula.
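As an illustration, the same pipeline can be cross-checked on a small hypothetical formula of our own choosing: three clauses over six variables give rank 3 and nullity 3, so the search space shrinks from 2^6 to 2^3, and plain enumeration confirms the count.

```python
from itertools import product

# Hypothetical example formula (our own): three clauses over x1..x6.
clauses = [(1, 2, 3), (1, 4, 5), (2, 4, 6)]

def count_exact(clauses, n):
    """Plain enumeration over all 2^n assignments, for cross-checking."""
    total = 0
    for bits in product((0, 1), repeat=n):
        if all(bits[a - 1] + bits[b - 1] + bits[c - 1] == 1
               for a, b, c in clauses):
            total += 1
    return total

# Each clause introduces a fresh highest variable (x3, x5, x6), so the three
# equations are independent: rank 3, nullity 3, and the preprocessed search
# space is 2^3 = 8 instead of 2^6 = 64.
assert count_exact(clauses, 6) == 4
```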
After the substitution process is finished, each of the clauses is expressed in terms of independent variables, variables which cannot themselves be expressed in terms of other variables. We denote by sj the number of variables occurring in constraint j as induced by the substitution method, excluding the defined variable hi(cj).
8 Algorithm Analysis
We maximize the number of substitutions performed at each step. Hence, at the first step we encounter two substitutions, at the second we encounter three, while at every subsequent step we must assume there exist two prior constraints in terms of which we can substitute, which indicates that the Fibonacci recurrence describes our process.
Remark 8.1.
The largest number of expansions determined by running substitution on the collection of clauses is governed by the Fibonacci recurrence sj = s(j-1) + s(j-2), with s1 = 2 and s2 = 3.
Definition 8.1 (Representation).
The size of a representation for a given instance φ of 1-in-3-SAT expressed by substitution as Sub(φ) is given by the formula size(Sub(φ)) = s1 + s2 + ⋯ + sm.
Remark 8.2.
The size of the resulting representation associated to formulas treated by Remark 8.1 converges asymptotically to O(((1 + √5)/2)^m).
Proof.
The bound is given by an analysis of the growth of the Fibonacci sequence. It is well known that the ratio of consecutive terms of the sequence converges to the golden ratio (1 + √5)/2 ≈ 1.618. ∎
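The growth claimed in Remarks 8.1 and 8.2 can be checked numerically, assuming the recurrence sj = s(j-1) + s(j-2) with the seed values 2 and 3 suggested above:

```python
# Assumed worst-case recurrence from the remarks above: s1 = 2, s2 = 3,
# s_j = s_{j-1} + s_{j-2}; consecutive ratios tend to the golden ratio.
def worst_case_sizes(m):
    sizes = [2, 3]
    while len(sizes) < m:
        sizes.append(sizes[-1] + sizes[-2])
    return sizes[:m]

sizes = worst_case_sizes(40)
golden = (1 + 5 ** 0.5) / 2
# After a few dozen terms the ratio is indistinguishable from the limit.
assert abs(sizes[-1] / sizes[-2] - golden) < 1e-9
```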
Remark 8.3.
Contrast the scenario in Remark 8.1 with the case in which there are no substitutions induced, i.e. sj = 2 for every j.
Remark 8.4.
The size of the resulting representation associated to formulas treated by Remark 8.3 is 2m.
Proof.
In this case we have n - m independent variables, for a value of size(Sub(φ)) of 2m. ∎
Theorem 8.1.
Any 1-in-3-SAT formula φ ∈ Φ(n, m) admits a representation with size at most nm for m ≤ n.
Remark 8.5.
The size of any representation is bounded above by n^2 for m ≤ n.
Proof.
m ≤ n implies sj ≤ n for every j, and therefore size(Sub(φ)) ≤ nm ≤ n^2.
∎
9 Adequacy Proof
Proposition 9.1.
Let φ be a 1-in-3-SAT formula and let Sub(φ) be the resulting structure obtained by performing substitution on φ. Then rk(Sub(φ)) = rk(φ) and nl(Sub(φ)) = nl(φ).
Proof.
It suffices to show that rk(Sub(φ)) ≥ rk(φ).
Suppose for a contradiction this is not the case. We have that rk(Sub(φ)) < rk(S(φ)).
That is, the dependent variables of the system of equations exceed in number the dependent variables obtained through our substitution algorithm.
We let d = rk(S(φ)) - rk(Sub(φ)). What this means is that there exist variables x1, …, xd such that xi ∈ D(S(φ)) \ D(Sub(φ)) for 1 ≤ i ≤ d.
Take any such variable in this list and perform another substitution so as to decrease d by one. The existence of the list hence contradicts the statement of Remark 6.2. ∎
10 Implications
Proposition 10.1 (Schroeppel and Shamir, schroeppel81 ).
#1-in-3-SAT+ can be solved in time O(2^{n/2}) and space O(2^{n/4}).
Proposition 10.2 (Schroeppel and Shamir, schroeppel81 ).
#0/1-IP can be solved in time O(2^{k/2}) and space O(2^{k/4}), for k the number of variables.
Corollary 10.1.
#1-in-3-SAT+ can be solved in time O(2^{n/3}) and space O(2^{n/6}), by applying Proposition 10.2 to the 0/1-IP kernel with at most 2n/3 variables.
Dell and van Melkebeek dell14 give a rigorous treatment of the concept of "sparsification". In their framework, an oracle communication protocol for a language L is a communication protocol between two players.
The first player is given the input x and is only allowed to run in time polynomial in the length of x. The second player is computationally unbounded, without initial access to x. At the end of the communication, the first player should be able to decide membership of x in L. The cost of the protocol is the length in bits of the communication from the first player to the second.
Therefore, if the first player is able to reduce, in polynomial time, the problem instance significantly, the cost of communicating the "kernel" to the second player would also decrease, hence providing us with a very natural formal account of the notion of sparsification.
Jansen and Pieterse in jansen16 state and give a procedure by which any instance of Exact Satisfiability with unbounded clause length can be reduced to an equivalent instance of the same problem with only O(n) clauses, for n the number of variables.
The concern regarding the number of clauses in 1-in-3-SAT can be addressed as we have done above. We observe that for any instance φ of 3-CNF-SAT, the chain of polynomial-time parsimonious reductions φ ≤p φ′ ≤p φ+, for φ′ and φ+ instances of 1-in-3-SAT and 1-in-3-SAT+ respectively, implies that the variables of φ′ and φ+ outnumber the clauses.
What is also claimed in jansen16 is that, assuming the polynomial hierarchy does not collapse, no polynomial-time algorithm can in general transform an instance of Exact Satisfiability of n variables to a significantly smaller equivalent instance, i.e. an instance encoded using O(n^{2-ε}) bits for any ε > 0.
We believe it is already transparent that, in fact, we have obtained a significantly smaller kernel for 1-in-3-SAT+ above, i.e. one transforming parsimoniously an instance of n variables to a "compressed" instance of 0/1-IP of at most 2n/3 variables.
Definition 10.1 (Constraint Satisfaction Problem).
A csp is a triple (X, D, C) where

X is a set of variables,

D is the discrete domain the variables may range over, and

C is a set of constraints.
Every constraint is of the form (t, R) where t is a tuple of variables of X and R is a relation on D. An evaluation of the variables is a function v : X → D. An evaluation v satisfies a constraint (t, R) if the tuple of values assigned to the elements of t by v belongs to the relation R.
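The definition can be made concrete in a few lines (the representation of relations as sets of allowed tuples is our choice); encoding 1-in-3-SAT+ as a csp uses a single exactly-one relation:

```python
from itertools import product

def satisfies(evaluation, constraint):
    """A constraint is (scope, relation); the relation is a set of tuples."""
    scope, relation = constraint
    return tuple(evaluation[v] for v in scope) in relation

def csp_solutions(X, D, C):
    """Enumerate all evaluations X -> D satisfying every constraint."""
    return [dict(zip(X, vals)) for vals in product(D, repeat=len(X))
            if all(satisfies(dict(zip(X, vals)), c) for c in C)]

# 1-in-3-SAT+ as a csp: domain {0, 1}, one exactly-one relation per triple.
R = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
X = ['x1', 'x2', 'x3', 'x4']
C = [(('x1', 'x2', 'x3'), R), (('x2', 'x3', 'x4'), R)]
assert len(csp_solutions(X, (0, 1), C)) == 3
```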
Remark 10.1.
The following are constraint satisfaction problems:

3-CNF-SAT

1-in-3-SAT

1-in-3-SAT+
In what follows we switch between notations and write a csp in a more general form, as a decision problem Π with instances x such that x encodes the triple (X, D, C) as a string over a finite alphabet.
Definition 10.2 (Kernelization).
Let Π, Π′ be two parameterized decision problems, i.e. Π, Π′ ⊆ Σ* × N for some finite alphabet Σ.
A kernelization for the problem Π parameterized by k is a polynomial-time reduction of an instance (x, k) to an instance (x′, k′) such that:

(x, k) ∈ Π if and only if (x′, k′) ∈ Π′,

|x′| ≤ f(k), and

k′ ≤ g(k), for some computable functions f and g.
Definition 10.3 (Encoding).
An encoding of a problem Π is a bijection enc from the instances of Π to strings over a finite alphabet, such that for any instance x the string enc(x) is computable in time polynomial in |x|.
Definition 10.4.
A nontrivial kernel for 3-CNF-SAT is a kernelization of this problem transforming any instance x with n variables to an instance x′ of an arbitrary NP-complete csp Π, such that x ∈ 3-CNF-SAT if and only if x′ ∈ Π, and with |enc(x′)| = O(n^{3-ε}) for an encoding enc of Π and some ε > 0.
Remark 10.2 (Dell and van Melkebeek dell14 ).
3-CNF-SAT admits a trivial kernel with Π = 3-CNF-SAT and |enc(x′)| = O(n^3).
Lemma 10.1 (Dell and van Melkebeek dell14 ).
If 3-CNF-SAT admits a nontrivial kernel, then coNP ⊆ NP/poly and the polynomial hierarchy collapses.
Definition 10.5.
A nontrivial kernel for 1-in-3-SAT+ is a kernelization of this problem transforming any instance x with n variables to an instance x′ of an arbitrary NP-complete csp Π, such that x ∈ 1-in-3-SAT+ if and only if x′ ∈ Π, and with |enc(x′)| = O(n^{2-ε}) for an encoding enc of Π and some ε > 0.
Remark 10.3 (Jansen and Pieterse jansen16 ).
1-in-3-SAT+ admits a kernel with O(n) clauses and |enc(x′)| = O(n^2).
The following statement is given in jansen16 . The authors elaborate on the results of dell14 to analyze combinatorial problems from the perspective of sparsification, and give several arguments that nontrivial kernels for such problems would entail a collapse of the Polynomial Hierarchy to its third level.
It is essential to note here that this line of reasoning was used by researchers studying sparsification with the intention of proving lower bounds on the existence of kernels, while the results presented by us are slightly more optimistic.
Lemma 10.2 (Jansen and Pieterse jansen16 ).
If 1-in-3-SAT+ admits a nontrivial kernel, then coNP ⊆ NP/poly and the polynomial hierarchy collapses.
Lemma 10.3.
If 1-in-3-SAT+ admits a nontrivial kernel, then 1-in-3-SAT admits a nontrivial kernel.
Proof.
Let φ be an instance of 1-in-3-SAT with n variables and at most n clauses. By Proposition 4.2 it follows that φ can be parsimoniously polynomial-time reduced to a 1-in-3-SAT+ formula φ+ with n′ = O(n) variables and O(n) clauses.
Assuming 1-in-3-SAT+ admits a nontrivial kernel, this implies 1-in-3-SAT admits a nontrivial kernel, and therefore, through Lemma 10.2, a collapse of the polynomial hierarchy would follow.
To spell this out, suppose we have a nontrivial kernel x′ for the problem 1-in-3-SAT+, with |enc(x′)| = O((n′)^{2-ε}) for some ε > 0. We observe n′ = O(n) using the reduction from 1-in-3-SAT, and therefore O((n′)^{2-ε}) = O(n^{2-ε}); we thus obtain via the reduction the existence of a nontrivial kernel for 1-in-3-SAT, that is, one with |enc(x′)| = O(n^{2-ε}). ∎
Essentially the following result is a restatement of Corollary 5.3.
Theorem 10.1.
1-in-3-SAT+ admits a nontrivial kernel.
Proof.
Follows from Lemma 5.3. The first player preprocesses the input in polynomial time using substitution, and passes the result to the second player, which makes use of its unbounded resources to provide a solution to this kernel.
It remains to show the cost of this computation is bounded nontrivially, i.e. by O(n^{2-ε}) for some ε > 0.
This requirement follows from Lemma 5.3. For the instance of 0/1-IP to which we reduce has at most 2n/3 variables and at most n/3 constraints.
We store the resulting instance of 0/1-IP in a matrix M with polynomially bounded entries, such that M(i, j) is the coefficient of variable j in constraint i, to which we add the result column.
From Remark 8.5 we obtain that the bit representation of this kernel is indeed O(n^{2-ε}) for some positive ε. ∎
Corollary 10.2.
coNP ⊆ NP/poly, and the polynomial hierarchy collapses to its third level.
11 Conclusion
We have shown the mechanism through which a 1-in-3-SAT instance can be transformed into a 0/1 integer programming (0/1-IP) instance with at most two-thirds the number of variables of the 1-in-3-SAT instance.
This was done by a straightforward preprocessing of the 1-in-3-SAT instance using the method of substitution.
We manage to count satisfying assignments of the 1-in-3-SAT instance through a type of brute-force search on the 0/1-IP instance.
The method we have presented, stated earlier in the shape of Gaussian elimination, gives interesting upper bounds on 1-in-3-SAT, and shows how instances become harder or easier to solve with variations in the clauses-to-variables ratio.
An essential observation here is that in this case the ratio cannot go below 1/3, up to uniqueness of clauses. This can be easily checked in polynomial time.
By reduction from 3-CNF-SAT, the restriction of 1-in-3-SAT to instances in which the number of clauses does not exceed the number of variables is also NP-complete.
Our contribution is in pointing out how the method of substitution together with a type of brute-force approach suffices to find, constructively, a nontrivial kernel for 1-in-3-SAT.
The most important question in Theoretical Computer Science remains open.
Acknowledgments
Foremost thanks are due to Igor Potapov for his support and benevolence shown towards this project.
Most of the ideas presented here have crystallized while the author was studying with Rod Downey at Victoria University of Wellington, in the New Zealand winter of 2010.
This work would have been much harder to write without the kind hospitality of Gernot Salzer at TU Wien in 2013. There I met and discussed with experts in the field such as Miki Hermann from École Polytechnique.
I was fortunate enough to attend at TU Wien the outstanding exposition in Computational Complexity delivered by Reinhard Pichler.
I am very much indebted to Noam Greenberg for supervising my Master of Science Dissertation in the year of 2012, one hundred years after the birth of Alan Turing.
I thank Asher Kach, Dan Turetzky and David Diamondstone for many useful thoughts on Computability, Complexity and Model Theory.
I have also found useful Dillon Mayhew’s insights in Combinatorics, and Cristian Calude’s research on Algorithmic Information Theory.
Exceptional logicians such as Rob Goldblatt, Max Cresswell and Ed Mares have also supervised various projects in which I was involved.
Western Australia is also in my thoughts, and I thank Mark Reynolds and Tim French for teaching me to think, and act under pressure.
Special acknowledgments are given to my colleague Reino Niskanen for useful comments and proof reading an initial compressed version of this manuscript.
Bucharest, June 2019
References

(1)
S. A. Cook, The complexity of theorem-proving procedures, in: Proceedings of the third annual ACM symposium on Theory of computing, ACM, 1971, pp. 151–158.
 (2) R. M. Karp, Reducibility among combinatorial problems, in: Complexity of computer computations, Springer, 1972, pp. 85–103.
 (3) L. A. Levin, Universal sequential search problems, Problemy Peredachi Informatsii 9 (3) (1973) 115–116.
 (4) T. J. Schaefer, The complexity of satisfiability problems, in: Proceedings of the tenth annual ACM symposium on Theory of computing, ACM, 1978, pp. 216–226.
 (5) S. Toda, PP is as hard as the polynomial-time hierarchy, SIAM Journal on Computing 20 (5) (1991) 865–877.
 (6) L. G. Valiant, V. V. Vazirani, NP is as easy as detecting unique solutions, in: Proceedings of the seventeenth annual ACM symposium on Theory of computing, ACM, 1985, pp. 458–463.
 (7) V. Dahllöf, P. Jonsson, R. Beigel, Algorithms for four variants of the exact satisfiability problem, Theoretical Computer Science 320 (23) (2004) 373–394.
 (8) A. Björklund, T. Husfeldt, Exact algorithms for exact satisfiability and number of perfect matchings, Algorithmica 52 (2) (2008) 226–249.
 (9) M. Soos, Enhanced Gaussian elimination in DPLL-based SAT solvers, in: POS@SAT, 2010, pp. 2–14.
 (10) M. Wahlström, Abusing the Tutte matrix: An algebraic instance compression for the k-set-cycle problem, arXiv preprint arXiv:1301.1517.
 (11) A. C. Giannopoulou, D. Lokshtanov, S. Saurabh, O. Suchy, Tree deletion set has a polynomial kernel but no opt^O(1) approximation, SIAM Journal on Discrete Mathematics 30 (3) (2016) 1371–1384.
 (12) H. Dell, D. Van Melkebeek, Satisfiability allows no nontrivial sparsification unless the polynomialtime hierarchy collapses, Journal of the ACM (JACM) 61 (4) (2014) 23.
 (13) B. M. Jansen, A. Pieterse, Optimal sparsification for some binary csps using lowdegree polynomials, arXiv preprint arXiv:1606.03233.
 (14) B. M. Jansen, A. Pieterse, Sparsification upper and lower bounds for graph problems and not-all-equal SAT, Algorithmica 79 (1) (2017) 3–28.
 (15) J. Ding, A. Sly, N. Sun, Proof of the satisfiability conjecture for large k, in: Proceedings of the forty-seventh annual ACM symposium on Theory of computing, ACM, 2015, pp. 59–68.
 (16) R. G. Downey, M. R. Fellows, Fundamentals of parameterized complexity, Vol. 201, Springer, 2016.
 (17) L. G. Valiant, The complexity of computing the permanent, Theoretical computer science 8 (2) (1979) 189–201.
 (18) M. R. Garey, D. S. Johnson, Computers and intractability, W.H. Freeman, New York, 1979.
 (19) R. Schroeppel, A. Shamir, A T = O(2^{n/2}), S = O(2^{n/4}) algorithm for certain NP-complete problems, SIAM Journal on Computing 10 (3) (1981) 456–464.