Relating counting complexity to non-uniform probability measures

11/24/2017
by   Eleni Bakali, et al.

A standard method for designing randomized algorithms that approximately count the solutions of a problem in #P is to construct a rapidly mixing Markov chain converging to the uniform distribution over the set of solutions. This construction is not always an easy task, and it is conjectured that it is not always possible. We want to investigate other possibilities for using Markov chains in relation to counting, and whether we can relate algorithmic counting to other, non-uniform, probability distributions over the set we want to count. In this paper we present a family of probability distributions over the set of solutions of a problem in TotP, and show how they relate to counting; counting is equivalent to computing their normalizing factors. We analyse the complexity of sampling, of computing the normalizing factor, and of computing the size of the support of these distributions. The latter is also equivalent to counting. We also show how the above tasks relate to each other, and to other problems in complexity theory as well. In short, we prove that sampling and approximating the normalizing factor is easy. We do this by constructing a family of rapidly mixing Markov chains for which these distributions are stationary. At the same time, we show that exactly computing the normalizing factor is TotP-hard. However, the reduction proving the latter is not approximation preserving, which conforms with the fact that TotP-hard problems are inapproximable if NP ≠ RP. The problem we consider is Size-of-Subtree, a TotP-complete problem under parsimonious reductions. Therefore the results presented here extend to any problem in TotP. TotP is the Karp-closure of the self-reducible problems in #P having decision version in P.


1 Introduction

The set of all self-reducible counting problems in #P, having decision version in P, is contained in a complexity class called TotP PZ06. E.g. #DNF, #IS, and the Permanent belong to TotP. TotP is a proper subclass of #P unless P=NP PZ06. Regarding approximability, TotP admits an FPRAS if and only if NP=RP (DFJ02 for one direction, and theorem 2 for the other).

We are particularly interested in understanding whether NP=RP, through the lens of counting complexity. In other words, we are interested in whether there exist counting problems in TotP that are hard to approximate, and if so, what the reason for this difficulty is.

More specifically in this paper, the motivation derives from a theorem JS89 asserting that approximate counting is equivalent to uniform sampling. Therefore counting is most often performed via uniform sampling. Sampling in turn is usually performed by designing a rapidly mixing Markov chain, having as stationary distribution the uniform over the set of solutions to the problem at hand. This is a special case of the Markov Chain Monte Carlo method (MCMC), applied to counting problems.

In order to study the possibility of approximating the problems in TotP, one can focus on any TotP-complete problem under parsimonious reductions (i.e. reductions that preserve the value of the function, and thus also preserve approximations). Recently we found some such complete problems BCPPZ17, and one of them is the Size-of-Subtree problem, also known as the problem of estimating the size of a backtracking tree K74; Stockmeyer85a. This problem asks for an estimate of the size of a subtree S of the perfect binary tree of height n, given in some succinct (polynomial in n) representation.
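To make the setting concrete, here is a hypothetical succinct representation together with the naive exhaustive count that approximate counting aims to beat; the particular predicate is invented for this sketch, and any downward-closed, polynomial-time predicate would do.

```python
# Hypothetical succinct representation: the subtree S of the perfect binary
# tree of height n is described by a membership predicate on root-to-node
# paths (0 = left child, 1 = right child; the empty tuple is the root).
def in_S(path):
    # Illustrative predicate: keep nodes whose path has no two consecutive 1s
    # (downward-closed: extending a path cannot remove an existing "11").
    return all(not (a == 1 and b == 1) for a, b in zip(path, path[1:]))

def size_of_subtree(n, pred):
    """Naive exact count by exhaustive search: exponential in n in the
    worst case; the baseline that approximate counting tries to avoid."""
    def count(path):
        if len(path) > n or not pred(path):
            return 0
        return 1 + count(path + (0,)) + count(path + (1,))
    return count(())

print(size_of_subtree(2, in_S))  # 6: of the 7 nodes, only the path (1, 1) is cut
```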

We first observe that uniform sampling over the nodes of a tree can be performed by a simple random walk on the tree, but its mixing time is quadratic in the size of the tree, and thus in the worst case exponential in n. This holds unconditionally, i.e. independently of whether NP = RP. However, it might be possible to perform uniform sampling differently (open question).

A different direction, which we do follow in this paper, is to explore whether it is possible to associate counting in TotP to other probability measures over the set of solutions we want to count (other than the uniform). In particular, we want to investigate whether counting can be related to computational tasks that concern these probability measures, such as sampling, estimating the normalizing factor, and estimating the support. For example, it was natural to wonder whether there exists a rapidly mixing Markov chain over the nodes of a tree, such that its stationary distribution can somehow be related to exactly or approximately computing the size of the tree.

In this paper, in short, it is shown that there indeed exists a family of probability distributions related to counting in TotP, in the sense that counting in TotP is equivalent to computing the normalizing factors of these distributions. We analyse the complexity of the above mentioned three tasks. We also show how these three tasks relate to each other and to other problems in complexity theory. We discuss the results in detail later.

Before proceeding to the presentation of the results and proofs, we would like to provide an indication that it might be easier and more fruitful to study problems in TotP, rather than in #P, and in particular to try to use Markov chains for them, and study their stationary distributions.

Take for example #SAT. From the study of random SAT it is known that, for formulas considered hard, the satisfying assignments are widely scattered in the space of all assignments A09 ; AR09 ; ACR10 . That is, the solutions form clusters s.t. it is not only hard to find even one solution, but even if you are given one solution in some cluster, it is hard to find another one in a different cluster. The solutions do not seem to relate to each other in any algorithmically tractable way. On the contrary, for the TotP-complete version of #SAT (see BCPPZ17 for details), the situation is completely different. The solutions of an input formula form an easy neighbourhood structure, in particular a tree of polynomial height, s.t. from any solution it is easy to move to its neighbouring solutions. This property of connectedness seems to be a main difference between problems in TotP and #P. In fact it generalizes to any problem in TotP.

From an algorithmic point of view, connectedness is important, because it allows us to easily construct irreducible Markov chains. Moreover, from the hardness point of view, the observation of scattered solutions can be a reasonable explanation for the failure of many algorithmic approaches for #SAT, but not for its TotP-complete version.

Of course #P admits an FPRAS if and only if TotP admits an FPRAS (as we can see from the discussion in the beginning of this section); however, at first glance it might be easier to design an approximation algorithm for a TotP-complete problem than for a #P-complete problem, if such an algorithm exists. If on the other hand NP ≠ RP, and thus an FPRAS is impossible for both #P and TotP, new insights and explanations are needed that apply to TotP, not only to #P.

2 Main Results

In this paper a family of probability distributions defined on the set of nodes of finite binary trees is presented, and the complexity of sampling, computing the normalizing factor, and computing the size of the support is studied. Time complexity is considered with respect to the height of the corresponding tree.

The family of distributions is defined as follows.

Definition 1.

Let T be a binary tree of height n and let V be the set of nodes of T. For all u ∈ V, π_T(u) = 2^{−depth(u)}/Z_T, where depth(u) is the depth of node u (the root has depth 0) and Z_T is the normalizing factor of π_T, that is, a constant s.t. Σ_{u∈V} π_T(u) = 1.

The following are shown.

Theorem 1.
  1. For every distribution in this family, there is a Markov chain with polynomial mixing time, having this distribution as stationary.

  2. For every distribution in this family, sampling with respect to this distribution is possible in randomized polynomial time, using the above Markov Chain.

  3. Computing the normalizing factor of any distribution in this family

    1. is TotP-hard under Turing reductions,

    2. approximation is possible with FPRAS, using sampling,

    3. exact computation is impossible deterministically (respectively probabilistically) if NP ≠ P (respectively NP ≠ RP).

  4. Computing the size of the support of any distribution in this family

    1. is TotP-hard under parsimonious reductions,

    2. reduces to exactly computing the normalizing factors,

    3. additive error approximation (see def. 4) is possible in randomized polynomial time (note that the size of the support is in general exponential in n),

    4. exact computation is impossible deterministically (respectively probabilistically) if NP ≠ P (respectively NP ≠ RP),

    5. multiplicative polynomial-factor approximation is impossible deterministically (respectively probabilistically) iff NP ≠ P (respectively NP ≠ RP),

    6. additive error approximation can also be achieved in randomized polynomial time, by reducing it to approximately computing normalizing factors for any distribution in the family.

2.1 Main ideas and techniques

A simple random walk on a perfect binary tree mixes in exponential time w.r.t. the height of the tree. An intuitive reason for this is that from any internal node, the probability of going a level down is double the probability of going a level up (since it has two children, but one parent). The idea is to construct a Markov chain s.t. the probability of going a level up equals the probability of going a level down. This Markov chain turns out to mix rapidly on the perfect binary tree, and, as we prove, this also generalises to an arbitrary binary tree (not necessarily full or complete).

Of course this Markov chain does not converge to the uniform distribution over the set of nodes V. However, the normalizing factor of its stationary distribution contains enough information to compute |V|. More precisely, if one considers the pruned subtrees S_0, …, S_n, where each S_i contains all nodes of S up to depth i, then the corresponding normalizing factors Z_0, …, Z_n can be used to iteratively count the number of nodes at each level i.

Our techniques come from the analysis of Markov chains. We prove polynomial mixing time by bounding a quantity called the conductance.

2.2 Proofs

The counting problem considered here is the Size-of-Subtree.

Definition 2.

Let T_n be the perfect binary tree of height n. Let S be a subtree of height n containing the root of T_n, given in succinct representation, e.g. by a polynomial-time computable predicate R s.t. R(u) = 1 iff u ∈ V(S), where V(S) denotes the set of vertices of S. The counting problem Size-of-Subtree is to compute the size |V(S)| of S.

This problem is TotP-complete under parsimonious reductions BCPPZ17, so the results extend to any problem in TotP. For an arbitrary problem #A in TotP, there is a tree s.t. its nodes are in one-to-one correspondence with the solutions of #A. We will not get into that here; for more details see BCPPZ17.

In order to prove theorem 1, we will first prove some propositions.

We define a family of Markov chains M_S, each having as states the nodes of a binary tree S.

Definition 3.

Let S be a subtree of the perfect binary tree T_n of height n, containing the root of T_n. We define the Markov chain M_S over the nodes of S, with the following transition probabilities:
P_S(u,v) = 1/2 if v is the parent of u,
P_S(u,v) = 1/4 if v is a child of u,
P_S(u,v) = 0 for every other v ≠ u, and
P_S(u,u) = 1 − Σ_{v≠u} P_S(u,v).
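The definition can be checked numerically. The sketch below builds a small explicit tree and verifies by power iteration that the stationary distribution is proportional to 2^{−depth}; the concrete values used (1/2 to the father, 1/4 to each child, a self-loop for the remainder) are one choice consistent with the up/down balancing idea, assumed rather than quoted from the paper.

```python
# A small explicit subtree: node -> children, plus parents and depths.
children = {0: [1, 2], 1: [3], 2: [], 3: []}
parent = {1: 0, 2: 0, 3: 1}
depth = {0: 0, 1: 1, 2: 1, 3: 2}
nodes = sorted(children)

# Transition matrix: 1/2 up, 1/4 per child, self-loop absorbs the rest.
P = {u: {v: 0.0 for v in nodes} for u in nodes}
for u in nodes:
    if u in parent:
        P[u][parent[u]] = 0.5
    for c in children[u]:
        P[u][c] = 0.25
    P[u][u] = 1.0 - sum(P[u][v] for v in nodes if v != u)

pi = {u: 1.0 / len(nodes) for u in nodes}
for _ in range(5000):  # power iteration converges to the stationary dist
    pi = {v: sum(pi[u] * P[u][v] for u in nodes) for v in nodes}

Z = sum(2.0 ** -depth[u] for u in nodes)  # normalizing factor
for u in nodes:
    assert abs(pi[u] - 2.0 ** -depth[u] / Z) < 1e-9
```

The balancing is exactly detailed balance for π(u) ∝ 2^{−depth(u)}: an edge between a node at depth d and its child carries the same probability flow in both directions.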

Proposition 1.

The stationary distribution of M_S is π_S, as defined in definition 1.

Proof.

It is easy to check that π_S satisfies the detailed balance condition π_S(u)P_S(u,v) = π_S(v)P_S(v,u) on every edge of S, and any distribution satisfying detailed balance is stationary. ∎

Note: For simplicity of notation, from now on we will assume that S is fixed and omit it from the subscripts in π_S, Z_S and M_S.

Now we will prove that M is rapidly mixing, i.e. it mixes in time polynomial in the height n of the tree S. We will use the following lemma from JS89.

Let M be a Markov chain over a finite state space Ω with transition probabilities P(u,v). Let p_x^{(t)} be the distribution of M at time t when starting from state x. Let π be the stationary distribution, and let τ_x(ε) be the mixing time when starting from state x. An ergodic Markov chain is called time reversible if π(u)P(u,v) = π(v)P(v,u) for all u, v. Let G be the underlying graph of the chain, for which we have an edge {u,v} with weight Q(u,v) = π(u)P(u,v) for each pair with P(u,v) > 0. A Markov chain is called lazy if P(u,u) ≥ 1/2 for all u. In JS89 the conductance of a time reversible Markov chain is defined as follows: Φ = min_A Σ_{u∈A, v∉A} Q(u,v)/π(A), where the minimum is taken over all A ⊆ Ω s.t. 0 < π(A) ≤ 1/2.

Lemma 1.

JS89 For any lazy, time reversible Markov chain, τ_x(ε) ≤ (2/Φ²)(ln π(x)^{−1} + ln ε^{−1}).

Proposition 2.

The mixing time of M, when starting from the root, is polynomial in the height n of the tree S.

Proof.

First of all, we will consider the lazy version of the Markov chain, i.e. in every step, with probability 1/2 we do nothing, and with probability 1/2 we follow the rules of definition 3. The mixing time of M is bounded by the mixing time of its lazy version, and the stationary distribution is the same. The Markov chain is time reversible, and the underlying graph is the tree S with edge weights Q(u,v) = π(v)P(v,u)/2, where u is the father of v and P(v,u) is the probability of moving from v to u in the non-lazy chain.

By lemma 2 below, π(root) = 1/Z ≥ 1/(n+1), so the quantity ln π(root)^{−1} appearing in lemma 1 is O(log n).

Now it suffices to show that 1/Φ is polynomial in n.

Let V be the set of the nodes of S, i.e. the state space of the Markov chain M. We will consider all possible A ⊆ V with 0 < π(A) ≤ 1/2, and bound the quantity Σ_{u∈A, v∉A} Q(u,v)/π(A) from below.

If A is connected and does not contain the root of S, then it is a subtree of S, with root, say, u, and π(u) = 2^{−d}/Z for some d. The edge from u to its father is cut by A, so we have Σ_{x∈A, y∉A} Q(x,y) ≥ Q(u, father(u)) = π(u)/4, since the lazy chain moves from u to its father with probability 1/4.

Now let B be the perfect binary tree with root u and height the same as A. We have π(A) ≤ Σ_{v∈B} 2^{−depth(v)}/Z ≤ (n+1)π(u),

where the last inequality comes if we sum over the levels of the tree B: the at most 2^j nodes at depth d+j contribute at most 2^j · 2^{−(d+j)}/Z = π(u) per level. So it holds Σ_{x∈A, y∉A} Q(x,y)/π(A) ≥ 1/(4(n+1)).

If A is the union of two subtrees of S, not containing the root of S, and the root of the first is an ancestor of the second's root, then the same arguments hold, where now we take as u the root of the first subtree.

If A is the union of k subtrees not containing the root of S, such that no subtree's root is an ancestor of another's root, then we can prove the same bound as follows. Let A_1, …, A_k be the subtrees, and let d_1, …, d_k be the respective exponents in the probabilities of their roots in the stationary distribution, i.e. the root of A_j has probability 2^{−d_j}/Z. Then as before

Σ_{x∈A, y∉A} Q(x,y) ≥ Σ_{j=1}^{k} 2^{−d_j}/(4Z)

and

π(A) ≤ (n+1) Σ_{j=1}^{k} 2^{−d_j}/Z,

thus Σ_{x∈A, y∉A} Q(x,y)/π(A) ≥ 1/(4(n+1)).

If A is a subtree of S containing the root of S, then the complement of A, i.e. A^c = V ∖ A, is a union of subtrees of the previous form. So if we let d_1, …, d_k be as before, for the roots of the subtrees of A^c, then

Σ_{x∈A, y∉A} Q(x,y) ≥ Σ_{j=1}^{k} 2^{−d_j}/(4Z) and π(A^c) ≤ (n+1) Σ_{j=1}^{k} 2^{−d_j}/Z,

and since from hypothesis π(A) ≤ 1/2, we have π(A) ≤ π(A^c),

thus the same bound holds again.

Finally, similar arguments imply the same bound when A is an arbitrary subset of V, i.e. an arbitrary union of subtrees of S.

In total we have Φ ≥ 1/(4(n+1)), so by lemma 1 the mixing time starting from the root is O(n²(log n + log ε^{−1})). ∎

Note that this result implies mixing time O(n² log n) for constant ε. This agrees with the intuition that on the perfect binary tree, the mixing time should be about the mixing time of a simple random walk over the levels of the tree, i.e. over a chain of length n, which is Θ(n²). The bound is looser only by a logarithmic factor.

The following lemma proves two properties of this Markov chain, needed for the proofs that will follow.

Lemma 2.

Let S be a binary tree of height n, and let Z be the normalizing factor of the stationary distribution of the above Markov chain. It holds 1 ≤ Z ≤ n+1, and π(root) = 1/Z ≥ 1/(n+1).

Proof.

Let n_i be the number of nodes at depth i. Then

Z = Σ_{i=0}^{n} n_i 2^{−i} ≥ n_0 = 1,

which is maximized when the n_i's are maximized, i.e. when the tree is the perfect binary tree, in which case n_i = 2^i and Z = n+1. This also implies that for the root r of S it holds π(r) = 2^0/Z = 1/Z ≥ 1/(n+1). ∎
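The bounds of the lemma can be checked directly from the level-count identity Z = Σ_i n_i 2^{−i} (our reading of the proof); the sketch below evaluates it for two extreme trees.

```python
def Z_from_level_counts(counts):
    """Normalizing factor Z = sum_i n_i * 2^{-i}, where counts[i] is the
    number n_i of nodes at depth i."""
    return sum(n * 2.0 ** -i for i, n in enumerate(counts))

n = 5
perfect = [2 ** i for i in range(n + 1)]   # perfect binary tree: n_i = 2^i
path_tree = [1] * (n + 1)                  # a bare root-to-leaf path
assert Z_from_level_counts(perfect) == n + 1       # upper bound is attained
assert 1 <= Z_from_level_counts(path_tree) <= n + 1
```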

Now we will reduce the computation of the size of S to the computation of the normalizing factors of the above probability distributions.

Proposition 3.

Let S be a binary tree of height n, let S_i be the subtree of S that contains all nodes up to depth i, and let Z_i be the corresponding normalizing factors, defined as above. Then |S| = Σ_{i=0}^{n} 2^i (Z_i − Z_{i−1}), where we set Z_{−1} = 0.

Proof.

For 0 ≤ i ≤ n, let n_i be the number of nodes at depth i. So |S| = Σ_{i=0}^{n} n_i.

Obviously, if S_i is not empty,

Z_i = Σ_{j=0}^{i} n_j 2^{−j}. (1)

We will prove that

n_i = 2^i (Z_i − Z_{i−1}), (2)

so then |S| = Σ_{i=0}^{n} n_i = Σ_{i=0}^{n} 2^i (Z_i − Z_{i−1}).

We will prove claim (2) by induction.

For i = 0 we have n_0 = 1 = 2^0 (Z_0 − Z_{−1}), since Z_0 = n_0 2^0 = 1 and Z_{−1} = 0.

Suppose claim (2) holds for i − 1. We will prove it holds for i: subtracting (1) for i − 1 from (1) for i, we get Z_i − Z_{i−1} = n_i 2^{−i}, and rearranging yields claim (2). ∎
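The reduction from counting to normalizing factors can be sketched in a few lines; the recursion n_i = 2^i (Z_i − Z_{i−1}) used below is our reading of claim (2), not the paper's verbatim notation.

```python
def size_from_normalizers(Zs):
    """Recover |S| from the normalizing factors Z_0, ..., Z_n of the pruned
    trees S_0, ..., S_n, via n_i = 2^i (Z_i - Z_{i-1}) with Z_{-1} = 0."""
    total, prev = 0, 0.0
    for i, Z in enumerate(Zs):
        n_i = round(2 ** i * (Z - prev))  # number of nodes at depth i
        total += n_i
        prev = Z
    return total

# Sanity check on the perfect binary tree of height 2: Z_i = i + 1, |S| = 7.
assert size_from_normalizers([1.0, 2.0, 3.0]) == 7
```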

Now we give an FPRAS for the computation of the normalizing factor Z, using the previously defined Markov chain M.

Proposition 4.

For any binary tree S of height n, we can estimate Z within a factor (1 ± ε) for any ε > 0, with probability 1 − δ for any δ > 0, in time polynomial in n, 1/ε and log(1/δ).

Proof.

Let S be a binary tree of height n. We can estimate Z as follows.

As we saw, π(root) = 1/Z, and we observe that this is always at least 1/(n+1) (with equality when S is the perfect binary tree). So we can estimate π(root) within a factor (1 ± ε) for any ε > 0, by sampling N nodes of S according to π and taking, as estimate, the fraction p̂ = (1/N) Σ_{i=1}^{N} X_i, where X_i = 1 if the i-th sample node was the root, else X_i = 0.

It is known by standard variance analysis arguments that we need N = O(n/ε²) samples to get Pr[(1−ε)π(root) ≤ p̂ ≤ (1+ε)π(root)] ≥ 3/4.

We can boost this probability up to 1 − δ for any δ > 0, by repeating the above sampling procedure O(log(1/δ)) times, and taking as the final estimate the median of the estimates computed each time.

(Proofs of the above arguments are elementary in courses on probabilistic algorithms or statistics; see e.g. the unbiased estimator theorem and the median trick in Snotes for detailed proofs.)

The random sampling according to π can be performed by running the Markov chain M defined earlier on the nodes of S. Observe that the deviation from the stationary distribution can be made negligible and absorbed into ε, with only a polynomial increase in the running time of the Markov chain.

Finally, the estimate for Z is Ẑ = 1/p̂, and whenever p̂ is within a factor (1 ± ε) of π(root), Ẑ is within a factor (1 ± 2ε) of Z (for ε ≤ 1/2). ∎
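The estimator, including the median trick, can be sketched as follows. For the sketch we sample from π directly (the paper's samples would come from the rapidly mixing Markov chain), and the sample sizes are illustrative choices.

```python
import random
import statistics

def estimate_Z(depths, samples=20000, repeats=9, seed=0):
    """Estimate Z from the root frequency: pi(root) = 1/Z, so Z ~ 1/p_hat.
    The median of several independent estimates boosts the success
    probability (the 'median trick')."""
    rng = random.Random(seed)
    weights = [2.0 ** -d for d in depths]  # pi(u) proportional to 2^{-depth}
    estimates = []
    for _ in range(repeats):
        draws = rng.choices(range(len(depths)), weights=weights, k=samples)
        p_hat = sum(1 for i in draws if depths[i] == 0) / samples
        estimates.append(1.0 / p_hat)
    return statistics.median(estimates)

# Perfect binary tree of height 3: true Z = 4, pi(root) = 1/4.
depths = [d for d in range(4) for _ in range(2 ** d)]
assert abs(estimate_Z(depths) - 4.0) < 0.3
```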

Finally, we show how the above propositions yield a probabilistic additive approximation to the problem Size-of-Subtree, although it could also be obtained by a simple random sampling process that chooses nodes of the perfect binary tree of height n uniformly at random, taking as estimate of the size of S the proportion of those samples that belong to S. This is an application of the general method of Goldreich08, chapter 6.2.2. The significance of our alternative method is related to the CAPE problem (def. 4), and we discuss it in section 3.
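The simple baseline just mentioned can be sketched directly; the membership predicate `pred` is a hypothetical stand-in for the succinct representation of S.

```python
import random

def additive_estimate(n, pred, samples=50000, seed=0):
    """Additive approximation of |S|/|T_n| by uniform sampling over the
    perfect binary tree of height n: draw a uniform heap-style node index,
    decode it to a 0/1 root-to-node path, and return the hit fraction."""
    rng = random.Random(seed)
    total = 2 ** (n + 1) - 1                 # nodes of the perfect tree
    hits = 0
    for _ in range(samples):
        idx = rng.randrange(1, total + 1)    # heap index: 1 is the root
        path = tuple(int(b) for b in bin(idx)[3:])  # drop the '0b1' prefix
        hits += bool(pred(path))
    return hits / samples

# With the trivial predicate (S = T_n), the fraction is exactly 1.
assert additive_estimate(3, lambda p: True) == 1.0
```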

Definition 4.

We call an additive approximation to a probability p a number p̂ s.t. |p̂ − p| ≤ ε, for some ε > 0. In the case of Size-of-Subtree the quantity under consideration is |S|/|T_n|, the size of S divided by the number of nodes of the perfect binary tree T_n. In the case of the Circuit Acceptance Probability Estimation problem (CAPE) Will13, the quantity under consideration is the number of accepting inputs divided by 2^m, where m is the number of input gates of the given circuit.

Proposition 5.

For all ε > 0 we can get an estimate Ŷ of p = |S|/|T_n| in time polynomial in n and 1/ε s.t. Pr[|p − Ŷ| ≤ ε] ≥ 3/4.

Proof.

Let ε > 0 and ε' = ε/(8(n+1)).

So according to proposition 4 we have, in time polynomial in n and 1/ε, estimates Ẑ_0, …, Ẑ_n s.t., with probability at least 3/4, simultaneously for all i,

(1 − ε')Z_i ≤ Ẑ_i ≤ (1 + ε')Z_i. (3)

We will use proposition 3. Let |S| = Σ_{i=0}^{n} 2^i (Z_i − Z_{i−1}), and take as estimate Ŷ = (1/|T_n|) Σ_{i=0}^{n} 2^i (Ẑ_i − Ẑ_{i−1}).

From (3) we have |Ẑ_i − Z_i| ≤ ε' Z_i, and similarly for Ẑ_{i−1}.

Thus

|Ŷ − |S|/|T_n|| ≤ (1/|T_n|) Σ_{i=0}^{n} 2^i ε' (Z_i + Z_{i−1}),

and since |T_n| = 2^{n+1} − 1 = Σ_{i=0}^{n} 2^i, we have

And since from lemma 2 the maximum of each Z_i is n+1, we have |Ŷ − |S|/|T_n|| ≤ 2(n+1)ε' ≤ ε. ∎

Corollary 1.

Let ε, δ > 0. For all S we can get an estimate Ŷ of |S|/|T_n| in time polynomial in n, 1/ε and log(1/δ) s.t. Pr[||S|/|T_n| − Ŷ| ≤ ε] ≥ 1 − δ.

Proof of theorem 1

Proof.

1. Follows from propositions 1 and 2.

2. Follows from theorem 1.1.

3a. Follows from proposition 3, combined with the fact that Size-of-Subtree is TotP-complete BCPPZ17 .

3b. Follows from proposition 4.

3c. #IS ∈ TotP. If NP ≠ P (respectively NP ≠ RP) then #IS does not admit an FPTAS (respectively an FPRAS) SinclairNotesMC. Thus, from 3a of theorem 1, the same holds for the computation of the normalizing factors for the family.

4a. Follows from the TotP-completeness of Size-of-Subtree under parsimonious reductions BCPPZ17. From definition 1, a positive probability is given to every node of the corresponding input tree S, so the size of the support equals the size of the tree.

4d. By same arguments as for 3c.

4e. By the same arguments as for 3c, for the one direction. For the other direction, if NP=P then #P admits an FPTAS (AB09, ch. 17.3.2), and if NP=RP then #P admits an FPRAS (theorem 2).

4c and 4f. Follow from corollary 1.

4b. Follows from proposition 3. ∎

3 Discussion

We have studied some relationships between counting complexity and non-uniform probability distributions. We also studied the complexity of some computational tasks related to such distributions. Similar relationships had not been studied before, with the exception of some complexity results for individual problems that do not generalize to a whole class (see the last paragraph of section 4). Our results generalize to all problems in TotP.

We have considered three computational tasks related to any probability distribution: sampling, computing the normalizing factor, and computing the size of the support. For the uniform distribution, these three tasks are equivalent JS89 . However, for a general distribution, it is not only unknown whether these tasks are solvable in polynomial time; it is even unclear whether these three tasks are equivalent.

For the family of distributions we studied here (definition 1), it turns out that the three tasks are not all equivalent, unless NP=RP. First of all, counting coincides with computing the size of the support (fact 4a). Then we showed that sampling is possible in polynomial time, and that sampling yields an FPRAS for the normalizing factors. So sampling and an FPRAS for the normalizing factor are always equivalent, since both are possible in polynomial time. Also, the existence of an FPRAS for counting implies the existence of an FPRAS for the normalizing factor, since the latter always exists. For the opposite direction of the last fact, our results, combined with the fact that an FPRAS for TotP is equivalent to NP=RP, imply that:

Corollary 2.

(FPRAS for the normalizing factors ⇒ FPRAS for counting) iff (NP=RP).

We also showed that exact counting reduces to exactly computing the normalizing factor (fact 4b), but not under approximation preserving reductions. The previous arguments imply that such an approximation preserving reduction, between the two tasks, exists if and only if NP=RP.

We now turn to another issue. Since the Size-of-Subtree problem is TotP-complete under parsimonious reductions, our results generalize to any problem in TotP. However, if P ≠ NP, we can't derive such a generalization for #P. As we mentioned in the introduction, it is not even clear how to construct a Markov chain whose underlying graph connects the set of solutions of a #P-complete problem. Besides, we can't even decide if a solution exists. Our results demonstrate the essence of two properties of TotP: easy decision, and connectedness of the set of solutions.

Finally, note that additive error approximation for any problem in #P can be achieved in a simple way (Goldreich08 , chapter 6.2.2). Thus the same method works for TotP, too. The last fact 4f provides an alternative way for achieving additive error approximation for problems in TotP. This alternative method is restricted to TotP, and does not generalize to #P.

A positive side of this restriction is relevant to derandomization issues. It is known that derandomizing the general simple method is as difficult as proving circuit lower bounds Will13 . However we don’t know a similar relationship between circuit lower bounds and deterministic additive approximation, restricted to problems in TotP. Since our method does not generalize to #P, it might be easier to derandomize it. We discuss this in more detail in the ”further research” section.

3.1 Further research

Several results of this paper point to new research directions, towards the study of important open problems in complexity theory.

It is interesting to investigate the following: (a) probabilistically exactly computing the normalizing factor, (b) an approximation preserving reduction from the problem of computing the size of the support to the problem of computing the normalizing factor, (c) computing the size of the support for this family of distributions with an FPRAS in some completely different way. A solution to any of them would imply NP=RP. A negative proof for (b) or (c) would imply NP ≠ RP.

Note that the algorithms and proof arguments presented here do not take into account the fact that the tree S is given in succinct representation. It might be easier to show unconditional negative results for the above open questions if we allow S to be an arbitrary binary tree. However, in order to derive a proof of NP ≠ RP, the arguments should apply to the family of distributions corresponding to trees with succinct representations.

Another open problem is how to derandomize the additive error approximation algorithms for the size of the support, in subexponential time. This would yield a deterministic solution of the CAPE problem (see def. 4), within additive approximation, for families of circuits for which counting the number of accepting inputs is in TotP (we will call this TotP-CAPE).

Note that the best (exact, and additive error) deterministic algorithm known for CAPE, on an arbitrary circuit, is by exhaustive search. Derandomizing it faster than exhaustive search, i.e. even in time 2^{δm} for some δ < 1, where m is the number of input gates, would yield NEXP ⊄ P/poly Will13. The latter is a long-standing conjecture.

We don’t know similar relationships between circuit lower bounds, and TotP-CAPE. This is another open problem. Nevertheless, solving deterministically TotP-CAPE in subexponential time, would be interesting on its own.

A final open problem is whether we can achieve derandomization of the same task in polynomial time. Such a result would also imply a solution to the CAPE problem in deterministic polynomial time for depth-two circuits (i.e. DNFs and CNFs). The best deterministic algorithm known until now is that of GMR13. (For more on this version of CAPE, see the survey Williams14, p. 13, and LV96 for an older result.)

4 Related work

TotP is defined in KPSZ01 , some of its properties and relationships to other classes are studied in PZ06 ; BGPT17 , and completeness is studied in BCPPZ17 .

Regarding the Size-of-Subtree problem: in K74 Knuth provides a probabilistic algorithm that is practically useful, but has an exponential worst-case error. Modifications and extensions of Knuth's algorithm have been presented and experimentally tested in Purdom78; Chen92; Kilby06, without significant improvements for worst-case instances. There are also many heuristics and experimental results on the problem restricted to special backtracking algorithms or special instances; see e.g. Belov17 for more references. Surprisingly, there exist FPRAS's for random models of the problem Furer04; Vaisman17. In Ambainis17 quantum algorithms for the problem are studied. In Stockmeyer85a Stockmeyer provided unconditional lower bounds for the problem under a model of computation which is different from the Turing machine, namely a variant of the (non-uniform) decision tree model.

The relationship between approximate counting and uniform sampling has been studied in JS89 .

There are numerous papers regarding algorithmic and hardness results for individual problems in #P and TotP, e.g. Valiant79 ; JS96perm ; KLM89 ; DFJ02 ; Wei06 . However, apart from the backtracking-tree problem, other TotP-complete problems have not been studied algorithmically yet.

Some non-uniform probability measures have already been studied in other areas of computer science, where problems concern the computation of a weighted sum over the set of solutions to a combinatorial problem, e.g. computing the partition function of the hard-core model and the Potts model from statistical physics. There are Markov chains associated with these problems that converge to non-uniform distributions in general, e.g. Glauber dynamics, Gibbs sampling Gibbs, the Metropolis-Hastings algorithm Metropolis; Hastings, etc. However, for the special cases where the weights are in {0,1}, the problems correspond to conventional counting problems in #P, and the associated stationary distributions are, again, uniform. The literature on these areas is enormous, e.g. BST10; BKZZ13; BW02; So0; WM0; W82.

5 Conclusions

We presented some non-uniform probability measures that can be related to counting in TotP. We showed that both computing the size of their supports, and computing their normalizing factors are equivalent to counting.

For these probability measures we proved that the tasks of sampling, approximately computing the normalizing factor with an FPRAS, and approximating the size of the support within an additive error, can be performed in randomized polynomial time.

We also showed that exact computation of the normalizing factor, and multiplicative error approximation of the size of the support, are hard problems if NP ≠ RP.

Such relationships between counting complexity and non-uniform probability measures had not been studied before.

Our results apply to the whole complexity class TotP. Similar results are not possible for #P, if P ≠ NP. Our algorithmic results demonstrate the importance of two main properties of TotP: easy decision and connectedness of the set of solutions.

Our results also suggest new research directions towards the study of other open problems in complexity theory.

Appendix

We don’t know if the following is folklore, but since we haven’t seen it stated explicitly anywhere, we give a proof sketch for the sake of completeness of this paper.

Theorem 2.

If NP=RP then all problems in #P admit an FPRAS.

Proof.

(sketch) In Stockmeyer85a Stockmeyer has proven that an FPRAS with access to an NP oracle exists for any problem in #P. If NP=RP then such an oracle can be simulated in BPP. Finally, it is easy to see that an FPRAS with access to a BPP oracle can be replaced by another FPRAS that simulates the oracle calls itself. ∎

Acknowledgements

I want to thank Stathis Zachos, Dimitris Fotakis, Aris Pagourtzis, Manolis Zampetakis and Gerasimos Palaiopanos for their useful comments and writing assistance.

References

  • (1) Aris Pagourtzis, Stathis Zachos: The Complexity of Counting Functions with Easy Decision Version. MFCS 2006: 741-752
  • (2) Martin E. Dyer, Alan M. Frieze, Mark Jerrum: On Counting Independent Sets in Sparse Graphs. SIAM J. Comput. 31(5): 1527-1541 (2002)
  • (3) Alistair Sinclair, Mark Jerrum: Approximate Counting, Uniform Generation and Rapidly Mixing Markov Chains. Inf. Comput. 82(1): 93-133 (1989)
  • (4) Eleni Bakali, Aggeliki Chalki, Aris Pagourtzis, Petros Pantavos, Stathis Zachos: Completeness Results for Counting Problems with Easy Decision. CIAC 2017: 55-66
  • (5) Donald E. Knuth. 1974. Estimating the Efficiency of Backtrack Programs. Technical Report. Stanford University, Stanford, CA, USA.
  • (6) Larry J. Stockmeyer: On Approximation Algorithms for #P. SIAM J. Comput. 14(4): 849-861 (1985)
  • (7) Dimitris Achlioptas: Random Satisfiability. Handbook of Satisfiability 2009: 245-270
  • (8) Dimitris Achlioptas, Federico Ricci-Tersenghi: Random Formulas Have Frozen Variables. SIAM J. Comput. 39(1): 260-280 (2009)
  • (9) Dimitris Achlioptas, Amin Coja-Oghlan, Federico Ricci-Tersenghi: On the solution-space geometry of random constraint satisfaction problems. Random Struct. Algorithms 38(3): 251-268 (2011)
  • (10) Alistair Sinclair, CS271 Randomness and Computation, lecture notes, Fall 2011, https://people.eecs.berkeley.edu/~sinclair/cs271/n10.pdf
  • (11) Oded Goldreich: Computational complexity - a conceptual perspective. Cambridge University Press 2008, ISBN 978-0-521-88473-0, pp. I-XXIV, 1-606
  • (12) Ryan Williams: Improving Exhaustive Search Implies Superpolynomial Lower Bounds. SIAM J. Comput. 42(3): 1218-1244 (2013)
  • (13) Alistair Sinclair, CS294 Markov Chain Monte Carlo: Foundations and Applications, lecture notes, Fall 2009, https://people.eecs.berkeley.edu/~sinclair/cs294/f09.html
  • (14) Sanjeev Arora, Boaz Barak: Computational Complexity - A Modern Approach. Cambridge University Press 2009, ISBN 978-0-521-42426-4, pp. I-XXIV, 1-579
  • (15) Parikshit Gopalan, Raghu Meka, Omer Reingold: DNF sparsification and a faster deterministic counting algorithm. Computational Complexity 22(2): 275-310 (2013)
  • (16) Ryan Williams: Algorithms for Circuits and Circuits for Algorithms. IEEE Conference on Computational Complexity 2014: 248-261
  • (17) Michael Luby and Boban Velickovic. On deterministic approximation of DNF. Algorithmica, 16(4/5):415–433 (1996)
  • (18) Aggelos Kiayias, Aris Pagourtzis, Kiron Sharma, Stathis Zachos: Acceptor-Definable Counting Classes. Panhellenic Conference on Informatics 2001: 453-463
  • (19) Evangelos Bampas, Andreas-Nikolas Göbel, Aris Pagourtzis, Aris Tentes: On the connection between interval size functions and path counting. Computational Complexity 26(2): 421-467 (2017)
  • (20) Paul Walton Purdom Jr.: Tree Size by Partial Backtracking. SIAM J. Comput. 7(4): 481-491 (1978)
  • (21) Pang C. Chen: Heuristic Sampling: A Method for Predicting the Performance of Tree Searching Programs. SIAM J. Comput. 21(2): 295-315 (1992)
  • (22) Philip Kilby, John K. Slaney, Sylvie Thiébaux, Toby Walsh: Estimating Search Tree Size. AAAI 2006: 1014-1019
  • (23) Gleb Belov, Samuel Esler, Dylan Fernando, Pierre Le Bodic, George L. Nemhauser: Estimating the size of search trees by sampling with domain knowledge. IJCAI 2017: 473-479
  • (24) Martin Fürer, Shiva Prasad Kasiviswanathan: An Almost Linear Time Approximation Algorithm for the Permanent of a Random (0-1) Matrix. FSTTCS 2004: 263-274
  • (25) Vaisman, R., Kroese, D.P.: Stochastic Enumeration Method for Counting Trees. Methodol. Comput. Appl. Probab. 19: 31-73 (2017)
  • (26) Andris Ambainis, Martins Kokainis: Quantum algorithm for tree size estimation, with applications to backtracking and 2-player games. STOC 2017: 989-1002
  • (27) Leslie G. Valiant: The Complexity of Computing the Permanent. Theor. Comput. Sci. 8: 189-201 (1979)
  • (28) M. Jerrum and A. Sinclair: The Markov chain Monte Carlo method: an approach to approximate counting and integration. In Approximation Algorithms for NP-hard Problems (Dorit Hochbaum, ed.), PWS, pp. 482–520 (1996)
  • (29) Richard M. Karp, Michael Luby, Neal Madras: Monte-Carlo Approximation Algorithms for Enumeration Problems. J. Algorithms 10(3): 429-448 (1989)
  • (30) Dror Weitz: Counting independent sets up to the tree threshold. STOC 2006: 140-149
  • (31) Stuart Geman, Donald Geman: Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images. IEEE Trans. Pattern Anal. Mach. Intell. 6(6): 721-741 (1984)
  • (32) Metropolis, N., Rosenbluth, A., Rosenbluth, M., Teller, A., and Teller, E.: Equation of State Calculations by Fast Computing Machines. J. Chem. Phys., 21(6): 1087–1092 (1953)
  • (33) Hastings, W.: Monte Carlo sampling methods using Markov chains and their application. Biometrika, 57: 97–109 (1970)
  • (34) Nayantara Bhatnagar, Allan Sly, Prasad Tetali: Reconstruction Threshold for the Hardcore Model. APPROX-RANDOM 2010: 434-447
  • (35) Jean Barbier, Florent Krzakala, Lenka Zdeborova and Pan Zhang : The hard-core model on random graphs revisited. J. Phys.: Conf. Ser. 473 : 012021 (2013)
  • (36) G. Brightwell and P. Winkler: Hard constraints and the Bethe lattice: adventures at the interface of combinatorics and statistical physics. In Proc. Int'l. Congress of Mathematicians, volume III: 605–624 (2002)
  • (37) A. D. Sokal. Chromatic polynomials, Potts models and all that. Physica A: Statistical Mechanics and its Applications, 279(1):324–332, (2000)
  • (38) D. J. Welsh and C. Merino: The Potts model and the Tutte polynomial. Journal of Mathematical Physics, 41(3):1127–1152 (2000)
  • (39) F.-Y. Wu: The Potts model. Reviews of modern physics, 54(1):235 (1982)