Eric Vigoda



  • Rapid Mixing Swendsen-Wang Sampler for Stochastic Partitioned Attractive Models

    The Gibbs sampler is a particularly popular Markov chain used for learning and inference problems in graphical models (GMs). These tasks are computationally intractable in general, and the Gibbs sampler often suffers from slow mixing. In this paper, we study the Swendsen-Wang dynamics, a more sophisticated Markov chain designed to overcome bottlenecks that impede the Gibbs sampler. We prove O(log n) mixing time for attractive binary pairwise GMs (i.e., ferromagnetic Ising models) on stochastic partitioned graphs with n vertices, under some mild conditions, including low-temperature regimes where the Gibbs sampler provably mixes exponentially slowly. Our experiments also confirm that the Swendsen-Wang sampler significantly outperforms the Gibbs sampler when used for learning the parameters of attractive GMs.

    04/06/2017 ∙ by Sejun Park, et al.

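    The abstract above contrasts the single-site Gibbs sampler with the Swendsen-Wang dynamics for ferromagnetic Ising models. As a reference point, here is a minimal Python sketch of one Swendsen-Wang update on an arbitrary graph; the representation (integer vertices, an adjacency dict, spins in {-1, +1}) and the function name are our own choices, not taken from the paper.

        import math
        import random
        from collections import defaultdict

        def swendsen_wang_step(adj, spins, beta):
            """One Swendsen-Wang update for the ferromagnetic Ising model.

            adj:   dict mapping each (integer) vertex to a list of its neighbours
            spins: dict mapping each vertex to +1 or -1, updated in place
            beta:  inverse temperature, beta >= 0
            """
            p = 1.0 - math.exp(-2.0 * beta)  # probability of opening an agreeing edge
            # 1. Percolation step: open each edge whose endpoints agree, independently with prob. p.
            open_adj = defaultdict(list)
            for u in adj:
                for v in adj[u]:
                    if u < v and spins[u] == spins[v] and random.random() < p:
                        open_adj[u].append(v)
                        open_adj[v].append(u)
            # 2. Resampling step: give every connected component of the opened subgraph a fresh uniform spin.
            visited = set()
            for s in adj:
                if s in visited:
                    continue
                new_spin = random.choice((+1, -1))
                visited.add(s)
                stack = [s]
                while stack:
                    u = stack.pop()
                    spins[u] = new_spin
                    for v in open_adj[u]:
                        if v not in visited:
                            visited.add(v)
                            stack.append(v)

        # Example: a few hundred updates on a 4-cycle at beta = 0.8.
        adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
        spins = {v: random.choice((+1, -1)) for v in adj}
        for _ in range(300):
            swendsen_wang_step(adj, spins, beta=0.8)
        print(spins)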

  • Structure Learning of H-colorings

    We study the structure learning problem for graph homomorphisms, commonly referred to as H-colorings, including the weighted case, which corresponds to spin systems with hard constraints. The learning problem is as follows: for a fixed (and known) constraint graph H with q colors and an unknown graph G=(V,E) with n vertices, given uniformly random H-colorings of G, how many samples are required to learn the edges of the unknown graph G? We give a characterization of H for which the problem is identifiable for every G, i.e., for which we can learn G with an infinite number of samples. We focus particular attention on the case of proper vertex q-colorings of graphs of maximum degree d, where intriguing connections to statistical physics phase transitions appear. We prove that when q > d the problem is identifiable and we can learn G in poly(d,q) × O(n^2 log n) time. In contrast, for soft-constraint systems such as the Ising model, the best possible running time is exponential in d. When q ≤ d we prove that the problem is not identifiable, and we cannot hope to learn G. When q < d - √d + Θ(1) we prove that even learning an equivalent graph (any graph with the same set of H-colorings) is computationally hard: the sample complexity is exponential in n in the worst case. For the q-colorings problem, the threshold for efficient learning seems to be connected to the uniqueness/non-uniqueness phase transition at q = d. We explore this connection for general H-colorings and prove that under a well-known condition in statistical physics, the Dobrushin uniqueness condition, we can learn G in poly(d,q) × O(n^2 log n) time.

    08/17/2017 ∙ by Antonio Blanca, et al.

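    To make the sample-based learning problem above concrete, here is a small Python sketch of the naive learner for the special case of proper q-colorings (H = K_q): a vertex pair is reported as an edge only if it never receives a repeated color across the samples. This is our own illustration of the problem setup under the assumption q > d (the identifiable regime), not the algorithm analyzed in the paper, and it says nothing about how many samples are required.

        from itertools import combinations

        def learn_graph_from_colorings(samples):
            """Naive structure learner for proper colorings.

            samples: non-empty list of colorings, each a dict mapping vertex -> color,
                     all over the same vertex set.
            Returns the set of vertex pairs that receive distinct colors in every
            sample; with enough uniform samples and q > d this is the edge set of G.
            """
            vertices = list(samples[0])
            edges = set()
            for u, v in combinations(vertices, 2):
                if all(col[u] != col[v] for col in samples):
                    edges.add(frozenset((u, v)))
            return edges

        # Example: two samples over vertices 0..3; only the pair {0, 2} ever repeats
        # a color, so every other pair is still (tentatively) reported as an edge.
        samples = [{0: 0, 1: 1, 2: 0, 3: 2}, {0: 1, 1: 2, 2: 1, 3: 0}]
        print(learn_graph_from_colorings(samples))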

  • On Counting Perfect Matchings in General Graphs

    Counting perfect matchings has played a central role in the theory of counting problems. The permanent, corresponding to bipartite graphs, was shown to be #P-complete to compute exactly by Valiant (1979), and a fully polynomial randomized approximation scheme (FPRAS) was presented by Jerrum, Sinclair, and Vigoda (2004) using a Markov chain Monte Carlo (MCMC) approach. However, it has remained an open question whether there exists an FPRAS for counting perfect matchings in general graphs. In fact, it was unresolved whether the same Markov chain defined by JSV is rapidly mixing in general. In this paper, we show that it is not. We prove torpid mixing for any weighting scheme on hole patterns in the JSV chain. As a first step toward overcoming this obstacle, we introduce a new algorithm for counting matchings based on the Gallai-Edmonds decomposition of a graph, and give an FPRAS for counting matchings in graphs that are sufficiently close to bipartite. In particular, we obtain a fixed-parameter tractable algorithm for counting matchings in general graphs, parameterized by the greatest "order" of a factor-critical subgraph.

    12/20/2017 ∙ by Daniel Stefankovic, et al.

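    The MCMC approach mentioned above samples matchings with a Markov chain. The sketch below is the classical add/delete/slide chain on the matchings of a graph (its proposals are symmetric, so its stationary distribution is uniform over matchings), included only as background for the kind of chain being analyzed; it is not the JSV chain with its weighting scheme, nor the new Gallai-Edmonds-based algorithm.

        import random

        def matching_chain_step(edges, matching):
            """One step of the add/delete/slide chain on matchings.

            edges:    list of edges (u, v)
            matching: set of edges forming a matching, updated in place
            """
            u, v = random.choice(edges)
            matched = {x for e in matching for x in e}
            if (u, v) in matching:
                matching.remove((u, v))                    # delete the chosen edge
            elif u not in matched and v not in matched:
                matching.add((u, v))                       # add the chosen edge
            elif (u in matched) != (v in matched):
                w = u if u in matched else v               # exactly one endpoint is matched
                blocking = next(e for e in matching if w in e)
                matching.remove(blocking)
                matching.add((u, v))                       # slide: swap the blocking edge out
            # if both endpoints are matched by other edges, do nothing

        # Example: run the chain on a 4-cycle starting from the empty matching.
        edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
        matching = set()
        for _ in range(1000):
            matching_chain_step(edges, matching)
        print(matching)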

  • Sampling in Uniqueness from the Potts and Random-Cluster Models on Random Regular Graphs

    We consider the problem of sampling from the Potts model on random regular graphs. It is conjectured that sampling is possible when the temperature of the model is in the uniqueness regime of the regular tree, but positive algorithmic results have been for the most part elusive. In this paper, for all integers q ≥ 3 and Δ ≥ 3, we develop algorithms that produce samples within error o(1) from the q-state Potts model on random Δ-regular graphs, whenever the temperature is in the uniqueness regime, for both the ferromagnetic and antiferromagnetic cases. The algorithm for the antiferromagnetic Potts model is based on iteratively adding the edges of the graph and resampling a bichromatic class that contains the endpoints of the newly added edge. Key to the algorithm is how to perform the resampling step efficiently, since bichromatic classes may induce linear-sized components. To this end, we exploit tree uniqueness to show that the average growth of bichromatic components is typically small, which allows us to use correlation-decay algorithms for the resampling step. While the precise uniqueness threshold on the tree is not known for general values of q and Δ in the antiferromagnetic case, our algorithm works throughout the uniqueness regime regardless of its value. In the case of the ferromagnetic Potts model, we simplify the algorithm significantly by utilising the random-cluster representation of the model. In particular, we show that a percolation-type algorithm succeeds in sampling from the random-cluster model with parameters p, q on random Δ-regular graphs for all values of q ≥ 1 and p < p_c(q,Δ), where p_c(q,Δ) corresponds to a uniqueness threshold for the model on the Δ-regular tree. When restricted to integer values of q, this yields a simplified algorithm for the ferromagnetic Potts model on random Δ-regular graphs.

    04/22/2018 ∙ by Antonio Blanca, et al.

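    The ferromagnetic algorithm above goes through the random-cluster representation of the Potts model. The sketch below shows the standard Edwards-Sokal step that turns a random-cluster configuration (a set of open edges) into a Potts configuration by giving each connected component an independent uniform color; the data representation and names are our own, and the percolation-type sampler that produces the random-cluster configuration itself is not shown.

        import random
        from collections import defaultdict

        def potts_from_random_cluster(vertices, open_edges, q):
            """Edwards-Sokal coupling: color each connected component of the
            open-edge subgraph with an independent uniform color in {0, ..., q-1}."""
            open_adj = defaultdict(list)
            for u, v in open_edges:
                open_adj[u].append(v)
                open_adj[v].append(u)
            coloring, seen = {}, set()
            for s in vertices:
                if s in seen:
                    continue
                color = random.randrange(q)      # one fresh color per component
                seen.add(s)
                stack = [s]
                while stack:
                    u = stack.pop()
                    coloring[u] = color
                    for w in open_adj[u]:
                        if w not in seen:
                            seen.add(w)
                            stack.append(w)
            return coloring

        # Example: the open edges {0,1} and {2,3} on 5 vertices give the components
        # {0,1}, {2,3}, {4}; each component receives a single uniform color.
        print(potts_from_random_cluster(range(5), [(0, 1), (2, 3)], q=3))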

  • Swendsen-Wang Dynamics for General Graphs in the Tree Uniqueness Region

    The Swendsen-Wang dynamics is a popular algorithm for sampling from the Gibbs distribution of the ferromagnetic Ising model on a graph G=(V,E). The dynamics is a "global" Markov chain which is conjectured to converge to equilibrium in O(|V|^{1/4}) steps for any graph G at any (inverse) temperature β. It was recently proved by Guo and Jerrum (2017) that the Swendsen-Wang dynamics has polynomial mixing time on any graph at all temperatures, yet there are few results providing o(|V|) upper bounds on its convergence time. We prove fast convergence of the Swendsen-Wang dynamics on general graphs in the tree uniqueness region of the ferromagnetic Ising model. In particular, when β < β_c(d), where β_c(d) denotes the uniqueness/non-uniqueness threshold on infinite d-regular trees, we prove that the relaxation time (i.e., the inverse spectral gap) of the Swendsen-Wang dynamics is Θ(1) on any graph of maximum degree d ≥ 3. Our proof utilizes a version of the Swendsen-Wang dynamics which only updates isolated vertices. We establish that this variant of the Swendsen-Wang dynamics has mixing time O(log |V|) and relaxation time Θ(1) on any graph of maximum degree d for all β < β_c(d). We believe that this Markov chain may be of independent interest, as it is a monotone Swendsen-Wang-type chain. As part of our proofs, we provide modest extensions of the technology of Mossel and Sly (2013) for analyzing mixing times and of the censoring result of Peres and Winkler (2013). Both of these results are for the Glauber dynamics, and we extend them here to general monotone Markov chains. This class of dynamics includes, for example, the heat-bath block dynamics, for which we obtain new tight mixing time bounds.

    06/12/2018 ∙ by Antonio Blanca, et al.

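    The proof sketched above uses a variant of the Swendsen-Wang dynamics that only updates isolated vertices. The code below is one plausible reading of such a variant for the ferromagnetic Ising model, following the same conventions as the Swendsen-Wang sketch in the first entry: perform the usual percolation step on agreeing edges, but assign fresh spins only to vertices left isolated by that step, keeping all other spins fixed. It is an illustration of the idea as we read it, not necessarily the exact chain analyzed in the paper.

        import math
        import random

        def isolated_vertex_sw_step(adj, spins, beta):
            """Swendsen-Wang-style update that resamples only isolated vertices.

            adj:   dict mapping each (integer) vertex to a list of its neighbours
            spins: dict mapping each vertex to +1 or -1, updated in place
            beta:  inverse temperature, beta >= 0
            """
            p = 1.0 - math.exp(-2.0 * beta)
            # Percolation step on agreeing edges, exactly as in standard Swendsen-Wang.
            has_open_edge = {v: False for v in adj}
            for u in adj:
                for v in adj[u]:
                    if u < v and spins[u] == spins[v] and random.random() < p:
                        has_open_edge[u] = True
                        has_open_edge[v] = True
            # Resampling step: only vertices with no incident open edge receive a fresh spin.
            for v in adj:
                if not has_open_edge[v]:
                    spins[v] = random.choice((+1, -1))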

  • Markov chains for the hard-core model via polymer models

    Jenssen, Keevash and Perkins give an FPTAS and an efficient sampling algorithm for the high-fugacity hard-core model on bounded-degree bipartite expander graphs. Their method is based on using the cluster expansion to obtain a zero-free region for a so-called polymer model partition function in the complex plane, and then approximating the polymer partition function using the Taylor series truncation method of Barvinok. We circumvent the zero-free analysis and the generalisation to complex fugacities, showing that the polymer partition function can be approximated using Glauber dynamics on polymers. The proof that the polymer Glauber dynamics mixes rapidly is simple, relying on the sizes of the disagreeing polymers as a distance function. We then use Markov chain comparison to show that the hard-core partition function can be approximated using spin Glauber dynamics restricted to the even- and odd-dominant portions of the state space. Our techniques also apply to the ferromagnetic Potts model.

    01/20/2019 ∙ by Zongchen Chen, et al.

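    For contrast with the polymer dynamics described above, here is the standard single-site (heat-bath) Glauber update for the hard-core model at fugacity lambda, in Python; the polymer Glauber dynamics and the restriction to the even- and odd-dominant parts of the state space used in the paper are not shown, and the names are our own.

        import random

        def hardcore_glauber_step(adj, occupied, lam):
            """One heat-bath Glauber update for the hard-core model.

            adj:      dict mapping each vertex to a list of its neighbours
            occupied: set of occupied vertices (an independent set), updated in place
            lam:      fugacity, lam > 0
            """
            v = random.choice(list(adj))
            occupied.discard(v)
            if all(u not in occupied for u in adj[v]):
                # No occupied neighbour: occupy v with probability lam / (1 + lam).
                if random.random() < lam / (1.0 + lam):
                    occupied.add(v)
            # If some neighbour is occupied, v must remain unoccupied.

        # Example: run the chain on a path 0-1-2 at fugacity 1.
        adj = {0: [1], 1: [0, 2], 2: [1]}
        occupied = set()
        for _ in range(1000):
            hardcore_glauber_step(adj, occupied, lam=1.0)
        print(occupied)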

  • Lower bounds for testing graphical models: colorings and antiferromagnetic Ising models

    We study the identity testing problem in the context of spin systems or undirected graphical models, where it takes the following form: given the parameter specification of the model M and a sampling oracle for the distribution μ_M̂ of an unknown model M̂, can we efficiently determine whether the two models M and M̂ are the same? We consider identity testing for both soft-constraint and hard-constraint systems. In particular, we prove hardness results in two prototypical cases, the Ising model and proper colorings, and explore whether identity testing is any easier than structure learning. For the ferromagnetic (attractive) Ising model, Daskalakis et al. (2018) presented a polynomial-time algorithm for identity testing. We prove hardness results in the antiferromagnetic (repulsive) setting in the same regime of parameters where structure learning is known to require a super-polynomial number of samples. In particular, for n-vertex graphs of maximum degree d, we prove that if |β| d = ω(log n) (where β is the inverse temperature parameter), then there is no polynomial-time identity testing algorithm unless RP = NP. We also establish computational lower bounds for a broader set of parameters under the (randomized) exponential time hypothesis. Our proofs utilize insights into the design of gadgets using random graphs in recent works concerning the hardness of approximate counting, such as Sly (2010). In the hard-constraint setting, we present hardness results for identity testing for proper colorings. Our results are based on the presumed hardness of #BIS, the problem of (approximately) counting independent sets in bipartite graphs. In particular, we prove that identity testing is hard in the same range of parameters where structure learning is known to be hard.

    01/22/2019 ∙ by Ivona Bezakova, et al.

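    To fix ideas about the identity testing problem in the last abstract, the sketch below spells out a minimal interface: a tester receives the parameter specification of the known Ising model M (here a graph plus an inverse temperature) and a sampling oracle for the unknown model, and must decide whether they coincide. The brute-force correlation comparison shown only illustrates the interface on tiny graphs and is not a sound tester; the paper's point is precisely that no efficient tester exists in the stated parameter regimes, and all names here are our own.

        import itertools
        import math
        from typing import Callable, Dict, List, Tuple

        Spin = Dict[int, int]                 # assignment: vertex -> +1 or -1
        SamplingOracle = Callable[[], Spin]   # one call = one sample from the unknown model

        def exact_edge_correlations(adj: Dict[int, List[int]], beta: float) -> Dict[Tuple[int, int], float]:
            """Brute-force E[sigma_u * sigma_v] over edges of the known Ising model M
            (enumerates all 2^n configurations, so only usable on tiny graphs)."""
            vertices = sorted(adj)
            edges = [(u, v) for u in adj for v in adj[u] if u < v]
            totals, Z = {e: 0.0 for e in edges}, 0.0
            for assignment in itertools.product((-1, +1), repeat=len(vertices)):
                sigma = dict(zip(vertices, assignment))
                weight = math.exp(beta * sum(sigma[u] * sigma[v] for u, v in edges))
                Z += weight
                for u, v in edges:
                    totals[(u, v)] += weight * sigma[u] * sigma[v]
            return {e: totals[e] / Z for e in edges}

        def naive_identity_test(adj, beta, oracle: SamplingOracle,
                                num_samples: int = 1000, tol: float = 0.1) -> bool:
            """Compare exact edge correlations of M against empirical ones estimated
            from oracle samples of the unknown model (illustrative, not a sound test)."""
            exact = exact_edge_correlations(adj, beta)
            samples = [oracle() for _ in range(num_samples)]
            empirical = {(u, v): sum(s[u] * s[v] for s in samples) / num_samples
                         for (u, v) in exact}
            return all(abs(exact[e] - empirical[e]) <= tol for e in exact)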