The structure of low-complexity Gibbs measures on product spaces

10/16/2018
by Tim Austin, et al.

Let K_1, ..., K_n be bounded, complete, separable metric spaces. Let f: ∏_i K_i → R be a bounded and continuous potential function, and let μ ∝ e^f be the associated Gibbs distribution. At each point x ∈ ∏_i K_i one can define a `discrete gradient' ∇_x f by comparing the values of f at all points which differ from x in at most one coordinate. In the case ∏_i K_i = {-1,1}^n ⊂ R^n, the discrete gradient ∇_x f is naturally identified with a vector in R^n. This paper shows that a `low-complexity' assumption on ∇f implies that μ can be approximated by a mixture of other measures, relatively few in number, and most of them close in a natural transportation distance to product measures. This also yields an approximation to the partition function of f in terms of product measures, along the lines of Chatterjee and Dembo's theory of `nonlinear large deviations'. An important precedent for this work is a result of Eldan in the case ∏_i K_i = {-1,1}^n. Eldan's assumption is that the discrete gradients ∇_x f all lie in a subset of R^n that has small Gaussian width. His proof is based on the careful construction of a diffusion in R^n which starts at the origin and ends with the desired distribution on the subset {-1,1}^n. Here our assumption is a more naive covering-number bound on the set of gradients {∇_x f : x ∈ ∏_i K_i}, and our proof relies only on basic inequalities of information theory. As a result, it is shorter, and applies to Gibbs measures on arbitrary product spaces.
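For concreteness, on the hypercube one common convention (not spelled out in the abstract, so this is only an illustrative reading) identifies the discrete gradient with the vector of one-coordinate differences: for x ∈ {-1,1}^n,

  (∇_x f)_i = ( f(x^{i→+1}) − f(x^{i→−1}) ) / 2,   i = 1, ..., n,

where x^{i→±1} denotes x with its i-th coordinate set to ±1. Under this reading, the low-complexity hypothesis becomes a covering-number bound on the image set {∇_x f : x ∈ {-1,1}^n} ⊂ R^n, the analogue of Eldan's small-Gaussian-width assumption.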
