A Probability Density Theory for Spin-Glass Systems

01/03/2020 ∙ by Gavin S. Hartnett, et al. ∙ Google ∙ RAND Corporation

Spin-glass systems are universal models for representing many-body phenomena in statistical physics and computer science. High-quality solutions of NP-hard combinatorial optimization problems can be encoded into low-energy states of spin-glass systems. In general, evaluating the relevant physical and computational properties of such models is difficult due to critical slowing down near a phase transition. Ideally, one could use recent advances in deep learning for characterizing the low-energy properties of these complex systems. Unfortunately, many of the most promising machine learning approaches are only valid for distributions over continuous variables and thus cannot be directly applied to discrete spin-glass models. To this end, we develop a continuous probability density theory for spin-glass systems with arbitrary dimensions, interactions, and local fields. We show how our formulation geometrically encodes key physical and computational properties of the spin-glass in an instance-wise fashion without the need for quenched disorder averaging. We show that our approach goes beyond mean-field theory and identify a transition from a convex to a non-convex energy landscape as the temperature is lowered past a critical temperature. We apply our formalism to a number of spin-glass models including the Sherrington-Kirkpatrick (SK) model, spins on random Erdős-Rényi graphs, and random restricted Boltzmann machines.




1 Introduction

Spin-glasses are a general class of models which can be used to study complexity in physics, chemistry, biology, computer science, and social sciences Stein and Newman (2013). They also provide a theoretical and phenomenological framework to analyze hard real-world problems in discrete optimization and probabilistic inference over graphical models Mezard and Montanari (2009). At the heart of such complex phenomena is the emergent behavior that can occur when disordered systems contain many particles, variables, or agents which exert two-body or higher order interactions. Today, there is a fundamental gap in our knowledge of how such non-trivial correlations emerge at low temperatures. For these systems there is a sudden increase in correlations, occurring simultaneously at various scales (up to the overall system size), as the temperature is reduced below a critical threshold. After half a century of intense study, it is not yet fully understood why, or under what conditions, a distribution of small to large clusters of variables can become rigid or frozen below a critical point in a hierarchical or multi-scale fashion in the absence of any obvious symmetries. Additionally, the relaxation time-scales grow exponentially large as a function of the correlation length-scales, which in the worst-case prevents the system from achieving equilibrium in finite time. This phenomenon is at the heart of the hardness of combinatorial optimization problems, such as random K-SAT, near computational phase transitions Moore and Mertens (2011).

Our main motivation is to explore the critical and low temperature properties of spin-glass systems that encode practical computational problems, which are typically at intermediate scales with respect to number of variables, range of physical interactions, and spatial dimensions. The spin-glass formulation of such problems often involves thousands or even millions of variables, which precludes any hope of successfully applying brute-force or ab initio methods. A given instance of these problems typically contains considerable structure, with an underlying graph that could have a power-law distribution over the degree of connectivity with a fat tail for variables with many long-range physical interactions. Such realistic spin-glasses are also often in an intermediate zone with respect to their fractal dimensions and their physical and computational properties, and may be thought of as lying between the two well-studied limiting cases of the short-range Edwards-Anderson model Mezard and Montanari (2009) and the infinite-range Sherrington-Kirkpatrick (SK) model Sherrington and Kirkpatrick (1975). Consequently, spin-glass representations of most interesting and relevant problems reside in an uncharted territory that is analytically and computationally intractable. Although the disorders for each instance can be considered fixed or "quenched" for the relevant time-scales, the self-averaging assumption in statistical physics nonetheless becomes inadequate. Mean-field techniques which can otherwise be successfully applied to toy model problems such as random energy models Mezard and Montanari (2009), or p-spin models Castellani and Cavagna (2005), become invalid as the fluctuations over the mean values are typically large. Moreover, Renormalization Group (RG) techniques Nishimori and Ortiz (2010)

are ineffective as these approaches rely on strong symmetry assumptions, which are difficult to set up for a particular problem class, not well-defined in the presence of strong inhomogeneities, and usually involve crude and irreversible coarse-graining of the microscopic degrees of freedom.

Recent advances in deep learning open up the possibility that these non-linear and non-perturbative emergent properties of spin-glass systems could be machine-learned. Unfortunately, many of the most promising machine learning approaches, such as gradient-based iterative optimization, are only valid for distributions over continuous variables, and thus they either cannot be directly applied to discrete spin-glass systems, or they can be applied only at the cost of ignoring the fact that the algorithms were developed specifically for distributions over continuous variables. Despite this, there has been some progress in using neural networks to discover new phases of matter or to accelerate Monte Carlo sampling

Albergo et al. (2019); Huang and Wang (2017); Liu et al. (2017); Shen et al. (2018); Li and Wang (2018). Here, we are interested in eventually applying recent techniques in deep generative models, such as normalizing flows Rezende and Mohamed (2015), to discrete spin-glass distributions described by the following family of Hamiltonians

H(s) = −(1/2) Σ_{ij} J_ij s_i s_j − Σ_i h_i s_i ,    (1)

where s_i ∈ {−1, +1} is an Ising spin, J_ij is the coupling matrix, h_i is the external magnetic field, and i is an index which runs from 1 to N. The equilibrium properties of these systems at an inverse temperature β are governed by the Boltzmann distribution p(s) = e^{−βH(s)}/Z, where Z is the usual discrete partition function involving the sum over all possible spin configurations:

Z = Σ_{{s}} e^{−βH(s)} .    (2)
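For intuition, the Hamiltonian and the discrete partition function can be evaluated by brute force for small systems. The following is a minimal numpy sketch (the couplings and field here are illustrative examples, not values from the paper):

```python
import numpy as np
from itertools import product

def hamiltonian(s, J, h):
    """H(s) = -1/2 s^T J s - h^T s for an Ising configuration s in {-1,+1}^N."""
    return -0.5 * s @ J @ s - h @ s

def partition_function(J, h, beta):
    """Discrete partition function: sum over all 2^N spin configurations."""
    N = len(h)
    return sum(np.exp(-beta * hamiltonian(np.array(s), J, h))
               for s in product([-1, 1], repeat=N))

# small illustrative instance (hypothetical values)
J = np.array([[0.0, 1.0, -0.5],
              [1.0, 0.0, 0.3],
              [-0.5, 0.3, 0.0]])
h = np.array([0.1, -0.2, 0.0])
Z = partition_function(J, h, beta=1.0)
```

A useful sanity check: for J = 0 the spins decouple and the sum factorizes, Z = Π_i 2 cosh(β h_i).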
Our goal is to formulate an alternate version of the spin-glass problem in terms of a new Hamiltonian density H_β(x), over real variables x ∈ R^N, such that we can obtain a probability density over x as a Boltzmann distribution, p_β(x) = e^{−βH_β(x)}/Z_β, where Z_β is a new partition function given by an integration over all possible continuous configurations:

Z_β = ∫ d^N x e^{−βH_β(x)} .    (3)
As we will show, the continuous Boltzmann distribution can be constructed by using the original discrete distribution as a prior, and taking the conditional distribution p(x|s) such that the x variables are normally distributed around each of the 2^N possible spin configurations.

In order to apply Hamiltonian Monte Carlo to energy-based models of learning, a continuous relaxation method for transforming discrete Hamiltonians of the form Eq. 1 into continuous variables was introduced in Ref. Zhang et al. (2012). In this approach a continuous distribution is defined which is in a sense “dual” to the original, discrete distribution. In this work, we use this continuous dual distribution to develop a Hamiltonian density formulation of spin-glasses and to explore their behavior near and far from the critical point, at both high and low temperatures. Our continuous formulation of spin-glasses is particularly attractive since it provides a geometric encoding of many thermodynamic properties and allows gradient-based optimization techniques to be applied.111Recently, a distinct continuous formulation was introduced in Caravelli (2019). This formulation appears to be quite different from ours. One of the key features of our approach is that it involves no quenched disorder averaging, no self-averaging of the disorder for very large N, no mean-field assumption of small fluctuations around the mean values of physical observables in the thermodynamic limit, and no replicas. Our approach is thus not a typical mean-field theory and can be applied to spin-glass systems of intermediate dimensions and arbitrary distributions of interactions and local fields. However, we demonstrate that under certain additional assumptions our probability density formulation reduces to known techniques such as mean-field theory or the Thouless, Anderson, and Palmer (TAP) formalism. In particular, we evaluate physical properties of spin-glass models including the SK model, the 2D Ising model, spins on random Erdős-Rényi graphs, and random restricted Boltzmann machines. In a separate manuscript, we apply our formulation to build and train deep generative spin-glass models via normalizing flows over non-local latent continuous variables Hartnett and Mohseni (2019b).

The layout of this paper is as follows. In Sec. 2 we describe the dual continuous distribution of general spin-glasses expressed by Eq. 1, and introduce these distributions in terms of Hamiltonian densities. This represents an energy landscape over (continuous) spin-glass configurations, which we explore for general couplings. We then analyze the low-temperature limit in Sec. 4 and show that the description of the landscape simplifies. In Sec. 5 we consider the SK model Sherrington and Kirkpatrick (1975), random restricted Boltzmann machines, and spin-glasses on random Erdős-Rényi graphs as three illustrative examples. We conclude with a discussion in Sec. 8. In the appendices, we present in-depth technical proofs and derivations of the main results and also provide an additional example on the 2D ferromagnetic Ising model.

2 The Hamiltonian density

In this section, we employ the continuous relaxation method introduced in Zhang et al. (2012) to show that the thermodynamic properties of discrete spin-glass systems of the form Eq. 1 may be equivalently formulated in terms of a probability density over a continuous random variable x ∈ R^N. This distribution is itself a Boltzmann distribution, p_β(x) = e^{−βH_β(x)}/Z_β, with continuous partition function given by Eq. 3 and where H_β is the Hamiltonian density given by222We call H_β a density in order to emphasize how it transforms under a change of variable. For a change of variable y = f(x), the Hamiltonian density transforms as H̃_β(y) = H_β(f^{−1}(y)) + β^{−1} ln |det J_f|, where J_f is the Jacobian matrix. The term density is not meant to indicate that H_β is a per-site quantity.

H_β(x) = (1/2) xᵀ J_D x − (1/β) Σ_i ln[2 cosh(β h_i^eff(x))] .    (4)
Here we have introduced a shifted coupling matrix,

J_D = J + Δ I ,    (5)

where Δ is a shift parameter chosen to ensure that the probability density is integrable, and h^eff(x) is an effective local field given by

h^eff(x) = J_D x + h .    (6)
At large radius |x|, the first term in H_β dominates and the energy landscape is quadratic in x. The probability density will therefore be integrable if Δ is chosen to ensure that the shifted coupling matrix is positive-definite, which is achieved for any Δ > Δ_min, with

Δ_min = −λ_1(J) ,    (7)

where λ_1(J) ≤ λ_2(J) ≤ … ≤ λ_N(J) denote the ordered eigenvalues of the matrix J. In other words, if J is already positive definite, then Δ can be set to zero. Otherwise, Δ must be at least as large as the magnitude of the smallest eigenvalue of J. Throughout this work, we will set Δ = Δ_min + ε, where ε is an arbitrarily small positive number meant to ensure positive definiteness, rather than positive semi-definiteness.
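In practice the minimal shift can be read off from the spectrum of J. A short numpy sketch of this prescription (the example matrix is arbitrary; `shifted_coupling` is a hypothetical helper name):

```python
import numpy as np

def shifted_coupling(J, eps=1e-3):
    """Return (J_D, Delta) with J_D = J + Delta*I positive definite.

    Delta = max(0, -lambda_min(J)) + eps, i.e. Delta_min + eps when J has a
    negative eigenvalue, and just eps when J is already positive definite."""
    lam_min = np.linalg.eigvalsh(J)[0]   # eigvalsh returns ascending eigenvalues
    Delta = max(0.0, -lam_min) + eps
    return J + Delta * np.eye(len(J)), Delta

J = np.array([[0.0, 2.0], [2.0, 0.0]])   # eigenvalues -2 and +2
J_D, Delta = shifted_coupling(J)
```

For this example Δ = 2 + ε, and all eigenvalues of J_D are strictly positive, as required for integrability.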

The central starting assumption used in the derivation of Eq. 3 is that the distribution of the continuous variable x, given a discrete Ising spin configuration s, be given by a multi-variate Gaussian centered around s and with covariance matrix proportional to the inverse of the shifted coupling matrix, i.e.

p(x|s) = N(x; μ, Σ) ,    (8)

with mean and covariance matrix given by

μ = s ,  Σ = (β J_D)^{−1} .    (9)
Therefore, the joint distribution p(x, s) = p(x|s) p(s) is itself a multi-variate Gaussian with a prior given by the original discrete distribution. Moreover, the above choice of covariance matrix has the important property that the quadratic term in s vanishes in the exponent of the joint distribution expression:

p(x, s) = Z̃_β^{−1} exp( −(β/2) xᵀ J_D x + β sᵀ (J_D x + h) ) ,    (10)

where we have combined the normalization of the Gaussian with the discrete partition function in order to form Z̃_β, and where we have employed matrix notation for convenience. This allows s to be trivially marginalized over, which leads to p_β(x) = e^{−βH_β(x)}/Z_β, with H_β given by Eq. 4 above. In general, temperature-dependent interactions can arise when degrees of freedom are integrated out, and this happens here as well - hence the β subscript.

Similarly, the conditional distribution p(s|x) factorizes over each site, p(s|x) = Π_i p(s_i|x), with

p(s_i|x) = e^{β s_i h_i^eff(x)} / (2 cosh(β h_i^eff(x))) .    (11)

Just as the joint distribution may be interpreted as a mixture of Gaussians by writing p(x, s) = p(x|s) p(s), the above expression allows for an additional interpretation where the joint distribution is given by a product of the sigmoidal-like per-site distributions with a prior given by the continuous probability density, i.e. p(x, s) = p(s|x) p_β(x).

Rather than starting with a Gaussian conditional distribution and then calculating the joint distribution, an alternative derivation would be to employ the Hubbard-Stratonovich transformation with x now interpreted as the integration variable. The joint distribution may then be defined as the product of the original distribution and the Gaussian integrand:

p(x, s) ∝ p(s) exp( −(β/2) (x − s)ᵀ J_D (x − s) ) .    (12)
Typically, the Hubbard-Stratonovich transformation is associated with mean-field and/or the replica approach and is applied only after averaging over the disorder. Here, no disorder average has been performed, no replicas have been introduced, and no approximations have been made.

There is a probabilistic map between the continuous and discrete formulations. Importantly, both conditional distributions p(x|s) and p(s|x) are easy to sample from. Given a collection of discrete spin configurations, perhaps obtained through Monte Carlo techniques, a corresponding collection of continuous configurations may be obtained using the conditional distribution p(x|s) (and vice versa). The free energies of each formulation may be related to one another via

F_β = F + (N/2β) ln(β/2π) + (1/2β) ln det J_D − NΔ/2 .    (13)

This expression may be derived by equating the joint distribution (Eq. 10) with p(x|s) p(s). The second and third terms are simply due to the normalization of the multi-variate Gaussian in p(x|s), and the last term is due to the fact that s_i² = 1 for Ising spins. The fact that the continuous and discrete formulations are related in this way indicates that there is no “free lunch” here - one cannot use the continuous formulation to circumvent the problems associated with complex spin-glass distributions - for example, the hardness in sampling or the evaluation of the partition function.
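This bookkeeping can be verified numerically for a toy instance. In exponentiated form, the normalization factors relate the two partition functions as Z = det(βJ_D/2π)^{1/2} e^{−βΔN/2} Z_β, which a brute-force grid integration reproduces. A sketch assuming numpy (the two-spin couplings, field, and temperature are illustrative):

```python
import numpy as np

beta = 0.7
J = np.array([[0.0, 1.0], [1.0, 0.0]])
h = np.array([0.3, -0.2])
Delta = -np.linalg.eigvalsh(J)[0] + 0.5   # shift with eps = 0.5 for a well-conditioned J_D
JD = J + Delta * np.eye(2)

# discrete partition function: sum over the 4 spin configurations
Z = 0.0
for s1 in (-1, 1):
    for s2 in (-1, 1):
        s = np.array([s1, s2])
        E = -0.5 * s @ J @ s - h @ s
        Z += np.exp(-beta * E)

# continuous partition function Z_beta by grid integration of exp(-beta*H_beta)
ts = np.linspace(-8, 8, 801)
X1, X2 = np.meshgrid(ts, ts, indexing="ij")
heff1 = JD[0, 0]*X1 + JD[0, 1]*X2 + h[0]
heff2 = JD[1, 0]*X1 + JD[1, 1]*X2 + h[1]
quad = 0.5*(JD[0, 0]*X1**2 + 2*JD[0, 1]*X1*X2 + JD[1, 1]*X2**2)
Hbeta = quad - (np.log(2*np.cosh(beta*heff1)) + np.log(2*np.cosh(beta*heff2))) / beta
dx = ts[1] - ts[0]
Z_beta = np.exp(-beta*Hbeta).sum() * dx**2

# relation between the two partition functions (exponentiated free-energy relation)
pred = np.sqrt(np.linalg.det(beta*JD/(2*np.pi))) * np.exp(-beta*Delta*2/2) * Z_beta
```

With these parameters the predicted Z agrees with the brute-force sum to well under a percent, limited only by the grid resolution.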

The partition function is a generating function for the n-point correlation functions. Denoting the usual thermodynamic ensemble average over discrete spins as ⟨·⟩, and the analogous ensemble average over continuous configurations as ⟨·⟩_β, then by applying β^{−n} ∂^n/(∂h_{i1} ⋯ ∂h_{in}) to each side of Eq. 13 the connected correlation functions of the two ensembles may be related through:

⟨s_{i1} ⋯ s_{in}⟩_c = β^{−n} ∂^n ln Z_β / (∂h_{i1} ⋯ ∂h_{in}) ,    (14)

where the subscript c denotes connected.333Since this relation holds for all n, it follows that Eq. 14 also holds for the unconnected correlation functions (i.e. the subscript may be dropped):

In particular, the average local magnetization at site i is related to the continuous variable via ⟨s_i⟩ = ⟨tanh(β h_i^eff(x))⟩_β. The marginal probability of the spin pointing up at site i is

p(s_i = 1) = ⟨ (1 + tanh(β h_i^eff(x)))/2 ⟩_β .    (15)

This expression allows for an interpretation of the effective field h^eff as a global input signal that determines the local spin polarization after averaging over all possible continuous configurations x. In this expression, the hyperbolic tangent plays the role of an activation function, commonly used in artificial neural networks, that determines the polarization of the spin.

We can express the overlap distribution of the original discrete system in terms of the continuous variable x using Eq. 14. If the thermodynamic (Gibbs) measure decomposes into a sum over pure states α, each with weight w_α, then the disorder-dependent overlap distribution is

P_J(q) = Σ_{αβ} w_α w_β δ(q − q_{αβ}) ,  q_{αβ} = (1/N) Σ_i m_i^α m_i^β .    (16)

This distribution can be regarded as the order parameter of mean-field spin-glasses in our continuous formulation, and the moments of this distribution may be expressed in terms of spin correlation functions as Dotsenko (2005):

∫ dq P_J(q) q^n = N^{−n} Σ_{i1 ⋯ in} ⟨s_{i1} ⋯ s_{in}⟩² .    (17)

By using Eq. 14, this may be equivalently written as:

∫ dq P_J(q) q^n = N^{−n} Σ_{i1 ⋯ in} ( β^{−n} Z_β^{−1} ∂^n Z_β / (∂h_{i1} ⋯ ∂h_{in}) )² .    (18)

This relation shows how the spin-glass order parameter is encoded in the continuous formulation.

This concludes the derivation of our continuous formulation. One important aspect of using continuous variables is that they provide a geometric encoding of the problem. In particular, p(s|x) encodes the likelihood that a given spin will point up or down for a given point x in R^N. This probability is in turn determined by the inverse temperature β and the strength of the effective local field at that point, h^eff(x). The contours of constant h^eff are given by shifted ellipsoids, with the shift given by the external local field h and the shape and scale of the ellipsoid determined by J_D. The conditional distribution p(s|x) can be used to obtain the marginal probability distribution p(s) by integrating over all x and weighting each point according to its probability under the continuous Boltzmann distribution p_β(x). The tanh-shaped activation function that appears in Eq. 15 implies that the spins will be frozen if the regions of large local effective field are assigned a low energy in the energy landscape given by H_β. In subsequent sections, we will explore the geometric structure of this landscape further, both for general coupling matrices J, and for some well-known examples such as the SK model.
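Both directions of this probabilistic map are simple to implement: draw x|s from the Gaussian of Eq. 8, and score s|x with the factorized conditional of Eq. 11. A numpy sketch with illustrative couplings (the helper names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0
J = np.array([[0.0, 1.0, -0.5],
              [1.0, 0.0, 0.3],
              [-0.5, 0.3, 0.0]])
h = np.array([0.1, -0.2, 0.0])
Delta = -np.linalg.eigvalsh(J)[0] + 0.5
J_D = J + Delta * np.eye(3)

def sample_x_given_s(s, n):
    """Draw n samples from p(x|s) = N(x; s, (beta*J_D)^{-1})."""
    cov = np.linalg.inv(beta * J_D)
    return rng.multivariate_normal(mean=s, cov=cov, size=n)

def prob_up_given_x(x):
    """p(s_i = +1 | x) for each site, from the factorized conditional p(s|x)."""
    heff = J_D @ x + h
    return np.exp(beta * heff) / (2.0 * np.cosh(beta * heff))

s = np.array([1.0, -1.0, 1.0])
xs = sample_x_given_s(s, 20000)
x_mean = xs.mean(axis=0)       # empirical mean of x|s, close to s
p_up = prob_up_given_x(xs[0])  # per-site spin-up probabilities at one sample
```

The empirical mean of the x samples recovers the conditioning configuration s, and the per-site probabilities are proper probabilities in (0, 1).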

3 Geometry of the energy landscape

The probability density formulation affords several advantages over the original discrete formulation as well as some additional mathematical subtleties. Continuous variables allow alternative sampling methods such as Hamiltonian Monte Carlo Duane et al. (1987) to be applicable, and indeed, this was one of the motivations for the continuous relaxation method Zhang et al. (2012). Another benefit is that the continuous formulation provides a geometric encoding of the combinatorial optimization problems which may be represented in terms of spin-glass systems. In this section we will derive basic properties of the geometry of the energy landscape for arbitrary values of the couplings and graph topology. Later, in Sections 5, 6, and 7 we will further explore our formulation of the SK model, random restricted Boltzmann machines, and spins on random Erdős-Rényi graphs as specific examples.

One of our main results is that the energy landscape defined by H_β is convex above a disorder-dependent critical temperature T_conv, given in terms of the largest eigenvalue of the shifted coupling matrix:

T_conv = λ_N(J_D) .    (19)
The proof is given in Appendix A. One of the most important mathematical properties of the energy landscape is whether or not it is convex. In particular, convexity of H_β implies that the probability density p_β(x) is log-concave, and log-concave probability densities enjoy a number of useful properties, such as the fact that the cumulative distribution function (CDF) is also log-concave, as well as the fact that the marginal density over any subset of the x variables will also be log-concave. Convexity of H_β also has practical consequences; for example, it means that certain algorithms such as adaptive rejection sampling may be used to efficiently sample p_β(x) Gilks and Wild (1992).

As the temperature is lowered past T_conv, the Hamiltonian density becomes non-convex. In order to understand this transition, it will be useful to set the external magnetic field to zero, h = 0. We first note that the expression for H_β in Eq. 4 is the sum of two terms. The first is quadratic in x, and is guaranteed to be positive for any x ≠ 0 since Δ was chosen to make J_D positive-definite. Conversely, the second term is negative for any x, and it scales linearly in x at large radii, i.e. as −Σ_i |h_i^eff(x)|. Thus, at large radii the first term dominates and the Hamiltonian density is:

H_β(x) = (1/2) xᵀ J_D x + O(|x|) ,    (20)

which ensures that p_β(x) is integrable. In contrast, near the origin the expression simplifies to

H_β(x) = −(N/β) ln 2 + (1/2) xᵀ (J_D − β J_D²) x + O(x⁴) .    (21)

The linear term in the expansion vanishes, and therefore the origin is a critical point of the Hamiltonian density. If J_D − β J_D² is also positive-definite, then x = 0 is a minimum. This condition is equivalent to T > T_conv, and so in this case x = 0 is the unique global minimum. As T is lowered below T_conv, the matrix J_D − β J_D² develops negative eigenvalues, and x = 0 becomes a saddle.
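The quadratic form in this expansion gives a direct numerical handle on the transition: the Hessian at the origin, J_D − βJ_D², loses positive-definiteness exactly at β = 1/λ_max(J_D). A numpy sketch with an arbitrary example matrix:

```python
import numpy as np

J = np.array([[0.0, 1.0], [1.0, 0.0]])
Delta = -np.linalg.eigvalsh(J)[0] + 0.1
J_D = J + Delta * np.eye(2)

lam_max = np.linalg.eigvalsh(J_D)[-1]
beta_conv = 1.0 / lam_max        # inverse of T_conv = lambda_max(J_D)

def hessian_origin_min_eig(beta):
    """Smallest eigenvalue of the Hessian of H_beta at x = 0 (with h = 0)."""
    H = J_D - beta * (J_D @ J_D)
    return np.linalg.eigvalsh(H)[0]

above = hessian_origin_min_eig(0.9 * beta_conv)   # T above T_conv: minimum
below = hessian_origin_min_eig(1.1 * beta_conv)   # T below T_conv: saddle
```

The smallest Hessian eigenvalue is positive just above the transition temperature and negative just below it, confirming that the origin turns from a minimum into a saddle.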

In addition to the origin becoming unstable, the convex/non-convex transition is also characterized by the appearance of a pair of additional critical points. The critical points of H_β solve

x = tanh( β (J_D x + h) ) .    (22)

As the temperature approaches T_conv from below, any critical points that exist will merge with the critical point x = 0, since x = 0 is the sole critical point for T > T_conv. We may therefore linearize the critical point equation around x = 0. In this case, Eq. 22 simplifies to x = β J_D x. A non-trivial solution of this equation is just an eigenvector of β J_D with eigenvalue 1, which corresponds to T = T_conv. If v is the eigenvector of J_D with the largest eigenvalue λ_N, then so is c v for any non-zero c - in other words the scale is not fixed in the linear treatment. Going beyond linear order will fix c up to a reversal x → −x, since H_β(x) = H_β(−x) for h = 0. Thus, a pair of critical points ±x will appear as T_conv is reached from above.
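The appearance of this pair can be seen numerically by iterating the critical point equation x ← tanh(β J_D x) (with h = 0). A sketch for a two-spin example (couplings and shift are illustrative):

```python
import numpy as np

J = np.array([[0.0, 1.0], [1.0, 0.0]])
Delta = -np.linalg.eigvalsh(J)[0] + 0.1
J_D = J + Delta * np.eye(2)
beta_conv = 1.0 / np.linalg.eigvalsh(J_D)[-1]

def find_critical_point(beta, x0, n_iter=2000):
    """Fixed-point iteration of the critical point equation x = tanh(beta * J_D x)."""
    x = x0.copy()
    for _ in range(n_iter):
        x = np.tanh(beta * (J_D @ x))
    return x

x0 = np.array([0.1, 0.1])
x_high_T = find_critical_point(0.5 * beta_conv, x0)  # above T_conv: collapses to the origin
x_low_T = find_critical_point(2.0 * beta_conv, x0)   # below T_conv: non-trivial critical point
```

Above T_conv the iteration contracts to the origin; below it, a non-trivial fixed point appears, and by the x → −x symmetry its reversal is a fixed point as well.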

The convex/non-convex transition experienced by the continuous distribution p_β(x) has no counter-part in the original discrete distribution p(s). For every example we study below, T_conv does not correspond to a phase transition in the discrete system. In fact, T_conv may be varied without changing the physical content of the theory by using a shift larger than the minimum, i.e. Δ > Δ_min. However, there is some physical significance to the minimal value of T_conv, which can be seen by noting that λ_N(J) is the critical temperature predicted by the naive mean-field equation

m = tanh( β (J m + h) ) .    (23)

Defining T_MF = λ_N(J), we may then write

T_conv = T_MF + Δ .    (24)

Moreover, if the eigenvalues of J lie in a symmetric interval, with λ_1 = −λ_N, then Δ_min = λ_N, and T_conv ≥ 2 T_MF. With our choice of Δ = Δ_min + ε we have T_conv = 2 T_MF + ε, so that as ε → 0 the inequality is saturated.

To summarize the results of this section: as the temperature is lowered past T_conv, the Hamiltonian density becomes non-convex, the critical point at x = 0 becomes unstable, and a pair of non-trivial critical points ±x appears. It is difficult to go much further than this description and make more detailed statements about the geometry of the energy landscape without specifying the couplings J. This is to be expected, since our formalism applies to all spin-systems of the form Eq. 1, which includes both spin-glasses and ferromagnetic systems like the 2D Ising model. Below in Sections 5, 6 and 7 we will further analyze the landscape for the Sherrington-Kirkpatrick model, random Restricted Boltzmann Machines, and spin-glasses on random Erdős-Rényi graphs respectively. The case of the 2D ferromagnetic Ising model is explored in Appendix C.

4 Geometry of the energy landscape deep in the spin-glass phase

In this section we will discuss the low-temperature limit of our formalism. Our goal will be to provide some insight into the geometry of energy landscapes of systems which are deep in the spin-glass phase (when such a phase exists), and to show how the metastable spin-glass states are geometrically encoded in the Hamiltonian density H_β. We will leave the coupling matrices and disorder distribution unspecified, and as a result, our discussion will be somewhat general.

We begin by taking the low-temperature expansion of the Hamiltonian density: H_β(x) = H_∞(x) + O(β^{−1}), where

H_∞(x) = (1/2) xᵀ J_D x − Σ_i |h_i^eff(x)| .    (25)
The equation governing the zero-temperature critical points may be obtained from H_∞ directly or from the β → ∞ limit of Eq. 22:

x = sign( J_D x + h ) .    (26)

Additionally, the Hessian (matrix of second derivatives) of H_∞ is simply the shifted coupling matrix: ∂²H_∞/∂x_i∂x_j = (J_D)_ij. There is a subtlety here, which is that the Hessian is not defined for points which satisfy h_i^eff(x) = 0 for any i because the absolute value function is not differentiable at the origin. This is an important observation, since without it one would conclude that H_∞ is convex, which it certainly is not.

With these ingredients, the integral defining the partition function may then be formally written as a sum over the critical points using Laplace’s method:

Z_β ≈ (2π/β)^{N/2} (det J_D)^{−1/2} Σ_α e^{−β H_∞(x_α)} ,    (27)

where x_α are the critical points of the Hamiltonian density, and the prefactor is due to the Gaussian integration around each critical point. In writing the above expression we have assumed that all critical points are minima, and so the Gaussian integration converges. Without specifying the coupling matrix it is difficult to say much about the existence or non-existence of saddles, beyond the fact that x = 0 is always both a solution of the critical point equation and a point for which the Hessian is not defined. This and any other similar points will require some special treatment, for example by rotating the integration contours and including sub-leading corrections in β^{−1}. Ignoring such complications, general correlation functions may also be formally written as a sum over critical points as:

⟨f(x)⟩_β ≈ Σ_α w_α f(x_α) ,    (28)

where w_α = e^{−β H_∞(x_α)} / Σ_γ e^{−β H_∞(x_γ)} is the Boltzmann weight of each critical point, and f is an arbitrary function.444We have ignored the Gaussian prefactor here, since 1) it is subleading in β^{−1}, and 2) all critical points receive the same prefactor since the Hessian is just the constant matrix J_D. Thus, the critical points can be seen to encode almost all of the physics of the problem in the low-temperature limit. Applying the saddle-point method to the 1-point function (i.e. the n = 1 case of Eq. 14), the saddle point coordinates are related to the average per-site magnetizations via

⟨s_i⟩ ≈ Σ_α w_α (x_α)_i .    (29)
Therefore, in our continuous formulation the critical points are very analogous to the pure states of spin-glass theory. Pure states of spin-glasses are sub-regions of the state space which are separated by large energy barriers, and the system is sub-ergodic in those regions even though global ergodicity is broken Mézard et al. (1987). Indeed, if the sum over critical points is restricted to just a single critical point (or if there is only one such dominant critical point in the thermodynamic limit), then all connected correlation functions vanish, for example

⟨x_i x_j⟩ = ⟨x_i⟩ ⟨x_j⟩ .    (30)
This property is also known as cluster decomposition.

The pure states have a simple geometric interpretation in our formalism. Recall that the continuous probability density may be written as a weighted sum of Gaussians, each centered around one of the 2^N spin configurations, p_β(x) = Σ_s p(s) N(x; s, (β J_D)^{−1}). The covariance matrix of each Gaussian is (β J_D)^{−1}, and so the level sets of p(x|s) are N-dimensional ellipsoids whose shape is determined by the eigenvectors and eigenvalues of J_D. In general, the density clouds due to each spin configuration will overlap - for example above T_conv the Gaussians are so broad that the resulting continuous distribution is log-concave. At low-temperatures, the distribution will “fragment” into a number of distinct modes. An example of this is shown in Fig. 1 for the simple case of just two spins, N = 2.

Figure 1: Contour plot of the Hamiltonian density for a system of two spins, shown for four temperatures (a)-(d). The density clouds due to the spin configurations overlap above T_conv; that is, the Gaussians are sufficiently broad that the resulting continuous distribution is log-concave. At low temperatures, the distribution fragments into several distinct modes. Blue regions correspond to low energy configurations.

The nature of this fragmentation depends on how the β → ∞ limit is taken. Suppose that the original coupling matrix J has both positive and negative eigenvalues, so that the smallest eigenvalue of the shifted matrix satisfies λ_1(J_D) = ε (recall that the purpose of introducing the small positive constant ε was to guarantee positive-definiteness of J_D). If Δ is chosen to be temperature-independent, then the limit β → ∞ pushes all the eigenvalues of β J_D to infinity, and consequently all the eigenvalues of the covariance matrix (β J_D)^{−1} approach zero. In this case p_∞(x) is composed of 2^N distinct delta functions with different weights, and the pure states are rather trivially just the spin configurations. However, if instead βε is held fixed as β → ∞, then the eigenvalue spectrum of the covariance matrix will range from 0 to the finite value (βε)^{−1}. Thus, the shape of the ellipsoid defining the level sets of the Gaussians will shrink to a point in some directions, and remain finite in others. In this case the fragmentation of p_∞(x) will be more interesting. Groups of spin configurations will merge to form pure states as determined by the geometry of the zero-temperature ellipsoids in relation to the vertices of the hypercube.

The pure states of non-disorder-averaged spin-glasses can be associated with solutions of a modified mean-field equation known as the Thouless, Anderson, and Palmer (TAP) equation, which was derived in order to correct the failure of naive mean-field theory to describe the spin-glass phase of the SK model. The naive mean-field equation is given in Eq. 23, whereas the TAP equation is given by Thouless et al. (1977):

m_i = tanh[ β ( Σ_j J_ij m_j + h_i − β m_i Σ_j J_ij² (1 − m_j²) ) ] .    (31)

We have argued that the critical points of the continuous probability density may be interpreted as pure states at low temperature, and thus there should be a connection between these and the solutions of the TAP equation. Here we will establish such a connection at zero temperature, for which the TAP equation simplifies considerably: m = sign(J m + h). Importantly, this is also the zero-temperature limit of the naive mean-field equation Eq. 23. The zero-temperature limit of the TAP/naive mean-field equations may be compared with the equation governing the critical points of H_∞:

x = sign( J_D x + h ) .    (32)
Note that the TAP/naive mean-field equation depends on the original coupling matrix J, whereas the critical points of the Hamiltonian density depend on the shifted coupling matrix J_D. A key result is that solutions of the zero-temperature naive mean-field/TAP equation are also critical points of the zero-temperature Hamiltonian density:

Proposition 1.

If m is a solution of the zero temperature TAP equation m = sign(J m + h), then m is also a solution of the zero temperature critical point equation x = sign(J_D x + h).


Suppose m = sign(J m + h). The result holds for any Δ ≥ 0, so we will consider the cases Δ = 0 and Δ > 0 separately. If Δ = 0, then clearly the mean-field and critical point equations are identical. If Δ > 0, then (J m + h)_i = m_i |(J m + h)_i|. Thus,

sign( (J_D m + h)_i ) = sign( m_i ( |(J m + h)_i| + Δ ) ) = m_i .    (33)
This establishes that every solution of the TAP equation is also a critical point of H_∞. The converse does not hold: there are critical points of the Hamiltonian density which are not solutions of the TAP equation. To understand the significance of these points, recall that the nature of the pure states depends on whether Δ is held fixed as β → ∞, or if instead βε is held fixed. In the first case, the pure states of the continuous formulation are somewhat trivial, as any of the 2^N spin configurations will be a pure state according to the above discussion. There may additionally be critical points with h_i^eff(x) = 0 for some i which will not correspond to any Ising spin configuration. For example, the point x = 0 is always a critical point (for h = 0). Since the TAP solutions are a subset of all possible spin configurations, the zero-temperature critical points will include both the TAP solutions as well as all other spin configurations and any saddle-like points such as x = 0. If the zero-temperature limit is instead taken while holding βε fixed, then the critical points will include just a subset of all spin configurations. That subset will include the TAP states, and possibly other spin configurations and saddle-like points.
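Proposition 1 is easy to check exhaustively for small N: enumerate all sign configurations, keep those satisfying the zero-temperature TAP equation m = sign(Jm + h), and confirm each also satisfies the critical point equation with the shifted matrix. A numpy sketch with random illustrative disorder:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
N = 8
A = rng.normal(size=(N, N))
J = (A + A.T) / 2.0
np.fill_diagonal(J, 0.0)
h = 0.1 * rng.normal(size=N)
Delta = -np.linalg.eigvalsh(J)[0] + 0.5
J_D = J + Delta * np.eye(N)

# zero-temperature TAP / naive mean-field solutions: m = sign(J m + h)
tap_solutions = []
for m in product([-1.0, 1.0], repeat=N):
    m = np.array(m)
    if np.all(np.sign(J @ m + h) == m):
        tap_solutions.append(m)

# check: every TAP solution is a zero-temperature critical point, m = sign(J_D m + h)
all_critical = all(np.all(np.sign(J_D @ m + h) == m) for m in tap_solutions)
```

At least one TAP solution always exists (the ground state is single-spin-flip stable), and each one passes the critical point test, as the proposition guarantees.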

5 Sherrington-Kirkpatrick Model

In order to build intuition for the probability density formulation of general spin-glass systems, in this section we consider as an example the Sherrington-Kirkpatrick (SK) model Sherrington and Kirkpatrick (1975). By specifying the coupling matrix J (or rather, the disorder distribution from which J is drawn), we may further explore the geometry of the energy landscape and the nature of both the spin-glass and convexity transitions in our formulation.

The SK model is defined by specifying that the couplings be drawn from an iid Gaussian distribution Sherrington and Kirkpatrick (1975):


where the values are fixed by symmetry of to be the same as the values, and the diagonal entries are zero. The coupling parameter controls the variance of the disorder. The eigenvalue distribution of in the large- limit is simply the Wigner semi-circle distribution, so that the probability density of the eigenvalues of is


where is the indicator function. The radius of the semi-circle is related to the coupling parameter via . Since the eigenvalues of are restricted to the strip , the eigenvalues of the shifted coupling matrix will be shifted to lie within the strip . The eigenvalues of both and are depicted in Fig. 2.

Figure 2: The eigenvalue distribution of the coupling matrix and the shifted coupling matrix for the SK model. Both distributions are described by the Wigner semi-circle distribution, shown in black for both and . The size of the shift has been chosen so that the shifted distribution has support on the positive real numbers, .
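The semi-circle law is easy to check numerically by sampling a coupling matrix and inspecting its spectrum. The sketch below assumes the common convention that the off-diagonal couplings have variance scaling as 1/N (the paper's normalization may differ); the bulk edges should then sit near the semi-circle radius, and shifting by the radius makes the matrix approximately positive semi-definite:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000
J_scale = 1.0  # disorder strength; coupling variance J_scale**2 / N (assumed convention)

# Symmetric Gaussian coupling matrix with zero diagonal
A = rng.normal(0.0, J_scale / np.sqrt(N), size=(N, N))
J = (A + A.T) / np.sqrt(2.0)
np.fill_diagonal(J, 0.0)

eigs = np.linalg.eigvalsh(J)
R = 2.0 * J_scale  # Wigner semi-circle radius in this convention

# Bulk edges sit near +/- R; the shifted matrix J + R*I is (nearly) positive semi-definite
print(f"min eig: {eigs.min():.3f}, max eig: {eigs.max():.3f}, radius: {R}")
```

At this system size the extreme eigenvalues land within a few percent of the predicted radius; finite-size fluctuations at the spectral edge shrink as the system grows.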

Using the radius of the Wigner semi-circle (and disregarding for now by setting it to zero), we have


These may be contrasted with the critical temperature below which the system is in a spin-glass phase:


Therefore, we have found that


This indicates that the Hamiltonian density becomes non-convex, due to the appearance of multiple critical points, well before any transition to an ordered phase occurs. It is worth noting that these results are strictly only valid in the large- limit; for finite both the eigenvalues of and the critical temperature will exhibit fluctuations due to finite-size effects. The fluctuation of the critical temperature due to finite-size effects is investigated in Castellana and Zarinelli (2011).

5.1 High-temperature limit

The fact that the convexity-transition temperature exceeds the spin-glass critical temperature is intriguing. Naively, one might have expected the two temperatures to coincide, because the transition from a convex to a non-convex Hamiltonian density represents a real and significant change in the corresponding Boltzmann distribution. Moreover, the minimal value of does not appear to have been previously identified as having any particular importance for the well-studied SK model. The mathematical transformation from the original discrete variables to the continuous variables was exact and involved no approximation; however, one still needs to verify that the spin-glass transition has not been shifted, and that it still occurs at the usual critical temperature rather than at the higher temperature where convexity is lost. To this end, we carried out a high-temperature expansion in terms of the continuous variables and found exact agreement with the expansion in terms of the original discrete variables carried out by Thouless, Anderson, and Palmer in Thouless et al. (1977). In both cases, the expansion breaks down at the spin-glass phase transition temperature and not at the higher temperature . We provide an outline of the calculation here; a more detailed treatment can be found in Appendix B.

Using Eq. 13, the partition function may be written in terms of the continuous variables as


The expectation value is taken over a properly normalized Gaussian distribution with zero mean and covariance matrix . A high-temperature expansion may then be performed by expanding around . At each order the Gaussian integrals may be performed by Wick contractions, which introduce an increasing number of terms as the order of the expansion grows. The calculation simplifies dramatically if the disorder is averaged over. Denoting the disorder average as , the final result is


In the large- limit the sub-extensive terms may be neglected, but as the temperature of the spin-glass transition is approached from above the logarithm becomes singular, indicating a breakdown of the perturbative expansion. Not only is the free energy analytic at the minimal convexity-transition temperature , but any dependence on the shift cancels out, as expected since the shift was only introduced as part of our formulation.

The above result was first derived in Thouless et al. (1977) by considering the expansion of in terms of the original discrete spin variables. In both cases, the expansion in terms of the discrete variables and the expansion in terms of the continuous variables, the singular logarithm term is obtained by summing an infinite number of terms. In terms of Feynman diagrams, the terms that contribute to the singularity correspond to double-sided regular -gons for :

The fact that both expansions agree and yield no non-analyticity at indicates that the convex/non-convex transition is not associated with a thermodynamic phase transition. It also provides a consistency check that the continuous formulation does not break down below .
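The Wick contractions invoked in the expansion can be sanity-checked numerically on a small example. The covariance matrix below is an arbitrary positive-definite matrix chosen for illustration, not the specific covariance appearing in the expansion:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
B = rng.normal(size=(n, n))
C = B @ B.T + n * np.eye(n)  # arbitrary positive-definite covariance

x = rng.multivariate_normal(np.zeros(n), C, size=2_000_000)

# Wick's theorem for a zero-mean Gaussian:
# E[x_i x_j x_k x_l] = C_ij C_kl + C_ik C_jl + C_il C_jk
i, j, k, l = 0, 1, 2, 0
mc = np.mean(x[:, i] * x[:, j] * x[:, k] * x[:, l])
wick = C[i, j] * C[k, l] + C[i, k] * C[j, l] + C[i, l] * C[j, k]
print(f"Monte Carlo: {mc:.3f}, Wick: {wick:.3f}")
```

The Monte Carlo estimate of the fourth moment agrees with the sum over pairings to within sampling error; higher even moments are handled the same way, which is what makes the order-by-order bookkeeping in the expansion tractable.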

5.2 Zero-temperature limit

Lastly, we investigated the zero-temperature limit of the SK model by studying the critical points. These are solutions of the equation . We generated a large number of such solutions by randomly initializing and then applying the iterative update rule below until a solution was found (or the algorithm failed to converge after a set number of iterations):


This update rule corresponds to performing gradient descent on , using a learning rate of and in place of the usual gradient. (We thank Dan Ish for pointing this out to us.) We also generated a large number of solutions of the zero-temperature mean-field/TAP equation using the same procedure, with the update rule given by:


In agreement with Proposition 1 above, we found that every mean-field/TAP solution also solved the critical point equation. Interestingly, we also found that none of the critical point solutions generated this way solved the mean-field/TAP equation. This is consistent with our earlier observation (which held for large ) that the saddle-like critical points exponentially outnumber the minima. We also found that the solutions produced by the iterative method applied to each equation had widely separated energies. Fig. 3 plots the distribution of energies of each set of solutions. It should be emphasized that the iterative update rule/gradient descent method we used almost certainly does not generate solutions uniformly. We generated 10,000 unique solutions using each approach, but for the SK model we expect an exponential (in ) number of solutions Bray and Moore (1980), and the solutions plotted in Fig. 3 may not be representative of the overall distribution. Rather, they are representative of the distribution obtained when solutions are generated using the iterative update rule/gradient descent.
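To illustrate the kind of iterative scheme used here, the sketch below applies a parallel fixed-point iteration to the standard zero-temperature mean-field condition that every spin be aligned with its local field, m_i = sgn(sum_j J_ij m_j). This is a simplified stand-in for the update rules in the text (whose exact form, learning rate, and convergence criterion we do not reproduce), and like the method described above it may fail to converge within the iteration budget:

```python
import numpy as np

def iterate(f, x0, max_iter=1000, tol=1e-12):
    """Apply x <- f(x) until the update stops changing, or give up."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new, True
        x = x_new
    return x, False

rng = np.random.default_rng(2)
N = 200
A = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
J = (A + A.T) / np.sqrt(2.0)
np.fill_diagonal(J, 0.0)

# Parallel sign update; fixed points are 1-spin-flip-stable (metastable) states
m0 = rng.choice([-1.0, 1.0], size=N)
m, converged = iterate(lambda s: np.sign(J @ s), m0)

if converged:
    # At a fixed point every spin agrees with the sign of its local field
    assert np.all(m * (J @ m) > 0)
```

Parallel sign updates on a symmetric coupling matrix can settle into two-cycles rather than fixed points, which is one reason a convergence cap (and a random restart strategy) is needed in practice.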

Figure 3: The distribution of the per-site Hamiltonian densities at zero temperature, i.e. obtained by an application of the iterative procedure discussed in the text. This procedure was applied to two equations, the mean-field/TAP equation and the critical point equation , and in both cases . Note that every solution of the first equation is also a solution of the second, although the converse is not true. Here we have set in . This plot shows that the typical critical point has a much higher energy than typical solutions of the mean-field equation (when both sets of solutions are obtained using the iterative procedure).

6 Random Restricted Boltzmann Machines

As a second example, we study the bipartite SK model, which is the natural extension of the SK model to complete bipartite graphs. This example also has significance in machine learning, as it represents a randomly initialized Restricted Boltzmann Machine (RBM) Smolensky (1986). In particular, the bipartite SK model describes a random initialization of RBMs in which the biases have been set to zero. The connection between the bipartite SK model and RBMs has recently been studied in Decelle et al. (2017, 2018); Hartnett et al. (2018).

In this case, the coupling matrix takes on the block form:


with a matrix, where is the number of visible spins and is the number of hidden spins. The total number of spins is , and the spin vector may be written as . As in the SK model, the weights in the bipartite SK model will be iid normally distributed:


Using the relation


for block matrices , the characteristic equation for , , is equivalent to the condition


provided that . Thus, the non-zero eigenvalues of are related to the eigenvalues of via


The eigenvalue distribution of in the large- limit defined by with held fixed is given by the Marchenko–Pastur distribution Marčenko and Pastur (1967), which in our conventions is:




In Fig. 4 we plot the eigenvalue distribution for both and .
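The block-matrix relation above can be verified directly: the non-zero eigenvalues of the bipartite coupling matrix come in plus/minus pairs given by the singular values of the weight matrix, with the remaining eigenvalues vanishing. A small numerical sketch (with illustrative sizes, not those used in the figure):

```python
import numpy as np

rng = np.random.default_rng(3)
n_v, n_h = 80, 50  # visible and hidden spin counts (illustrative)
W = rng.normal(size=(n_v, n_h))

# Bipartite coupling matrix in block form: off-diagonal blocks W and W^T
J = np.block([[np.zeros((n_v, n_v)), W],
              [W.T, np.zeros((n_h, n_h))]])

eigs = np.linalg.eigvalsh(J)
svals = np.linalg.svd(W, compute_uv=False)

# Non-zero eigenvalues of J are +/- the singular values of W;
# the remaining |n_v - n_h| eigenvalues vanish.
paired = np.sort(np.abs(eigs))[-2 * min(n_v, n_h):]
expected = np.sort(np.repeat(svals, 2))
assert np.allclose(paired, expected, atol=1e-8)
```

This pairing is why the Marchenko–Pastur law for the weight matrix fixes the spectrum of the full coupling matrix in the large-system limit.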

As a result of this analysis, we conclude that in the large- limit , and also (again neglecting ). Thus, we have that


Moreover, as in the SK model, both of these temperatures are higher than the critical temperature of the spin-glass phase transition, which in our conventions is Hartnett et al. (2018):


Similar to the case of the SK model, the convex/non-convex transition happens well before the phase transition occurs as the temperature is lowered.

Figure 4: (a) The eigenvalue distribution for a random draw of for , and . The dashed line corresponds to the large- analytic prediction given by the Marchenko-Pastur distribution. (b) The eigenvalue distribution for the coupling matrix constructed using the matrix used in (a). The dashed line corresponds to the large- analytic prediction, which may be obtained from the Marchenko-Pastur analytic prediction and Eq. 47.

7 Spin-glasses on Erdős-Rényi random graphs

As a final example, we examine another prototypical spin-glass model by placing spins on Erdős-Rényi random graphs. We will consider to be a Bernoulli random variable, by which we mean that


As before, the entries are fixed to be , and the diagonal entries are zero. Thus, is proportional to the adjacency matrix for an Erdős-Rényi random graph Erdős and Rényi (1960).

In the limit where , the eigenvalues of have been shown to obey a semi-circle law Erdős et al. (2013). The first eigenvalues form the bulk of the spectrum, and lie within the strip , with


where and . (Although as , we include it here to bring the analytic prediction closer to the numerical results we find for finite .) The largest eigenvalue is separated from the bulk eigenvalues because the matrix has a non-zero mean, and its average value is


Thus, in this limit the mean-field and convex temperatures may be worked out to be:


Note that in this example , because the eigenvalues of do not lie in a symmetric interval: the first eigenvalues do, but the largest eigenvalue spoils the symmetry.
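The two spectral scales discussed above, the semi-circle bulk and the detached largest eigenvalue, are easy to reproduce numerically. The sketch below diagonalizes the raw adjacency matrix of G(N, p) with illustrative parameter values; in this unnormalized convention the bulk edge sits near 2*sqrt(N*p*(1-p)) and the outlier near N*p:

```python
import numpy as np

rng = np.random.default_rng(4)
N, p = 2000, 0.05  # illustrative graph size and edge probability

# Symmetric 0/1 adjacency matrix of an Erdos-Renyi graph G(N, p)
upper = np.triu(rng.random((N, N)) < p, k=1)
A = (upper + upper.T).astype(float)

eigs = np.sort(np.linalg.eigvalsh(A))

bulk_edge = 2.0 * np.sqrt(N * p * (1 - p))  # edge of the semi-circle bulk
outlier = N * p                             # detached top eigenvalue (non-zero mean)
print(f"largest: {eigs[-1]:.1f} (~{outlier}), bulk edge: {eigs[-2]:.1f} (~{bulk_edge:.1f})")
```

The second-largest eigenvalue tracks the bulk edge while the largest sits far outside it, which is the asymmetry responsible for the inequality noted above.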

Figure 5: (a) The eigenvalue distribution for a random draw of the coupling matrix from an Erdős-Rényi distribution with , , and . The analytic prediction of Erdős et al. (2013) is depicted by a dashed line. There is a single eigenvalue separated from the bulk, at the location of the vertical dashed line. (b) The largest eigenvalue of the coupling matrix for 2000 draws of the Erdős-Rényi random graph, with , , and . The dashed line represents the analytic prediction.

8 Discussion

In this work, we have presented a probability density theory of general spin-glass systems. This formulation builds on the continuous relaxation method of Zhang et al. (2012), who introduced the method as a way to apply Hamiltonian Monte Carlo to energy-based models defined over discrete variables. Our original motivation was to adopt a similar approach and use their continuous relaxation to recast spin-glasses in terms of continuous variables, so that normalizing flows could then be trained to model and better sample spin-glass physics. That program is carried out in our companion paper Hartnett and Mohseni (2019a); here we have instead devoted this work to further developing the continuous relaxation method for spin-glass physics and to characterizing the physical properties of spin-glasses in this picture.

One of the main advantages of our probability density formulation is that it furnishes a geometric encoding of complex spin-glass distributions. The critical points are of paramount importance for understanding this encoding. For example, the topology of the manifold defined by considering all points with energy equal to or below some reference value, i.e. , can be determined using knowledge of the critical points and Morse theory Milnor (2016). As the energy is varied, the topology of is unchanged unless a critical point is crossed, in which case the topology changes by the addition of a -cell, where is the index of the critical point. Unfortunately, in none of the examples considered here were we able to identify all the critical points. It may be possible to make progress on this front for mean-field models such as the SK model.

While we have not fully mapped out the geometry of the energy landscape, our results combine to form an interesting picture. For , the Hamiltonian density is convex and there is a single global minimum at the origin. As the temperature is lowered below , the energy landscape becomes non-convex. We have shown that this transition is accompanied by 1) the appearance of a pair of new critical points which split off from the origin, and 2) the transition of the origin from a minimum to an unstable critical point (i.e. the Hessian at acquires a negative eigenvalue). In all the examples we considered, exceeds the critical temperature of any phase transition which may be present. We suspect that this statement is universally true unless and the phase transition can be captured by naive mean-field theory, since in that case , and otherwise . Assuming that the model possesses a spin-glass phase transition, then as the temperature is lowered further, down to , a number of metastable states appear. For the SK model, these are given by solutions of the TAP equation, and they are clearly distinct from the critical points. However, in the limit , the equation satisfied by the metastable states simplifies to , and we showed that solutions of this equation are also critical points of the zero-temperature Hamiltonian density. Therefore, at zero temperature the energy landscape of the Hamiltonian density encodes the metastable states of the spin-glass.

Our probability density formulation is quite general and applies to any system of Ising spin variables whose Hamiltonian consists of just 1- and 2-spin interactions (i.e. the family represented by Eq. 1). It would be interesting to extend our approach to more general Hamiltonians consisting of higher spin couplings. However, it is worth noting that even the more restrictive family of Hamiltonians considered here is already universal. In particular, it includes the energy function of RBMs, which were shown to be universal approximators capable of approximating any discrete distribution to arbitrary accuracy, provided there are enough hidden units Le Roux and Bengio (2008). Therefore, our formulation can be used to convert arbitrarily complex distributions over discrete variables into continuous probability densities. Moreover, a whole suite of algorithms and numerical methods designed for systems of continuous variables may now be applied to discrete problems. It will be interesting to understand what properties of a spin-glass system determine whether a continuous or discrete representation needs to be employed. We hope to report progress on this question in the future.


We would like to thank S. Isakov, D. Ish, K. Najafi, and E. Parker for useful discussions and comments on this manuscript.

Appendix A Convexity of the Hamiltonian density

In this appendix we prove that the Hamiltonian density is convex if and only if , with . In Zhang et al. (2012) it was proven, in our conventions, that for is log-concave if and only if the eigenvalue spectrum of is sufficiently narrow, by which we mean


Note that the left-hand inequality of Eq. 56 is true by construction, since the shift was chosen so as to make positive definite. Thus, the spectrum will be narrow if we additionally have that . Here we repeat the proof of Zhang et al. (2012) for the case of Ising spin variables and our parametrization. Also, rather than considering the log-concavity of , we shall instead consider the equivalent condition of the convexity of .

In the proof we will make use of the relations and , which hold for and for , and we will also use the following eigenvalue inequalities for two matrices , :


We will also need the Hessian , which is


where is the diagonal matrix given by . We will find it useful to work with the matrix , which is equal to


Since and are congruent, they have the same numbers of positive, negative, and zero eigenvalues according to Sylvester’s Law of Inertia Sylvester (1852).
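Sylvester's Law of Inertia is easy to verify numerically for a random congruence transformation. This is a generic check of the law itself, not tied to the specific Hessian above:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
M = rng.normal(size=(n, n))
H = (M + M.T) / 2.0            # an arbitrary symmetric matrix
P = rng.normal(size=(n, n))    # invertible with probability 1

def inertia(S, tol=1e-9):
    """Numbers of positive, negative, and (numerically) zero eigenvalues."""
    w = np.linalg.eigvalsh(S)
    return (int(np.sum(w > tol)),
            int(np.sum(w < -tol)),
            int(np.sum(np.abs(w) <= tol)))

# Congruent matrices H and P^T H P share the same inertia,
# even though their eigenvalues generally differ
assert inertia(H) == inertia(P.T @ H @ P)
```

This is what licenses replacing the Hessian by a congruent matrix in the convexity argument: positivity of the spectrum is preserved under congruence even though the eigenvalues themselves are not.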

Proposition 2.

The Hamiltonian density is convex if and only if has a narrow spectrum, by which we mean . This is equivalent to .


For the forward direction, assume that . Then


where we have used in the penultimate step, and the last inequality follows by assumption. Since the smallest eigenvalue of the Hessian is everywhere positive, the Hamiltonian density is convex.

To prove the reverse direction, assume that and let . Then,


Therefore, . ∎

Appendix B High-temperature expansion of the SK model

According to Eq. 13, the discrete partition function is related to the continuous partition function via:


The denominator is just the normalization of a Gaussian distribution with zero mean and covariance matrix . Similarly, may also be written in terms of an un-normalized Gaussian integral with the same mean and covariance: