Learning graphical models from the Glauber dynamics

10/28/2014
by Guy Bresler, et al.
MIT

In this paper we consider the problem of learning undirected graphical models from data generated according to the Glauber dynamics. The Glauber dynamics is a Markov chain that sequentially updates individual nodes (variables) in a graphical model; it is frequently used to sample from the stationary distribution (to which it converges given sufficient time), and it is a natural dynamical model in a variety of settings. This work deviates from the standard formulation of graphical model learning in the literature, where one assumes access to i.i.d. samples from the distribution. Much of the research on graphical model learning has been directed towards finding algorithms with low computational cost. As the main result of this work, we establish that the problem of reconstructing binary pairwise graphical models is computationally tractable when we observe the Glauber dynamics. Specifically, we show that a binary pairwise graphical model on p nodes with maximum degree d can be learned in time f(d) p^2 log p, for a function f(d), using nearly the information-theoretic minimum number of samples.



I Introduction

Examples of data one might usefully model as being generated according to a Markov process include the dynamics of agents in a coordination game, the fluctuations of stocks or other financial data, behavior of users in a social network, and spike patterns in neural networks.

The focus of this paper is on learning the nature of Markovian dynamics from observed data governed by local interactions. Concretely, we suppose that such local interactions are represented by a graphical model. We observe a single-site dynamics, specifically the so-called Glauber dynamics, and wish to learn the graph underlying the model.

This work fits within a broader theme of learning graphical models from data, a problem traditionally posed assuming access to i.i.d. samples from the model. While the assumption of i.i.d. samples makes sense as an abstraction (as well as in some practical scenarios), observations in many settings are correlated over time and in this case it is more natural to assume that samples are generated according to a Markov process. In general the distribution of such samples can be far from i.i.d.

The problems of learning and of generating samples are known to be related. On one hand, learning graphical models from i.i.d. samples is algorithmically challenging [1, 2, 3], and on the other hand, generating samples from distributions represented by graphical models is hard in general [4]. In the literature, much work has focused on trying to find low-complexity algorithms, both for learning as well as for generating samples, under various restrictions to the graphical model. Interestingly, related conditions (such as spatial and temporal mixing) have turned out to be central to most approaches.

Learning graphical models from i.i.d. samples appears to be challenging when there are correlations between variables on a global scale, as this seems to require a global procedure. Our results show that observing a local process makes it possible to learn distributions with global correlations, by temporally isolating the local structure.

I-A Complexity of graphical model learning

A number of papers, including [5], [6], and [7], have suggested finding each node's neighborhood by exhaustively searching over candidate neighborhoods and checking conditional independence. For graphical models on p nodes of maximum degree d, such a search takes time (at least) on the order of p^d. As d grows, the computational cost becomes prohibitive, and much effort by the community has focused on trying to find algorithms with lower complexity.
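As an illustration of where this cost comes from, here is a minimal sketch of such an exhaustive search (a schematic, not the algorithm of any one of the cited papers), assuming i.i.d. samples and a hypothetical black-box conditional-independence test cond_indep:

from itertools import combinations

def exhaustive_neighborhoods(p, samples, d, cond_indep):
    # For each node i, search every candidate neighborhood S of size <= d
    # and keep one that screens i off from the remaining nodes.  The loop
    # over subsets is the source of the order-p^d running time.
    neighborhoods = {}
    for i in range(p):
        others = [j for j in range(p) if j != i]
        for size in range(d + 1):
            hit = next((set(S) for S in combinations(others, size)
                        if cond_indep(i, set(S), samples)), None)  # assumed test
            if hit is not None:
                neighborhoods[i] = hit
                break
    return neighborhoods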

Writing the algorithm runtime in the form f(d) p^{c(d)}, for high-dimensional (large p) models the exponent c(d) is of primary importance, and we will think of low-complexity algorithms as having an exponent c(d) that is bounded by a constant independent of d.

Previous works proposing low-complexity algorithms either restrict the graph structure or the nature of the interactions between variables. The seminal paper of Chow and Liu [8] makes a model restriction of the first type, assuming that the graph is a tree; generalizations include polytrees [9], hypertrees [10], tree mixtures [11], and others. Among the many possible assumptions of the second type, the correlation decay property (CDP) is distinguished: nearly all existing low-complexity algorithms require the CDP [3]. An exception is [12], which shows that a family of antiferromagnetic models can be learned with low complexity despite strongly violating the CDP.

Informally, a graphical model is said to have the correlation decay property (CDP) if any two variables σ_i and σ_j are asymptotically independent as the graph distance between i and j increases. The CDP is known to hold for a number of pairwise graphical models in the so-called high-temperature regime, including the Ising, hard-core lattice gas, Potts (multinomial), and other models (see the survey article [13] as well as, e.g., [14, 15, 16, 17, 18, 19, 20]).

It was first observed in [6] that it is possible to efficiently learn models with (exponential) decay of correlations, under the additional assumption that neighboring variables have correlation bounded away from zero. A variety of other papers, including [21, 22, 23, 24], give alternative low-complexity algorithms, but also require the CDP. A number of structure learning algorithms are based on convex optimization, such as Ravikumar et al.'s [25] approach using ℓ1-regularized node-wise logistic regression. While this algorithm is shown to work under certain incoherence conditions and does not explicitly require the CDP, Bento and Montanari [3] showed through a careful analysis that the algorithm provably fails to learn ferromagnetic Ising models on simple families of graphs without the CDP. Other convex optimization-based algorithms such as [26, 27, 28] require similar incoherence or restricted isometry-type conditions that are difficult to interpret in terms of model parameters, and likely also require the CDP.

Most computationally efficient sampling algorithms (which happen to be based on the Markov Chain Monte Carlo method) require a notion of temporal mixing and this is closely related to spatial mixing or a version of the CDP (see, e.g., [29, 30, 31]). Thus, under a class of “mixing conditions”, we can both generate (i.i.d.) samples efficiently as well as learn graphical models efficiently from such i.i.d. samples.

I-B Main results

We give an algorithm that learns the graph structure underlying an arbitrary undirected binary pairwise graphical model from the Glauber dynamics, even without any mixing or correlation decay property. Concretely, our main theorem (Section III-C) shows that the algorithm learns the graph underlying any undirected binary pairwise graphical model over p nodes with maximum vertex degree d, given on the order of f(d) log p updates of the Glauber dynamics per node, starting from any initial state, with runtime f(d) p^2 log p. The number of samples required by the algorithm is nearly information-theoretically optimal, as shown by the lower bound of Section IV.

I-C Other related work

Several works have studied the problem of learning the graph underlying a random process, for various processes. These include learning from epidemic cascades [32, 33, 34] and learning from delay measurements [35]. Another line of research seeks to identify the source of an epidemic from an observation of the current state, where the graph is known [36, 37].

More broadly, a number of papers in the learning theory community have considered learning functions (or concepts) from examples generated by Markov chains, including [38, 39, 40, 41]. The present paper is similar in spirit to that of Bshouty et al. [40] showing that it is relatively easy to learn DNF formulas from examples generated according to a random walk as compared to i.i.d. samples.

The literature on the Glauber dynamics is enormous and we do not attempt to summarize it here. We remark, however, that the Glauber dynamics is equivalent to a model of noisy coordination games and has been studied in that context by various authors: Saberi and Montanari [42] studied the impact of graph structure on the rate of adoption of innovations, and Berry and Subramanian [43] studied the problem of inferring the early adopters from an observation at a later time.

I-D Outline

The rest of the paper is organized as follows. In Section II we define the model and formulate the learning problem. In Section III we present our structure learning algorithm and its analysis. Then in Section IV we give an information-theoretic lower bound on the number of samples necessary in order to reconstruct the graph with high probability.

II Problem statement

II-A Ising model

We consider the Ising model on a graph G = (V, E) with |V| = p. The notation N(i) is used to denote the set of neighbors of node i, and the degree |N(i)| of each node i is assumed to be bounded by d. To each node i ∈ V is associated a binary random variable (spin) σ_i ∈ {−1, +1}. Each configuration of spins σ ∈ {−1, +1}^V is assigned probability according to the Gibbs distribution

P(σ) = (1/Z) exp( Σ_{(i,j)∈E} θ_ij σ_i σ_j ).   (1)

Here Z is the partition function and serves to normalize the distribution. The distribution is parameterized by the vector of edge couplings θ = (θ_ij)_{(i,j)∈E}, assumed to satisfy

α ≤ |θ_ij| ≤ β for all (i, j) ∈ E,

for some constants 0 < α ≤ β. We can alternatively think of θ as a vector indexed by all node pairs, with θ_ij = 0 if (i, j) ∉ E. For a graph G, let Ω(G) be the set of parameter vectors corresponding to G.

The model (1) does not have node-wise parameters (that is, the external field is zero); while we restrict to this case for simplicity, similar results to those presented hold with suitable minor modifications to accommodate nonzero external fields.

The distribution specified in (1) is a Markov random field, and an implication is that each node is conditionally independent of all other nodes given the values of its neighbors. This makes it possible to define a natural Markov chain known as the Glauber dynamics.
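For intuition, on very small instances the Gibbs distribution (1) can be tabulated by brute force; the following sketch (exponential in p, for illustration only) enumerates all 2^p configurations:

import itertools, math

def gibbs_distribution(p, theta):
    # theta: dict {(i, j): coupling} with i < j; pairs absent from the dict
    # are non-edges (coupling zero).  Returns P(sigma) for every configuration.
    configs = list(itertools.product([-1, 1], repeat=p))
    weights = [math.exp(sum(c * s[i] * s[j] for (i, j), c in theta.items()))
               for s in configs]
    Z = sum(weights)  # the partition function
    return dict(zip(configs, [w / Z for w in weights]))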

II-B The Glauber dynamics

The Glauber dynamics (also sometimes called the Gibbs sampler) is a natural and well-studied reversible Markov chain defined for any Markov random field. For mathematical convenience we use both the continuous-time and discrete-time versions. We describe here the continuous-time dynamics, writing X(t) = (X_i(t))_{i∈V} for the configuration at time t. The process is started at some arbitrary (possibly random) initial configuration X(0), and each node is updated at times given by an independent Poisson process of rate one. If spin i is updated at time t, it takes on the value +1 with probability

exp( Σ_{j∈N(i)} θ_ij X_j(t) ) / [ exp( Σ_{j∈N(i)} θ_ij X_j(t) ) + exp( −Σ_{j∈N(i)} θ_ij X_j(t) ) ],   (2)

and the value −1 otherwise. Notably, each spin update depends only on the neighboring spins. Equation (2) and the bounded coupling assumption imply that for any s ∈ {−1, +1},

P( X_i(t) = s | X_{N(i)}(t) ) ≥ 1/(1 + e^{2βd}) =: δ.   (3)

This is a lower bound on the randomness in each spin update and will be used later.

The Glauber dynamics can be simulated efficiently for any bounded-degree undirected graphical model, and it is a plausible generating process for observed samples in various settings. One can check that the Gibbs distribution (1) is stationary for the Glauber dynamics. If the dynamics quickly approaches stationarity (that is, the mixing time is small), then it can be used to simulate i.i.d. samples from (1). But there are families of graphs for which any local Markov chain, including the Glauber dynamics, is known to converge exponentially slowly (see, e.g., [4]); moreover, the availability of i.i.d. samples from such models would violate standard conjectures in complexity theory, since it would allow approximating the partition function [44]. While it is difficult to imagine nature producing i.i.d. samples from such models, there is no such issue with the Glauber dynamics (or any other local Markov chain).
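The dynamics is also easy to simulate; a minimal sketch reusing the coupling representation above, exploiting the fact that the superposition of p independent rate-one Poisson clocks is a rate-p Poisson process whose events pick a uniformly random node:

import math, random

def glauber_trajectory(p, neighbors, theta, T, sigma0=None):
    # Simulate the continuous-time Glauber dynamics up to time T.
    # neighbors: dict {i: iterable of j}; theta: dict {(min(i,j), max(i,j)): coupling}.
    # Returns the update log [(time, node, new value)] and the final state.
    sigma = list(sigma0) if sigma0 is not None else [random.choice([-1, 1]) for _ in range(p)]
    t, log = 0.0, []
    while True:
        t += random.expovariate(p)      # next event of the superposed clocks
        if t > T:
            break
        i = random.randrange(p)         # the clock that rang, uniform over nodes
        field = sum(theta[tuple(sorted((i, j)))] * sigma[j] for j in neighbors[i])
        prob_plus = math.exp(field) / (math.exp(field) + math.exp(-field))  # Eq. (2)
        sigma[i] = 1 if random.random() < prob_plus else -1
        log.append((t, i, sigma[i]))
    return log, sigma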

II-C Graphical model learning

Our goal is to learn the graph underlying a graphical model of the form (1), given access to observations from the Glauber dynamics. We assume that the identity of nodes being updated is known; learning without this data is potentially much more challenging, because in that case information is obtained only when a spin flips sign, which may occur only in a small fraction of the updates.

For the purposes of recording the node update sequence it is more convenient to work with a discrete-time (heat-bath) version of the chain, where each sample is taken immediately after a node is updated. In this case we denote the sequence of samples by X_0, X_1, …, X_n and the corresponding node identities at which the updates occur by I_0, I_1, …, I_n. The value of I_0 is arbitrarily set to (say) one, since the first configuration does not arise from a node update. The sequence of samples is denoted by

(X_t, I_t), 0 ≤ t ≤ n,   (4)

and is therefore an element of the product space ({−1, +1}^p × V)^{n+1}.

We suppose that the continuous-time chain is observed for T units of time, so there are in expectation n = pT spin updates. This number is tightly concentrated around its mean, and our arguments are not sensitive to a small amount of randomness in the number of samples n, so for convenience we deterministically set n = pT.

As mentioned before, the underlying graph is assumed to have maximum node degree bounded by d, and we denote the set of all such graphs on p nodes by G_{p,d}. A structure learning algorithm is a (possibly randomized) map

φ : ({−1, +1}^p × V)^{n+1} → G_{p,d}.

The performance of a structure learning algorithm is measured using the zero-one loss, and the risk under a vector of parameters θ ∈ Ω(G) corresponding to a graph G ∈ G_{p,d} is given by

P_θ( φ((X_t, I_t)_{0≤t≤n}) ≠ G ).

The minimax risk is the best algorithm's worst-case risk (probability of error) over graphs and corresponding parameter vectors, namely

R = min_φ max_{G ∈ G_{p,d}} max_{θ ∈ Ω(G)} P_θ( φ ≠ G ).

The basic questions we seek to address are which triples (p, d, T) result in the minimax risk tending to zero as these parameters tend to infinity, and whether an efficient algorithm can achieve this.

III Structure learning algorithm

III-A Idealized test

We determine the presence of edges in a decoupled manner, focusing on a single pair of nodes i and j. Our test is based on the identity (derived via Eq. (2))

P(σ_i = +1 | σ_j = +1, σ_{N(i)∖{j}}) − P(σ_i = +1 | σ_j = −1, σ_{N(i)∖{j}}) = ½ [ tanh(θ_ij + m) − tanh(−θ_ij + m) ],   (5)

where for an arbitrary assignment σ_{N(i)∖{j}} we define

m = Σ_{k∈N(i)∖{j}} θ_ik σ_k.

(We will often leave implicit the dependence of m on σ_{N(i)∖{j}}.) The identity (5) holds whether or not the edge (i, j) is present, since (i, j) ∉ E implies θ_ij = 0, and this agrees with σ_i and σ_j being conditionally independent given σ_{N(i)∖{j}} (in which case both sides of (5) are zero).
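The identity is easy to sanity-check numerically; a small sketch with made-up couplings, where θ_ij = 0 plays the role of a non-edge:

import math, random

def p_plus(theta_ij, sigma_j, m):
    # P(sigma_i = +1 | neighborhood), with m the field from N(i) \ {j}, via Eq. (2)
    x = theta_ij * sigma_j + m
    return math.exp(x) / (math.exp(x) + math.exp(-x))

for theta_ij in (0.0, 0.7, -1.2):
    m = random.uniform(-2, 2)            # arbitrary field from the other neighbors
    lhs = p_plus(theta_ij, +1, m) - p_plus(theta_ij, -1, m)
    rhs = 0.5 * (math.tanh(theta_ij + m) - math.tanh(-theta_ij + m))
    assert abs(lhs - rhs) < 1e-12        # identity (5); both sides vanish at theta_ij = 0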

Instead of attempting to estimate the right-hand side of (5) from samples, we claim that if our goal is merely to decide between θ_ij = 0 and |θ_ij| ≥ α, it suffices to estimate a much simpler quantity. This will be justified using the following bound.

Lemma. Let x and y be real numbers with |x| ≤ c and |y| ≤ c. Then

|tanh(x) − tanh(y)| ≥ e^{−2c} |x − y|.

Proof: Let g(u) = tanh(u). Then g′(u) = 1 − tanh^2(u) = 4/(e^u + e^{−u})^2. It follows that for |u| ≤ c we have g′(u) ≥ 4/(e^c + e^{−c})^2 ≥ e^{−2c}, and from the mean value theorem we get tanh(x) − tanh(y) = g′(ξ)(x − y) for some ξ between x and y. Combining these ingredients gives the claim.

Let us momentarily assume that θ_ij ≥ α and m ≥ 0. The conditional probability lower bound (3) implies that both conditional probabilities appearing in (5) lie in [δ, 1 − δ]. The lemma above (applied with c = βd, since |±θ_ij + m| ≤ βd) together with (5) gives

tanh(θ_ij + m) − tanh(−θ_ij + m) ≥ 2 θ_ij e^{−2βd} ≥ 2α e^{−2βd}.

The assumption m ≥ 0 is without loss of generality, by replacing σ and m by −σ and −m, respectively. If θ_ij ≤ −α, which happens if and only if (i, j) ∈ E and sign(θ_ij) = −1, we get a similar sequence of inequalities with the sign reversed. These can be combined to give

sign(θ_ij) [ tanh(θ_ij + m) − tanh(−θ_ij + m) ] ≥ 2α e^{−2βd} 1{(i, j) ∈ E}.   (6)

We emphasize that this inequality holds for any assignment σ_{N(i)∖{j}}.

It turns out to be possible to crudely estimate the quantity in (6) in order to determine whether it is equal to zero. It is important that the sign of tanh(θ_ij + m) − tanh(−θ_ij + m) does not depend on the configuration σ_{N(i)∖{j}}, as this makes it possible to accumulate contributions from many samples. The following scenario gives intuition for why sequential updates allow this. Suppose that X_i is updated, followed by X_j flipping sign, followed by yet another update of X_i, with no other spins updated. Since only X_j has changed in between the updates to X_i, we can hope to get an estimate of the effect of X_j on X_i. Producing exactly the sequence of events just described, however, requires observing the process for an amount of time that grows polynomially in p, since all other spins must remain untouched; we next show how to achieve a similar outcome sufficient for learning the structure, in time only logarithmic in p.

III-B Estimating edges

We define a few events to be used towards estimating the effect of an edge, as captured by (6). To this end, consider the restriction of the process to an interval, written as

X^{[t0, t0+3L]} = { X(t) : t0 ≤ t ≤ t0 + 3L }.

For a positive number L, let E1 = E1(i, j; t0) be the event that node i is selected at least once in the first L time-units but not node j, node j is selected at least once in the second L time-units but not node i, and node i is selected at least once in the final L time-units but not node j. It is immediate from the Poisson update times that

P(E1) = (1 − e^{−L})^3 e^{−3L} =: q.   (7)

(We denote this quantity by q since it will be used often.) Next, define the event that X_j is opposite at time t0 + 2L versus t0,

E2 = { X_j(t0 + 2L) = −X_j(t0) },

and take the intersection of the two events, E = E1 ∩ E2.

Whenever X_j is updated, by Equation (3) both the probability of flipping and the probability of keeping the same value are at least δ, regardless of the states of its neighbors. It follows that the last update of X_j in the second L time-units has probability at least δ of producing a value opposite to X_j(t0), so

P(E) ≥ δ q.   (8)

Note that determining the occurrence of E does not require knowing anything about the graph.
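For concreteness, (7) and the bound (8) are one-liners; a sketch, with L the window parameter and δ from Eq. (3):

import math

def q(L):
    # P(E1) = (1 - e^{-L})^3 e^{-3L}: each third must contain an update of the
    # required node (prob 1 - e^{-L}) and no update of the other (prob e^{-L})
    return (1 - math.exp(-L)) ** 3 * math.exp(-3 * L)

def p_E_lower_bound(L, beta, d):
    delta = 1 / (1 + math.exp(2 * beta * d))  # Eq. (3)
    return delta * q(L)                       # Eq. (8)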

We now define the statistic that will be used to estimate the presence of a given edge. Divide [0, T] into K = T/(3L) intervals of length 3L. For each pair i, j and each 1 ≤ k ≤ K, let Z_k = Z_k^{ij} be zero if the event E fails on the k-th interval, and otherwise let Z_k be half the change in X_i between its last update in the first sub-interval and its last update in the final sub-interval, signed by the new value taken by X_j; thus Z_k ∈ {−1, 0, +1}. The value of each Z_k can be computed by an algorithm with access to the process X^{[0,T]}. The idea is that the sum of the Z_k gives a rough estimate of the effect of spin j on spin i, by counting the number of times X_i has differing updates when X_j has changed. It is necessary that few or no neighbors of i are updated during the time-interval, as these changes could overwhelm the effect due to j. We will see later that choosing L sufficiently small ensures that this is usually the case.
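A hedged code sketch of such an interval statistic, reusing the update log from glauber_trajectory above (a simplified stand-in that follows the verbal definition; the paper's exact estimator may differ in details):

def edge_statistic(log, i, j, T, L, x0):
    # Sum of Z_k over windows [3Lk, 3L(k+1)): on the event E, the change in
    # X_i across the window, signed by the direction in which X_j flipped.
    K = int(T // (3 * L))
    xj = x0[j]                         # current value of X_j, replayed from the log
    idx, Z = 0, 0.0
    for k in range(K):
        t0 = 3 * L * k
        thirds = [[], [], []]          # updates of i or j falling in each third
        xj_start = xj
        while idx < len(log) and log[idx][0] < t0 + 3 * L:
            t, node, val = log[idx]
            idx += 1
            if node == j:
                xj = val
            if node in (i, j):
                thirds[int((t - t0) // L)].append((node, val))
        # event E1: only i updated in the first and last thirds, only j in the middle
        if not (thirds[0] and thirds[1] and thirds[2] and
                all(n == i for n, _ in thirds[0] + thirds[2]) and
                all(n == j for n, _ in thirds[1])):
            continue
        xi_before = thirds[0][-1][1]   # X_i after its update(s) in the first third
        xj_new = thirds[1][-1][1]      # X_j after its update(s) in the middle third
        xi_after = thirds[2][-1][1]    # X_i after its update(s) in the last third
        if xj_new != xj_start:         # event E2: X_j flipped within the window
            Z += xj_new * (xi_after - xi_before) / 2.0
    return Z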

III-C Structure learning algorithm

We now present the structure learning algorithm. In order to determine the presence of edge (i, j), the algorithm computes the statistics Z_k^{ij} from the K = T/(3L) intervals of length 3L and compares their sum to a threshold τK.

1: Let Ê = ∅ and K = T/(3L).
2: For each pair of nodes i, j ∈ V:
3:  If |Σ_{k=1}^{K} Z_k^{ij}| ≥ τK,
4:  Then add edge (i, j) to Ê.
5: Output Ê.

Algorithm 1 GlauberLearn
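Putting the pieces together, a sketch of the algorithm (τ stands in for the threshold specified by the theorem below; edge_statistic is the helper sketched in Section III-B):

def glauber_learn(p, log, x0, T, L, tau):
    # Sketch of Algorithm 1: declare edge (i, j) when the per-interval average
    # of the statistic clears the threshold tau.
    K = int(T // (3 * L))
    E_hat = set()
    for i in range(p):
        for j in range(i + 1, p):
            if abs(edge_statistic(log, i, j, T, L, x0)) >= tau * K:
                E_hat.add((i, j))
    return E_hat

As written, each pair replays the full update log; storing per-node update lists, as in the runtime accounting below, brings the per-pair cost down and yields the stated f(d) p^2 log p total.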

Theorem. Consider the Ising model (1) on a graph G with p nodes, maximum degree d, and couplings bounded as α ≤ |θ_ij| ≤ β. Let X^{[0,T]} denote the continuous-time Glauber dynamics started from any configuration X(0). If

T ≥ f(α, β, d) log p,

where f(α, β, d) is an explicit function that is exponential in βd and polynomial in 1/α (and L and τ are chosen as in the analysis below), then GlauberLearn outputs the correct edge set with probability at least 1 − 2/p, with runtime O(f(α, β, d) p^2 log p).

In the remainder of this section we work towards proving the theorem. We first bound the runtime of the algorithm. Suppose that when the samples are collected, they are stored as a list for each node giving the times at which the node is updated and the new values. Each computation in Line 3 then takes time O(T/L), and this is done for O(p^2) pairs i, j, which gives the stated runtime (the dependence on L is absorbed into f(α, β, d)).

Since the Glauber dynamics is time-homogeneous (and Markov), the conditional distribution of Z_k given the configuration at the start of the k-th interval does not depend on the index k. Hence, we often drop the index and write Z for Z_k, and similarly E for the event E1 ∩ E2 of a generic interval.

Let E3 be the event that none of the neighbors of i, aside from possibly j, are selected in the time-interval under consideration. Since E3 depends on Poisson clocks disjoint from those determining E1, the two events are independent (however, E3 is not necessarily independent of E2). It is immediate, again from the Poisson times of the updates, that

P(E3) ≥ e^{−3Ld}.   (9)

At this point it is possible to make a connection to the idealized edge test (6). Conditioning on E ∩ E3, the neighbors of i other than j are frozen throughout the interval, and our edge statistic has conditional expectation

E[Z | E ∩ E3] = E[ ½ ( tanh(θ_ij + m) − tanh(−θ_ij + m) ) | E ∩ E3 ],   (10)

with m the field at i induced by its neighbors other than j, as in Section III-A. Of course, without knowing the neighbors of i it is not clear whether or not the event E3 has occurred, but as shown next in the following lemma, if L is small enough, then E3 occurs frequently and Z gives a good estimate.

Lemma. Let F_k denote the σ-algebra generated by the process up to time 3Lk. We have the following estimates, uniformly over the configuration at the start of the k-th interval:

  1. If (i, j) ∈ E, then for any k, sign(θ_ij) E[Z_k | F_{k−1}] ≥ α e^{−2βd} δ q e^{−3Ld} − 3Ldq.

  2. If (i, j) ∉ E, then for any k, |E[Z_k | F_{k−1}]| ≤ 3Ldq.

To begin, conditioning on E ∩ E3 gives (here and below, all expectations and probabilities are taken conditionally on F_{k−1}; the bounds are uniform in the conditioning)

E[Z] = E[Z | E ∩ E3] P(E ∩ E3) + E[Z ; E ∩ E3ᶜ].   (11)

In both cases (i) and (ii) we have

|E[Z ; E ∩ E3ᶜ]| ≤(a) P(E ∩ E3ᶜ) ≤(b) P(E1 ∩ E3ᶜ) =(c) P(E1) P(E3ᶜ) ≤(d) q (1 − e^{−3Ld}) ≤(e) 3Ldq.   (12)

Inequality (a) is by the crude estimate |Z| ≤ 1, (b) follows from the containment E ⊆ E1, (c) is by independence of E1 and E3, (d) is obtained by plugging in (7) and (9), and (e) follows from the inequality 1 − e^{−x} ≤ x.

We first prove case (ii). If the edge (i, j) is not in the graph, then flipping only spin j does not change the conditional distribution of spin i, assuming the neighbors of i remain unchanged, and it follows from (10) that

E[Z | E ∩ E3] = 0.

Plugging (12) into (11) proves case (ii).

We now turn to case (i). Suppose sign(θ_ij) = +1; the opposite sign is handled symmetrically. Eq. (11) implies

E[Z] ≥ E[Z | E ∩ E3] P(E ∩ E3) − |E[Z ; E ∩ E3ᶜ]|.   (14)

The second term has already been bounded in (12). We estimate the first term on the right-hand side of (14):

E[Z | E ∩ E3] P(E ∩ E3) ≥(a) α e^{−2βd} P(E ∩ E3) ≥(b) α e^{−2βd} δ P(E1 ∩ E3) ≥(c) α e^{−2βd} δ q e^{−3Ld}.

Here (a) uses (10) and (6), (b) holds because the reasoning from (8) applies also conditioned on E3 (using the fact that E1 and E3 are independent), and (c) follows from that same independence, the definition of q in (7), and (9). This proves part (i).

We will use the following Bernstein-type submartingale concentration inequality, which can be found, for example, as an implication of Theorem 27 in [45]. Let M_0, M_1, …, M_n be a submartingale adapted to the filtration F_0, F_1, …, F_n, with M_0 = 0 almost surely, |M_t − M_{t−1}| ≤ b, and Σ_{t=1}^{n} Var(M_t − M_{t−1} | F_{t−1}) ≤ V. Then for all n and real λ > 0,

P( M_n ≤ −λ ) ≤ exp( −λ^2 / ( 2(V + bλ/3) ) ).

We now prove the main theorem.

Recall that q = (1 − e^{−L})^3 e^{−3L}, and that the algorithm uses the interval length parameter L = αδe^{−2βd}/(48d) and the threshold τ = γ/2, where γ = α e^{−2βd} δ q e^{−3Ld} − 3Ldq is the lower bound quantity in case (i) of the lemma. Suppose first that (i, j) ∈ E. The choice of L makes the subtracted term in γ at most one eighth of the main term, which implies that

γ ≥ ¼ α δ e^{−2βd} q.

Here we used the bound 3Ld ≤ ln 2, so that e^{−3Ld} ≥ ½.

The sequence M_t = Σ_{k=1}^{t} ( sign(θ_ij) Z_k − γ ), t = 0, 1, …, K, is a submartingale adapted to the filtration (F_t), since by the lemma,

E[ sign(θ_ij) Z_k − γ | F_{k−1} ] ≥ 0.

Recalling the choice τ = γ/2, note that M_0 = 0 and that if the algorithm fails to detect the edge (i, j), then Σ_{k=1}^{K} sign(θ_ij) Z_k ≤ τK, that is, M_K ≤ −Kγ/2. By the concentration inequality above, this event has probability at most exp( −(Kγ/2)^2 / (2(V + bKγ/6)) ).

It remains to bound V and b. For this, we observe that Var(Z_k | F_{k−1}) ≤ E[Z_k^2 | F_{k−1}] ≤ P(E1) = q (since |Z_k| ≤ 1 and Z_k vanishes off E1), so we may take V = Kq. Now |M_k − M_{k−1}| = |sign(θ_ij) Z_k − γ| ≤ 1 + γ ≤ 2, so b = 2. We therefore obtain

P( M_K ≤ −Kγ/2 ) ≤ exp( − (Kγ/2)^2 / (2(Kq + Kγ/3)) ) ≤ exp( − cKγ^2/q )

for a universal constant c > 0, where we used the crude bound γ ≤ q.

If K ≥ C q γ^{−2} log p for a suitable universal constant C, the right-hand side is at most p^{−3}, and we can take a union bound over the at most pd/2 edges to see that with probability at least 1 − 1/p we have Σ_{k=1}^{K} sign(θ_ij) Z_k > τK simultaneously for all edges. We can translate this value of K to the observation time stated in the theorem by taking T = 3LK larger than

C′ L q^{−1} (αδ)^{−2} e^{4βd} log p,

which is of the form f(α, β, d) log p once L is fixed as above. The final simplification used the estimate 1 − e^{−x} ≥ x/2, which holds for 0 ≤ x ≤ 1:

q = (1 − e^{−L})^3 e^{−3L} ≥ (L/2)^3 e^{−3L} ≥ L^3/16,

where the last step again uses 3L ≤ 3Ld ≤ ln 2.

Next, suppose that (i, j) ∉ E. The lemma states that |E[Z_k | F_{k−1}]| ≤ 3Ldq, and the choice of L ensures that 3Ldq ≤ τ/2. As before, this implies that Σ_{k≤t} (Z_k − 3Ldq), t = 0, …, K, is a supermartingale and Σ_{k≤t} (Z_k + 3Ldq), t = 0, …, K, is a submartingale, both adapted to (F_t). Applying the concentration inequality to each gives

P( |Σ_{k=1}^{K} Z_k| ≥ τK ) ≤ 2 exp( −cKτ^2/q ).

The same bounds on the increments and conditional variances apply as before, and a union bound over the at most p^2/2 non-edges shows that the same K (and hence the same T) specified earlier suffices in order that |Σ_{k=1}^{K} Z_k| < τK for all non-edges, with probability at least 1 − 1/p.

IV A lower bound on the observation time

Our lower bound derivation is a modification of the proof of Santhanam and Wainwright [46] for the i.i.d. setting. Their construction was based on cliques of size d + 1, with a single edge removed. When the interaction is ferromagnetic (i.e., the couplings are positive), at low temperatures (couplings large enough) the removal of a single edge is difficult to detect, and this leads to a lower bound.

We use a similar (but not identical) family of models as in [46] to lower bound the observation time required. Start with a graph G_0 consisting of p/(d + 1) cliques of size d + 1. Suppose that d is odd, and fix a perfect matching on each of the cliques (each matching has cardinality (d + 1)/2). The vector of parameters θ̄ corresponding to G_0 is obtained by setting θ_ij = α for edges in the matchings, and θ_ij = β for edges not in the matchings.

Now for each edge e = {i, j} in a matching (where i and j necessarily belong to the same clique) we form the graph G_e by removing the edge e from G_0. There are

(p/(d + 1)) · (d + 1)/2 = p/2

graphs with one edge removed.
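The ensemble is straightforward to build programmatically; a sketch under the stated assumptions (d odd, d + 1 dividing p):

def lower_bound_ensemble(p, d, alpha, beta):
    # G_0: disjoint (d+1)-cliques; matching edges get coupling alpha, all other
    # clique edges get beta.  Returns theta for G_0 and the p/2 graphs G_e,
    # each obtained by deleting one matching edge e.
    assert d % 2 == 1 and p % (d + 1) == 0
    theta0, matching = {}, []
    for c in range(p // (d + 1)):
        block = list(range(c * (d + 1), (c + 1) * (d + 1)))
        pairs = [(block[2 * k], block[2 * k + 1]) for k in range((d + 1) // 2)]
        matching += pairs              # a perfect matching of this clique
        for a in range(len(block)):
            for b in range(a + 1, len(block)):
                e = (block[a], block[b])
                theta0[e] = alpha if e in pairs else beta
    family = [{f: v for f, v in theta0.items() if f != e} for e in matching]
    assert len(family) == p // 2
    return theta0, family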

This construction is a refinement of the one in [46]: their construction had all edge parameters equal to a single value, and therefore did not fully capture the effect of some edges being dramatically weaker than others.

[Sample complexity lower bound] Suppose the minimax risk satisfies R ≤ 1/2. Then the observation time T must satisfy a lower bound, derived below via Fano's inequality, that grows logarithmically in p and exponentially in the interaction strength.

In the remainder of this section we prove this lower bound. We use the following version of Fano's inequality, which can be found, for example, as Corollary 2.6 in [47]. It gives a lower bound on the error probability (the minimax risk in our case) in terms of the KL-divergence between pairs of points in the parameter space, where the KL-divergence between two distributions P and Q on a space X is defined as

D(P ‖ Q) = Σ_{x∈X} P(x) log( P(x)/Q(x) ).

[Fano’s inequality] Assume that and that contains elements . Let denote the probability law of the observation under model . If

(15)

for , then the minimax risk for the zero-one loss is bounded as

IV-A Bound on KL divergence

In this section we upper bound the KL divergence between the model parameterized by θ̄ (corresponding to G_0) and any model parameterized by the parameters corresponding to G_e (by symmetry of the construction this is the same for every e). It suffices to consider the projection (i.e., marginal) onto the size-(d + 1) clique containing the endpoints of e, since the KL divergence between these projections is equal to the entire KL divergence. We therefore abuse notation slightly and write P_0 and P_e for the Gibbs distributions after projecting onto the relevant clique. Similarly, using u as a placeholder for either 0 or e, we let Q_u represent the distribution of the observation (X_t, I_t)_{0 ≤ t ≤ n}, which now consists of samples as well as node update indices. (We only project the Gibbs measure to the clique, keeping node update indices over the entire original graph.) The initial configuration is drawn according to the stationary measure P_u for each model u. Concretely, with u representing either 0 or e, the law Q_u factorizes into the initial distribution P_u and the successive conditional update probabilities.
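For a small clique the KL divergence between the two projected Gibbs measures can be evaluated exactly, e.g. with the brute-force gibbs_distribution sketch from Section II-A (illustrative parameter values, not those of any stated theorem):

import math

def kl(P, Q):
    # KL divergence D(P || Q) for two distributions on the same finite space
    return sum(px * math.log(px / Q[x]) for x, px in P.items() if px > 0)

# A (d+1)-clique with matching edges at coupling alpha (model 0) versus the
# same clique with the matching edge (0, 1) removed (model e); here d = 3.
d, alpha, beta = 3, 0.2, 1.0
theta0 = {(0, 1): alpha, (2, 3): alpha,
          (0, 2): beta, (0, 3): beta, (1, 2): beta, (1, 3): beta}
theta_e = {f: v for f, v in theta0.items() if f != (0, 1)}
P0 = gibbs_distribution(d + 1, theta0)
Pe = gibbs_distribution(d + 1, theta_e)
print(kl(P0, Pe))   # shrinks as beta grows: the weak edge is hidden by strong ones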