# MCMC Learning

The theory of learning under the uniform distribution is rich and deep, with connections to cryptography, computational complexity, and the analysis of boolean functions, to name a few areas. However, this theory is limited by the fact that the uniform distribution and the corresponding Fourier basis are rarely encountered as a statistical model. A family of distributions that vastly generalizes the uniform distribution on the Boolean cube is that of distributions represented by Markov Random Fields (MRFs). Markov Random Fields are one of the main tools for modeling high dimensional data in many areas of statistics and machine learning. In this paper we initiate the investigation of extending central ideas, methods and algorithms from the theory of learning under the uniform distribution to the setting of learning concepts given examples from MRF distributions. In particular, our results establish a novel connection between properties of MCMC sampling of MRFs and learning under the MRF distribution.


## 1 Introduction

The theory of learning under the uniform distribution is well developed and has rich and beautiful connections to discrete Fourier analysis, computational complexity, cryptography and combinatorics, to name a few areas. However, these methods are quite limited in scope, since they rely on the assumption that examples are drawn from the uniform distribution over the Boolean cube or from other product distributions. In this paper we take a first step toward extending ideas, techniques and algorithms from this theory to a much broader family of distributions, namely distributions represented by Markov Random Fields.

### 1.1 Learning Under the Uniform Distribution

Since the seminal work of Linial et al. (1993), the study of learning under the uniform distribution has developed into a major area of research; the principal tool is the simple and explicit Fourier expansion of functions defined on the boolean cube {−1, 1}ⁿ:

 f(x) = ∑_{S⊆[n]} f̂(S) χ_S(x),  where  χ_S(x) = ∏_{i∈S} x_i.

This expansion supports a rich class of algorithms based on learning the Fourier coefficients of functions in several classes. Moreover, it allows the application of sophisticated results from the theory of Boolean functions, including hypercontractivity, number theoretic properties and invariance, e.g. (O'Donnell and Servedio, 2007, Shpilka and Tal, 2011, Klivans et al., 2002). On the other hand, the central role of the uniform distribution in computational complexity and cryptography relates learning under the uniform distribution to key themes in theoretical computer science, including de-randomization, hardness and cryptography, e.g. (Kharitonov, 1993, Naor and Reingold, 2004, Dachman-Soled et al., 2008).
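As a concrete illustration of this primitive (a minimal sketch of ours, not code from the paper; the function names are hypothetical), the Fourier coefficient f̂(S) = E_x[f(x)χ_S(x)] can be estimated by a Monte Carlo average over uniform samples alone:

```python
import random

def chi(S, x):
    """Parity character chi_S(x) = prod_{i in S} x_i for x in {-1, 1}^n."""
    p = 1
    for i in S:
        p *= x[i]
    return p

def estimate_fourier_coefficient(f, S, n, samples=20000, seed=0):
    """Estimate f_hat(S) = E_x[f(x) * chi_S(x)] under the uniform
    distribution on {-1, 1}^n by a simple Monte Carlo average."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        total += f(x) * chi(S, x)
    return total / samples

# Example: f(x) = x_0 * x_1 is itself a parity, so f_hat({0, 1}) = 1
# and every other coefficient is 0.
f = lambda x: x[0] * x[1]
c_on = estimate_fourier_coefficient(f, (0, 1), n=4)
c_off = estimate_fourier_coefficient(f, (0,), n=4)
```

The estimate concentrates at rate 1/√samples, which is the basic reason low-degree Fourier learning only needs polynomially many i.i.d. examples.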

Given the elegant theoretical work in this area, it is a little disappointing that these results and techniques impose such stringent assumptions on the underlying distribution. The assumption of independent examples sampled from the uniform distribution is an idealization that would rarely, if ever, hold in practice. In real distributions features are correlated, and such correlations can render analyses that assume independence invalid. Thus, it is worthwhile to ask the following question:
Question 1: Can Fourier-based learning theory be extended to correlated features?

### 1.2 Markov Random Fields

Markov random fields are a standard way of representing high dimensional distributions (see e.g. (Kinderman and Snell, 1980)). Recall that a Markov random field on a finite graph G = (V, E), taking values in a discrete set A, is a probability distribution on A^V of the form P(σ) = (1/Z) ∏_C ψ_C(σ_C), where the product is over all cliques C in the graph, the ψ_C are non-negative valued potential functions and Z is the normalization constant. Here σ is an assignment of values from A to the vertices of G.

Markov Random Fields are widely used in vision, computational biology, biostatistics, spatial statistics and several other areas. The popularity of Markov Random Fields as modeling tools is coupled with extensive algorithmic theory studying sampling from these models, estimating their parameters and recovering them. However, to the best of our knowledge the following question has not been studied.

Question 2: Given an unknown function from a class of functions and labeled examples drawn from a Markov Random Field, can we learn the function?
Of course, the problem stated above is a special case of learning a function class under a general distribution (Valiant, 1984, Kearns and Vazirani, 1994). Therefore, a learning algorithm that works for a general distribution can also be applied to MRF distributions. However, the real question we seek to ask is the following: can we utilize the structure of the MRF to obtain better learning algorithms?

### 1.3 Our Contributions

In this paper we begin to answer the questions posed above. We show how methods that have been used in the theory of learning under the uniform distribution can also be applied to learning from certain MRF distributions.

This may sound surprising, as the theory of learning under the uniform distribution strongly relies on the explicit Fourier representation of functions. Given an MRF distribution, one can also imagine expanding a function in terms of a Fourier basis for the MRF: the eigenvectors of the transition matrix of the Gibbs Markov Chain associated with the MRF, which are orthogonal with respect to the MRF distribution. This approach seems naïve, however, since:

1. Each eigenvector has one coordinate per state of the chain, i.e. its size is exponential in n; how does one store them?

2. How does one find these eigenvectors?

3. How does one find the expansion of a function in terms of these eigenvectors?

MCMC Learning: The main effort in this paper is to provide an answer to the questions above. For this we use Gibbs sampling, which is a Markov chain Monte Carlo (MCMC) algorithm that is used to sample from an MRF. We will use this MCMC method as the main engine in our learning algorithms. The Gibbs MC is reversible and therefore its eigenvectors are orthogonal with respect to the MRF distribution. Also, the sampling algorithm is straightforward to implement given access to the underlying graph and potential functions. There is a vast literature studying the convergence rates of this sampling algorithm; our results require that the Gibbs samplers are rapidly mixing.

In Section 4, we show how the eigenvectors of the transition matrix of the Gibbs MC can be computed implicitly. We focus on the eigenvectors corresponding to the highest eigenvalues. These eigenvectors correspond to the stable part of the spectrum, i.e. the part that is not very sensitive to small perturbations. Perhaps surprisingly, despite the exponential size of the matrix, we show that it is possible to adapt the power iteration method to this setting.

A function from X to ℝ can be viewed as an |X|-dimensional vector, and thus applying powers of the transition matrix to it results in another function from X to ℝ. Observe that the powers of the transition matrix define the distributions of the Gibbs MC over its state space at successive times. Thus, the value of the function obtained by applying powers of the transition matrix can be approximated by sampling using the Gibbs Markov chain. Our main technical result (see Theorem 1) shows that any function approximated by the "top" eigenvectors of the transition matrix of the Gibbs MC can be expressed as a linear combination of powers of the transition matrix applied to a suitable collection of "basis" functions, whenever certain technical conditions hold.

The reason for focusing on the part of the spectrum corresponding to stable eigenvectors is twofold. First, it is technically easier to access this part of the spectrum. Furthermore, we think of eigenvectors corresponding to small eigenvalues as unstable. Consider Gibbs sampling as the true temporal evolution of the system and let ν be an eigenvector corresponding to a small eigenvalue. Then calculating ν(x) provides very little information about ν(y), where y is obtained from x after a short evolution of the Gibbs sampler. This reasoning is a generalization of the classical argument for concentrating on the low frequency part of the Fourier expansion in traditional signal processing.

Noise Sensitivity and Learning: In the case of the uniform distribution, the noise sensitivity (with parameter ε) of a boolean function f is defined as the probability that f(x) ≠ f(y), where x is chosen uniformly at random and y is obtained from x by flipping each bit independently with probability ε. Klivans et al. (2002) gave an elegant characterization of learning in terms of noise sensitivity. Using this characterization, they showed that intersections and thresholds of halfspaces can be learned efficiently with respect to the uniform distribution. In Section 4.3, we show that the notion of noise sensitivity and the results regarding functions with low noise sensitivity can be generalized to MRF distributions.

Learning Juntas: We also consider the so-called junta learning problem. A junta is a function that depends only on a small subset of the variables. Learning juntas from i.i.d. examples is a notoriously difficult problem, see (Blum, 1992, Mossel et al., 2004). However, if the learning algorithm has access to labeled examples that are received from a Gibbs sampler, these correlated examples can be useful for learning juntas. We show that under standard technical conditions on the Gibbs MC, juntas can be learned in polynomial time by a very simple algorithm. These results are presented in Section 5.
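The intuition behind exploiting correlated examples can be sketched as follows (a simplified illustration of ours, not the paper's algorithm as stated; it assumes single-site transitions and uses the uniform single-site chain as a stand-in for a general Gibbs MC): whenever one step of the walk resamples a single coordinate and the label flips, that coordinate must be relevant.

```python
import random

def labeled_gibbs_walk(f, n, steps, seed=0):
    """Simulate a labeled single-site walk: at each step resample one
    uniformly random coordinate of x in {-1, 1}^n, recording (x, f(x))."""
    rng = random.Random(seed)
    x = [rng.choice((-1, 1)) for _ in range(n)]
    walk = [(tuple(x), f(x))]
    for _ in range(steps):
        i = rng.randrange(n)
        x[i] = rng.choice((-1, 1))
        walk.append((tuple(x), f(x)))
    return walk

def find_relevant_variables(walk):
    """Collect every coordinate whose single-site update coincided with a
    label flip; for a junta, such a coordinate must be one it depends on."""
    relevant = set()
    for (x, y), (x2, y2) in zip(walk, walk[1:]):
        if y != y2:
            diff = [i for i in range(len(x)) if x[i] != x2[i]]
            if len(diff) == 1:
                relevant.add(diff[0])
    return relevant

# f is a 2-junta depending on coordinates 1 and 3 only.
f = lambda x: x[1] * x[3]
walk = labeled_gibbs_walk(f, n=8, steps=5000)
found = find_relevant_variables(walk)
```

With a walk of a few thousand steps, both relevant coordinates of the 2-junta are recovered with overwhelming probability, whereas the same detection trick is unavailable with i.i.d. examples.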

Relation to Structure Learning: In this paper, we assume that learning algorithms have the ability to sample from the Gibbs Markov Chain corresponding to the MRF. While such data would be hard to come by in practice, we remark that there is a vast literature on learning the structure and parameters of MRFs from unlabeled data, and it has recently been established that this can be done efficiently under very general conditions (Bresler, 2014). Once the structure of the underlying MRF is known, Gibbs sampling is an extremely efficient procedure. Thus, the methods proposed in this work could be used in conjunction with techniques for MRF structure learning. The eigenvectors of the transition matrix can be viewed as features, so the methods proposed in this paper can also be viewed as performing feature learning.

### 1.4 Related Work

The idea of considering Markov chains or random walks in the context of learning is not new. However, none of the models considered before gives non-trivial improvements or algorithms in the context of MRFs. The work of Aldous and Vazirani (1995) studies a Markov chain based model where the main interest was in characterizing the number of new nodes visited. Gamarnik (1999) observed that after the mixing time a chain can simulate i.i.d. samples from the stationary distribution, and thus obtained learning results for general Markov chains. Bartlett et al. (1994) and Bshouty et al. (2005) considered random walks on the discrete cube and showed how to utilize the random walk model to learn functions that cannot easily be learned from i.i.d. examples from the uniform distribution on the discrete cube. In this same model, Jackson and Wimmer (2014) showed that agnostically learning parities and PAC-learning thresholds of parities (TOPs) can be performed in quasi-polynomial time.

## 2 Preliminaries

Let X be an instance space. In this paper, we assume that X is finite; in particular, we are mostly interested in the case X = Aⁿ, where A is some finite set. For x, y ∈ X, let d(x, y) denote the Hamming distance between x and y, i.e. the number of coordinates in which x and y differ.

Let M denote a time-reversible discrete time ergodic Markov chain with transition matrix P. When X = Aⁿ, we say that M has single-site transitions if for any legal transition x → y it is the case that d(x, y) ≤ 1, i.e. P(x, y) = 0 when d(x, y) > 1. Let x₀ denote the starting state of the Markov chain. Let Pᵗ(x₀, ·) denote the distribution over states at time t when starting from x₀. Let π denote the stationary distribution of M. Denote by τ_M(x₀) the quantity:

 τ_M(x₀) = min{ t : ‖Pᵗ(x₀, ·) − π‖_TV ≤ 1/4 }

Then, define the mixing time of M as τ_M = max_{x₀} τ_M(x₀). We say that a Markov chain with state space X = Aⁿ is rapidly mixing if τ_M is polynomial in n.

While all the results in this paper are general, we describe two basic graphical models that will aid the discussion.

### 2.1 Ising Model

Consider a collection of n nodes, [n] = {1, …, n}, and for each pair i, j an associated interaction energy β_ij. Let G = ([n], E) denote the graph with (i, j) ∈ E whenever β_ij ≠ 0. A state of the system consists of an assignment of spins σ ∈ {−1, 1}ⁿ to the nodes. The Hamiltonian of a configuration σ is defined as

 H(σ) = −∑_{(i,j)∈E} β_ij σ_i σ_j − B ∑_{i∈[n]} σ_i,

where B is the external field. The energy of a configuration σ is H(σ).

The Glauber dynamics on the Ising model defines the Gibbs Markov Chain, where the transitions are as follows:
(i) In state σ, pick a node i uniformly at random. With probability 1/2 do nothing, otherwise
(ii) Let σ′ be obtained by flipping the spin at node i. Then, with probability min{1, exp(H(σ) − H(σ′))}, the state at the next time-step is σ′. Otherwise the state at the next time-step remains σ.

The stationary distribution of the above dynamics is the Gibbs distribution, π(σ) ∝ exp(−H(σ)). It is known that there exists β₀ = β₀(Δ) > 0 such that for all graphs of maximal degree Δ, if max_{i,j} |β_ij| ≤ β₀ then the dynamics above is rapidly mixing (Dobrushin and Shlosman, 1985, Mossel and Sly, 2013).
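A minimal sketch of Glauber dynamics for the Ising model (ours, not from the paper; it uses the heat-bath update rule, a standard variant with the same Gibbs stationary distribution as the Metropolis-style rule described above):

```python
import math
import random

def glauber_step(sigma, neighbors, beta, B, rng):
    """One lazy heat-bath (Glauber) step for the Ising model with
    H(sigma) = -beta * sum_{(i,j) in E} sigma_i sigma_j - B * sum_i sigma_i.
    With probability 1/2 do nothing; otherwise resample one spin from its
    conditional distribution given all the other spins."""
    if rng.random() < 0.5:
        return sigma
    n = len(sigma)
    i = rng.randrange(n)
    h = beta * sum(sigma[j] for j in neighbors[i]) + B  # local field at i
    p_plus = math.exp(h) / (math.exp(h) + math.exp(-h))
    new = list(sigma)
    new[i] = 1 if rng.random() < p_plus else -1
    return tuple(new)

# Usage: average magnetization of a 10-cycle at beta = 0, where spins are
# independent and the magnetization should be close to zero.
n = 10
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
rng = random.Random(0)
sigma = tuple(rng.choice((-1, 1)) for _ in range(n))
total = 0.0
steps = 20000
for _ in range(steps):
    sigma = glauber_step(sigma, neighbors, beta=0.0, B=0.0, rng=rng)
    total += sum(sigma) / n
mean_magnetization = total / steps
```

The heat-bath rule is chosen here because it needs only the local field at the resampled node, which is exactly why Gibbs sampling is efficient once the graph and potentials are known.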

### 2.2 Graph Coloring

Let G = ([n], E) be a graph. For q ∈ ℕ, a valid q-coloring of the graph is a function σ : [n] → [q] such that for every (i, j) ∈ E, σ_i ≠ σ_j. For a node i, let N(i) denote the set of neighbors of i. Consider the Markov chain defined by the following transition:
(i) In state (valid coloring) σ, choose a node i uniformly at random. With probability 1/2 do nothing, otherwise:
(ii) Let S be the subset of colors S = [q] \ {σ_j : j ∈ N(i)}. Define σ′ to be the coloring obtained by choosing a color c ∈ S uniformly at random and setting σ′_i = c and σ′_j = σ_j for j ≠ i. The state at the next time-step is σ′.

The stationary distribution of the above Markov chain is uniform over the valid colorings of the graph. It is known that the above chain is rapidly mixing when q > 2Δ, where Δ is the maximal degree of the graph (in fact much better results are known (Jerrum, 1995, Vigoda, 1999)).
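The transition above can be sketched directly (a minimal illustration of ours; it assumes the starting coloring is proper, and the update visibly preserves properness):

```python
import random

def coloring_step(coloring, neighbors, q, rng):
    """One lazy Glauber step for uniform proper q-colorings: with
    probability 1/2 do nothing; otherwise pick a node and recolor it with a
    uniformly random color not currently used by any of its neighbors."""
    if rng.random() < 0.5:
        return coloring
    v = rng.randrange(len(coloring))
    banned = {coloring[u] for u in neighbors[v]}
    allowed = [c for c in range(q) if c not in banned]
    new = list(coloring)
    new[v] = rng.choice(allowed)
    return tuple(new)

def is_proper(coloring, edges):
    return all(coloring[u] != coloring[v] for u, v in edges)

# Usage: a triangle with q = 4 colors; every state visited stays proper.
edges = [(0, 1), (1, 2), (0, 2)]
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
rng = random.Random(0)
coloring = (0, 1, 2)
ok = True
for _ in range(1000):
    coloring = coloring_step(coloring, neighbors, q=4, rng=rng)
    ok = ok and is_proper(coloring, edges)
```

Since each node has at most Δ neighbors, the set of allowed colors is nonempty whenever q > Δ, so the chain is well defined on proper colorings.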

## 3 Learning Models

Let X be a finite instance space and let M be an irreducible discrete-time reversible Markov chain on X with transition matrix P. Let π denote the stationary distribution of M and τ_M its mixing time. We assume that the Markov chain is rapidly mixing, i.e. τ_M is polynomial in n (note that when X = Aⁿ, log|X| = O(n)).

We consider the problem of learning with respect to stationary distributions of rapidly mixing Markov chains (e.g. defined by an MRF). The two graphical models described in the previous section serve as examples of such settings. The learning algorithm has access to a one-step oracle that, when queried with a state x, returns a random state distributed according to P(x, ·); the oracle can thus be used to simulate the Markov chain.

Let C be a class of boolean functions over X. The goal of the learning algorithm is to learn an unknown function f ∈ C with respect to the stationary distribution π of the Markov chain M. As described above, the learning algorithm has the ability to simulate the Markov chain using the one-step oracle. We will consider both PAC learning and agnostic learning. Fix a (possibly randomized) labeling function: in the case of PAC learning it is just the target function f; in the case of agnostic learning it is allowed to be completely arbitrary. Let D denote the distribution over X × {−1, 1} in which x ∼ π and y is the label assigned to x by the labeling function.
PAC Learning (Valiant, 1984): In PAC learning the labeling function is the target function f ∈ C. The goal of the learning algorithm is to output a hypothesis h which, with probability at least 1 − δ, satisfies Pr_{x∼π}[h(x) ≠ f(x)] ≤ ε.
Agnostic Learning (Kearns et al., 1994, Haussler, 1992): In agnostic learning the labeling function may be completely arbitrary. Let D be the distribution as defined above and let opt = min_{f∈C} Pr_{(x,y)∼D}[f(x) ≠ y]. The goal of the learning algorithm is to output a hypothesis h which, with probability at least 1 − δ, satisfies

 err(h) = Pr_{(x,y)∼D}[h(x) ≠ y] ≤ opt + ε

Typically, one requires that the learning algorithm have time and sample complexity polynomial in n, 1/ε and 1/δ. So far, we have not specified what access the learning algorithm has to labeled examples. We consider two possible settings.

Learning with i.i.d. examples only: In this setting, in addition to the one-step oracle, the learning algorithm has access to the standard example oracle, which when queried returns an example (x, y), where x ∼ π and y is the (possibly randomized) label of x.

Learning with labeled examples from MC: In this setting, the learning algorithm has access to a labeled random walk (x₁, y₁), (x₂, y₂), … of the Markov chain. Here x_{t+1} is the (random) state one time-step after x_t and y_t is the label of x_t. Thus, the learning algorithm can potentially exploit correlations between consecutive examples.

The results in Section 4 only require access to i.i.d. examples. Note that these are sufficient to compute inner products with respect to the underlying distribution, a key requirement for Fourier analysis. The result in Section 5 is only applicable in the stronger setting where the learning algorithm receives examples from a labeled Markov chain. Note that since the chain is rapidly mixing, the learning algorithm by itself is able to (approximately) simulate i.i.d. random examples.

## 4 Harmonic Analysis using Eigenvectors

In this section, we show that the eigenvectors of the transition matrix can be (approximately) expressed as linear combinations of a suitable collection of basis functions and powers of the transition matrix applied to them.

Let M be a time-reversible discrete Markov chain with transition matrix P and let π be the stationary distribution of M. We consider the set of right-eigenvectors of the matrix P. The largest eigenvalue of P is 1 and the corresponding right-eigenvector has 1 in each co-ordinate; the corresponding left-eigenvector is the stationary distribution. For simplicity of analysis we assume that P(x, x) ≥ 1/2 for all x, which implies that all the eigenvalues of P are non-negative. We are interested in identifying as many as possible of the remaining eigenvectors, with eigenvalues less than 1.

For functions f, g : X → ℝ, define the inner-product ⟨f, g⟩ = E_{x∼π}[f(x)g(x)] and the norm ‖f‖ = ⟨f, f⟩^{1/2}. Throughout this section, we will always consider inner products and norms with respect to the distribution π.

Since M is reversible, the right eigenvectors of P are orthogonal with respect to ⟨·, ·⟩. Thus, these eigenvectors can be used as a basis to represent functions from X to ℝ. First, we briefly show that this approach generalizes the standard Fourier analysis on the Boolean cube, which is commonly used in uniform-distribution learning.

### 4.1 Fourier Analysis over the Boolean Cube

Let X = {−1, 1}ⁿ denote the boolean cube. For S ⊆ [n], the parity function over S is defined as χ_S(x) = ∏_{i∈S} x_i. With respect to the uniform distribution over X, the set of parity functions forms an orthonormal Fourier basis, i.e. for S ≠ T, ⟨χ_S, χ_T⟩ = 0, and ‖χ_S‖ = 1.

We can view the uniform distribution over {−1, 1}ⁿ as the stationary distribution of the following simple Markov chain. For x, y such that d(x, y) = 1, let P(x, y) = 1/(2n); for every x, let P(x, x) = 1/2. The remaining entries of the matrix are 0. This chain is rapidly mixing, with mixing time O(n log n), and its stationary distribution is uniform over {−1, 1}ⁿ. It is easy to see, and well known, that every parity function χ_S is an eigenvector of P with eigenvalue 1 − |S|/n. Thus, Fourier-based learning under the uniform distribution can be seen as a special case of harmonic analysis using eigenvectors of the transition matrix.
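For small n this is easy to verify numerically (a sketch of ours with hypothetical helper names): build the lazy single-site walk, which stays put with probability 1/2 and otherwise flips a uniformly random coordinate, and check that each parity χ_S is an eigenvector with eigenvalue 1 − |S|/n.

```python
import itertools

def lazy_walk_matrix(n):
    """Transition matrix of the lazy single-site walk on {-1, 1}^n:
    P(x, x) = 1/2 and P(x, y) = 1/(2n) for each y at Hamming distance 1."""
    states = list(itertools.product((-1, 1), repeat=n))
    index = {x: k for k, x in enumerate(states)}
    P = [[0.0] * len(states) for _ in states]
    for x in states:
        P[index[x]][index[x]] += 0.5
        for i in range(n):
            y = x[:i] + (-x[i],) + x[i + 1:]
            P[index[x]][index[y]] += 0.5 / n
    return states, P

def mat_vec(P, v):
    return [sum(row[b] * v[b] for b in range(len(v))) for row in P]

# Check that chi_S is an eigenvector with eigenvalue 1 - |S|/n for n = 4.
n, S = 4, (0, 1)
states, P = lazy_walk_matrix(n)
chi = [x[0] * x[1] for x in states]   # chi_S(x) for S = {0, 1}
Pchi = mat_vec(P, chi)
lam = 1 - len(S) / n
max_err = max(abs(Pchi[k] - lam * chi[k]) for k in range(len(states)))
```

Flipping a coordinate inside S negates χ_S while flipping one outside leaves it unchanged, which is exactly where the eigenvalue 1 − |S|/n comes from.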

### 4.2 Representing Eigenvectors Implicitly

As in the case of the uniform distribution over the boolean cube, we would like to find the eigenvectors of the transition matrix P of a general Markov chain and use them as an orthonormal basis for learning. Unfortunately, in most cases of interest explicit succinct representations of eigenvectors don't necessarily exist, and the size of the state space X is typically exponential in n, the length of the vectors in X. Thus, it is not possible to use standard techniques to obtain eigenvectors of P. Here, we show how these eigenvectors may be computed implicitly.

An eigenvector of the transition matrix is a function ν : X → ℝ. Throughout this section, we will view any such function as an |X|-dimensional vector with value ν(x) at position x. As such, even writing down the vector corresponding to an eigenvector is not possible in polynomial time. Instead, our goal is to show that whenever a suitable collection of basis functions exists, the eigenvectors have a simple representation in terms of these basis functions and powers of the transition matrix applied to them, as long as the underlying Markov chain satisfies certain conditions. The condition we require is that the spectrum of the transition matrix be discrete, i.e. the eigenvalues show sharp drops. Between these drops, the eigenvalues may be quite close to each other, and in fact even equal. Figure 1 shows the spectrum of the transition matrix of the Ising model on a cycle for various values of β, the inverse temperature parameter. The case β = 0 corresponds to the uniform distribution on {−1, 1}ⁿ. One notices that the spectrum is discrete for small values of β (the high-temperature regime).

Next, we formally define the requirements of a discrete spectrum.

###### Definition 1 (Discrete Spectrum).

Let be the transition matrix of a Markov chain and let be the eigenvalues of in non-increasing order. We say that has an -discrete spectrum, if there exists a sequence such that the following are true

1. Between and , there is a non-trivial gap, i.e. for ,

2. Let , we refer to as the block (of eigenvalues and eigenvectors). Then the size of each block,

3. The eigenvalue is not too small (with respect to the gap at the end of each block),

In general, the parameter will depend on and we require that in order to separate eigenvectors from the various blocks. One would expect to have dependence on both and and to have some dependence on . As an example, we note that the spectrum corresponding to the Markov chain discussed in Section 4.1 is indeed discrete with the following parameters: can be any integer, , and .

In order to extract eigenvectors of P, we start with a collection of functions which have significant Fourier mass on the top eigenvectors. For an eigenvector ν, its Fourier coefficient in a function g is simply ⟨g, ν⟩. Condition 2 in Definition 2 implicitly requires that this inner product be large, for every eigenvector ν with a large eigenvalue, for some function g in the set. In addition, since eigenvalues corresponding to different eigenvectors may be equal or close together, we require a set of functions for which the matrix of Fourier coefficients of such eigenvectors is well-conditioned. Formally, we define the notion of a useful basis of functions with respect to a transition matrix which has a discrete spectrum.

###### Definition 2 (Useful Basis).

Let G be a collection of functions from X to ℝ. We say that G is useful for a transition matrix P with a discrete spectrum if the following hold:

1. For every ,

2. Let , then for any , if (the size of the block), there exist functions , such that the matrix defined by , where and

, has smallest singular value at least

. Alternatively, the operator norm of , is at most .

The parameter α will have a dependence on ε: a polynomial dependence would result in efficient algorithms. In general, it is not known which Markov chains admit a useful basis with a succinct representation. In the case of the uniform distribution, the collection of parity functions clearly is already such a useful basis. However, we observe that there are other useful bases as well. For example, to extract all eigenvectors with eigenvalues above a given threshold (parities of bounded size), one can start with the collection of disjunctions (or conjunctions) on a bounded number of variables. Note that in this case, there is no contribution from eigenvectors with low eigenvalues (i.e. noise) in the basis functions. However, one would not expect to find such a useful basis, free of contributions from eigenvectors with low eigenvalues, when the stationary distribution is not a product distribution.

We now show how functions from a useful basis for a transition matrix with a discrete spectrum can be used to extract eigenvectors. First, by applying powers of P to a function g, the contributions of eigenvectors in different blocks can be separated. However, to separate eigenvectors within a block we require an incoherence condition among the various basis functions (the second condition in Definition 2). We first show that the eigenvectors can be approximately represented in the following form:

 ν ≈ ∑_{t,m} β_{t,m} Pᵗ g_m,

where m indexes the functions in the basis.

###### Theorem 1.

Let P be a transition matrix with a discrete spectrum and let G be a useful basis for P. Then for any ε > 0, there exist B and τ_max such that every eigenvector ν_ℓ belonging to one of the k blocks can be expressed as:

 ν_ℓ = ∑_{t,m} β^ℓ_{t,m} Pᵗ g_m + η_ℓ

where |β^ℓ_{t,m}| ≤ B, t ≤ τ_max and ‖η_ℓ‖ ≤ ε. Furthermore,

 B = (2αNk)^{Θ((1+c)^{k+1})} · ε^{−(1+c)^k}
 τ_max = O( k(1+c)^{k−1} (log N + log k + log α + 1) / log(1/γ) + log(1/ε) )

The proof of the above theorem is somewhat delicate and is provided in Appendix A.1. Notice that the bounds on B and τ_max have a relatively mild (polynomial) dependence on most parameters except c and k. Thus, when c and k are relatively small, for example both constant, both B and τ_max are bounded by polynomials in the other parameters. Also, α may be somewhat large in the case of the uniform distribution, though it is still polynomial if k is constant.

We can now use the above theorem to devise a simple learning algorithm with respect to the stationary distribution of the Markov chain. In fact, the learning algorithm does not even need to explicitly estimate the coefficients β^ℓ_{t,m} in the statement of Theorem 1: the theorem shows that any linear combination of the eigenvectors can also be represented as a linear combination of the functions Pᵗ g_m. Thus, we can treat this collection as "features" and simply perform linear regression (either L₁ or L₂) as part of the learning algorithm. The algorithm is given in Figure 2. The key idea is that (Pᵗ g)(x) can be approximately computed for any x with blackbox access to g and the one-step oracle. This is because (Pᵗ g)(x) = E_{y∼Pᵗ(x,·)}[g(y)], where Pᵗ(x, ·) is the distribution over X obtained by starting from x and taking t steps of the Markov chain. The functions used in the algorithm are approximations to the Pᵗ g_m, which are then used as features for learning. Formally, we can prove the following theorem.
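The feature computation at the heart of this idea can be sketched as follows (our illustration with hypothetical function names, using the lazy single-site walk on the cube as a stand-in one-step oracle; for that chain a single coordinate g(x) = x₀ is an eigenvector, so (Pᵗg)(x) has a closed form to compare against):

```python
import random

def lazy_step(x, rng):
    """One-step oracle for the lazy single-site walk on {-1, 1}^n."""
    if rng.random() < 0.5:
        return x
    i = rng.randrange(len(x))
    return x[:i] + (-x[i],) + x[i + 1:]

def feature(g, one_step, x, t, repeats=8000, seed=0):
    """Monte-Carlo estimate of (P^t g)(x) = E_{y ~ P^t(x, .)}[g(y)]:
    run the chain for t steps from x, `repeats` times, and average g."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(repeats):
        y = x
        for _ in range(t):
            y = one_step(y, rng)
        total += g(y)
    return total / repeats

# Sanity check: g(x) = x_0 is an eigenvector with eigenvalue 1 - 1/n,
# so (P^t g)(x) = (1 - 1/n)^t * x_0 exactly.
x = (1, 1, -1, 1)                     # n = 4
g = lambda y: y[0]
est = feature(g, lazy_step, x, t=3)
exact = (1 - 1 / 4) ** 3
```

Computing such features at the sample points and regressing the labels on them is exactly the shape of the algorithm described above, with the one-step oracle as the only access to the chain.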

###### Theorem 2.

Let M be a Markov chain with transition matrix P, let λ₁ ≥ λ₂ ≥ ⋯ denote the eigenvalues of P and let ν_ℓ be the eigenvector corresponding to λ_ℓ. Let π be the stationary distribution of M. Let C be a class of boolean functions. Suppose that for some ε > 0 there exists ℓ* such that for every f ∈ C,

 ∑_{ℓ>ℓ*} ⟨f, ν_ℓ⟩² ≤ ε²/4,

i.e. every f ∈ C can be approximated (up to ε/2) by the top ℓ* eigenvectors of P. Suppose P has a discrete spectrum as defined in Definition 1 and that G is a useful basis for P. Then, there exists a learning algorithm that, with blackbox access to the functions in G, the one-step oracle for the Markov chain M, and access to random examples (x, y), where x ∼ π and y is produced by an arbitrary labeling function, agnostically learns C up to error ε.

Furthermore, the running time, sample complexity and the time required to evaluate the output hypothesis are bounded by a polynomial in the parameters of the discrete spectrum and the useful basis. In particular, if c is constant, k and α depend only on ε (and not on n), and the chain is rapidly mixing, the algorithm runs in polynomial time.

We give the proof of this theorem in Appendix A.2; the proof uses the L₁-regression technique of Kalai et al. (2005). We comment that the learning algorithm (Fig. 2) is a generalization of the low-degree algorithm of Linial et al. (1993). Also, when applied to the Markov chain corresponding to the uniform distribution over {−1, 1}ⁿ, this algorithm works whenever the low-degree algorithm does (albeit with slightly worse bounds). As an example, consider the algorithm of Klivans et al. (2002) for learning arbitrary functions of halfspaces. As a main ingredient of their work, they showed that halfspaces can be approximated by the low-order levels of the Fourier spectrum. The running time of our learning algorithm, run with a useful basis consisting of parities or conjunctions of bounded size, is then polynomial (for constant ε).

### 4.3 Noise Sensitivity Analysis

In light of Theorem 2, one can ask which function classes are well-approximated by the top eigenvectors, and for which MRFs. A generic answer is functions that are "noise-stable" with respect to the underlying Gibbs Markov chain. Below, we generalize the definition of noise sensitivity from the case of product distributions to MRF distributions. In words, the noise sensitivity (with parameter t) of a boolean function f is the probability that f(x) and f(y) differ, where x is drawn from the stationary distribution and y is obtained by taking t steps of the Markov chain starting at x.

###### Definition 3.

Let x ∼ π, the stationary distribution of M, and y ∼ Pᵗ(x, ·), the distribution obtained by taking t steps of the Gibbs MC starting at x. For a boolean function f, define its noise sensitivity with respect to parameter t and the transition matrix P of the Gibbs MC as

 NS_t(f) = Pr_{x∼π, y∼Pᵗ(x,⋅)}[f(x) ≠ f(y)].

One can derive an alternative form for the noise sensitivity as follows. Let λ₁ ≥ λ₂ ≥ ⋯ denote the eigenvalues of P and ν₁, ν₂, … the corresponding eigenvectors. Let f̂_ℓ = ⟨f, ν_ℓ⟩. Then,

 NS_t(f) = Pr_{x∼π, y∼Pᵗ(x,⋅)}[f(x) ≠ f(y)]
         = ½ E_{x∼π, y∼Pᵗ(x,⋅)}[1 − f(x)f(y)]
         = ½ − ½ ⟨f, Pᵗf⟩
         = ½ − ½ ∑_ℓ λ_ℓᵗ f̂_ℓ²   (2)

The notion of noise-sensitivity has been fruitfully used in the theory of learning under the uniform distribution (see for example Klivans et al. (2002)). The main idea is that functions that have low noise sensitivity have most of their mass concentrated on “lower order Fourier coefficients”, i.e. eigenvectors with large eigenvalues. We show that this idea can be easily generalized in the context of MRF distributions. The proof of the following theorem is provided in Appendix A.3.
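Noise sensitivity can also be estimated directly from samples (a sketch of ours with hypothetical function names, using the lazy single-site walk on the cube, whose stationary distribution is uniform; for a single-coordinate function, identity (2) then gives an exact value to compare against):

```python
import random

def lazy_step(x, rng):
    """One step of the lazy single-site walk on {-1, 1}^n."""
    if rng.random() < 0.5:
        return x
    i = rng.randrange(len(x))
    return x[:i] + (-x[i],) + x[i + 1:]

def noise_sensitivity(f, n, t, samples=8000, seed=1):
    """Estimate NS_t(f) = Pr_{x ~ pi, y ~ P^t(x, .)}[f(x) != f(y)], where
    pi is uniform on {-1, 1}^n (the stationary distribution of this walk)."""
    rng = random.Random(seed)
    flips = 0
    for _ in range(samples):
        x = tuple(rng.choice((-1, 1)) for _ in range(n))
        y = x
        for _ in range(t):
            y = lazy_step(y, rng)
        flips += f(x) != f(y)
    return flips / samples

# For f(x) = x_0, an eigenvector with eigenvalue 1 - 1/n, identity (2)
# gives NS_t(f) = (1 - (1 - 1/n)^t) / 2.
n, t = 4, 3
est = noise_sensitivity(lambda x: x[0], n, t)
exact = (1 - (1 - 1 / n) ** t) / 2
```

Only samples from π and the one-step oracle are needed, which is the same access the learning algorithms above assume.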

###### Theorem 3.

Let P be the transition matrix of the Gibbs MC of an MRF and let f be a boolean function. Let ℓ* be the largest index such that λ_{ℓ*} ≥ ρ. Then:

 ∑_{ℓ>ℓ*} f̂_ℓ² ≤ (e/(e−1)) · NS_{−1/ln ρ}(f)

Thus, it is of interest to study which function classes have low noise-sensitivity with respect to which MRFs. As an example, we consider the Ising model on graphs with bounded degrees; the Gibbs MC in this case is the Glauber dynamics. We show that the class of halfspaces has low noise sensitivity with respect to this family of MRFs. In particular, the noise sensitivity with parameter t depends only on t/n and the degree bound.

###### Proposition 1.

For every Δ, there exists β₀ > 0 such that the following holds: for every graph with maximum degree Δ, the Ising model with interaction strengths bounded by β₀, and any halfspace f(σ) = sign(∑_i w_i σ_i − θ), it holds that NS_t(f) ≤ C √(t/n), for some constant C that depends only on Δ.

The proof of the above proposition follows from Lemma 1 in Appendix A.4. As a corollary we get the following.

###### Corollary 1.

Let P be the transition matrix of the Gibbs MC of an Ising model with bounded degree d. Suppose that for some , P has an -discrete spectrum such that depends only on and (where is as in Proposition 1), , is and a constant, for constant . Furthermore, suppose that P admits an -useful basis with , for the parameters as above. Then the class of halfspaces is agnostically learnable with respect to the stationary distribution π of P up to error .

###### Proof.

Let , where is from Proposition 1. Thus, . Let (as in Theorem 3); by the assumption on , admits an -distribution where depends only on , such that .

Now, the algorithm in Figure 2 together with the parameter settings from Theorems 1, 2 and 3 give the desired result. ∎

### 4.4 Discussion

In this section, we proposed that, when learning with respect to distributions defined by Markov random fields (rather than product distributions), approximation using eigenvectors of the transition matrix of an appropriate Markov chain may be better than plain polynomial approximation. We checked this on a few different Ising models by approximating the majority function. Since the computations required are fairly intensive, we could only do this for relatively small models. However, we point out that the methods proposed in this paper are highly parallelizable and not beyond the reach of large computing systems. Thus, it may be of interest to run the methods proposed here on larger models and real-world data.

Approximation of Majority: We look at three different graphs: a cycle of length 11, the complete graph on 11 nodes and an Erdős–Rényi random graph. We considered the Ising model on these graphs for various values of the inverse temperature β. In each case, we computed low-degree polynomial approximations of the majority function and also approximations using the same number of top eigenvectors of the transition matrix. We see that the approximation using eigenvectors is consistently better, except possibly for very low values of β, where polynomial approximations are also quite good. The values reported in the table are the squared errors of the approximations.
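A small-scale version of this comparison can be sketched as follows. The parameters here are our own illustration (a 5-cycle and β = 0.5, rather than the paper's 11-node graphs): we measure π-weighted squared error of a degree-1 polynomial fit of majority against projection onto the same number of top eigenvectors of the Glauber transition matrix.

```python
import numpy as np
from itertools import product

# Ising model on a 5-cycle at inverse temperature beta = 0.5 (illustrative
# values), with single-site heat-bath (Glauber) dynamics.
n, beta = 5, 0.5
states = list(product([-1, 1], repeat=n))
idx = {s: a for a, s in enumerate(states)}
N = len(states)

# Gibbs distribution pi(x) ~ exp(beta * sum_{cycle edges} x_i x_j)
w = np.array([np.exp(beta * sum(s[i] * s[(i + 1) % n] for i in range(n)))
              for s in states])
pi = w / w.sum()

# Glauber dynamics: pick a uniform site, resample it from its conditional.
P = np.zeros((N, N))
for a, s in enumerate(states):
    for i in range(n):
        h = s[(i - 1) % n] + s[(i + 1) % n]          # neighbour field on the cycle
        for v in (-1, 1):
            s2 = list(s); s2[i] = v
            p_v = np.exp(beta * v * h) / (2 * np.cosh(beta * h))
            P[a, idx[tuple(s2)]] += p_v / n

f = np.array([float(np.sign(sum(s))) for s in states])  # majority (n odd)

# Orthonormal eigenbasis of P in l2(pi) via the symmetrization D^{1/2} P D^{-1/2}
d = np.sqrt(pi)
lam, W = np.linalg.eigh(P * d[:, None] / d[None, :])
order = np.argsort(lam)[::-1]
nu = (W / d[:, None])[:, order]          # columns: eigenvectors, orthonormal in l2(pi)

m = n + 1                                # same budget as a degree-1 polynomial
fhat = (pi * f) @ nu
err_eig = 1.0 - np.sum(fhat[:m] ** 2)    # <f,f>_pi = 1 for a +-1 valued f

# pi-weighted least-squares fit by a degree-1 polynomial c0 + sum_i c_i x_i
X = np.hstack([np.ones((N, 1)), np.array(states, dtype=float)])
sw = np.sqrt(pi)[:, None]
coef, *_ = np.linalg.lstsq(X * sw, f * sw.ravel(), rcond=None)
err_poly = np.sum(pi * (f - X @ coef) ** 2)
```

Comparing `err_eig` and `err_poly` on such instances (for a range of β values) reproduces the qualitative comparison described above.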

## 5 Learning Juntas

In this section, we consider the problem of learning the class of k-juntas. Suppose Xⁿ is the instance space. A k-junta is a boolean function that depends on only k out of the n possible co-ordinates of its input. In this section, we consider the model in which we receive labeled examples from a random walk of a Markov chain (see Section 3.2). (In the model where labeled examples are received only from the stationary distribution, it seems unlikely that any learning algorithm can benefit from access to the oracle. The problem of learning juntas efficiently is a long-standing open problem even when the distribution is uniform over the Boolean cube, where the oracle can easily be simulated by the learner itself.) In the random-walk model, the learning algorithm can identify the relevant variables by keeping track of which variables caused the function to change its value.

For a subset S of the variables and an assignment σ to the variables in S, let x|_S = σ denote the event that x_i = σ(i) for every i ∈ S, i.e. it fixes the assignment on the variables in S as given by σ. A set J is the junta of a function f if the variables in J completely determine the value of f. In this case, for an assignment σ to the variables in J, every x satisfying x|_J = σ has the same value f(x), and by slight abuse of notation we denote this common value by f(σ).

Figure 3 describes the simple algorithm for learning juntas. Theorem 4 gives conditions under which Algorithm 3 is guaranteed to succeed. Later, we show that the Ising model and graph coloring satisfy these conditions.

###### Theorem 4.

Let M be a time-reversible, rapidly mixing MC with transition matrix P. Let π denote the stationary distribution of M and τ its mixing time. Furthermore, suppose that M has single-site dynamics, i.e. P(x, y) = 0 if x and y differ in more than one co-ordinate, and that the following conditions hold:
(i) For any set S of at most k variables and any assignment σ to the variables in S, either Pr_{x∼π}[x|_S = σ] = 0 or Pr_{x∼π}[x|_S = σ] ≥ δ, where δ > 0 is a constant.
(ii) For any x, y such that π(x) > 0, π(y) > 0 and P(x, y) > 0, P(x, y) ≥ δ′, where 1/δ′ is polynomial in n.
Then Algorithm 3 exactly learns the class of k-junta functions with probability at least 1 − ε, and the running time is polynomial in n, τ, 1/δ, 1/δ′ and log(1/ε).

###### Proof.

Let f be the unknown target k-junta function. Let J be the set of variables that influence f, with |J| ≤ k. The set J is called the junta for f. Note that a variable x_i is in the junta for f if and only if there exist z, z′ such that π(z) > 0, π(z′) > 0, z and z′ differ only at co-ordinate i, and f(z) ≠ f(z′). Otherwise, x_i can have no influence in determining the value of f (under the distribution π).

We claim that Algorithm 3 identifies every variable in the junta of f. Let σ be any assignment of values to the variables in J. Since J is the junta for f, any x that satisfies x_i = σ(i) for all i ∈ J has the same value f(x). By slight abuse of notation, we denote this common value by f(σ).

The fact that x_i is in the junta implies that there exist assignments σ₁, σ₂ to the variables in J such that σ₁(j) = σ₂(j) for j ≠ i, σ₁(i) ≠ σ₂(i), f(σ₁) ≠ f(σ₂), and which satisfy the following: Pr_{x∼π}[x|_J = σ₁] > 0 and Pr_{x∼π}[x|_J = σ₂] > 0. Consider the following event: x is drawn from π, y is the state after exactly one transition, x satisfies the event x|_J = σ₁ and y satisfies the event y|_J = σ₂. By our assumptions, the probability of this event is at least δδ′, where δ and δ′ are the constants from conditions (i) and (ii). Then, if we draw x from the distribution of the chain after τ steps, instead of the true stationary distribution π, the probability of the above event is still at least δδ′/2. This is because after the mixing time, the distribution of the chain is close to π in total variation distance. Thus, by observing a long enough random walk, i.e. one with poly(n, τ, 1/(δδ′), log(1/ε)) transitions, except with probability at most ε/k, the variable x_i will be identified as a member of the junta. Since there are at most k such variables, by a union bound all of J will be identified. Once the set J has been identified, the unknown function can be learned exactly by observing an example of each possible assignment to the variables in J. The above argument shows that all such assignments with non-zero measure under π already occur in the observed random walk. ∎
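The identification step in the proof can be sketched as follows. This is a simplified stand-in for Algorithm 3 (which we do not reproduce here), assuming single-site dynamics; the chain, the target junta f(x) = x_0·x_2 and all parameter choices are our own illustration.

```python
import numpy as np

# Junta identification from a random walk with single-site dynamics.
# The chain is the beta = 0 Glauber dynamics (resample a uniform site with
# a fresh uniform spin), whose stationary distribution is uniform on
# {-1,1}^n; the target is a hypothetical 2-junta f(x) = x_0 * x_2.
rng = np.random.default_rng(0)
n, steps = 8, 20000
f = lambda x: int(x[0] * x[2])

x = rng.choice([-1, 1], size=n)
junta = set()
prev, prev_label = x.copy(), f(x)
for _ in range(steps):
    i = rng.integers(n)
    x[i] = rng.choice([-1, 1])           # single-site update (may be lazy)
    label = f(x)
    diff = np.flatnonzero(prev != x)     # at most one coordinate changed
    if label != prev_label and len(diff) == 1:
        junta.add(int(diff[0]))          # flipping this variable changed f
    prev, prev_label = x.copy(), label

# With J in hand, record f's value on every assignment to J seen on a walk.
J = sorted(junta)
table = {}
for _ in range(steps):
    i = rng.integers(n)
    x[i] = rng.choice([-1, 1])
    table[tuple(int(v) for v in x[J])] = f(x)
```

On this instance the walk is long enough that, with overwhelming probability, both relevant variables are found and all four assignments to them are observed.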

###### Remark 1.

We observe that the condition that the MC be rapidly mixing alone is sufficient to identify at least one variable of the junta. However, unlike in the case of learning from i.i.d. examples, in this learning model, identifying one variable of the junta is not equivalent to learning the unknown junta function. In fact, it is quite easy to construct rapidly mixing Markov chains where the influence of some variables on the target function can be hidden, by making sure that the transitions that cause the function to change value happen only on a subset of the variables of the junta.

We now show that the Ising model and graph coloring satisfy the conditions of Theorem 4 as long as the underlying graphs have constant degree.

Ising Model: Recall that the state space is {−1, 1}ⁿ. Let β_c be the critical inverse temperature, which is a constant independent of n as long as d, the maximal degree, is constant. Let S be a set of at most k variables and let σ₁ and σ₂ be two distinct assignments to the variables in S. Let z₁, z₂ be two configurations of the Ising system such that for all i ∉ S, z₁(i) = z₂(i), and for i ∈ S, z₁(i) = σ₁(i) and z₂(i) = σ₂(i). Then, since the maximum degree of the graph is constant and |S| is also bounded by a constant, the energies of z₁ and z₂ differ by at most a constant, and so by the definition of the Gibbs distribution (see Section 2), π(z₁) ≥ c·π(z₂) for some constant c > 0. By summing over the possible pairs (z₁, z₂) that satisfy the constraints, we have Pr_{x∼π}[x|_S = σ₁] ≥ c·Pr_{x∼π}[x|_S = σ₂]. But, since there are only 2^{|S|} possible assignments of the variables in S, the first assumption of Theorem 4 follows immediately. The second assumption follows from the definition of the transition matrix, i.e. each non-zero entry in the transition matrix is at least c′/n for some constant c′ > 0.

Graph Coloring: Let q be the number of colors. The state space is [q]ⁿ and invalid colorings have mass 0 under the stationary distribution, which is uniform over the valid colorings. We assume that q ≥ 2d + 1, where d is the maximum degree in the graph. This is also the assumption that ensures rapid mixing. Let S be a subset of at most k nodes. Let σ₁ and σ₂ be two assignments of colors to the nodes in S. Let C₁ and C₂ be the sets of valid colorings z such that z|_S = σ₁ for each z ∈ C₁, and z|_S = σ₂ for each z ∈ C₂. We define a map φ from C₁ to C₂ as follows:

1. Starting from z ∈ C₁, first for all i ∈ S, set z(i) = σ₂(i). This may in fact result in an invalid coloring.

2. The invalid coloring is switched to a valid coloring by only modifying the colors of neighbors of nodes in S. The condition that q ≥ 2d + 1 ensures that this can always be done.

The above map φ has the following properties. Let z′ = φ(z). Then the nodes that are neither in S nor neighbors of nodes in S do not change their color. Thus, even though φ may be a many-to-one map, at most q^{d|S|} elements in C₁ may be mapped to a single element in C₂. Note that π is uniform on the valid colorings. Thus, we have Pr_{x∼π}[x|_S = σ₁] ≤ q^{d|S|}·Pr_{x∼π}[x|_S = σ₂]. Since q, d and |S| are bounded by constants, this implies the first condition of Theorem 4. The second condition follows from the definition of the transition matrix: each non-zero entry is at least 1/(nq).
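The two conditions can be verified exhaustively on a tiny instance. The sketch below is our own check, for a triangle with q = 2d + 1 = 5 colors, assuming the single-site dynamics that picks a uniform node and then a uniform proper color for it (one standard form of Glauber dynamics; the paper's exact dynamics may differ).

```python
from itertools import product

# Proper colorings of a triangle (maximum degree d = 2) with q = 5 colors.
q, nodes = 5, 3
edges = [(0, 1), (1, 2), (0, 2)]
colorings = [c for c in product(range(q), repeat=nodes)
             if all(c[u] != c[v] for u, v in edges)]
pi = 1.0 / len(colorings)   # stationary distribution: uniform on valid colorings

# Condition (i): every realisable color assignment to a node has stationary
# probability either zero or bounded below by a constant.
for v in range(nodes):
    for col in range(q):
        p = sum(pi for c in colorings if c[v] == col)
        assert p == 0 or p >= 1.0 / (nodes * q)

# Condition (ii): every allowed single-site move (uniform node, then a
# uniform proper color for it) has probability at least 1/(nodes * q).
for c in colorings:
    for v in range(nodes):
        nbrs = {u for u, w in edges if w == v} | {w for u, w in edges if u == v}
        proper = [col for col in range(q) if all(c[u] != col for u in nbrs)]
        assert (1.0 / nodes) * (1.0 / len(proper)) >= 1.0 / (nodes * q)
```

Here every single-node color assignment has probability 1/q under the uniform distribution on the 60 proper colorings, comfortably above the 1/(nq) threshold.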

## References

• Aldous and Vazirani (1995) David Aldous and Umesh Vazirani. A Markovian extension of Valiant’s learning model. Inf. Comput., 117(2):181–186, 1995.
• Bartlett et al. (1994) Peter L. Bartlett, Paul Fischer, and Klaus-Uwe Höffgen. Exploiting random walks for learning. In Proceedings of the seventh annual conference on Computational learning theory, COLT ’94, pages 318–327, 1994.
• Blum (1992) Avrim Blum. Learning boolean functions in an infinite attribute space. Mach. Learn., 9(4):373–386, 1992.
• Bresler (2014) Guy Bresler. Efficiently learning Ising models on arbitrary graphs. arXiv preprint arXiv:1411.6156, 2014.
• Bshouty et al. (2005) Nader H. Bshouty, Elchanan Mossel, Ryan O’Donnell, and Rocco A. Servedio. Learning DNF from random walks. J. Comput. Syst. Sci., 71(3):250–265, Oct 2005.
• Dachman-Soled et al. (2008) Dana Dachman-Soled, Homin Lee, Tal Malkin, Rocco Servedio, Andrew Wan, and Hoeteck Wee. Optimal cryptographic hardness of learning monotone functions. In ICALP ’08: Proceedings of the 35th international colloquium on Automata, Languages and Programming, Part I, pages 36–47, 2008.
• Dobrushin and Shlosman (1985) R. L. Dobrushin and S. B. Shlosman. Constructive criterion for uniqueness of a Gibbs field. In J. Fritz, A. Jaffe, and D. Szasz, editors, Statistical Mechanics and dynamical systems, volume 10, pages 347–370. 1985.
• Gamarnik (1999) David Gamarnik. Extension of the PAC framework to finite and countable Markov chains. In Proceedings of the twelfth annual conference on Computational learning theory, COLT ’99, pages 308–317, 1999.
• Haussler (1992) David Haussler. Decision theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation, 100(1):78–150, 1992. ISSN 0890-5401.
• Jackson and Wimmer (2014) Jeffrey C. Jackson and Karl Wimmer. New results for random walk learning. Journal of Machine Learning Research (JMLR), 15:3635–3666, November 2014.
• Jerrum (1995) Mark Jerrum. A very simple algorithm for estimating the number of k-colorings of a low-degree graph. Random Structures and Algorithms, 7(2):157–165, 1995.
• Kakade et al. (2008) Sham M. Kakade, Karthik Sridharan, and Ambuj Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. In NIPS, 2008.
• Kalai et al. (2005) Adam Tauman Kalai, Adam R. Klivans, Yishay Mansour, and Rocco A. Servedio. Agnostically learning halfspaces. In FOCS, pages 11–20, 2005.
• Kearns et al. (1994) Michael Kearns, Robert E. Schapire, and Linda M. Sellie. Toward efficient agnostic learning. In Machine Learning, pages 341–352, 1994.
• Kearns and Vazirani (1994) Michael J. Kearns and Umesh Vazirani. An Introduction to Computational Learning Theory. The MIT Press, 1994.
• Kharitonov (1993) Michael Kharitonov. Cryptographic hardness of distribution-specific learning. In Proceedings of the twenty-fifth annual ACM symposium on Theory of computing, pages 372–381, 1993.
• Kindermann and Snell (1980) Ross Kindermann and J. Laurie Snell. Markov Random Fields and Their Applications. AMS, 1980.
• Klivans et al. (2002) Adam R. Klivans, Ryan O’Donnell, and Rocco A. Servedio. Learning intersections and thresholds of halfspaces. In FOCS, 2002.
• Linial et al. (1993) Nathan Linial, Yishay Mansour, and Noam Nisan. Constant depth circuits, Fourier transform, and learnability. J. ACM, 40(3):607–620, 1993.
• Mossel and Sly (2013) Elchanan Mossel and Allan Sly. Exact thresholds for Ising-Gibbs samplers on general graphs. The Annals of Probability, 41(1):294–328, 2013.
• Mossel et al. (2004) Elchanan Mossel, Ryan O’Donnell, and Rocco A. Servedio. Learning functions of k relevant variables. J. Comput. Syst. Sci., 69(3):421–434, 2004.
• Naor and Reingold (2004) Moni Naor and Omer Reingold. Number-theoretic constructions of efficient pseudo-random functions. Journal of the ACM (JACM), 51(2):231–262, 2004.
• O’Donnell and Servedio (2007) Ryan O’Donnell and Rocco A. Servedio. Learning monotone decision trees in polynomial time. SIAM Journal on Computing, 37(3):827–844, 2007.
• Shpilka and Tal (2011) Amir Shpilka and Avishay Tal. On the minimal Fourier degree of symmetric boolean functions. In IEEE Conference on Computational Complexity, pages 200–209, 2011.
• Valiant (1984) Leslie G. Valiant. A theory of the learnable. Commun. ACM, 27(11):1134–1142, Nov 1984.
• Vigoda (1999) E. Vigoda. Improved bounds for sampling colorings. In 40th Annual Symposium on Foundations of Computer Science (FOCS), pages 51–59, 1999.

## Appendix A Proofs from Section 4

### A.1 Proof of Theorem 1

###### Proof.

We divide the spectrum of P into blocks, with the parameters as in Definition 1; furthermore, we define an extra threshold for notational convenience. Throughout this proof we use the letter ℓ to index eigenvectors of P—so ν_ℓ is an eigenvector with eigenvalue λ_ℓ. We want to find coefficients β^ℓ_{t,m} in order to (approximately) represent the eigenvector ν_ℓ as

 \nu_\ell = \sum_{t,m} \beta^\ell_{t,m} P^t g_m + \eta_\ell \qquad (3)

Also, we use the notation

 \bar{\nu}_\ell = \sum_{t,m} \beta^\ell_{t,m} P^t g_m. \qquad (4)

We will show that such representations exist block by block. To begin define

 \epsilon_1 = \left(\frac{\epsilon}{(2\alpha N)^{\frac{1+c}{c}} (Nk)^{\frac{1}{2c}}}\right)^{(1+c)^{k-1}} \qquad (5)

and define \epsilon_j according to the following recurrence:

 \epsilon_j = 2\alpha N (Nk)^{\frac{1}{2(1+c)}}\, \epsilon_{j-1}^{\frac{1}{1+c}} \qquad (6)

It is an easy calculation to verify that the solution for \epsilon_j is given by

 \epsilon_j = \left(2\alpha N (Nk)^{\frac{1}{2(1+c)}}\right)^{\frac{1+c}{c}\left(1-\frac{1}{(1+c)^{j-1}}\right)} \epsilon_1^{\frac{1}{(1+c)^{j-1}}} \qquad (7)

Also, define

 B_1 = \left(\frac{N}{\alpha}\right)^{c+1} \epsilon_1^{-c} \qquad (8)

and let B_j be defined according to the following recurrence:

 B_j = 2\alpha N (Nk)^{\frac{1}{2(1+c)}}\, (\epsilon_{j-1})^{-\frac{c}{1+c}}\, B_{j-1} \qquad (9)

It is an easy calculation to verify that the solution for B_j is given by

 B_j = \left(2\alpha N (Nk)^{\frac{1}{2(1+c)}}\right)^{j-1} \left(\prod_{j'=1}^{j-1} \epsilon_{j'}\right)^{-\frac{c}{1+c}} B_1
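The algebra can be sanity-checked numerically with arbitrary illustrative parameter values, reading the recurrences as ε_j = A·ε_{j−1}^{1/(1+c)} and B_j = A·ε_{j−1}^{−c/(1+c)}·B_{j−1} with A = 2αN(Nk)^{1/(2(1+c))} (our reading of the flattened display; the specific numbers below carry no meaning).

```python
import math

# Check that the stated closed forms solve the recurrences for eps_j and B_j.
alpha, N, k, c, eps = 0.3, 10.0, 4.0, 0.7, 1e-3
A = 2 * alpha * N * (N * k) ** (1 / (2 * (1 + c)))
eps1 = (eps / ((2 * alpha * N) ** ((1 + c) / c)
               * (N * k) ** (1 / (2 * c)))) ** ((1 + c) ** (k - 1))
B1 = (N / alpha) ** (c + 1) * eps1 ** (-c)

def eps_closed(j):
    # closed form (7)
    return (A ** (((1 + c) / c) * (1 - (1 + c) ** (-(j - 1))))
            * eps1 ** ((1 + c) ** (-(j - 1))))

e, B = eps1, B1
for j in range(2, 8):
    e = A * e ** (1 / (1 + c))                       # recurrence (6)
    B = A * eps_closed(j - 1) ** (-c / (1 + c)) * B  # recurrence (9)
    assert math.isclose(e, eps_closed(j), rel_tol=1e-9)

# unrolled closed form for B_7
B_closed = (A ** 6
            * math.prod(eps_closed(j) for j in range(1, 7)) ** (-c / (1 + c))
            * B1)
assert math.isclose(B, B_closed, rel_tol=1e-9)
```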