 # Ising Models with Latent Conditional Gaussian Variables

Ising models describe the joint probability distribution of a vector of binary feature variables. Typically, not all the variables interact with each other and one is interested in learning the presumably sparse network structure of the interacting variables. However, in the presence of latent variables, the conventional method of learning a sparse model might fail. This is because the latent variables induce indirect interactions of the observed variables. In the case of only a few latent conditional Gaussian variables these spurious interactions contribute an additional low-rank component to the interaction parameters of the observed Ising model. Therefore, we propose to learn a sparse + low-rank decomposition of the parameters of an Ising model using a convex regularized likelihood problem. We show that the same problem can be obtained as the dual of a maximum-entropy problem with a new type of relaxation, where the sample means collectively need to match the expected values only up to a given tolerance. The solution to the convex optimization problem has consistency properties in the high-dimensional setting, where the number of observed binary variables and the number of latent conditional Gaussian variables are allowed to grow with the number of training samples.


## 1 Introduction

The principle of maximum entropy was proposed by Jaynes (1957) for probability density estimation. It states that among the probability densities that represent the current state of knowledge one should choose the one with the largest entropy, that is, the one which does not introduce additional biases. The state of knowledge is often given by sample points from a sample space and some fixed functions (sufficient statistics) on the sample space. The knowledge is then encoded naturally in the form of constraints on the probability density by requiring that the expected values of the functions equal their respective sample means. Here, we assume the particularly simple multivariate sample space $\mathcal{X} = \{-1, 1\}^d$ and the functions

$$\varphi_{ij} \colon x \mapsto x_i x_j \quad \text{for } i, j \in [d] = \{1, \dots, d\}.$$

Suppose we are given sample points $x^{(1)}, \dots, x^{(n)} \in \mathcal{X}$. Then formally, for estimating the distribution from which the sample points are drawn, the principle of maximum entropy suggests solving the following entropy maximization problem

$$\max_{p \in \mathcal{P}} H(p) \quad \text{s.t.} \quad \mathbb{E}[\varphi_{ij}] = \frac{1}{n} \sum_{k=1}^n \varphi_{ij}\big(x^{(k)}\big) \ \text{ for all } i, j \in [d],$$

where $\mathcal{P}$ is the set of all probability distributions on $\mathcal{X}$, the expectation is with respect to the distribution $p$, and $H(p)$ is the entropy. We denote the $(d \times d)$-matrix of sample means compactly by $\Phi_n$ and the matrix of functions by $\Phi$. Then, the entropy maximization problem becomes

$$\max_{p \in \mathcal{P}} H(p) \quad \text{s.t.} \quad \mathbb{E}[\Phi] - \Phi_n = 0.$$

Dudík et al. (2004) observed that invoking the principle of maximum entropy tends to overfit when the number of features is large. Requiring that the expected values of the functions equal their respective sample means can be too restrictive. Consequently, they proposed to relax the constraint using the maximum norm as

$$\|\mathbb{E}[\Phi] - \Phi_n\|_\infty \le c$$

for some $c > 0$. That is, for every function the expected value only needs to match the sample mean up to a tolerance of $c$. The dual of the relaxed problem has a natural interpretation as a feature-selective $\ell_1$-regularized log-likelihood maximization problem

$$\max_{S \in \mathrm{Sym}(d)} \ell(S) - c\|S\|_1,$$

where $\mathrm{Sym}(d)$ is the set of symmetric $(d \times d)$-matrices, $S$ is the matrix of dual variables for the relaxed constraint, and

$$\ell(S) = \langle S, \Phi_n \rangle - a(S)$$

is the log-likelihood function for pairwise Ising models with the standard matrix dot product $\langle S, \Phi_n \rangle = \mathrm{tr}\big(S^\top \Phi_n\big)$ and normalizer (log-partition function)

$$a(S) = \log \sum_{x \in \mathcal{X}} \exp\big(\langle S, \Phi(x) \rangle\big).$$
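The normalizer $a(S)$ sums over all $2^d$ binary configurations and is therefore only tractable by enumeration for small $d$. A minimal numerical sketch (the function names are ours):

```python
import itertools
import numpy as np

def log_partition(S):
    """a(S) = log sum_{x in {-1,1}^d} exp(<S, Phi(x)>), by brute-force enumeration."""
    d = S.shape[0]
    xs = np.array(list(itertools.product([-1.0, 1.0], repeat=d)))
    vals = np.einsum('ki,ij,kj->k', xs, S, xs)  # <S, Phi(x)> = sum_ij S_ij x_i x_j
    m = vals.max()
    return float(m + np.log(np.exp(vals - m).sum()))  # stable log-sum-exp

def log_likelihood(S, Phi_n):
    """l(S) = <S, Phi_n> - a(S) with the matrix dot product tr(S' Phi_n)."""
    return float(np.sum(S * Phi_n)) - log_partition(S)
```

As a sanity check, $a(0) = \log 2^d = d \log 2$.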

In this paper, we restrict the relaxation of the entropy maximization problem by additionally enforcing the constraint

$$\|\mathbb{E}[\Phi] - \Phi_n\| \le \lambda,$$

where $\lambda > 0$ and $\|\cdot\|$ denotes the spectral norm on $\mathrm{Sym}(d)$. A difference to the maximum norm constraint is that now the expected values of the functions only need to collectively match the sample means up to a tolerance of $\lambda$ instead of individually. The dual of the more strictly relaxed entropy maximization problem

$$\max_{p \in \mathcal{P}} H(p) \quad \text{s.t.} \quad \|\mathbb{E}[\Phi] - \Phi_n\|_\infty \le c \ \text{ and } \ \|\mathbb{E}[\Phi] - \Phi_n\| \le \lambda$$

is the regularized log-likelihood maximization problem

$$\max_{S, L_1, L_2 \in \mathrm{Sym}(d)} \ell(S + L_1 - L_2) - c\|S\|_1 - \lambda\, \mathrm{tr}(L_1 + L_2) \quad \text{s.t.} \quad L_1, L_2 \succeq 0,$$

see Appendix A. Here, the regularization term $\lambda\, \mathrm{tr}(L_1 + L_2)$ promotes a low rank of the positive-semidefinite matrices $L_1$ and $L_2$. This implies that the matrix $L_1 - L_2$ in the log-likelihood function also has low rank. Thus, a solution of the dual problem is the sum of a sparse matrix $S$ and a low-rank matrix $L_1 - L_2$. This can be interpreted as follows: the variables interact indirectly through the low-rank matrix $L_1 - L_2$, while some of the direct interactions through the matrix $S$ are turned off by setting entries in $S$ to zero. We get a more intuitive interpretation of the dual problem if we consider a weakening of the spectral norm constraint. The spectral norm constraint is equivalent to the two constraints

$$\mathbb{E}[\Phi] - \Phi_n \preceq \lambda\, \mathrm{Id} \quad \text{and} \quad \Phi_n - \mathbb{E}[\Phi] \preceq \lambda\, \mathrm{Id}$$

that bound the spectrum of the matrix $\mathbb{E}[\Phi] - \Phi_n$ from above and below. If we replace the spectral norm constraint by only the second of these two constraints in the maximum-entropy problem, then the dual problem becomes

$$\max_{S, L \in \mathrm{Sym}(d)} \ell(S + L) - c\|S\|_1 - \lambda\, \mathrm{tr}(L) \quad \text{s.t.} \quad L \succeq 0.$$

This problem also arises as the log-likelihood maximization problem for a conditional Gaussian model (see Lauritzen (1996)) that exhibits observed binary variables and unobserved, latent conditional Gaussian variables. The sample space of the full mixed model is $\mathcal{X} \times \mathbb{R}^l$, where $\mathbb{R}^l$ is the sample space for the unobserved variables. We want to write down the density of the conditional Gaussian model on this sample space. For that we respectively denote the interaction parameters between the observed binary variables by $S \in \mathrm{Sym}(d)$, the ones between the observed binary and latent conditional Gaussian variables by $R \in \mathbb{R}^{l \times d}$, and the ones between the latent conditional Gaussian variables by $\Lambda \in \mathrm{Sym}(l)$, where $\Lambda \succ 0$. Then, for $(x, y) \in \mathcal{X} \times \mathbb{R}^l$ and up to normalization, the density of the conditional Gaussian model is given as

$$p(x, y) \propto \exp\Big(x^\top S x + y^\top R x - \tfrac{1}{2}\, y^\top \Lambda y\Big).$$

One can check, see also Lauritzen (1996), that the conditional densities $p(y \mid x)$ are $l$-variate Gaussians on $\mathbb{R}^l$. Here, we are interested in the marginal distribution

$$p(x) \propto \exp\Big(\big\langle S + \tfrac{1}{2} R^\top \Lambda^{-1} R,\ \Phi(x) \big\rangle\Big)$$

on $\mathcal{X}$ that is obtained by integrating over the unobserved variables $y$, see Appendix B. The matrix $\tfrac{1}{2} R^\top \Lambda^{-1} R$ is symmetric and positive semidefinite. The log-likelihood function for the marginal model and the given data is thus given as

$$\ell(S + L) = \langle S + L, \Phi_n \rangle - a(S + L),$$

where $L = \tfrac{1}{2} R^\top \Lambda^{-1} R$, and $a$ is once again the normalizer of the density.
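The marginalization over the latent Gaussian variables can be verified numerically on a tiny instance. The following sketch (with arbitrarily chosen parameter values, $d = 2$ and $l = 1$) integrates out $y$ on a grid and checks that the result is proportional to the claimed Ising marginal:

```python
import itertools
import numpy as np

# Verify that integrating out the latent Gaussian y yields
# p(x) proportional to exp(<S + R' Lambda^{-1} R / 2, Phi(x)>).
lam = 2.0                                 # Lambda = [[lam]], positive definite
S = np.array([[0.3, -0.2], [-0.2, 0.1]])
R = np.array([[0.5, -0.4]])               # shape (l, d) = (1, 2)
L = 0.5 * R.T @ R / lam                   # (1/2) R' Lambda^{-1} R

ys = np.linspace(-12.0, 12.0, 20001)      # quadrature grid for y
dy = ys[1] - ys[0]
ratios = []
for t in itertools.product([-1.0, 1.0], repeat=2):
    x = np.array(t)
    integrand = np.exp(x @ S @ x + ys * float(R @ x) - 0.5 * lam * ys**2)
    marginal = integrand.sum() * dy       # numerically integrate out y
    ratios.append(marginal / np.exp(x @ (S + L) @ x))
ratios = np.array(ratios)
# The ratio is the same normalization constant sqrt(2*pi/lam) for every x.
```

The Gaussian integral $\int \exp\big(b y - \tfrac{\lambda}{2} y^2\big)\, dy = \sqrt{2\pi/\lambda}\, \exp\big(b^2/(2\lambda)\big)$ with $b = Rx$ explains why the ratio is constant.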

If only a few of the binary variables interact directly, then $S$ is sparse, and if the number $l$ of unobserved variables is small compared to $d$, then $L = \tfrac{1}{2} R^\top \Lambda^{-1} R$ is of low rank (at most $l$). Hence, one could attempt to recover $S$ and $L$ from the data using the regularized log-likelihood maximization problem

$$\max_{S, L \in \mathrm{Sym}(d)} \ell(S + L) - c\|S\|_1 - \lambda\, \mathrm{tr}(L) \quad \text{s.t.} \quad L \succeq 0 \tag{ML}$$

that we encountered before.
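For illustration only (this is not the estimator analyzed below), Problem ML can be attacked on tiny instances with an alternating proximal-gradient sketch: the gradient of the negative log-likelihood is $\mathbb{E}_{S+L}[\Phi] - \Phi_n$, the prox of the $\ell_1$-term is entrywise soft-thresholding, and the prox of the trace term combined with $L \succeq 0$ shifts eigenvalues down and clips them at zero. All function names and parameter choices are ours:

```python
import itertools
import numpy as np

def moment_matrix(Theta):
    """E_Theta[Phi] for a pairwise Ising model on {-1,1}^d (brute force)."""
    d = Theta.shape[0]
    xs = np.array(list(itertools.product([-1.0, 1.0], repeat=d)))
    logits = np.einsum('ki,ij,kj->k', xs, Theta, xs)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return np.einsum('k,ki,kj->ij', w, xs, xs)

def objective(S, L, Phi_n, c, lam):
    """Negated (ML) objective: a(S+L) - <S+L, Phi_n> + c||S||_1 + lam tr(L)."""
    Theta = S + L
    d = Theta.shape[0]
    xs = np.array(list(itertools.product([-1.0, 1.0], repeat=d)))
    logits = np.einsum('ki,ij,kj->k', xs, Theta, xs)
    a = logits.max() + np.log(np.exp(logits - logits.max()).sum())
    return a - float(np.sum(Theta * Phi_n)) + c * np.abs(S).sum() + lam * np.trace(L)

def soft_threshold(M, t):
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def psd_svt(M, t):
    """Prox of t*tr(L) + indicator(L >= 0): shift eigenvalues down, clip at zero."""
    w, V = np.linalg.eigh(0.5 * (M + M.T))
    return (V * np.maximum(w - t, 0.0)) @ V.T

def fit_sparse_low_rank(Phi_n, c, lam, step=0.05, iters=200):
    """Alternating proximal-gradient heuristic for tiny d (brute-force gradients)."""
    d = Phi_n.shape[0]
    S, L = np.zeros((d, d)), np.zeros((d, d))
    for _ in range(iters):
        grad = moment_matrix(S + L) - Phi_n  # gradient of the negative log-likelihood
        S = soft_threshold(S - step * grad, step * c)
        L = psd_svt(L - step * grad, step * lam)
    return S, L
```

With a small enough step size this joint proximal-gradient step decreases the objective monotonically; the brute-force gradient restricts the sketch to very small $d$.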

We are now in a similar situation as has been discussed by Chandrasekaran et al. (2012), who studied Gaussian graphical models with latent Gaussian variables. They were able to consistently estimate both the number of latent components, in our case the rank of $L$, and the conditional graphical model structure among the observed variables, in our case the zeroes in $S$. Their result holds in the high-dimensional setting, where the number of variables (latent and observed) may grow with the number of observed sample points. Here, we show a similar result for the Ising model with latent conditional Gaussian variables, that is, the one that we have introduced above.

## 2 Related Work

Graphical Models. The introduction of decomposed sparse + low-rank models followed a period of quite extensive research on sparse graphical models in various settings, for example Gaussians (Meinshausen and Bühlmann (2006), Ravikumar et al. (2011)), Ising models (Ravikumar et al. (2010)), discrete models (Jalali et al. (2011)), and more general conditional Gaussian and exponential family models (Lee and Hastie (2015), Lee et al. (2015), Cheng et al. (2017)). All estimators of sparse graphical models maximize some likelihood including an $\ell_1$-penalty that induces sparsity.

Most of the referenced works contain high-dimensional consistency analyses that particularly aim at the recovery of the true graph structure, that is, the information which variables are not conditionally independent and thus interact. A prominent proof technique used throughout is the primal-dual-witness method originally introduced in Wainwright (2009) for the LASSO, that is, sparse regression. Generally, the assumptions necessary in order to be able to successfully identify the true interactions for graphical models (or rather the active predictors for the LASSO) are very similar. For example, one of the conditions that occurs repeatedly is irrepresentability, sometimes also referred to as incoherence. Intuitively, this condition limits the influence the active terms (edges) can have on the inactive terms (non-edges), see Ravikumar et al. (2011).

Sparse + low-rank models. The seminal work of Chandrasekaran et al. (2012) was the first to propose learning sparse + low-rank decompositions as an extension of classical graphical models. As such, it has received a lot of attention, prompting several published discussions, for example Candès and Soltanolkotabi (2012), Lauritzen and Meinshausen (2012), and Wainwright (2012). Notably, the high-dimensional consistency analysis of Chandrasekaran et al. (2012) generalizes the proof technique previously employed for graphical models. Hence, unsurprisingly, one of their central assumptions is a generalization of the irrepresentability condition.

Surprisingly, little effort has been devoted to generalizing sparse + low-rank models to broader domains of variables. The particular case of multivariate binary models featuring a sparse + low-rank decomposition is related to Item Response Theory (IRT, see for example Hambleton et al. (1991)). In IRT the observed binary variables (test items) are usually assumed to be conditionally independent given some continuous latent variable (trait of the test taker). Chen et al. (2018) argued that measuring conditional dependence by means of sparse + low-rank models might improve results from classical IRT. They estimate their models using pseudo-likelihood, a strategy that they also proposed in an earlier work, see Chen et al. (2016).

Chen et al. (2016) show that their estimator recovers the algebraic structure, that is, the conditional graph structure and the number of latent variables, with probability tending to one. However, their analysis only allows a growing number of sample points while the number of variables is kept fixed. Their result thus departs from the tradition of analyzing the more challenging high-dimensional setting, where the number of variables is also explicitly tracked.

##### Placement of our work.

Our main contribution is a high-dimensional consistency analysis of a likelihood estimator for multivariate binary sparse + low-rank models. Furthermore, our analysis is the first to show parametric consistency of the likelihood estimates and to provide explicit rates for this type of model. It thus complements the existing literature. Our other contribution is the connection to a particular type of relaxed maximum-entropy problem that we established in the introduction. We have shown that this type of relaxation leads to an interpretation as the marginal model of a conditional Gaussian distribution. Interestingly, this has not drawn attention before, though our semidefiniteness constraints can be obtained as special cases of the general relaxed maximum-entropy problem discussed in Dudík and Schapire (2006).

## 3 Parametric and Algebraic Consistency

This section constitutes the main part of this paper. Here, we discuss assumptions that lead to consistency properties of the solution to the likelihood problem ML and state our consistency result. We are interested in the high-dimensional setting, where the number of samples $n$, the number of observed binary variables $d$, and the number of latent conditional Gaussian variables $l$ are allowed to grow simultaneously. Meanwhile, some other problem-specific quantities that concern the curvature of the problem are assumed to be fixed. Hence, we keep the geometry of the problem fixed.

For studying the consistency properties, we use a slight reformulation of Problem ML from the introduction. First, we switch from a maximization to a minimization problem, and let $\ell$ denote the negative log-likelihood from now on. Furthermore, we change the representation of the regularization parameters, namely

$$(S_n, L_n) = \operatorname*{argmin}_{S, L}\ \ell(S + L) + \lambda_n\big(\gamma \|S\|_1 + \mathrm{tr}\, L\big) \quad \text{s.t.} \quad L \succeq 0, \tag{SL}$$

where $\gamma$ controls the trade-off between the two regularization terms and $\lambda_n$ controls the trade-off between the negative log-likelihood term and the regularization terms.

We want to point out that our consistency proof follows the lines of the seminal work of Chandrasekaran et al. (2012), who investigate a convex optimization problem for the parameter estimation of a model with observed and latent Gaussian variables. The main difference to the Ising model is that the Gaussian case requires a positive-definiteness constraint on the pairwise interaction parameter matrix, which is necessary for normalizing the density. Furthermore, in the Gaussian case the pairwise interaction parameter matrix is the inverse of the covariance matrix. This is no longer the case for the Ising model, see Loh and Wainwright (2012).

In this work, we want to answer the question whether it is possible to recover the parameters from data drawn from a hypothetical true model distribution parametrized by $S^\star$ and $L^\star$. We focus on two key concepts of successful recovery in an asymptotic sense with high probability. The first is parametric consistency. This means that $(S_n, L_n)$ should be close to $(S^\star, L^\star)$ w.r.t. some norm. Since the regularizer is the composed norm $\gamma\|S\|_1 + \mathrm{tr}\, L$, a natural norm for establishing parametric consistency is its dual norm

$$\|(S, L)\|_\gamma = \max\Big\{ \frac{\|S\|_\infty}{\gamma},\ \|L\| \Big\}.$$

The second type of consistency that we study is algebraic consistency. It holds if $S_n$ recovers the true sparse support of $S^\star$, and if $L_n$ has the same rank as $L^\star$.
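In code, this dual norm is a one-liner: the maximum-absolute-entry norm (dual of the $\ell_1$-norm) scaled by $1/\gamma$, compared against the spectral norm (dual of the nuclear norm). A sketch, with the function name being ours:

```python
import numpy as np

def gamma_norm(S, L, gamma):
    """Dual norm ||(S, L)||_gamma = max(||S||_inf / gamma, ||L||_spectral).
    ||S||_inf is the largest absolute entry; ord=2 gives the spectral norm."""
    return max(np.abs(S).max() / gamma, np.linalg.norm(L, 2))
```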

In the following we discuss the assumptions for our consistency result. For that we proceed as follows: First, we discuss the requirements for parametric consistency of the compound matrix in Section 3.1. Next, we work out the three central assumptions that are sufficient for individual recovery of and in Section 3.2. We state our consistency result in Section 3.3. Finally, in Section 3.4 we outline the proof, the details of which can be found in Section 5.

### 3.1 Parametric consistency of the compound matrix

In this section, we briefly sketch how the negative log-likelihood part of the objective function in Problem SL drives the compound matrix $\Theta_n = S_n + L_n$ that is constructed from the solution to parametric consistency with high probability. We only consider the negative log-likelihood part because we assume that the relative weight of the regularization terms in the objective function goes to zero as the number of sample points goes to infinity. This implies that the estimated compound matrix is not affected much by the regularization terms since they contribute mostly small (but important) adjustments. More specifically, the $\ell_1$-norm regularization shrinks the entries of $S_n$ such that entries of small magnitude are driven to zero, so that $S_n$ will likely be a sparse matrix. Likewise, the trace norm (or nuclear norm) can be thought of as diminishing the singular values of $L_n$ such that small singular values become zero, that is, $L_n$ will likely be a low-rank matrix.

The negative log-likelihood function $\ell$ is strictly convex and thus has a unique minimizer. We can assume that this minimizer is the true compound matrix $\Theta^\star = S^\star + L^\star$. Let $\Theta_n = S_n + L_n$ and $\Delta_\Theta = \Theta_n - \Theta^\star$. Then, consistent recovery of the compound matrix is essentially equivalent to the estimation error $\Delta_\Theta$ being small. Now, consider the Taylor expansion

$$\ell(\Theta^\star + \Delta_\Theta) = \ell(\Theta^\star) + \nabla \ell(\Theta^\star)^\top \Delta_\Theta + \tfrac{1}{2}\, \Delta_\Theta^\top \nabla^2 \ell(\Theta^\star)\, \Delta_\Theta + R(\Delta_\Theta)$$

with remainder $R(\Delta_\Theta)$. It turns out that if the number of samples is sufficiently large, then the gradient $\nabla \ell(\Theta^\star)$ is small with high probability, and if $\Delta_\Theta$ is small, then the remainder is also small. In this case, the Taylor expansion implies that locally around the true parameters the negative log-likelihood is well approximated by the quadratic form induced by its Hessian, namely

$$\ell(\Theta^\star + \Delta_\Theta) \approx \ell(\Theta^\star) + \tfrac{1}{2}\, \Delta_\Theta^\top \nabla^2 \ell(\Theta^\star)\, \Delta_\Theta.$$

This quadratic form is obviously minimized at $\Delta_\Theta = 0$, which would entail consistent recovery of $\Theta^\star$ in a parametric sense. However, this does not explain how the sparse and low-rank components of $\Theta_n$ can be recovered consistently. In the next section we elaborate sufficient assumptions for the consistent recovery of these components.
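This local quadratic behavior can be checked numerically on a tiny instance: for an Ising model, the gradient of the negative log-likelihood is $\mathbb{E}_\Theta[\Phi] - \Phi_n$ and the Hessian is the covariance of the sufficient statistics. A sketch with arbitrarily chosen parameters (all names are ours):

```python
import itertools
import numpy as np

# Compare the exact change of the negative log-likelihood with its
# gradient-plus-Hessian prediction on a tiny Ising model (d = 3).
d = 3
xs = np.array(list(itertools.product([-1.0, 1.0], repeat=d)))
feats = np.einsum('ki,kj->kij', xs, xs).reshape(len(xs), -1)  # vec(Phi(x)) per row

def nll(theta_vec, phi_n_vec):
    """Negative log-likelihood a(theta) - <theta, Phi_n> in vectorized form."""
    logits = feats @ theta_vec
    m = logits.max()
    return m + np.log(np.exp(logits - m).sum()) - theta_vec @ phi_n_vec

rng = np.random.default_rng(1)
theta = 0.1 * rng.standard_normal((d, d))
tv = ((theta + theta.T) / 2).reshape(-1)
phi_n = feats.mean(axis=0)              # sample means of a pseudo-sample

p = np.exp(feats @ tv); p /= p.sum()    # model distribution at theta
grad = feats.T @ p - phi_n              # gradient: E_theta[Phi] - Phi_n
mean = feats.T @ p
hess = (feats.T * p) @ feats - np.outer(mean, mean)  # Hessian: Cov_theta(Phi)

delta = 1e-3 * rng.standard_normal(d * d)
exact = nll(tv + delta, phi_n) - nll(tv, phi_n)
quad = grad @ delta + 0.5 * delta @ hess @ delta
```

The remainder `exact - quad` is third order in the perturbation and thus tiny, while the Hessian, being a covariance matrix, is positive semidefinite.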

### 3.2 Assumptions for individual recovery

Consistent recovery of the components, more specifically parametric consistency of the solutions $S_n$ and $L_n$, requires the two errors $\Delta_S = S_n - S^\star$ and $\Delta_L = L_n - L^\star$ to be small (in their respective norms). Both errors together form the joint error $\Delta_\Theta = \Delta_S + \Delta_L$. Note though that the minimum of the quadratic form from the previous section at $\Delta_\Theta = 0$ does not imply that the individual errors $\Delta_S$ and $\Delta_L$ are small. We can only hope for parametric consistency of $S_n$ and $L_n$ if they are the unique solutions to Problem SL.

For uniqueness of the solutions we need to study optimality conditions. Problem SL is the Lagrangian form of the constrained problem

$$\min\ \ell(S + L) \quad \text{s.t.} \quad \|S\|_1 \le c_n \ \text{ and } \ \|L\|_* \le t_n$$

for suitable regularization parameters $c_n$ and $t_n$, where we have neglected the positive-semidefiniteness constraint on $L$. The constraints can be thought of as convex relaxations of constraints that require $S$ to have a certain sparsity and require $L$ to have at most a certain rank. That is, $S$ should be contained in the set of symmetric matrices of a given sparsity and $L$ should be contained in the set of symmetric low-rank matrices. To formalize these sets we briefly review the varieties of sparse and low-rank matrices.

##### Sparse matrix variety.

For $M \in \mathrm{Sym}(d)$ the support is defined as

$$\mathrm{supp}(M) = \{(i, j) \in [d] \times [d] : M_{ij} \ne 0\},$$

and the variety of sparse symmetric matrices with at most $s$ non-zero entries is given as

$$\mathcal{S}(s) = \{S \in \mathrm{Sym}(d) : |\mathrm{supp}(S)| \le s\}.$$

Any matrix $S$ with $|\mathrm{supp}(S)| = s$ is a smooth point of $\mathcal{S}(s)$ with tangent space

$$\Omega(S) = \{M \in \mathrm{Sym}(d) : \mathrm{supp}(M) \subseteq \mathrm{supp}(S)\}.$$
##### Low-rank matrix variety.

The variety of matrices with rank at most $r$ is given as

$$\mathcal{L}(r) = \{L \in \mathrm{Sym}(d) : \mathrm{rank}(L) \le r\}.$$

Any matrix $L$ with $\mathrm{rank}(L) = r$ is a smooth point of $\mathcal{L}(r)$ with tangent space

$$T(L) = \{U X^\top + X U^\top : X \in \mathbb{R}^{d \times r}\},$$

where $L = U D U^\top$ is the restricted eigenvalue decomposition of $L$, that is, $U \in \mathbb{R}^{d \times r}$ has orthonormal columns and $D \in \mathbb{R}^{r \times r}$ is diagonal.
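The orthogonal projection onto the tangent space $T(L)$ has a convenient closed form in terms of $P_U = U U^\top$, namely $P_{T(L)}(M) = P_U M + M P_U - P_U M P_U$; this is a standard identity, and the sketch below (with our own names) checks its two defining properties:

```python
import numpy as np

def tangent_projection(U, M):
    """Orthogonal projection of M onto T(L) = {U X' + X U'} for L = U D U'
    with orthonormal U, via P_U M + M P_U - P_U M P_U."""
    P = U @ U.T
    return P @ M + M @ P - P @ M @ P
```

Matrices of the form $U X^\top + X U^\top$ are fixed points of the projection, and applying it twice is the same as applying it once.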

Next, we formulate conditions that ensure uniqueness in terms of the tangent spaces of the introduced varieties.

##### Transversality.

Remember that we understand the constraints in the constrained formulation of Problem SL as convex relaxations of constraints of the form $S \in \mathcal{S}(s)$ and $L \in \mathcal{L}(r)$. Because the negative log-likelihood function is a function of $S + L$, its gradient with respect to $S$ and its gradient with respect to $L$ coincide. Hence, the first-order optimality conditions for the non-convex problem require that the gradient of the negative log-likelihood function needs to be normal to $\mathcal{S}(s)$ and $\mathcal{L}(r)$ at any (locally) optimal solutions $S_n$ and $L_n$, respectively. If the solution is not (locally) unique, then basically the only way to get an alternative optimal solution that violates (local) uniqueness is by translating $S_n$ and $L_n$ by an element that is tangential to $\mathcal{S}(s)$ at $S_n$ and tangential to $\mathcal{L}(r)$ at $L_n$, respectively. Thus, it is necessary for (local) uniqueness of the optimal solution that such a tangential direction does not exist. Hence, the tangent spaces $\Omega(S_n)$ and $T(L_n)$ need to be transverse, that is, $\Omega(S_n) \cap T(L_n) = \{0\}$. Intuitively, if we require that transversality holds for the true parameters, that is, $\Omega(S^\star) \cap T(L^\star) = \{0\}$, then provided that $(S_n, L_n)$ is close to $(S^\star, L^\star)$, the tangent spaces $\Omega(S_n)$ and $T(L_n)$ should also be transverse.

We do not require transversality explicitly since it is implied by stronger assumptions that we motivate and state in the following. In particular, we want the (locally) optimal solutions $S_n$ and $L_n$ not only to be unique, but also to be stable under perturbations. This stability needs some additional concepts and notation that we introduce now.

##### Stability assumption.

Here, stability means that if we perturb $S^\star$ and $L^\star$ in the respective tangential directions, then the gradient of the negative log-likelihood function should be far from being normal to the sparse and low-rank matrix varieties at the perturbed $S^\star$ and $L^\star$, respectively. As for transversality, we require stability for the true solution and expect that it carries over to the optimal solutions $S_n$ and $L_n$, provided they are close. More formally, we consider perturbations of $S^\star$ in directions from the tangent space $\Omega = \Omega(S^\star)$, and perturbations of $L^\star$ in directions from tangent spaces to the low-rank variety that are close to the true one $T = T(L^\star)$. The reason for considering tangent spaces close to $T$ is that there are low-rank matrices close to $L^\star$ that are not contained in $T$ because the low-rank matrix variety is locally curved at any smooth point.

Now, in light of a Taylor expansion, the change of the gradient is locally governed by the data-independent Hessian $H^\star = \nabla^2 \ell(\Theta^\star)$ of the negative log-likelihood function at $\Theta^\star$. To make sure that the gradient at the tangentially perturbed (true) solution cannot be normal to the respective matrix varieties, we require that it has a significant component in the tangent spaces at the perturbed solution. This is achieved if the minimum gains of the Hessian in the respective tangential directions

$$\alpha_\Omega = \min_{M \in \Omega,\, \|M\|_\infty = 1} \|P_\Omega H^\star M\|_\infty \quad \text{and} \quad \alpha_{T,\varepsilon} = \min_{\rho(T, T') \le \varepsilon}\ \min_{M \in T',\, \|M\| = 1} \|P_{T'} H^\star M\|$$

are large, where the $T'$ are tangent spaces to the low-rank matrix variety that are close to $T$ in terms of the twisting

$$\rho(T, T') = \max_{\|M\| = 1} \big\| [P_T - P_{T'}](M) \big\|$$

between these subspaces, given some $\varepsilon > 0$. Here, we denote projections onto a matrix subspace by $P$ subindexed by the subspace.

Note though that only requiring $\alpha_\Omega$ and $\alpha_{T,\varepsilon}$ to be large is not enough if the maximum effects of the Hessian in the respective normal directions

$$\delta_\Omega = \max_{M \in \Omega,\, \|M\|_\infty = 1} \|P_{\Omega^\perp} H^\star M\|_\infty \quad \text{and} \quad \delta_{T,\varepsilon} = \max_{\rho(T, T') \le \varepsilon}\ \max_{M \in T',\, \|M\| = 1} \|P_{T'^\perp} H^\star M\|$$

are also large, because then the gradient of the negative log-likelihood function at the perturbed (true) solution could still be almost normal to the respective varieties. Here, $\Omega^\perp$ is the normal space orthogonal to $\Omega$, and $T'^\perp$ is the space orthogonal to $T'$.

Overall, we require that $\alpha = \min\{\alpha_\Omega, \alpha_{T,\varepsilon}\}$ is bounded away from zero and that the ratio $\delta / \alpha$ is bounded from above, where $\delta = \max\{\delta_\Omega, \delta_{T,\varepsilon}\}$. Note that in our definitions of the minimum gains and maximum effects we used the $\infty$-norm and the spectral norm, which are dual to the $\ell_1$-norm and the nuclear norm, respectively. Ultimately, we want to express the stability assumption in the $\gamma$-norm, which is the dual norm to the regularization term in Problem SL. For that we need to compare the $\infty$-norm and the spectral norm. This can be accomplished by using norm compatibility constants that are given as the smallest possible $\xi(T(L))$ and $\mu(\Omega(S))$ such that

$$\|M\|_\infty \le \xi(T(L))\, \|M\| \ \text{ for all } M \in T(L), \quad \text{and} \quad \|N\| \le \mu(\Omega(S))\, \|N\|_\infty \ \text{ for all } N \in \Omega(S),$$

where $\Omega(S)$ and $T(L)$ are the tangent spaces at points $S$ and $L$ from the sparse matrix variety and the low-rank matrix variety, respectively. Let us now specify our assumptions in terms of the stability constants from above.

###### Assumption 1 (Stability)

We set $\alpha = \min\{\alpha_\Omega, \alpha_{T,\varepsilon}\}$ and assume that

1. $\alpha > 0$, and

2. there exists $\nu \in (0, 1]$ such that $\delta / \alpha \le 1 - \nu$, where $\delta = \max\{\delta_\Omega, \delta_{T,\varepsilon}\}$.

The second assumption is essentially a generalization of the well-known irrepresentability condition, see for example Ravikumar et al. (2011). The next assumption ensures that there are values of $\gamma$ for which stability can be expressed in terms of the $\gamma$-norm, that is, a coupled version of stability.

##### γ-feasibility assumption.

The norm compatibility constants $\mu(\Omega)$ and $\xi(T)$ allow further insights into the realm of problems for which consistent recovery is possible. First, it can be shown, see Chandrasekaran et al. (2011), that $\mu(\Omega) \ge \mathrm{deg}_{\max}(S^\star)$, where $\mathrm{deg}_{\max}(S^\star)$ is the maximum number of non-zero entries per row/column of $S^\star$, that is, the maximum degree constitutes a lower bound for $\mu(\Omega)$. Intuitively, if $\mathrm{deg}_{\max}(S^\star)$ is large, then the non-zero entries of the sparse matrix $S^\star$ could be concentrated in just a few rows/columns and thus $S^\star$ could be of low rank. Hence, in order not to confuse $S^\star$ with a low-rank matrix we want this lower bound, the maximum degree, to be small.

Second, $\xi(T)$ constitutes a lower bound on the incoherence of the matrix $L^\star$ (up to a factor of two). Incoherence measures how well a subspace is aligned with the standard coordinate axes. Formally, the incoherence of a subspace $U \subseteq \mathbb{R}^d$ is defined as $\mathrm{coh}(U) = \max_i \|P_U e_i\|$, where the $e_i$ are the standard basis vectors of $\mathbb{R}^d$. It is known, see again Chandrasekaran et al. (2011), that

$$\xi(T) = \xi(T(L^\star)) \le 2\, \mathrm{coh}(L^\star),$$

where $\mathrm{coh}(L^\star)$ is the incoherence of the subspace spanned by the rows/columns of the symmetric matrix $L^\star$. A large value means that the row/column space of $L^\star$ is well aligned with the standard coordinate axes. In this case, the entries of $L^\star$ do not need to be spread out, and thus $L^\star$ could have many zero entries, that is, it could be a sparse matrix. Hence, in order not to confuse $L^\star$ with a sparse matrix we want the lower bound on the incoherence, or equivalently $\xi(T)$, to be small.

Altogether, we want both $\mu(\Omega)$ and $\xi(T)$ to be small to avoid confusion of the sparse and the low-rank parts. Now, in Problem SL, the parameter $\gamma$ controls the trade-off between the regularization term that promotes sparsity, that is, the $\ell_1$-norm term, and the regularization term that promotes low rank, that is, the nuclear norm term. It turns out that the range of values for $\gamma$ that are feasible for our consistency analysis becomes larger if $\mu(\Omega)$ and $\xi(T)$ are small. Indeed, the following assumption ensures that the range of values of $\gamma$ that are feasible for our consistency analysis is non-empty.
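The bound $\xi(T) \le 2\,\mathrm{coh}(L^\star)$ can be probed numerically: random search over the tangent space yields a lower bound on $\xi(T)$, which must stay below $2\,\mathrm{coh}(L^\star)$. A small spot check with arbitrary data (exact maximization would be harder; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
d, r = 8, 2

# A low-rank symmetric PSD matrix L = U D U' and its column-space incoherence.
U, _ = np.linalg.qr(rng.standard_normal((d, r)))
L = U @ np.diag([3.0, 1.5]) @ U.T
P = U @ U.T
coh = max(np.linalg.norm(P[:, i]) for i in range(d))  # max_i ||P_U e_i||

def proj_T(M):
    """Orthogonal projection onto the tangent space T(L)."""
    return P @ M + M @ P - P @ M @ P

# Random search for a lower bound on xi(T) = max {||M||_inf : M in T, ||M|| = 1}.
xi_lb = 0.0
for _ in range(500):
    M = rng.standard_normal((d, d))
    M = proj_T((M + M.T) / 2)
    M /= np.linalg.norm(M, 2)       # normalize the spectral norm
    xi_lb = max(xi_lb, np.abs(M).max())
```

By construction `xi_lb` underestimates $\xi(T)$, so it must respect the incoherence bound.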

###### Assumption 2 (γ-feasibility)

The range $[\gamma_{\min}, \gamma_{\max}]$ with

$$\gamma_{\min} = \frac{3\beta(2 - \nu)\,\xi(T)}{\nu\alpha} \quad \text{and} \quad \gamma_{\max} = \frac{\nu\alpha}{2\beta(2 - \nu)\,\mu(\Omega)}$$

is non-empty. Here, we use the additional problem-specific constant $\beta = \max\{\beta_\Omega, \beta_T\}$ with

$$\beta_\Omega = \max_{M \in \Omega,\, \|M\| = 1} \|H^\star M\| \quad \text{and} \quad \beta_T = \max_{\rho(T, T') \le \xi(T)/2}\ \max_{M \in T',\, \|M\|_\infty = 1} \|H^\star M\|_\infty.$$

The $\gamma$-feasibility assumption is equivalent to

$$\mu(\Omega)\,\xi(T) \le \frac{1}{6}\left(\frac{\nu\alpha}{\beta(2 - \nu)}\right)^2.$$

Note that this upper bound on the product $\mu(\Omega)\,\xi(T)$ is essentially controlled by the product $\nu\alpha$: it is easier to satisfy when the latter product is large. This is well aligned with the stability assumption, because in terms of the stability assumption the good case is that the product $\nu\alpha$ is large, or more specifically that $\alpha$ is large and $\nu$ is close to $1$.

##### Gap assumption.

Intuitively, if the smallest-magnitude non-zero entry $s_{\min}$ of $S^\star$ is too small, then it is difficult to recover the support of $S^\star$. Similarly, if the smallest non-zero eigenvalue $\sigma_{\min}$ of $L^\star$ is too small, then it is difficult to recover the rank of $L^\star$. Hence, we make the following final assumption.

###### Assumption 3 (Gap)

We require that

$$s_{\min} \ge C_S\, \frac{\lambda_n}{\mu(\Omega)} \quad \text{and} \quad \sigma_{\min} \ge C_L\, \frac{\lambda_n}{\xi(T)^2},$$

where $C_S$ and $C_L$ are problem-specific constants that are specified more precisely later.

Recall that the regularization parameter $\lambda_n$ controls how strongly the eigenvalues of the solution $L_n$ and the entries of the solution $S_n$ are driven to zero. Hence, the required gaps get weaker as the number of sample points grows, because the parameter $\lambda_n$ goes to zero as $n$ goes to infinity.

### 3.3 Consistency theorem

We state our consistency result using problem-specific data-independent constants $C_1$, $C_2$, and $\kappa$. Their exact definitions can be found alongside the proof in Section 5.1. Also note that the norm compatibility constant $\xi(T)$ is implicitly related to the number of latent variables $l$. This is because $\xi(T) \le 2\, \mathrm{coh}(L^\star)$ as we have seen above, and the incoherence of an $l$-dimensional subspace of $\mathbb{R}^d$ is at least $\sqrt{l/d}$, see Chandrasekaran et al. (2011). Hence, the smaller $l$, the better the upper bound on $\xi(T)$ can be. Therefore, we track $\xi(T)$ explicitly in our analysis.

[Consistency] Let $S^\star \in \mathrm{Sym}(d)$ be a sparse and let $L^\star \in \mathrm{Sym}(d)$ be a positive-semidefinite low-rank matrix. Denote by $\Omega = \Omega(S^\star)$ and $T = T(L^\star)$ the tangent spaces at $S^\star$ and $L^\star$, respectively, to the variety of symmetric sparse matrices and to the variety of symmetric low-rank matrices. Suppose that we observed $n$ samples drawn from a pairwise Ising model with interaction matrix $\Theta^\star = S^\star + L^\star$ such that the stability assumption, the $\gamma$-feasibility assumption, and the gap assumption hold. Moreover, let $\gamma \in [\gamma_{\min}, \gamma_{\max}]$, and assume that for the number of sample points it holds that

$$n > C_1\, \frac{\kappa}{\xi(T)^4}\, d \log d,$$

and that the regularization parameter is set as

$$\lambda_n = \frac{C_2}{\xi(T)} \sqrt{\frac{\kappa\, d \log d}{n}}.$$

Then, it follows with high probability that the solution $(S_n, L_n)$ to the convex program SL is

• parametrically consistent, that is, the error $\|(S_n - S^\star, L_n - L^\star)\|_\gamma$ is small, and

• algebraically consistent, that is, $S_n$ and $S^\star$ have the same support (actually, the signs of corresponding entries coincide), and $L_n$ and $L^\star$ have the same rank.

### 3.4 Outline of the proof

The proof of Theorem 3.3 is similar to the one given in Chandrasekaran et al. (2012) for latent variable models with observed Gaussians. More generally, it builds on a version of the primal-dual-witness proof technique. The proof consists of the following main steps:

• First, we consider the correct model set whose elements are all parametrically and algebraically consistent under the stability, $\gamma$-feasibility, and gap assumptions. Hence, any solution to our problem, if additionally constrained to this set, is consistent.

• Second, since this model set is non-convex, we consider a simplified and linearized version of it and show that the solution to the problem constrained to the linearized model space is unique and equals the solution from the first step. Since it is the same solution, consistency follows from the first step.

• Third, we show that the solution also solves Problem SL. More precisely, we show that this solution is strictly dual feasible and hence can be used as a witness as required for the primal-dual-witness technique. This implies that it is also the unique solution, with all the consistency properties from the previous steps.

• Finally, we show that the assumptions from Theorem 3.3 entail all those made in the previous steps with high probability. Thereby, the proof is concluded.

## 4 Discussion

Our result, which constitutes the first high-dimensional consistency analysis for sparse + low-rank Ising models, requires slightly more samples (in the sense of an additional logarithmic factor $\log d$, and polynomial probability) than were required for consistent recovery for the sparse + low-rank Gaussian models considered by Chandrasekaran et al. (2012). This is because the strong tail properties of multivariate Gaussian distributions do not hold for multivariate Ising distributions. Hence, it is more difficult to bound the sampling error $\Phi_n - \mathbb{E}[\Phi]$ of the second-moment matrices, which results in weaker probabilistic spectral norm bounds on this sampling error. Under our assumptions, we believe that the sampling complexity, that is, the number of samples required for consistent recovery of sparse + low-rank Ising models, cannot be improved. We also provided a detailed discussion of why all of our assumptions are important.

It would be interesting to test for consistency experimentally, but this is better done using a pseudo-likelihood approach because it avoids the problem of computing costly normalizations. We believe that likelihood and pseudo-likelihood behave similarly, but so far only much weaker guarantees are known for the pseudo-likelihood approach than the ones that we prove here.
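For intuition, the pseudo-likelihood alternative mentioned above replaces the $2^d$-term normalizer with node-wise conditionals: for $x \in \{-1, 1\}^d$ and $p(x) \propto \exp(x^\top \Theta x)$, one has $p(x_i = s \mid x_{-i}) = \sigma\big(4 s \sum_{j \ne i} \Theta_{ij} x_j\big)$ with the logistic function $\sigma$. A minimal sketch of the resulting objective (the function name is ours, and this is not the estimator analyzed in this paper):

```python
import numpy as np

def neg_log_pseudolikelihood(Theta, X):
    """Average node-wise negative log-pseudo-likelihood for samples X in {-1,1}^(n x d)
    under p(x) proportional to exp(x' Theta x), Theta symmetric."""
    Theta = 0.5 * (Theta + Theta.T)
    M = X @ Theta - X * np.diag(Theta)   # entry (k, i): sum_{j != i} Theta_ij x_j
    z = 4.0 * X * M                      # logit of the observed sign at each node
    return float(np.mean(np.logaddexp(0.0, -z)))  # -log sigmoid(z), averaged
```

Each node contributes a logistic-regression-style term, so evaluation costs $O(n d^2)$ instead of $O(2^d)$.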

We gratefully acknowledge financial support from the German Science Foundation (DFG) grant (GI-711/5-1) within the priority program (SPP 1736) Algorithms for Big Data.

## References

• Bach et al. (2012) Francis Bach, Rodolphe Jenatton, Julien Mairal, and Guillaume Obozinski. Optimization with sparsity-inducing penalties. Foundations and Trends in Machine Learning, 4(1):1–106, 2012.
• Candès and Soltanolkotabi (2012) Emmanuel J. Candès and Mahdi Soltanolkotabi. Discussion: Latent variable graphical model selection via convex optimization. The Annals of Statistics, 40(4):1996–2004, 2012.
• Chandrasekaran et al. (2011) Venkat Chandrasekaran, Sujay Sanghavi, Pablo A. Parrilo, and Alan S. Willsky. Rank-sparsity incoherence for matrix decomposition. SIAM Journal on Optimization, 21(2):572–596, 2011.
• Chandrasekaran et al. (2012) Venkat Chandrasekaran, Pablo A. Parrilo, and Alan S. Willsky. Latent variable graphical model selection via convex optimization. The Annals of Statistics, 40(4):1935–1967, 2012.
• Chen et al. (2016) Yunxiao Chen, Xiaoou Li, Jingchen Liu, and Zhiliang Ying. A fused latent and graphical model for multivariate binary data. Technical report, arXiv preprint arXiv:1606.08925, 2016.
• Chen et al. (2018) Yunxiao Chen, Xiaoou Li, Jingchen Liu, and Zhiliang Ying. Robust measurement via a fused latent and graphical item response theory model. Psychometrika, pages 1–25, 2018.
• Cheng et al. (2017) Jie Cheng, Tianxi Li, Elizaveta Levina, and Ji Zhu. High-dimensional mixed graphical models. Journal of Computational and Graphical Statistics, 26(2):367–378, 2017.
• Dudík and Schapire (2006) Miroslav Dudík and Robert E. Schapire. Maximum entropy distribution estimation with generalized regularization. In Conference on Learning Theory (COLT), pages 123–138, 2006.
• Dudík et al. (2004) Miroslav Dudík, Steven J. Phillips, and Robert E. Schapire. Performance guarantees for regularized maximum entropy density estimation. In Conference on Learning Theory (COLT), pages 472–486, 2004.
• Hambleton et al. (1991) Ronald K. Hambleton, Hariharan Swaminathan, and H. Jane Rogers. Fundamentals of item response theory. Sage, 1991.
• Jalali et al. (2011) Ali Jalali, Pradeep Ravikumar, Vishvas Vasuki, and Sujay Sanghavi. On learning discrete graphical models using group-sparse regularization. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), pages 378–387, 2011.
• Jaynes (1957) Edwin T. Jaynes. Information theory and statistical mechanics. Physical Review, 106(4):620–630, 1957.
• Lauritzen (1996) Steffen L. Lauritzen. Graphical models. Oxford University Press, 1996.
• Lauritzen and Meinshausen (2012) Steffen L. Lauritzen and Nicolai Meinshausen. Discussion: Latent variable graphical model selection via convex optimization. The Annals of Statistics, 40(4):1973–1977, 2012.
• Lee and Hastie (2015) Jason D. Lee and Trevor J. Hastie. Learning the structure of mixed graphical models. Journal of Computational and Graphical Statistics, 24(1):230–253, 2015.
• Lee et al. (2015) Jason D. Lee, Yuekai Sun, and Jonathan E. Taylor. On model selection consistency of regularized M-estimators. Electronic Journal of Statistics, 9(1):608–642, 2015.
• Loh and Wainwright (2012) Po-Ling Loh and Martin J. Wainwright. Structure estimation for discrete graphical models: Generalized covariance matrices and their inverses. In Conference on Neural Information Processing Systems (NIPS), pages 2096–2104, 2012.
• Meinshausen and Bühlmann (2006) Nicolai Meinshausen and Peter Bühlmann. High-dimensional graphs and variable selection with the lasso. The Annals of Statistics, 34(3):1436–1462, 2006.
• Ravikumar et al. (2010) Pradeep Ravikumar, Martin J. Wainwright, and John D. Lafferty. High-dimensional Ising model selection using ℓ1-regularized logistic regression. The Annals of Statistics, 38(3):1287–1319, 2010.
• Ravikumar et al. (2011) Pradeep Ravikumar, Martin J. Wainwright, Garvesh Raskutti, and Bin Yu. High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Electronic Journal of Statistics, 5:935–980, 2011.
• Vershynin (2010) Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. Technical report, arXiv preprint arXiv:1011.3027, 2010.
• Wainwright (2009) Martin J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (Lasso). IEEE Trans. Information Theory, 55(5):2183–2202, 2009.
• Wainwright (2012) Martin J. Wainwright. Discussion: Latent variable graphical model selection via convex optimization. The Annals of Statistics, 40(4):1978–1983, 2012.
• Watson (1992) G. Alistair Watson. Characterization of the subdifferential of some matrix norms. Linear Algebra and its Applications, 170:33–45, 1992.

## 5 Proof of the Consistency Theorem

In this section, we prove Theorem 3.3.

### 5.1 Preliminaries

Here, we give an overview of basic definitions and constants that are used throughout the paper. The constants are also necessary to refine the problem-specific constants that appear in the assumptions and claims of Theorem 3.3.

##### Duplication operator.

Throughout we use the duplication operator

 D:Sym(d)→Sym(d)×Sym(d),M↦(M,M).
##### Norms.

During the course of the paper we use several matrix norms. For M∈Sym(d), the ℓ1- and the nuclear norm are given by

 ∥M∥1=∑(i,j)∈[d]×[d]|Mij|, and ∥M∥∗=∑i∈[d]|σi|,

where σ1,…,σd are the singular values of M. Note that for M∈Sym(d) the singular values are the absolute values of the eigenvalues, so that ∥M∥∗ equals the sum of the absolute eigenvalues of M. We also use the respective dual norms. They are the ℓ∞- and the spectral norm given by

 ∥M∥∞=max(i,j)∈[d]×[d]|Mij|, and ∥M∥=max∥v∥2=1∥Mv∥2=maxi∈[d]|σi|,

where ∥⋅∥2 is the standard Euclidean norm for vectors.
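These norms and their duality relations can be checked numerically. The following sketch (using NumPy on hypothetical random symmetric test matrices, not matrices from the paper) verifies the Hölder-type duality inequalities and the fact that, for symmetric M, the nuclear norm is the sum of the absolute eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
# Random symmetric test matrices (illustration only)
M = rng.standard_normal((d, d)); M = (M + M.T) / 2
N = rng.standard_normal((d, d)); N = (N + N.T) / 2

norm_1 = np.abs(M).sum()                 # elementwise l1-norm
norm_nuc = np.linalg.norm(M, ord='nuc')  # nuclear norm (sum of singular values)
norm_inf = np.abs(M).max()               # elementwise l-infinity norm
norm_spec = np.linalg.norm(M, ord=2)     # spectral norm (largest singular value)

# Hoelder-type duality: |<M, N>| <= ||M||_1 ||N||_inf and |<M, N>| <= ||M||_* ||N||
inner = np.sum(M * N)
assert abs(inner) <= norm_1 * np.abs(N).max() + 1e-9
assert abs(inner) <= norm_nuc * np.linalg.norm(N, ord=2) + 1e-9

# For symmetric M the singular values are the absolute eigenvalues,
# so the nuclear norm equals the sum of |eigenvalues|.
assert np.isclose(norm_nuc, np.abs(np.linalg.eigvalsh(M)).sum())
```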

##### Second-moment matrices and norm of the Hessian.

In the introduction we used Φ̂ and Φ⋆ which actually are the empirical and the population version of the second-moment matrix, that is, Φ̂=(1/n)∑ni=1x(i)(x(i))⊤ and Φ⋆=E[xx⊤], where the expectation is taken w.r.t. the true Ising model distribution with parameter matrix Θ⋆. Note that the gradient of the negative log-likelihood ℓ satisfies ∇ℓ(Θ)=Φ(Θ)−Φ̂, where we denoted Φ(Θ)=EΘ[xx⊤]. Moreover, we denote the Hessian of ℓ at Θ⋆ as H⋆ and its operator norm is given by

 ∥H⋆∥=maxM∈Sym(d):∥M∥=1∥H⋆M∥.
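For small d, a second-moment matrix of the form Φ(Θ)=EΘ[xx⊤] can be computed by brute-force enumeration of the sample space. The sketch below assumes the parametrization p(x) ∝ exp(x⊤Θx) on x∈{−1,+1}^d, which may differ from the paper's exact convention by constant factors:

```python
import itertools
import numpy as np

def second_moment(Theta):
    """Brute-force Phi(Theta) = E_Theta[x x^T] for an Ising model
    p(x) proportional to exp(x^T Theta x) on x in {-1,+1}^d."""
    d = Theta.shape[0]
    states = np.array(list(itertools.product([-1.0, 1.0], repeat=d)))
    weights = np.exp(np.einsum('si,ij,sj->s', states, Theta, states))
    probs = weights / weights.sum()  # normalize the Boltzmann weights
    return np.einsum('s,si,sj->ij', probs, states, states)

# Small random symmetric parameter matrix (illustration only)
d = 3
rng = np.random.default_rng(0)
Theta = rng.standard_normal((d, d)) * 0.1
Theta = (Theta + Theta.T) / 2

Phi = second_moment(Theta)
# Phi is symmetric with unit diagonal since x_i^2 = 1
assert np.allclose(Phi, Phi.T)
assert np.allclose(np.diag(Phi), 1.0)
```

For larger d this enumeration is infeasible, which is exactly the normalization cost the pseudo-likelihood approach mentioned in the discussion avoids.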
##### Norm compatibility constant.

Since we will encounter the following constant several times in the proof we give it its own symbol

 ω=max{να/(3β(2−ν)),1}.

Later in Section 5.4, we will show that ω is essentially a norm compatibility constant between the γ-norm and the spectral norm.

##### Problem-specific constants.

Besides the norm compatibility constant ω we frequently use some problem-specific constants that we define below. Here, ℓ and r0 are defined in Lemma 5.5.2.

 c0 =2ℓ(r0)ω max{1,να/(2β(2−ν))}²
 c1 =max{1,να/(2β(2−ν))}⁻¹ r0/2
 c2 =(40/α+1)∥H⋆∥
 c3 =(6(2−ν)/ν+1)c2²∥H⋆∥ω
 c4 =c2+3αc2²(2−ν)/(16(3−ν))
 c5 =max{c3,c4}
 c6 =ναc2/(2β(2−ν)).

We now refine the statements of Theorem 3.3.

##### Minimum number of samples required (precise).

We require at least

 n>cκd log d⋅max{∥Φ⋆∥⁻¹, (∥Φ⋆∥ω²/ξ(T)²)[(αν/(32(3−ν)))min{c1/2, ανξ(T)/(128c0(3−ν))}]⁻²}

samples for consistent recovery, where κ is a positive constant that is used to control the probability with which consistent recovery is possible.

##### Choice of λn (precise).

For our consistency analysis we choose the following value for the trade-off parameter between the negative log-likelihood and the regularization terms

 λn=(6(2−ν)/ν)√(cκd log d ∥Φ⋆∥/n)⋅ω/ξ(T).
##### Gap assumption (precise).

The precise gap assumptions on the smallest-magnitude non-zero entry smin of S⋆ and the smallest non-zero eigenvalue σmin of L⋆ are given by

 smin≥c6λn/μ(Ω), and σmin≥c5λn/ξ(T)².

### 5.2 Tangent space lemmas

Low-rank tangent spaces play a fundamental role throughout the proof. Therefore we characterize the tangent spaces at smooth points of the low-rank variety before moving on.

Suppose L∈Sym(d) is a rank-r matrix. Then, the tangent space to the variety of symmetric matrices of rank at most r at L is given by

 T(L)={UX⊤+XU⊤:X∈Rd×r}⊂Sym(d)

where L=UDU⊤ is the (restricted) eigenvalue decomposition of L, that is, U∈Rd×r has orthonormal columns and D∈Rr×r is diagonal with the non-zero eigenvalues on the diagonal. The tangent space at L is given by the span of all tangent vectors at zero to smooth curves γ initialized at L, that is, γ(0)=L. Because L has rank r it is a smooth point of the variety and we can assume that γ(t)=A(t)sign(D)A(t)⊤ with rank-r matrices A(t)∈Rd×r for all t, where sign(D) is the diagonal matrix whose diagonal entries are the signs of the eigenvalues of L, that is, they are in {−1,+1}. We can assume the signs of the eigenvalues along the curve to be fixed because we only consider smooth curves. In particular γ(0)=A(0)sign(D)A(0)⊤=L, so it must hold A(0)=U|D|1/2. Now, by the chain rule it holds

 γ′(0) =A(0)sign(D)A′(0)⊤+A′(0)sign(D)A(0)⊤ =U|D|1/2sign(D)A′(0)⊤+A′(0)sign(D)|D|1/2U⊤

We still need to show that A′(0) can take arbitrary values in Rd×r. To do so, for any X∈Rd×r consider the curve given by A(t)=U|D|1/2+tX|D|−1/2sign(D), which has rank r for sufficiently small t since A(0)=U|D|1/2 has rank r and the curve is smooth. Moreover, it holds A′(0)=X|D|−1/2sign(D). Now with this particular choice of A′(0), since

 A′(0)sign(D)|D|1/2=X|D|−1/2sign(D)sign(D)|D|1/2=X

the tangential vector of the corresponding curve at zero is γ′(0)=UX⊤+XU⊤.

Note that the variety of symmetric rank-r matrices has dimension dr−r(r−1)/2. Since L is a smooth point in the variety the tangent space T(L) has the same dimension.
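The dimension of the tangent space can be sanity-checked numerically by computing the rank of the linear map X ↦ UX⊤+XU⊤ whose image is T(L). The following sketch uses NumPy with an arbitrary orthonormal U (illustrative values, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 6, 2

# Rank-r symmetric matrix L = U D U^T with orthonormal U
U, _ = np.linalg.qr(rng.standard_normal((d, r)))
D = np.diag(rng.standard_normal(r))
L = U @ D @ U.T

# The tangent space T(L) is the image of X -> U X^T + X U^T.
# Stack the images of the standard basis of R^{d x r} as columns
# and compute the rank of the resulting linear map.
cols = []
for i in range(d):
    for j in range(r):
        X = np.zeros((d, r)); X[i, j] = 1.0
        cols.append((U @ X.T + X @ U.T).ravel())
dim_T = np.linalg.matrix_rank(np.column_stack(cols))

# Matches d*r - r*(r-1)/2, the dimension of the rank-r symmetric variety;
# the kernel consists of X = U A with antisymmetric A.
assert dim_T == d * r - r * (r - 1) // 2
```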

One consequence of the form of the tangent spaces is the following lemma that concerns the norms of projections on certain tangent spaces and their orthogonal complements.

For any two tangent spaces Ω and T at any smooth points w.r.t. the varieties of sparse and of low-rank symmetric matrices, respectively, we can bound the norms of projections of matrices M,N∈Sym(d) in the following manner:

 ∥PΩM∥∞≤∥M∥∞ and ∥PΩ⊥M∥∞≤∥M∥∞,
 ∥PTN∥≤2∥N∥ and ∥PT⊥N∥≤∥N∥.

In particular, for Y=Ω×T we have

 ∥PY(M,N)∥γ≤2∥(M,N)∥γ and ∥PY⊥(M,N)∥γ≤∥(M,N)∥γ.

Recall that from Lemma 5.2 we have for smooth points L of rank r that

 T=T(L)={UX⊤+XU⊤:X∈Rd×r}⊂Sym(d)

where L=UDU⊤ is the (restricted) eigenvalue decomposition of L. Then, we have more explicitly that

 PTN=PUN+NPU−PUNPU=PUN+(I−PU)NPU

where PU=UU⊤ projects onto the column space of U. Note that PTN∈T since

 PUN+NPU−PUNPU =PU(N−(1/2)NPU)+(N⊤−(1/2)PUN⊤)PU =U(U⊤(N−(1/2)NPU))+((N⊤−(1/2)PUN⊤)U)U⊤,

and that N−PTN is orthogonal to T since

 N−PTN=N−PUN−NPU+PUNPU=(I−PU)N(I−PU)

and since for any X∈Rd×r we have

 ⟨(I−PU)N(I−PU),UX⊤+XU⊤⟩ =tr((I−PU)N(I−PU)UX⊤)+tr((I−PU)N(I−PU)XU⊤) =tr((I−PU)N(I−PU)XU⊤) =tr(XU⊤(I−PU)N(I−PU)) =tr(((I−PU)UX⊤)⊤N(I−PU)) =0

where the second and last equality follow from (I−PU)U=0, and we used the cyclic property of the trace in the third equality. Thus, PTN is indeed the orthogonal projection of N onto T. Now, by sub-multiplicativity of the spectral norm

 ∥PTN∥ ≤∥PUN∥+∥(I−PU)NPU∥ ≤∥PU∥∥N∥+∥(I−PU)∥∥N∥∥PU∥≤2∥N∥

since ∥PU∥≤1 and ∥I−PU∥≤1.
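The projection formula and the bound ∥PTN∥≤2∥N∥ can be illustrated numerically. The sketch below (NumPy, with a random symmetric N and an arbitrary orthonormal U, values chosen only for illustration) checks that the residual N−PTN is orthogonal to the tangent space and that the spectral-norm bound holds:

```python
import numpy as np

rng = np.random.default_rng(2)
d, r = 6, 2

U, _ = np.linalg.qr(rng.standard_normal((d, r)))
PU = U @ U.T  # orthogonal projection onto the column space of U

N = rng.standard_normal((d, d)); N = (N + N.T) / 2

# Projection onto T(L) via the formula from the proof
PT_N = PU @ N + N @ PU - PU @ N @ PU
residual = N - PT_N  # equals (I - PU) N (I - PU)

I = np.eye(d)
assert np.allclose(residual, (I - PU) @ N @ (I - PU))

# Residual is orthogonal to every tangent vector U X^T + X U^T
X = rng.standard_normal((d, r))
tangent = U @ X.T + X @ U.T
assert abs(np.sum(residual * tangent)) < 1e-9

# Spectral-norm bound from the lemma: ||P_T N|| <= 2 ||N||
assert np.linalg.norm(PT_N, 2) <= 2 * np.linalg.norm(N, 2) + 1e-9
```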