# Graphical Models for Non-Negative Data Using Generalized Score Matching

A common challenge in estimating parameters of probability density functions is the intractability of the normalizing constant. While in such cases maximum likelihood estimation may be implemented using numerical integration, the approach becomes computationally intensive. In contrast, the score matching method of Hyvärinen (2005) avoids direct calculation of the normalizing constant and yields closed-form estimates for exponential families of continuous distributions over R^m. Hyvärinen (2007) extended the approach to distributions supported on the non-negative orthant R_+^m. In this paper, we give a generalized form of score matching for non-negative data that improves estimation efficiency. We also generalize the regularized score matching method of Lin et al. (2016) for non-negative Gaussian graphical models, with improved theoretical guarantees.


## 1 Introduction

Graphical models (Lauritzen, 1996) characterize the relationships among random variables X₁, …, X_m indexed by the nodes of a graph G = (V, E); here, E is the set of edges in G. When the graph is undirected, two variables X_j and X_k are required to be conditionally independent given all other variables if there is no edge between nodes j and k. The smallest graph such that this property holds is called the conditional independence graph of the random vector X = (X₁, …, X_m). See Drton and Maathuis (2017) for a more detailed introduction to these and other graphical models.

Largely due to their tractability, Gaussian graphical models (GGMs) have gained great popularity. The conditional independence graph of a multivariate normal vector is determined by the inverse covariance matrix K = Σ⁻¹, also known as the concentration matrix. More specifically, X_j and X_k are conditionally independent given all other variables if and only if the (j,k)-th and (k,j)-th entries of K are both zero. This simple relation underlies a rich literature on GGMs, including Drton and Perlman (2004), Meinshausen and Bühlmann (2006), Yuan and Lin (2007) and Friedman et al. (2008), among others.

Recent work has provided tractable procedures also for non-Gaussian graphical models. This includes Gaussian copula models (Liu et al., 2009; Dobra and Lenkoski, 2011; Liu et al., 2012), Ising and other exponential family models (Ravikumar et al., 2010; Chen et al., 2014; Yang et al., 2015), as well as semi- or non-parametric estimation techniques (Fellinghauer et al., 2013; Voorman et al., 2013). In this paper, we focus on non-negative Gaussian random variables, as recently considered by Lin et al. (2016) and Yu et al. (2016).

The probability density function of a non-negative Gaussian random vector is proportional to that of the corresponding Gaussian vector, but restricted to the non-negative orthant. More specifically, let μ ∈ R^m and Σ ∈ R^{m×m} be a mean vector and a (positive definite) covariance matrix for an m-variate random vector. Then X follows a truncated normal distribution with parameters μ and Σ if it has density proportional to exp(−(x − μ)ᵀK(x − μ)/2) on R₊^m, where K = Σ⁻¹ is the inverse covariance parameter. We denote this as X ∼ TN(μ, K). The conditional independence graph of a truncated normal vector is determined by K just as in the Gaussian case: X_j and X_k are conditionally independent given all other variables if κ_jk = κ_kj = 0.

Suppose X is a continuous random vector with distribution P₀, density p₀ with respect to Lebesgue measure, and support R^m, so p₀(x) > 0 for all x ∈ R^m. Let P be a family of distributions with twice differentiable densities that we know only up to a (possibly intractable) normalizing constant. The score matching estimator of P₀ using P as a model is the minimizer of the expected squared distance between the gradient of log p₀ and the gradient of a log-density from P. Formally, we minimize the loss J(P) from (1) below. Although the loss depends on p₀, partial integration can be used to rewrite it in a form that can be approximated by averaging over the sample without knowing p₀. The key advantage of score matching is that the normalizing constant cancels in the gradient of the log-densities. Furthermore, for exponential families, the loss is quadratic in the parameter of interest, making optimization straightforward.

When dealing with distributions supported on a proper subset of R^m, the partial integration arguments underlying the score matching estimator may fail due to discontinuities at the boundary of the support. To circumvent this problem, Hyvärinen (2007) introduced a modified score matching estimator for data supported on R₊^m, minimizing a loss in which boundary effects are dampened by multiplying each partial derivative of the log-densities elementwise with the identity function x_j; see (3) below. Lin et al. (2016) estimate truncated GGMs based on this modification, with an ℓ₁ penalty on the entries of K added to the loss. In this paper, we show that elementwise multiplication with functions other than the identity can lead to improved estimation accuracy, both in simulations and in theory. Following Lin et al. (2016), we then use the proposed generalized score matching framework to estimate the matrix K.

The rest of the paper is organized as follows. Section 2 introduces score matching and our proposed generalized score matching. In Section 3, we apply generalized score matching to exponential families, with univariate truncated Gaussian distributions as an example. Regularized generalized score matching for graphical models is formulated in Section 4. Simulation results are given in Section 5.

### 1.1 Notation

Subscripts are used to refer to entries in vectors and columns in matrices; superscripts are used to refer to rows in matrices. For example, when considering a matrix of observations x ∈ R^{n×m}, with each row a sample of measurements/features for one observation/individual, x_j⁽ⁱ⁾ is the j-th feature for the i-th observation. For a random vector X, X_j refers to its j-th component.

The vectorization of a matrix K ∈ R^{q×r} is obtained by stacking its columns:

 vec(K) = (κ11, …, κq1, κ12, …, κq2, …, κ1r, …, κqr)ᵀ.

For a ∈ [1, ∞], the ℓ_a-norm of a vector v is denoted ‖v‖_a, and the ℓ_∞-norm is ‖v‖_∞ ≡ max_j |v_j|. The (a, b)-operator norm of a matrix A is written |||A|||_{a,b}, with the shorthand |||A|||_a ≡ |||A|||_{a,a}. By definition, |||A|||_∞ = max_i Σ_j |a_ij|. We write the Frobenius norm of A as ‖A‖_F, and its max norm as ‖A‖_max ≡ max_{i,j} |a_ij|.

For a scalar function f, we write ∂_j f(x) for its partial derivative with respect to the j-th component, evaluated at x, and ∂_jj f(x) for the corresponding second partial derivative. For a vector-valued function f(x) = (f₁(x₁), …, f_m(x_m)), let f′(x) ≡ (f₁′(x₁), …, f_m′(x_m)) be the vector of derivatives, and f″(x) likewise.

Throughout the paper, 1_m refers to the all-ones vector of length m. For vectors a, b ∈ R^m, a ∘ b ≡ (a₁b₁, …, a_mb_m) denotes the elementwise (Hadamard) product. Moreover, when we speak of the “density” of a distribution, we mean its probability density function w.r.t. Lebesgue measure. When it is clear from the context, E denotes the expectation under the true distribution P₀.

## 2 Score Matching

### 2.1 Original Score Matching

Suppose X is a random vector taking values in R^m with distribution P₀ and density p₀. Suppose P₀ ∈ P, a family of distributions with twice differentiable densities supported on R^m. The score matching loss for P ∈ P, with density p, is

 J(P) = (1/2) ∫_{R^m} p₀(x) ‖∇log p(x) − ∇log p₀(x)‖₂² dx. (1)

The gradients in (1) can be thought of as gradients with respect to a hypothetical location parameter, evaluated at the observed data (Hyvärinen, 2005). The loss is minimized if and only if P = P₀, which forms the basis for estimation of P₀. Importantly, since the loss depends on p only through its log-gradient, it suffices to know p up to a normalizing constant. Under mild conditions, (1) can be rewritten as

 J(P) = ∫_{R^m} p₀(x) Σ_{j=1}^m [ ∂_jj log p(x) + (∂_j log p(x))²/2 ] dx (2)

plus a constant independent of p. Clearly, the integral in (2) can be approximated by the corresponding sample average without knowing the true density p₀, and minimizing the resulting empirical loss thus yields an estimator of P₀.

### 2.2 Score Matching for Non-Negative Data

When the true density p₀ is supported on a proper subset of R^m, the integration by parts underlying the equivalence of (1) and (2) may fail due to discontinuity at the boundary. For distributions supported on the non-negative orthant R₊^m, Hyvärinen (2007) addressed this issue by instead minimizing the non-negative score matching loss

 J₊(P) = (1/2) ∫_{R₊^m} p₀(x) ‖∇log p(x) ∘ x − ∇log p₀(x) ∘ x‖₂² dx. (3)

This loss can be motivated by considering gradients of the true and model log-densities w.r.t. a hypothetical scale parameter (Hyvärinen, 2007). Under regularity conditions, it can again be rewritten as the expectation (under P₀) of a function that does not involve p₀, thus allowing one to estimate P₀ by minimizing a corresponding sample loss.

### 2.3 Generalized Score Matching for Non-Negative Data

We consider the following generalization of the non-negative score matching loss (3).

###### Definition 1.

Suppose the random vector X has true distribution P₀ with density p₀ that is twice differentiable and supported on R₊^m. Let P₊ be the family of all distributions with twice differentiable densities supported on R₊^m, and suppose P₀ ∈ P₊. Let h₁, …, h_m : R₊ → R₊ be a.e. positive functions that are differentiable almost everywhere, and set h(x) ≡ (h₁(x₁), …, h_m(x_m))ᵀ. For P ∈ P₊ with density p, the generalized h-score matching loss is

 J_h(P) = (1/2) ∫_{R₊^m} p₀(x) ‖∇log p(x) ∘ h(x)^{1/2} − ∇log p₀(x) ∘ h(x)^{1/2}‖₂² dx, (4)

where h(x)^{1/2} ≡ (h₁(x₁)^{1/2}, …, h_m(x_m)^{1/2})ᵀ.

Choosing all h_j(x) = x recovers the loss from (3). The key intuition for our generalized score matching is that we keep the h_j increasing, but instead focus on functions that are bounded or grow rather slowly. This results in more reliable higher moments in the sample loss, leading to better practical performance and improved theoretical guarantees. We note that our approach could also be presented in terms of transformations of the data; compare Section 11 of Parry et al. (2012). In particular, log-transforming positive data into all of R^m and then applying (1) is equivalent to (3).

We will consider the following assumptions:

 (A1) p₀(x) h_j(x_j) ∂_j log p(x) → 0 as x_j ↗ +∞ and as x_j ↘ 0⁺, ∀ x_{−j} ∈ R₊^{m−1}, ∀ p ∈ P₊;
 (A2) E_{p₀} ‖∇log p(X) ∘ h^{1/2}(X)‖₂² < +∞, E_{p₀} ‖(∇log p(X) ∘ h(X))′‖₁ < +∞, ∀ p ∈ P₊,

where “∀ p ∈ P₊” is shorthand for “for all p being the density of some P ∈ P₊”, and the prime symbol denotes component-wise differentiation.

Assumption (A1) validates the integration by parts, and (A2) ensures that the loss is finite. We note that (A1) and (A2) are easily satisfied when we consider exponential families with suitably bounded h_j and h_j′.

The following theorem states that we can rewrite J_h as an expectation (under P₀) of a function that does not depend on p₀, similar to (2).

###### Theorem 2.

Under (A1) and (A2),

 J_h(P) = C + ∫_{R₊^m} p₀(x) Σ_{j=1}^m [ h_j′(x_j) ∂_j log p(x) + h_j(x_j) ∂_jj log p(x) + (1/2) h_j(x_j) (∂_j log p(x))² ] dx, (5)

where C is a constant independent of p.

Given a data matrix x ∈ R₊^{n×m} with rows x⁽¹⁾, …, x⁽ⁿ⁾, we define the sample version of (5) by replacing the integral with the sample average:

 Ĵ_h(p) = (1/n) Σ_{i=1}^n Σ_{j=1}^m [ h_j′(x_j⁽ⁱ⁾) ∂_j log p(x⁽ⁱ⁾) + h_j(x_j⁽ⁱ⁾) ∂_jj log p(x⁽ⁱ⁾) + (1/2) h_j(x_j⁽ⁱ⁾) (∂_j log p(x⁽ⁱ⁾))² ].

We first clarify estimation consistency, in analogy to Corollary 3 in Hyvärinen (2005).

###### Theorem 3.

Consider a model {p_θ : θ ∈ Θ} with parameter space Θ ⊆ R^r, and suppose that the true data-generating distribution P₀ has density p₀ = p_{θ₀} for some θ₀ ∈ Θ. Assume that J_h(p_θ) = J_h(p_{θ₀}) if and only if θ = θ₀. Then the generalized h-score matching estimator θ̂, obtained by minimizing Ĵ_h(p_θ) over Θ, converges in probability to θ₀ as the sample size n goes to infinity.
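As a concrete sketch of the sample loss just defined, the following pure-Python function averages the bracketed term of (5) over the data. The callables `dlogp` and `d2logp` (first and second partials of the model log-density) and all names here are illustrative choices of ours, not notation from the paper:

```python
def empirical_loss(X, h, h_prime, dlogp, d2logp):
    """Sample version of the rewritten loss (5), averaged over rows of X.

    X: list of n samples, each a list of m positive floats.
    h, h_prime: a common h_j and its derivative, applied to every coordinate.
    dlogp(x, j), d2logp(x, j): first/second partials of log p at x.
    """
    n, m = len(X), len(X[0])
    total = 0.0
    for x in X:
        for j in range(m):
            d1 = dlogp(x, j)
            total += (h_prime(x[j]) * d1          # h_j'(x_j) d_j log p(x)
                      + h(x[j]) * d2logp(x, j)    # h_j(x_j) d_jj log p(x)
                      + 0.5 * h(x[j]) * d1 * d1)  # (1/2) h_j(x_j) (d_j log p)^2
    return total / n
```

For instance, for a univariate Gaussian model with mean mu and unit variance, dlogp(x, 0) = -(x[0] - mu) and d2logp(x, 0) = -1.0; the choice h(x) = x then corresponds to the loss of Hyvärinen (2007).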

## 3 Exponential Families

In this section, we study the case where P is an exponential family comprising continuous distributions with support R₊^m. More specifically, we consider densities that are indexed by the canonical parameter θ ∈ Θ ⊆ R^r and have the form

 log p_θ(x) = θᵀt(x) − ψ(θ) + b(x), x ∈ R₊^m.

It is not difficult to show that, under assumptions (A1) and (A2) from Section 2.3, the empirical generalized h-score matching loss above can be rewritten as

 Ĵ_h(p_θ) = (1/2) θᵀΓ(x)θ − g(x)ᵀθ + const, (6)

where Γ(x) ∈ R^{r×r} and g(x) ∈ R^r are sample averages of functions of the data matrix only; the detailed expressions are omitted here.

Define Γ₀ ≡ E₀Γ(X) and g₀ ≡ E₀g(X), and let Σ₀ denote the covariance matrix of the single-sample quantity g(x⁽¹⁾) − Γ(x⁽¹⁾)θ₀.

###### Theorem 4.

Suppose that

• (C1)  Γ(x) is invertible almost surely, and

• (C2)  Γ₀, g₀, and Σ₀ exist and are finite entry-wise.

Then the minimizer of (6) is a.s. unique with solution θ̂ = Γ(x)⁻¹g(x), and

 θ̂ →_{a.s.} θ₀ and √n (θ̂ − θ₀) →_d N_r(0, Γ₀⁻¹Σ₀Γ₀⁻¹).

We note that (C1) holds under mild non-degeneracy conditions on h and the sufficient statistics t.

In the following examples, we assume (A1)–(A2) and (C1)–(C2).

###### Example 5.

Consider univariate (m = 1) truncated Gaussian distributions with unknown mean parameter μ and known variance parameter σ², so

 p_μ(x) ∝ exp(−(x − μ)²/(2σ²)), x ∈ R₊.

Then, given i.i.d. samples X₁, …, X_n, the generalized h-score matching estimator of μ is

 μ̂_h ≡ [ Σ_{i=1}^n ( h(Xᵢ)Xᵢ − σ² h′(Xᵢ) ) ] / [ Σ_{i=1}^n h(Xᵢ) ].

If the expectations below are finite, then

 √n (μ̂_h − μ₀) →_d N( 0, E₀[σ²h²(X) + σ⁴h′²(X)] / E₀²[h(X)] ).
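As a quick numerical check of Example 5, the sketch below estimates μ for a univariate truncated Gaussian using the bounded choice h(x) = min(x, 1); the truncation point 1 and the rejection sampler are arbitrary illustrative choices of ours:

```python
import math
import random

def mu_hat_h(xs, sigma2, h, h_prime):
    """Generalized h-score matching estimate of mu (Example 5),
    with known variance sigma2."""
    num = sum(h(x) * x - sigma2 * h_prime(x) for x in xs)
    den = sum(h(x) for x in xs)
    return num / den

# h(x) = min(x, 1): increasing but bounded; h'(x) = 1 for x < 1, else 0.
h = lambda x: min(x, 1.0)
h_prime = lambda x: 1.0 if x < 1.0 else 0.0

# Draw from TN(mu0, sigma2) on (0, inf) by simple rejection sampling.
random.seed(0)
mu0, sigma2 = 0.5, 1.0
xs = []
while len(xs) < 5000:
    z = random.gauss(mu0, math.sqrt(sigma2))
    if z > 0.0:
        xs.append(z)

mu_est = mu_hat_h(xs, sigma2, h, h_prime)
```

Note that h(0) = 0, so the boundary term in (A1) vanishes and the estimator is consistent; with 5000 samples the estimate lands close to μ₀ = 0.5.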
###### Example 6.

Consider univariate truncated Gaussian distributions with known mean parameter μ and unknown variance parameter σ². Then, given i.i.d. samples X₁, …, X_n, the generalized h-score matching estimator of σ² is

 σ̂²_h ≡ [ Σ_{i=1}^n h(Xᵢ)(Xᵢ − μ)² ] / [ Σ_{i=1}^n ( h(Xᵢ) + h′(Xᵢ)(Xᵢ − μ) ) ].

If, in addition to the assumptions in Example 5, the relevant expectations are finite, then √n (σ̂²_h − σ₀²) →_d N(0, τ²) with

 τ² ≡ ( 2σ₀⁶ E₀[h²(X)(X − μ)²] + σ₀⁸ E₀[h′²(X)(X − μ)²] ) / E₀²[h(X)(X − μ)²].

When μ = 0, the constant function h ≡ 1 also satisfies (A1)–(A2) and (C1)–(C2), and the resulting estimator corresponds to the sample variance (degrees of freedom unadjusted), which attains the Cramér–Rao lower bound.
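The variance estimator of Example 6 can be checked numerically in the same way as Example 5; again the bounded h(x) = min(x, 1) and the rejection sampler are illustrative choices of ours:

```python
import random

def sigma2_hat_h(xs, mu, h, h_prime):
    """Generalized h-score matching estimate of sigma^2 (Example 6),
    with known mean mu."""
    num = sum(h(x) * (x - mu) ** 2 for x in xs)
    den = sum(h(x) + h_prime(x) * (x - mu) for x in xs)
    return num / den

# Rejection sampling from TN(0.5, 1) on (0, inf).
random.seed(1)
mu0, sigma2_true = 0.5, 1.0
xs = []
while len(xs) < 20000:
    z = random.gauss(mu0, 1.0)
    if z > 0.0:
        xs.append(z)

s2_est = sigma2_hat_h(xs, mu0, lambda x: min(x, 1.0),
                      lambda x: 1.0 if x < 1.0 else 0.0)
```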

###### Remark 7.

In the case of the univariate truncated Gaussian, an intuitive explanation of why a bounded h gives better results than Hyvärinen (2007) goes as follows. When μ/σ is large, there is effectively no truncation of the Gaussian distribution, and our method automatically adapts by using low moments in (4), since a bounded and increasing h becomes almost constant as it approaches its asymptote for large x. When h becomes constant, we recover the original score matching for distributions on R^m. In other cases, the truncation effect is significant and, similar to Hyvärinen (2007), our estimator uses higher moments accordingly.

Figure 1 shows the theoretical asymptotic variance of μ̂_h as given in Example 5, with σ² known, along with efficiency curves measured by the Cramér–Rao lower bound divided by the asymptotic variance. We see that two truncated (bounded) versions of h have asymptotic variance close to the Cramér–Rao bound. This asymptotic variance is also reflective of the variance in small finite samples.

Figure 2 is the analog of Figure 1 for σ̂²_h, assuming μ known. For demonstration we choose a nonzero μ, since when μ = 0 one can simply use the sample variance (degrees of freedom unadjusted), which achieves the Cramér–Rao bound. Here, the truncated versions of the candidate functions have similar performance when σ is not too small. In fact, when σ is small the truncation effect is small, and one does not lose much by using the sample variance.

## 4 Regularized Generalized Score Matching

We now turn to high-dimensional problems, where the number of parameters r may be larger than the sample size n. Targeting sparsity in the parameter θ, we consider ℓ₁ regularization (Tibshirani, 1996), which has also been used in graphical model estimation (Meinshausen and Bühlmann, 2006; Yuan and Lin, 2007; Voorman et al., 2013).

###### Definition 8.

Assume the data matrix x ∈ R₊^{n×m} comprises n i.i.d. samples from a distribution P₀ whose density p₀ belongs to an exponential family P = {p_θ : θ ∈ Θ}, where Θ ⊆ R^r. Define the regularized generalized h-score matching estimator as

 θ̂ ∈ argmin_{θ∈Θ} Ĵ_{h,λ}(θ) ≡ argmin_{θ∈Θ} (1/2) θᵀΓ(x)θ − g(x)ᵀθ + λ‖θ‖₁, (7)

where λ > 0 is a tuning parameter, and Γ(x) and g(x) are from (6).

We omit a discussion of general conditions for almost sure uniqueness of the solution; in practice, we multiply the diagonal entries of Γ(x) by a constant slightly larger than 1, which ensures strict convexity and thus uniqueness of the solution. Details on estimation consistency after this modification will be presented in future work. When the solution is unique, the solution path (as λ varies) is piecewise linear; compare Lin et al. (2016).

### 4.1 Truncated GGMs

Using notation from the introduction, let X ∼ TN(μ, K) be a truncated normal random vector with mean parameter μ and inverse covariance/precision matrix parameter K. Recall that the conditional independence graph for X corresponds to the support of K, defined as S(K) ≡ {(j,k) : κ_jk ≠ 0}. This support is our target of estimation.

### 4.2 Truncated Centered GGMs

Consider the case where the mean parameter is zero, i.e., μ = 0, and we want to estimate the inverse covariance matrix K. Assume that for each j there exist constants M_j and M_j′ that bound h_j and its derivative h_j′ a.e. Assume further that h_j > 0 a.e., h_j is differentiable a.e., and h_j(x) → 0 as x ↘ 0⁺. Boundedness here is for ease of proof in the main theorems; reasonable choices of unbounded h_j are also valid. Then (A1)–(A2) are satisfied, and the penalized loss can be written as

 Ĵ_{h,+}(K) = (1/2) vec(K)ᵀΓ(x)vec(K) − g(x)ᵀvec(K) + λ‖K‖₁, (8)

with the j-th m × m diagonal block of the block-diagonal matrix Γ(x) ∈ R^{m²×m²} being

 n⁻¹ xᵀ diag(h_j(X_j)) x,

where X_j ≡ (x_j⁽¹⁾, …, x_j⁽ⁿ⁾)ᵀ denotes the j-th column of x, diag(v) denotes the diagonal matrix with diagonal entries v, and

 g(x) ≡ vec(u) + vec(diag(V)), u ≡ n⁻¹ h′(x)ᵀ x, V ≡ n⁻¹ h(x)ᵀ 1_n,
 h(x) ≡ [h_j(x_j⁽ⁱ⁾)]_{i,j}, h′(x) ≡ [h_j′(x_j⁽ⁱ⁾)]_{i,j}, 1_n = (1, …, 1)ᵀ.

The regularized generalized h-score matching estimator of K in the truncated centered GGM is

 K̂ ≡ argmin_{K ∈ R^{m×m}, K = Kᵀ} Ĵ_{h,+}(K), (9)

where Γ(x) and g(x) are as defined above.
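Under the convention that vec(K) stacks columns and with a single function h shared across coordinates, Γ(x) and g(x) can be assembled in a few lines. This pure-Python sketch (variable names are ours) is meant only to make the block structure explicit:

```python
def gamma_and_g(X, h, h_prime):
    """Build the m x m diagonal blocks of Gamma(x) and the vector g(x)
    for the centered truncated-GGM loss.  X: n samples as length-m lists."""
    n, m = len(X), len(X[0])
    # j-th block: (1/n) sum_i h(x_j^(i)) x^(i) x^(i)^T
    blocks = [[[sum(h(x[j]) * x[a] * x[b] for x in X) / n
                for b in range(m)] for a in range(m)]
              for j in range(m)]
    # g, block j, entry a: (1/n) sum_i [h'(x_j^(i)) x_a^(i) + 1{a=j} h(x_j^(i))]
    g = [sum(h_prime(x[j]) * x[a] + (h(x[j]) if a == j else 0.0)
             for x in X) / n
         for j in range(m) for a in range(m)]
    return blocks, g
```

For a single observation x = (1, 2) and h(x) = x, this gives the blocks [[1, 2], [2, 4]] and 2·[[1, 2], [2, 4]], and g = (2, 2, 1, 4).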

###### Definition 9.

For the true inverse covariance matrix K₀, let Γ₀ ≡ E₀Γ(X) and g₀ ≡ E₀g(X). Denote the support of a precision matrix K as S(K) ≡ {(j,k) : κ_jk ≠ 0}, and write the true support of K₀ as S₀ ≡ S(K₀). Suppose the maximum number of non-zero entries in the rows of K₀ is d_{K₀}. Let Γ₀,S₀S₀ be the submatrix of Γ₀ with rows and columns corresponding to edges in S₀. Define

 c_{Γ₀} ≡ |||(Γ₀,S₀S₀)⁻¹|||_{∞,∞}, c_{K₀} ≡ |||K₀|||_{∞,∞}.

We say the irrepresentability condition holds for Γ₀ if there exists an α ∈ (0, 1] such that

 |||Γ₀,S₀ᶜS₀ (Γ₀,S₀S₀)⁻¹|||_{∞,∞} ≤ (1 − α). (10)
###### Theorem 10.

Suppose K₀ is positive definite and h is as discussed in the opening paragraph of this section. Suppose further that Γ₀,S₀S₀ is invertible and satisfies the irrepresentability condition (10) with α ∈ (0, 1]. Let c_X be a constant, depending on the bounds for h and on the true distribution, that controls the relevant moments of the data. If the sample size n and the regularization parameter λ satisfy

 n ≥ O( d_{K₀}² τ log m · max{ c_{Γ₀}² c_X⁴ / α², 1 } ), (11)
 λ > O[ ( c_{K₀} c_X² + c_X + 1 )( √(τ log m / n) + τ log m / n ) ], (12)

for τ large enough, then the following statements hold with probability tending to one (polynomially in m, at a rate governed by τ):

(a) The regularized generalized h-score matching estimator K̂ defined in (9) is unique, has its support included in the true support, S(K̂) ⊆ S₀, and

 ‖K̂ − K₀‖_∞ ≤ c_{Γ₀} λ / (2 − α).

(b) Moreover, if

 min_{(j,k)∈S₀} |κ₀,jk| > c_{Γ₀} λ / (2 − α),

then S(K̂) = S₀ and sign(κ̂_jk) = sign(κ₀,jk) for all (j,k) ∈ S₀.

The theorem is proved in the supplement. A key ingredient of the proof is a tail bound on the entries of Γ(x) − Γ₀ and g(x) − g₀, which are composed of products of the X_j’s. In Lin et al. (2016), these products involve up to fourth moments. Using a bounded h, our products automatically calibrate to a quadratic polynomial when the observed values are large, and resort to higher moments only when they are small. With this scaling, we obtain improved bounds and convergence rates, underscored in the new sample size requirement n = O(d_{K₀}² log m), which improves on the corresponding requirement in Lin et al. (2016).

### 4.3 Truncated Non-centered GGMs

Suppose now X ∼ TN(μ, K) with both μ and K unknown. While our main focus is still on the inverse covariance parameter K, we now also have to deal with the mean parameter μ. Instead of (μ, K), we estimate the canonical parameters η ≡ Kμ and K. Concatenating θ ≡ (vec(K)ᵀ, ηᵀ)ᵀ, the corresponding h-score matching loss has a form similar to (8) and (9), with vec(K) replaced by θ and with different Γ(x) and g(x). As a corollary of the centered case, we have an analogous bound on the error in the resulting estimator (K̂, η̂); we omit the details here. We note, however, that we can have different tuning parameters λ_K and λ_η for K and η, respectively, as long as their ratio is fixed, since we can rescale the parameters by the ratio accordingly. To avoid picking two tuning parameters, one may also choose to remove the penalty on η altogether by profiling out η. We leave a detailed analysis of the profiled estimator to future research.

### 4.4 Tuning Parameter Selection

By treating the loss as the mean negative log-likelihood, we may use the extended Bayesian information criterion (eBIC) to choose the tuning parameter λ (Chen and Chen, 2008; Foygel and Drton, 2010). Let Ŝ_λ ≡ S(K̂_λ), where K̂_λ is the estimate associated with tuning parameter λ. The eBIC is then

 eBIC(λ) = n vec(K̂)ᵀΓ(x)vec(K̂) − 2n g(x)ᵀvec(K̂) + |Ŝ_λ| log n + 2 log binom( m(m−1)/2, |Ŝ_λ| ),

where K̂ can be either the original estimate associated with λ, or a refitted solution obtained by restricting the support to Ŝ_λ.
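Treating 2n·Ĵ as the −2 log-likelihood surrogate, the eBIC can be evaluated directly from Γ(x), g(x) and a candidate vec(K̂); the pure-Python sketch below uses our own names and list-of-lists matrices:

```python
import math

def ebic(n, Gamma, g, k_vec):
    """eBIC for the regularized score matching fit (sketch).
    Gamma: m^2 x m^2 list-of-lists, g: length-m^2 list, k_vec: vec(K_hat)."""
    r = len(k_vec)
    quad = sum(k_vec[a] * Gamma[a][b] * k_vec[b]
               for a in range(r) for b in range(r))
    lin = sum(g[a] * k_vec[a] for a in range(r))
    m = math.isqrt(r)
    # |S_hat|: number of nonzero off-diagonal entries, counted once per pair
    s = sum(1 for j in range(m) for k in range(j + 1, m)
            if k_vec[j * m + k] != 0.0)
    return (n * quad - 2.0 * n * lin + s * math.log(n)
            + 2.0 * math.log(math.comb(m * (m - 1) // 2, s)))
```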

## 5 Numerical Experiments

We present simulation results for non-negative GGM estimators with different choices of h. We use a common h ≡ h₁ = ⋯ = h_m for all columns of the data matrix x. Specifically, we consider functions such as h(x) = x and h(x) = log(1 + x), as well as truncated versions of these functions. In addition, we try functions shaped like the SCAD (Fan and Li, 2001) and MCP (Zhang, 2010) penalties.
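For reference, candidate functions of this kind and their a.e. derivatives can be tabulated as below; the truncation point c = 1 is an illustrative choice of ours, not the constant used in the experiments:

```python
import math

c = 1.0  # illustrative truncation point, not the paper's exact choice
H_CANDIDATES = {
    # name: (h, h')
    "x": (lambda x: x, lambda x: 1.0),  # Hyvarinen (2007), Lin et al. (2016)
    "min(x,c)": (lambda x: min(x, c),
                 lambda x: 1.0 if x < c else 0.0),
    "log(1+x)": (lambda x: math.log1p(x),
                 lambda x: 1.0 / (1.0 + x)),
    "min(log(1+x),c)": (lambda x: min(math.log1p(x), c),
                        lambda x: 1.0 / (1.0 + x) if math.log1p(x) < c else 0.0),
}
```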

### 5.1 Implementation

We use a coordinate-descent method analogous to Algorithm 2 in Lin et al. (2016), where each step updates one element of K̂ given the other entries from previous steps, while maintaining symmetry. Warm starts using the solution for the previous λ, as well as lasso-type strong screening rules (Tibshirani et al., 2012), are used for speedups. In the simulations below, we always rescale the data matrix x by its column norms before proceeding to estimation.
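We do not reproduce Algorithm 2 of Lin et al. (2016) here, but the core of any such scheme is a soft-thresholded coordinate update for the penalized quadratic loss (7). The following generic sketch (our own simplification, which ignores the symmetry constraint) illustrates the update:

```python
import math

def soft_threshold(z, t):
    """Soft-thresholding operator S(z, t) = sign(z) * max(|z| - t, 0)."""
    return math.copysign(max(abs(z) - t, 0.0), z)

def coord_descent(Gamma, g, lam, n_sweeps=200):
    """Minimize (1/2) t'Gamma t - g't + lam * ||t||_1 by cyclic
    coordinate descent; Gamma must be positive definite."""
    r = len(g)
    theta = [0.0] * r
    for _ in range(n_sweeps):
        for j in range(r):
            # gradient of the smooth part with the theta_j term removed
            partial = g[j] - sum(Gamma[j][k] * theta[k]
                                 for k in range(r) if k != j)
            theta[j] = soft_threshold(partial, lam) / Gamma[j][j]
    return theta
```

Because the loss is quadratic, each coordinate minimization is exact, which is also what makes the solution path piecewise linear in λ.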

### 5.2 Truncated Centered GGMs

For data from a truncated centered Gaussian distribution, we compared our truncated centered Gaussian estimator (9), with various choices of h, to SpaCE JAM (SJ; Voorman et al., 2013), which estimates graphs without assuming a specific form of distribution; the pseudo-likelihood method SPACE (Peng et al., 2009) with the CONCORD reformulation (Khare et al., 2015); the graphical lasso (Yuan and Lin, 2007; Friedman et al., 2008); the neighborhood selection estimator (NS) of Meinshausen and Bühlmann (2006); and the nonparanormal SKEPTIC (Liu et al., 2012) with Kendall's τ. While we ran all these competitors, only the top-performing methods are explicitly shown in our reported results. Recall that the choice h(x) = x corresponds to the estimator in Lin et al. (2016), using the score matching loss of Hyvärinen (2007).

A few representative ROC (receiver operating characteristic) curves for edge recovery are plotted in Figures 3 and 4, corresponding to two combinations of dimension m and sample size n. Each curve is the average of 50 ROC curves, obtained from estimates of K̂ computed on data generated from 5 different true precision matrices K₀, with 10 trials each. The averaging method is mean-AUC-preserving; it is the vertical averaging outlined in Algorithm 3 of Fawcett (2006). The construction of K₀ is the same as in Section 4.2 of Lin et al. (2016): a graph on the m nodes with 10 disconnected subgraphs containing the same number of nodes, so that K₀ is block-diagonal. In each sub-matrix, each lower-triangular element is set to zero with a fixed probability and otherwise drawn from a uniform distribution on a fixed interval; the upper-triangular elements are then set by symmetry. The diagonal elements of K₀ are a common positive value chosen so that the minimum eigenvalue of K₀ equals a fixed positive constant. The exact probability and interval differ between the two settings of Figures 3 and 4.

For clarity, we only plot some top-performing representatives of the functions h we considered. However, all of the alternative functions we considered perform better than the choice h(x) = x of Hyvärinen (2007) and Lin et al. (2016).

### 5.3 Truncated Non-Centered GGMs

Next we generate data from a truncated non-centered Gaussian distribution, with both parameters μ and K unknown, and treat η ≡ Kμ as part of the estimand as discussed in Section 4.3. In each trial we form the true K₀ as in Section 5.2, and we generate each component of μ₀ independently from a normal distribution.

As discussed in Section 4.3, we fix the ratio of the tuning parameters λ_K and λ_η for K and η. Shown in Figure 5 are average ROC curves (over 50 trials, as in Section 5.2) for truncated non-centered GGM estimators; each curve corresponds to a different ratio λ_η/λ_K, where "Inf" indicates an infinite ratio.

Clearly, as the ratio increases, the performance first improves, and after a certain threshold it deteriorates. The AUC for the profiled estimator is among the worst, so there is indeed much to be gained from tuning the extra parameter, although this comes with a tradeoff between computing time and performance.

In Figure 6 we compare the performance of the profiled estimator, with different choices of h, to SPACE and GLASSO, each with 50 trials as before. It can be seen that even without tuning the extra parameter, the estimators, except for the one with h(x) = x, still work as well as SPACE and GLASSO, and outperform Lin et al. (2016).

## 6 Discussion

In this paper we proposed a generalized version of the score matching estimator of Hyvärinen (2007), which avoids the calculation of normalizing constants. For estimation of the canonical parameters of exponential families, our generalized loss retains the attractive property of being quadratic in the parameters. Our estimator offers improved estimation properties through suitable scalar or vector-valued choices of the function h.

For high-dimensional exponential family graphical models, following the work of Meinshausen and Bühlmann (2006), Yuan and Lin (2007) and Lin et al. (2016), we add an ℓ₁ penalty to the generalized score matching loss, giving a solution that is almost surely unique under regularity conditions and has a piecewise linear solution path.

In the case of the multivariate truncated Gaussian distribution, where the conditional independence graph is given by the inverse covariance parameter, the sample size required for consistent estimation by our method is n = O(d² log m), where m is the dimension and d is the maximum node degree of the conditional independence graph. This matches the rates for GGMs in Ravikumar et al. (2011) and Lin et al. (2016), and for the lasso in linear regression (Bühlmann and van de Geer, 2011).

Potential problems for future work include the adaptive, data-driven choice of the function h, and the development of a summary score similar to the eBIC that can be used to compare not just different tuning parameters but also different models.

## References

• Bühlmann and van de Geer (2011) P. Bühlmann and S. van de Geer. Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer Science & Business Media, 2011.
• Chen and Chen (2008) J. Chen and Z. Chen. Extended Bayesian information criteria for model selection with large model spaces. Biometrika, 95(3):759–771, 2008.
• Chen et al. (2014) S. Chen, D. M. Witten, and A. Shojaie. Selection and estimation for mixed graphical models. Biometrika, 102(1):47–64, 2014.
• Dobra and Lenkoski (2011) A. Dobra and A. Lenkoski. Copula Gaussian graphical models and their application to modeling functional disability data. The Annals of Applied Statistics, 5(2A):969–993, 2011.
• Drton and Maathuis (2017) M. Drton and M. H. Maathuis. Structure learning in graphical modeling. Annual Review of Statistics and Its Application, 4:365–393, 2017.
• Drton and Perlman (2004) M. Drton and M. D. Perlman. Model selection for Gaussian concentration graphs. Biometrika, 91(3):591–602, 2004.
• Fan and Li (2001) J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348–1360, 2001.
• Fawcett (2006) T. Fawcett. An introduction to ROC analysis. Pattern Recognition Letters, 27(8):861–874, 2006.
• Fellinghauer et al. (2013) B. Fellinghauer, P. Bühlmann, M. Ryffel, M. Von Rhein, and J. D. Reinhardt. Stable graphical model estimation with random forests for discrete, continuous, and mixed variables. Computational Statistics & Data Analysis, 64:132–152, 2013.
• Foygel and Drton (2010) R. Foygel and M. Drton. Extended Bayesian information criteria for Gaussian graphical models. In Advances in Neural Information Processing Systems, pages 604–612, 2010.
• Friedman et al. (2008) J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432–441, 2008.
• Hyvärinen (2005) A. Hyvärinen. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6:695–709, 2005.
• Hyvärinen (2007) A. Hyvärinen. Some extensions of score matching. Computational Statistics & Data Analysis, 51(5):2499–2512, 2007.
• Khare et al. (2015) K. Khare, S.-Y. Oh, and B. Rajaratnam. A convex pseudolikelihood framework for high dimensional partial correlation estimation with convergence guarantees. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 77(4):803–825, 2015.
• Lauritzen (1996) S. L. Lauritzen. Graphical models, volume 17. Clarendon Press, 1996.
• Lin et al. (2016) L. Lin, M. Drton, and A. Shojaie. Estimation of high-dimensional graphical models using regularized score matching. Electronic Journal of Statistics, 10(1):806–854, 2016.
• Liu et al. (2009) H. Liu, J. Lafferty, and L. Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. Journal of Machine Learning Research, 10(Oct):2295–2328, 2009.
• Liu et al. (2012) H. Liu, F. Han, M. Yuan, J. Lafferty, and L. Wasserman. High-dimensional semiparametric Gaussian copula graphical models. The Annals of Statistics, 40(4):2293–2326, 2012.
• Meinshausen and Bühlmann (2006) N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the lasso. Ann. Statist., 34(3):1436–1462, 2006.
• Parry et al. (2012) M. Parry, A. P. Dawid, and S. Lauritzen. Proper local scoring rules. The Annals of Statistics, 40(1):561–592, 2012.
• Peng et al. (2009) J. Peng, P. Wang, N. Zhou, and J. Zhu. Partial correlation estimation by joint sparse regression models. Journal of the American Statistical Association, 104(486):735–746, 2009.
• Ravikumar et al. (2010) P. Ravikumar, M. J. Wainwright, and J. D. Lafferty. High-dimensional Ising model selection using ℓ₁-regularized logistic regression. Ann. Statist., 38(3):1287–1319, 2010.
• Ravikumar et al. (2011) P. Ravikumar, M. J. Wainwright, G. Raskutti, and B. Yu. High-dimensional covariance estimation by minimizing ℓ₁-penalized log-determinant divergence. Electronic Journal of Statistics, 5:935–980, 2011.
• Tibshirani (1996) R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267–288, 1996.
• Tibshirani et al. (2012) R. Tibshirani, J. Bien, J. Friedman, T. Hastie, N. Simon, J. Taylor, and R. J. Tibshirani. Strong rules for discarding predictors in lasso-type problems. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74(2):245–266, 2012.
• Voorman et al. (2013) A. Voorman, A. Shojaie, and D. Witten. Graph estimation with joint additive models. Biometrika, 101(1):85–101, 2013.
• Yang et al. (2015) E. Yang, P. Ravikumar, G. I. Allen, and Z. Liu. Graphical models via univariate exponential family distributions. Journal of Machine Learning Research, 16(1):3813–3847, 2015.
• Yu et al. (2016) M. Yu, M. Kolar, and V. Gupta. Statistical inference for pairwise graphical models using score matching. In Advances in Neural Information Processing Systems, pages 2829–2837, 2016.
• Yuan and Lin (2007) M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19–35, 2007.
• Zhang (2010) C.-H. Zhang. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 38(2):894–942, 2010.