Empirical Likelihood for Linear Structural Equation Models with Dependent Errors

10/06/2017
by   Y. Samuel Wang, et al.

We consider linear structural equation models that are associated with mixed graphs. The structural equations in these models only involve observed variables, but their idiosyncratic error terms are allowed to be correlated and non-Gaussian. We propose empirical likelihood (EL) procedures for inference, and suggest several modifications, including a profile likelihood, in order to improve tractability and performance of the resulting methods. Through simulations, we show that when the error distributions are non-Gaussian, the use of EL and the proposed modifications may increase statistical efficiency and improve assessment of significance.


1 Introduction

Structural equation models (SEMs) are multivariate statistical models in which each considered variable is a function of other variables and a stochastic error term. Often, some of these other variables are latent (Bollen, 1989). This paper, however, focuses on SEMs in which the effects of latent variables are summarized. Adopting the dominant linear paradigm, we will thus be concerned with models in which linear functions relate only observed variables, but error terms may be dependent. Such models are sometimes referred to as semi-Markovian (Shpitser and Pearl, 2006). Avoiding any explicit specification of latent confounding, the models play an important role in the exploration of cause-effect structures (Colombo et al., 2012; Pearl, 2009; Richardson and Spirtes, 2002; Spirtes et al., 2000; Wermuth, 2011). Much insight about the models can be gained from a natural graphical representation by mixed graphs/path diagrams that originates in the work of Wright (1921).

Formally, let Y^(1), …, Y^(n) be a multivariate sample with each observation indexed by a set V = {1, …, p}. So Y^(i) = (Y_v^(i))_{v ∈ V} with each Y_v^(i) real-valued. Now consider the system of structural equations

(1)  Y_v^(i) = λ_v + ∑_{w ∈ V} β_vw Y_w^(i) + ε_v^(i),  v ∈ V,

where the λ_v and β_vw are unknown parameters and the ε_v^(i) are random errors. Define vectors λ = (λ_v)_{v ∈ V} and ε^(i) = (ε_v^(i))_{v ∈ V}, and a p × p matrix B = (β_vw) with β_vv = 0 for all v. We assume that the error vectors ε^(1), …, ε^(n) are independent and identically distributed, have zero means, and have covariance matrix Ω = (ω_vw). However, we do not specify any parametric form for their distribution. For each i, the equations in (1) can be written as Y^(i) = λ + B Y^(i) + ε^(i). If I − B is non-singular, then this system is solved uniquely by Y^(i) = (I − B)^{-1}(λ + ε^(i)). This solution has mean vector (I − B)^{-1} λ and covariance matrix

(2)  Σ = Σ(B, Ω) = (I − B)^{-1} Ω (I − B)^{-T}.

Specific models are now obtained by hypothesizing that a particular collection of coefficients and error covariances is zero.
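As a concrete sketch of the parametrization in (2), the following computes the implied covariance for a toy two-variable model; the function name and the toy matrices are our own illustration, not taken from the paper.

```python
import numpy as np

def implied_covariance(B, Omega):
    """Covariance implied by the linear SEM: Sigma = (I - B)^{-1} Omega (I - B)^{-T}."""
    p = B.shape[0]
    IB_inv = np.linalg.inv(np.eye(p) - B)
    return IB_inv @ Omega @ IB_inv.T

# Toy model: variable 0 affects variable 1 with coefficient 0.5,
# and the errors are uncorrelated with unit variance.
B = np.array([[0.0, 0.0],
              [0.5, 0.0]])
Omega = np.eye(2)
Sigma = implied_covariance(B, Omega)
# Sigma[1, 1] = 0.5**2 * 1 + 1 = 1.25 and Sigma[0, 1] = 0.5.
```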

An SEM can be represented conveniently by a path diagram/mixed graph G = (V, D, U). Here, the vertex set V yields a correspondence between the nodes of the graph and the observed variables. The set D ⊆ V × V is a set of directed edges w → v, which encode that variable Y_w may have a direct effect on variable Y_v. The set U comprises bidirected edges v ↔ w that indicate that the errors ε_v and ε_w may be correlated. Define the set of parents of node v as pa(v) = {w : w → v ∈ D}. Similarly, define a set of siblings as sib(v) = {w : v ↔ w ∈ U}. Bidirected edges have no orientation, and v ∈ sib(w) if and only if w ∈ sib(v). Now, the graph induces a model through the requirement that

(3)  β_vw = 0 whenever w ∉ pa(v),
(4)  ω_vw = 0 whenever v ≠ w and w ∉ sib(v).

We emphasize that our treatment allows the model to have feedback loops, that is, D may contain directed cycles.
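The zero patterns in (3) and (4) translate directly into sparsity masks for B and Ω. A minimal sketch, using a hypothetical three-node graph of our own choosing:

```python
import numpy as np

# Hypothetical mixed graph on V = {0, 1, 2}: directed edges 0 -> 1 and 1 -> 2,
# plus one bidirected edge 0 <-> 2 (errors of nodes 0 and 2 may be correlated).
p = 3
directed = [(0, 1), (1, 2)]   # (w, v) encodes w -> v
bidirected = [(0, 2)]         # unordered pairs v <-> w

pa = {v: {w for (w, u) in directed if u == v} for v in range(p)}
sib = {v: {w for e in bidirected if v in e for w in e if w != v} for v in range(p)}

# Constraint (3): beta_vw may be non-zero only when w is a parent of v.
B_free = np.zeros((p, p), dtype=bool)
for (w, v) in directed:
    B_free[v, w] = True

# Constraint (4): omega_vw may be non-zero only on the diagonal or for siblings.
Omega_free = np.eye(p, dtype=bool)
for (v, w) in bidirected:
    Omega_free[v, w] = Omega_free[w, v] = True
```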

1.1 Related Work

Frequently, the errors in an SEM, and consequently also the observations Y^(i), are assumed to be multivariate Gaussian, in which case the parameters are estimated by maximum likelihood (MLE). The Gaussian likelihood is often maximized using generic optimization methods, as done in the popular packages sem (Fox et al., 2017) and lavaan (Rosseel, 2012) for R (R Core Team, 2017). The coordinate-descent methods proposed by Drton et al. (2009) and Drton et al. (2017) can be a useful computational alternative that largely avoids convergence issues.

As a less parametric method, generalized least squares (GLS) minimizes a discrepancy between the sample covariance and the covariance implied by the parameters. Although the estimates are slightly more robust to misspecification, they are still asymptotically equivalent to the Gaussian MLEs (Olsson et al., 2000). When multivariate Gaussianity is inappropriate, MLEs and GLS generally lose statistical efficiency and yield incorrectly calibrated confidence intervals.

Weighted least squares (WLS) methods, also called asymptotically distribution free, weight the discrepancy between the observed and hypothesized covariance structure by explicitly estimated fourth moments. Although WLS estimates are consistent and produce asymptotically correct confidence intervals even with non-Gaussian errors, the estimation of higher-order moments may come at a loss of statistical efficiency and cause convergence issues, which has limited their use (Muthen and Kaplan, 1992).

Chaudhuri et al. (2007) propose using the empirical likelihood (EL) of Owen (2001) to estimate a covariance matrix with structural zeros. In our setup, this corresponds to the special case of a mixed graph with no directed edges. Kolenikov and Yuan (2009) use EL to estimate the parameters of a linear SEM. In contrast to the mixed graph formulation, Kolenikov and Yuan (2009) consider the case where the latent variable structure is explicitly modeled and all errors are independent. The EL approach is appealing as it gives consistent estimates and asymptotically correct confidence intervals even when the errors are not multivariate Gaussian. However, EL can present numerous practical difficulties when the sample size is small relative to the number of parameters or estimating equations used. Moreover, standard implementation of EL methods is computationally feasible only for systems with a handful of variables. We believe that these issues have prevented application of EL to linear SEMs beyond what was done by Kolenikov and Yuan (2009).

1.2 Contribution

In this article, we apply the empirical likelihood framework to SEMs represented by mixed graphs and propose several modifications to a naive approach which address the most salient practical concerns:

  (i) We show that in the mixed graph setting, the covariance parameters can be profiled out. This greatly reduces the computational burden by reducing the number of estimating equations imposed and parameters directly estimated. It also naturally encodes the positive definiteness constraint on Ω and yields a positive definite estimate of Ω at any point B with a well-defined empirical likelihood.

  (ii) When maximizing the empirical likelihood, we leverage a recent insight and directly incorporate gradient information in a quasi-Newton procedure instead of the typical derivative-free approaches to empirical likelihood optimization. This again yields substantial computational savings.

  (iii) We use the adjusted empirical likelihood (AEL), first proposed by Chen et al. (2008). This adjustment ensures that an empirical likelihood and corresponding gradient are well defined for every value in the parameter space.

  (iv) We apply the idea of extended empirical likelihood (EEL), which furnishes drastically improved coverage of confidence intervals at small sample sizes (Tsao and Wu, 2014).

Our simulations show that with these proposed modifications, empirical likelihood becomes an attractive alternative for practitioners concerned with non-Gaussianity in structural equation modeling.

2 Background on Empirical Likelihood

Let X_1, …, X_n be a sample from a q-variate distribution belonging to a non-/semiparametric statistical model. Let Δ_{n-1} be the (n-1)-dimensional probability simplex. For w = (w_1, …, w_n) ∈ Δ_{n-1}, define the log-empirical likelihood ℓ(w) = ∑_{i=1}^n log w_i. This is the log-likelihood of the sample under the discrete distribution with mass w_i at each point X_i. Suppose we are interested in a parameter θ taking values in Θ ⊆ R^d such that for a map g we have E[g(X, θ)] = 0 for all distributions in the model. The log-empirical likelihood at a given parameter value θ is then

(5)  ℓ(θ) = max { ∑_{i=1}^n log w_i : w ∈ W(θ) },

where the feasible set

(6)  W(θ) = { w ∈ Δ_{n-1} : ∑_{i=1}^n w_i g(X_i, θ) = 0 }

reflects that the expectation of g(X, θ) vanishes for distributions compatible with θ.

The empirical likelihood (EL) from (5) provides a basis for statistical inference. Maximizing it over θ yields the maximum empirical likelihood estimator

(7)  θ̂ = argmax_{θ ∈ Θ} ℓ(θ),

which we refer to as the MELE. Ratios of the EL yield empirical likelihood ratio statistics. Owen (1988) derives an EL analogue of Wilks' theorem, and the result was expanded to the general estimating equation framework by Qin and Lawless (1994). The specific regularity conditions needed are discussed in Section 3.3, and the results imply under very general conditions that the MELE is consistent and asymptotically normal. In addition, EL ratio statistics have limiting chi-square distributions that can be used to calibrate statistical tests and create confidence intervals or regions. For a detailed exposition of these ideas, we refer readers to Owen (2001).
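To make (5)-(7) concrete, the following toy implementation (ours, not the authors' code) computes the log-EL for the simplest case of a scalar mean, g(x, θ) = x − θ, by solving the dual stationarity condition with a damped Newton iteration:

```python
import numpy as np

def log_el(theta, x, iters=50):
    """Log-empirical likelihood (5) for the mean: g(x, theta) = x - theta.
    Solves the dual condition sum_i g_i / (1 + lam * g_i) = 0 by Newton,
    which yields the optimal weights w_i = 1 / (n * (1 + lam * g_i))."""
    g = x - theta
    n = len(x)
    lam = 0.0
    for _ in range(iters):
        denom = 1.0 + lam * g
        grad = np.sum(g / denom)
        hess = -np.sum(g**2 / denom**2)
        lam_new = lam - grad / hess
        # backtracking: keep all weights positive (1 + lam * g_i > 0)
        while np.any(1.0 + lam_new * g <= 0):
            lam_new = (lam + lam_new) / 2
        lam = lam_new
    w = 1.0 / (n * (1.0 + lam * g))
    return np.sum(np.log(w))

x = np.array([0.5, 1.0, 1.5, 2.0])
# At theta = mean(x), the constraint holds with uniform weights w_i = 1/n,
# so the log-EL attains its maximum value -n log n.
assert np.isclose(log_el(np.mean(x), x), -len(x) * np.log(len(x)))
```

At any other θ the constraint forces non-uniform weights, so the log-EL drops below −n log n; this gap is exactly what the EL ratio statistic measures.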

The nice theoretical properties of EL, however, come at a high practical cost. The practical issues become particularly pressing for applications to linear SEMs, for which the number of parameters and estimating equations generally grows on the order of p², where p is the number of variables considered. We describe three difficulties that complicate the direct use of EL for SEMs:

  (i) For some values of θ, the origin may be outside the convex hull of the points g(X_1, θ), …, g(X_n, θ), in which case the feasible set W(θ) from (6) is empty and the EL at θ is zero. This "convex hull problem" occurs more often when the sample size is small relative to the number of estimating equations or when the data is skewed. As discussed by Grendár and Judge (2009), it is possible that W(θ) is empty for all parameter vectors θ, which is known as the "empty set problem". In addition, the log-EL is typically not a concave function of θ (Chaudhuri et al., 2017), and finding an initial point that has well-defined EL and is in the basin of attraction of the MELE can be difficult.

  (ii) The optimization problem defining the log-EL from (5) is typically solved iteratively through its dual. Although this problem is convex, it can be computationally burdensome when the number of estimating equations, which corresponds to the number of dual variables, is large.

  (iii) Confidence intervals based on the asymptotic normal variance and chi-square likelihood ratio calibration have been shown to often undercover at small sample sizes (Tsao and Wu, 2014).

3 Empirical Likelihood for SEMs

We now turn to the application of EL to SEMs. For expository simplicity, we assume throughout that our observations are centered. In other words, the intercept parameter vector λ for (1) is zero, so that E[Y^(i)] = 0. However, our ideas extend straightforwardly to the case where we also make inference about λ.

3.1 Profiled Formulation

Consider the linear SEM given by a mixed graph G, as defined in the Introduction. The general framework laid out in Section 2 can be applied directly to such a model by taking the covariance matrix of the observations as the general parameter. We may then define an EL at a pair of parameter matrices (B, Ω) as the EL at the covariance matrix Σ(B, Ω) from (2). In such a direct application to the linear SEM, the log-EL function is the maximum of the log-EL over the set

(8)  W(B, Ω) = { w ∈ Δ_{n-1} : ∑_i w_i Y^(i) = 0,  ∑_i w_i vech( Y^(i) (Y^(i))^T − Σ(B, Ω) ) = 0 }.

Here, vech is the half-vectorization operator for symmetric matrices. Under this formulation, there are p constraints for the mean and p(p+1)/2 covariance constraints, and the MELE is computed by optimization with respect to the pair of matrices (B, Ω), with Ω restricted to be positive definite.

Inspection of the covariance constraints reveals that a great simplification is possible by profiling out Ω. Indeed, the covariance constraint yields an explicit solution for Ω given B and the weights w. Specifically, with ε^(i) = (I − B) Y^(i), we have

(9)  Ω = (I − B) Σ (I − B)^T = ∑_{i=1}^n w_i (I − B) Y^(i) (Y^(i))^T (I − B)^T = ∑_{i=1}^n w_i ε^(i) (ε^(i))^T.

The entries of Ω are either constrained to be zero or freely varying. No constraints arise from the freely varying entries, and we may base estimation of B on only the structural zeros in Ω, that is, the entries ω_vw with v ≠ w and w ∉ sib(v). Once a solution B̂ for B is found, we may simply compute Ω̂ by setting ω̂_vw = ∑_i ŵ_i ε̂_v^(i) ε̂_w^(i) for v = w or w ∈ sib(v). The profile log-EL in this approach is the function

(10)  ℓ_prof(B) = max { ∑_{i=1}^n log w_i : w ∈ W(B) },

obtained from the set of weight vectors

(11)  W(B) = { w ∈ Δ_{n-1} : ∑_i w_i ε^(i) = 0,  ∑_i w_i ε_v^(i) ε_w^(i) = 0 for all v ≠ w with w ∉ sib(v) }.

The MELE B̂ is found by maximizing ℓ_prof over the set of matrices B satisfying the zero pattern in (3), and then Ω̂ is computed as above. We emphasize that there are now only as many covariance constraints as there are structural zeros in Ω, and only the matrix B needs to be optimized.

Following a standard strategy, we evaluate ℓ_prof(B), that is, solve the "inner maximization" in (10) at a fixed B, through the dual problem. Strong duality holds because the constraints in (11) are linear in the weights w. Let g_B be the map with coordinates g_{B,v}(Y^(i)) = ε_v^(i) for v ∈ V and g_{B,vw}(Y^(i)) = ε_v^(i) ε_w^(i) for each nonedge {v, w}, where ε^(i) = (I − B) Y^(i). With dual variables γ ∈ R and λ, the Lagrangian for the inner optimization over w is

(12)  L(w, γ, λ) = ∑_i log w_i + γ (1 − ∑_i w_i) − n λ^T ∑_i w_i g_B(Y^(i)).

Maximizing over the weights, with γ = n at the optimum, we find

(13)  w_i = 1 / [ n (1 + λ^T g_B(Y^(i))) ],

and substitution into (12) yields a convex dual function of λ. We optimize it via Newton-Raphson with a backtracking line search to ensure w_i > 0 for all i.

In the "outer maximization", we optimize with respect to B using a gradient-based quasi-Newton method. Although we can only evaluate ℓ_prof numerically, once we have the optimal dual variables λ̂ and the corresponding weights ŵ_i from (13), we can analytically compute the gradient of ℓ_prof as

(14)  ∇_B ℓ_prof(B) = − n ∑_i ŵ_i λ̂^T (∂/∂B) g_B(Y^(i));

see Chaudhuri et al. (2017). The Hessian, however, cannot be computed in closed form, so we use BFGS, which builds an approximate Hessian via the gradient.
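A sketch of the profiled ingredients: the estimating functions of (11) built from the residuals ε^(i) = (I − B) Y^(i), and the profile solution (9) for Ω given weights. Function names and the toy data are our own.

```python
import numpy as np

def profiled_constraints(B, Y, nonedges):
    """Estimating functions for the profiled formulation (11): residual means
    plus residual products over the structural zeros of Omega.
    Y is (n, p); nonedges lists pairs (v, w) with v != w and w not in sib(v)."""
    eps = Y @ (np.eye(B.shape[0]) - B).T        # row i holds eps_i = (I - B) Y_i
    prod_part = np.stack([eps[:, v] * eps[:, w] for (v, w) in nonedges], axis=1)
    return np.hstack([eps, prod_part])          # shape (n, p + #nonedges)

def profiled_omega(B, Y, w):
    """Profile solution (9): Omega(B, w) = sum_i w_i eps_i eps_i^T."""
    eps = Y @ (np.eye(B.shape[0]) - B).T
    return (eps * w[:, None]).T @ eps

# Toy example: 2 nodes with no bidirected edge, so {0, 1} is a nonedge of Omega.
rng = np.random.default_rng(0)
Y = rng.normal(size=(100, 2))
B = np.zeros((2, 2))
g = profiled_constraints(B, Y, [(0, 1)])
w = np.full(100, 1 / 100)
Omega = profiled_omega(B, Y, w)   # uniform weights give the sample moment matrix
```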

Although both formulations yield the same MELE, the profile approach from (10) and (11) drastically eases difficulties (i) and (ii) discussed in Section 2, as the number of estimating equations for the covariance is reduced to the number of structural zeros in Ω. This reduces the number of dual variables to optimize in the inner maximization. Moreover, when profiled, the outer maximization searches over only B, while the naive direct formulation from (8) requires a search over both B and Ω; in particular, positive definiteness of Ω needs to be respected in the naive optimization. Finally, satisfying the convex hull condition for the error covariances typically requires a simultaneously good choice of B and Ω. The directed edge weights can be easily initialized with regression estimates, but the covariance parameters are typically more difficult to specify. In Section 4, we show that the computational advantages produce substantial gains in computation time, and that the profiled procedure converges to a valid stationary point at a much higher proportion of the time even when the sample size is small.

3.2 Small Sample Improvements

In addition to reformulating the optimization problem, we make two modifications to improve the performance of EL for SEMs. We apply adjusted empirical likelihood (AEL) to improve the search for a MELE and use extended empirical likelihood (EEL) to improve the coverage of confidence intervals.

Chen et al. (2008) proposed AEL to alleviate the convex hull problem mentioned in difficulty (i) above. The adjustment amounts to adding a pseudo-observation whose contribution to the estimating equations is g_{n+1} = −(a_n/n) ∑_{i=1}^n g(X_i, θ) for a choice of a_n > 0. Adding this term ensures that no matter the value of θ, the set of feasible weight vectors, now in Δ_n, is non-empty. Hence, the log-AEL and its gradient, the analogue of (14) with sums running over the n + 1 points,

(15)  ∇_B ℓ_AEL(B) = − (n + 1) ∑_{i=1}^{n+1} ŵ_i λ̂^T (∂/∂B) g_i(B),

are well defined across the entire parameter space. Chen et al. (2008) show that AEL retains the asymptotic properties of the original EL when a_n = o_p(n^{2/3}), and suggest a_n = log(n)/2. We adopt this choice.
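The AEL adjustment is a one-liner on the matrix of estimating-function values; a minimal sketch, assuming g is stored as an n × k array (our own convention):

```python
import numpy as np

def adjusted_g(g):
    """Append Chen et al.'s (2008) pseudo-observation
    g_{n+1} = -(a_n / n) * sum_i g_i, with a_n = log(n) / 2."""
    n = g.shape[0]
    a_n = np.log(n) / 2.0
    return np.vstack([g, -a_n * g.mean(axis=0)])

# Two observations whose values lie strictly on one side of the origin:
g = np.array([[1.0, 2.0],
              [3.0, 4.0]])
g_adj = adjusted_g(g)
# The appended row points back toward the origin, so the convex hull of the
# n + 1 rows now contains 0 and the feasible set is non-empty.
```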

The terms in our covariance constraints are products ε_v^(i) ε_w^(i). This is generally not true for the added pseudo-observation, and it is not clear how to define an appropriately sparse and positive definite matrix Ω̂ using AEL weights. Thus, we propose finding an estimate B̂_AEL that maximizes the AEL, and computing Ω̂ based on the weights obtained by recalculating the original EL at B̂_AEL. As demonstrated in our numerical experiments, this approach alleviates some convergence issues but, of course, the original EL may be zero at the AEL maximizer B̂_AEL, in which case we do not have an estimate of Ω and say that the AEL procedure has not converged.

To address undercoverage of confidence regions for smaller samples, as described in difficulty (iii), we adopt the EEL of Tsao and Wu (2014), who show that their chi-square-calibrated EEL confidence regions outperform those from the original EL. Assuming the MELE B̂ exists, a positive EEL may be defined for any matrix B by taking the original EL at a convex combination of B and B̂. Specifically, the log-EEL suggested by Tsao and Wu (2014) is

(16)  ℓ_EEL(B) = ℓ( B̂ + γ(B) (B − B̂) )

for an expansion factor γ(B) ∈ (0, 1] that depends on the sample size and the EL ratio at B, and that tends to 1 as n → ∞.

3.3 Asymptotic Distribution of Empirical Likelihood Estimators

It follows from Qin and Lawless (1994, Theorem 1) that under the following assumptions, MELEs are asymptotically normal and empirical likelihood ratios converge to chi-square limits. The same is true for the modifications from Section 3.2.

Proposition 1.

Let G be a mixed graph, and let (B₀, Ω₀) satisfy the zero patterns (3) and (4), with Ω₀ positive definite and I − B₀ invertible. Let ε be a zero-mean random vector with covariance matrix Ω₀. Assume that:

  (a) The Jacobian of the parametrization (B, Ω) ↦ Σ(B, Ω), defined on the pairs of matrices respecting (3) and (4), has full rank at (B₀, Ω₀).

  (b) The joint distribution of ε and vech(ε ε^T) is non-degenerate and has finite third moments.

If Y^(1), …, Y^(n) is an i.i.d. sample from the distribution determined by (B₀, Ω₀), i.e., the distribution of (I − B₀)^{-1} ε, then the MELE (B̂, Ω̂) is asymptotically normal with

(17)  √n [ (B̂, Ω̂) − (B₀, Ω₀) ] →_d N(0, V).

Here, V is given by the estimating equations corresponding to the naive formulation in (8). Furthermore, EL ratio statistics have chi-square limits. In particular, for the true parameters B₀ and Ω₀, we have

(18)  2 [ ℓ(B̂, Ω̂) − ℓ(B₀, Ω₀) ] →_d χ²_d,

where d is the number of free parameters in (B, Ω).

We sketch the proof of the proposition in the appendix.

If the rank condition from (a) holds, then the rational map (B, Ω) ↦ Σ(B, Ω) has a full rank Jacobian at almost all choices of (B, Ω), and the map is generically finite-to-one. There is thus a connection to local/finite identifiability of (B, Ω) from the covariance matrix Σ. For state-of-the-art methods for determining identifiability, see Foygel et al. (2012); Chen (2016); Drton and Weihs (2016).
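The rank condition (a) can be checked numerically for a given (B, Ω) by finite differences; a sketch under our own naming, using the toy model 0 → 1 with diagonal Ω, where the three free parameters exactly match the three entries of vech(Σ):

```python
import numpy as np

def sigma_vech(B, Omega):
    """vech of the implied covariance Sigma = (I - B)^{-1} Omega (I - B)^{-T}."""
    p = B.shape[0]
    IB_inv = np.linalg.inv(np.eye(p) - B)
    S = IB_inv @ Omega @ IB_inv.T
    return S[np.tril_indices(p)]

def jacobian_rank(B, Omega, free_B, free_Omega, h=1e-6):
    """Finite-difference Jacobian of (B, Omega) -> vech(Sigma) over the free
    entries, as a numerical check of the rank condition (a)."""
    base = sigma_vech(B, Omega)
    cols = []
    for (v, w) in free_B:
        Bp = B.copy(); Bp[v, w] += h
        cols.append((sigma_vech(Bp, Omega) - base) / h)
    for (v, w) in free_Omega:
        Op = Omega.copy(); Op[v, w] += h; Op[w, v] = Op[v, w]
        cols.append((sigma_vech(B, Op) - base) / h)
    return np.linalg.matrix_rank(np.column_stack(cols), tol=1e-4)

B = np.array([[0.0, 0.0], [0.5, 0.0]])
Omega = np.diag([1.0, 1.0])
rank = jacobian_rank(B, Omega, free_B=[(1, 0)], free_Omega=[(0, 0), (1, 1)])
# Full rank (3) indicates the parametrization is locally identifiable here.
```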

4 Numerical Simulations

We now show a series of numerical experiments to evaluate the effectiveness of the proposed methods and compare the results to existing methods.

4.1 Convergence of Optimizers for Naive vs Profile Formulation

We first compare the naive/direct procedure, which explicitly estimates B and Ω, to the profiled procedure, which only involves B. For both procedures, we use the original EL and adjusted EL. We also consider a hybrid method, which first finds the maximum AEL point to initialize a search that then uses the original EL. We randomly generate acyclic mixed graphs with 8 nodes, 10 directed edges, and 6 bidirected edges. We randomly select directed edges w → v from all pairs with w < v, and then select bidirected edges from the remaining unselected pairs. This setup ensures that (B, Ω) are generically identifiable from Σ by the result of Brito and Pearl (2002).

We generate random true parameter matrices B and Ω as follows. The coefficients β_vw of the directed edges are drawn uniformly from an interval bounded away from zero, with random signs. For Ω, we draw the off-diagonal elements ω_vw, w ∈ sib(v), uniformly. We then use exponential draws, added to the absolute off-diagonal row sums, to set the diagonal entries ω_vv so that Ω is diagonally dominant and hence positive definite.
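The parameter generation can be sketched as follows; the specific intervals are illustrative choices of ours, not those used in the paper, but the diagonal-dominance construction for Ω guarantees positive definiteness:

```python
import numpy as np

def random_parameters(B_free, Omega_free, rng):
    """Random (B, Omega) respecting the sparsity masks. Illustrative intervals:
    coefficients uniform on +/-[0.3, 1.0], off-diagonal omegas uniform on
    [-0.5, 0.5], and diagonal entries set by exponential draws above the
    absolute row sums, so Omega is strictly diagonally dominant."""
    p = B_free.shape[0]
    signs = rng.choice([-1.0, 1.0], (p, p))
    B = np.where(B_free, rng.uniform(0.3, 1.0, (p, p)) * signs, 0.0)
    off = Omega_free & ~np.eye(p, dtype=bool)
    O = np.where(off, rng.uniform(-0.5, 0.5, (p, p)), 0.0)
    O = (O + O.T) / 2                       # symmetrize the off-diagonal part
    np.fill_diagonal(O, np.abs(O).sum(axis=1) + rng.exponential(1.0, p))
    return B, O

rng = np.random.default_rng(1)
B_free = np.array([[False, False], [True, False]])   # single edge 0 -> 1
Omega_free = np.eye(2, dtype=bool)
Omega_free[0, 1] = Omega_free[1, 0] = True           # bidirected edge 0 <-> 2... er, 0 <-> 1
B, Omega = random_parameters(B_free, Omega_free, rng)
assert np.all(np.linalg.eigvalsh(Omega) > 0)         # diagonal dominance => PD
```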

We consider errors from four distributions. First, we generate centered multivariate Gaussian errors with covariance matrix Ω. Second, we generate them from a multivariate t-distribution with 4 degrees of freedom, which we denote by t₄, again with expectation zero and covariance matrix Ω. Third, we consider log-normal errors. In this case, we simulate a multivariate Gaussian vector Z, centered and with covariance matrix equal to the correlation matrix that corresponds to Ω. We then set the error vector to the coordinate-wise exponential exp(Z), recentered to have mean zero and rescaled to match the variances in Ω. Finally, in order to draw a multivariate distribution with recentered gamma marginals and covariance Ω, we follow the steps:

  1. Draw .

  2. For each , generate and a random sign .

  3. If , add to and . If , add to and to .

  4. Subtract the true mean from each error term so that it has mean 0.

All optimizations are initialized with a procedure from Drton et al. (2017), where the free elements of B are calculated via least squares. The resulting residuals are used to initialize the non-zero values ω_vw. If a row of Ω is not diagonally dominant, the off-diagonal elements are scaled so that ∑_{w ≠ v} |ω_vw| < ω_vv, to ensure that Ω is positive definite.
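The initialization above can be sketched as follows; the shrinkage factor 0.9 and other details are our guesses at a reasonable implementation, not the authors' exact procedure.

```python
import numpy as np

def initialize(Y, pa, sib):
    """Least-squares initialization in the spirit of Drton et al. (2017):
    fill the free entries of B by regressing each variable on its parents,
    set Omega from residual moments, then shrink off-diagonal rows to
    restore strict diagonal dominance (the 0.9 factor is our choice)."""
    n, p = Y.shape
    B = np.zeros((p, p))
    for v in range(p):
        parents = sorted(pa[v])
        if parents:
            coef, *_ = np.linalg.lstsq(Y[:, parents], Y[:, v], rcond=None)
            B[v, parents] = coef
    eps = Y @ (np.eye(p) - B).T
    S = eps.T @ eps / n
    Omega = np.diag(np.diag(S))
    for v in range(p):
        for w in sib[v]:
            Omega[v, w] = S[v, w]
    for v in range(p):          # enforce strict diagonal dominance row by row
        off = np.abs(Omega[v]).sum() - Omega[v, v]
        if off >= Omega[v, v]:
            scale = 0.9 * Omega[v, v] / off
            for w in range(p):
                if w != v:
                    Omega[v, w] *= scale
                    Omega[w, v] = Omega[v, w]
    return B, Omega

# Toy graph: 0 -> 1 -> 2 with a bidirected edge 0 <-> 2.
rng = np.random.default_rng(2)
Y = rng.normal(size=(200, 3))
B0, Omega0 = initialize(Y, {0: set(), 1: {0}, 2: {1}}, {0: {2}, 1: set(), 2: {0}})
```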

Figure 1: Proportion of 500 simulations which converge to a valid stationary point, plotted versus the sample size. O- original EL; A- adjusted EL; H- hybrid EL. Red points indicate the profile formulation; blue points indicate the naive formulation.
Figure 2: The average run time in seconds among the simulations in which all methods converge to a valid stationary point, plotted versus the sample size. O- original EL; A- adjusted EL; H- hybrid EL. Red points indicate the profile formulation; blue points indicate the naive formulation.

Figure 1 shows that in all cases the profiled formulation converges at least as often as the naive formulation. AEL converges more often than the original EL, and the hybrid procedure converges most often. Even at the smallest sample size considered, the profiled problem converges nearly every time, except in the case of log-normal errors. Figure 2 shows that the profiled form can be up to 40 times faster on average than the naive form.

4.2 Estimation Error

We now explore the estimation errors resulting from different approaches. We compare both original EL and AEL to the Gaussian MLE computed as in Drton et al. (2009), GLS, and WLS. The latter two estimates are computed using the R package lavaan (Rosseel, 2012). We also include a hybrid procedure that finds the Gaussian MLE B̂_MLE and then uses the resulting residuals and the maximum EL weights at B̂_MLE to form an estimate Ω̂. Note that the t₄ distribution does not have finite 6th moments, so the limiting distributions from Proposition 1 may not hold; however, all estimation procedures still appear to be consistent.

Proceeding as in Section 4.1, we generate 1000 graphs for each error distribution and sample size. To measure estimation accuracy, we average the relative squared error for (B, Ω) across each of the simulation runs in which all methods converge; recall Figure 1. The results are shown in Figure 3.

Figure 3: Log mean relative squared estimation error in (B, Ω) over 1000 simulations, plotted versus the sample size. The average is only taken over simulations in which all methods converged. A- adjusted EL; E- empirical likelihood; G- generalized least squares; H- hybrid Gauss/EL; M- Gaussian MLE; W- weighted least squares.
Figure 4: Proportion of times (over 1000 simulations) the method converged to a local maximum of the objective function, plotted versus the sample size. A- adjusted EL; E- empirical likelihood; G- generalized least squares; H- hybrid Gauss/EL; M- Gaussian MLE; W- weighted least squares.

In general, there is no substantial difference in accuracy between the adjusted and original empirical likelihood methods. For the Gaussian case, MLE and GLS perform better than the methods which do not assume Gaussianity, but the improvement is slight. In the t₄ and log-normal cases, the EL procedures perform substantially better than the other methods. Finally, for the gamma case, the hybrid method seems to outperform the other methods, followed closely by the EL methods; however, the differences between the methods are not substantial. In Figure 4, all methods converge more than 95% of the time under all distributions, except for the log-normal case, in which the WLS procedure still only converges roughly 90% of the time at the largest sample size considered.

4.3 Confidence Regions

We examine the coverage frequencies of joint confidence regions for the parameters B and Ω. We construct Wald regions using the estimates of (B, Ω) from the Gaussian MLE, GLS, and WLS. We also calculate a sandwich variance estimator using the Gaussian likelihood as the estimating equations, and the asymptotic EL variance via Qin and Lawless (1994). Alternatively, we calculate the EL at (B, Ω) using the original EL, EEL, and AEL, and then compare the resulting EL ratio to its asymptotic chi-square distribution. If a method does not converge, we count this as a case in which the confidence region does not cover the true parameters.

At each sample size and error distribution, we construct 1000 graphs with 6 nodes, 8 directed edges, and 4 bidirected edges from the procedure described in Section 4.1. For the t distribution, we increase the degrees of freedom to 7 to ensure that Proposition 1 applies. The coverage rates for 90% confidence intervals are shown in Figure 5. Based on the displayed results, regions obtained from the Gaussian MLE and GLS can only be recommended when the errors are (close to) Gaussian. The EEL method performs the best, staying close to the parametric methods in the Gaussian case and doing best in most non-Gaussian scenarios. The sandwich method is another good choice. However, we also observe that very large sample sizes may be required in order to achieve nominal coverage levels.

Figure 5: Coverage frequencies of joint confidence intervals. A- adjusted EL; X- extended EL; G- generalized least squares; Q- Qin and Lawless asymptotic EL variance; M- Gaussian MLE; S- sandwich estimator; W- weighted least squares; E- EL direct calibration.

4.4 Protein Signaling Network

Sachs et al. (2005, Figure 2) present a signaling network of 11 observed molecules and 13 unobserved molecules. The black edges in Figure 6 give a plausible mixed graph representation of that network, which was also considered by Drton et al. (2017). A log-transformation of the available protein expression data improves Gaussianity but leaves the distribution of some of the variables skewed and/or multimodal. We consider two separate tests; each compares the SEM sub-model corresponding to the graph of black edges against a full model which adds one of the two red edges also shown in Figure 6. Note that the added red edge from Mek to PKA induces a directed cycle. For the log-transformed data, we perform a Gaussian as well as an empirical likelihood ratio test. For the test involving the directed edge Mek → PKA, the Gaussian LR statistic is .416 (p-value = .52) and the ELR statistic is 4.379 (p-value = .04). For the test involving the bidirected edge PIP2 ↔ Akt, the Gaussian LR statistic is 15.216 (p-value < .001) and the ELR statistic is .782 (p-value = .37). While we do not have a certified gold standard network, and the implicit assumption of linearity may not be appropriate for all postulated relationships, these examples present situations in which the Gaussian assumption is particularly inappropriate and may cause concerns for a practitioner.

[Figure 6 displays the mixed graph over the nodes PKC, JNK, PIP2, PLCg, PIP3, Akt, P38, Raf, PKA, Mek, and Erk.]
Figure 6: Plausible mixed graph for the protein-signaling network dataset. The relevant sub-model is formed by removing the red bidirected edge between PIP2 and Akt and the red directed edge from Mek to PKA.

5 Discussion

In this article, we showed that EL methods are an attractive alternative for estimation and testing of non-Gaussian linear SEMs. Our approach of profiling out the error covariance matrix Ω drastically reduces computational effort and creates a far more tractable and reliable estimation procedure. Furthermore, we showed that the use of AEL may further improve convergence of optimizers, particularly when the sample size is small and the errors are skewed. EEL was seen to drastically improve the coverage rate of joint confidence intervals.

Our EL methods are applicable under very few distributional assumptions, all the while allowing statistical inference in close analogy to parametric modeling. When the data is non-Gaussian, the modified EL methods outperform the other methods we considered in almost all scenarios we explored. This concerns the proportion of times a valid estimate is returned, statistical efficiency, and confidence region coverage. While there remains significant room for improvement in the design of confidence regions, we conclude that EL methods are a valuable tool for applications of linear SEMs to non-Gaussian data.


Appendix

Proof of Proposition 1.

We recall our notation ε = (I − B) Y and Ω = (ω_vw). Based on the right-most expression in (9), the considered naive/direct estimating equations may be based on the function g(Y; B, Ω) with coordinates

g_v(Y; B, Ω) = ε_v,  v ∈ V,    and    g_vw(Y; B, Ω) = ε_v ε_w − ω_vw,  v ≤ w,

for the mean and covariance constraints, respectively.

Our claim follows from Theorem 1 of Qin and Lawless (1994) under the following conditions:

  (1) E[ g(Y; B₀, Ω₀) g(Y; B₀, Ω₀)^T ] is positive definite.

  (2) In a neighborhood of the d-dimensional true parameter θ₀ = (B₀, Ω₀), the derivative ∂g/∂θ is continuous, and ‖∂g/∂θ‖ and ‖g‖³ are bounded by an integrable function.

  (3) E[∂g/∂θ] has rank d.

  (4) ∂²g/∂θ∂θᵀ is continuous and bounded by an integrable function in a neighborhood of the true parameter θ₀.

Here, ‖·‖ denotes the Euclidean norm.

Noting that ε = (I − B₀) Y, condition (1) is an immediate consequence of assumption (b) in our proposition. Condition (3) is implied by assumption (a). With polynomial estimating equations, all derivatives in conditions (2) and (4) exist. Now, g and its first and second partial derivatives are at most quadratic functions of Y, which in turn is a linear function of a realization of the error vector ε. Local bounds on the concerned quantities are easily obtained, and assumption (b) ensures their integrability. ∎

6 Acknowledgments

This material is based upon work supported by the U.S. National Science Foundation under Grants No. DMS 1561814 and 1712535. We would also like to thank the Institute for Mathematical Sciences at the National University of Singapore for travel support to the 2016 workshop: Empirical Likelihood Based Methods in Statistics.

References

  • Bollen [1989] K. A. Bollen. Structural equations with latent variables. Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics. John Wiley & Sons, Inc., New York, 1989. ISBN 0-471-01171-1. URL http://dx.doi.org/10.1002/9781118619179. A Wiley-Interscience Publication.
  • Brito and Pearl [2002] C. Brito and J. Pearl. A new identification condition for recursive models with correlated errors. Struct. Equ. Model., 9(4):459–474, 2002. ISSN 1070-5511.
  • Chaudhuri et al. [2007] S. Chaudhuri, M. Drton, and T. S. Richardson. Estimation of a covariance matrix with zeros. Biometrika, 94(1):199–216, 2007. ISSN 0006-3444. URL http://dx.doi.org/10.1093/biomet/asm007.
  • Chaudhuri et al. [2017] S. Chaudhuri, D. Mondal, and T. Yin. Hamiltonian Monte Carlo sampling in Bayesian empirical likelihood computation. J. R. Stat. Soc. Ser. B. Stat. Methodol., 79(1):293–320, 2017. ISSN 1369-7412. URL http://dx.doi.org/10.1111/rssb.12164.
  • Chen [2016] B. Chen. Identification and overidentification of linear structural equation models. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 1587–1595. Curran Associates, Inc., 2016. URL http://papers.nips.cc/paper/6223-identification-and-overidentification-of-linear-structural-equation-models.pdf.
  • Chen et al. [2008] J. Chen, A. M. Variyath, and B. Abraham. Adjusted empirical likelihood and its properties. J. Comput. Graph. Statist., 17(2):426–443, 2008. ISSN 1061-8600. URL http://dx.doi.org/10.1198/106186008X321068.
  • Colombo et al. [2012] D. Colombo, M. H. Maathuis, M. Kalisch, and T. S. Richardson. Learning high-dimensional directed acyclic graphs with latent and selection variables. Ann. Statist., 40(1):294–321, 2012.
  • Drton and Weihs [2016] M. Drton and L. Weihs. Generic identifiability of linear structural equation models by ancestor decomposition. Scand. J. Stat., 43(4):1035–1045, 2016. ISSN 0303-6898. URL http://dx.doi.org/10.1111/sjos.12227.
  • Drton et al. [2009] M. Drton, M. Eichler, and T. S. Richardson. Computing maximum likelihood estimates in recursive linear models with correlated errors. J. Mach. Learn. Res., 10:2329–2348, 2009. ISSN 1532-4435.
  • Drton et al. [2017] M. Drton, C. Fox, and Y. S. Wang. Computation of maximum likelihood estimates in cyclic structural equation models. Ann. Statist., to appear, 2017. arXiv:1610.03434.
  • Fox et al. [2017] J. Fox, Z. Nie, and J. Byrnes. sem: Structural Equation Models, 2017. URL https://CRAN.R-project.org/package=sem. R package version 3.1-9.
  • Foygel et al. [2012] R. Foygel, J. Draisma, and M. Drton. Half-trek criterion for generic identifiability of linear structural equation models. Ann. Statist., 40(3):1682–1713, 2012. ISSN 0090-5364. URL http://dx.doi.org/10.1214/12-AOS1012.
  • Grendár and Judge [2009] M. Grendár and G. Judge. Empty set problem of maximum empirical likelihood methods. Electron. J. Stat., 3:1542–1555, 2009. ISSN 1935-7524. URL http://dx.doi.org/10.1214/09-EJS528.
  • Kolenikov and Yuan [2009] S. Kolenikov and Y. Yuan. Empirical likelihood estimation and testing in covariance structure models. available at http://staskolenikov.net, 2009.
  • Muthén and Kaplan [1992] B. Muthén and D. Kaplan. A comparison of some methodologies for the factor analysis of non-normal Likert variables: A note on the size of the model. British Journal of Mathematical and Statistical Psychology, 45(1):19–30, 1992.
  • Olsson et al. [2000] U. H. Olsson, T. Foss, S. V. Troye, and R. D. Howell. The performance of ML, GLS, and WLS estimation in structural equation modeling under conditions of misspecification and nonnormality. Structural Equation Modeling, 7(4):557–595, 2000.
  • Owen [1988] A. B. Owen. Empirical likelihood ratio confidence intervals for a single functional. Biometrika, 75(2):237–249, 1988. ISSN 0006-3444. URL http://dx.doi.org/10.1093/biomet/75.2.237.
  • Owen [2001] A. B. Owen. Empirical likelihood. Chapman & Hall/CRC, Boca Raton, FL, 2001.
  • Pearl [2009] J. Pearl. Causality. Cambridge University Press, Cambridge, second edition, 2009. ISBN 978-0-521-89560-6; 0-521-77362-8. Models, reasoning, and inference.
  • Qin and Lawless [1994] J. Qin and J. Lawless. Empirical likelihood and general estimating equations. Ann. Statist., 22(1):300–325, 1994. ISSN 0090-5364. URL http://dx.doi.org/10.1214/aos/1176325370.
  • R Core Team [2017] R Core Team. R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria, 2017. URL https://www.R-project.org/.
  • Richardson and Spirtes [2002] T. Richardson and P. Spirtes. Ancestral graph Markov models. Ann. Statist., 30(4):962–1030, 2002. ISSN 0090-5364. URL http://dx.doi.org/10.1214/aos/1031689015.
  • Rosseel [2012] Y. Rosseel. lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2):1–36, 2012. URL http://www.jstatsoft.org/v48/i02/.
  • Sachs et al. [2005] K. Sachs, O. Perez, D. Pe’er, D. A. Lauffenburger, and G. P. Nolan. Causal protein-signaling networks derived from multiparameter single-cell data. Science, 308(5721):523–529, 2005.
  • Shpitser and Pearl [2006] I. Shpitser and J. Pearl. Identification of joint interventional distributions in recursive semi-Markovian causal models. In Proceedings, The Twenty-First National Conference on Artificial Intelligence and the Eighteenth Innovative Applications of Artificial Intelligence Conference, July 16-20, 2006, Boston, Massachusetts, USA, pages 1219–1226, 2006. URL http://www.aaai.org/Library/AAAI/2006/aaai06-191.php.
  • Spirtes et al. [2000] P. Spirtes, C. Glymour, and R. Scheines. Causation, prediction, and search. Adaptive Computation and Machine Learning. MIT Press, Cambridge, MA, second edition, 2000. ISBN 0-262-19440-6. With additional material by David Heckerman, Christopher Meek, Gregory F. Cooper and Thomas Richardson, A Bradford Book.
  • Tsao and Wu [2014] M. Tsao and F. Wu. Extended empirical likelihood for estimating equations. Biometrika, 101(3):703–710, 2014. ISSN 0006-3444. URL http://dx.doi.org/10.1093/biomet/asu014.
  • Wermuth [2011] N. Wermuth. Probability distributions with summary graph structure. Bernoulli, 17(3):845–879, 2011. ISSN 1350-7265. URL http://dx.doi.org/10.3150/10-BEJ309.
  • Wright [1921] S. Wright. Correlation and causation. J. Agric. Res., 20:557–585, 1921.