1 Introduction
Structural equation models (SEMs) are multivariate statistical models in which each considered variable is a function of other variables and a stochastic error term. Often, some of these other variables are latent (Bollen, 1989). This paper, however, focuses on SEMs in which the effects of latent variables are summarized. Adopting the dominant linear paradigm, we will thus be concerned with models in which linear functions relate only observed variables, but error terms may be dependent. Such models are sometimes referred to as semi-Markovian (Shpitser and Pearl, 2006). Avoiding any explicit specification of latent confounding, the models play an important role in exploration of cause-effect structures (Colombo et al., 2012; Pearl, 2009; Richardson and Spirtes, 2002; Spirtes et al., 2000; Wermuth, 2011). Much insight about the models can be gained from a natural graphical representation by mixed graphs/path diagrams that originates in work of Wright (1921).
Formally, let $X^{(1)},\dots,X^{(n)}$ be a multivariate sample with each observation indexed by a set $V$ of $p$ variables. So, $X^{(i)}=(X^{(i)}_v)_{v\in V}$ with each $X^{(i)}_v$ real-valued. Now consider the system of structural equations
(1) $X^{(i)}_v \;=\; \lambda_{0v} + \sum_{w\in V\setminus\{v\}} \lambda_{wv}\,X^{(i)}_w + \varepsilon^{(i)}_v, \qquad v\in V,$
where the $\lambda_{0v}$ and $\lambda_{wv}$ are unknown parameters and the $\varepsilon^{(i)}_v$ are random errors. Define vectors $\varepsilon^{(i)}=(\varepsilon^{(i)}_v)_{v\in V}$ and $\lambda_0=(\lambda_{0v})_{v\in V}$, and a matrix $\Lambda=(\lambda_{wv})$ with $\lambda_{wv}=0$ if $w=v$. We assume that the error vectors $\varepsilon^{(1)},\dots,\varepsilon^{(n)}$ are independent and identically distributed, have zero means, and have covariance matrix $\Omega=(\omega_{vw})$. However, we do not specify any parametric form for their distribution. For each $i$, the equations in (1) can be written as $X^{(i)}=\lambda_0+\Lambda^T X^{(i)}+\varepsilon^{(i)}$. If $I-\Lambda$ is nonsingular, then this system is solved uniquely by $X^{(i)}=(I-\Lambda)^{-T}(\lambda_0+\varepsilon^{(i)})$. This solution has mean vector $\mu=(I-\Lambda)^{-T}\lambda_0$ and covariance matrix
(2) $\Sigma \;=\; (I-\Lambda)^{-T}\,\Omega\,(I-\Lambda)^{-1}.$
Specific models are now obtained by hypothesizing that a particular collection of coefficients and error covariances is zero.
An SEM can be represented conveniently by a path diagram/mixed graph $G=(V,D,B)$. Here, the vertex set $V$ yields a correspondence between the nodes of the graph and the observed variables. The set $D$ is a set of directed edges $v\to w$, which encode that variable $X_v$ may have a direct effect on variable $X_w$. The set $B$ comprises bidirected edges $v\leftrightarrow w$ that indicate that the errors $\varepsilon_v$ and $\varepsilon_w$ may be correlated. Define the set of parents of node $v$ as $\mathrm{pa}(v)=\{w: w\to v\in D\}$. Similarly, define a set of siblings as $\mathrm{sib}(v)=\{w: w\leftrightarrow v\in B\}$. Bidirected edges have no orientation, and $v\leftrightarrow w\in B$ if and only if $w\leftrightarrow v\in B$. Now, the graph $G$ induces a model through the requirement that
(3) $\lambda_{wv}=0$ whenever $w\to v\notin D$,
(4) $\omega_{vw}=0$ whenever $v\neq w$ and $v\leftrightarrow w\notin B$.
We emphasize that our treatment allows the model to have feedback loops, that is, the graph $G$ may have directed cycles.
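To make the covariance parametrization in (2) concrete, here is a minimal NumPy sketch that computes the implied covariance matrix for a small hypothetical graph; the graph, coefficient values, and error covariance below are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

# Hypothetical 3-node mixed graph: 1 -> 2 -> 3 with correlated errors 1 <-> 3.
# Entry Lambda[w, v] is the coefficient of variable w in the equation for v.
Lambda = np.array([[0.0, 0.8, 0.0],
                   [0.0, 0.0, 0.5],
                   [0.0, 0.0, 0.0]])

# Error covariance: unit variances, one nonzero entry for the bidirected edge.
Omega = np.array([[1.0, 0.0, 0.3],
                  [0.0, 1.0, 0.0],
                  [0.3, 0.0, 1.0]])

# Implied covariance from (2): Sigma = (I - Lambda)^{-T} Omega (I - Lambda)^{-1}.
I = np.eye(3)
Sigma = np.linalg.inv((I - Lambda).T) @ Omega @ np.linalg.inv(I - Lambda)
```

Because $I-\Lambda$ is invertible here, $\Sigma$ is a congruence transform of $\Omega$ and so inherits its symmetry and positive definiteness.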
1.1 Related Work
Frequently, the errors in an SEM, and consequently also the observations, are assumed to be multivariate Gaussian, which yields maximum likelihood estimates (MLEs). The Gaussian likelihood is often maximized using generic optimization methods, as done in the popular packages sem (Fox et al., 2017) and lavaan (Rosseel, 2012) for R (R Core Team, 2017). The coordinate-descent methods proposed by Drton et al. (2009) and Drton et al. (2017) can be a useful computational alternative that largely avoids convergence issues. As a less parametric method, generalized least squares (GLS) minimizes a discrepancy between the sample covariance and the covariance implied by the parameters. Although the GLS estimates are slightly more robust to misspecification, they are still asymptotically equivalent to the Gaussian MLEs (Olsson et al., 2000). When multivariate Gaussianity is inappropriate, MLEs and GLS estimates generally lose statistical efficiency and yield incorrectly calibrated confidence intervals.
Weighted least squares (WLS) methods, also called asymptotically distribution free, weight the discrepancy between the observed and hypothesized covariance structure by explicitly estimated fourth moments. Although WLS estimates are consistent and produce asymptotically correct confidence intervals even with non-Gaussian errors, the estimation of higher-order moments may come at a loss of statistical efficiency and cause convergence issues, which has limited their use (Muthen and Kaplan, 1992).
Chaudhuri et al. (2007) propose using the empirical likelihood (EL) of Owen (2001) to estimate a covariance matrix with structural zeros. In our setup, this corresponds to the special case of a mixed graph with no directed edges. Kolenikov and Yuan (2009) use EL to estimate the parameters of a linear SEM. In contrast to the mixed graph formulation, they consider the case where the latent variable structure is explicitly modeled and all errors are independent. The EL approach is appealing as it gives consistent estimates and asymptotically correct confidence intervals even when the errors are not multivariate Gaussian. However, EL can present numerous practical difficulties when the sample size is small relative to the number of parameters or estimating equations used. Moreover, standard implementations of EL methods are computationally feasible only for systems with a handful of variables. We believe that these issues have prevented application of EL to linear SEMs beyond what was done by Kolenikov and Yuan (2009).
1.2 Contribution
In this article, we apply the empirical likelihood framework to SEMs represented by mixed graphs and propose several modifications to a naive approach which address the most salient practical concerns:
(i) We show that in the mixed graph setting, the covariance parameters can be profiled out. This greatly reduces the computational burden by reducing the number of estimating equations imposed and parameters directly estimated. It also naturally encodes the positive definiteness constraint on $\Omega$ and yields a positive definite estimate of $\Omega$ for any point with a well-defined empirical likelihood.
(ii) When maximizing the empirical likelihood, we leverage a recent insight and directly incorporate gradient information in a quasi-Newton procedure instead of the typical derivative-free approaches to empirical likelihood optimization. This again yields substantial computational savings.
(iii) We use the adjusted empirical likelihood (AEL), first proposed by Chen et al. (2008). This adjustment ensures that an empirical likelihood and corresponding gradient are well defined for every value in the parameter space.
(iv) We apply the idea of extended empirical likelihood (EEL), which furnishes drastically improved coverage of confidence intervals at small sample sizes (Tsao and Wu, 2014).
Our simulations show that with these proposed modifications, empirical likelihood becomes an attractive alternative for practitioners concerned with non-Gaussianity in structural equation modeling.
2 Background on Empirical Likelihood
Let $X^{(1)},\dots,X^{(n)}$ be a sample from an $m$-variate distribution $P$ belonging to a non-/semiparametric statistical model $\mathcal{P}$. Let $\Delta_{n-1}=\{w\in[0,1]^n:\sum_{i=1}^n w_i=1\}$ be the $(n-1)$-dimensional probability simplex. For $w=(w_1,\dots,w_n)\in\Delta_{n-1}$, define the log-empirical likelihood $\ell(w)=\sum_{i=1}^n\log w_i$. This is the log-likelihood of the sample under the discrete distribution with mass $w_i$ at each point $X^{(i)}$. Suppose we are interested in a parameter $\theta$ taking values in $\Theta\subseteq\mathbb{R}^d$ such that for a map $g$ we have $E_P[g(X,\theta)]=0$ for all $P\in\mathcal{P}$. The log-empirical likelihood at a given parameter value $\theta$ is then
(5) $\ell_E(\theta)\;=\;\max\{\ell(w):w\in\mathcal{W}(\theta)\},$
where the feasible set
(6) $\mathcal{W}(\theta)\;=\;\Big\{w\in\Delta_{n-1}:\ \sum_{i=1}^n w_i\,g(X^{(i)},\theta)=0\Big\}$
reflects that the expectation of $g(X,\theta)$ vanishes for distributions compatible with $\theta$.
The empirical likelihood (EL) from (5) provides a basis for statistical inference. Maximizing it over $\Theta$ yields the maximum empirical likelihood estimator
(7) $\hat\theta\;=\;\operatorname*{arg\,max}_{\theta\in\Theta}\ \ell_E(\theta),$
that we refer to as the MELE. Ratios of the EL yield empirical likelihood ratio statistics. Owen (1988) derives an EL analogue of Wilks' theorem, and the result was expanded to the general estimating equation framework by Qin and Lawless (1994). The specific regularity conditions needed are discussed in Section 3.3, and the results imply under very general conditions that the MELE is consistent and asymptotically normal. In addition, EL ratio statistics have chi-square limiting distributions that can be used to calibrate statistical tests and create confidence intervals or regions. For a detailed exposition of these ideas, we refer readers to Owen (2001).
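As a concrete illustration of (5) and (6) in the simplest setting, the sketch below evaluates the log-EL for the mean of a univariate sample by solving the dual problem with Newton's method. The function name and all numerical choices are ours, not from the paper; this is only a minimal sketch of the standard EL computation.

```python
import numpy as np

def log_el_mean(x, theta, iters=50):
    """Log-EL ratio for the hypothesis E[X] = theta, computed via the dual.
    Returns sum_i log(n w_i) and the optimal weights w."""
    g = x - theta                 # estimating function values g(X_i, theta)
    n = len(g)
    d = 0.0                       # dual variable
    for _ in range(iters):
        denom = 1.0 + d * g
        grad = np.sum(g / denom)          # derivative of the dual objective
        hess = -np.sum(g**2 / denom**2)   # always negative: concave problem
        step = -grad / hess
        # backtracking keeps every implied weight strictly positive
        while np.any(1.0 + (d + step) * g <= 0):
            step /= 2.0
        d += step
    w = 1.0 / (n * (1.0 + d * g))
    return np.sum(np.log(n * w)), w

rng = np.random.default_rng(0)
x = rng.normal(size=100)
ll_at_mean, w = log_el_mean(x, x.mean())       # maximal: log-EL ratio is zero
ll_shifted, _ = log_el_mean(x, x.mean() + 0.1) # strictly negative
```

At $\theta$ equal to the sample mean the uniform weights are feasible and optimal, so the log-EL ratio is zero; moving $\theta$ away from the sample mean makes it negative.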
The nice theoretical properties of EL, however, come at a high practical cost. The practical issues become particularly pressing for applications to linear SEMs, for which the number of parameters and estimating equations generally grows quadratically in the number of variables considered. We describe three difficulties that complicate the direct use of EL for SEMs:
(i) For some parameter values $\theta$, the origin may be outside the convex hull of the evaluated estimating functions $\{g(X^{(i)},\theta)\}_{i=1}^n$, in which case the feasible set $\mathcal{W}(\theta)$ from (6) is empty and the EL at $\theta$ is zero. This "convex hull problem" occurs more often when the sample size is small relative to the number of estimating equations or when the data is skewed. As discussed by Grendár and Judge (2009), it is possible that the feasible set is empty for all parameter vectors $\theta$, which is known as the "empty set problem". In addition, the log-EL is typically not a concave function of $\theta$ (Chaudhuri et al., 2017), and finding an initial point that has well-defined EL and is in the basin of attraction of the MELE can be difficult.
(ii) The optimization problem defining the log-EL from (5) is typically solved iteratively through its dual. Although this problem is convex, it can be computationally burdensome when the number of estimating equations, which corresponds to the number of dual variables, is large.
(iii) Confidence regions calibrated via the asymptotic chi-square distribution of EL ratio statistics tend to undercover the true parameters when the sample size is small relative to the number of estimating equations.
3 Empirical Likelihood for SEMs
We now turn to the application of EL to SEMs. For expository simplicity, we assume throughout that our observations are centered. In other words, the intercept parameter vector $\lambda_0$ in (1) is zero, so that $\mu=0$. However, our ideas extend straightforwardly to the case where we also make inference about the intercept.
3.1 Profiled Formulation
Consider the linear SEM given by a mixed graph $G=(V,D,B)$, as defined in the Introduction. The general framework laid out in Section 2 can be applied directly to such a model by taking the covariance matrix $\Sigma$ of the observations as the general parameter $\theta$. We may then define an EL at a pair of parameter matrices $(\Lambda,\Omega)$ as the EL at the covariance matrix $\Sigma(\Lambda,\Omega)$ from (2). In such a direct application to the linear SEM, the log-EL function is the maximum of the log-EL $\ell(w)$ over the set
(8) $\mathcal{W}(\Lambda,\Omega)\;=\;\Big\{w\in\Delta_{n-1}:\ \sum_{i=1}^n w_i X^{(i)}=0,\ \sum_{i=1}^n w_i\,\mathrm{vech}\big(X^{(i)}X^{(i)T}-\Sigma(\Lambda,\Omega)\big)=0\Big\}.$
Here, $\mathrm{vech}$ is the half-vectorization operator for symmetric matrices. Under this formulation, there are $p$ constraints for the mean and $p(p+1)/2$ covariance constraints, and the MELE is computed by optimization with respect to the pair of matrices $(\Lambda,\Omega)$, with $\Omega$ restricted to be positive definite.
Inspection of the covariance constraints reveals that a great simplification is possible by profiling out $\Omega$. Indeed, the covariance constraint yields an explicit solution for $\Omega$ given $\Lambda$, $w$, and the data. Specifically, with $\varepsilon^{(i)}(\Lambda)=(I-\Lambda)^T X^{(i)}$, we have
(9) $\Omega\;=\;\sum_{i=1}^n w_i\,\varepsilon^{(i)}(\Lambda)\,\varepsilon^{(i)}(\Lambda)^T.$
The entries of $\Omega$ are either constrained to be zero or freely varying. No constraints arise from the freely varying entries, and we may base estimation of $\Lambda$ on only the structural zeros in $\Omega$, that is, on the constraints $\sum_{i=1}^n w_i\,\varepsilon^{(i)}_v(\Lambda)\,\varepsilon^{(i)}_w(\Lambda)=0$ for pairs $v\neq w$ with $v\leftrightarrow w\notin B$.
Once a solution $(\hat\Lambda,\hat w)$ is found, we may simply compute $\hat\Omega$ by setting $\hat\omega_{vw}=\sum_{i=1}^n \hat w_i\,\varepsilon^{(i)}_v(\hat\Lambda)\,\varepsilon^{(i)}_w(\hat\Lambda)$ for $v=w$ or $v\leftrightarrow w\in B$. The profile log-EL in this approach is the function
(10) $\ell_p(\Lambda)\;=\;\max\{\ell(w):w\in\mathcal{W}(\Lambda)\},$
obtained from the set of weight vectors
(11) $\mathcal{W}(\Lambda)\;=\;\Big\{w\in\Delta_{n-1}:\ \sum_{i=1}^n w_i\,\varepsilon^{(i)}(\Lambda)=0,\ \sum_{i=1}^n w_i\,\varepsilon^{(i)}_v(\Lambda)\,\varepsilon^{(i)}_w(\Lambda)=0\ \text{for all } v\neq w \text{ with } v\leftrightarrow w\notin B\Big\}.$
The MELE is found by maximizing $\ell_p$ over the set of matrices $\Lambda$ satisfying (3), and $\hat\Omega$ is then computed from (9). We emphasize that there are now only as many covariance constraints as there are structural zeros in $\Omega$, and only the matrix $\Lambda$ needs to be optimized.
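The profiling step can be sketched in code as follows: given directed-edge weights and a weight vector, form the residuals and take their weighted second moment, zeroing the entries at structural non-edges. The graph, placeholder data, and uniform stand-in weights below are our own illustrative choices; in the actual procedure the weights come from the EL optimization, so the zeroed entries are already (approximately) zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3
X = rng.normal(size=(n, p))                 # placeholder data
Lambda = np.array([[0.0, 0.5, 0.0],
                   [0.0, 0.0, 0.5],
                   [0.0, 0.0, 0.0]])        # hypothetical directed part
siblings = {(0, 2)}                         # bidirected edges; (0, 1) is a structural zero

w = np.full(n, 1.0 / n)                     # stand-in for the EL weights
eps = X @ (np.eye(p) - Lambda)              # row i holds eps^{(i)}(Lambda)
Omega_hat = (w[:, None] * eps).T @ eps      # weighted sum of eps eps^T, as in (9)
for v in range(p):
    for u in range(v + 1, p):
        if (v, u) not in siblings:          # impose the structural zeros
            Omega_hat[v, u] = Omega_hat[u, v] = 0.0
```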
Following a standard strategy, we evaluate $\ell_p(\Lambda)$, that is, solve the "inner maximization" in (10) at a fixed $\Lambda$, through the dual problem. Strong duality holds because the constraints in (11) are linear in the weights $w$. Let $g(\cdot,\Lambda)$ be the map with coordinates $\varepsilon_v(\Lambda)$ for $v\in V$ and $\varepsilon_v(\Lambda)\varepsilon_w(\Lambda)$ for each non-edge, i.e., each pair $v\neq w$ with $v\leftrightarrow w\notin B$. With dual variables $\gamma\in\mathbb{R}$ and $\delta$, the Lagrangian for the inner optimization over $w$ is
(12) $L(w,\gamma,\delta)\;=\;\sum_{i=1}^n\log w_i+\gamma\Big(1-\sum_{i=1}^n w_i\Big)-n\,\delta^T\sum_{i=1}^n w_i\,g(X^{(i)},\Lambda).$
Maximizing over the weights, with $\gamma=n$, we find
(13) $w_i\;=\;\frac{1}{n\big(1+\delta^T g(X^{(i)},\Lambda)\big)},$
and substitution into the Lagrangian yields a convex dual function of $\delta$. We optimize it via Newton-Raphson with a backtracking line search to ensure $w_i>0$ for all $i$.
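In code, the inner dual solve just described might look like the sketch below, where the matrix G stacks the estimating-function values $g(X^{(i)},\Lambda)$ as rows. The Newton iteration and backtracking rule mirror the text, but the implementation details are our own.

```python
import numpy as np

def inner_dual(G, iters=100):
    """Evaluate the inner maximization at fixed Lambda via its dual.
    G is n x q with rows g(X_i, Lambda); returns the log-EL and weights."""
    n, q = G.shape
    delta = np.zeros(q)
    for _ in range(iters):
        denom = 1.0 + G @ delta
        grad = G.T @ (1.0 / denom)                 # gradient of the dual objective
        hess = -(G / denom[:, None] ** 2).T @ G    # negative definite (concave)
        step = np.linalg.solve(hess, -grad)        # Newton-Raphson step
        while np.any(1.0 + G @ (delta + step) <= 0):
            step /= 2.0                            # backtracking keeps w_i > 0
        delta += step
    w = 1.0 / (n * (1.0 + G @ delta))              # weights as in (13)
    return np.sum(np.log(n * w)), w

# Centered toy constraints: the constraints already hold at delta = 0, so the
# uniform weights are optimal and the log-EL ratio is zero.
rng = np.random.default_rng(3)
G = rng.normal(size=(50, 2))
G -= G.mean(axis=0)
ll, w = inner_dual(G)
```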
In the "outer maximization", we optimize with respect to $\Lambda$ using a gradient-based quasi-Newton method. Although we can only evaluate $\ell_p(\Lambda)$ numerically, once we have the optimal dual variables $\hat\delta$ and the corresponding weights $\hat w$ from (13), we can analytically compute the gradient of $\ell_p$ as
(14) $\nabla_\Lambda\,\ell_p(\Lambda)\;=\;-\,n\sum_{i=1}^n \hat w_i\,\hat\delta^T\,\frac{\partial g(X^{(i)},\Lambda)}{\partial\Lambda};$
see Chaudhuri et al. (2017). The Hessian, however, cannot be computed in closed form, so we use BFGS, which builds an approximate Hessian via the gradient.
Although both formulations yield the same MELE, the profile approach from (10) and (11) drastically eases difficulties (i) and (ii) discussed in Section 2, as the number of estimating equations for the covariance is reduced to the number of structural zeros in $\Omega$. This reduces the number of dual variables to optimize in the inner maximization. Moreover, when profiled, the outer maximization searches over $\Lambda$ only, while the naive direct formulation from (8) requires a search over both $\Lambda$ and $\Omega$; in particular, positive definiteness of $\Omega$ needs to be respected in the naive optimization. Finally, satisfying the convex hull condition for the error covariances typically requires a simultaneously good choice of $\Lambda$ and $\Omega$. The directed edge weights can be easily initialized with regression estimates, but the covariance parameters are typically more difficult to specify. In Section 4, we show that the computational advantages produce substantial gains in computation time and that the profiled procedure converges to a valid stationary point a much higher proportion of the time, even when the sample size is small.
3.2 Small Sample Improvements
In addition to reformulating the optimization problem, we make two modifications to improve the performance of EL for SEMs. We apply adjusted empirical likelihood (AEL) to improve the search for a MELE and use extended empirical likelihood (EEL) to improve the coverage of confidence intervals.
Chen et al. (2008) proposed AEL to alleviate the convex hull problem mentioned in difficulty (i) above. The adjustment amounts to adding a pseudo-observation whose contribution to the estimating equations is $g_{n+1}=-\frac{a_n}{n}\sum_{i=1}^n g(X^{(i)},\theta)$ for a choice of $a_n>0$. Adding this term ensures that no matter the value of $\theta$, the set of feasible weight vectors, now in $\Delta_n$, is nonempty. Hence, the log-AEL and its gradient, given by (15) in analogy to (14), are well defined across the entire parameter space. Chen et al. (2008) show that AEL retains the asymptotic properties of the original EL when $a_n=o_p(n^{2/3})$, and suggest $a_n=\max(1,\log(n)/2)$. We adopt this choice.
The terms in our covariance constraints are products $\varepsilon^{(i)}_v(\Lambda)\,\varepsilon^{(i)}_w(\Lambda)$. This is generally not true for the added pseudo-observation, and it is not clear how to define an appropriately sparse and positive definite matrix $\hat\Omega$ using AEL weights. Thus, we propose finding an estimate $\hat\Lambda$ that maximizes the AEL and computing $\hat\Omega$ based on the weights obtained by recalculating the original EL at $\hat\Lambda$. As demonstrated in our numerical experiments, this approach alleviates some convergence issues; but, of course, the original EL may be zero at the AEL maximizer $\hat\Lambda$, in which case we do not have an estimate of $\Omega$ and say that the AEL procedure has not converged.
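The adjustment itself is a one-liner: append a pseudo-observation proportional to the negative sample mean of the estimating-function values. The sketch below uses the suggested $a_n=\max(1,\log(n)/2)$; the function name and toy matrix are ours.

```python
import numpy as np

def ael_augment(G, a=None):
    """Append the Chen et al. (2008) pseudo-observation to an n x q matrix of
    estimating-function values, guaranteeing a nonempty feasible set."""
    n = G.shape[0]
    if a is None:
        a = max(1.0, np.log(n) / 2.0)
    g_extra = -(a / n) * G.sum(axis=0)   # points opposite the sample mean
    return np.vstack([G, g_extra])

# Toy example where the origin is outside the convex hull of the rows of G,
# so the original EL would be zero; after augmentation a feasible set exists.
G = np.array([[1.0, 2.0],
              [3.0, 4.0]])
G_adj = ael_augment(G)
```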
To address undercoverage of confidence regions for smaller samples, as described in difficulty (iii), we adopt the EEL of Tsao and Wu (2014), who show that their calibrated EEL confidence regions outperform those from the original EL. Assuming the MELE $\hat\Lambda$ exists, a positive EEL may be defined for any matrix $\Lambda$ by taking the original EL at a convex combination of $\Lambda$ and $\hat\Lambda$. Specifically, the log-EEL suggested by Tsao and Wu (2014) is
(16) $\ell_{EEL}(\Lambda)\;=\;\ell_p\big(\hat\Lambda+\gamma\,(\Lambda-\hat\Lambda)\big)$
for a data-dependent factor $\gamma$ with $0<\gamma\le 1$.
3.3 Asymptotic Distribution of Empirical Likelihood Estimators
It follows from Qin and Lawless (1994, Theorem 1) that under the following assumptions, MELEs are asymptotically normal and empirical likelihood ratios converge to chi-square limits. The same is true for the modifications from Section 3.2.
Proposition 1.
Let $G=(V,D,B)$ be a mixed graph, and let $\Lambda_0$ and $\Omega_0$ be parameter matrices satisfying the constraints (3) and (4), with $\Omega_0$ positive definite. Let $\varepsilon$ be a zero-mean random vector with covariance matrix $\Omega_0$. Assume that:
(a) The Jacobian of the parametrization $(\Lambda,\Omega)\mapsto\Sigma(\Lambda,\Omega)$ defined on the set given by (3) and (4) has full rank at $(\Lambda_0,\Omega_0)$.
(b) The error vector $\varepsilon$ has finite moments of sufficiently high order for the estimating functions given in the appendix to have a positive definite covariance matrix and integrable local envelopes.
If $X^{(1)},\dots,X^{(n)}$ is an i.i.d. sample from the distribution determined by $(\Lambda_0,\Omega_0)$, i.e., the distribution of $X=(I-\Lambda_0)^{-T}\varepsilon$, then the MELE $\hat\theta=(\hat\Lambda,\hat\Omega)$ is asymptotically normal with
(17) $\sqrt{n}\,\big(\hat\theta-\theta_0\big)\;\xrightarrow{d}\;N(0,V),\qquad \theta_0=(\Lambda_0,\Omega_0).$
Here, $V$ is given by the estimating equations corresponding to the naive formulation in (8). Furthermore, EL ratio statistics have chi-square limits. In particular, with $d$ the number of free parameters in $(\Lambda,\Omega)$, we have
(18) $2\big\{\ell_E(\hat\theta)-\ell_E(\theta_0)\big\}\;\xrightarrow{d}\;\chi^2_d.$
We sketch the proof of the proposition in the appendix.
If the rank condition from (a) holds, then the rational map $(\Lambda,\Omega)\mapsto\Sigma(\Lambda,\Omega)$ has full rank Jacobian at almost all choices of $(\Lambda,\Omega)$, and the map is generically finite-to-one. There is thus a connection to local/finite identifiability of $(\Lambda,\Omega)$ from the covariance matrix $\Sigma$. For state-of-the-art methods for determining identifiability, see Foygel et al. (2012); Chen (2016); Drton and Weihs (2016).
4 Numerical Simulations
We now present a series of numerical experiments that evaluate the effectiveness of the proposed methods and compare them to existing approaches.
4.1 Convergence of Optimizers for Naive vs Profile Formulation
We first compare the naive/direct procedure, which explicitly estimates $\Lambda$ and $\Omega$, to the profiled procedure, which only involves $\Lambda$. For both procedures, we use the original EL and adjusted EL. We also consider a hybrid method, which first finds the maximum AEL point to initialize a search that then uses the original EL. We randomly generate acyclic mixed graphs with 8 nodes, 10 directed edges, and 6 bidirected edges. We randomly select directed edges from all pairs $(v,w)$ with $v<w$ and then select bidirected edges from the remaining unselected pairs. This setup ensures that $(\Lambda,\Omega)$ are generically identifiable from $\Sigma$ by the result of Brito and Pearl (2002).
We generate random true parameter matrices $\Lambda$ and $\Omega$ as follows. The coefficients $\lambda_{vw}$, $v\to w\in D$, are drawn uniformly from a fixed interval. For $\Omega$, we draw the off-diagonal elements $\omega_{vw}$, $v\leftrightarrow w\in B$, uniformly from a fixed interval. We then use exponential draws to set the diagonal elements $\omega_{vv}$.
We consider errors from four distributions. First, we generate centered multivariate Gaussian errors with covariance matrix $\Omega$. Second, we generate them from a multivariate $t$ distribution with 4 degrees of freedom, which we denote by $t_4$, again with expectation zero and covariance matrix $\Omega$. Third, we consider log-normal errors. In this case, we simulate a multivariate Gaussian vector, centered and with covariance matrix equal to the correlation matrix that corresponds to $\Omega$. We then set the error vector to a recentered and rescaled componentwise exponential of this Gaussian vector, which yields covariance matrix $\Omega$. Finally, in order to draw from a multivariate distribution with recentered gamma marginals and covariance $\Omega$, we follow these steps:
1. Draw an initial gamma component for each error term.
2. For each bidirected edge $v\leftrightarrow w$, generate a shared gamma variable and a random sign.
3. If the sign is positive, add the shared variable to both $\varepsilon_v$ and $\varepsilon_w$; if negative, add it to $\varepsilon_v$ and subtract it from $\varepsilon_w$.
4. Subtract the true mean from each error term so that it has mean 0.
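The steps above can be sketched as follows, using shared Gamma$(|\omega_{vw}|,1)$ components to induce the covariances. The exact shape parameters are not shown in the text, so the allocation below (leftover variance to an own component per coordinate) is our assumption; it is valid whenever $\Omega$ is diagonally dominant.

```python
import numpy as np

def gamma_errors(Omega, rng):
    """One draw of a zero-mean error vector with gamma-based marginals and
    covariance Omega (sketch; assumes Omega is diagonally dominant)."""
    p = Omega.shape[0]
    eps = np.zeros(p)
    mean = np.zeros(p)
    for v in range(p):
        # own component carries the variance not explained by shared terms
        shape = Omega[v, v] - (np.sum(np.abs(Omega[v])) - Omega[v, v])
        eps[v] += rng.gamma(shape)
        mean[v] += shape
    for v in range(p):
        for u in range(v + 1, p):
            if Omega[v, u] != 0.0:
                g = rng.gamma(np.abs(Omega[v, u]))   # shared component
                s = np.sign(Omega[v, u])
                eps[v] += g
                mean[v] += np.abs(Omega[v, u])
                eps[u] += s * g                      # sign matches omega_vu
                mean[u] += s * np.abs(Omega[v, u])
    return eps - mean  # subtract the true mean, as in the last step

rng = np.random.default_rng(5)
Omega = np.array([[2.0, 0.5],
                  [0.5, 2.0]])
draws = np.array([gamma_errors(Omega, rng) for _ in range(20000)])
```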
All optimizations are initialized with a procedure from Drton et al. (2017), where the free elements of $\Lambda$ are calculated via least squares. The resulting residuals are used to initialize the nonzero entries of $\Omega$. If a row of $\Omega$ is not diagonally dominant, the off-diagonal elements are scaled so that the row becomes diagonally dominant, which ensures that the initial $\Omega$ is positive definite.
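A sketch of this initialization, with a hypothetical graph and placeholder data: fit each structural equation by least squares, form the residual covariance, and shrink the off-diagonals of any row that violates diagonal dominance. The shrink factor 0.9 is our choice, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 3
X = rng.normal(size=(n, p))
parents = {0: [], 1: [0], 2: [1]}   # hypothetical graph 0 -> 1 -> 2

# Free elements of Lambda via least squares: regress each node on its parents.
Lambda0 = np.zeros((p, p))
for v, pa in parents.items():
    if pa:
        coef, *_ = np.linalg.lstsq(X[:, pa], X[:, v], rcond=None)
        Lambda0[pa, v] = coef

# Residual covariance initializes Omega.
resid = X @ (np.eye(p) - Lambda0)
Omega0 = resid.T @ resid / n

# Scale off-diagonal entries of any non-dominant row; a diagonally dominant
# symmetric matrix with positive diagonal is positive definite.
for v in range(p):
    off = np.sum(np.abs(Omega0[v])) - Omega0[v, v]
    if off >= Omega0[v, v] and off > 0:
        c = 0.9 * Omega0[v, v] / off
        for u in range(p):
            if u != v:
                Omega0[v, u] *= c
                Omega0[u, v] *= c
```

In the actual procedure the structural zeros of $\Omega$ would also be imposed on the initial estimate.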
Figure 1 shows that in all cases the profiled formulation converges at least as often as the naive formulation. AEL converges more often than the original EL, and the hybrid procedure converges most often. Even at the smallest sample size considered, the profiled problem converges nearly every time, except in the case of log-normal errors. Figure 2 shows that the profiled form can be up to 40 times faster on average than the naive form.
4.2 Estimation Error
We now explore the estimation errors resulting from different approaches. We compare both original EL and AEL to the Gaussian MLE computed as in Drton et al. (2009), GLS, and WLS. The latter two estimates are computed using the R package lavaan (Rosseel, 2012). We also include a hybrid procedure that finds the Gaussian MLE and then uses the resulting residuals and the maximum EL weights at the MLE to form an estimate of $\Omega$. Note that the $t_4$ distribution does not have finite 6th moments, so the limiting distributions from Proposition 1 may not hold; however, all estimation procedures still appear to be consistent.
Proceeding as in Section 4.1, we generate 1000 graphs for each error distribution and sample size. To measure estimation accuracy, we average the relative estimation error of $(\Lambda,\Omega)$ across each of the simulation runs in which all methods converge; recall Figure 1. The results are shown in Figure 3.
In general, there is no substantial difference in accuracy between the adjusted and original empirical likelihood methods. For the Gaussian case, MLE and GLS perform better than the methods which do not assume Gaussianity, but the improvement is slight. In the $t_4$ and log-normal cases, the EL procedures perform substantially better than the other methods. Finally, for the gamma case, the hybrid method seems to outperform the other methods, followed closely by the EL methods; however, the differences between the methods are not substantial. In Figure 4, all methods converge more than 95% of the time for all distributions, except for the log-normal case, in which the WLS procedure still only converges roughly 90% of the time at the largest sample size considered.
4.3 Confidence Regions
We examine the coverage frequencies of joint confidence regions for the parameters $\Lambda$ and $\Omega$. We construct Wald regions using the estimates from the Gaussian MLE, GLS, and WLS. We also calculate a sandwich variance estimator using the Gaussian likelihood as the estimating equations, and the asymptotic EL variance via Qin and Lawless (1994). Alternatively, we calculate the EL at the true parameters using the original EL, EEL, and AEL. We then compare the resulting EL ratio to its asymptotic chi-square distribution. If a method does not converge, we count this as a case in which the confidence region does not cover the true parameters.
At each sample size and error distribution, we construct 1000 graphs with 6 nodes, 8 directed edges, and 4 bidirected edges from the procedure described in Section 4.1. For the $t$ distribution, we increase the degrees of freedom to 7 to ensure that Proposition 1 applies. The coverage rates for 90% confidence intervals are shown in Figure 5. Based on the displayed results, regions obtained from the Gaussian MLE and GLS can only be recommended when the errors are (close to) Gaussian. The EEL method performs the best, staying close to the parametric methods in the Gaussian case and doing the best in most non-Gaussian scenarios. The sandwich method is another good choice. However, we also observe that very large sample sizes may be required in order to achieve nominal coverage levels.
4.4 Protein Signaling Network
Sachs et al. (2005, Figure 2) present a signaling network of 11 observed molecules and 13 unobserved molecules. The black edges in Figure 6 give a plausible mixed graph representation of that network, which was also considered by Drton et al. (2017). A log-transformation of the available protein expression data improves Gaussianity but leaves the distribution of some of the variables skewed and/or multimodal. We consider two separate tests; each compares the SEM submodel corresponding to the graph of black edges against a full model that adds one of the two red edges also shown in Figure 6. Note that the added red directed edge induces a directed cycle. For the log-transformed data, we perform a Gaussian as well as an empirical likelihood ratio test. For the test involving the added directed edge, the Gaussian LR is .416 (p-value = .52) and the ELR is 4.379 (p-value = .04). For the test involving the added bidirected edge, the Gaussian LR is 15.216 (p-value < .001) and the ELR is .782 (p-value = .37). While we do not have a certified gold standard network, and the implicit assumption of linearity may not be appropriate for all postulated relationships, these examples present situations in which the Gaussian assumption is particularly inappropriate and may cause concerns for a practitioner.
5 Discussion
In this article, we showed that EL methods are an attractive alternative for estimation and testing of non-Gaussian linear SEMs. Our approach of profiling out the error covariance matrix drastically reduces computational effort and creates a far more tractable and reliable estimation procedure. Furthermore, we showed that the use of AEL may further improve convergence of optimizers, particularly when the sample size is small and the errors are skewed. EEL was seen to drastically improve the coverage rate of joint confidence regions.
Our EL methods are applicable under very few distributional assumptions, all the while allowing statistical inference in close analogy to parametric modeling. When the data is non-Gaussian, the modified EL methods outperform the other methods we considered in almost all scenarios we explored. This concerns the proportion of times a valid estimate is returned, statistical efficiency, and also confidence region coverage. While there remains significant room for improvement in the design of confidence regions, we conclude that EL methods are a valuable tool for applications of linear SEMs to non-Gaussian data.
Appendix
Proof of Proposition 1.
We recall our notation $\theta=(\Lambda,\Omega)$ and $\varepsilon(\Lambda)=(I-\Lambda)^T X$. Based on the rightmost expression in (9), the considered naive/direct estimating equations may be based on the function $g(X,\theta)$ with coordinates
$\varepsilon_v(\Lambda)$ and $\varepsilon_v(\Lambda)\,\varepsilon_w(\Lambda)-\omega_{vw}$
for $v\in V$ and pairs $v\le w$, respectively.
Our claim follows from Theorem 1 of Qin and Lawless (1994) under the following conditions:
(1) $E\big[g(X,\theta_0)\,g(X,\theta_0)^T\big]$ is positive definite.
(2) In a neighborhood of the $d$-dimensional parameter $\theta_0$, the derivative $\partial g/\partial\theta$ is continuous, and $\|\partial g/\partial\theta\|$ and $\|g\|^3$ are bounded by an integrable function.
(3) $E\big[\partial g(X,\theta_0)/\partial\theta\big]$ has rank $d$.
(4) $\partial^2 g/\partial\theta\,\partial\theta^T$ is continuous and bounded by an integrable function in a neighborhood of the true parameter $\theta_0$.
Here, $\|\cdot\|$ denotes the Euclidean norm.
Condition (1) is an immediate consequence of assumption (b) in our proposition. Condition (3) is implied by assumption (a). With polynomial estimating equations, all derivatives in conditions (2) and (4) exist. Now, $g$ and its first and second partial derivatives are at most quadratic functions of $X$, which in turn is a linear function of a realization of the error vector $\varepsilon$. Local bounds on the concerned quantities are easily obtained, and assumption (b) ensures their integrability. ∎
6 Acknowledgments
This material is based upon work supported by the U.S. National Science Foundation under Grants No. DMS 1561814 and 1712535. We would also like to thank the Institute for Mathematical Sciences at the National University of Singapore for travel support to the 2016 workshop: Empirical Likelihood Based Methods in Statistics.
References
 Bollen [1989] K. A. Bollen. Structural equations with latent variables. Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics. John Wiley & Sons, Inc., New York, 1989. ISBN 0471011711. URL http://dx.doi.org/10.1002/9781118619179. A Wiley-Interscience Publication.
 Brito and Pearl [2002] C. Brito and J. Pearl. A new identification condition for recursive models with correlated errors. Struct. Equ. Model., 9(4):459–474, 2002. ISSN 10705511.
 Chaudhuri et al. [2007] S. Chaudhuri, M. Drton, and T. S. Richardson. Estimation of a covariance matrix with zeros. Biometrika, 94(1):199–216, 2007. ISSN 00063444. URL http://dx.doi.org/10.1093/biomet/asm007.
 Chaudhuri et al. [2017] S. Chaudhuri, D. Mondal, and T. Yin. Hamiltonian Monte Carlo sampling in Bayesian empirical likelihood computation. J. R. Stat. Soc. Ser. B. Stat. Methodol., 79(1):293–320, 2017. ISSN 13697412. URL http://dx.doi.org/10.1111/rssb.12164.
 Chen [2016] B. Chen. Identification and overidentification of linear structural equation models. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 1587–1595. Curran Associates, Inc., 2016. URL http://papers.nips.cc/paper/6223-identification-and-overidentification-of-linear-structural-equation-models.pdf.
 Chen et al. [2008] J. Chen, A. M. Variyath, and B. Abraham. Adjusted empirical likelihood and its properties. J. Comput. Graph. Statist., 17(2):426–443, 2008. ISSN 10618600. URL http://dx.doi.org/10.1198/106186008X321068.
 Colombo et al. [2012] D. Colombo, M. H. Maathuis, M. Kalisch, and T. S. Richardson. Learning highdimensional directed acyclic graphs with latent and selection variables. Ann. Statist., 40(1):294–321, 2012.
 Drton and Weihs [2016] M. Drton and L. Weihs. Generic identifiability of linear structural equation models by ancestor decomposition. Scand. J. Stat., 43(4):1035–1045, 2016. ISSN 03036898. URL http://dx.doi.org/10.1111/sjos.12227.
 Drton et al. [2009] M. Drton, M. Eichler, and T. S. Richardson. Computing maximum likelihood estimates in recursive linear models with correlated errors. J. Mach. Learn. Res., 10:2329–2348, 2009. ISSN 15324435.
 Drton et al. [2017] M. Drton, C. Fox, and Y. S. Wang. Computation of maximum likelihood estimates in cyclic structural equation models. Ann. Statist., to appear, 2017. arXiv:1610.03434.
 Fox et al. [2017] J. Fox, Z. Nie, and J. Byrnes. sem: Structural Equation Models, 2017. URL https://CRAN.R-project.org/package=sem. R package version 3.1-9.
 Foygel et al. [2012] R. Foygel, J. Draisma, and M. Drton. Half-trek criterion for generic identifiability of linear structural equation models. Ann. Statist., 40(3):1682–1713, 2012. ISSN 00905364. URL http://dx.doi.org/10.1214/12-AOS1012.
 Grendár and Judge [2009] M. Grendár and G. Judge. Empty set problem of maximum empirical likelihood methods. Electron. J. Stat., 3:1542–1555, 2009. ISSN 19357524. URL http://dx.doi.org/10.1214/09-EJS528.
 Kolenikov and Yuan [2009] S. Kolenikov and Y. Yuan. Empirical likelihood estimation and testing in covariance structure models. available at http://staskolenikov.net, 2009.
 Muthen and Kaplan [1992] B. Muthen and D. Kaplan. A comparison of some methodologies for the factor analysis of nonnormal Likert variables: A note on the size of the model. British Journal of Mathematical and Statistical Psychology, 45(1):19–30, 1992.
 Olsson et al. [2000] U. H. Olsson, T. Foss, S. V. Troye, and R. D. Howell. The performance of ML, GLS, and WLS estimation in structural equation modeling under conditions of misspecification and nonnormality. Structural Equation Modeling, 7(4):557–595, 2000.
 Owen [1988] A. B. Owen. Empirical likelihood ratio confidence intervals for a single functional. Biometrika, 75(2):237–249, 1988. ISSN 00063444. URL http://dx.doi.org/10.1093/biomet/75.2.237.
 Owen [2001] A. B. Owen. Empirical likelihood. CRC press, 2001.
 Pearl [2009] J. Pearl. Causality. Cambridge University Press, Cambridge, second edition, 2009. ISBN 9780521895606; 0521773628. Models, reasoning, and inference.
 Qin and Lawless [1994] J. Qin and J. Lawless. Empirical likelihood and general estimating equations. Ann. Statist., 22(1):300–325, 1994. ISSN 00905364. URL http://dx.doi.org/10.1214/aos/1176325370.
 R Core Team [2017] R Core Team. R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria, 2017. URL https://www.R-project.org/.

 Richardson and Spirtes [2002] T. Richardson and P. Spirtes. Ancestral graph Markov models. Ann. Statist., 30(4):962–1030, 2002. ISSN 00905364. URL http://dx.doi.org/10.1214/aos/1031689015.
 Rosseel [2012] Y. Rosseel. lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2):1–36, 2012. URL http://www.jstatsoft.org/v48/i02/.
 Sachs et al. [2005] K. Sachs, O. Perez, D. Pe’er, D. A. Lauffenburger, and G. P. Nolan. Causal proteinsignaling networks derived from multiparameter singlecell data. Science, 308(5721):523–529, 2005.

 Shpitser and Pearl [2006] I. Shpitser and J. Pearl. Identification of joint interventional distributions in recursive semi-Markovian causal models. In Proceedings, The Twenty-First National Conference on Artificial Intelligence and the Eighteenth Innovative Applications of Artificial Intelligence Conference, July 16-20, 2006, Boston, Massachusetts, USA, pages 1219–1226, 2006. URL http://www.aaai.org/Library/AAAI/2006/aaai06-191.php.
 Spirtes et al. [2000] P. Spirtes, C. Glymour, and R. Scheines. Causation, prediction, and search. Adaptive Computation and Machine Learning. MIT Press, Cambridge, MA, second edition, 2000. ISBN 0262194406. With additional material by David Heckerman, Christopher Meek, Gregory F. Cooper and Thomas Richardson, A Bradford Book.
 Tsao and Wu [2014] M. Tsao and F. Wu. Extended empirical likelihood for estimating equations. Biometrika, 101(3):703–710, 2014. ISSN 00063444. URL http://dx.doi.org/10.1093/biomet/asu014.
 Wermuth [2011] N. Wermuth. Probability distributions with summary graph structure. Bernoulli, 17(3):845–879, 2011. ISSN 13507265. URL http://dx.doi.org/10.3150/10-BEJ309.
 Wright [1921] S. Wright. Correlation and causation. J. Agric. Res., 20:557–585, 1921.