Accurate directional inference in Gaussian graphical models

by Claudia Di Caterina, et al.

Directional tests for comparing nested parametric models are developed in the general context of covariance selection for Gaussian graphical models. The exactness of the underlying saddlepoint approximation leads to exceptional accuracy of the proposed approach. This is verified by simulation experiments with high-dimensional parameters of interest, where standard asymptotic approximations to the distribution of the likelihood ratio statistic, and some of its higher-order modifications, fail to be accurate. The directional p-value is used to illustrate the assessment of Markovian dependencies in a dataset from a veterinary trial on cattle. A second example with microarray data shows how to select the graph structure related to genetic anomalies due to acute lymphocytic leukemia.
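As context for the comparison above, the baseline procedure is the classical likelihood ratio test for covariance selection, whose null distribution is approximated by a chi-squared. A minimal sketch (illustrative only, not the authors' directional method): testing the empty graph (all variables independent, i.e. all off-diagonal precision entries zero) against the saturated Gaussian model, where the statistic reduces to $-n \log\det R$ with $R$ the sample correlation matrix.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 200, 4

# Simulate from a multivariate normal with two dependent pairs,
# so the empty (fully independent) graph is false.
Sigma = np.array([[1.0, 0.5, 0.0, 0.0],
                  [0.5, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.3],
                  [0.0, 0.0, 0.3, 1.0]])
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

# Likelihood ratio statistic for H0: empty graph vs. saturated model.
# Under H0 it is asymptotically chi-squared with p(p-1)/2 degrees of
# freedom, one for each off-diagonal precision entry constrained to zero.
R = np.corrcoef(X, rowvar=False)
W = -n * np.log(np.linalg.det(R))
df = p * (p - 1) // 2
p_value = stats.chi2.sf(W, df)
print(f"W = {W:.2f}, df = {df}, p-value = {p_value:.3g}")
```

It is exactly this first-order chi-squared approximation that deteriorates when the parameter of interest is high-dimensional, which motivates the directional approach of the paper.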

