1 Introduction
New technologies allow the collection of large amounts of data up to a significant level of detail. To fully exploit the information in the data it is important that the possibly complex relationships among them are effectively captured and described. A statistical tool that allows to exploit the power of graphs to represent such relationships among a, possibly large, number of variables, is a graphical model. Indeed, a graphical model can provide a geometrical representation of the dependencies among the variables with the immediacy that graphs exhibit. The use of this particular type of models is widespread within disciplines, including finance and economics (Giudici & Spelta (2016)), social sciences (McNally et al. (2015), Williams (2018)), speech recognition (Bilmes (2004), Bell & King (2007)) and biology (Wang et al. (2016)).
A sensible way of describing a graph is (Roverato, 2017) as a collection of two sets of objects: vertices and edges. Vertices represent a finite set of elements, whereas the edges signify the existence of a link or interplay between pairs of those elements. In a diagram, the vertices are drawn as numerically labelled circles, while the edges can be represented by either a simple line or an arrow, symbolising the distinction between undirected and directed graphs, respectively. Formally, an edge is said to be undirected if the order in the pair of the connect vertices is not relevant; conversely, the edge is said to be directed and the order is represented by the direction of an arrow. Examples of both these types of graphs can be seen in Figures 1 and 2.
An attractive feature of undirected graphs is decomposability, since it allows to divide a graph into subgraphs (graphs which are part of a larger graph). Decomposability can help with the computations and in the implementation of efficient inferential methods as subgraphs can be treated separately. To elaborate, a decomposable graph can be divided into smaller parts, called cliques and separators. A clique is a subgraph where all its vertices are connected to each other. A separator has a more technical definition, but it can be intuitively illustrated as follows. Let us assume that a graph is formed by three subgraphs: , and . Then is a separator if the only way to move from a vertex in to a vertex in is through . In the Bayesian framework, the decomposability in cliques and separators allows to define priors which encode the statistical dependencies of a model. A more indepth treatment of the graph notions described above is given in Chapter 2.
A widely used statistical model for graphs is the Gaussian Graphical Model
(GGM). There are many useful reasons for assuming Normality. A remarkable one is that, among all distributions with same mean and same variance, the Normal assumption maximizes the entropy. As a consequence, it imposes the least number of structural constraints beyond the first and second moments. As such, the focus of this paper is on GGM.
The literature around Gaussian graphical models is vast, and it spans from frequentist to Bayesian approaches. Meinshausen & Bühlmann (2006)estimate the neighbourhood of vertices through the LASSO procedure (Tibshirani, 1996) and then put together those estimates to build the underlying graph. Of the same flavour as LASSO, Yuan & Lin (2007) have introduced a penalized likelihood method to estimate the concentration matrix, which for Gaussian graphical models encodes the conditional independence. Friedman et al. (2008) have developed the graphical LASSO algorithm which is quite fast compared to other frequentist based algorithms. The above methods look at the regularization penalty being imposed on the concentration matrix. A method where the penalty is imposed to the inverse of the concentration matrix, the covariance matrix, is presented by Bien & Tibshirani (2011). Giudici & Green (1999)
have applied the transdimensional reversible jump Markov chain Monte Carlo (RJMCMC) algorithm of
Green (1995) to estimate the decomposable graphs that underlie the relationships in the data. This RJMCMC method was extended to estimate the structure in a case of multivariate lattice data by Dobra et al. (2011). Another transdimensional algorithm, this time based upon birthdeath processes, was described by Mohammadi & Wit (2015). Jones et al. (2005)have reviewed the traditional MCMC (Markov chain Monte Carlo) methods used for graph search for both decomposable and nondecomposable cases when highdimensional data is considered and have proposed an alternative method to find high probability regions of the graph space. An MCMC method to estimate the normalising constant of the distribution which has its structure characterised by a nondecomposable graph has been proposed by
AtayKayis & Massam (2005). Their idea was also used by Jones et al. (2005) when nondecomposable graphs were involved. For decomposable graphs, Carvalho & Scott (2009) have introduced a prior for the covariance matrix which helps to improve the accuracy in the graph search. In addition, they have also presented a graph prior which automatically guards against multiplicity.The estimation methods in GGMs have been extensively studied in the literature for both directed (Friedman et al. (2000), Spirtes et al. (2000), Geiger & Heckerman (2002), Shojaie & Michailidis (2010), Stingo et al. (2010), Yajima et al. (2015), Consonni et al. (2017)) and undirected graphs (Dobra et al. (2004), Meinshausen & Bühlmann (2006), Yuan & Lin (2007), Banerjee et al. (2008), Friedman et al. (2008), Carvalho & Scott (2009), Kundu et al. (2013), Stingo & Marchetti (2015)).
We are tackling the Gaussian graphical model problem from the Bayesian perspective. In this approach there are two source of randomness as discussed by Giudici & Green (1999). One is related to the multivariate distribution and the quantities that may parametrise it, the other has to do with the underlying graph , equivalent to describing the conditional independence structure of the model under consideration. As such two kinds of priors are necessary: one related to the model parameters, in our case, the other associated with the graph . In this paper, we will focus on assigning a lossbased prior on , through the methodology of Villa & Walker (2015).
The paper has the following structure. In Section 2 we introduce the notation, as well as present some of the graph priors used in the context of Gaussian graphical models. Section 3 shows our proposed graph prior together with the framework necessary to derive it. We outline the behaviour of our prior for simulated and real data examples in Section 4. Section 5 contains some final discussion points.
2 Graph priors for Gaussian graphical models
We mentioned that graphical models help when modelling complex data. As the name suggests for Gaussian graphical models, the data is assumed to be sampled from a multivariate Gaussian distribution. Let
be adimensional random vector which follows a multivariate Gaussian distribution, that is
where is a dimensional column vector of zero means and is the positivedefinite covariance matrix. Let be the matrix of observations, where , for , is a dimensional realisation from the multivariate Gaussian distribution. The link between the assumed sampling distribution and the graph is specified by completing a positive definite matrix with respect to an undirected graph (Roverato & Whittaker, 1998; Giudici & Green, 1999; AtayKayis & Massam, 2005). For an arbitrary positive definite matrix and an undirected graph , is the unique positive definite matrix completion of with respect to . This means that for the pairs of vertices which share an edge, the corresponding entries of are the same as . The entries for the missing edges are set to be in the concentration (precision) matrix, that is . Therefore, we have a link between the multivariate sampling distribution and the graph structure represented by the zeros of the concentration matrix . In the Gaussian graphical models framework, the dimension of the multivariate Gaussian distribution also represents the number of vertices in the undirected graph . As our sampling distribution is Gaussian, the concentration matrix has a clear interpretation. The entries of the concentration matrix encode the conditional independence structure of the distribution (Lauritzen, 1996). As such, if and only if the
element of the concentration matrix is 0, the random variables
and are conditionally independent given all other variables in the matrix (pairwise Markov property); or, equivalently, given their neighbours (local Markov property). The previous statement is based upon the idea that in a Gaussian graphical model the global, local and pairwise Markov properties are equivalent. For more details about these properties, we refer the reader to Lauritzen (1996).Following Lauritzen (1996), a graph is represented by the pair with a finite set of vertices and a subset of
of ordered pairs of distinct edges. Throughtout the paper we will consider
, where is a strictly positive integer. In the Gaussian Graphical models setting,represents the dimension of the multivariate Normal distribution. In this paper we consider undirected graphs with no loops and without multiple edges between pairs of distinct vertices.
Vertices connected by an edge are called neighbours or adjacent. A sequence of distinct vertices , where the pair , is called a path of length from vertex to vertex . A subset of is an separator when all the paths from to go through the respective subset. Subset separates from if is a separator . A graph where is called a complete graph. A subgraph represents a subset of such that the edge set is restricted to those edges that have both endpoints in the respective subset. We call a complete subgraph a clique. We refer to the decomposition of an undirected graph as a triple where for disjoint sets and such that separates from and is complete. Therefore, the graph is decomposed in the subgraphs and . A decomposable graph can be broken up into cliques and separators. For a nondecomposable graph there will be subgraphs which cannot be decomposed further and are not complete. An example of a nondecomposable graph is in Figure 1, while if we swap the arrows for lines in Figure 2, thus transforming the directed graph into an undirected one, we observe a decomposable graph.
Assuming decomposable, Giudici & Green (1999) discuss the following prior on :
where is the number of decomposable graphs on a specific vertex set . If we consider unrestricted graphs, the above prior is the uniform prior on the graph space and has the form:
(1) 
where is the number of vertices in the graph. A criticism in using a uniform prior is that it assigns more mass to medium size graphs compared to, for example, the empty graph or the full graph.
To address the problem, Jones et al. (2005)
set independent Bernoulli trials on the edge inclusions, such that the prior probability is
leading to an expected number of edges equal to . Thus, the prior on is:where is the number of edges in the graph . Clearly, a close to zero would encourage sparser graphs, while for , more mass will be put on complex graphs.
Carvalho & Scott (2009) recommend a fully Bayesian approach, where should be inferred from the data. As such, they assume that , leading to:
(2) 
By setting (equivalent to setting a uniform prior on ) in equation (2), they obtain the prior on as:
(3) 
A property of the prior in equation (3) is that it corrects for multiplicity. That is, as more noise vertices are added to the true graph, the number of false positives (edges which are erroneously included in the graph) remains constant.
A somewhat similar form of the prior in equation (3) was derived by Armstrong et al. (2009). Their prior, called the sized based prior, uses the parameter representing the number of decomposable graphs instead of the combinatorial coefficient in the formula from above. The value of is estimated using an MCMC scheme and a recurrence relationship with graphs that have up to 5 vertices.
3 A lossbased prior for Gaussian graphical models
In this section, we present a prior based on a methodology that involves loss functions, firstly introduced in
Villa & Walker (2015).To introduce their idea, let us consider Bayesian models:
where is the sampling distribution parametrised by and represents the prior on the model parameter (possibly vector of parameters) . Assuming the priors are proper, the model prior probability
is proportional to the expected minimum Kullback–Leibler divergence from
, where the expectation is considered with respect to . That is:(4) 
To illustrate, let us start by considering what is lost if model is removed from the set of all the possible models and it is the true model. This loss is quantified by the Kullback–Leibler divergence from to the nearest model. The loss is then linked to the model prior probability via the selfinformation loss function (Merhav & Feder, 1998). The prior in (4) is then obtained by equating the two above losses. The above methodology has been used in the framework of change point analysis (Hinoveanu et al., 2019)
and for variable selection in linear regression models
(Villa & Lee, 2015). We follow the insight provided by the latter by adding an additional loss component to account for model complexity. We designed the penalty term to penalize complex graphs, meaning graphs with a relatively large number of edges. For instance, this is in line with the approach suggested by Cowell et al. (2007). Therefore, for a given number of vertices with a maximum number of edges , our prior has the form:(5) 
with and . The component of the prior that penalizes for complexity takes into account the number of the edges of the graph, , as well as the number of graphs with the same number of edges, . The former can be interpreted as an absolute complexity of the graph, whilst the latter is weighing the complexity of the graph relatively to all the graphs with the same number of edges (i.e. relative complexity). Note that the last one is considered in the logscale for mitigating the exponential behaviour of the binomial coefficient for large . This makes the two terms approximately on the same order of magnitude. The two components are mixed by means of , while represents the constant up to which a loss function is defined. Noting that the Kullback–Leibler divergence in (5) is minimized for , as such is zero, the prior will have the form:
(6) 
The constant allows to set the prior in order to control the sparsity of the graph. In particular, for , the prior in equation (6) will decrease quickly to zero, assigning most of the mass to simple graphs. On the other hand, small values of result in a prior where is more evenly distributed over the whole space of graphs. In fact, if we set the prior in (6) will become , that is the uniform prior. An interesting feature of the prior in (6) is that it has, as particular cases, other wellknown priors, besides the uniform prior. By setting, and we recover the prior in equation (3) proposed by Carvalho & Scott (2009).
If we set we obtain
which reminds the prior of Villa & Lee (2015), introduced in the context of linear regression.
Let represent the set of symmetric positivedefinite matrices constrained by , which means there is an equivalence between the zeroes of the concentration matrix and the missing edges from graph . The function denotes the multivariate Gaussian sampling distribution with covariance matrix
. Then, the graph posterior probability is:
Although our prior is suitable for both decomposable and nondecomposable graphs, here we focus on the former class of graphs so that we can compare the performance of our prior to other priors available in the literature.
Regarding the marginal likelihood, we are using the hyperinverse Wishart prior of Carvalho & Scott (2009) as prior for the constrained covariance matrix . This prior arises as the implied fractional prior of the covariance matrix (O’Hagan, 1995) for the following noninformative prior, whose form was purposely selected to maintain conjugacy:
Here, and represent the clique and separator sets for graph , respectively. Furthermore, the hyperinverse Wishart
prior is a conjugate prior for the multivariate Gaussian distribution. As such, the marginal likelihood can be expressed in closed form as:
with
denoting the normalising constant of the hyperinverse Wishart distribution with degrees of freedom parameter
and scale matrix . For a decomposable graph, can be expressed as a ratio of products over the cliques and separators, that iswhere
represents the multivariate gamma function.
4 Simulated and Real Data Examples
In this section, we are showing the behaviour of the prior in equation (6) in both simulated and real data scenarios. We focus on decomposable graphs and inference is made by implementing the FINCS algorithm.
For the analyses, on simulated and real data, we compare four priors on . Namely, the Carvalho and Scott prior ( Prior), the uniform prior ( Prior) and the proposed prior with two different settings: in the first we have and ( Prior) and for the second we have and ( Prior). Thus:
The above choices of the two priors have been dictated by the following reasons. The Prior allows to highlight the choice of a prior that penalises for the absolute graph complexity without including any prior information on the rate of penalisation (controllable by setting ). The choice of the Prior is driven by the motivation of understanding how equal weights for the two types of penalty considered, i.e. absolute versus relative, interplay.
4.1 Simulated Data Example
The simulation study has been taken from Carvalho & Scott (2009). We start from a graph with 10 vertices and 20 edges, which is represented in Figure 3. We have then added 5 and 40 noise vertices for, respectively, the first and the second simulation. These noise vertices represent vertices unconnected to each other or with the 10 vertices graph. The data has been simulated from a zero mean multivariate normal distribution with the covariance matrix designed to represent the dependencies of the above graphs. In both cases the sample size was of observations. That is, we have sampled 50 realisations for a vertices graph and a vertices graph, each embedding the graph from Figure 3 as a subgraph, through the R package BDgraph of Mohammadi & Wit (2017). We have run FINCS for 5 million iterations and set a global move every 50 iterations; the resampling step was considered at every iteration. During the FINCS search, we have saved the best 1000 graphs. The estimated edge posterior inclusion probabilities were computed as
and reported in Table 4.1, for the case , and in Table 4.1, for the case .
Edge  Noise Vertices: 5 (=15)  
10cm Prior  5cm Prior ()  5cm Prior ()  10cm Prior  
(1,6)  0.167  0.234  0.216  0.158 
(1,7)  0.916  0.981  0.960  0.997 
(2,4)  0.079  0.173  0.126  0.184 
(3,4)  0.014  0.017  0.018  0.321 
(3,6)  0.961  0.994  0.987  0.999 
(3,7)  0.198  0.355  0.282  0.311 
(3,8)  0.997  1.000  0.999  1.000 
(3,9)  0.013  0.012  0.013  0.025 
(4,6)  0.023  0.025  0.027  0.366 
(4,8)  0.005  0.003  0.005  0.006 
(4,9)  0.493  0.877  0.721  0.984 
(5,6)  0.007  0.003  0.005  0.007 
(5,9)  0.698  0.958  0.878  0.994 
(6,7)  0.014  0.014  0.015  0.013 
(6,8)  0.005  0.009  0.007  0.018 
(6,9)  0.011  0.013  0.011  0.297 
(7,9)  0.213  0.153  0.179  0.097 
(7,10)  1.000  1.000  1.000  1.000 
(8,9)  0.006  0.007  0.007  0.015 
(9,10)  0.785  0.874  0.834  0.962 
FPs:  0  1  0  2 
Edge  Noise Vertices: 40 (=50)  
10cm Prior  5cm Prior ()  5cm Prior ()  10cm Prior  
(1,6)  1.000  1.000  1.000  1.000 
(1,7)  1.000  1.000  1.000  1.000 
(2,4)  0.454  0.996  0.753  1.000 
(3,4)  0.002  0.003  0.003  0.120 
(3,6)  0.000  0.000  0.000  0.000 
(3,7)  0.000  0.000  0.000  0.000 
(3,8)  0.999  1.000  1.000  1.000 
(3,9)  0.001  0.001  0.001  0.006 
(4,6)  0.000  0.000  0.000  0.000 
(4,8)  1.000  1.000  1.000  1.000 
(4,9)  0.089  0.001  0.016  0.002 
(5,6)  0.000  0.000  0.000  0.001 
(5,9)  1.000  1.000  1.000  1.000 
(6,7)  1.000  1.000  1.000  1.000 
(6,8)  0.000  0.000  0.000  0.001 
(6,9)  0.991  1.000  1.000  1.000 
(7,9)  0.992  1.000  1.000  1.000 
(7,10)  0.000  0.000  0.000  0.001 
(8,9)  0.912  1.000  0.985  1.000 
(9,10)  1.000  1.000  1.000  1.000 
FPs:  0  11  2  41 
In terms of false positive flags (FPs), we see an increase for the and priors when moving from 5 to 40 noise vertices; although the increment for the uniform prior is virtually onetoone. For the prior, that is when we mix the and the prior with equal weight, the increase in FPs is marginal. One way of deciding if an edge has to be included in the graph is to consider its posterior inclusion probability. A decision threshold is (Carvalho & Scott, 2009) is 0.5. With the above threshold with note a general agreement between the priors, with just two exceptions: edge in Table 4.1, and edge in Table 4.1. However, we have to note that the posterior inclusion probabilities for the two vertices, when the prior is used on , are very close to 0.5.
4.2 Real Data Examples
To illustrate the proposed prior, and to compare it with the others, we have selected three data sets, encompassing different sizes, both in terms of variables and in terms of number of observations. The results, obtained with the same settings for the FINCS algorithm implemented in Section 4.1, are presented below, Sections 4.2.1, 4.2.2 and 4.2.3. For comparison purposes, edges have been selected as part of the estimated graph if their posterior inclusion probability was at least 0.5.
4.2.1 The Multivariate Flow Cytometry Dataset
Sachs et al. (2005)
have made flow cytometry measurements for 11 phosphorylated proteins and phospholipids across a total number of 7466 observations. The 11 proteins considered have the following nomenclature: Raf, Mek, Plcg, PIP2, PIP3, Erk, Akt, PKA, PKC, P38, Jnk. The purpose of their study was to infer a Bayesian network to reveal possible connections between enzymes. We have centred the data and the key results are reported in Table
4.2.1 and Table 4.2.1.The more sparse graph was produced using the prior, and the included edges are listed in Table 4.2.1. The edges not included under the prior are reported in Table 4.2.1.
In the analysis the data has been centred. According to FINCS, the most sparse graph corresponds to the prior and the included edges can be seen in Table 4.2.1, which also display the estimated posterior inclusion probabilities under the other priors. In Table 4.2.1 , we can see the edges that were omitted for the prior, but included for the others. The most complex graph is selected under the prior, where 5 extra edges are added, while the and the priors include, respectively, 1 and 2 edges more than the prior. To note, edge , which is included by all the priors except the prior, has a posterior inclusion probability for the latter prior relatively close to 0.5, suggesting that it is likely to be the sole relevant difference among the priors. For the remaining edges in Table 4.2.1, a more conservative threshold (e.g. set at 0.7) would have excluded them from all the graphs. For the included edges (Table 4.2.1), there is strong agreement among the priors, as the posterior inclusion predictabilities are all quite close to one.
Index  Edge  prior  prior  prior  prior 

1  (1,2)  1.000  1.000  1.000  1.000 
2  (1,3)  1.000  1.000  1.000  1.000 
3  (1,6)  1.000  1.000  1.000  1.000 
4  (1,7)  1.000  1.000  1.000  1.000 
5  (1,11)  0.999  0.999  0.999  0.999 
6  (2,3)  1.000  1.000  1.000  1.000 
7  (2,6)  1.000  1.000  1.000  1.000 
8  (2,7)  1.000  1.000  1.000  1.000 
9  (2,8)  0.999  0.997  0.998  0.999 
10  (2,10)  0.892  0.932  0.907  0.904 
11  (2,11)  0.999  1.000  0.999  0.999 
12  (3,4)  1.000  1.000  1.000  1.000 
13  (3,5)  1.000  1.000  1.000  1.000 
14  (3,6)  1.000  1.000  1.000  1.000 
15  (3,7)  1.000  1.000  1.000  1.000 
16  (3,8)  1.000  1.000  1.000  1.000 
17  (3,9)  0.978  0.910  0.952  0.957 
18  (3,10)  0.999  0.983  0.996  0.997 
19  (3,11)  1.000  1.000  1.000  1.000 
20  (4,5)  1.000  1.000  1.000  1.000 
21  (5,7)  1.000  1.000  1.000  1.000 
22  (5,11)  0.947  0.938  0.924  0.923 
23  (6,7)  1.000  1.000  1.000  1.000 
24  (6,8)  1.000  1.000  1.000  1.000 
25  (6,11)  1.000  1.000  1.000  1.000 
26  (7,8)  1.000  1.000  1.000  1.000 
27  (7,9)  1.000  1.000  1.000  1.000 
28  (7,10)  1.000  1.000  1.000  1.000 
29  (7,11)  1.000  1.000  1.000  1.000 
30  (8,9)  1.000  1.000  1.000  1.000 
31  (8,10)  1.000  1.000  1.000  1.000 
32  (8,11)  1.000  1.000  1.000  1.000 
33  (9,10)  1.000  1.000  1.000  1.000 
34  (9,11)  1.000  1.000  1.000  1.000 
35  (10,11)  1.000  0.999  1.000  1.000 
Index  Edge  prior  prior  prior  prior 

1  (1,5)  0.550  0.043  0.182  0.216 
2  (1,8)  0.832  0.436  0.644  0.677 
3  (2,5)  0.561  0.046  0.190  0.224 
4  (2,9)  0.656  0.322  0.480  0.507 
5  (4,11)  0.528  0.197  0.338  0.363 
4.2.2 The PTSD Symptoms for Earthquake Survivors in Wenchuan, China Dataset
This dataset (McNally et al., 2015) represents the measurement of 17 symptoms associated with PTSD (Posttraumatic stress disorder) reported by 362 survivors of an earthquake from the Wenchuan county in the Sichuan province, China. Each of the participants indicated through a ordinal scale from 1 to 5 how affected they were by every single one of the 17 PTSD symptoms, where 1 signifies not being bothered by the symptom at hand, whereas 5 corresponds to an extreme response to the same symptom. All participants have lost at least one child in the respective earthquake. The data is available with the R package APR (Mair, 2015). Amongst those 362 answers, in 18 cases, there was missing information associated with one or several symptoms. These cases were discarded, leaving a final sample of 344 participants, and the data was centred.
The sparser graph is identified under the prior and it contains 44 edges. With exception of edge , the remaining 43 edges were also included in the other three priors. Table 4.2.2 reports the 8 edges not included in all the four priors.
Index  Edge  prior  prior  prior  prior 

1  (1,14)  0.608  0.492  0.413  0.763 
2  (1,17)  1.000  1.000  0.456  1.000 
3  (2,4)  0.513  0.512  0.385  0.463 
4  (3,17)  0.528  0.531  0.246  0.634 
5  (4,17)  0.994  0.969  0.442  0.998 
6  (7,17)  0.908  0.895  0.414  0.999 
7  (9,11)  0.495  0.405  0.431  0.663 
8  (13,16)  0.027  0.019  0.562  0.045 
4.2.3 The Breast Cancer Dataset
Hess et al. (2006) have collected gene expression data for 133 patients which had breast cancer. This dataset was also analysed by Ambroise et al. (2009) and made available through the R package SIMONE (Statistical Inference for MOdular NEtworks) developed by one of the authors. There are 26 genes considered in the study. The dataset is split in two groups, one pertaining to the pathological complete response (pCR) to the chemotherapy treatment started after surgery, whereas the other corresponds to the disease still being present in the patients (notpCR). First, we have looked at the notpCR cases which was recorded for 99 patients. The remaining 34 patients had a positive response to the treatment (the pCR case). The data has been centred.
For both groups the most sparse graph identified corresponds to the prior, closely followed by the prior. In the nonpCR case, the graph corresponding to the prior had 25 edges, amongst which 22 edges have been identified in all other priors. In Table 4.2.3, we see the edges that were omitted under some priors, but were included under others for the nonpCR group. Table 4.2.3 shows the inclusion and omission of several edges under our four priors when the pCR group is considered. For the pCR case, the graph identified based on the posterior inclusion probabilities under the prior has 21 edges, amongst which 17 edges are included in the graphs inferred under all other three considered priors.
Index  Edge  prior  prior  prior  prior 

1  (1,14)  0.098  0.100  0.138  0.824 
2  (1,15)  0.759  0.819  0.742  0.143 
3  (2,8)  0.056  0.314  0.127  0.870 
4  (4,6)  0.109  0.129  0.099  0.622 
5  (4,7)  0.461  0.343  0.475  0.970 
6  (4,8)  0.176  0.889  0.165  0.313 
7  (4,11)  0.019  0.879  0.012  0.000 
8  (4,13)  0.003  0.643  0.003  0.000 
9  (4,15)  0.320  0.200  0.344  0.850 
10  (4,17)  0.000  0.887  0.001  0.005 
11  (4,19)  0.002  0.661  0.003  0.000 
12  (6,9)  0.160  0.568  0.162  0.998 
13  (6,15)  0.365  0.705  0.372  1.000 
14  (6,26)  0.003  0.480  0.004  0.999 
15  (7,8)  0.000  0.001  0.000  0.965 
16  (7,11)  0.001  0.801  0.001  0.000 
17  (7,15)  0.000  0.000  0.000  0.912 
18  (7,16)  0.024  0.092  0.046  0.521 
19  (7,17)  0.000  1.000  0.001  1.000 
20  (7,23)  0.002  0.005  0.004  0.956 
21  (8,12)  0.000  0.000  0.000  0.606 
22  (8,23)  0.058  0.005  0.024  0.865 
23  (9,15)  0.878  0.479  0.894  0.034 
24  (9,26)  0.213  0.770  0.296  1.000 
25  (11,13)  0.145  0.041  0.201  0.551 
26  (11,14)  0.560  0.041  0.636  0.931 
27  (11,17)  0.364  0.968  0.408  0.472 
28  (11,19)  0.000  0.849  0.000  0.000 
29  (12,17)  0.000  0.741  0.000  0.951 
30  (12,24)  0.002  0.872  0.001  0.985 
31  (13,14)  0.291  0.237  0.604  0.979 
32  (14,20)  0.003  0.013  0.009  0.752 
33  (17,19)  0.003  0.998  0.001  0.006 
34  (17,23)  0.018  0.065  0.007  0.583 
35  (17,25)  0.036  0.980  0.075  0.999 
Index  Edge  prior  prior  prior  prior 

1  (2,9)  0.001  0.008  0.004  0.538 
2  (2,10)  0.001  0.001  0.001  0.985 
3  (5,16)  0.190  0.522  0.333  0.994 
4  (5,17)  0.309  0.750  0.510  0.996 
5  (6,16)  0.011  0.016  0.011  0.759 
6  (6,17)  0.111  0.775  0.464  0.983 
7  (8,10)  0.000  0.000  0.000  1.000 
8  (8,15)  0.621  0.028  0.561  1.000 
9  (8,16)  0.001  0.995  0.017  1.000 
10  (8,20)  0.001  0.011  0.001  0.720 
11  (8,25)  0.953  0.969  0.972  0.251 
12  (8,26)  0.241  0.998  0.389  1.000 
13  (9,26)  0.004  0.021  0.010  0.601 
14  (10,15)  0.000  0.000  0.000  0.999 
15  (10,16)  0.001  0.641  0.001  1.000 
16  (10,18)  0.000  0.010  0.000  0.996 
17  (10,21)  0.009  0.003  0.007  0.987 
18  (10,26)  0.001  0.005  0.002  1.000 
19  (11,16)  0.000  0.984  0.002  0.010 
20  (11,18)  0.056  0.980  0.045  0.001 
21  (14,20)  0.652  0.008  0.660  0.972 
22  (15,16)  0.000  0.004  0.000  0.999 
23  (15,26)  0.991  0.993  0.994  0.013 
24  (16,17)  0.000  0.055  0.000  0.963 
25  (16,25)  0.012  0.037  0.007  0.785 
26  (16,26)  0.000  0.995  0.007  1.000 
27  (17,22)  0.000  0.000  0.000  0.741 
28  (17,25)  0.000  0.000  0.000  0.751 
29  (18,26)  0.362  0.021  0.508  0.062 
30  (20,24)  0.001  0.000  0.003  0.972 
5 Conclusion
In the present work, we have illustrated a novel prior for the space of graphs in the context of Graphical Gaussian Models. The prior is derived using a loss with two components: one relative to the informational content of the graph and one related to its complexity. The results were obtained by implementing the FINCS algorithm and comparison were made with to alternative weakly informative priors: the uniform prior and the prior prosed in Carvalho & Scott (2009). In addition, we have shown how the latter prior and the proposed can be interpreted as special case of a more general prior. We have found that the proposed prior and the Carvalho & Scott (2009) prior appear to perform similarly, when real data is analysed, with a tendency to proved sparser graphs under the proposed prior. In the case of simulated data, the uniform prior is outperformed by the other priors, in particular when noise is included in the graph.
Appendix  FINCS algorithm
References
 (1)
 Ambroise et al. (2009) Ambroise, C., Chiquet, J. & Matias, C. (2009), ‘Inferring sparse Gaussian graphical models with latent structure’, Electronic Journal of Statistics 3, 205–238.
 Armstrong et al. (2009) Armstrong, H., Carter, C. K., Wong, K. F. K. & Kohn, R. (2009), ‘Bayesian Covariance Matrix Estimation using a Mixture of Decomposable Graphical Models’, Statistics and Computing 19(3), 303–316.
 Atay-Kayis & Massam (2005) Atay-Kayis, A. & Massam, H. (2005), ‘A Monte Carlo Method for Computing the Marginal Likelihood in Non-decomposable Gaussian Graphical Models’, Biometrika 92(2), 317–335.
 Banerjee et al. (2008) Banerjee, O., El Ghaoui, L. & d’Aspremont, A. (2008), ‘Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data’, The Journal of Machine Learning Research 9, 485–516.
 Bell & King (2007) Bell, P. & King, S. (2007), Sparse Gaussian graphical models for speech recognition, in ‘INTERSPEECH 2007, 8th Annual Conference of the International Speech Communication Association, Antwerp, Belgium, August 27–31, 2007’, pp. 2113–2116.
 Bien & Tibshirani (2011) Bien, J. & Tibshirani, R. J. (2011), ‘Sparse estimation of a covariance matrix’, Biometrika 98(4), 807–820.
 Bilmes (2004) Bilmes, J. A. (2004), Graphical Models and Automatic Speech Recognition, in M. Johnson, S. P. Khudanpur, M. Ostendorf & R. Rosenfeld, eds, ‘Mathematical Foundations of Speech and Language Processing’, Springer New York, New York, NY, pp. 191–245.
 Carvalho & Scott (2009) Carvalho, C. M. & Scott, J. G. (2009), ‘Objective Bayesian Model Selection in Gaussian Graphical Models’, Biometrika 96(3), 497.
 Consonni et al. (2017) Consonni, G., La Rocca, L. & Peluso, S. (2017), ‘Objective Bayes Covariate-Adjusted Sparse Graphical Model Selection’, Scandinavian Journal of Statistics 44(3), 741–764.
 Cowell et al. (2007) Cowell, R. G., Dawid, A. P., Lauritzen, S. L. & Spiegelhalter, D. J. (2007), Probabilistic Networks and Expert Systems: Exact Computational Methods for Bayesian Networks, 1st edn, Springer Publishing Company, Incorporated.
 Dobra et al. (2004) Dobra, A., Hans, C., Jones, B., Nevins, J. R., Yao, G. & West, M. (2004), ‘Sparse graphical models for exploring gene expression data’, Journal of Multivariate Analysis 90(1), 196–212. Special Issue on Multivariate Methods in Genomic Data Analysis.
 Dobra et al. (2011) Dobra, A., Lenkoski, A. & Rodriguez, A. (2011), ‘Bayesian Inference for General Gaussian Graphical Models With Application to Multivariate Lattice Data’, Journal of the American Statistical Association 106(496), 1418–1433.
 Friedman et al. (2008) Friedman, J., Hastie, T. & Tibshirani, R. (2008), ‘Sparse inverse covariance estimation with the graphical lasso’, Biostatistics 9(3), 432–441.
 Friedman et al. (2000) Friedman, N., Linial, M., Nachman, I. & Pe’er, D. (2000), ‘Using Bayesian Networks to Analyze Expression Data’, Journal of Computational Biology 7(3–4), 601–620. PMID: 11108481.
 Geiger & Heckerman (2002) Geiger, D. & Heckerman, D. (2002), ‘Parameter priors for directed acyclic graphical models and the characterization of several probability distributions’, The Annals of Statistics 30(5), 1412–1440.
 Giudici & Green (1999) Giudici, P. & Green, P. (1999), ‘Decomposable Graphical Gaussian Model Determination’, Biometrika 86(4), 785.
 Giudici & Spelta (2016) Giudici, P. & Spelta, A. (2016), ‘Graphical Network Models for International Financial Flows’, Journal of Business & Economic Statistics 34(1), 128–138.
 Green, P. J. (1995), ‘Reversible jump Markov chain Monte Carlo computation and Bayesian model determination’, Biometrika 82(4), 711–732.
 Hess et al. (2006) Hess, K. R., Anderson, K., Symmans, W. F., Valero, V., Ibrahim, N., Mejia, J. A., Booser, D., Theriault, R. L., Buzdar, A. U., Dempsey, P. J., Rouzier, R., Sneige, N., Ross, J. S., Vidaurre, T., Gómez, H. L., Hortobagyi, G. N. & Pusztai, L. (2006), ‘Pharmacogenomic Predictor of Sensitivity to Preoperative Chemotherapy With Paclitaxel and Fluorouracil, Doxorubicin, and Cyclophosphamide in Breast Cancer’, Journal of Clinical Oncology 24(26), 4236–4244.
 Hinoveanu et al. (2019) Hinoveanu, L. C., Leisen, F. & Villa, C. (2019), ‘Bayesian loss-based approach to change point analysis’, Computational Statistics & Data Analysis 129, 61–78. http://www.sciencedirect.com/science/article/pii/S0167947318301919
 Jones et al. (2005) Jones, B., Carvalho, C., Dobra, A., Hans, C., Carter, C. & West, M. (2005), ‘Experiments in Stochastic Computation for High-Dimensional Graphical Models’, Statistical Science 20(4), 388–400.
 Kundu et al. (2013) Kundu, S., Baladandayuthapani, V. & Mallick, B. K. (2013), ‘Bayes Regularized Graphical Model Estimation in High Dimensions’, ArXiv e-prints. Provided by the SAO/NASA Astrophysics Data System.
 Lauritzen (1996) Lauritzen, S. L. (1996), Graphical Models, Claredon Press, Oxford.
 Mair (2015) Mair, P. (2015), APR: Applied Psychometrics With R. R package version 0.06/r205.
 McNally et al. (2015) McNally, R. J., Robinaugh, D. J., Wu, G. W. Y., Wang, L., Deserno, M. K. & Borsboom, D. (2015), ‘Mental Disorders as Causal Systems: A Network Approach to Posttraumatic Stress Disorder’, Clinical Psychological Science 3(6), 836–849.
 Meinshausen & Bühlmann (2006) Meinshausen, N. & Bühlmann, P. (2006), ‘High-Dimensional Graphs and Variable Selection with the Lasso’, The Annals of Statistics 34(3), 1436–1462.
 Merhav & Feder (1998) Merhav, N. & Feder, M. (1998), ‘Universal prediction’, IEEE Transactions on Information Theory 44(6), 2124–2147.
 Mohammadi & Wit (2015) Mohammadi, A. & Wit, E. C. (2015), ‘Bayesian Structure Learning in Sparse Gaussian Graphical Models’, Bayesian Analysis 10(1), 109–138.
 Mohammadi & Wit (2017) Mohammadi, A. & Wit, E. C. (2017), BDgraph: Bayesian Structure Learning in Graphical Models using Birth-Death MCMC. R package version 2.36.
 O’Hagan (1995) O’Hagan, A. (1995), ‘Fractional Bayes factors for model comparison’, Journal of the Royal Statistical Society. Series B (Methodological) 57(1), 99–138.
 Roverato (2017) Roverato, A. (2017), Graphical Models for Categorical Data, SemStat Elements, Cambridge University Press.
 Roverato & Whittaker (1998) Roverato, A. & Whittaker, J. (1998), ‘The Isserlis matrix and its application to nondecomposable graphical Gaussian models’, Biometrika 85(3), 711–725.
 Sachs et al. (2005) Sachs, K., Perez, O., Pe’er, D., Lauffenburger, D. A. & Nolan, G. P. (2005), ‘Causal Protein-Signaling Networks Derived from Multiparameter Single-Cell Data’, Science 308(5721), 523–529.
 Scott & Carvalho (2008) Scott, J. G. & Carvalho, C. M. (2008), ‘Feature-inclusion stochastic search for Gaussian graphical models’, Journal of Computational and Graphical Statistics 17(4), 790–808.
 Shojaie & Michailidis (2010) Shojaie, A. & Michailidis, G. (2010), Penalized Principal Component Regression on Graphs for Analysis of Subnetworks, in J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel & A. Culotta, eds, ‘Advances in Neural Information Processing Systems 23’, Curran Associates, Inc., pp. 2155–2163.
 Spirtes et al. (2000) Spirtes, P., Glymour, C. & Scheines, R. (2000), Causation, Prediction, and Search, 2nd edn, MIT press.
 Stingo et al. (2010) Stingo, F. C., Chen, Y. A., Vannucci, M., Barrier, M. & Mirkes, P. E. (2010), ‘A Bayesian graphical modeling approach to microRNA regulatory network inference’, The Annals of Applied Statistics 4(4), 2024–2048.
 Stingo & Marchetti (2015) Stingo, F. & Marchetti, G. M. (2015), ‘Efficient local updates for undirected graphical models’, Statistics and Computing 25(1), 159–171.
 Tibshirani (1996) Tibshirani, R. (1996), ‘Regression shrinkage and selection via the lasso’, Journal of the Royal Statistical Society. Series B (Methodological) 58(1), 267–288.
 Villa & Lee (2015) Villa, C. & Lee, J. E. (2015), ‘Model Prior Distribution for Variable Selection in Linear Regression Models’, ArXiv e-prints. Provided by the SAO/NASA Astrophysics Data System.
 Villa & Walker (2015) Villa, C. & Walker, S. (2015), ‘An Objective Bayesian Criterion to Determine Model Prior Probabilities’, Scandinavian Journal of Statistics 42(4), 947–966.
 Wang et al. (2016) Wang, T., Ren, Z., Ding, Y., Fang, Z., Sun, Z., MacDonald, M. L., Sweet, R. A., Wang, J. & Chen, W. (2016), ‘FastGGM: An Efficient Algorithm for the Inference of Gaussian Graphical Model in Biological Networks’, PLOS Computational Biology 12(2), 1–16.
 Williams (2018) Williams, D. R. (2018), ‘Bayesian inference for Gaussian graphical models: Structure learning, explanation, and prediction’, PsyArXiv .
 Yajima et al. (2015) Yajima, M., Telesca, D., Ji, Y. & Müller, P. (2015), ‘Detecting differential patterns of interaction in molecular pathways’, Biostatistics 16(2), 240–251.
 Yuan & Lin (2007) Yuan, M. & Lin, Y. (2007), ‘Model Selection and Estimation in the Gaussian Graphical Model’, Biometrika 94(1), 19.