I. Introduction
This paper addresses the following question: how can one construct, from a set of observations, the model that most meaningfully describes the underlying system? This is a general question at the core of scientific research. Indeed, the so-called scientific method has been devised around a combination of observation, model and prediction in a circular way, where hypotheses are formulated and tested with further observations, iteratively refining or changing the models to obtain better predictions; all within the principle of parsimony, where a simpler model with fewer parameters and fewer assumptions should be preferred to a more complex one.
In the context of the present paper the ‘model’ is the multivariate probability distribution that best describes the set of observations. The problem of finding such a distribution becomes particularly challenging when the number of variables, $p$, is large and the number of observations, $q$, is small. Indeed, in such a multivariate problem the model must take into account at least of the order of $p^2$ interrelations between the variables, and therefore the number of model parameters scales at least quadratically with the number of variables. A parsimonious approach requires discovering the model that best reproduces the statistical properties of the observations while keeping the number of parameters as small as possible. Using a maximum entropy approach, up to the second order in the moments of the distribution, the model becomes the multivariate normal distribution. In the multivariate normal case there is a simple relationship between the sparsity pattern of the inverse of the covariance matrix (the precision matrix, henceforth denoted by $J$) and the underlying partial correlation structure (referred to as ‘graphical model’ in the literature Lauritzen (1996)): two nodes $i$ and $j$ are linked in the graphical model if and only if the corresponding precision matrix element $J_{i,j}$ is different from zero. Therefore the problem of estimating a sparse precision matrix is equivalent to the problem of learning a sparse multivariate normal graphical model (known in the literature as a Gaussian Markov Random Field (GMRF) Rue and Held (2005)). Once the sparse precision matrix has been estimated, a number of efficient tools – mostly based on research in sparse numerical linear algebra – can be used to sample from the distribution, calculate conditional probabilities, calculate conditional statistics and forecast Davis (2006); Rue and Held (2005). GMRFs are of great importance in many applications spanning computer vision
Wang et al. (2013), sparse sensing Montanari (2012), finance Tumminello et al. (2005); Aste et al. (2010); Pozzi et al. (2013); Musmeci et al. (2015a, b); Denev (2015), gene expression Wu et al. (2003); Schäfer and Strimmer (2005); Lezon et al. (2006); biological neural networks
Schneidman et al. (2006), climate networks Zerenner et al. (2014); Ebert-Uphoff and Deng (2012), and geostatistics and spatial statistics Haran (2011); Hristopulos and Elogne (2009); Hristopulos (2003). Almost universally, applications require modelling a large number of variables with a relatively small number of observations, and therefore the issue of the statistical significance of the model parameters is very important. The problem of finding meaningful and parsimonious models, sometimes referred to as sparse structure learning Zhou (2011), has been tackled by using a number of different approaches. Let us hereafter briefly account for some of the most relevant in the present context.
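To make the correspondence between precision matrix and graphical model concrete, here is a minimal sketch (ours, not from the paper) that computes the partial correlations implied by a covariance matrix; the function name and the toy covariance are illustrative assumptions. The key identity is $\rho_{ij \mid \text{rest}} = -J_{i,j} / \sqrt{J_{i,i} J_{j,j}}$.

```python
import numpy as np

def partial_correlations(Sigma):
    """Partial-correlation matrix implied by a covariance matrix Sigma."""
    J = np.linalg.inv(Sigma)              # precision matrix
    d = np.sqrt(np.diag(J))
    P = -J / np.outer(d, d)               # rho_ij|rest = -J_ij / sqrt(J_ii J_jj)
    np.fill_diagonal(P, 1.0)
    return P

# A 3-variable example where X0 and X2 interact only through X1
# (an AR(1)-like covariance Sigma_ij = 0.5**|i-j|):
Sigma = np.array([[1.0, 0.5, 0.25],
                  [0.5, 1.0, 0.5],
                  [0.25, 0.5, 1.0]])
P = partial_correlations(Sigma)
# P[0, 2] is numerically zero: X0 and X2 are conditionally independent,
# so the edge (0, 2) is absent from the graphical model.
```

The zero pattern of `P` (equivalently of `J`) is exactly the edge set of the Gaussian graphical model discussed above.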
Constraint-based approaches recover the structure of the network by testing the local Markov property. Usually the algorithm starts from a complete model and adopts a backward selection approach, testing the independence of nodes conditioned on subsets of the remaining nodes (algorithms SGS and PC Zhou (2011)) and removing edges associated with nodes that are conditionally independent; the algorithm stops when some criteria are met – e.g. every node has fewer than a given number of neighbours. Conversely, forward selection algorithms start from a sparse model and add edges associated with nodes that are discovered to be conditionally dependent. A hybrid model is the GS algorithm, where candidate edges are added to the model in a forward selection phase (the “grow” step) and subsequently reduced using a backward selection step (the “shrinkage” step) Zhou (2011). However, the complexity of checking a large number of conditional independence statements makes these methods unsuitable for moderately large graphs. Furthermore, aside from the complexity of measuring conditional independence, these methods do not generally optimize a global function, such as the likelihood or the Akaike Information Criterion Akaike (1974, 1998); rather, they try to exhaustively test all the (local) conditional independence properties of a set of data and are therefore difficult to use in a probabilistic framework.
Score-based approaches learn the inference structure by trying to optimize some global function: the likelihood, the Kullback-Leibler divergence Kullback and Leibler (1951), the Bayesian Information Criterion (BIC) Schwarz et al. (1978), the Minimum Description Length Rissanen (1978) or the likelihood ratio test statistics Petitjean et al. (2013). In these approaches, the main issue is that the optimization is generally computationally demanding and some sort of greedy approach is required. Statistical physics methodologies for the discovery of inference networks by maximization of entropy or minimization of cost functions have been used in biologically motivated studies to extract gene regulatory networks and signalling networks (see for instance Molinelli13; Diambra11). In the field of decomposable models there are a number of methods that efficiently explore the graphical structure (directed, in the case of Bayesian models, or undirected in the case of log-linear or multivariate Gaussian models) by using advanced graph structures such as the junction tree or the clique graph Giudici et al. (1999); Deshpande et al. (2001); Petitjean et al. (2013), with the goal of producing sparse models (so-called “thin junction trees” Bach and Jordan (2001); Srebro (2001); Chechetka and Guestrin (2008)).
Other approaches d’Aspremont et al. (2008); Banerjee et al. (2008a); Banerjee et al. (2006) treat the problem as a constrained optimization problem to recover the sparse covariance matrix. Within this line, regression-based approaches generally try to minimize some loss function which enforces parsimony and sparsity by using penalization to constrain the number and size of the regression parameters. Specifically, ridge regression uses an $\ell_2$-norm penalty; the lasso method Tibshirani (1996) instead uses an $\ell_1$-norm penalty, and the elastic-net approach uses a convex combination of $\ell_1$ and $\ell_2$ penalties on the regression coefficients Zou and Hastie (2005). These approaches are among the best performing regularization methodologies presently available. The $\ell_1$-norm penalty term favours solutions with zero-valued parameters, leading to models with sparse inverse covariances. Sparsity is controlled by the regularization parameters; the larger their value, the sparser the solution becomes. This approach has become extremely popular and, around the original idea, a large body of literature has been published with several novel algorithmic techniques that are continuously advancing this method Tibshirani (1996); Meinshausen and Bühlmann (2006); Banerjee et al. (2008b); Ravikumar et al. (2011); Hsieh et al. (2011); Oztoprak et al. (2012); among these is the popular implementation Glasso (Graphical lasso) Friedman et al. (2008), which uses the lasso to compute sparse graphical models. However, Glasso methods are computationally intensive and, although the estimates are sparse, the interaction structure of the non-zero parameters tends to be noisy and not significantly related to the true underlying interactions between the variables. Meaningful structures associated with the relevant network of interactions in complex systems are instead retrieved by Information Filtering Networks, which were first introduced by Mantegna Mantegna (1999) and some of the authors of the present paper Tumminello et al. (2005); Aste and Matteo (2006) for the study of the structure of financial markets and biological systems Song et al. (2012).
There is now a large body of literature demonstrating that Information Filtering Networks such as the Maximum Spanning Tree (MST) or the Planar Maximally Filtered Graph (PMFG) constructed from correlation matrices retrieve meaningful structures, extracting the relevant interactions from multivariate datasets in complex systems Aste et al. (2010); Song et al. (2012); Pozzi et al. (2013). Recently, a new family of Information Filtering Networks, the Triangulated Maximally Filtered Graph (TMFG) Massara et al. (2015), was introduced. These are planar graphs, similar to the PMFG, but with the advantage that they can be generated in a computationally efficient way and, more importantly for this paper, they are decomposable graphs (see the example in Fig.1). A decomposable graph has the property that every cycle of length greater than three has a chord, i.e. an edge that connects two non-consecutive vertices of the cycle, splitting it into smaller cycles. Decomposable graphs, also called chordal or triangulated, are clique forests, made of k-cliques (complete subgraphs of k vertices) connected by separators. Separators are also cliques, of smaller sizes, with the property that the graph becomes divided into two or more disconnected components when the vertices of the separator are removed. For example, in the schematic representation of the TMFG reported in Fig.1, the cliques are the two tetrahedra whereas the separator is the triangle they share.
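The clique/separator structure just described can be sketched in a few lines; this toy example (our vertex labels, not the paper's) encodes two tetrahedra sharing a triangle and checks the defining property of a separator: removing its vertices disconnects the graph.

```python
from itertools import combinations

def components_after_removal(edges, removed):
    """Number of connected components left after deleting the vertices in `removed`."""
    nodes = {v for e in edges for v in e} - removed
    adj = {v: set() for v in nodes}
    for a, b in edges:
        if a in nodes and b in nodes:
            adj[a].add(b)
            adj[b].add(a)
    seen, comps = set(), 0
    for v in nodes:                      # depth-first search over surviving vertices
        if v not in seen:
            comps += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u] - seen)
    return comps

# Two 4-cliques C1, C2 sharing the 3-clique separator S (as in Fig. 1):
cliques = [{0, 1, 2, 3}, {1, 2, 3, 4}]
separator = {1, 2, 3}
edges = {tuple(sorted(e)) for c in cliques for e in combinations(c, 2)}
# Removing the separator splits the graph into two components, {0} and {4}.
```

The same bookkeeping (lists of cliques and separators with their degrees) is all that the local-inversion formula of Section II.3 needs.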
The novelty of the method presented in this paper is the combination of decomposable Information Filtering Networks Tumminello et al. (2005); Aste et al. (2010); Pozzi et al. (2013); Musmeci et al. (2015a, b) with Gaussian Markov Random Fields Rue and Held (2005); Lauritzen (1996) to produce parsimonious models associated with a meaningful structure of dependency between the variables. The strength of this methodology is that the global sparse inverse covariance matrix is produced from a simple sum of local inversions. This makes the method computationally very efficient and statistically robust. Given the Local/Global nature of its construction, in the following we shall refer to this method as LoGo.
In this paper, we demonstrate that the structure provided by Information Filtering Networks is also extremely effective to generate high-likelihood sparse probabilistic models. In the linear case, the LoGo sparse inverse covariance has only $O(p)$ parameters but, despite its sparsity, the associated multivariate normal distribution can still retrieve high likelihood values, yielding, in practical applications such as financial data, comparable or better results than state-of-the-art Glasso penalized inversions.
The methodology introduced in this paper contributes to the literature on graphical modelling, sparse inverse covariances and machine learning. But it is of even greater relevance to the physical sciences, particularly for what concerns complex systems research and network theory. Indeed, the physical sciences are increasingly engaged in the study of complex systems, where the challenge is to elaborate models able to make predictions based on observational data. With LoGo, for the first time we combine a successful tool to describe the structure of complex systems (namely the Information Filtering Networks) with a parsimonious probabilistic model that can be used for quantitative predictions. Such a combination is of relevance to any context where parsimonious statistical modelling is applicable.
The rest of the paper is organized as follows. In Section II we describe our methodology, providing algorithms for sparse covariance inversion for two graph topologies: the MST and the TMFG. We then show in Section III that our method yields comparable or better results in maximum likelihood compared to lasso-type and ridge regression estimates of the inverse covariances from financial time series. Subsequently we discuss how our approach can be used for time series prediction, financial stress testing and risk allocation. In Section IV we conclude with possible extensions for future work and final remarks.
II. Parsimonious modelling with Information Filtering Networks
II.1 Factorization of the multivariate probability distribution function
Let us start by demonstrating how a decomposable Information Filtering Network can be associated with a convenient factorization of the multivariate probability distribution. Let us consider, in general, two sets $A$ and $B$ of variables with non-empty intersection $A \cap B$. Let us also assume that the variables are mutually dependent within their own ensemble $A$ or $B$, but when one variable belongs to set $A \setminus B$ and the other variable belongs to set $B \setminus A$, then they are independent conditioned on $A \cap B$. We can now use the Bayes formula, $f(\mathbf{X}_{A \cup B}) = f(\mathbf{X}_{A \setminus B} \mid \mathbf{X}_{B})\, f(\mathbf{X}_{B})$, where $f(\mathbf{X}_{A \cup B})$ is the joint probability distribution function of all variables in $A$ and $B$, $f(\mathbf{X}_{A \setminus B} \mid \mathbf{X}_{B})$ is the conditional probability distribution function for the variables in $A$ minus the subset in common with $B$, conditioned on all variables in $B$, and $f(\mathbf{X}_{B})$ is the marginal probability distribution function of all variables in $B$ (see Fig.2). From the Bayes formula we also have the identity $f(\mathbf{X}_{A \setminus B} \mid \mathbf{X}_{A \cap B}) = f(\mathbf{X}_{A}) / f(\mathbf{X}_{A \cap B})$ which, combined with the previous expression through the conditional independence assumption ($f(\mathbf{X}_{A \setminus B} \mid \mathbf{X}_{B}) = f(\mathbf{X}_{A \setminus B} \mid \mathbf{X}_{A \cap B})$), gives the following factorization for the joint probability distribution function of all variables in $A$ and $B$ Lauritzen (1996):

$f(\mathbf{X}_{A \cup B}) = \dfrac{f(\mathbf{X}_{A})\, f(\mathbf{X}_{B})}{f(\mathbf{X}_{A \cap B})}$   (1)
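Eq.1 can be verified numerically for a multivariate normal. Below is a small sketch (our toy example, not from the paper) with $A = \{0,1,2\}$ and $B = \{1,2,3\}$, so $A \cap B = \{1,2\}$; the precision matrix has a zero $(0,3)$ entry, which is exactly the conditional-independence assumption behind the factorization.

```python
import numpy as np

def gauss_pdf(x, Sigma):
    """Zero-mean multivariate normal density."""
    p = len(x)
    J = np.linalg.inv(Sigma)
    norm = 1.0 / np.sqrt((2.0 * np.pi) ** p * np.linalg.det(Sigma))
    return norm * np.exp(-0.5 * x @ J @ x)

# Precision matrix with J[0,3] = 0: variables 0 and 3 are conditionally
# independent given the intersection {1, 2}.
J = np.array([[ 4.0,  0.4, -0.3,  0.0],
              [ 0.4,  4.0,  0.2, -0.5],
              [-0.3,  0.2,  4.0,  0.1],
              [ 0.0, -0.5,  0.1,  4.0]])
Sigma = np.linalg.inv(J)

A, B, AB = [0, 1, 2], [1, 2, 3], [1, 2]
x = np.array([0.3, -0.7, 1.1, 0.2])
lhs = gauss_pdf(x, Sigma)
rhs = (gauss_pdf(x[A], Sigma[np.ix_(A, A)]) *
       gauss_pdf(x[B], Sigma[np.ix_(B, B)]) /
       gauss_pdf(x[AB], Sigma[np.ix_(AB, AB)]))
# lhs and rhs agree up to floating-point error: f(A∪B) = f(A) f(B) / f(A∩B)
```

Note that Gaussian marginals are read off directly as sub-matrices of $\Sigma$, which is what makes this check a one-liner.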
Let us now apply this formula to a set of $p$ variables associated with a decomposable Information Filtering Network made of cliques, $C_i$, with $i = 1, \dots, n_C$, and complete separators, $S_j$, with $j = 1, \dots, n_S$. In such a network, the vertices represent the variables and the edges represent couples of conditionally dependent variables (the condition being with respect to all other variables). Conversely, variables which are not directly connected by a network edge are conditionally independent. Given such a network, in the same way as for Eq.1, one can write the joint probability density function for the set of $p$ variables in terms of the following factorization into cliques and separators Lauritzen (1996):

$f(\mathbf{X}) = \dfrac{\prod_{i=1}^{n_C} f(\mathbf{X}_{C_i})}{\prod_{j=1}^{n_S} f(\mathbf{X}_{S_j})^{k_j - 1}}$   (2)
where $f(\mathbf{X}_{C_i})$ and $f(\mathbf{X}_{S_j})$ are respectively the marginal probability density functions of the variables constituting $C_i$ and $S_j$ Lauritzen (1996). The term $k_j$ counts the number of disconnected components produced by removing the separator $S_j$ and it is therefore the degree of the separator in the clique tree. Given the graph, Eq.2 is exact; it is a direct consequence of the Bayes formula and it is therefore very general, applicable to linear and non-linear as well as parametric and non-parametric modelling.
II.2 Functional form of the multivariate probability distribution function
We search for the functional form of the multivariate probability distribution function, $f(\mathbf{X})$. To find the functional form of the distribution and the values of its parameters, we use the maximum entropy method Jaynes (1957a, b), which constrains the model to have some given expectation values while maximising the overall information entropy. At the second order, the model distribution that maximizes entropy while constraining the first and second moments at given values is:

$f(\mathbf{X}) = \dfrac{1}{Z} \exp\!\left( -\dfrac{1}{2} (\mathbf{X} - \boldsymbol{\mu})^{T} J\, (\mathbf{X} - \boldsymbol{\mu}) \right)$   (3)

where $\boldsymbol{\mu}$ is the vector of expectation values with coefficients $\mu_i = \langle X_i \rangle$ and $J_{i,j}$ are the matrix elements of $J$. They are the Lagrange multipliers associated with the second moments of the distribution, which are the coefficients of the covariance matrix of the set of $p$ variables. It is clear that Eq.3 is a multivariate normal distribution with covariance $\Sigma = J^{-1}$. If we require the model to reproduce exactly all second moments $\hat{\Sigma}_{i,j}$, then the solution for the distribution parameters is $J = \hat{\Sigma}^{-1}$. Therefore, in order to construct the model, one could estimate empirically the covariance matrix from a set of observations and then invert it in order to estimate the inverse covariance. However, when the observation length $q$ is smaller than the number of variables $p$, the empirical estimate of the covariance matrix cannot be inverted. Furthermore, even when $q > p$, such a model has $O(p^2)$ parameters and this might be an overfitting solution describing noise instead of the underlying relationships between the variables, resulting in poor predictive power Hotelling (1953); Schäfer and Strimmer (2005). Indeed, we shall see in the following that, when uncertainty is large ($q$ small), models with a smaller number of parameters can have stronger predictive power and can better describe the statistical variability of the data Dempster (1972). Here, we consider a parsimonious modelling that fixes only a selected number of second moments and leaves the others unconstrained. This corresponds to modelling the multivariate distribution by using a sparse inverse covariance where the unconstrained moments are associated with zero coefficients in the inverse. Let us note that this in turn implies zero partial correlation between the corresponding couples of variables.
II.3 Sparse inverse covariance from decomposable Information Filtering Networks
From Eq.2 it follows that, in the case of the multivariate normal distribution, the network coincides with the structure of the non-zero coefficients $J_{i,j}$ in Eq.3, and their values can be computed from the local inversions of the covariance matrices respectively associated with the cliques and separators Lauritzen (1996):

$J_{i,j} = \sum_{\{C_\ell :\, i,j \in C_\ell\}} \left(\hat{\Sigma}_{C_\ell}^{-1}\right)_{i,j} \;-\; \sum_{\{S_m :\, i,j \in S_m\}} (k_m - 1) \left(\hat{\Sigma}_{S_m}^{-1}\right)_{i,j}$   (4)

and $J_{i,j} = 0$ if $i$ and $j$ are not both part of a common clique.
This is a very simple formula that reduces the global problem of a matrix inversion to a sum of local inversions of matrices of the sizes of the cliques and separators (no more than 4 and 3, respectively, in the case of TMFG graphs Diestel (2010); Massara et al. (2015)). This means that, for TMFG graphs, only $q \ge 4$ observations would be enough to produce a non-singular global estimate of the inverse covariance. An example illustrating this inversion procedure is provided in Fig.3.
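The local-inversion formula of Eq.4 can be sketched in a few lines of numpy (helper names and the toy data are ours). The structure is the two-tetrahedra example of Fig.1: two 4-cliques sharing a 3-clique separator of degree $k = 2$. A useful sanity check is the standard property of decomposable models: the inverse of the assembled $J$ reproduces the empirical covariance exactly on every pair of variables belonging to a common clique, while unconstrained pairs get $J_{i,j} = 0$.

```python
import numpy as np

def logo_precision(Sigma_hat, cliques, separators, p):
    """Assemble J from local inversions (Eq. 4); separators carry weight k - 1."""
    J = np.zeros((p, p))
    for c in cliques:
        idx = np.ix_(c, c)
        J[idx] += np.linalg.inv(Sigma_hat[idx])     # clique contributions
    for s, k in separators:                         # k = degree of the separator
        idx = np.ix_(s, s)
        J[idx] -= (k - 1) * np.linalg.inv(Sigma_hat[idx])
    return J

# Toy empirical covariance for p = 5 variables, q = 50 observations
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 50))
Sigma_hat = np.cov(X)

cliques = [[0, 1, 2, 3], [1, 2, 3, 4]]
separators = [([1, 2, 3], 2)]                       # shared triangle, degree 2
J = logo_precision(Sigma_hat, cliques, separators, 5)
Sigma_model = np.linalg.inv(J)
# Sigma_model matches Sigma_hat on all entries inside a common clique;
# the unconstrained entry (0, 4) has J[0, 4] = 0.
```

Only 3x3 and 4x4 inversions are ever performed, which is what makes the construction fast and well-defined for very short observation windows.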
[Figure: planar graph made of 4-cliques separated by 3-cliques.]
[Figure: example of the computation of an element of the inverse covariance via Eq.4.]
II.4 Construction of the maximum likelihood network
We are now facing two related problems: 1) How to choose the moments to retain? 2) How to verify that the parsimonious model describes well the statistical properties of the system of $p$ variables? The solutions of these two problems are related because we aim to develop a methodology that chooses the non-zero elements of the inverse covariance in such a way as to best model the statistical properties of the real system under observation. In order to construct a model that is closest to the real phenomenon, we search for the set of parameters $J$ associated with the largest likelihood, i.e. with the largest probability of observing the actual observations $\mathbf{x}_1, \dots, \mathbf{x}_q$. The logarithm of the likelihood from a model distribution function $f(\mathbf{X})$ (Eq.3) with parameters $J$ is associated with the empirical estimate of the covariance matrix, $\hat{\Sigma}$, by Hoel et al. (1954):

$\ln \mathcal{L} = \dfrac{q}{2} \left( \ln \det J - \operatorname{tr}(\hat{\Sigma} J) - p \ln(2\pi) \right)$   (5)
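For centred data, this closed form is nothing but the sum of the per-observation Gaussian log-densities; the following sketch (our toy data, any symmetric positive-definite $J$ works) checks the two expressions against each other.

```python
import numpy as np

# Toy centred dataset: q observations of p variables
rng = np.random.default_rng(3)
p, q = 4, 500
X = rng.normal(size=(q, p))
Sigma_hat = X.T @ X / q                      # empirical second-moment matrix

J = np.linalg.inv(np.cov(X, rowvar=False))   # any SPD model precision works here
sign, logdetJ = np.linalg.slogdet(J)

# Closed form (Eq. 5)
loglik = 0.5 * q * (logdetJ - np.trace(Sigma_hat @ J) - p * np.log(2 * np.pi))

# Direct sum of log-densities over the q observations
direct = sum(0.5 * (logdetJ - x @ J @ x - p * np.log(2 * np.pi)) for x in X)
# loglik and direct coincide, since tr(Sigma_hat J) = (1/q) * sum_t x_t' J x_t
```

This identity is what allows network construction to be driven by $\ln \det J$ alone, as shown next.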
The network that we aim to discover must be associated with the largest log-likelihood, and it can be constructed in a greedy way by adding in subsequent steps the elements with maximal log-likelihood gain. In this paper we propose two constructions: 1) the maximum spanning tree (MST) Kruskal Jr. (1956), which builds a spanning tree that maximises the sum of edge weights; 2) a variant of the TMFG Massara et al. (2015), which builds a planar graph that aims to maximize the sum of edge weights. In both cases edge weights are associated with the log-likelihood. One can show that for all decomposable graphs, following Eq. 4, the middle term in Eq. 5 is $\operatorname{tr}(\hat{\Sigma} J) = p$. Hence, to maximize the log-likelihood, only $\ln \det J$ must be maximized; from Eq.2, this is Lauritzen (1996):

$\ln \det J = - \sum_{i=1}^{n_C} \ln \det \hat{\Sigma}_{C_i} + \sum_{j=1}^{n_S} (k_j - 1) \ln \det \hat{\Sigma}_{S_j}$   (6)
For the LoGo-MST, the construction is simplified because in a tree the cliques are the edges $(i, j)$, the separators are the non-leaf vertices and the $k_j$ are the vertex degrees. In this case Eq.6 becomes $\ln \det J = -\sum_{(i,j)} \ln \det \hat{\Sigma}_{(i,j)} + \sum_{v} (k_v - 1) \ln \hat{\sigma}_v^2$, with $\hat{\sigma}_i^2$ the sample variance of variable ‘$i$’. Given that $\det \hat{\Sigma}_{(i,j)} = \hat{\sigma}_i^2 \hat{\sigma}_j^2 (1 - \rho_{i,j}^2)$, with $\rho_{i,j}$ the Pearson’s correlation matrix element, maximizing Eq.6 is equivalent to maximizing $\sum_{(i,j)} \ln \frac{1}{1 - \rho_{i,j}^2}$. Therefore, the MST can be built through the standard Prim’s Prim (1957) or Kruskal’s Kruskal Jr. (1956) algorithms from a matrix of weights with coefficients $-\ln(1 - \rho_{i,j}^2)$. The LoGo-MST inverse covariance estimation, $J$, is then computed by the local inversion, Eq.4, on the MST structure. Note that the MST structure depends only on the correlations, not the covariances. The LoGo-TMFG construction requires a specifically designed procedure. Also in this case, only correlations matter; indeed, the structure of the inverse covariance network reflects the partial correlations, i.e. the correlation between two variables given all others. LoGo-TMFG starts with the tetrahedron with the smallest correlation determinant and then iteratively introduces inside existing triangular faces the vertex that maximizes $\ln \det \hat{R}_{S} - \ln \det \hat{R}_{C}$, where $C$ and $S$ are the new clique and separator created by the vertex insertion and $\hat{R}$ denotes the corresponding correlation sub-matrix. The LoGo-TMFG procedure is schematically reported in Algorithm 1 and in Fig. 4. The TMFG is a computationally efficient algorithm Massara et al. (2015) that produces a decomposable (chordal) graph, with $3p - 6$ edges, which is a clique tree constituted by four-cliques connected by three-clique separators. Note that for the TMFG $k_j = 2$ always.
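The LoGo-MST step above amounts to a maximum spanning tree on edge weights $-\ln(1 - \rho_{ij}^2)$. A minimal Kruskal-style sketch (helper names are ours; a union-find structure tracks connected components):

```python
import numpy as np

def max_spanning_tree(R):
    """Edges of the maximum spanning tree for weights -ln(1 - R_ij^2)."""
    p = R.shape[0]
    weighted = [(-np.log(1.0 - R[i, j] ** 2), i, j)
                for i in range(p) for j in range(i + 1, p)]
    parent = list(range(p))             # union-find forest

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    tree = []
    for _, i, j in sorted(weighted, reverse=True):  # heaviest edges first
        ri, rj = find(i), find(j)
        if ri != rj:                    # accept edge only if it joins two components
            parent[ri] = rj
            tree.append((i, j))
    return tree

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))
R = np.corrcoef(X, rowvar=False)
tree = max_spanning_tree(R)             # p - 1 = 5 edges spanning all vertices
```

Since $-\ln(1 - \rho^2)$ is monotone in $\rho^2$, the same tree is obtained from the squared-correlation weights, which is why only correlations (not covariances) matter for the structure.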
Let us note that, by expanding Eq.6 to second order in the correlation coefficients, the logarithms of the determinants are well approximated by a constant minus the sum of the squared correlation coefficients associated with the edges in the cliques or separators. This can simplify the algorithm: the TMFG could simply be computed from a set of weights given by the squared correlation coefficient matrix, as described in Massara et al. (2015).
Further, let us note that, for simplicity, in this paper we only consider likelihood maximization. A natural, straightforward extension of the present work is to consider the Akaike Information Criterion Akaike (1974, 1998) instead. However, we have verified that, for the cases studied in this paper, the two approaches give very similar results.
III. Results
III.1 Inverse covariance estimation
We investigated stock price time series from the US equity market, computing the daily log-returns $r_t = \ln P_t - \ln P_{t-1}$ over a period of 15 years from 1997 to 2012 Musmeci et al. (2015a). We built 100 different datasets by creating stationary time series of different lengths, selecting returns at random points in time and randomly picking $p$ series out of the 342 in total. Each dataset was divided into two temporal non-overlapping windows, with $q$ elements constituting the ‘training set’ and the other $q$ elements the ‘testing set’.
In order to quantify the goodness of the methodology, we computed the log-likelihood of the testing dataset using the inverse covariance estimates from the training set. Larger log-likelihoods indicate models that better describe the testing data. Figure 5 reports the results for time series of different lengths $q$. Smaller values of $q$ mean a smaller number of observations in the training dataset used to construct the model and therefore correspond to larger uncertainties. Note that the green upward triangles in Fig. 5, denoted with MAX, are the theoretical maximum from the inverse sample covariance matrix calculated on the testing set, which is reported as a reference indicating the upper value for the attainable likelihood. Let us first observe from this figure that, for this stationary financial time series study, LoGo-TMFG outperforms the likelihood from the inverse covariance solution (denoted with ‘Inv. Cov.’ in Fig. 5). For $q < p$ the inverse covariance is not computable and therefore the comparison cannot be made; for $q \ge p$, the inverse covariance is computable but it performs very poorly for small sample sizes, becoming comparable to LoGo-TMFG only for the largest values of $q$, with both approaching the theoretical maximum. Note that LoGo-MST also outperforms the inverse covariance solution in most of the range of $q$. We then compared the log-likelihood from the LoGo-MST and LoGo-TMFG sparse inverse covariances with the state-of-the-art Glasso $\ell_1$-norm penalized sparse inverse covariance models (using the implementation by hsieh2014quic) and the Ridge $\ell_2$-norm penalized inverse model. The Glasso method depends on regularization parameters which were estimated by using two standard methods: i) GLasso CV uses a two-fold cross-validation method Pedregosa et al. (2011); ii) GLasso Sp fixes the regularization parameter to the value that creates in the training set a sparse inverse with sparsity equal to that of the LoGo-TMFG network.
The Ridge inverse penalization parameter was also computed by a cross-validation method Pedregosa et al. (2011). Fig.6 reports a comparison between these methods for various values of $q$. We can observe that LoGo-TMFG outperforms the Glasso methods, achieving larger likelihoods from moderate values of $q$ onwards. Results are detailed in Table 1, where we also compare with the null model (‘NULL’), which is a completely disconnected network corresponding to a diagonal $J$.
LoGo models can achieve better performance than Glasso models with fewer coefficients and are computationally more efficient. This is shown in Fig.7, where we report the comparison between the number of non-zero off-diagonal coefficients in the precision matrix for GLasso CV and for the LoGo models. These results show that the number of coefficients for GLasso CV is 3 to 6 times larger than for LoGo-TMFG, while the computation time for LoGo-TMFG is about three orders of magnitude smaller than the computation time for GLasso CV. Note that LoGo-TMFG has a constant number of coefficients, corresponding to the number of edges in the TMFG network. A further comparison between performance, sparsity and execution time is provided in Fig. 8, where the top plot reports the fraction of log-likelihood for Glasso (implemented by using hsieh2014quic) and for LoGo-TMFG vs. the fraction of non-zero off-diagonal coefficients of $J$ for a given data series length $q$. In the bottom plot we report instead the fraction of computational time vs. the fraction of non-zero off-diagonal coefficients of $J$ for Glasso and for LoGo-TMFG. We can observe that at the same sparsity (value 1 on the x-axis, indicated with the vertical line) Glasso underperforms LoGo by 10%, and Glasso is about 50 times slower than LoGo on the same machine. We verified that eventually the maximum performance of Glasso can become 1.5% better than LoGo, but with 10 times more parameters and a computation time 2,000 times longer. Let us note that, in this example, the best Glasso performances are measured a posteriori on the testing set; they are therefore hypothetical maxima which cannot be reached with cross-validation methods, which instead result in average performances a few percent inferior to LoGo (see Fig.6). All these results refer to the same simulations as the results in Fig.5. The computation time of LoGo can be decreased even further by parallelising the algorithm.
The previous results are for time series made stationary by randomly selecting log-returns at different times. In practice, financial time series – and other real-world signals – are non-stationary, having statistical properties that change with time. Table 2 reports the same analysis as in Fig.6 and Table 1 but with datasets taken in temporal sequence, with the training set being the data points preceding the testing set. Let us note that, considering the time period analysed, in the case of large $q$ the training set has most data points in the period preceding the 2007-2008 financial crisis, whereas the testing set has data in the period following the crisis. Nonetheless, we see that the results are comparable with the ones obtained for the stationary case. Surprisingly, we observe that for relatively small time-series lengths the values of the log-likelihood achieved by the various models are larger than in the stationary case. This counterintuitive fact can be explained by the larger temporal persistence of real data with respect to the randomized series. Further results and a plot of the log-likelihood for this non-stationary case are given in Appendix A (Fig.9).
We also investigated artificial datasets of $p$ multivariate variables generated from factor models, with 3 and 30 common factors respectively. Results for the average log-likelihood and the standard deviations computed over 100 samples at different values of $q$ are reported in Table 3. In this case we note that, for a number of factors equal to 3, LoGo again performs consistently better than the inverse covariance, better than the Glasso model of equal sparsity and comparably well with cross-validated Glasso; conversely, when the number of factors is set to 30, LoGo underperforms with respect to cross-validated Glasso and even the inverse covariance (when the latter is computable, $q > p$). However, we note that LoGo does better than the Glasso model with equal sparsity. This seems to indicate that factor models with more than a few factors require denser models, with larger numbers of non-zero elements in the inverse covariance than the ones provided by the MST and TMFG networks used in the present LoGo construction. Note that by increasing the number of factors the performances of all models become worse. Let us finally note that the ridge inverse covariance performs well in all the cases studied. It is indeed well known that this is a powerful estimator for the inverse covariance; however, the purpose of the present investigation concerns sparse modelling, and the ridge inverse covariance is dense, with all coefficients different from zero. Furthermore, let us remark that LoGo can outperform ridge in several cases, as we can notice from Fig.6 and Tables 1, 2 and 3. Further results and a plot of the log-likelihood for factor models are given in Appendix B (Fig.10).

Table 1. Stationary case; columns report increasing training-set lengths.

q  50  100  300  500  1000  2000
Inv. Cov.      112 (53)  485 (29)  709 (8)  755 (4) 
LoGo TMFG  631 (52)  711 (28)  753 (9)  756 (8)  759 (5)  764 (3) 
LoGo MST  658 (43)  702 (28)  737 (9)  738 (8)  742 (5)  747 (3) 
Ridge  676 (17)  715 (11)  741 (6)  746 (6)  756 (5)  766 (3) 
GLasso Sp  674 (23)  679 (18)  678 (10)  677 (8)  678 (5)  679 (2) 
GLasso CV  734 (25)  743 (15)  750 (5)  750 (5)  750 (4)  752 (2) 
MAX      895 (6)  866 (5)  820 (4)  801 (3) 
NULL  276 (0.02)  276 (0.02)  276 (0.01)  276 (0.01)  276 (0.01)  276 (0.00) 
Table 2. Non-stationary case (training set temporally preceding the testing set); columns report increasing training-set lengths.

q  50  100  300  500  1000  2000
Inv. Cov.        253 (835)  481 (335)  678 (15) 
LoGo TMFG  679 (212)  757 (120)  674 (247)  665 (312)  605 (211)  694 (12) 
LoGo MST  699 (202)  747 (120)  656 (255)  641 (324)  583 (221)  678 (11) 
Ridge  753 (113)  746 (86)  721 (113)  721 (155)  684 (115)  710 (10) 
GLasso Sp  722 (110)  704 (107)  664 (113)  627 (159)  625 (97)  659 (4) 
GLasso CV  769 (134)  761 (89)  718 (110)  700 (178)  666 (120)  711 (6) 
MAX        919 (63)  869 (40)  851 (3) 
NULL  276 (0.14)  276 (0.07)  276 (0.08)  276 (0.07)  276 (0.06)  276 (0.00) 
Table 3(a). Factor model with 3 common factors; columns report increasing training-set lengths.

q  50  400  1000  2000
Inv. Cov.    394 (14)  63 (17)  46 (18) 
LoGo TMFG  98 (13)  51 (17)  51 (18)  58 (20) 
LoGo MST  142 (10)  116 (13)  116 (11)  121 (13) 
Ridge  163 (8)  82 (16)  53 (15)  48 (17) 
GLasso Sp  452 (9)  447 (4)  447 (3)  447 (2) 
GLasso CV  128 (7)  62 (13)  61 (11)  55 (15) 
MAX    59 (18)  2 (17)  20 (18) 
NULL  427 (12)  425 (6)  426 (3)  425 (2) 
Table 3(b). Factor model with 30 common factors; columns report increasing training-set lengths.

q  50  400  1000  2000
Inv. Cov.    484 (12)  143 (12)  115 (11) 
LoGo TMFG  440 (7)  376 (2)  373 (1)  372 (2) 
LoGo MST  432 (5)  400 (3)  393 (1)  392 (1) 
Ridge  324 (11)  158 (9)  129 (12)  113 (11) 
GLasso Sp  430 (3)  425 (1)  424 (1)  424 (1) 
GLasso CV  326 (5)  151 (6)  130 (14)  118 (11) 
MAX    18 (9)  78 (12)  88 (11) 
NULL  425 (4)  426 (1)  425 (1)  426 (1) 
III.2 Time series prediction: LoGo for regressions and causality
LoGo estimates the joint probability distribution function yielding the set of parameters for the model system’s dependency structure. In this section we demonstrate how this model can be used also for forecasting. Information Filtering Networks have been proven to be powerful tools to characterize the structure of complex systems comprising several variables such as financial markets and biological systems Tumminello et al. (2005); Aste et al. (2010); Musmeci et al. (2015a); Song et al. (2012); musmeci2016 . They have also been proven effective to understand how financial risk is distributed in markets and how to construct performing portfolios Pozzi et al. (2013); Musmeci et al. (2015b). However, so far, they were not associated to probabilistic models able to make use of their capability of meaningful representation of the market structure. LoGo provides this instrument and in particular we here show how Information Filtering Networks can be used to compute sparse regressions. Indeed, generally speaking, a regression consists in estimating the expectation values of a set of variables, , conditioned to the values of another set of variables :
    E[X_B | X_A = x_A]        (7)
When multivariate normal statistics is used, this is the linear regression. If the LoGo sparse inverse covariance J
is used, then Eq. 7 computes a sparse linear regression. If the set of variables X_A are 'past' variables and the variables X_B are 'future' variables, then the regression becomes a forecasting instrument where values of variables in the future can be inferred from past observations. Here we consider the variables in the past (X_A) and the variables in the future (X_B). These variables can either be the same variables at different lags or different variables. For simplicity of notation, and without loss of generality, we consider centred variables with zero expectation values. We can consider X_A and X_B as two distinct sets of variables that, of course, have some dependency relation. The conditional expectation values can be calculated from the conditional joint distribution function which, from Bayes' theorem, is
p(x_B | x_A) = p(x_A, x_B) / p(x_A). From this expression one obtains

    E[X_B | X_A = x_A] = -J_{BB}^{-1} J_{BA} x_A        (8)
where we have written the precision matrix as a block matrix
    J = | J_{AA}  J_{AB} |
        | J_{BA}  J_{BB} |        (9)
with J_{BB} being the part of the precision matrix in the lower right and J_{BA} the part of the precision matrix in the lower left. As pointed out previously, Eq. 8 is just a different way to write the linear regression which, in a more conventional form, reads x_B = beta x_A + epsilon, with the coefficients given by beta = -J_{BB}^{-1} J_{BA} and epsilon the residuals. However, by using LoGo to estimate J, Eq. 8 becomes a sparse predictive model associated with a meaningful inference structure. Through Eq. 8 we can now quantify the effect of past values of one set of variables on the future values of another set. Indeed, Eq. 8 is a map describing the impact of variables in the past onto the future; nonzero elements of J_{BA} single out the subset of variables of X_A that has direct future impact on a subset of variables of X_B. With LoGo we can identify the channels that spread instability through the network and we can quantify their effects.
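As an illustration, the regression of Eq. 8 can be sketched in a few lines. The toy precision matrix below is made up for the example (in practice J would come from the LoGo estimate), and the final cross-check uses the standard covariance-block form of the same regression.

```python
import numpy as np

# Toy 4 x 4 precision matrix over (x_A, x_B): two "past" variables x_A
# followed by two "future" variables x_B.  The zeros in the lower-left
# block J_BA encode which past variables have no direct future impact.
J = np.array([
    [2.0, 0.3, 0.5, 0.0],
    [0.3, 2.0, 0.0, 0.4],
    [0.5, 0.0, 2.0, 0.2],
    [0.0, 0.4, 0.2, 2.0],
])

p = 2                    # number of past variables
J_BB = J[p:, p:]         # lower-right block of Eq. 9
J_BA = J[p:, :p]         # lower-left block of Eq. 9

# Eq. 8: regression coefficients beta = -J_BB^{-1} J_BA.
beta = -np.linalg.solve(J_BB, J_BA)

x_A = np.array([1.0, -0.5])      # observed past values
x_B_hat = beta @ x_A             # E[x_B | x_A], the sparse forecast

# Cross-check against the equivalent covariance-block regression
# Sigma_BA Sigma_AA^{-1}.
Sigma = np.linalg.inv(J)
beta_dense = Sigma[p:, :p] @ np.linalg.inv(Sigma[:p, :p])
assert np.allclose(beta, beta_dense)
```

Note that the zero pattern of J_BA is inherited by beta only through the J_BB^{-1} factor; the direct channels of impact are read off J_BA itself, as discussed above.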
Risk and systemic vulnerability are instead described by the structure of J_{BB}. Indeed, the expected conditional fluctuations of the variables X_B are quantified by the conditional covariance:
    Cov(X_B | X_A = x_A) = J_{BB}^{-1}        (10)
which involves the term J_{BB} only; this term therefore describes the propagation of uncertainty across variables. Through this term we can link Information Filtering Networks with causality relations: indeed, Granger causality and transfer entropy are both associated with the ratio of the determinants of two conditional covariances between past and future variables Granger (1988); Barnett et al. (2009). This introduces a novel way to associate causal directionality to Information Filtering Networks.
III.3 Financial applications: Stress Testing and Risk Allocation
III.3.1 Stress Testing
A typical stress test for financial institutions, required by regulatory bodies, consists in forecasting the effect of severe financial and economic shocks on the balance sheet of a financial institution. In this context, let us reformulate the previous results by considering X_A to be the set of economic and financial variables that can be shocked and X_B the set of securities held in an institution's portfolio. Assuming that all the changes in the economic and financial variables and in the assets of the portfolio can be modelled as a GMRF, Eq. 8 represents the distribution of the returns of the portfolio (X_B) conditional on the realization of the economic and financial shocks (X_A). An approach along similar lines was proposed in Rebonato (2010); Denev (2015). We note that with the LoGo approach we have a sparse relationship between the financial variables and the securities. This makes calibration more robust, and it can be insightful for identifying mechanisms of impact and vulnerabilities.
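A hypothetical stress-test sketch along these lines: the factor/security split, the random stand-in precision matrix and the shock values are all invented for illustration; in practice J would be the LoGo estimate calibrated on market data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setting: 3 macro risk factors x_A (the shocked variables)
# and 4 securities x_B.  A random positive-definite matrix stands in for
# the sparse LoGo precision matrix J.
n_A, n_B = 3, 4
M = rng.normal(size=(n_A + n_B, n_A + n_B))
J = M @ M.T + (n_A + n_B) * np.eye(n_A + n_B)

J_BB = J[n_A:, n_A:]
J_BA = J[n_A:, :n_A]

# Stress scenario: shock the three factors by -3, -2 and -1 units.
shock = np.array([-3.0, -2.0, -1.0])

# Eq. 8: conditional mean of the securities under the shocked factors.
mean_B = -np.linalg.solve(J_BB, J_BA @ shock)

# Eq. 10: the residual uncertainty around that mean is J_BB^{-1},
# the same for every scenario.
cov_B = np.linalg.inv(J_BB)
```

The pair (mean_B, cov_B) fully specifies the Gaussian distribution of the portfolio returns under the scenario, from which loss quantiles or samples can be drawn.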
III.3.2 Risk Allocation
A second application is the calculation of conditional statistics in the presence of linear constraints (see Rue and Held (2005)). In this case we indicate with X the set of random variables associated with the returns of a portfolio of assets and with J the associated sparse inverse covariance matrix. Let w be the vector of holdings of the portfolio; then w^T X is the return of the portfolio. An important question in portfolio management is how to allocate profits and losses to the different assets conditional on a given level of profit or loss, which is equivalent to knowing the distribution of returns conditional on a given level of loss. More generally, we want to estimate E[X | A X = e], where A is generally a low-rank (rank one in our example) matrix that specifies hard linear constraints. Using the method of Lagrange multipliers (see Strang (1986) for an introduction), the conditional mean is calculated as (Rue and Held (2005)):
    E[X | A X = e] = μ - J^{-1} A^T (A J^{-1} A^T)^{-1} (A μ - e)        (11)
and the conditional covariance is:
    Cov(X | A X = e) = J^{-1} - J^{-1} A^T (A J^{-1} A^T)^{-1} A J^{-1}        (12)
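Eqs. 11 and 12 can be sketched directly for the rank-one portfolio constraint; the covariance matrix and holdings below are made up for the example, and we take μ = 0 (centred returns, as in the text), so J^{-1} appears only through Sigma.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy portfolio of 5 assets: covariance Sigma = J^{-1}, zero unconditional
# mean.
n = 5
M = rng.normal(size=(n, n))
Sigma = M @ M.T + n * np.eye(n)

w = np.full(n, 1.0 / n)     # equal holdings
A = w[None, :]              # rank-one constraint matrix (1 x n)
e = np.array([-0.10])       # condition on a 10% portfolio loss

# Eqs. 11 and 12 with mu = 0: conditional mean and covariance given A x = e.
K = Sigma @ A.T @ np.linalg.inv(A @ Sigma @ A.T)
mean_c = K @ e              # loss allocated to each asset
cov_c = Sigma - K @ A @ Sigma

# The allocation reproduces the constrained loss exactly, and the portfolio
# direction carries no residual variance.
assert np.isclose(w @ mean_c, -0.10)
assert np.isclose(w @ cov_c @ w, 0.0)
```

mean_c is precisely the profit-and-loss allocation across assets conditional on the given portfolio loss, while cov_c quantifies the remaining uncertainty of that allocation.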
If J is estimated using decomposable Information Filtering Networks (MST or TMFG), then it can be written as a sum of smaller matrices (as in Algorithm 1) involving cliques and separators:
    J = \sum_{C ∈ cliques} [Σ_C^{-1}] - \sum_{S ∈ separators} [Σ_S^{-1}]        (13)

where [Σ_C^{-1}] denotes the inverse of the marginal covariance matrix of clique C, embedded in the full matrix frame (and analogously for the separators S).
This decomposition allows for a sparse and potentially parallel evaluation of the matrix products in Eqs. 11 and 12.
This framework can therefore be used to build the profit-and-loss (P/L) distribution of a portfolio conditional on a number of explanatory variables, and to allocate the P/L to the different assets conditional on the realization of a given level of profit or loss. The solution is analytical and therefore extremely quick. Moreover, given the decomposability of the model, Eq. 13 allows important statistics to be calculated in parallel by applying the calculations locally to the cliques and to the separators. For instance, it is a simple exercise to show that the unconditional expected P/L and the unconditional volatility can be calculated in parallel by adding the contributions of the cliques and subtracting the contributions of the separators. In summary, LoGo provides the possibility of building a basic risk management framework that allows risk aggregation, allocation, stress testing and scenario analysis in a multivariate Gaussian setting in a quick and potentially parallel fashion.
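The clique/separator decomposition of Eq. 13 can be checked on the smallest decomposable example, a three-variable Markov chain; the specific covariance values below are constructed for the illustration.

```python
import numpy as np

# Markov chain 1 - 2 - 3: cliques {1,2} and {2,3}, separator {2}.
# (x2 = 0.5 x1 + noise, x3 = 0.5 x2 + noise, unit innovation variances.)
Sigma = np.array([
    [1.0,  0.5,   0.25  ],
    [0.5,  1.25,  0.625 ],
    [0.25, 0.625, 1.3125],
])

def embed(block, idx, n=3):
    """Place a small matrix into the full n x n frame at rows/cols idx."""
    out = np.zeros((n, n))
    out[np.ix_(idx, idx)] = block
    return out

# Eq. 13: sum of locally inverted clique covariances minus locally
# inverted separator covariances.
J = (embed(np.linalg.inv(Sigma[np.ix_([0, 1], [0, 1])]), [0, 1])
     + embed(np.linalg.inv(Sigma[np.ix_([1, 2], [1, 2])]), [1, 2])
     - embed(np.linalg.inv(Sigma[np.ix_([1], [1])]), [1]))

# Only 2 x 2 (and 1 x 1) inversions were used, yet J is the exact global
# precision matrix, and the missing edge (1,3) is exactly zero in J.
assert np.allclose(J @ Sigma, np.eye(3))
assert np.isclose(J[0, 2], 0.0)
```

Each clique and separator term can be computed independently, which is what makes the parallel evaluation of Eqs. 11 and 12 possible.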
IV Conclusions
We have introduced a methodology, LoGo, that makes use of Information Filtering Networks to produce probabilistic models which are sparse, are associated with high likelihood, and are computationally fast, making it possible to use them with very large datasets. It has been established that Information Filtering Networks produce sparse structures that reflect well the properties of the underlying complex system Tumminello et al. (2005); however, so far, Information Filtering Networks have been used only for descriptive purposes. LoGo now provides us with an instrument to build predictive models from Information Filtering Networks, opening an entirely new range of perspectives that we have just started exploring.
LoGo produces high-dimensional sparse inverse covariances by using low-dimensional local inversions only, making the procedure computationally very efficient and little sensitive to the curse of dimensionality. The construction through a sum of local inversions, which is at the basis of LoGo, makes this method very suitable for parallel computing and for dynamical adaptation by local, partial updating. We discussed the wide potential applicability of LoGo for sparse inverse covariance estimation, for sparse forecasting models, and for financial applications such as stress testing and risk allocation.
By comparing the results of LoGo modelling, for financial data, with a state-of-the-art GLasso procedure, we demonstrated that LoGo can obtain, in a fraction of the computation time, equivalent or better results with a sparser network structure. However, we observed that when applied to factor models with more than a few factors, LoGo underperforms with respect to less sparse or dense models. This is probably a consequence of the present LoGo implementations using Information Filtering Networks (MST and TMFG) with constrained sparsity (respectively n − 1 and 3(n − 2) edges for n variables). Such a limitation can be easily overcome by constructing more complex networks with larger maximum cliques and a larger number of edges. A natural extension beyond MST and TMFG would be to use, instead of the present greedy local likelihood maximization under topological constraints, an information criterion (such as the Akaike information criterion Akaike (1974, 1998)) which lets a chordal graph be constructed through local moves without constraining its final topological properties a priori. This would produce clique forests which generalize the MST and TMFG studied in this paper. Further extensions could be along the lines of the biologically motivated work of Molinelli et al. (2013), where ensembles of inference networks were explored. These extensions could increase model robustness; this, however, would unavoidably come at the expense of computational efficiency. The trade-off between efficiency and performance in the choice of Information Filtering Networks for LoGo will be the topic of future investigations.
The model introduced in this paper is a second-order solution of the maximum entropy problem, resulting in linear, multivariate normal modelling. It is, however, well known that many real systems follow nonlinear probability distributions. Linearity is a severe limitation, which can nonetheless be overcome in several ways. For instance, LoGo can be extended to a much broader class of nonlinear models by using the so-called kernel trick Shawe-Taylor and Cristianini (2004). Other generalisations to nonlinear transelliptical models Liu et al. (2012) can also be implemented. Furthermore, the probability decomposition at the basis of LoGo (Eq. 2) is general and can even be applied to nonparametric, nonlinear models. These will, however, be the topics of future works.
Acknowledgements
T.A. acknowledges support of the UK Economic and Social Research Council (ESRC) in funding the Systemic Risk Centre (ES/K002309/1). T.D.M. wishes to thank the COST Action TD1210 for partially supporting this work. We acknowledge Bloomberg for providing the data.
Appendix A Comparison between LoGo and the state-of-the-art sparse GLasso model for non-stationary financial data
We compared LoGo with the state-of-the-art sparse GLasso model of Hsieh et al. (2014) for the same financial data used for Fig. 6 and Tab. 1, but using the real temporal sequence of the financial data. These sequences are non-stationary, having varying statistical properties across the time windows. This must unavoidably affect the capability of a model calibrated on the training data to describe statistically the test data, the two being associated with different temporal states where different events affect the market dynamics. Results for the log-likelihood are reported in Fig. 9; these are the same results reported in Tab. 2. By comparing with Fig. 6 we observe a much greater overall noise, with an interesting collapse of performances to similar values for all models. We also observe larger log-likelihoods for shorter time windows. This fact is commented on in Section III. Let us note that these results are in agreement with the findings for the stationary case, with LoGo still performing better than, or comparably well to, GLasso CV. Computational times and sparsity values are very similar to those reported for the stationary case.
Appendix B Comparison between LoGo and the state-of-the-art sparse GLasso model for sparse factor models
Plots of the log-likelihood vs. the time series length for factor models are reported in Fig. 10, which corresponds to the results reported in Tab. 2 of the paper. We observe that LoGo still performs well, and relatively similarly to the real-data performances, when the number of factors is small. Conversely, when the number of factors increases, LoGo underperforms even the inverse correlation. Let us note that LoGo still outperforms GLasso with the same number of parameters.
References
 Lauritzen (1996) S. L. Lauritzen, Graphical Models (Oxford:Clarendon, 1996).
 Rue and Held (2005) H. Rue and L. Held, Gaussian Markov Random Fields: Theory and Applications, vol. 104 of Monographs on Statistics and Applied Probability (Chapman & Hall, London, 2005).
 Davis (2006) T. A. Davis, Direct methods for sparse linear systems, vol. 2 (Siam, 2006).
 Wang et al. (2013) C. Wang, N. Komodakis, and N. Paragios, Comput. Vis. Image Underst. 117, 1610 (2013), ISSN 10773142.
 Montanari (2012) A. Montanari, Compressed Sensing: Theory and Applications pp. 394–438 (2012).
 Tumminello et al. (2005) M. Tumminello, T. Aste, T. Di Matteo, and R. N. Mantegna, Proceedings of the National Academy of Sciences of the United States of America 102, 10421 (2005).
 Aste et al. (2010) T. Aste, W. Shaw, and T. Di Matteo, New Journal of Physics 12, 085009 (2010).
 Pozzi et al. (2013) F. Pozzi, T. Di Matteo, and T. Aste, Scientific Reports 3, 1665 (2013).
 Musmeci et al. (2015a) N. Musmeci, T. Aste, and T. Di Matteo, PLoS ONE 10, e0116201 (2015a).
 Musmeci et al. (2015b) N. Musmeci, T. Aste, and T. Di Matteo, Journal of Network Theory in Finance 1, 77 (2015b).
 Denev (2015) A. Denev, Probabilistic Graphical Models: A New Way of Thinking in Financial Modelling (Risk Books, 2015).
 Wu et al. (2003) X. Wu, Y. Ye, K. R. Subramanian, and L. Zhang, in BIOKDD (2003), vol. 3, pp. 63–69.
 Schäfer and Strimmer (2005) J. Schäfer and K. Strimmer, Bioinformatics 21, 754 (2005).
 Lezon et al. (2006) T. R. Lezon, J. R. Banavar, M. Cieplak, A. Maritan, and N. V. Fedoroff, Proceedings of the National Academy of Sciences 103, 19033 (2006), eprint http://www.pnas.org/content/103/50/19033.full.pdf+html.
 Schneidman et al. (2006) E. Schneidman, M. J. Berry, R. Segev, and W. Bialek, Nature 440, 1007 (2006).
 Zerenner et al. (2014) T. Zerenner, P. Friederichs, K. Lehnertz, and A. Hense, Chaos: An Interdisciplinary Journal of Nonlinear Science 24, 023103 (2014).
 EbertUphoff and Deng (2012) I. EbertUphoff and Y. Deng, Geophysical Research Letters 39, 1 (2012).

 Haran (2011) M. Haran, Handbook of Markov Chain Monte Carlo pp. 449–478 (2011).
 Hristopulos and Elogne (2009) D. T. Hristopulos and S. N. Elogne, Signal Processing, IEEE Transactions on 57, 3475 (2009).
 Hristopulos (2003) D. T. Hristopulos, SIAM Journal on Scientific Computing 24, 2125 (2003).
 Zhou (2011) Y. Zhou, arXiv preprint arXiv:1111.6925 (2011).
 Akaike (1974) H. Akaike, Automatic Control, IEEE Transactions on 19, 716 (1974).
 Akaike (1998) H. Akaike, in Selected Papers of Hirotugu Akaike (Springer, 1998), pp. 199–213.
 Kullback and Leibler (1951) S. Kullback and R. A. Leibler, The annals of mathematical statistics 22, 79 (1951).
 Schwarz et al. (1978) G. Schwarz et al., The annals of statistics 6, 461 (1978).
 Rissanen (1978) J. Rissanen, Automatica 14, 465 (1978).
 Petitjean et al. (2013) F. Petitjean, G. Webb, A. E. Nicholson, et al., in Data Mining (ICDM), 2013 IEEE 13th International Conference on (IEEE, 2013), pp. 597–606.
 Diambra (2011) L. Diambra, Physica A: Statistical Mechanics and its Applications 390, 2198–2207 (2011).
 Molinelli et al. (2013) E. J. Molinelli et al., PLoS Comput. Biol. 9, e1003290 (2013).
 Giudici and Green (1999) P. Giudici and P. J. Green, Biometrika 86, 785 (1999).

 Deshpande et al. (2001) A. Deshpande, M. Garofalakis, and M. I. Jordan, in Proceedings of the Seventeenth conference on Uncertainty in artificial intelligence (Morgan Kaufmann Publishers Inc., 2001), pp. 128–135.
 Bach and Jordan (2001) F. R. Bach and M. I. Jordan, in Advances in Neural Information Processing Systems (2001), pp. 569–576.
 Srebro (2001) N. Srebro, in Proceedings of the Seventeenth conference on Uncertainty in artificial intelligence (Morgan Kaufmann Publishers Inc., 2001), pp. 504–511.
 Chechetka and Guestrin (2008) A. Chechetka and C. Guestrin, in Advances in Neural Information Processing Systems (2008), pp. 273–280.
 d’Aspremont et al. (2008) A. d’Aspremont, O. Banerjee, and L. El Ghaoui, SIAM Journal on Matrix Analysis and Applications 30, 56 (2008).
 Banerjee et al. (2008a) O. Banerjee, L. El Ghaoui, and A. d’Aspremont, The Journal of Machine Learning Research 9, 485 (2008a).
 Banerjee et al. (2006) O. Banerjee, L. E. Ghaoui, A. d’Aspremont, and G. Natsoulis, in Proceedings of the 23rd international conference on Machine learning (ACM, 2006), pp. 89–96.
 Tibshirani (1996) R. Tibshirani, Journal of the Royal Statistical Society, Series B 58, 267 (1996).
 Zou and Hastie (2005) H. Zou and T. Hastie, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67, 301 (2005).
 Meinshausen and Bühlmann (2006) N. Meinshausen and P. Bühlmann, Annals of Statistics 34, 1436 (2006).
 Banerjee et al. (2008b) O. Banerjee, L. El Ghaoui, and A. d’Aspremont, J. Mach. Learn. Res. 9, 485 (2008b), ISSN 15324435.
 Ravikumar et al. (2011) P. Ravikumar, M. J. Wainwright, G. Raskutti, and B. Yu, Electronic Journal of Statistics 5, 935 (2011).
 Hsieh et al. (2011) C. Hsieh, I. S. Dhillon, P. K. Ravikumar, and M. A. Sustik, in Advances in Neural Information Processing Systems 24, edited by J. ShaweTaylor, R. Zemel, P. Bartlett, F. Pereira, and K. Weinberger (Curran Associates, Inc., 2011), pp. 2330–2338.
 Oztoprak et al. (2012) F. Oztoprak, J. Nocedal, S. Rennie, and P. A. Olsen, in Advances in Neural Information Processing Systems 25, edited by P. Bartlett, F. Pereira, C. Burges, L. Bottou, and K. Weinberger (2012), pp. 755–763.
 Friedman et al. (2008) J. Friedman, T. Hastie, and R. Tibshirani, Biostatistics (Oxford, England) 9, 432 (2008).
 Mantegna (1999) R. N. Mantegna, The European Physical Journal B  Condensed Matter and Complex Systems 11, 193 (1999).
 Aste and Matteo (2006) T. Aste and T. Di Matteo, Physica A 370, 156 (2006).
 Song et al. (2012) W.M. Song, T. Di Matteo, and T. Aste, PLoS ONE 7, e31929 (2012).
 Massara et al. (2016) G. P. Massara, T. Di Matteo, and T. Aste, Journal of Complex Networks (2016); arXiv preprint arXiv:1505.02445.
 Jaynes (1957a) E. T. Jaynes, The Physical Review 106, 620 (1957a).
 Jaynes (1957b) E. T. Jaynes, The Physical Review 108, 171 (1957b).
 Hotelling (1953) H. Hotelling, Journal of the Royal Statistical Society. Series B (Methodological) 15, 193 (1953), ISSN 00359246.
 Dempster (1972) A. P. Dempster, Biometrics 28, 157 (1972).
 Diestel (2010) R. Diestel, Graph Theory, vol. 173 of Graduate Texts in Mathematics (Springer, Heidelberg, 2010).
 Hoel et al. (1954) P. G. Hoel et al., Introduction to mathematical statistics. (1954).
 Prim (1957) R. Prim, Bell System Technical Journal, The 36, 1389 (1957), ISSN 00058580.
 Kruskal Jr. (1956) J. B. Kruskal Jr., Proceedings of the American Mathematical Society 7, 48 (1956).
 Hsieh et al. (2014) C.-J. Hsieh, M. A. Sustik, I. S. Dhillon, and P. D. Ravikumar, Journal of Machine Learning Research 15, 2911–2947 (2014).
 Pedregosa et al. (2011) F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, et al., Journal of Machine Learning Research 12, 2825 (2011).
 Rebonato (2010) R. Rebonato, Coherent stress testing (2010).
 Strang (1986) G. Strang, Introduction to Applied Mathematics (WellesleyCambridge Press, 1986), ISBN 9780961408800.
 ShaweTaylor and Cristianini (2004) J. ShaweTaylor and N. Cristianini, Kernel methods for pattern analysis (Cambridge university press, 2004).
 Liu et al. (2012) H. Liu, F. Han, and C.h. Zhang, in Advances in Neural Information Processing Systems (2012), pp. 809–817.
 Granger (1988) C. W. J. Granger, Journal of Econometrics 39, 199–211 (1988).
 Barnett et al. (2009) L. Barnett, A. B. Barrett, and A. K. Seth, Physical Review Letters 103, 238701 (2009).
 Musmeci et al. (2016) N. Musmeci, T. Aste, and T. Di Matteo, Scientific Reports (accepted October 2016); arXiv preprint arXiv:1605.08908 (2016).