Relational data often take the form of a symmetric binary matrix, with entries indicating the presence or absence of links between pairs of individuals or entities. In dynamic settings, the links and the set of entities under consideration can change over time, and interest focuses on inference on the time-varying relational structure and on prediction. Examples include social network analysis, in which links encode friendship networks among individuals, and broader relational settings in which closeness between a pair of units (products, stimuli, countries, companies, etc.) is expressed on a binary scale. Figure 1 shows an example of time-varying binary similarity matrices encoding dynamic co-movements in National Stock Market Indices from 2004 to 2013. Co-movements among a set of assets or market indices are typically analyzed via time-varying covariance or correlation matrices of their corresponding log-returns (see e.g. Tsay, 2005, Wilson & Ghahramani, 2010, Nakajima & West, 2012, Durante et al., 2013); we instead provide a different and not yet fully explored direction of research by treating co-movements as dynamic relational data, shifting our attention from the log-returns themselves to the time-varying symmetric matrices $Y_t$. The matrix $Y_t$ has entries $y_{ij}(t) = 1$ if index $i$ and index $j$ move in the same direction at time $t$ (i.e. both log-returns are positive, or both are negative) and $y_{ij}(t) = 0$ if they move in opposite directions. Co-movements indicate similarity in the indices.
A rich literature is available on modeling similarity or dissimilarity matrices, with Multidimensional Scaling (MDS) providing a widely used technique for graphically representing units in a Euclidean space conditionally on their pairwise dissimilarity measures. General theory and applications are available for Euclidean distances and rank dissimilarities (see Cox & Cox, 2001), with subsequent developments in a Bayesian framework (Oh & Raftery, 2001, Oh & Raftery, 2007) improving the overall performance, but subject to possible issues due to non-identifiable latent coordinates, lack of full conditional conjugacy and absence of an automatic procedure for learning the dimension of the latent space. Moreover, generalizations to the dynamic case are lacking, with only a few recent proposals restricted to specific applications with discrete time evolution (Jamali-Rad & Leus, 2012).
When binary similarity or dissimilarity matrices are analyzed, the previous procedures prove to be inappropriate or impractical (Holbrook et al., 1982), with predicted values outside the probability range and a large number of tied ranks for each unit in non-metric MDS applications. Spatial analysis of choice data (DeSarbo & Hoffman, 1987, DeSarbo et al., 1999) provides a possible generalization of MDS for binary variables, with recently developed algorithms available also in the dynamic case (Sarkar et al., 2007). However, questionable independence assumptions are required to ease maximum likelihood estimation, and Bayesian extensions (DeSarbo et al., 1999) to overcome this problem lack scalability in selecting the dimensionality of the latent space via cross-validation methods. Moreover, dynamic extensions via the Kalman filter rely on first and second order Taylor expansions of the observation model, making it difficult to derive theoretical properties for the exact formulation and requiring a sufficient number of observations to meet the Gaussian assumption. These models are specifically tailored to embedding problems for two-mode co-occurrence data recording links between two different types of entities (e.g. consumer-product, author-word). Our focus is instead on dynamic modeling for one-mode binary matrices.
There is a growing body of literature in social networks on model-based statistical analysis of one-mode binary matrices, traditionally focusing on overly restrictive models, such as the random graph of Erdős & Rényi (1959), the $p_1$ model (Holland & Leinhardt, 1981) and the Exponential Random Graph Model (ERGM) (Frank & Strauss, 1986), with generalizations for dynamic inference available via discrete temporal ERGM (Robins & Pattison, 2001) and hidden temporal ERGM (Guo et al., 2007). ERGMs have grown in popularity, but have a number of drawbacks. Estimation relies on pseudo-likelihood (Strauss & Ikeda, 1990) and approximate MCMC methods (Snijders, 2002), due to the computational intractability of a full likelihood approach. Solutions can be degenerate or nearly degenerate (Handcock et al., 2003), and questions remain about coherence, inflexibility and other key issues.
An alternative class of models focuses on clustering the nodes, based on the pattern of inter-connections in the network. Stochastic Block Models (SBM) (Nowicki & Snijders, 2001) provide a common framework, with the Infinite Relational Model (IRM) (Kemp et al., 2006) allowing an unknown number of clusters via a Dirichlet process. Dynamic SBMs have been recently considered (Ishiguro et al., 2010, Yang et al., 2011, Xu & Hero, 2013). Ishiguro et al. (2010) focus on discrete dynamic evolution via a hidden Markov model. Xu & Hero (2013) accommodate continuous time analysis via a state space formulation, but require sufficient numbers of observations in each block to meet Gaussian assumptions for the sample mean. They use the extended Kalman filter to linearize the observation equation, leading to questions of accuracy.
We dynamically model binary relational matrices by embedding the nodes in a low-dimensional latent Euclidean space, with coordinates evolving in continuous time via Gaussian processes and edge probabilities obtained by applying a logistic link to a baseline plus the dot product of the latent coordinates. Hence, we are most closely related to the literature on latent space models (Hoff et al., 2002) and Mixed Membership Stochastic Block models (MMSB) (Airoldi et al., 2008), which allow each node to belong to multiple blocks with fractional membership. Dynamic latent space models (Sarkar & Moore, 2005) and MMSB models (Xing et al., 2010) incorporate Gaussian perturbations in discrete time and state space models, respectively. Posterior computation in these approaches relies on several layers of approximation without theory available to justify accuracy. In contrast, we provide a simple Gibbs sampling algorithm for our model, which converges to the exact posterior and infers the dimension of the latent space automatically.
The paper is organized as follows. In Section 2, we describe the general model structure with particular attention to prior specification and theoretical properties. Section 3 provides the Gibbs sampling steps. A simulation study is examined in Section 4, and an application to quarterly co-movements in world financial markets is presented in Section 5.
2 Dynamic Latent Space Model
2.1 Notation and Motivation
Let $Y_t$ be the $V \times V$ symmetric binary similarity matrix at time $t \in \mathcal{T} \subset \Re^+$ and $\pi(t)$ be the corresponding symmetric probability matrix having entries $\pi_{ij}(t) = \mathrm{pr}\{y_{ij}(t) = 1\}$ for every $i = 2, \ldots, V$ and $j = 1, \ldots, i-1$. Letting
$$y_{ij}(t) \mid \pi_{ij}(t) \sim \mathrm{Bern}\{\pi_{ij}(t)\}, \qquad (1)$$
independently for each $i > j$ and $t \in \mathcal{T}$, our aim is to define a prior for the collection of time-varying probability matrices $\{\pi(t), t \in \mathcal{T}\}$ with the goals being to (i) obtain a provably flexible specification, (ii) maintain simple computations, (iii) perform dimensionality reduction in order to scale to moderately large settings, (iv) allow unequal spacing and missing observations and (v) allow predictions including a measure of predictive uncertainty. Since the matrices are symmetric and the similarity or dissimilarity of a unit with itself is meaningless, we focus on modeling the lower triangular part, without taking into account the diagonal elements.
2.2 Latent space dynamic model formulation
We construct $\pi_{ij}(t)$ via a monotone increasing link function mapping a latent similarity measure $s_{ij}(t)$ among units $i$ and $j$ at time $t$ into the probability space. Specifically, we choose the link to be the distribution function of the logistic random variable, obtaining
$$\pi_{ij}(t) = \frac{1}{1 + e^{-s_{ij}(t)}}, \qquad (2)$$
for $i = 2, \ldots, V$, $j = 1, \ldots, i-1$ and $t \in \mathcal{T}$. Without further assumptions on $s_{ij}(t)$, one needs to model $V(V-1)/2$ separate stochastic processes, one for each time-varying similarity measure, leading to burdensome computations as $V$ increases and failing to borrow information on the underlying process inducing similarities among the units. In order to reduce the dimensionality of the problem and to learn the network structure among the units for every $t \in \mathcal{T}$, we express the similarity measures as a quadratic combination of a set of latent coordinates for unit $i$ and unit $j$. Specifically
$$s_{ij}(t) = \mu(t) + x_i(t)^{\mathsf{T}} x_j(t), \qquad (3)$$
where $x_i(t) = \{x_{i1}(t), \ldots, x_{iH}(t)\}^{\mathsf{T}}$ and $x_j(t) = \{x_{j1}(t), \ldots, x_{jH}(t)\}^{\mathsf{T}}$ are the vectors of latent coordinates of unit $i$ and unit $j$ respectively, giving rise, together with the baseline $\mu(t)$, to the similarity measure between the two units via a projection approach. According to this specification, units with latent coordinates in the same directions will have a higher probability of being similar (i.e. $y_{ij}(t) = 1$), while units with opposite coordinates are more likely to be dissimilar (i.e. $y_{ij}(t) = 0$).
This formulation is also intuitive in practical applications. Recall our motivating example from finance, and assume for simplicity $\mu(t) = 0$ and only two latent coordinates, representing for example unexpected inflation and industrial production, respectively. Then indices of countries with features in the same directions will have a higher probability of co-moving, while countries with opposite unexpected inflation and industrial production will more likely move in opposite directions.
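To make the projection mechanism concrete, the map from latent coordinates to co-movement probabilities in (2)-(3) can be sketched numerically. This is a minimal illustration of ours; the function name and coordinate values are invented for the example:

```python
import numpy as np

def comovement_probs(X, mu):
    """Map latent coordinates to a symmetric probability matrix.

    X  : (V, H) array, row i holds the coordinates x_i(t) at a fixed time t.
    mu : scalar baseline mu(t).
    Returns the V x V matrix with entries 1 / (1 + exp(-mu - x_i' x_j)).
    """
    S = mu + X @ X.T               # latent similarities s_ij(t), equation (3)
    return 1.0 / (1.0 + np.exp(-S))  # logistic link, equation (2)

# Units 1 and 2 have coordinates pointing in the same direction;
# unit 3 points the opposite way.
X = np.array([[1.0, 0.5],
              [0.8, 0.4],
              [-1.0, -0.5]])
P = comovement_probs(X, mu=0.0)
```

As the projection intuition suggests, `P[0, 1]` exceeds 0.5 (aligned coordinates) while `P[0, 2]` falls below 0.5 (opposite coordinates), and the matrix is symmetric.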
In matrix notation, equation (3) can be rewritten as
$$S(t) = \mu(t) 1_V 1_V^{\mathsf{T}} + X(t) X(t)^{\mathsf{T}}, \qquad (4)$$
where $S(t)$ is a $V \times V$ real symmetric matrix with latent similarity entries $s_{ij}(t)$ and $X(t)$ is the $V \times H$ matrix whose $i$th row is $x_i(t)^{\mathsf{T}}$. Note that, assuming without loss of generality $\mu(t) = 0$, the above decomposition is not unique. For example, if we define $\tilde{X}(t) = X(t) P$ with $P$ an $H \times H$ orthogonal matrix, then $\tilde{X}(t)\tilde{X}(t)^{\mathsf{T}} = X(t) P P^{\mathsf{T}} X(t)^{\mathsf{T}} = X(t) X(t)^{\mathsf{T}}$. If one is interested also in making inference on the latent coordinates matrix $X(t)$, different proposals are available in latent factor modeling to ensure identifiability via restrictions (see e.g. Bollen, 1989) or Procrustean transformations (Hoff et al., 2002). However, since our focus is on inference and prediction for the probability matrices, we follow Ghosh & Dunson (2009) in avoiding identifiability constraints, as such constraints are not necessary to ensure identifiability of the induced similarity matrix $S(t)$.
It is important to characterize the class of matrices whose lower triangular elements can be represented as in (2) with latent similarities decomposed as in (4). Theorem 2.2 and the corresponding Corollary 2.2 state that, for $H$ sufficiently large, the lower triangular elements of any symmetric probability matrix have such a representation. $\mathcal{X}$ denotes the space of all $V \times H$ dimensional matrices of arbitrary coordinate functions mapping from $\mathcal{T}$ to $\Re$, and $\mathcal{M}$ the space of all baseline mean functions $\mu : \mathcal{T} \to \Re$.
Theorem 2.2. Given a symmetric real matrix $S(t)$, $t \in \mathcal{T}$, there exist $X \in \mathcal{X}$ and $\mu \in \mathcal{M}$ such that $s_{ij}(t) = \mu(t) + x_i(t)^{\mathsf{T}} x_j(t)$ for every $i > j$ and $t \in \mathcal{T}$.

Proof. Assume without loss of generality that $\mu(t) = 0$ for all $t$ and take $H = V$. Let $\bar{S}(t)$ be the matrix coinciding with $S(t)$ on the off-diagonal elements, with diagonal entries chosen large enough that $\bar{S}(t)$ is positive semidefinite (for example, diagonally dominant); since the diagonal elements are not modeled, this is without loss of generality. Consider $X(t) = \Gamma(t)\Lambda(t)^{1/2}$, where $\Gamma(t)$ is the matrix of the eigenvectors of $\bar{S}(t)$ and $\Lambda(t)$ the diagonal matrix with the corresponding non-negative eigenvalues. Then $x_i(t)^{\mathsf{T}} x_j(t) = \bar{s}_{ij}(t) = s_{ij}(t)$ for every $i > j$ and $t \in \mathcal{T}$.
Corollary 2.2. Given a symmetric probability matrix $\pi(t)$, $t \in \mathcal{T}$, there exist $X \in \mathcal{X}$ and $\mu \in \mathcal{M}$ such that $\pi_{ij}(t) = (1 + \exp[-\mu(t) - x_i(t)^{\mathsf{T}} x_j(t)])^{-1}$ for every $i > j$ and $t \in \mathcal{T}$.

Proof. The proof follows immediately from Theorem 2.2, applied to the matrix of logits $s_{ij}(t) = \log[\pi_{ij}(t)/\{1 - \pi_{ij}(t)\}]$, and from the fact that the mapping from $s_{ij}(t)$ to $\pi_{ij}(t)$ is a one-to-one continuous increasing function.
This ensures that our specification is sufficiently flexible to characterize any true generating process, and hence can be viewed as nonparametric given sufficiently flexible priors for the components.
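The constructive argument above is easy to check numerically: since the diagonal is not modeled, it can be shifted until the matrix is positive semidefinite, and the coordinates read off from the eigendecomposition. The sketch below is our own illustration of the proof, assuming $\mu = 0$ and $H = V$; the helper name is hypothetical:

```python
import numpy as np

def latent_coords_for(S):
    """Find X with X @ X.T matching S on the off-diagonal (mu = 0, H = V).

    The diagonal of S is irrelevant in the model, so we shift it until the
    matrix is positive semidefinite, then take X = eigvecs * sqrt(eigvals).
    """
    V = S.shape[0]
    lam_min = np.linalg.eigvalsh(S).min()
    S_psd = S + max(0.0, -lam_min) * np.eye(V)  # off-diagonals untouched
    w, Q = np.linalg.eigh(S_psd)
    w = np.clip(w, 0.0, None)                   # guard tiny negative rounding
    return Q * np.sqrt(w)                       # scales column h by sqrt(w_h)

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
S = (A + A.T) / 2                               # arbitrary symmetric matrix
X = latent_coords_for(S)
R = X @ X.T
off = ~np.eye(4, dtype=bool)                    # mask of off-diagonal entries
```

The reconstruction `R` agrees with `S` on every off-diagonal entry, which is exactly the claim of Theorem 2.2.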
2.3 Prior Specification
We aim to specify independent prior distributions $\Pi_X$ and $\Pi_\mu$ for $\{X(t)\}$ and $\{\mu(t)\}$ in order to induce a prior $\Pi_\pi$ for $\{\pi(t)\}$ through (2) and (3). This prior is carefully defined to have large support, favor simple and efficient computation, allow missing values, induce a continuous time specification, and allow learning of the latent space dimension. Bhattacharya & Dunson (2011) proposed a useful approach for Bayesian learning of the number of latent factors in a model for a single large covariance matrix, and we extend their approach from independent Gaussian latent factors to Gaussian process latent factors. In particular, we let
$$x_{ih}(\cdot) \sim \mathrm{GP}(0, \tau_h^{-1} c_X),$$
independently for all $i = 1, \ldots, V$ and $h = 1, \ldots, H$, with squared exponential correlation function $c_X(t, t') = \exp\{-\kappa_X (t - t')^2\}$, which allows for continuous time analysis and unequal spacing, and a shrinkage parameter $\tau_h$ defined as
$$\tau_h = \prod_{l=1}^{h} \vartheta_l, \qquad \vartheta_1 \sim \mathrm{Ga}(a_1, 1), \quad \vartheta_l \sim \mathrm{Ga}(a_2, 1),\ l \geq 2,$$
with the $\vartheta_l$ independent. Note that if $a_2 > 1$ the expected value of $\vartheta_l$ is greater than $1$. As a result, as $h$ goes to infinity, $\tau_h$ tends to infinity in expectation, shrinking the coordinate processes $x_{ih}(\cdot)$, for every $i$, towards zero. This leads to a flexible prior for $X(t)$ with shrinkage parameters that favor many latent coordinate processes being close to zero as $h$ increases. To conclude the prior specification we choose
$$\mu(\cdot) \sim \mathrm{GP}(0, c_\mu), \qquad c_\mu(t, t') = \exp\{-\kappa_\mu (t - t')^2\}.$$
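The multiplicative shrinkage construction can be sketched as follows; function names and hyperparameter values are our own illustrative choices:

```python
import numpy as np

def sample_tau(H, a1, a2, rng):
    """Draw the column shrinkage weights tau_h = prod_{l<=h} theta_l,
    with theta_1 ~ Ga(a1, 1) and theta_l ~ Ga(a2, 1) for l >= 2."""
    theta = np.concatenate([rng.gamma(a1, 1.0, size=1),
                            rng.gamma(a2, 1.0, size=H - 1)])
    return np.cumprod(theta)

def expected_tau(H, a1, a2):
    """E[tau_h] = a1 * a2**(h-1), since the theta_l are independent
    gamma variables with unit rate."""
    return a1 * a2 ** np.arange(H)

rng = np.random.default_rng(1)
tau = sample_tau(10, a1=2.0, a2=2.0, rng=rng)
```

With $a_2 > 1$, $E(\tau_h)$ grows geometrically in $h$, so the prior variance $\tau_h^{-1}$ of the $h$-th coordinate process decays and unused dimensions are effectively switched off.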
Before proceeding with posterior computation, we focus on the support of the induced prior $\Pi_\pi$ based on the priors $\Pi_X$ and $\Pi_\mu$. Specifically, we are interested in proving whether the prior can generate a time-varying symmetric probability matrix that is arbitrarily close to any continuous function $\pi^0 : \mathcal{T} \to \mathcal{P}$. Intuitively, large support on continuous symmetric similarity matrix functions relies on the continuity of the Gaussian process coordinate functions. Since for each fixed $t$ the coordinate vectors $x_i(t)$ are independently Gaussian distributed, $X(t)X(t)^{\mathsf{T}}$ is distributed according to a sum of independent Wishart random variables. Combining the large support of the Wishart distribution with that of the Gaussian prior for the baseline $\mu(t)$ provides large support for the induced prior on $S(t)$. Since $\pi(t)$ is obtained via a one-to-one continuous increasing function of $S(t)$, non-null probability subsets of the space of $S$ map into non-null probability subsets of the space of $\pi$, providing the desired large support for the induced prior $\Pi_\pi$. Theorem 2.3 states the large support property for $S$, while Corollary 2.3 provides the same property for $\pi$ by combining the results in the previous Theorem with the fact that $\pi$ is a monotone increasing continuous function of $S$. The proof of Theorem 2.3 is provided in the Appendix.
Theorem 2.3. Let $\Pi_S$ denote the induced prior on $\{S(t), t \in \mathcal{T}\}$ based on the specified priors $\Pi_X$ and $\Pi_\mu$. Assuming $\mathcal{T}$ compact, for all continuous $S^0 : \mathcal{T} \to \mathcal{S}$ and for all $\epsilon > 0$,
$$\Pi_S\Big\{\sup_{t \in \mathcal{T}} \max_{i > j} \lvert s_{ij}(t) - s^0_{ij}(t)\rvert < \epsilon\Big\} > 0.$$

Corollary 2.3. Let $\Pi_\pi$ denote the induced prior on $\{\pi(t), t \in \mathcal{T}\}$ based on the specified priors $\Pi_X$ and $\Pi_\mu$. Assuming $\mathcal{T}$ compact, for all continuous $\pi^0 : \mathcal{T} \to \mathcal{P}$ and for all $\epsilon > 0$,
$$\Pi_\pi\Big\{\sup_{t \in \mathcal{T}} \max_{i > j} \lvert \pi_{ij}(t) - \pi^0_{ij}(t)\rvert < \epsilon\Big\} > 0.$$
Proof. Since the elements of $\pi(t)$ are defined as a one-to-one continuous mapping of the elements of $S(t)$ through the logistic function $g$, by definition of continuity, for every $\epsilon > 0$ there exists an $\eta > 0$ such that $\max_{i>j}\lvert g\{s_{ij}(t)\} - g\{s^0_{ij}(t)\}\rvert < \epsilon$ for all $S(t)$ such that $\max_{i>j}\lvert s_{ij}(t) - s^0_{ij}(t)\rvert < \eta$, where $g$ is applied to every element of the matrix. Finally, since by Theorem 2.3 the event $\{\sup_{t \in \mathcal{T}} \max_{i>j} \lvert s_{ij}(t) - s^0_{ij}(t)\rvert < \eta\}$ has non-null probability, it follows that the same holds for the event $\{\sup_{t \in \mathcal{T}} \max_{i>j} \lvert \pi_{ij}(t) - \pi^0_{ij}(t)\rvert < \epsilon\}$, completing the proof.
3 Posterior computation
Posterior computation is performed by adapting a recently proposed data augmentation scheme based on a new class of Pólya-Gamma distributions; for a detailed description see Polson et al. (2013). The approach provides a strategy for fully Bayesian inference in models with binomial likelihoods, which bypasses the need for analytic approximations, while allowing us to exploit conjugacy for block updating.
The main result is that binomial likelihoods parameterized by log-odds can be represented as mixtures of Gaussians with respect to Pólya-Gamma distributions. Specifically,
$$\frac{(e^{\psi})^{a}}{(1 + e^{\psi})^{b}} = 2^{-b} e^{\kappa \psi} \int_{0}^{\infty} e^{-\omega \psi^{2}/2} \, p(\omega) \, d\omega,$$
where $\kappa = a - b/2$ and $\omega \sim \mathrm{PG}(b, 0)$, with $\mathrm{PG}(b, c)$ denoting the Pólya-Gamma random variable with parameters $b > 0$ and $c \in \Re$. When $\psi$ is a linear predictor and a Gaussian prior is considered for its coefficients, full conditional conjugacy is ensured for Bayesian inference on the coefficients. Moreover, the implied conditional distribution for $\omega$, given the coefficients, is again Pólya-Gamma, providing a simple Gibbs sampler alternating between two main steps. Specifically, letting $y_i$ be the number of successes out of $n_i$ trials and $x_i$ the vector of regressors for every observation $i = 1, \ldots, N$, and assuming a Bayesian logistic regression setting with linear predictor $x_i^{\mathsf{T}}\beta$ and Gaussian prior $\beta \sim N(b, B)$, the Gibbs sampler alternates between
$$\omega_i \mid \beta \sim \mathrm{PG}(n_i, x_i^{\mathsf{T}}\beta), \qquad \beta \mid y, \omega \sim N(m_{\omega}, V_{\omega}),$$
where $V_{\omega} = (X^{\mathsf{T}} \Omega X + B^{-1})^{-1}$ and $m_{\omega} = V_{\omega}(X^{\mathsf{T}} \kappa + B^{-1} b)$, with $\kappa = (y_1 - n_1/2, \ldots, y_N - n_N/2)^{\mathsf{T}}$ and $\Omega$ the diagonal matrix with the $\omega_i$'s as entries.
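The two-step sampler above can be sketched for a generic Bayesian logistic regression. This is our own illustration, not the paper's full network sampler; it draws the Pólya-Gamma variables via the truncated sum-of-gammas representation of Polson et al. (2013), which is only approximate (exact samplers are available, e.g. in the pypolyagamma package):

```python
import numpy as np

def sample_pg(b, c, rng, K=200):
    """Approximate draw from PG(b, c) via the truncated infinite
    convolution of gammas; exact only as K -> infinity."""
    k = np.arange(1, K + 1)
    g = rng.gamma(b, 1.0, size=K)
    return (g / ((k - 0.5) ** 2 + (c / (2 * np.pi)) ** 2)).sum() / (2 * np.pi ** 2)

def gibbs_logistic(y, n, X, b0, B, iters, rng):
    """Polya-Gamma Gibbs sampler for Bayesian logistic regression.

    y, n : successes and trials per observation; X : design matrix;
    b0, B : Gaussian prior mean and covariance for beta.
    """
    p = X.shape[1]
    beta = np.zeros(p)
    Binv = np.linalg.inv(B)
    kappa = y - n / 2.0
    draws = np.empty((iters, p))
    for it in range(iters):
        psi = X @ beta
        # Step 1: augmented data omega_i | beta ~ PG(n_i, x_i' beta)
        omega = np.array([sample_pg(n[i], psi[i], rng) for i in range(len(y))])
        # Step 2: beta | y, omega ~ N(m, V) by conjugacy
        V = np.linalg.inv(X.T @ (omega[:, None] * X) + Binv)
        m = V @ (X.T @ kappa + Binv @ b0)
        beta = rng.multivariate_normal(m, V)
        draws[it] = beta
    return draws

# Tiny synthetic run (sizes and true coefficients are made up).
rng = np.random.default_rng(2)
N = 30
Xd = np.column_stack([np.ones(N), rng.normal(size=N)])
probs = 1.0 / (1.0 + np.exp(-(Xd @ np.array([0.5, -1.0]))))
y = rng.binomial(1, probs).astype(float)
n = np.ones(N)
draws = gibbs_logistic(y, n, Xd, b0=np.zeros(2), B=10 * np.eye(2),
                       iters=20, rng=rng)
```

Each sweep costs one Pólya-Gamma draw per observation plus one multivariate normal draw, which is the structure inherited by the model-specific sampler below.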
Recalling model (1), with probabilities defined as in (2) and latent similarities from (3), for $i = 2, \ldots, V$, $j = 1, \ldots, i-1$ and $t \in \{t_1, \ldots, t_T\}$, and taking a fixed truncation level $H$ for the number of latent coordinates, the Gibbs sampler for our model is:
1. Update each augmented datum from its full conditional Pólya-Gamma posterior, $\omega_{ij}(t) \mid - \sim \mathrm{PG}\{1, s_{ij}(t)\}$, for every $i = 2, \ldots, V$, $j = 1, \ldots, i-1$ and $t \in \{t_1, \ldots, t_T\}$.
2. Given the data, the augmented variables and the latent coordinates, the Pólya-Gamma data augmentation scheme ensures a full conditional Gaussian posterior for $\mu = \{\mu(t_1), \ldots, \mu(t_T)\}^{\mathsf{T}}$, of the form $\mu \mid - \sim N(m_{\mu}, V_{\mu})$, with
$$V_{\mu} = \Big(K_{\mu}^{-1} + \mathrm{diag}\Big\{\sum_{i>j}\omega_{ij}(t_1), \ldots, \sum_{i>j}\omega_{ij}(t_T)\Big\}\Big)^{-1}, \qquad m_{\mu} = V_{\mu}\, \eta,$$
where $\eta$ has entries $\eta_t = \sum_{i>j}\{y_{ij}(t) - 1/2 - \omega_{ij}(t)\, x_i(t)^{\mathsf{T}} x_j(t)\}$ and $K_{\mu}$ is the Gaussian process covariance matrix with entries $c_{\mu}(t_k, t_l)$.
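The Gaussian/GP conjugate update underlying this step can be sketched generically: a GP-distributed vector combined with the heteroscedastic Gaussian pseudo-likelihood implied by the Pólya-Gamma augmentation. The kernel parameters and pseudo-data values below are made up for illustration:

```python
import numpy as np

def se_kernel(times, kappa):
    """Squared exponential GP correlation, c(t, t') = exp(-kappa (t - t')^2)."""
    d = times[:, None] - times[None, :]
    return np.exp(-kappa * d ** 2)

def gp_gaussian_update(K, omega_bar, eta):
    """Conjugate update for a GP vector under the augmented likelihood:
    posterior covariance V = (K^-1 + diag(omega_bar))^-1, mean V @ eta."""
    V = np.linalg.inv(np.linalg.inv(K) + np.diag(omega_bar))
    return V @ eta, V

times = np.linspace(0.0, 1.0, 5)
K = se_kernel(times, kappa=10.0) + 1e-8 * np.eye(5)  # jitter for stability
omega_bar = np.full(5, 3.0)   # summed augmented precisions per time point
eta = np.array([1.0, 0.5, 0.0, -0.5, -1.0])          # summed pseudo-data
m, V = gp_gaussian_update(K, omega_bar, eta)
```

The posterior covariance stays symmetric positive definite, so a draw from the full conditional is a standard multivariate normal sample.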
3. Update the time-varying latent coordinate vector for every unit from its conditional posterior. Specifically, conditioning on $\mu$, the augmented data and the coordinates of the other units, let $y_{(i)}$ be the vector obtained by stacking the sub-vectors $\{y_{ij}(t_1), \ldots, y_{ij}(t_T)\}^{\mathsf{T}}$ for all the pairs $(i, j)$ such that $j < i$ or $j > i$, with $j \neq i$, and $\pi_{(i)}$ the corresponding vector of probabilities; then
$$\mathrm{logit}\{\pi_{(i)}\} = \mu_{(i)} + \tilde{X}_{(i)}\, x_{(i)}, \qquad (5)$$
where $x_{(i)}$ is the $TH$-dimensional vector stacking the coordinate processes $\{x_{ih}(t_1), \ldots, x_{ih}(t_T)\}$ for $h = 1, \ldots, H$, with prior $x_{(i)} \sim N(0, K_X)$ according to the GP formulation, and $\tilde{X}_{(i)}$ is a matrix of regressors with entries suitably chosen from the elements of $\{x_j(t), j \neq i\}$ in order to reproduce the equality
$s_{ij}(t) = \mu(t) + x_i(t)^{\mathsf{T}} x_j(t)$ for all the probabilities $\pi_{ij}(t)$ such that $j < i$ or $j > i$, with $j \neq i$ and $t \in \{t_1, \ldots, t_T\}$. Model (5) is a proper logistic regression with linear predictor; therefore, according to our Pólya-Gamma sampling scheme, we update the vector of time-varying coordinates $x_{(i)}$ by sampling from
$$x_{(i)} \mid - \sim N(m_i, V_i), \qquad V_i = \big(K_X^{-1} + \tilde{X}_{(i)}^{\mathsf{T}} \Omega_{(i)} \tilde{X}_{(i)}\big)^{-1}, \qquad m_i = V_i\, \tilde{X}_{(i)}^{\mathsf{T}}\{\kappa_{(i)} - \Omega_{(i)}\, \mu_{(i)}\},$$
where $\kappa_{(i)}$ stacks the quantities $y_{ij}(t) - 1/2$ and $\Omega_{(i)}$ is the diagonal matrix with the corresponding Pólya-Gamma augmented data.
4. Conditioned on $\{X(t)\}$ and $\{\vartheta_l, l \neq h\}$, sample the shrinkage hyperparameters from their full conditional gamma distributions
$$\vartheta_h \mid - \sim \mathrm{Ga}\Big\{a_h + \frac{V T (H - h + 1)}{2},\ 1 + \frac{1}{2}\sum_{l = h}^{H} \tau_l^{(h)} \sum_{i=1}^{V} x_{il}^{\mathsf{T}} K_X^{-1} x_{il}\Big\},$$
where $a_h = a_1$ for $h = 1$ and $a_h = a_2$ for $h \geq 2$, $x_{il} = \{x_{il}(t_1), \ldots, x_{il}(t_T)\}^{\mathsf{T}}$, $K_X$ is the GP correlation matrix with entries $c_X(t_k, t_l)$, and $\tau_l^{(h)} = \prod_{m=1, m \neq h}^{l} \vartheta_m$.
We can easily handle missing values by adding a further step imputing the unobserved binary similarities from their conditional distribution given the current state of the chain. Specifically:
5. Given $\pi_{ij}(t)$, sample each missing value $y_{ij}(t)$ from its conditional distribution $y_{ij}(t) \mid \pi_{ij}(t) \sim \mathrm{Bern}\{\pi_{ij}(t)\}$.
Step 5 also provides a strategy for predicting new outcomes. Specifically, if we are interested in making inference on a future matrix $Y_{t_{T+1}}$ given the observed similarity matrices $Y_{t_1}, \ldots, Y_{t_T}$, then we can simply perform the previous posterior computations adding to the observed dataset a new matrix $Y_{t_{T+1}}$ of missing values, and base inference on the predictive posterior distribution using the samples of the Markov chain for $\pi(t_{T+1})$.
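Step 5 amounts to one Bernoulli draw per missing entry at each Gibbs iteration; a minimal sketch with a helper of our own:

```python
import numpy as np

def impute_missing(Y, Pi, rng):
    """Fill NaN entries of a binary similarity matrix with Bernoulli draws
    from the current probability matrix, keeping the result symmetric.
    Diagonal entries are ignored, as in the model."""
    Y = Y.copy()
    V = Y.shape[0]
    for i in range(1, V):
        for j in range(i):
            if np.isnan(Y[i, j]):
                Y[i, j] = Y[j, i] = rng.binomial(1, Pi[i, j])
    return Y

rng = np.random.default_rng(3)
Pi = np.full((3, 3), 0.9)                      # current draw of pi(t)
Y = np.array([[np.nan, 1.0, np.nan],
              [1.0, np.nan, np.nan],
              [np.nan, np.nan, np.nan]])       # partially observed Y_t
Y_imp = impute_missing(Y, Pi, rng)
```

Appending an entirely missing matrix at time $t_{T+1}$ and collecting these draws across iterations yields samples from the posterior predictive distribution.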
4 Simulation Study
We provide a simulation study with the aim of evaluating the performance of the proposed model in analyzing a dataset constructed to mimic a possible generating process in the finance application. The focus is on the ability to correctly reconstruct the true underlying processes, and on out-of-sample predictive performance. We also provide a comparison between our proposed approach and the estimated probability process for each time-varying binary outcome when using only temporal information, without exploiting the matrix structure, showing graphically the sub-optimality of the latter in terms of efficiency and bias.
4.1 Estimating Performance
We generate a set of time-varying matrices $Y_t$ with $t$ in a discrete set of equally spaced times. Each $Y_t$ is simulated according to (1), with probabilities obtained from (2) and (3), generating the baseline $\mu(t)$ from a GP with squared exponential correlation function and choosing time-varying latent coordinates from GPs with a common length scale, independently for each unit. To evaluate the out-of-sample predictive performance we take the last matrix to be entirely missing, and further assume the similarities between two selected units and all the others to be missing at a subset of times, to assess the behavior with respect to missing data. For inference we fix a truncation level $H$ and the GP length scales, and set the hyperparameters of the shrinkage prior. We ran enough Gibbs iterations to reach convergence and discarded an initial burn-in. Mixing was assessed by analyzing the effective sample sizes of the MCMC chains for the quantities of interest (i.e. the probability processes $\pi_{ij}(t)$) after burn-in; most of these values were a large fraction of the total number of retained samples, providing a good mixing result.
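A data-generating sketch in the spirit of this simulation follows; all sizes, length scales and the jitter term are our own illustrative choices:

```python
import numpy as np

def simulate_dynamic_network(V, H, times, kappa_mu, kappa_x, rng):
    """Simulate binary similarity matrices from the generative model (1)-(3):
    GP baseline, GP latent coordinates, logistic link, Bernoulli edges."""
    def se(k):
        d = times[:, None] - times[None, :]
        return np.exp(-k * d ** 2) + 1e-8 * np.eye(len(times))  # jittered SE kernel
    T = len(times)
    mu = rng.multivariate_normal(np.zeros(T), se(kappa_mu))
    # X[i, h, :] is the trajectory of coordinate h for unit i.
    X = rng.multivariate_normal(np.zeros(T), se(kappa_x), size=(V, H))
    Y = np.zeros((T, V, V))
    for t in range(T):
        S = mu[t] + X[:, :, t] @ X[:, :, t].T       # similarities, equation (3)
        P = 1.0 / (1.0 + np.exp(-S))                # probabilities, equation (2)
        upper = rng.binomial(1, P)                  # Bernoulli draws, model (1)
        Yt = np.triu(upper, 1)                      # keep one triangle, no diagonal
        Y[t] = Yt + Yt.T                            # symmetrize
    return Y, mu, X

rng = np.random.default_rng(4)
times = np.linspace(0.0, 1.0, 6)
Y, mu, X = simulate_dynamic_network(V=5, H=2, times=times,
                                    kappa_mu=5.0, kappa_x=5.0, rng=rng)
```

Holding out the last slice of `Y` as missing reproduces the prediction exercise described above.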
The comparison in Figure 2 between the true probability matrices and their corresponding posterior means at selected times highlights the good performance of our approach in correctly estimating the true latent process and making predictions. The latter can be seen by comparing true and estimated probability matrices at the held-out time, recalling that in our simulation the last matrix was entirely missing and we were interested in the predictive performance of the model. Similar results are provided by the plot of true probabilities against the corresponding estimates and by the ROC curve in Figure 3.
Figure 4 shows a graphical comparison between the inferential performance of our model for some selected mean and probability trajectories (top), and the results when the mean process and probability process are estimated with the same settings but using only the time series of the corresponding binary outcomes, without borrowing information across the network (bottom). The sub-optimality of the independent approach is apparent in terms of both bias (over-smoothed trajectories) and variance (larger highest posterior density intervals). When the network structure is taken into account, the model provides accurate estimates, with posterior distributions rapidly concentrating around the true processes, while accurately selecting the dimension of the latent space. In particular, we find that the estimated weights start at 0.8 and 0.7 for the first and second latent factors, respectively, but then drop to small values for the later factors. This implies that these later factor trajectories are quite flat and have limited influence. Borrowing information across the network over time has the additional advantage of reducing hyperparameter sensitivity, in particular with respect to the length scale in the GP prior; we obtain similar results under different length scale choices in sensitivity analyses.
5 Application to co-movements among National Stock Market Indices
National Stock Indices are technical tools constructed by synthesizing numerous data on the evolution of the various stocks, and represent important indicators of the financial condition of a given country. Modeling co-variations among these quantities, and in general among assets, is a fundamental issue in many financial applications, such as the Arbitrage Pricing Theory (APT) of Ross (1976) and the Capital Asset Pricing Model (CAPM) developed by Sharpe (1964); correlations or covariances among asset returns are the typical measures of co-movement employed in this framework.
A rich literature is available on modeling dynamic covariance or correlation matrices, covering multivariate generalizations of ARCH and GARCH models (see e.g. Tsay, 2005, Engle, 2002, Alexander, 2001, Bollerslev et al., 1988), stochastic volatility models (Harvey et al., 1994) and recent Bayesian extensions (see e.g. Wilson & Ghahramani, 2010, Nakajima & West, 2012, Durante et al., 2013). In this application, we instead provide a different and not fully explored measure of co-movement, exploiting the network structure among financial indices and giving directly the probability that co-movement occurs at a given time. This is accomplished by applying our model to the time-varying matrices $Y_t$ having entries $y_{ij}(t) = 1$ if index $i$ and index $j$ co-move at time $t$ (the indices are similar), and $y_{ij}(t) = 0$ if opposite increments are recorded (the indices are dissimilar).
We constructed $Y_t$ using the quarterly log-returns of the main National Stock Market Indices from 2004 to 2013 (with a final empty matrix used for prediction), available at http://finance.yahoo.com/, and applied model (1), with probabilities specified as in (2) and latent similarity measures obtained via the projection approach defined in (3). For posterior computation we ran the Gibbs sampler with a burn-in, fixing the truncation level and the GP length scales as in the simulation study. Similarly to the simulation study, most of the chains showed large effective sample sizes after burn-in, indicating good mixing. We find that the first two latent factors are the most informative, with the remaining latent processes concentrated near zero. A similar result was obtained in the seminal work of Fama & French (1993), which identified three main common risk factors in the returns of stocks.
5.1 Model Interpretation
The estimated trajectory of the baseline process, together with point-wise 0.95 highest posterior density intervals in Figure 5, provides important insights on overall financial market behavior, in agreement with other theories on financial crises (see, e.g., Baig and Goldfaijn, 1999, and Claessens & Forbes, 2009) and recent applications (Durante et al., 2013, Kastner et al., 2013). Increasing and persistent levels of the baseline process, inducing higher probabilities of co-movement, are recorded during the growth and burst of the USA housing bubble and the initial turmoil before the 2008 global financial crisis (A). This result provides empirical support for the increasing inter-connection among financial markets due to the proliferation of risky loans between 2004 and 2007, and the growing demand by foreign countries for financial assets built from the real estate market, such as residential mortgage-backed securities (RMBS) and collateralized debt obligations (CDO). As expected, the global financial crisis between late 2008 and end 2009 (B), and the subsequent Greek debt crisis together with the worsening of the European sovereign-debt crisis (C), are manifested through a further increase of the co-movement probabilities, highlighting a clear financial contagion effect.
Figure 6 shows the estimated (blue lines) and predicted (red lines) co-movement probability trajectories between the USA and some selected European countries, pointing out the good performance of the proposed model in adaptively learning the data structure, confirmed also by the corresponding ROC curve. It is worth noticing that the local adaptivity of the estimated trajectories is not due to over-parameterization of the model, since the shrinkage prior and the choice of small length scales in the GP covariance functions imply smooth trajectories and a parsimonious model formulation. Adaptivity is instead provided by the information borrowed within the financial network at each time $t$. Co-movement probabilities between the USA and Greece register a sharp drop at the onset of the Greek debt crisis, unlike those with other European countries such as Germany and France, which instead evolve along similar patterns. This result is reasonable, providing empirical evidence of the attempt to reduce inter-connection with a country in crisis.
Finally, Figure 7 provides interesting insights on the financial network structure among the countries under investigation. Specifically, we represent three different weighted networks, with weights given by the estimated co-movement probabilities averaged over the whole time window considered (a), over the period of the global financial crisis (b), and over the Greek debt crisis (c). Plot (a) shows a reasonable global network structure, with countries having similar financial economies most closely related to each other. As expected, Japan appears closer to Western economies than to Asian financial markets, while China has lower inter-connections with other countries. Stronger networks are estimated for European markets and the Asian Tigers. The international financial contagion effect is highlighted by strong inter-connections among all financial markets during the 2008 global financial crisis (b), with a still evident clustering effect, and with Greece already showing a slightly different behavior. Finally, when the network during the Greek debt crisis is analyzed, we register evidently low connections between Greece and almost all the other financial markets considered, and interestingly learn a strong network between Greece, Spain and Italy, the countries most affected by the European sovereign-debt crisis.
6 Discussion
We proposed a Bayesian nonparametric dynamic model for binary similarity matrices, borrowing information across time and the network structure of the data under investigation and allowing for dimensionality reduction. The model is constructed using latent similarity measures defined by the dot product of latent coordinate vectors, with entries evolving in continuous time via Gaussian process priors. The shrinkage hyperprior allows us to automatically learn the dimension of the latent space and ensures a parsimonious definition of the model, avoiding the risk of over-parameterization due to a higher number of latent features. The Pólya-Gamma data augmentation strategy allows us to define a simple and efficient Gibbs sampler for posterior computation based on full conditional conjugate posterior distributions, which is promising in terms of scaling to moderately large $V$, and easily handles missing values as well as forecasting problems. Scalability to large $V$ could instead be improved via stochastic differential equation models approximating the GP prior on the latent coordinate processes (Zhu and Dunson, 2012). We also provided theoretical results on the flexibility of the model, illustrated its performance via a simulation study, and obtained interesting insights on the network among financial markets during the recent crisis by applying the model to time-varying co-movement data.
Our model has a broad range of applicability, with dynamic social network analysis and time-varying binary evaluations among units providing two natural fields of application. Further directions of research could be devoted to the definition of similar models for discrete-valued dynamic matrices, which could provide useful tools for analyzing edge-valued dynamic social networks or datasets with comparisons among units expressed on a Likert scale.
- Airoldi et al. (2008) Airoldi, E.M., Blei, D.M., Fienberg, S.E., & Xing, E.P. (2008). Mixed Membership Stochastic Blockmodels. Journal of Machine Learning Research 9, 1981–2014.
- Alexander (2001) Alexander, C.O. (2001). Orthogonal GARCH. Mastering Risk 2, 21–38.
- Baig and Goldfaijn (1999) Baig, T., & Goldfaijn, I. (1999). Financial Market Contagion in the Asian Crisis. Staff Papers, International Monetary Fund 46, 167–195.
- Bhattacharya & Dunson (2011) Bhattacharya, A., & Dunson, D.B. (2011). Sparse Bayesian infinite factor models. Biometrika 98, 291–306.
- Bollen (1989) Bollen, K.A. (1989). Structural Equations with Latent Variables. Wiley.
- Bollerslev et al. (1988) Bollerslev, T., Engle, R.F., & Wooldridge, J.M. (1988). A capital-asset pricing model with time-varying covariances. Journal of Political Economy 96, 116–131.
- Claessens & Forbes (2009) Claessens, S., & Forbes, K. (2009) International Financial Contagion, An overview of the Issues. Springer.
- Cox & Cox (2001) Cox, T.F., & Cox, M.A. (2001). Multidimensional Scaling. II ed., Chapman and Hall.
- DeSarbo et al. (1999) DeSarbo, W.S., Kim, Y., & Fong D. (1999). A Bayesian multidimensional scaling procedure for the spatial analysis of revealed choice data. Journal of Econometrics 89, 79–108.
- DeSarbo & Hoffman (1987) DeSarbo, W.S., & Hoffman, D.L. (1987). Constructing MDS joint spaces from binary choice data: A new multidimensional unfolding threshold model for marketing research. Journal of Marketing Research 24, 40–54.
- Durante et al. (2013) Durante, D., Scarpa, B. & Dunson, D.B. (2013). Locally adaptive factor processes for multivariate time series. http://arxiv.org/abs/1210.2022.
- Engle (2002) Engle, R.F. (2002). Dynamic conditional correlation: a simple class of multivariate generalized autoregressive conditional heteroskedasticity models. Journal of Business & Economic Statistics 20, 339–350.
- Erdős & Rényi (1959) Erdős, P., & Rényi, A. (1959). On Random Graphs. Publicationes Mathematicae 6, 290–297.
- Fama & French (1993) Fama, E.F., & French, K.R. (1993). Common risk factors in the returns on stocks and bonds. Journal of Financial Economics 33, 3–56.
- Frank & Strauss (1986) Frank, O., & Strauss, D. (1986). Markov Graphs. Journal of the American Statistical Association 81, 832–842.
- Ghosh & Dunson (2009) Ghosh, J., & Dunson, D.B. (2009) Default priors and efficient posterior computation in Bayesian factor analysis. Journal of Computational and Graphical Statistics 18, 306–320.
- Guo et al. (2007) Guo, F., Hanneke S., Fu, W., & Xing, E.P. (2007). Recovering temporally rewiring networks: A model-based approach. In International Conference in Machine Learning.
- Handcock et al. (2003) Handcock, M.S., Robins, G.L., Snijders, T.A.B., Moody, J., & Besag, J. (2003). Assessing Degeneracy in Statistical Models of Social Networks. Working Paper, Center for Statistics and the Social Sciences, University of Washington.
- Harvey et al. (1994) Harvey, A.C., Ruiz E., & Shepard N. (1994). Multivariate stochastic variance models. Review of Economic Studies, 61, 247–264.
- Hoff et al. (2002) Hoff, P.D., Raftery, A.E., & Handcock, M.S. (2002). Latent Space Approaches to Social Network Analysis. Journal of the American Statistical Association 97, 1090–1098.
- Holbrook et al. (1982) Holbrook, M.B., Moore, W.L., & Winer, R.S. (1982). Constructing Joint Spaces from “Pick-Any” Data: A New Tool for Consumer Analysis. Journal of Consumer Research 9, 99–105.
- Holland & Leinhardt (1981) Holland, P.W., & Leinhardt, S. (1981). An Exponential Family of Probability Distributions for Directed Graphs. Journal of the American Statistical Association 76, 33–65.
- Ishiguro et al. (2010) Ishiguro, K., Iwata, T., Ueda, N. & Tenenbaum, J. (2010). Dynamic infinite relational model for time-varying relational data analysis. In Advances in Neural Information Processing Systems (NIPS).
- Jamali-Rad & Leus (2012) Jamali-Rad, H., & Leus, G. (2012). Dynamic Multidimensional Scaling for Low-Complexity Mobile Network Tracking. IEEE Transactions on Signal Processing 60, 4485–4491.
- Kastner et al. (2013) Kastner, G., Frühwirth-Schnatter, S., & Lopes, H.F. (2013). Analysis of Exchange Rates via Multivariate Bayesian Factor Stochastic Volatility Models. In Lanzarone, E., & Ieva, F. (eds.), The Contribution of Young Researchers to Bayesian Statistics, Proceedings of BAYSM2013, 63. Springer.
- Kemp et al. (2006) Kemp, C., Tenenbaum, J.B., Griffiths, T.L., Yamada, T., & Ueda, N. (2006). Learning systems of concepts with an infinite relational model. In Proceedings of the 21st National Conference on Artificial Intelligence.
- Nakajima & West (2012) Nakajima, J., & West, M. (2012). Dynamic factor volatility modeling: A Bayesian latent threshold approach. Journal of Financial Econometrics 11, 116–153.
- Nowicki & Snijders (2001) Nowicki, K., & Snijders, T.A.B. (2001). Estimation and prediction for stochastic blockstructures. Journal of the American Statistical Association 96, 1077–1087.
- Oh & Raftery (2007) Oh, M.S., & Raftery, A.E. (2007). Model-based clustering with dissimilarities: A Bayesian approach. Journal of Computational and Graphical Statistics 16, 559–585.
- Oh & Raftery (2001) Oh, M.S., & Raftery, A.E. (2001). Bayesian Multidimensional Scaling and choice of dimension. Journal of the American Statistical Association 96, 1031–1044.
- Polson et al. (2013) Polson, N.G., Scott J.G., & Windle J. (2013). Bayesian inference for logistic models using Polya-Gamma latent variables. http://arxiv.org/abs/1205.0310.
- Robins & Pattison (2001) Robins, G.L., & Pattison, P.E. (2001). Random graph models for temporal processes in social networks. Journal of Mathematical Sociology 25, 5–41.
- Ross (1976) Ross, S. (1976). The arbitrage theory of capital asset pricing. Journal of Economic Theory 13, 341–360.
- Sarkar et al. (2007) Sarkar, P., Siddiqi, S.M., & Gordon, G.J. (2007). A latent space approach to dynamic embedding of co-occurrence data. In Proceedings of the 11th International Conference on Artificial Intelligence and Statistics (AI-STATS).
- Sarkar & Moore (2005) Sarkar, P., & Moore, A.W. (2005). Dynamic social network analysis using latent space models. In Advances in Neural Information Processing Systems (NIPS).
- Sharpe (1964) Sharpe, W. (1964). Capital asset prices: a theory of market equilibrium under conditions of risk. Journal of Finance 19, 425–442.
- Snijders (2002) Snijders, T.A.B. (2002). Markov Chain Monte Carlo Estimation of Exponential Random Graph Models. Journal of Social Structure 2, 1–40.
- Strauss & Ikeda (1990) Strauss, D., & Ikeda, M. (1990). Pseudolikelihood Estimation for Social Networks. Journal of the American Statistical Association 85, 204–212.
- Tsay (2005) Tsay, R.S. (2005). Analysis of Financial Time Series. II ed., Wiley.
- Wilson & Ghahramani (2010) Wilson, A.G., & Ghahramani, Z. (2010). Generalised Wishart Processes. http://arxiv.org/abs/1101.0240.
- Xing et al. (2010) Xing, E.P., Fu, W., & Song, L. (2010). A State-Space Mixed Membership Blockmodel for Dynamic Network Tomography. The Annals of Applied Statistics 4, 535–566.
- Xu & Hero (2013) Xu, S.K., & Hero III, A.O. (2013). Dynamic stochastic blockmodels: Statistical models for time-evolving networks. http://arxiv.org/abs/1304.5974.
- Yang et al. (2011) Yang, T.B., Chi, Y., Zhu, S.H., Gong, Y.H., & Jin, R. (2011). Detecting communities and their evolutions in dynamic social networks–a Bayesian approach. Machine Learning 82, 157–189.
- Zhu & Dunson (2012) Zhu, B., & Dunson, D.B. (2012). Locally Adaptive Bayes Nonparametric Regression via Nested Gaussian Processes. http://arxiv.org/abs/1201.4403.
Proof of Theorem 2.3: Since is compact, for every there exists an open covering of -balls with a finite subcover such that , where . Then:
Define . Since
we only need to look at each -ball independently, as follows:
where the first inequality follows from repeated applications of the triangle inequality, and the second from the fact that each of these terms corresponds to an independent event. We evaluate each of these terms in turn.
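The displayed bound referred to above is not reproduced here. As a sketch only, under hypothetical notation in which $\pi(\cdot)$ denotes the trajectory induced by the prior, $\pi_0(\cdot)$ the true trajectory, and $t_h$ the center of the $h$-th ball $B_h$ of the finite subcover, the triangle-inequality step typically takes the form:

```latex
\sup_{t \in B_h} \lvert \pi(t) - \pi_0(t) \rvert
  \;\le\; \sup_{t \in B_h} \lvert \pi(t) - \pi(t_h) \rvert
  \;+\; \lvert \pi(t_h) - \pi_0(t_h) \rvert
  \;+\; \sup_{t \in B_h} \lvert \pi_0(t_h) - \pi_0(t) \rvert,
```

so that a positive lower bound on the probability of the sup-norm event follows from bounding each of the three terms separately on each ball.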
By the continuity of , for all , there exists an such that:
Given the GP prior on the elements of and letting , the equation
represents a finite sum of pairwise products of almost surely continuous functions (recalling the GP assumption on the elements ), and thus results in a matrix with elements almost surely continuous on . Therefore is almost surely continuous on , since the baseline is itself almost surely continuous under the GP prior assumption. Hence, arguing as before, for all , there exist and such that:
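The equation referred to in this step is not reproduced here. As a hedged sketch with hypothetical notation, if $X(t)$ denotes the latent coordinate matrix whose elements $X_{ih}(\cdot)$ are assigned independent GP priors, the quadratic term has entries

```latex
\bigl[X(t)\,X(t)^{\mathsf{T}}\bigr]_{ij} \;=\; \sum_{h=1}^{H} X_{ih}(t)\,X_{jh}(t),
```

a finite sum of products of almost surely continuous sample paths, and hence itself almost surely continuous on the compact domain.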
To examine the last term, first note that: