The quantitative study of the structure, dynamics, and function of real-world networks is an emerging interdisciplinary field that spans nearly every domain of science. Generative models, in which we define a structured probability distribution over all graphs, are a powerful and increasingly popular approach for studying real-world networks. Work on generative models now spans machine learning, statistics, the social sciences, ecology, and statistical physics. However, as a result of their different traditions and goals, these communities have developed a range of different generative models, including latent position models, block models, and feature models. Furthermore, these communities use different methods to learn such models from data, including frequentist, Bayesian, and nonparametric methods. Despite this large and diverse ecology of approaches, formulating a coherent understanding of whether and how these models are related remains an open challenge (notwithstanding several thoughtful efforts [4, 5, 6, 7]). We focus on the general framework of modeling conditionally independent edges between vertices with latent vertex attributes.
Here, we survey and organize this diversity of models and methods, and then introduce a unified view of the field. This view positions the three main classes of generative models for networks as special cases of a single coherent framework, positions the three major approaches to learning these models as different philosophical choices, and yields new insights from translation, such as theoretical properties, interpretation, and new inference algorithms. We close with a discussion of the challenges and opportunities this view presents for the field.
2 A walking tour of existing models
The following brief tour of generative models for networks illustrates the large diversity of approaches, as well as the different traditions used across disciplines. Despite different points of origin and assumptions for many of these models, we emphasize their underlying similarity throughout this tour of models. We note that dynamic networks, multiplex networks, networks with metadata, or networks produced from physical processes can often be modeled in a straightforward manner using the model classes described below.
The first class of models is the latent space models. The canonical latent space model was introduced in statistics and mathematical sociology by Hoff and colleagues. In this model, vertices take latent positions in a k-dimensional space and edges are generated with a probability that depends only on the Euclidean distance between vertex positions. Subsequently, Hoff introduced a more flexible construction called “the eigenmodel” that captures both the latent space model and latent block models. In general, latent space models assume latent continuous vertex attributes with edge probabilities given by a distance function on those attributes. Moreover, by relaxing the continuous (or Euclidean) space assumption and allowing edge probabilities to be determined by a generic function of those attributes, we can recover as special cases each of the two model classes discussed below. Thus, latent space models effectively subsume all generative models for networks.
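To make the construction concrete, the following is a minimal sketch of a Hoff-style latent space model: vertices receive latent positions in a k-dimensional Euclidean space, and each edge appears independently with a probability that decays with the distance between its endpoints, via a logistic link. All parameter choices (n, k, the intercept alpha, the Gaussian prior on positions) are illustrative assumptions, not the specification from any particular paper.

```python
import numpy as np

def sample_latent_space_graph(n=50, k=2, alpha=1.0, seed=0):
    """Sample an undirected graph from a simple latent space model.

    Edge probability is logistic in (alpha - distance), so nearby
    vertices connect with high probability (assortative by construction).
    """
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(n, k))  # latent positions in R^k
    # pairwise Euclidean distances between latent positions
    dist = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    p = 1.0 / (1.0 + np.exp(-(alpha - dist)))  # logistic link
    A = (rng.random((n, n)) < p).astype(int)
    A = np.triu(A, 1)
    A = A + A.T  # symmetrize: simple undirected graph, no self-loops
    return z, A

z, A = sample_latent_space_graph()
```

Because connection probability decreases monotonically with distance, graphs sampled this way exhibit the strong assortativity discussed below.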
In the canonical latent space model, connection probabilities are assumed to decrease with latent distance, which induces strong assortativity in the resulting networks (like vertices connect with like). This pattern implies that vertices can be clustered by their latent positions. It is then natural to define a hierarchical model that explicitly represents such clusters within the latent space model. However, homophily can be a misleading assumption for many real networks, which sometimes exhibit ordered (status-oriented) or disassortative patterns (unlike vertices connect with unlike). Disassortativity in particular is common in many biological, technological, and economic networks. For example, the probabilistic niche model from ecology is a general latent space model that can naturally produce assortative, ordered, or disassortative latent clustering by dissociating the sending and receiving latent positions of each vertex.
Finally, exponential random graph models in the sociological literature give the network likelihood an exponential-family form, typically as a function of network statistics; whenever they employ latent variables, they also fall within the latent space model class [12, 6].
Our second class is the block models, which use a latent space defined by categorical variables rather than continuous variables: vertices take positions in a discrete latent space of k blocks, where k is typically much smaller than the number of vertices. These models are often used to cluster vertices into roughly homogeneous groups or classes, a task akin to “community detection” in network science [13, 8]. Block models are by far the most well-explored and broadly used class of generative models for networks.
Block models originated in mathematical sociology [14, 15, 16] and represent active, albeit somewhat disconnected, areas of research in sociology, machine learning, and statistical physics [6, 7, 8]. In a block model, vertices can belong to either exactly one latent block (hard clustering) or have partial memberships in multiple blocks (overlapping or mixed membership). Vertices within a block are stochastically equivalent, meaning any pair of vertices in a given block have the same probabilities of connecting to all other vertices in the network, and block interactions describe a coarse-graining of the network interactions. If vertices can have mixed membership, vertices instead take latent positions in the simplex over the k blocks. In nonparametric variations, k is first sampled; otherwise it is fixed in advance [6, 17]. The simplest block model is the Erdős-Rényi random graph, in which k = 1. Larger models can exhibit a rich variety of block-level patterns, including assortative, disassortative, core-periphery, ordered, bipartite structure and overlapping groups. Furthermore, block models can be extended to directly model other structural information, including degree heterogeneity, edge weights, social status, or a growing number of blocks [6, 17, 21]. It is worth noting that block models include topic models as a special case.
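A hard-clustering stochastic block model can be sketched in a few lines: each vertex draws a single block label, and each edge appears independently with a probability determined only by the block pair of its endpoints (stochastic equivalence). The block-interaction matrix B below is an illustrative assortative choice; a disassortative or core-periphery pattern is obtained simply by changing B.

```python
import numpy as np

def sample_sbm(n, pi, B, seed=0):
    """Sample an undirected graph from a k-block stochastic block model.

    pi: length-k vector of block membership probabilities.
    B:  k x k matrix of block-pair edge probabilities.
    """
    rng = np.random.default_rng(seed)
    z = rng.choice(len(pi), size=n, p=pi)  # hard block assignments
    P = B[z][:, z]                          # per-pair edge probabilities
    A = (rng.random((n, n)) < P).astype(int)
    A = np.triu(A, 1)
    A = A + A.T  # simple undirected graph
    return z, A

# Assortative example: dense within blocks, sparse between them
B = np.array([[0.30, 0.02],
              [0.02, 0.30]])
z, A = sample_sbm(100, pi=[0.5, 0.5], B=B)
```

Setting k = 1 (a 1x1 matrix B) recovers the Erdős-Rényi model mentioned above.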
The third class is the latent feature models. This class is a natural progression from latent blocks and explicitly highlights the connection to the canonical latent space model. Instead of dividing total membership into one or across a mixture of groups, latent feature models allow vertices to have arbitrarily many unique features, typically binary, so that vertex attributes are binary vectors of length k, where k may be unbounded. Machine learning offers a rich literature on latent feature models that extends naturally to relational data, i.e., networks. Edge probabilities are given as a function of a weighted sum of feature vectors: by allowing these weights to be positive or negative, latent feature models can allow vertex features to combine assortatively or disassortatively. Richer structure, including hierarchies on the features, can also be imposed [6, 22]. Alternatively, fixed-k (parametric) variations combine features with block models or in a regression framework.
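The weighted combination of features can be sketched as a bilinear form: with binary feature vectors z_i and a signed weight matrix W, edge probabilities are sigma(z_i^T W z_j), so positive weights make shared features assortative and negative weights make them disassortative. The sizes and the sigmoid link here are illustrative assumptions in the spirit of this model class, not a specific published parameterization.

```python
import numpy as np

def edge_probs(Z, W):
    """Edge probabilities from a latent feature model.

    Z: n x k binary feature matrix (row i = features of vertex i).
    W: k x k real-valued matrix of signed feature interactions.
    """
    logits = Z @ W @ Z.T          # bilinear combination of features
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid link

rng = np.random.default_rng(0)
Z = (rng.random((20, 4)) < 0.3).astype(float)  # sparse binary features
W = rng.normal(size=(4, 4))                    # positive and negative weights
P = edge_probs(Z, W)
```

Restricting each row of Z to a single active feature recovers a block model, which makes the progression from blocks to features explicit.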
A fourth class includes models defined on a latent hierarchical or tree-like space. Although such a latent structure may appear distinct from the previous model classes, these models of hierarchical structure are in fact a special subset of the other classes. For instance, we can directly impose a hierarchical organization on the latent block or feature models. Some models can be explicitly defined using an underlying tree structure, in which vertices in the network are leaves and common ancestors determine the probability of connection, as in the (nonparametric) hierarchical random graph model and its Bayesian and nonparametric variations [26, 27, 6]. Each of these models uses a piecewise-constant underlying space, or a discrete latent space model, in which ultrametric distances between vertices determine the probability of connection; the continuous version can be captured by the hyperbolic geometry of a graph [9, 30].
3 Philosophies, representations, and unifications
In order to learn a particular generative model using real network data, we must also choose a learning paradigm that defines the relationship between an observed network and the various parameters of the model (including complexity parameters like the latent dimension or the number of blocks). Here, we characterize the three major approaches for learning network models in terms of their different philosophical assumptions, and illustrate how moving a model between paradigms can yield insights into theory and interpretation, and shed new light on both practical and efficient application of these models to network data.
Generative models for networks can be defined under parametric paradigms, whether frequentist or Bayesian, or under the Bayesian nonparametric paradigm. (Frequentist nonparametric methods are available as well, although relatively uncommon in this context; some recent papers have a bit of this flavor [20, 30, 31, 32].) Different communities usually favor one paradigm over the others, which generally reflects differences in traditions and goals as much as it reflects any objective rationale. As popular as hierarchical Bayesian models are in machine learning, frequentist perspectives still dominate the ecology, economics, and statistical physics literatures, often with an emphasis on interpretability. Some network models have appeared in multiple communities, either as independent rediscoveries under differing paradigms or as intentional adaptations from one paradigm to another, although this is not always obvious. There is thus much to be gained by unifying various network models within a single paradigm of learning from data.
Past efforts at unification have generally focused on placing large classes of models within a common representation and restricted set of philosophical choices. For instance, the eigenmodel captures both latent space and block models, which can also be represented within a generalized linear model schema
. Another general approach is matrix estimation, which can be applied broadly to networks in their adjacency matrix representation, and is in the tradition of nonparametric graph limit (graphon) estimation [33, 34, 35, 36, 37].
Bayesian nonparametric models are also increasingly popular, particularly in the machine learning community, and many of the models previously described can be unified under this framework [6, 28, 37]. Placing generative models for networks under the Bayesian nonparametric umbrella implies a particular method of model selection (which is effectively subsumed within the inference step). Bayesian nonparametric models allow the structural parameters to change, sometimes dramatically so, as more data are observed, e.g., with an increasing number of blocks or features. In practice, this means we begin with an infinite-dimensional space and move to a finite representation. Practical application also requires a choice of tractable priors, though the consequences of this choice remain unclear, e.g., for consistency and practical performance. On the other hand, efforts under the Bayesian nonparametric framework have successfully led to novel machinery for modeling, inference, and theory [38, 28, 36].
To illustrate the more general theoretical perspective to be gained from this broad view of learning network models, we explore recent work defining the underlying theoretical basis for generative models for networks via the graphon. This approach is not without its limitations, however, and we then explore a related but distinct approach based on continuous processes.
3.1 Unification under the graphon
A reasonable and desirable property of generative models for networks is that they should not depend on the order in which we observe data, i.e., that our models are exchangeable. Network data, presented as a (random) adjacency matrix, requires joint exchangeability, so that row and column identities in the matrix are jointly preserved under permutations of the data. (Bipartite networks and feature data require separable exchangeability, as row and column identities need not be related; joint exchangeability is a special case that notably requires symmetry of the graphon.) Exchangeability as a requirement leads to representation theorems. For exchangeable sequences, de Finetti’s theorem implies that they can be represented by an underlying i.i.d. mixture of random variables. For a jointly exchangeable adjacency matrix, the Aldous-Hoover theorem implies the existence of a random measurable function, given in the following manner: for each vertex i, draw u_i uniformly at random from the unit interval [0,1]. There then exists a function W : [0,1]^2 -> [0,1] such that each adjacency matrix entry A_ij is an independent Bernoulli draw with probability W(u_i, u_j). We call this function W the graphon. Equivalently, the graphon is the limit object of a sequence of graphs [39, 40, 41], which can then in theory be directly estimated nonparametrically [33, 35, 36, 34, 37].
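The Aldous-Hoover construction translates directly into a sampling procedure: draw a uniform u_i per vertex, then connect each pair independently with probability W(u_i, u_j). The particular graphon below (connection probability decaying with |u_i - u_j|) is an illustrative assortative choice, not a canonical one.

```python
import numpy as np

def sample_from_graphon(W, n, seed=0):
    """Sample an n-vertex undirected graph from a graphon W.

    W must accept broadcastable arrays x, y in [0,1] and return
    edge probabilities W(x, y) in [0,1].
    """
    rng = np.random.default_rng(seed)
    u = rng.random(n)                      # latent uniforms, one per vertex
    P = W(u[:, None], u[None, :])          # edge probabilities W(u_i, u_j)
    A = (rng.random((n, n)) < P).astype(int)
    A = np.triu(A, 1)
    A = A + A.T  # simple undirected graph
    return A

# Example graphon: probability is high when u_i and u_j are close
A = sample_from_graphon(lambda x, y: 0.9 * np.exp(-3.0 * np.abs(x - y)), n=100)
```

A constant graphon W(x, y) = p recovers Erdős-Rényi, and a piecewise-constant graphon recovers a stochastic block model, which is one way to see how the earlier model classes fit under this representation.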
From this perspective, exchangeability for network data implies the existence of a latent variable generative model, and furthermore that in such a model, edges are conditionally independent. The latent variable models discussed above can be justified under this framework.
However, an important consequence of the graphon construction is that graphs will be almost surely dense (Theta(n^2) edges) or trivially empty [28, 41]. That is, exchangeability of a network model implies that we are in the regime of dense networks. This is problematic because most real-world networks, in fact, are sparse (vertices have constant or very slowly growing degree; graphs have o(n^2) edges). Most of the models discussed in our walking tour fall within the jointly exchangeable framework of Aldous-Hoover. This would seem to imply that all generative models for networks are misspecified, despite their many successful practical applications. One solution to this fundamental problem is to abandon exchangeability in favor of alternative properties, or to attempt to escape the Aldous-Hoover representation entirely.
3.2 Unification under continuous space
To move beyond the graphon, Caron and Fox lift the representation of discrete adjacency matrices into a continuous space. Under this formulation, graphs are generated purely as a point process. By representing a graph as a sample from a continuous-time process, the Caron-Fox formulation sidesteps the dense-graph implication of the Aldous-Hoover approach while still preserving joint exchangeability. Instead of a random measurable function on the unit square, this approach implies, due to Kallenberg, a representation as a mixture of random functions.
This alternative approach is promising, as it presents the possibility that generative models for networks are not all misspecified, so long as they can be reformulated as point process models for networks. To demonstrate this approach, Caron and Fox describe several choices of parametrization that correspond to the Erdős-Rényi model, the graphon model, and the configuration model for random graphs with a specified degree sequence. Each vertex has a latent weight parameter corresponding to sociability, which is analogous to degree “propensity” in the popular degree-corrected stochastic block model. This parameterization connects their model explicitly to the family of physical models with specified degree distribution.
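The degree-corrected stochastic block model referenced above makes the "propensity" idea concrete: each vertex i carries a weight theta_i, and the expected number of edges between i and j is theta_i * theta_j * omega_{z_i z_j}, with Poisson-distributed edge counts in the spirit of Karrer and Newman. The heavy-tailed propensities and the omega matrix below are illustrative assumptions.

```python
import numpy as np

def sample_dcsbm(z, theta, omega, seed=0):
    """Sample an undirected multigraph from a degree-corrected SBM.

    z:     length-n block labels.
    theta: length-n degree propensities.
    omega: k x k block-interaction rates.
    Edge counts between i and j are Poisson with mean
    theta_i * theta_j * omega[z_i, z_j].
    """
    rng = np.random.default_rng(seed)
    rate = np.outer(theta, theta) * omega[z][:, z]  # expected edge counts
    A = rng.poisson(rate)
    A = np.triu(A, 1)
    A = A + A.T  # undirected; multi-edges allowed, no self-loops
    return A

rng = np.random.default_rng(2)
z = np.repeat([0, 1], 50)                       # two equal-sized blocks
theta = rng.pareto(2.5, size=100) + 1.0          # heterogeneous propensities
theta = theta / theta.mean()                     # normalize for identifiability
omega = np.array([[0.50, 0.05],
                  [0.05, 0.50]])
A = sample_dcsbm(z, theta, omega)
```

Setting all theta_i equal recovers the ordinary stochastic block model, while a single block with heterogeneous theta behaves like a configuration-style model with a specified expected degree sequence.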
Introducing higher-order structure, such as community, ordered, or hierarchical structure, has not yet been accomplished, and will be a natural and productive extension of these models. Hierarchical models based on the point process model seem likely to yield new theoretical insights and inference algorithms. On the other hand, more is currently known about the graphon and the pathologies of discrete graph models than about the continuous-space approach. Thus, there may be as-yet-unknown difficulties lurking within the point process model. One immediate issue is that this model only yields graphs with superlinear density; that is, it does not model extremely sparse graphs (O(n) edges for n vertices). It will be crucial to assess the practical implications of these models to understand if, when, and how they break down on real-world network data.
4 Challenges and opportunities
Moving models between learning paradigms has already been a productive source of new ways to model networks. However, without a broader interest in the application of these models or in the underlying theoretical questions about networks, such model development risks being primarily a fluency exercise in philosophical bases and notation. The broad view of generative models for networks described above reveals a number of challenges and opportunities for the field, many of which are directly related to moving between representations and translating insights across fields.
The interpretability of model parameters, model structure, and their relationship to network data are crucial for inference about real-world complex systems. Identifying interpretable underlying structure is a strong motivation for many of the applications of network models in the sciences. For instance, the structure of social networks is generally believed to be driven by latent spaces, including geography, socioeconomic status, and popularity. And models of biological networks generally seek structural “modules” that are meaningfully related to biological function, e.g., species with similar feeding patterns in food webs, the different disease phenotypes of the malaria parasite, or proteins with similar cellular functions [25, 18, 2].
Models motivated by specific applications can be useful for understanding how to interpret the different patterns of large-scale network organization encoded by different models. Extremely general models often have limited interpretability, and thus also limited utility for scientific applications. That is, we tend to trade off between model interpretability and model generality (not necessarily model complexity). For instance, approximating the graphon with Gaussian processes is quite general, but such a model cannot easily produce useful insights about the underlying mechanisms that generated any particular network. Thus, an opportunity and a challenge is the adaptation of structural metaphors from applied domains to general models, allowing us to leverage the corresponding interpretations, e.g., social group structure for block models or ecological niche structure for latent space models [7, 11].
Assortativity is the tendency, in observational data, for connected vertices to have similar attributes. Many generative models for networks assume that similarity between vertices is what generates edges (a process called homophily), and conversely, that we can infer vertex similarity directly from the structure of a network. That is, these models assume homophily is the underlying generative mechanism. However, assortativity also occurs when vertices become more similar as a result of an existing connection (a process sometimes called “social contagion,” although that name has unfortunate connotations). In particular, influence can also produce assortativity, and few generative models allow for both mechanisms. Deciding which of the two governs a particular system is sometimes scientifically attractive, but it is also generally statistically impossible.
Observed or latent attributes can drive assortativity directly, as in homophily, but homophily and structural similarity are easily confounded as well. (There remains another idea not to conflate: detecting assortativity is not the same as predicting, given some vertex features, that similar vertices will be connected. Assortativity only guarantees that, given an edge, vertex features are likely to be similar.) Furthermore, vertex attributes and network structure may be unrelated. Fosdick and Hoff designed a framework to first test for dependence between vertex attributes and network structure, and then model network structure and attributes jointly. Even then, dependence between attributes and latent network structure is not causality. Latent positions correlated with known metadata may be correlated with the true causal mechanism, e.g., jointly caused by a true but unknown mechanism. Although latent space models do not solve the correlation-causation problem, they can be a useful way to instrumentalize these processes.
In the opposite direction, we can leverage patterns of assortativity to make useful predictions. Recommender systems are built on this notion, where we can infer profiles of interests given a user profile. In sociology, for instance, McCormick and Zheng use the latent space model to model the distribution of latent social positions of partially unobserved populations. They use these distributions to infer demographic profiles of underrepresented populations, i.e., in a setting where such assumptions about generalizability and assortativity may be justified.
Practical model selection for models of network data remains an open challenge. This includes finding the dimensionality of the latent space or the number of blocks or features: different choices can easily introduce a linear number of new parameters. A number of methods have been applied, including AIC and BIC (both of which are known to fail in our context); MDL [49, 50, 19, 51, 10]; and likelihood ratios. Bayesian nonparametric models move the model selection task inside the model, where dimensionality becomes a parameter to be inferred. In general, it is unclear when these methods fail. Potentially inappropriate assumptions made during inference (e.g., assortativity of latent blocks) and a lack of effective validation complicate this already non-trivial challenge.
Even if our favorite model selection technique succeeds, how do we know if the latent variable representation we have inferred is reasonable? One answer is metadata: for example, if the recovered latent space corresponds to vertex information that was excluded from the model. On the other hand, using unsupervised methods to find patterns that correlate with ‘ground truth’ can be problematic, depending on how we validate and interpret the inferred vertex distribution and latent space.
Simulation-based model checking can be used to further probe the reasonableness of our models, that is, their goodness of fit [53, 54]. Model checking can then expand beyond link prediction: e.g., by measuring the structural similarity of resampled graphs to the original data. This manner of model checking is also reminiscent of approximate Bayesian computation (ABC), which avoids intractable likelihood computations by comparing summary properties of data resampled from the model. Model checking in this manner can be used further to compare approximate inference methods. Finally, model checking provides a robust critical method for investigating hypotheses.
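The resampling idea above can be sketched with a deliberately simple stand-in model: fit only an overall edge density (an Erdős-Rényi fit), resample graphs from it, and compare a structural statistic of the resamples (here, degree variance) against the observed value. The choice of fitted model and statistic are illustrative assumptions; in practice one would resample from the fitted latent variable model and compare several statistics.

```python
import numpy as np

def degree_variance(A):
    """Variance of the degree sequence of adjacency matrix A."""
    return A.sum(axis=1).var()

def check_statistic(A_obs, n_resamples=200, seed=0):
    """Tail probability of the observed degree variance under an ER fit."""
    rng = np.random.default_rng(seed)
    n = A_obs.shape[0]
    p_hat = A_obs[np.triu_indices(n, 1)].mean()  # fitted edge density
    stats = []
    for _ in range(n_resamples):
        A = (rng.random((n, n)) < p_hat).astype(int)
        A = np.triu(A, 1)
        A = A + A.T
        stats.append(degree_variance(A))
    stats = np.array(stats)
    # fraction of resamples at least as extreme as the observed statistic
    return (stats >= degree_variance(A_obs)).mean()

# Synthetic "observed" network for illustration
obs_rng = np.random.default_rng(1)
A_obs = (obs_rng.random((60, 60)) < 0.1).astype(int)
A_obs = np.triu(A_obs, 1)
A_obs = A_obs + A_obs.T
tail_prob = check_statistic(A_obs)
```

A tail probability near 0 or 1 flags a statistic the fitted model fails to reproduce, which is exactly the kind of structural misfit that link prediction alone would miss.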
Model checking also points the way to a potential method for comparing networks, which is currently an open challenge for working with network data: we lack a sensible general framework with which to compare networks. For networks suspected to have come from the same generative model, one can resample networks and compare them to the networks of interest. (Even this is dependent on the model specification, however. Exponential random graph models, for example, lack consistency under subsampling.) If the graphon is used directly, one would be able to compare networks of different sizes, but this requires estimating the graphon itself such that samples can be compared. To sidestep this issue, Asta and Shalizi recently proposed a nonparametric graph comparison method based on hyperbolic latent space models.
Models for networks suffer from dependencies among data and model parameters, and tend to scale poorly or suffer from strong degeneracies. MCMC methods remain popular across domains, despite poor scalability [7, 12, 11, 8]. Variational methods and belief propagation (message passing) have become popular in both statistical physics and machine learning (e.g., [2, 56, 19, 8]), including recent extensions to stochastic variational inference. Variational methods are much faster, but the costs their mean-field approximation (independence assumptions) imposes on these models are not well understood. In addition, a number of algorithms make assumptions that cannot be widely true, such as strictly assortative clustering. Belief propagation, likewise, is very efficient but necessarily approximate and only suitable for (tree-like) sparse graphs. Pseudolikelihood methods have been developed for latent feature models, block models, and the latent space model. Currently, Bayesian nonparametric models tend to suffer during optimization and generally scale poorly, but we expect this to be an active area moving forward.
Model use and accessibility
Making new models relevant to the wider communities using network analysis requires making them easily accessible in language, construction, and implementation. One challenge already troubling inference and analysis for these models comes from the inherent symmetries in the model construction. Translational and reflectional symmetries, as well as other forms of non-identifiability, are a practical issue for comparing and combining point estimates and posterior probabilities (i.e., for model selection and comparison). These symmetries contribute to the ruggedness of the likelihood space, which further challenges the validity of variational assumptions for model inference. This, together with the challenges already discussed, complicates automatic, rapid model construction and evaluation, as in probabilistic programming. Sometimes non-identifiability can be resolved simply with a canonical (but arbitrary) setting, but even this can be theoretically non-trivial. Even if model-fitting tools become widely available, applied network analysis still generally lacks best practices for coping with model non-identifiabilities. To have meaningful impact, such developments will need to be pursued jointly by theoreticians and practitioners.
Generative models are a powerful and increasingly popular approach for understanding the structure of networks. They are also studied or used by researchers spanning a surprisingly diverse set of fields, including both method-oriented fields like machine learning and question-oriented fields like ecology and physics. This diversity has produced a broad variety of models and a highly fragmented literature. It has also slowed the development of a coherent understanding of the theoretical basis and practical applications of these models. Most of the specific generative models developed across these literatures, however, can be viewed as special cases of a general latent space model, where the latent space may have certain restrictions or characteristics, combined with one of three learning paradigms for working with actual data. This view is not a grand unified mathematical theory, but it does provide a coherent understanding of how different models in different fields are related. This view also highlights a number of interesting challenges and opportunities for future work on generative models for networks.
Even after connecting these perspectives, we still face the challenge of evaluating the theoretical and practical costs and benefits of different sets of modeling assumptions or choices over one another. For example, how can we understand what is being traded off when we use a latent block model under a Bayesian nonparametric learning paradigm versus a general latent space model under a frequentist learning paradigm? Even when pursuing the same basic goals, different communities of researchers and even different individuals can rationalize different modeling choices, e.g., which priors to use for regularization, how to manage model parameter uncertainty, or when to employ frequentist-style hypothesis testing.
At their core, many of these choices reflect fundamental beliefs about the nature of data, e.g., the choice of distance function or the choice of representation for class or feature memberships. Other choices reflect fundamental beliefs about the nature of models, such as believing that the number of parameters should grow with larger sample sizes or that domain knowledge should be incorporated into the expected distributions and relationships between variables. The costs of these choices are often unknown in part because the modeling goals are usually unstated, but the consequences can be quite real. For instance, what is the empirical cost of choosing mathematically tractable priors for Bayesian nonparametric models? How much more or less robust is low-level parametric modeling versus a general nonparametric approximation?
One of the genuine advantages of generative models for networks is that they explicitly encode our beliefs about both the nature of the data and the nature of our models, which allows us to carefully quantify parameter and model uncertainty and to interpret the results with respect to the data generating process. What we lack, however, is an effective, principled way to evaluate the costs and benefits of specific choices with respect to specific goals.
Making progress on this fundamental issue would shed considerable light on all aspects of generative models for networks. Connecting models to high-level representations, both for discrete random graphs [28, 7] and for continuous-space representations, provides an illustrative example of exploring these boundaries. Translating modeling techniques such as block structure and hierarchical structure to the point process model should provide exciting new ground for model development, borrowing interpretations and developments already established on simpler, discrete spaces and on physical, ecological, and social processes. Finally, jointly building theoretical foundations for these models while representing more complex structure in network data will help unite and benefit network science, in both applied and theoretical domains.
This work was supported by the US AFOSR and DARPA grant number FA9550-12-1-0432 (AZJ, AC) and the NSF Graduate Research Fellowship award number DGE 1144083 (AZJ).
-  P. D. Hoff, A. E. Raftery, and M. S. Handcock, “Latent space approaches to social network analysis,” Journal of the American Statistical Association, vol. 97, no. 460, pp. 1090–1098, 2002.
-  E. M. Airoldi, D. M. Blei, S. E. Fienberg, and E. P. Xing, “Mixed Membership Stochastic Blockmodels,” Journal of Machine Learning Research, vol. 9, pp. 1981–2014, 2008.
-  K. Miller, T. L. Griffiths, and M. I. Jordan, “Nonparametric latent feature models for link prediction,” NIPS, 2009.
-  P. D. Hoff, “Modeling homophily and stochastic equivalence in symmetric relational data,” NIPS, 2007.
-  A. C. Thomas, Hierarchical Models for Relational Data. PhD thesis, Harvard University, 2009.
-  M. N. Schmidt and M. Mørup, “Nonparametric Bayesian Modeling of Complex Networks,” IEEE Signal Processing Magazine, pp. 110–128, 2013.
-  A. Goldenberg, A. X. Zheng, S. E. Fienberg, and E. M. Airoldi, “A Survey of Statistical Network Models,” Found. Trends Mach. Learn., vol. 2, no. 2, pp. 129–233, 2010.
-  M. E. J. Newman, Networks: An Introduction. Oxford, UK: Oxford University Press, 2010.
-  D. Krioukov, F. Papadopoulos, M. Kitsak, A. Vahdat, and M. Boguñá, “Hyperbolic geometry of complex networks,” Phys. Rev. E, vol. 82, p. 036106, 2010.
-  M. S. Handcock, A. E. Raftery, and J. M. Tantrum, “Model-based clustering for social networks,” Journal of the Royal Statistical Society: Series A, vol. 170, no. 2, pp. 301–354, 2007.
-  R. J. Williams and D. W. Purves, “The probabilistic niche model reveals substantial variation in the niche structure of empirical food webs.,” Ecology, vol. 92, no. 9, pp. 1849–57, 2011.
-  G. Robins, P. Pattison, Y. Kalish, and D. Lusher, “An introduction to exponential random graph (p*) models for social networks,” Social Networks, vol. 29, no. 2, pp. 173–191, 2007.
-  B. Karrer and M. E. J. Newman, “Stochastic blockmodels and community structure in networks,” Physical Review E, vol. 83, no. 1, p. 016017, 2011.
-  P. W. Holland, K. B. Laskey, and S. Leinhardt, “Stochastic blockmodels: First steps,” Social Networks, vol. 5, no. 2, pp. 109–137, 1983.
-  Y. J. Wang and G. Y. Wong, “Stochastic Blockmodels for Directed Graphs,” Journal of the American Statistical Association, vol. 82, no. 397, pp. 8–19, 1987.
-  K. Nowicki and T. A. B. Snijders, “Estimation and Prediction for Stochastic Blockstructures,” Journal of the American Statistical Association, vol. 96, no. 455, pp. 1077–1087, 2001.
-  C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda, “Learning systems of concepts with an infinite relational model,” AAAI, 2006.
-  D. B. Larremore, A. Clauset, and A. Z. Jacobs, “Efficiently inferring community structure in bipartite networks,” Physical Review E, vol. 90, no. 1, p. 012805, 2014.
-  C. Aicher, A. Z. Jacobs, and A. Clauset, “Learning Latent Block Structure in Weighted Networks,” Journal of Complex Networks, to appear; arXiv:1404.0431, 2014.
-  B. Ball and M. E. J. Newman, “Friendship networks and social status,” Network Science, vol. 1, no. 01, pp. 16–30, 2013.
-  D. S. Choi, P. J. Wolfe, and E. M. Airoldi, “Stochastic blockmodels with a growing number of classes,” Biometrika, pp. 273–284, 2012.
-  K. Palla, D. Knowles, and Z. Ghahramani, “An infinite latent attribute model for network data,” ICML, 2012.
-  M. Kim and J. Leskovec, “Multiplicative Attribute Graph Model of Real-World Networks,” Internet Mathematics, vol. 8, no. 1-2, pp. 113–160, 2012.
-  P. D. Hoff, “Multiplicative latent factor models for description and prediction of social networks,” Computational & Mathematical Organization Theory, vol. 15, no. 4, pp. 261–272, 2009.
-  A. Clauset, C. Moore, and M. E. J. Newman, “Hierarchical structure and the prediction of missing links in networks,” Nature, vol. 453, no. 7181, pp. 98–101, 2008.
-  D. M. Roy, C. Kemp, V. K. Mansinghka, and J. B. Tenenbaum, “Learning annotated hierarchies from relational data,” NIPS, 2007.
-  D. M. Roy and Y. W. Teh, “The Mondrian Process,” NIPS, 2009.
-  P. Orbanz and D. M. Roy, “Bayesian Models of Graphs, Arrays and Other Exchangeable Random Structures,” IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1–25, 2014.
-  T. A. B. Snijders, “Statistical Models for Social Networks,” Annual Review of Sociology, vol. 37, no. 1, pp. 131–153, 2011.
-  D. Asta and C. R. Shalizi, “Geometric network comparison,” Preprint, arXiv:1411.1350, 2014.
-  S. Chatterjee, “Matrix estimation by Universal Singular Value Thresholding,” Annals of Statistics, to appear; arXiv:1212.1247.
-  S. C. Olhede and P. J. Wolfe, “Network histograms and universality of blockmodel approximation,” PNAS, vol. 111, p. 14722–14727, 2014.
-  J. J. Yang, Q. Han, and E. M. Airoldi, “Nonparametric estimation and testing of exchangeable graph models,” AISTATS, vol. 33, pp. 1060–1067, 2014.
-  P. J. Bickel, A. Chen, and E. Levina, “The method of moments and degree distributions for network models,” Annals of Statistics, vol. 39, no. 5, pp. 2280–2301, 2011.
-  P. J. Wolfe and S. C. Olhede, “Nonparametric graphon estimation,” Preprint, arXiv: 1309.5936, 2013.
-  P. J. Bickel and A. Chen, “A nonparametric view of network models and Newman–Girvan and other modularities,” Proc. Natl. Acad. Sci. USA, vol. 106, no. 50, pp. 21068–73, 2009.
-  J. R. Lloyd, P. Orbanz, Z. Ghahramani, and D. M. Roy, “Random function priors for exchangeable arrays with applications to graphs and relational data,” NIPS, 2012.
-  F. Caron and E. B. Fox, “Bayesian nonparametric models of sparse and exchangeable random graphs,” in NIPS Workshop on Frontiers in Network Analysis, 2014.
-  P. Diaconis and S. Janson, “Graph limits and exchangeable random graphs,” Rendiconti di Matematica, Serie VII, vol. 28, pp. 33–61, 2007.
-  T. Austin, “On exchangeable random variables and the statistics of large graphs and hypergraphs,” Probability Surveys, vol. 5, pp. 80–145, 2008.
-  L. Lovász, Large networks and graph limits. American Mathematical Society, 2012.
-  D. Liben-Nowell, J. Novak, R. Kumar, P. Raghavan, and A. Tomkins, “Geographic routing in social networks,” PNAS, vol. 102, no. 33, pp. 11623–11628, 2005.
-  N. Eagle, M. Macy, and R. Claxton, “Network diversity and economic development,” Science, vol. 328, no. 5981, pp. 1029–31, 2010.
-  C. R. Shalizi and A. C. Thomas, “Homophily and Contagion Are Generically Confounded in Observational Social Network Studies,” Sociological Methods & Research, vol. 40, no. 2, pp. 211–239, 2011.
-  B. K. Fosdick and P. D. Hoff, “Testing and Modeling Dependencies Between a Network and Nodal Attributes,” Preprint, arXiv:1306.4708, 2013.
-  U. von Luxburg, R. C. Williamson, and I. Guyon, “Clustering: Science or art?,” Journal of Machine Learning Research: W&CP, vol. 27, pp. 65–79, 2012.
-  T. H. McCormick and T. Zheng, “Latent space models for networks using Aggregated Relational Data,” tech. rep., University of Washington, 2013.
-  X. Yan, J. E. Jensen, F. Krzakala, C. Moore, C. R. Shalizi, L. Zdeborová, P. Zhang, and Y. Zhu, “Model selection for degree-corrected block models,” J Stat Mech: Theory and Experiment, pp. 1–13, 2014.
-  T. P. Peixoto, “Parsimonious module inference in large networks,” Phys Rev Lett, p. 148701, 2013.
-  T. P. Peixoto, “Model selection and hypothesis testing for large-scale network models with overlapping groups,” Preprint, arXiv:1409.3059, 2014.
-  J. Hofman and C. Wiggins, “Bayesian Approach to Network Modularity,” Phys Rev Lett, vol. 100, no. 25, p. 258701, 2008.
-  P. Gopalan, D. Mimno, S. M. Gerrish, M. J. Freedman, and D. M. Blei, “Scalable inference of overlapping communities,” NIPS, 2012.
-  D. R. Hunter, S. M. Goodreau, and M. S. Handcock, “Goodness of Fit of Social Network Models,” Journal of the American Statistical Association, vol. 103, no. 1, pp. 248–258, 2008.
-  A. Gelman and C. R. Shalizi, “Philosophy and the practice of Bayesian statistics,” British Journal of Mathematical and Statistical Psychology, vol. 66, no. 1, pp. 8–38, 2013.
-  C. R. Shalizi and A. Rinaldo, “Consistency under sampling of exponential random graph models,” Annals of Statistics, vol. 41, no. 2, pp. 508–535, 2013.
-  M. Salter-Townshend and T. B. Murphy, “Variational Bayesian inference for the Latent Position Cluster Model for network data,” Computational Statistics & Data Analysis, vol. 57, no. 1, pp. 661–671, 2013.
-  C. Reed, Submodular MAP Inference for Scalable Latent Feature Models. PhD thesis, University of Cambridge, 2013.
-  A. A. Amini, A. Chen, P. J. Bickel, and E. Levina, “Pseudo-likelihood methods for community detection in large sparse networks,” Annals of Statistics, vol. 41, no. 4, pp. 2097–2122, 2013.
-  A. E. Raftery, X. Niu, P. D. Hoff, and K. Y. Yeung, “Fast Inference for the Latent Space Network Model Using a Case-Control Approximate Likelihood,” Journal of Computational and Graphical Statistics, vol. 21, no. 4, pp. 901–919, 2012.
-  R. Nishihara, T. Minka, and D. Tarlow, “Detecting Parameter Symmetries in Probabilistic Models,” Preprint, arXiv:1312.5386, 2013.