
Testing for Externalities in Network Formation Using Simulation

We discuss a simplified version of the testing problem considered by Pelican and Graham (2019): testing for interdependencies in preferences over links among N (possibly heterogeneous) agents in a network. We describe an exact test which conditions on a sufficient statistic for the nuisance parameter characterizing any agent-level heterogeneity. Employing an algorithm due to Blitzstein and Diaconis (2011), we show how to simulate the null distribution of the test statistic in order to estimate critical values and/or p-values. We illustrate our methods using the Nyakatoke risk-sharing network. We find that the transitivity of the Nyakatoke network far exceeds what can be explained by degree heterogeneity across households alone.





A strategic network formation game with transfers

In this section we outline a simple model of strategic network formation where agents may make (bilateral) transfers to one another. Let u_i(D) be a utility function for agent i, which maps adjacency matrices D – equivalently networks – into utils. The marginal utility for agent i associated with (possible) edge (i, j) is

MU_ij(D) = u_i(D + ij) − u_i(D − ij),

where D − ij is the adjacency matrix associated with the network obtained after deleting edge (i, j) and D + ij the one obtained via link addition.

From Bloch & Jackson (2006), a network is pairwise stable with transfers if the following condition holds.

Definition 1.

(Pairwise stability with Transfers) The network D is pairwise stable with transfers if (i) D_ij = 1 implies MU_ij(D) + MU_ji(D) ≥ 0 and (ii) D_ij = 0 implies MU_ij(D) + MU_ji(D) < 0.

Definition 1 states that the (joint) marginal utility of all links actually present in a pairwise stable network is (weakly) positive, while that associated with links not present is negative. The definition presumes utility is transferable, since it only requires that the sum of the marginal utilities to i and j associated with edge (i, j) is positive. When the sum of the two marginal utilities is positive, there exists a within-dyad transfer such that both agents benefit. Note also that the definition is pairwise: the benefit of a link is evaluated conditional on the remaining network structure. It is possible, for example, that a coalition of players could increase their utility by jointly forming a set of links and making transfers to one another. Imposing more sophisticated play of this type would result in a refinement relative to the set of network configurations which satisfy Definition 1 (cf., Jackson, 2008).
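Definition 1 is mechanical to verify for any candidate network. The sketch below takes a user-supplied marginal utility function mu(i, j, D) (a hypothetical stand-in for MU_ij(D); the constant-utility example is purely illustrative) and checks both conditions of the definition:

```python
import numpy as np

def is_pairwise_stable_with_transfers(D, mu):
    """Check Definition 1: every present link has weakly positive joint
    marginal utility, every absent link strictly negative.
    D  : symmetric 0/1 adjacency matrix with a zero diagonal
    mu : mu(i, j, D) -> marginal utility to agent i of edge (i, j)
    """
    N = D.shape[0]
    for i in range(N):
        for j in range(i + 1, N):
            joint = mu(i, j, D) + mu(j, i, D)
            if D[i, j] == 1 and joint < 0:
                return False
            if D[i, j] == 0 and joint >= 0:
                return False
    return True

# Toy example: every agent gets utility 1 per link, so the complete
# network is pairwise stable and the empty network is not.
mu_const = lambda i, j, D: 1.0
N = 4
full = np.ones((N, N)) - np.eye(N)
empty = np.zeros((N, N))
print(is_pairwise_stable_with_transfers(full, mu_const))   # True
print(is_pairwise_stable_with_transfers(empty, mu_const))  # False
```

Because the check conditions on the rest of the network, it only verifies stability of a given configuration; it is not a solution method.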

In this chapter we will specialize to utility functions of the form

u_i(D) = Σ_j D_ij ( A_i + B_j + γ s_ij(D) − U_ij ),        (3)

with γ ≥ 0 and U_ij a pair-specific utility shifter. Here A_i and B_i capture agent-level degree heterogeneity (cf., Graham, 2017). If A_i is high, then the baseline utility associated with any link is high for agent i (agent i is an “extrovert”). If B_i is high, then agent i is a particularly attractive partner for all other agents (agent i is “popular”). We leave the joint distribution of A = (A_1, …, A_N)′ and B = (B_1, …, B_N)′ unrestricted in what follows.

The term s_ij(D) is associated with externalities in link formation. We require that s_ij(D) ≥ 0; additional restrictions might be needed to ensure the existence of a network that is pairwise stable with transfers and/or a test statistic with a non-degenerate null distribution.

Instead of formulating additional high-level conditions on s_ij(D), in what follows we emphasize, and develop results for, two specific examples. In the first s_ij(D) equals

s_ij(D) = Σ_{k ≠ i} D_jk,        (4)

which implies that agents receive more utility from links with popular (or high degree) agents.

The second example specifies s_ij(D) as

s_ij(D) = Σ_k D_ik D_jk,        (5)

which implies that dyads receive more utility from linking when they share other links in common. This is a transitivity effect.

When the utility function is of the form given in (3) the marginal utility agent i gets from a link with agent j is

MU_ij(D) = A_i + B_j + γ s_ij(D − ij) − U_ij.

Pairwise stability then implies that, conditional on the realizations of U = (U_ij), the heterogeneity terms (A, B), and the value of the externality parameter γ, the observed network must satisfy, for i = 1, …, N − 1 and j = i + 1, …, N,

D_ij = 1( ν_i + ν_j + γ s̃_ij(D) ≥ Ũ_ij ),        (6)

with ν_i = A_i + B_i, s̃_ij(D) = s_ij(D − ij) + s_ji(D − ij), and Ũ_ij = U_ij + U_ji.

Equation (6) defines a system of N(N − 1)/2 nonlinear simultaneous equations. Any solution to this system – and there will typically be multiple ones – constitutes a pairwise stable (with transfers) network. To make this observation a bit more explicit, similar to Miyauchi (2016), consider the mapping ψ(v):

ψ(v) = vech( [ 1( ν_i + ν_j + γ s̃_ij(vech⁻¹(v)) ≥ Ũ_ij ) ]_{i<j} ).        (7)

Under the maintained assumption that the observed network satisfies Definition 1, the observed adjacency matrix corresponds to a fixed point

vech(D) = ψ( vech(D) ).

Here vech(·) vectorizes the elements in the lower triangle of an N × N matrix and we define its inverse operator vech⁻¹(·) as creating a symmetric matrix with a zero diagonal. In addition to the observed network there may be other D′ such that vech(D′) = ψ(vech(D′)). The fixed point representation is useful for showing equilibrium existence as well as for characterization (e.g., using Tarski’s (1955) fixed point theorem).

For the two types of network formation externalities we consider, specified in equations (4) and (5) above, the mapping ψ(v) is weakly increasing in v for γ ≥ 0. This allows for the application of Tarski’s (1955) theorem, ensuring existence of at least one pairwise stable equilibrium.
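The fixed-point logic can be made concrete by iterating a monotone link-update map upward from the empty network until it stops changing; by Tarski's theorem the limit is a (least) pairwise stable configuration. In the sketch below the heterogeneity draws, the logistic taste shocks, and the value γ = 0.5 are all illustrative assumptions, and the common-neighbour count stands in for the transitivity-type externality:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
A = rng.normal(-1.0, 0.5, N)               # heterogeneity draws (assumed)
U = rng.logistic(size=(N, N))
U = np.triu(U, 1); U = U + U.T             # symmetric pair-specific shocks
gamma = 0.5                                # assumed externality strength

def link_update(D):
    """Recompute every link given the current network, using a
    transitivity-type externality (number of shared partners)."""
    shared = D @ D                         # shared[i, j] = common partners
    surplus = A[:, None] + A[None, :] + gamma * shared - U
    Dnew = (surplus >= 0).astype(int)
    np.fill_diagonal(Dnew, 0)
    return Dnew

# Iterate upward from the empty network. Because the update map is
# weakly increasing in D, the iterates are monotone and converge to
# the least fixed point, i.e. a pairwise stable network (Tarski).
D = np.zeros((N, N), dtype=int)
while not np.array_equal(link_update(D), D):
    D = link_update(D)
print("stable network found with", D.sum() // 2, "links")
```

Starting instead from the complete network and iterating downward would locate the greatest fixed point; the two need not coincide, which is the multiplicity problem noted above.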

Test formulation

Our goal is to assess the null hypothesis that γ = 0 relative to the alternative that γ > 0. The extension of what follows to two-sided tests is straightforward. A feature of our testing problem is the presence of a high-dimensional nuisance parameter in the form of the degree heterogeneity terms, ν = (ν_1, …, ν_N)′ with ν_i = A_i + B_i. Since the value of these terms may range freely under the null, our null hypothesis is a composite one.

The composite nature of the null hypothesis raises concerns about size control. Ideally our test will have good size properties regardless of the particular value of ν. Assume, for example, that the distribution of the ν_i is right-skewed. In this case we will likely observe high levels of clustering among high-ν_i agents. Measured transitivity in the network might be substantial even in the absence of any structural preference for transitive relationships. We want to avoid excessive rejection of our null hypothesis in such settings; we do so by varying the critical value used for rejection with the magnitude of a sufficient statistic for the ν_i.

A simple example helps to fix ideas. Under the null we have, for i = 1, …, N − 1 and j = i + 1, …, N,

Pr( D_ij = 1 | ν_i, ν_j ) = exp(ν_i + ν_j) / [1 + exp(ν_i + ν_j)],        (8)

which corresponds to the β-model of network formation (e.g., Chatterjee et al., 2011). Assume that ν_i = ν_H with probability λ and ν_i = −∞ with probability 1 − λ. In this simple model two “high” agents link with probability p = exp(2ν_H)/[1 + exp(2ν_H)], while low-to-low and low-to-high links never form. Some simple calculations give an overall density for this network of λ²p, and a (population) transitivity index of p. For small λ and/or large p, transitivity in this network may exceed density substantially even though there is no structural taste for transitive links among agents. Here the transitivity is entirely generated by high degree agents linking with one another with greater frequency and, only incidentally, forming triangles in the process. A simple comparison of density and transitivity in this case is uninformative.
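These calculations are easy to confirm by simulation. The sketch below draws one large network from this two-type model, with illustrative values λ = 0.2 and p = 0.8, and compares realized density and transitivity with λ²p and p respectively:

```python
import numpy as np

rng = np.random.default_rng(1)
N, lam, p = 400, 0.2, 0.8                 # illustrative parameter values

high = rng.random(N) < lam                # "high" types
# Links form with probability p between two high types, never otherwise.
M = (rng.random((N, N)) < p) & high[:, None] & high[None, :]
D = np.triu(M, 1).astype(int)
D = D + D.T

density = D.sum() / (N * (N - 1))
S = D @ D
triangles = np.trace(S @ D) / 6           # each triangle counted six times
two_stars = (S.sum() - np.trace(S)) / 2   # sum over agents of C(d_i, 2)
transitivity = 3 * triangles / two_stars

print(f"density      ~ {density:.3f}  (lambda^2 p = {lam**2 * p:.3f})")
print(f"transitivity ~ {transitivity:.3f}  (p = {p:.2f})")
```

Despite a density near 0.03, the transitivity index sits near 0.8: exactly the wedge the example describes, generated by degree heterogeneity alone.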

Motivated in part by this inferential challenge, as well as to exploit classic results on testing in exponential families (e.g., Lehmann & Romano, 2005, Chapter 4), it will be convenient in what follows to assume that the Ũ_ij are independent standard logistic random variables. Next let 𝒱 denote the subset of the N-dimensional Euclidean space in which ν = (ν_1, …, ν_N)′ is, a priori, known to lie.

Our null hypothesis is

H_0 : γ = 0, ν ∈ 𝒱,        (9)

since ν may range freely over 𝒱 under the null of no externalities in link formation (γ = 0). (There is an additional (implicit) nuisance parameter associated with equilibrium selection since, under the alternative, there may be many pairwise stable network configurations. We can ignore this complication for our present purposes, but see Pelican & Graham (2019) for additional discussion and details.) With a little manipulation we can show that, under (9), the probability of the event D = d takes the exponential family form

Pr( D = d; ν ) = exp( ν′ d_+(d) ) / Π_{i<j} [1 + exp(ν_i + ν_j)],

with d_+(d) = (d_1+(d), …, d_N+(d))′ equal to the degree sequence of the network.

Let 𝔻_d denote the set of all undirected adjacency matrices with degree sequence also equal to d_+(d) and |𝔻_d| denote the size, or cardinality, of this set. Under H_0 the conditional likelihood of D = d given d_+(D) = d_+(d) is

Pr( D = d | d_+(D) = d_+(d) ) = 1 / |𝔻_d|.

Under the null of no externalities all networks with identical degree sequences are equally probable. This insight will form the basis of our test.

Let T(D) be some statistic of the adjacency matrix D, say its transitivity index. We work with a (test) critical function of the form

φ(D) = 1 if T(D) > c_α; η if T(D) = c_α; 0 if T(D) < c_α.

We will reject the null if our statistic exceeds some critical value c_α, and accept it – or fail to reject it – if our statistic falls below this critical value. If our statistic exactly equals the critical value, then we reject with probability η. The critical value c_α, as well as the probability η, are chosen to set the rejection probability of our test under the null equal to α (i.e., to control size). In order to find the appropriate values of c_α and η we need to know the distribution of T(D) under the null.

Conceptually this distribution is straightforward to characterize, particularly if we proceed conditional on the degree sequence observed in the network in hand. Under the null all possible adjacency matrices with degree sequence d_+(d) are equally probable. The null distribution of T(D) therefore equals its distribution across all these matrices. By enumerating all the elements of 𝔻_d and calculating T(v) for each one, we could directly – and exactly – compute this distribution. In practice this is not (generally) computationally feasible. Even for networks that include as few as 10 agents, the set 𝔻_d may have millions of elements (see, for example, Table 1 of Blitzstein & Diaconis (2011)). Below we show how to approximate the null distribution of T(D) by simulation, leading to a practical method of finding critical values for testing.
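For very small networks direct enumeration is feasible and makes the null distribution concrete. The brute-force sketch below lists every adjacency matrix on six agents with a given degree sequence (here the 3-regular sequence that also appears in the cubic-graph example later in the text) and computes the exact distribution of the transitivity index across them:

```python
import itertools
import numpy as np

d_target = (3, 3, 3, 3, 3, 3)          # cubic degree sequence on six agents
N = len(d_target)
pairs = list(itertools.combinations(range(N), 2))

def transitivity(D):
    S = D @ D
    triangles = np.trace(S @ D) / 6    # each triangle counted six times
    two_stars = (S.sum() - np.trace(S)) / 2
    return 3 * triangles / two_stars if two_stars > 0 else 0.0

count, stats = 0, []
# Brute force over all 2^(N choose 2) graphs; feasible only for tiny N.
for bits in itertools.product((0, 1), repeat=len(pairs)):
    D = np.zeros((N, N), dtype=int)
    for (i, j), b in zip(pairs, bits):
        D[i, j] = D[j, i] = b
    if tuple(D.sum(axis=0)) == d_target:
        count += 1
        stats.append(transitivity(D))

print("|D_d| =", count)                # 70 labelled cubic graphs
print("transitivity values:", sorted(set(round(s, 3) for s in stats)))
```

Already with six agents the set has 70 elements; the combinatorial explosion for realistic N is why the simulation approach developed below is needed.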

If we could efficiently enumerate the elements of 𝔻_d we would find c_α by solving

(1/|𝔻_d|) Σ_{v ∈ 𝔻_d} 1( T(v) > c_α ) = α.        (10)

If there is no c_α for which (10) exactly holds, then we would instead find the smallest c_α such that the left-hand side of (10) falls below α and choose η to ensure correct size.

Alternatively we might instead calculate the p-value:

p = (1/|𝔻_d|) Σ_{v ∈ 𝔻_d} 1( T(v) ≥ T(d) ).        (11)

If this probability is very low, say less than 5 percent of all networks in 𝔻_d have a transitivity index larger than the one observed in the network in hand, then we might conclude that our network is “unusual” and, more precisely, that it is not a uniform random draw from 𝔻_d (our null hypothesis).

Below we show how to approximate the probabilities to the left of the equalities in (10) and (11) by simulation.

Similarity of the test

In our setting, a test φ(D) will have size α if its null rejection probability (NRP) is less than or equal to α for all values of the nuisance parameter:

sup_{ν ∈ 𝒱} E_ν[ φ(D); γ = 0 ] ≤ α.        (12)

Since ν is high dimensional, size control is non-trivial. This motivates, as we have done, proceeding conditionally on the degree sequence.

Let 𝒟_N be the set of all graphical degree sequences (see below for a discussion of “graphical” integer sequences). For each d_+ ∈ 𝒟_N our approach is equivalent to forming a test with the property that, for all ν ∈ 𝒱,

E[ φ(D) | d_+(D) = d_+ ; γ = 0 ] = α.

Such an approach ensures similarity of our test since, by iterated expectations,

E_ν[ φ(D); γ = 0 ] = E_ν[ E[ φ(D) | d_+(D); γ = 0 ] ] = α

for any ν ∈ 𝒱 (cf., Ferguson, 1967). By proceeding conditionally we ensure the NRP is unaffected by the value of the degree heterogeneity terms ν. Similar tests have proved to be attractive in other settings with composite null hypotheses (cf., Moreira, 2009).

Choosing the test statistic

Ideally the critical function is chosen to maximize the probability of correctly rejecting the null under particular alternatives of interest. It turns out that, because our network formation model is incomplete under the alternative (we have been silent about equilibrium selection), constructing tests with good power is non-trivial. In Pelican & Graham (2019) we show how to choose the critical function, or equivalently the statistic T(D), to maximize power against particular (local) alternatives. The argument is involved, so here we confine ourselves to a more informal development.

A common approach to choosing a test statistic, familiar from other applications of randomization testing (e.g., Cox, 2006, Chapter 3), is to proceed heuristically. This suggests, for example, choosing T(D) to be the transitivity index, or the support measure of Jackson et al. (2012), if the researcher is interested in “testing” for whether agents prefer transitive relationships.

A variation on this approach, inspired by the more formal development in Pelican & Graham (2019), is to set T(D) equal to

T(D) = Σ_{i<j} ( D_ij − p̂_ij ) s̃_ij(D),        (13)

with p̂_ij = exp(ν̂_i + ν̂_j)/[1 + exp(ν̂_i + ν̂_j)] and ν̂ the maximum likelihood estimate (MLE) of ν under the null. (Chatterjee et al. (2011) present a simple fixed point algorithm for computing this MLE; see also Graham (2017).) The intuition behind (13) is as follows: if it is positive, this implies that high values of the externality term, s̃_ij(D), are associated with links that have low estimated probability under the null (such that D_ij − p̂_ij is large). The conjunction of “surprising” links with large values of s̃_ij(D) is taken as evidence that γ > 0.
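The fixed point algorithm referenced above can be sketched compactly. The update below, β_i ← log d_i − log Σ_{j≠i} exp(β_j)/(1 + exp(β_i + β_j)), is our reading of the Chatterjee et al. (2011) iteration; the degree sequence used is illustrative:

```python
import numpy as np

def beta_model_mle(d, tol=1e-10, max_iter=5000):
    """Fixed-point iteration for the beta-model MLE (Chatterjee et al. style):
    beta_i <- log d_i - log sum_{j != i} exp(beta_j)/(1 + exp(beta_i + beta_j))."""
    d = np.asarray(d, dtype=float)
    beta = np.log(d / np.sqrt(d.sum()))      # crude starting value
    for _ in range(max_iter):
        E = np.exp(beta)
        M = E[None, :] / (1.0 + E[:, None] * E[None, :])
        np.fill_diagonal(M, 0.0)
        beta_new = np.log(d) - np.log(M.sum(axis=1))
        converged = np.max(np.abs(beta_new - beta)) < tol
        beta = beta_new
        if converged:
            break
    # Fitted link probabilities p_ij = exp(b_i + b_j)/(1 + exp(b_i + b_j)).
    P = np.exp(beta[:, None] + beta[None, :])
    P = P / (1 + P)
    np.fill_diagonal(P, 0.0)
    return beta, P

d = [2, 2, 3, 3, 2, 2]                       # illustrative degree sequence
beta_hat, P_hat = beta_model_mle(d)
print(np.round(P_hat.sum(axis=1), 6))        # should be close to d
```

At the fixed point the fitted expected degrees reproduce the observed degree sequence, which is the defining first-order condition of the MLE.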

Consider the transitivity example with s_ij(D) = Σ_k D_ik D_jk; statistic (13), with some manipulation, can be shown to equal

T(D) = 6 T_Δ(D) − 2 Σ_{i<j} p̂_ij Σ_k D_ik D_jk,        (14)

with T_Δ(D) the number of triangles in D. Statistic (14) is a measure of the difference between the actual number of triangles in the network and (a particular) expected triangle count computed under the null. To see this assume, as would be approximately true if the graph were an Erdös-Rényi one, that p̂_ij = p̂ for all i < j. Letting S_2(D) denote the number of two-stars in D, and recalling that the transitivity index equals 3T_Δ(D)/S_2(D), we would have

T(D) = 2 [ 3 T_Δ(D) − p̂ S_2(D) ].

The term in brackets equals the difference between the numerator of the transitivity index and its denominator times density. For an Erdös-Rényi graph this difference should be approximately zero. In the presence of degree heterogeneity, the second term to the right of the equality in (14) is proportional to a null-model-assisted count of the expected number of triangles in D. A rejection therefore occurs when many “surprising” triangles are present.

Before describing how to simulate the null distribution of T(D) we briefly recap. We start by specifying a sharp null hypothesis. Consider the network in hand with adjacency matrix d and corresponding degree sequence d_+(d). Our null hypothesis is that the observed network coincides with a uniform random draw from 𝔻_d (i.e., the set of all networks with identical degree sequences). This null is a consequence of the form of the transferable utility network formation game outlined earlier. The testing procedure is to compare a particular statistic of the network, say its transitivity index T(d), with its distribution across 𝔻_d. If the observed value of T(d) is unusually large we take this as evidence against our null.

Although we have motivated our test as one for strategic interactions or externalities, in actuality we are assessing the adequacy of a particular null model of network formation – namely the β-model. Our test may detect many types of violations of this model, albeit with varying degrees of power. Consequently we need to be careful about how we interpret a rejection in practice. At the same time, by choosing the test statistic with some care, we hope to generate good power to detect the violation of interest – that γ > 0 – and hence conclude that externalities in link formation are likely present when we reject.

Simulating undirected networks with fixed degree

This section describes an algorithm, introduced by Blitzstein & Diaconis (2011), for sampling uniformly from the set 𝔻_d. Our notation and exposition track those of Blitzstein & Diaconis (2011), albeit with less detail. As noted previously, direct enumeration of all the elements of 𝔻_d is generally not feasible. We therefore require a method of sampling from 𝔻_d uniformly and also, at least implicitly, estimating its size. The goal is to replace, for example, the exact p-value (11) with the simulation estimate

p̂ = (1/B) Σ_{b=1}^{B} 1( T(D_b) ≥ T(d) ),        (15)

where D_b is a uniform random draw from 𝔻_d and B denotes the number of independent simulation draws selected by the researcher.

Two complications arise. First, it is not straightforward to construct a random draw from 𝔻_d. Second, we must draw uniformly from this set. Fortunately the first challenge is solvable using ideas from the discrete math literature. Researchers in graph theory and discrete math have studied the construction of graphs with fixed degrees and, in particular, provided conditions for checking whether a particular degree sequence is graphical (e.g., Sierksma & Hoogeveen, 1991). We say that d is graphical if there is a feasible undirected network with degree sequence d. Not all integer sequences are graphical. The reader can verify, for example, that there is no feasible undirected network of three agents with degree sequence (1, 1, 1).

As for the second complication, although we cannot easily/directly construct a uniform random draw from 𝔻_d, we can use importance sampling (e.g., Owen, 2013) to estimate expectations with respect to this distribution.

The basic idea and implementation are due to Blitzstein & Diaconis (2011). A similar, and evidently independently derived, algorithm is presented in Del Genio et al. (2010). While computationally faster approaches are now available, we nevertheless present the method introduced by Blitzstein & Diaconis (2011) for its pedagogical value and easy implementation. Their approach is adequate for small- to medium-sized problems. Readers interested in applying the methods outlined below to large sparse graphs might consult Rao et al. (1996), McDonald et al. (2007) or Zhang & Chen (2013). Pelican & Graham (2019) introduce a more complicated MCMC simulation algorithm that holds additional graph statistics constant (besides the degree sequence). They also provide references to the fairly extensive literature on adjacency matrix simulation.

While our presentation of the Blitzstein & Diaconis (2011) algorithm is motivated by a particular formal testing problem, our view is that it is also useful for more informally finding “unusual” or “interesting” features of a given network. Are links more transitive than one would expect in networks with similar degree sequences? Is average path length exceptionally short? For this reason, the material presented below may also enter a researcher’s workflow during the data summarization or exploratory analysis stage.

The algorithm

A sequential network construction algorithm begins with a matrix of zeros and sequentially adds links to it until its rows and columns sum to the desired degree sequence. Unfortunately, unless the links are added appropriately, it is easy to get “stuck” (in the sense that at a certain point in the process it becomes impossible to reach a graph with the desired degrees and the researcher must restart the process). The paper by Snijders (1991) provides examples and discussion of this phenomenon.

As an example consider the graphical degree sequence (2, 2, 1, 1). If we begin with an empty graph and add an edge between agents 3 and 4, we will go from the degree sequence (2, 2, 1, 1) to a residual one of (2, 2, 0, 0). Unfortunately (2, 2, 0, 0) is not graphical. Adding more edges requires introducing self-loops or a double edge between agents 1 and 2, neither of which is allowed.

Intuitively we can avoid this phenomenon by first connecting high degree agents. Havel (1955) and Hakimi (1962) showed that this idea works for any graphical degree sequence.

Theorem 1.

(Havel-Hakimi) Let d be a degree sequence of length N and i an agent with d_i ≥ 1. If d does not have at least d_i positive entries other than d_i, it is not graphical. Assume this condition holds. Let d̃ be the degree sequence of length N − 1 obtained by
  [i] deleting the i-th entry of d, and
  [ii] subtracting 1 from each of the d_i highest elements in d (aside from the i-th one).
Then d is graphical if and only if d̃ is graphical. If d is graphical, then it has a realization where agent i is connected to d_i of the highest degree agents (other than i).

Theorem 1 gives a verifiable condition for whether a degree sequence is graphical. Blitzstein & Diaconis (2011) extended this condition so that we can check whether a degree sequence is graphical if one node is already connected to some other nodes. This modified condition serves as a tool in their importance sampling algorithm.
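In code, the Havel-Hakimi criterion becomes a short recursion on residual degree sequences; a minimal sketch:

```python
def is_graphical(seq):
    """Havel-Hakimi recursion: connect the largest-degree vertex to the
    next-largest ones and test whether the residual sequence survives."""
    if any(x < 0 for x in seq):
        return False
    d = sorted((x for x in seq if x > 0), reverse=True)
    while d:
        k = d.pop(0)                  # largest residual degree
        if k > len(d):
            return False              # not enough potential partners
        for i in range(k):
            d[i] -= 1                 # connect to the k largest remaining
        d = sorted((x for x in d if x > 0), reverse=True)
    return True

print(is_graphical([3, 3, 3, 3, 3, 3]))  # True  (e.g. the prism graph)
print(is_graphical([1, 1, 1]))           # False (odd degree total)
print(is_graphical([2, 2, 0, 0]))        # False (two agents cannot share two edges)
```

The Erdős-Gallai inequalities provide an equivalent non-recursive check; either version can serve as the graphicality test inside the sampling algorithm below.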

Theorem 1 is suggestive of a sequential approach to building an undirected network with degree sequence d. The procedure begins with the target degree sequence d. It starts by choosing a link partner for the lowest degree agent (with at least one link), drawing this partner from among the agents with higher degree. A one is then subtracted from the residual degrees of both the lowest degree agent and her chosen partner. This procedure continues until the residual degree sequence (the number of links that remain to be chosen for each agent) is a vector of zeros.

To formally describe such an approach we require some additional notation. Let ⊕_{ij} d be the vector obtained by adding one to the i-th and j-th elements of d:

⊕_{ij} d = (d_1, …, d_i + 1, …, d_j + 1, …, d_N).

Let ⊖_{ij} d be the vector obtained by subtracting one from the i-th and j-th elements of d:

⊖_{ij} d = (d_1, …, d_i − 1, …, d_j − 1, …, d_N).

Algorithm 1.

(Blitzstein and Diaconis) A sequential algorithm for constructing a random graph with degree sequence d is

  1. Let D be an empty adjacency matrix and set the residual degree sequence d̃ = d.

  2. If d̃ = 0, terminate with output D.

  3. Choose the agent i with minimal positive residual degree d̃_i.

  4. Construct the list of candidate partners J = { j ≠ i : D_ij = 0 and ⊖_{ij} d̃ is graphical }.

  5. Pick a partner j ∈ J with probability proportional to its residual degree d̃_j.

  6. Set D_ij = D_ji = 1 and update d̃ to ⊖_{ij} d̃.

  7. Repeat steps 4 to 6 until the residual degree of agent i is zero.

  8. Return to step 2.

The input for Algorithm 1 is the target degree sequence d and the output is an undirected adjacency matrix D with degree sequence d (i.e., with Dι = d, where ι denotes a conformable column vector of ones).
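A compact Python sketch of Algorithm 1 follows. Here the step 4 candidate check is implemented with the Erdős-Gallai inequalities applied to the decremented residual sequence, and the function also returns the log importance weight log[1/(c(Y)P(Y))] needed for the importance sampling correction discussed below; these design choices follow our reading of Blitzstein & Diaconis (2011), and details such as tie-breaking are assumptions:

```python
import math
import random

def erdos_gallai(d):
    """Erdos-Gallai check: is the integer sequence d graphical?"""
    d = sorted(d, reverse=True)
    if sum(d) % 2:
        return False
    for k in range(1, len(d) + 1):
        if sum(d[:k]) > k * (k - 1) + sum(min(x, k) for x in d[k:]):
            return False
    return True

def blitzstein_diaconis(deg, rng=random):
    """One draw from the graphs with degree sequence deg, together with
    the log importance weight log(1/(c(Y)P(Y))) (sketch of Algorithm 1)."""
    d = list(deg)
    edges, log_p, log_c = set(), 0.0, 0.0
    while sum(d) > 0:
        i = min((k for k in range(len(d)) if d[k] > 0), key=lambda k: d[k])
        log_c += math.lgamma(d[i] + 1)          # log d_i! equivalent orderings
        while d[i] > 0:
            cands = []
            for j in range(len(d)):
                if j == i or d[j] == 0 or (min(i, j), max(i, j)) in edges:
                    continue
                trial = d.copy()
                trial[i] -= 1
                trial[j] -= 1
                if erdos_gallai(trial):         # step 4: stay graphical
                    cands.append(j)
            total = sum(d[j] for j in cands)
            r = rng.random() * total            # step 5: pick j prop. to d_j
            for j in cands:
                r -= d[j]
                if r < 0:
                    break
            log_p += math.log(d[j] / total)
            edges.add((min(i, j), max(i, j)))   # step 6: add the edge
            d[i] -= 1
            d[j] -= 1
    return edges, -(log_c + log_p)

random.seed(0)
edges, log_w = blitzstein_diaconis([3, 3, 3, 3, 3, 3])
deg_out = [sum(1 for e in edges if v in e) for v in range(6)]
print(deg_out)   # [3, 3, 3, 3, 3, 3]
```

By construction the output always matches the target degree sequence; the non-uniformity across 𝔻_d is what the returned log weight is there to correct.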

Notes: Prism graph (a 3-regular graph) on six vertices.

Sources: Authors’ calculations.

Figure 2: Cubic graph with six agents

Consider the 3-regular (i.e., cubic) graph depicted in Figure 2. Each agent in this graph has exactly three links such that its degree sequence equals (3, 3, 3, 3, 3, 3). It turns out that there are two non-isomorphic cubic graphs on six vertices: the prism graph, shown in the figure, and the utility graph (the complete bipartite graph on two sets of three vertices). We can use Algorithm 1 to generate a random draw from the set of all graphs with a degree sequence of (3, 3, 3, 3, 3, 3).

As an example of a series of residual degree sequences (updated in Step 6 of the algorithm) associated with a random draw from 𝔻_d, for N = 6 and d = (3, 3, 3, 3, 3, 3), consider the first few updates:

(3, 3, 3, 3, 3, 3) → (2, 2, 3, 3, 3, 3) → (1, 2, 3, 3, 2, 3) → (0, 2, 2, 3, 2, 3) → ⋯

Labelling agents from left-to-right (starting at zero), we can see that the first link is added between agents 0 and 1 (agent 0 is the “active” node). This is illustrated in Figure 3, which begins with the labelled empty graph in the upper-left-hand corner and then sequentially adds links as we move from left-to-right and top-to-bottom. Next a link is added between agents 0 and 4, and then between agents 0 and 2. Observe that the algorithm selected agent 0 as the lowest degree agent in the initial step and continues to connect this vertex with higher degree ones until all needed edges incident to it are present. (In the event of ties for the lowest degree agent, the algorithm chooses the one with the lowest index.)

In the 8th iteration of the algorithm an edge is added between agents 3 and 4. If, instead, an edge had been added between agents 4 and 5 at this point, the residual degree sequence would have been updated to one which is not graphic. Step 4 of the algorithm prevents the addition of edges which, if added, would lead to non-graphic residual degree sequences. It is in this way that the algorithm avoids getting “stuck”. Getting stuck was a problem with earlier approaches to binary matrix simulation, such as the method of Snijders (1991).

Importance sampling

Algorithm 1 produces a random draw from 𝔻_d; however, it does not draw from this set uniformly. A key insight of Blitzstein & Diaconis (2011) is that one can construct importance sampling weights to correct for the non-uniformity of the draws from 𝔻_d.

Let 𝒴 denote the set of all possible sequences of links outputted by Algorithm 1 given input d. Let D(Y) be the adjacency matrix induced by link sequence Y. Let Y and Y′ be two different sequences produced by the algorithm. These sequences are equivalent if their “end point” adjacency matrices coincide (i.e., if D(Y) = D(Y′)). We can partition 𝒴 into a set of equivalence classes; the number of such classes coincides with the number of feasible networks with degree sequence d_+(d) (i.e., with the cardinality of 𝔻_d).

Notes: Illustration of the construction of a random draw from 𝔻_d, for N = 6 and d = (3, 3, 3, 3, 3, 3), as generated according to Algorithm 1.

Sources: Authors’ calculations.

Figure 3: Simulation example

Let c(Y) denote the number of possible link sequences produced by Algorithm 1 that produce Y’s end point adjacency matrix (i.e., the number of different ways in which Algorithm 1 can generate a given adjacency matrix). Let σ = (σ_1, σ_2, …) be the sequence of agents chosen in step 3 of Algorithm 1 when Y is the output, and let d̃_{σ_k} be the residual degree of agent σ_k at the time it was first selected in step 3; then

c(Y) = Π_k d̃_{σ_k}!        (16)

To see why (16) holds consider two equivalent link sequences Y and Y′. Because links are added to vertices by minimal degree (see Step 3 of Algorithm 1), the agent sequences σ coincide for Y and Y′. This, in turn, means that the exact same links, perhaps in a different order, are added at each “stage” of the algorithm (i.e., when the algorithm iterates through steps 4 to 7 repeatedly for a given agent). The number of different ways to add agent σ_k’s d̃_{σ_k} links during such a “stage” is simply d̃_{σ_k}! and hence (16) follows.

The second component needed to construct importance weights is P(Y), the probability that Algorithm 1 produces link sequence Y. This probability is easy to compute. Each time the algorithm chooses a link in step 5 we simply record the probability with which it was chosen (i.e., the residual degree of the chosen agent divided by the sum of the residual degrees of all agents in the choice set). The product of all these probabilities equals P(Y).

With c(Y) and P(Y) defined we can now show how to estimate expectations with respect to uniform draws from 𝔻_d. Let T(D) be some statistic of the adjacency matrix and let D_b = D(Y_b) for b = 1, …, B be draws from 𝔻_d constructed using Algorithm 1. Consider the p-value estimation problem discussed earlier:

p = Σ_{v ∈ 𝔻_d} (1/|𝔻_d|) 1( T(v) ≥ T(d) ).

Here 1/|𝔻_d| is the probability attached to each adjacency matrix in the target (uniform) distribution over 𝔻_d. The ratio of this target probability to the probability with which a draw is actually generated is called the likelihood ratio or the importance weight; reweighting draws by this ratio yields an unbiased estimate for any statistic T(·).

Observe that E[ 1/(c(Y)P(Y)) ] = Σ_Y P(Y)/(c(Y)P(Y)) = Σ_Y 1/c(Y) = |𝔻_d|. Setting T(·) equal to the constant statistic then suggests an estimate of |𝔻_d| equal to

|𝔻̂_d| = (1/B) Σ_{b=1}^{B} 1/( c(Y_b) P(Y_b) ),        (17)

and hence a p-value estimate of

p̂ = [ Σ_{b=1}^{B} 1( T(D(Y_b)) ≥ T(d) ) / ( c(Y_b) P(Y_b) ) ] / [ Σ_{b=1}^{B} 1/( c(Y_b) P(Y_b) ) ].        (18)

An attractive feature of (18) is that the importance weights 1/(c(Y_b)P(Y_b)) need only be computed up to a constant. This feature is useful when dealing with the numerical overflow issues that can arise when these weights are very large.
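In practice it is convenient to carry the weights on the log scale and normalize at the end, which is exactly what the up-to-a-constant property permits. A small helper, with hypothetical inputs:

```python
import numpy as np

def importance_pvalue(stats, log_weights, t_obs):
    """Self-normalized importance-sampling estimate of Pr(T >= t_obs)
    under the uniform distribution on D_d.  Only relative weights enter,
    so the log weights may be shifted by any constant (overflow-safe)."""
    lw = np.asarray(log_weights, dtype=float)
    w = np.exp(lw - lw.max())              # weights known up to scale
    hits = np.asarray(stats, dtype=float) >= t_obs
    return float(w[hits].sum() / w.sum())

# With equal weights the estimate reduces to the empirical tail frequency.
print(importance_pvalue([0.1, 0.2, 0.3, 0.4, 0.5], [0.0] * 5, 0.35))  # 0.4
```

Feeding in the statistics and log weights produced across B runs of the sampling algorithm gives the estimate in (18) directly.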

Algorithm 1 is appropriate for simulating undirected networks. More recently Kim et al. (2012) propose a method for simulating directed networks with both fixed indegree and outdegree sequences. Their method is based on an extension of Havel-Hakimi type results to digraphs due to Erdös et al. (2010). Pelican & Graham (2019) introduce an MCMC algorithm for simulating digraphs satisfying various side constraints.

Illustration using the Nyakatoke network

The transitivity index for the Nyakatoke network, at 0.1884, is almost three times its associated network density of 0.0698. Is this excess transitivity a product of degree heterogeneity alone? To assess this we used Algorithm 1 to take 5,000 draws from the set 𝔻_d of adjacency matrices with N = 119 and degree sequence coinciding with the one observed in Nyakatoke (for reference the Nyakatoke degree distribution is plotted in Figure 4).

Notes: This figure plots the probability (vertical axis) that a random household in Nyakatoke has strictly more risk sharing links than listed on the horizontal axis.

Source: De Weerdt (2004) and authors’ calculations. Raw data available at (Accessed January 2017).

Figure 4: Nyakatoke Degree Distribution

Figure 5 displays estimates of the distribution of two-star and triangle counts, as well as the transitivity index (and the “optimal” transitivity statistic), with respect to the distribution of uniform draws from 𝔻_d (with degree sequence coinciding with the one observed in Nyakatoke). Measured transitivity in Nyakatoke is extreme relative to this reference distribution. This suggests that clustering of links is, in fact, a special feature of the Nyakatoke network. It is also interesting to note that the distribution of transitivity in this reference distribution is well to the right of 0.0698 (the density of all graphs in the reference distribution). The skewed degree distribution in Nyakatoke forces a certain amount of transitivity, since high degree nodes are more likely to link with one another. This highlights the value of a test which proceeds conditionally on the degree sequence.

Notes: Histogram of two-star counts, triangle counts and transitivity index values across 5,000 draws from 𝔻_d (with degree sequence coinciding with the one observed in Nyakatoke). The final panel plots the distribution of the “optimal” transitivity statistic given in equation (13).

Sources: De Weerdt (2004) and authors’ calculations. Raw data available at (Accessed January 2017).

Figure 5: Reference distribution of transitivity index for Nyakatoke network

Figure 6 displays estimates of the distribution of network diameter and average distance. Nyakatoke’s diameter is not atypical across networks with the same degree sequence. However, average distance is significantly longer in Nyakatoke. One interpretation of this fact is that the Nyakatoke network includes a distinct periphery of poorly connected/insured households.

Notes: Histograms of network diameter and average distance values across 5,000 draws from 𝔻_d (with degree sequence coinciding with the one observed in Nyakatoke).

Sources: De Weerdt (2004) and authors’ calculations. Raw data available at (Accessed January 2017).

Figure 6: Reference distribution of network diameter and average distance for Nyakatoke network


  • Blitzstein & Diaconis (2011) Blitzstein, J. & Diaconis, P. (2011). A sequential importance sampling algorithm for generating random graphs with prescribed degrees. Internet Mathematics, 6(4), 489 – 522.
  • Bloch & Jackson (2006) Bloch, F. & Jackson, M. O. (2006). Definitions of equilibrium in network formation games. International Journal of Game Theory, 34(3), 305 – 318.
  • Bloch & Jackson (2007) Bloch, F. & Jackson, M. O. (2007). The formation of networks with transfers among players. Journal of Economic Theory, 113(1), 83 – 110.
  • Chatterjee et al. (2011) Chatterjee, S., Diaconis, P., & Sly, A. (2011). Random graphs with a given degree sequence. Annals of Applied Probability, 21(4), 1400 – 1435.
  • Comola & Fafchamps (2014) Comola, M. & Fafchamps, M. (2014). Testing unilateral and bilateral link formation. Economic Journal, 124(579), 954 – 976.
  • Cox (2006) Cox, D. R. (2006). Principles of Statistical Inference. Cambridge: Cambridge University Press.
  • De Weerdt (2004) De Weerdt, J. (2004). Insurance Against Poverty, chapter Risk-sharing and endogenous network formation, (pp. 197 – 216). Oxford University Press: Oxford.
  • Del Genio et al. (2010) Del Genio, C. I., Kim, H., Toroczkai, Z., & Bassler, K. (2010). Efficient and exact sampling of simple graphs with given arbitrary degree sequence. Plos One, 5(4), e100012.
  • Erdös et al. (2010) Erdös, P. L., Miklós, I., & Toroczkai, Z. (2010). A simple Havel-Hakimi type algorithm to realize graphical degree sequences of directed graphs. Electronic Journal of Combinatorics, 17(1), R66.
  • Ferguson (1967) Ferguson, T. S. (1967). Mathematical Statistics: A Decision Theoretic Approach. New York: Academic Press.
  • Fisher (1935) Fisher, R. A. (1935). The Design of Experiments. Edinburgh: Oliver and Boyd.
  • Graham (2016) Graham, B. S. (2016). Homophily and transitivity in dynamic network formation. NBER Working Paper 22186, National Bureau of Economic Research.
  • Graham (2017) Graham, B. S. (2017). An econometric model of network formation with degree heterogeneity. Econometrica, 85(4), 1033 – 1063.
  • Hakimi (1962) Hakimi, S. L. (1962). On realizability of a set of integers as degrees of the vertices of a linear graph. i. Journal of the Society for Industrial and Applied Mathematics, 10(3), 496 – 506.
  • Havel (1955) Havel, V. J. (1955). A remark on the existence of finite graphs. Časopis Pro Pěstování Matematiky, 80, 477 – 480.
  • Jackson (2008) Jackson, M. O. (2008). Social and Economic Networks. Princeton: Princeton University Press.
  • Jackson et al. (2012) Jackson, M. O., Rodriguez-Barraquer, T., & Tan, X. (2012). Social capital and social quilts: network patterns of favor exchange. American Economic Review, 102(5), 1857–1897.
  • Kim et al. (2012) Kim, H., Del Genio, C. I., Bassler, K. E., & Toroczkai, Z. (2012). Constructing and sampling directed graphs with given degree sequences. New Journal of Physics, 14, 023012.
  • Lehmann & Romano (2005) Lehmann, E. L. & Romano, J. P. (2005). Testing Statistical Hypotheses. New York: Springer, 3rd edition.
  • McDonald et al. (2007) McDonald, J. W., Smith, P. W. F., & Forster, J. J. (2007). Markov chain Monte Carlo exact inference for social networks. Social Networks, 29(1), 127 – 136.
  • McPherson et al. (2001) McPherson, M., Smith-Lovin, L., & Cook, J. M. (2001). Birds of a feather: homophily in social networks. Annual Review of Sociology, 27(1), 415 – 444.
  • Milo et al. (2002) Milo, R., Shen-Orr, S., Itzkovitz, S., Kashtan, N., Chklovskii, D., & Alon, U. (2002). Network motifs: simple building blocks of complex networks. Science, 298(5594), 824 – 827.
  • Miyauchi (2016) Miyauchi, Y. (2016). Structural estimation of a pairwise stable network with nonnegative externality. Journal of Econometrics, 195(2), 224 – 235.
  • Moreira (2009) Moreira, M. J. (2009). Tests with correct size when instruments can be arbitrarily weak. Journal of Econometrics, 152(2), 131 – 140.
  • Owen (2013) Owen, A. B. (2013). Monte Carlo Theory, Methods and Examples. Book manuscript.
  • Pelican & Graham (2019) Pelican, A. & Graham, B. S. (2019). Testing for strategic interaction in social and economic network formation. Technical report, University of California - Berkeley.
  • Rao et al. (1996) Rao, A. R., Jana, R., & Bandyopadhyay, S. (1996). A Markov chain Monte Carlo method for generating random (0,1)-matrices with given marginals. Sankhya, 58(2), 225 – 242.
  • Sierksma & Hoogeveen (1991) Sierksma, G. & Hoogeveen, H. (1991). Seven criteria for integer sequences being graphic. Journal of Graph Theory, 15(2), 223 – 231.
  • Snijders (1991) Snijders, T. A. B. (1991). Enumeration and simulation methods for 0-1 matrices with given marginals. Psychometrika, 56(3), 397 – 417.
  • Tarski (1955) Tarski, A. (1955). A lattice-theoretical fixpoint theorem and its applications. Pacific Journal of Mathematics, 5(2), 285 – 309.
  • Zhang & Chen (2013) Zhang, J. & Chen, Y. (2013). Sampling for conditional inference on network data. Journal of the American Statistical Association, 108(504), 1295 – 1307.