I. Introduction
A significant number of complex real-world systems are well modeled by networks. At an abstract level, a network is an ensemble of interconnected agents. The interactions among neighboring agents enable the flow of information across the graph, and give rise to interesting and complex patterns of coordinated behavior.
One problem of immense value in network science is the inverse modeling problem. In this problem, the network structure (topology) is assumed to be largely unknown and one is interested in inferring relationships between agents based on some dataset arising from the agents’ operations. Formulations of this type are of great interest in several application domains, such as communications, computer science, cybersecurity, control, physics, biology, economics, and social sciences.
Inverse problems over networks are challenging because, in the vast majority of applications, direct access to the data exchanged between agents is unavailable, and the inference of inter-agent relations must be based on indirect observations. Another source of difficulty is that access to observations is generally limited to a subset of the network agents. The process of discovering inter-agent interactions from indirect measurements is broadly referred to as network tomography. Some useful applications of network tomography include, among other possibilities: tracing the routes of clandestine information flows across communication networks [1, 2, 3, 4, 5, 6]; revealing agent interactions over social networks, where disclosing commonalities within groups of agents might be useful for commercial as well as security purposes [7, 8]; inferential problems related to group testing and identification of defective items [9]; brain networks, where interactions among neurons might be of paramount importance to the understanding of particular diseases [10, 11]; and anomaly detection in communication networks, where one tries to reveal the activities of intermediate nodes through destination-only measurements [12].
One useful type of network is the class of adaptive networks [13, 14]. These networks consist of spatially dispersed agents that continually exchange information through diffusion mechanisms, and which are specifically designed to enable simultaneous adaptation and learning from streaming data (such as tracking targets moving in formation from streaming spatial observations; modeling the prey-predator behavior of animal groups on the move; or allocating frequency resources over cognitive communication networks) [13, 14]. Using a powerful form of agent cooperation, adaptive diffusion networks are able to solve rather sophisticated inferential tasks, including: estimation tasks [14], detection tasks [15, 16], and optimization and online learning tasks [17, 18]. This work is focused on tomography over partially observed diffusion networks.
I-A. Related Work
From a merely theoretical perspective, the aforementioned problems fall under the umbrella of signal processing over graphs [13, 14, 19, 20, 21]. They deal with the objective of retrieving a graph topology (a connection topology, or an “effective” topology corresponding to the exchange of information) from a set of indirect measurements taken at some accessible network locations. There are several works addressing a similar construction, albeit with different specific goals. With no pretence of exhaustiveness, we give a brief summary of the works that we believe are most related to the present article.
In [22], an unknown network topology is reconstructed by taking advantage of the locality properties of the Wiener filter. Exact reconstruction results are provided for self-kin networks, while for general networks reconstruction of the smallest self-kin network embracing the true network topology is guaranteed. In [23], the authors introduce the concept of directed information graphs, which is used to capture the dependencies in networks of interacting processes linked by causal dynamics. The setting is further enriched in [24], where a new metric is proposed to learn causal relationships by means of functional dependencies in a possibly nonlinear dynamical network.
More closely related to the network model considered in our work is the problem of estimating a graph when the relationships are encoded into an autoregressive model — see, e.g.,
[25], where several methods to address causal inference in such a context are reviewed. Causal graph processes are also exploited in [26], where a computationally tractable algorithm for graph structure recovery is proposed, along with a detailed convergence analysis. In [27], the authors examine a causal inference problem that is modeled through a vector autoregressive process. The objective is that of reconstructing the important parts of the transition matrix through observation of a subset of the random process. Special technical conditions for exact reconstruction are provided. In
[28], new methods are proposed for learning directed information polytrees when samples are available from only a subset of processes.
Some recent works exploit spectral graph properties, in conjunction with sparsity constraints. In [29, 30], the problem of inferring the graph topology from observations of a random signal diffusing over the network is addressed. It is shown that the space of feasible matrices is a convex polytope, and two inferential strategies are proposed to select a point in this space as the topology estimate. In [31]
, the authors exploit convex optimization and sparsity in order to reconstruct an unknown graph from observable indirect relationships generated by diffusive signals defined on the graph nodes. The main idea is identifying a graph shift operator given only its eigenvectors, with the corresponding spectral templates being obtained from the sample covariance of independent graph signals diffused over the network.
I-B. Main Result
This work complements the previous efforts: it establishes an important identifiability condition and clarifies the asymptotic behavior of the recovery error as a function of the network size. An identification procedure is also developed to carry out the tomography calculations using a standard clustering technique. We summarize the main contributions of this work as follows. We consider a network of agents running a diffusion strategy to solve some inference task of interest (such as a distributed detection problem). The overall network size is unknown, and the outputs of the diffusion algorithm are available from only a limited fraction of agents. The goal is determining the relationships among these agents, namely, establishing whether the datum of an individual agent is influencing another individual agent. We show that, under some regularity conditions, the interaction relationships existing within the observable portion of the network can be recovered, with negligible error for sufficiently large network sizes.
More specifically, we consider the class of Erdős–Rényi random graphs, where the presence of a connection between any two agents is determined by a Bernoulli random variable. For these graphs we discover that, for any nonvanishing fraction of observable agents, the group of interacting agent pairs and the group of non-interacting agent pairs split into two well-defined clusters. These clusters emerge as clearly separate for sufficiently large network sizes. This result is established for diffusion strategies employing symmetric combination matrices, a feature that enables analytical tractability of the problem. However, from a "physical" viewpoint, separability of the clusters does not appear to be limited to the symmetry assumption. Accordingly, we also examine the relevant case of asymmetric matrices, which is of interest especially in the context of causal networks. With reference to a typical form of asymmetric combination matrices, numerical simulations show that separability of the clusters is preserved.
One distinctive feature of our work is that we establish theoretical achievability results for consistent tomography from partially observed networks. In answering these particular questions, in addition to what has been answered before, our work complements well existing results and recent progress in the field of network tomography.
Notation.
We use boldface letters to denote random variables, and normal font letters for their realizations. Capital letters refer to matrices, small letters to both vectors and scalars. Sometimes we violate the latter convention, for instance, we denote the total number of network agents by $N$.

For $i,j \in \{1,2,\dots,N\}$, the $(i,j)$-th entry of an $N \times N$ matrix $Z$ will be denoted by $z_{ij}$, or alternatively by $[Z]_{ij}$. Moreover, the submatrix of $Z$ that lies in the rows indexed by the set $\mathcal{S}$ and in the columns indexed by the set $\mathcal{T}$ will be denoted by $Z_{\mathcal{S}\mathcal{T}}$, or alternatively by $[Z]_{\mathcal{S}\mathcal{T}}$. If $\mathcal{S} = \mathcal{T}$, the submatrix will be abbreviated as $Z_{\mathcal{S}}$, or as $[Z]_{\mathcal{S}}$.

The symbols $\mathbb{P}[\cdot]$, $\mathbb{E}[\cdot]$, and $\mathbb{V}[\cdot]$ are used to denote the probability, expectation, and variance operators, respectively. The notation $\xrightarrow{\text{p}}$ denotes convergence in probability as $N \to \infty$.

II. The Problem

II-A. The Adaptive Diffusion Network
A network of $N$ agents collects streaming data from the environment. The datum collected by the $k$-th agent at time $n$ is denoted by $x_k(n)$, and the global sequence of data $\{x_k(n)\}$, for $k = 1,2,\dots,N$ and $n \geq 1$, is assumed to be formed by spatially (i.e., w.r.t. index $k$) and temporally (i.e., w.r.t. index $n$) independent and identically distributed random variables. Without loss of generality, we assume that these variables have zero mean and unit variance.
In order to track drifts in the phenomenon that the network is monitoring, the agents implement a distributed adaptive strategy, where each agent relies on sharing information with local neighbors. In this work we focus on the Combine-Then-Adapt (CTA) diffusion mechanism, whose useful properties in terms of estimation and online learning performance have already been studied in some revealing detail in [13, 14, 17, 18]. The CTA algorithm can be described as follows.
First, during the combination step, agent $k$ mixes the data received from its neighbors by using a set of convex (i.e., nonnegative and adding up to one) combination weights $a_{kl}$, for $l = 1,2,\dots,N$, giving rise to the following intermediate variable:

$$v_k(n) = \sum_{l=1}^{N} a_{kl}\, y_l(n-1). \qquad (1)$$

Then, during the adaptation step, agent $k$ updates its output variable $y_k(n)$ by comparison with the incoming streaming data $x_k(n)$, employing a (typically small) step-size $0 < \mu < 1$:

$$y_k(n) = v_k(n) + \mu\,[x_k(n) - v_k(n)]. \qquad (2)$$

Equations (1) and (2) can be merged into a single step as:

$$y_k(n) = (1-\mu) \sum_{l=1}^{N} a_{kl}\, y_l(n-1) + \mu\, x_k(n). \qquad (3)$$

For later use, it is convenient to introduce the scaled combination matrix $B$, whose entries are defined as:

$$b_{kl} = (1-\mu)\, a_{kl}, \qquad (4)$$
which, in the sequel, will be simply referred to as the combination matrix. We remark that, since we use convex combination weights, the matrix $A = [a_{kl}]$ is a right-stochastic matrix (each of its rows adds up to one).
If we now stack the observations across the network at time $n$ into the vector $x(n) = [x_1(n), \dots, x_N(n)]^\top$, and the state variables at time $n$ into the vector $y(n) = [y_1(n), \dots, y_N(n)]^\top$, Eq. (3) can be compactly rewritten as:

$$y(n) = B\, y(n-1) + \mu\, x(n), \qquad (5)$$

which, by iteration, allows us to express the diffusion output vector as a function of the streaming data:

$$y(n) = \mu \sum_{i=0}^{n-1} B^i\, x(n-i), \qquad (6)$$

starting from the initial state $y(0) = 0$, i.e., neglecting the transient term $B^n y(0)$.
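The recursion in (5) can be sketched in a few lines. In the snippet below, the 3-agent network, the uniform averaging weights, and the step-size value are illustrative choices, not values prescribed by the text.

```python
import numpy as np

# Minimal sketch of the CTA diffusion recursion y(n) = B y(n-1) + mu x(n).
rng = np.random.default_rng(0)

N, mu = 3, 0.1
A = np.full((N, N), 1.0 / N)          # right-stochastic combination weights
B = (1.0 - mu) * A                    # scaled combination matrix, Eq. (4)

y = np.zeros(N)                       # start from the zero state
for n in range(500):
    x = rng.standard_normal(N)        # i.i.d. streaming data, zero mean, unit variance
    y = B @ y + mu * x                # one step of Eq. (5)

# Unrolling the loop reproduces Eq. (6): y(n) = mu * sum_i B^i x(n-i),
# with the transient term neglected.
```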
II-B. Network Tomography
An illustration of the setting considered in this work is given in Fig. 1. An entity external to the network (hereafter named Tomography Center, TC) is interested in reconstructing the interaction profile of the network, namely, it is interested in ascertaining which agent is influencing which other agent.
The TC is assumed to have access to a subset of the network agents, and is able to collect the streams of outputs exchanged by such agents during their communication activity. Letting $\mathcal{S} \subseteq \{1,2,\dots,N\}$ be the observable subnet, the data available to the TC at time $n$ are $\{y_i(n)\}_{i \in \mathcal{S}}$. We shall focus on the asymptotic regime of large networks ($N \to \infty$), for the meaningful case where the fraction of observed agents does not vanish. Such regime is defined in terms of the following condition:

$$\lim_{N \to \infty} \frac{|\mathcal{S}|}{N} = \xi, \qquad 0 < \xi \leq 1, \qquad (7)$$

where $\xi$ takes on the meaning of the asymptotic fraction of observed agents.
In our setting, the TC does not know the overall number of agents in the network. Accordingly, the main goal of the TC is producing an estimate of the interaction profile for the observed agents. This problem is challenging for the following reasons. Let us ignore for a moment the fact that the network is partially observed, and assume that the TC is able to collect all output sequences from all agents at all times. Using such a dataset, there exist several well-established strategies to make inference about the influence that one agent has on another. The most intuitive is an estimate of the correlation between the outputs of two agents, which, however, is problematic for directed flows of information (where agent $i$ can influence agent $j$, but not vice versa). Such asymmetry would be reflected in the $(j,i)$-th and $(i,j)$-th entries of the combination matrix, yielding $b_{ji} > 0$ while $b_{ij} = 0$. The case of asymmetric influence is well studied within the framework of causal inference, or causation, and many solutions exist for disclosing causal relationships [23, 24, 25, 26, 27, 28].

There is, however, a challenging problem that is peculiar to the network setting of this work, due to the streaming nature of the data. In general, when the TC starts collecting data, the network will already have been in operation for some time, so that the output signals at the agents will have benefited from sufficient exchanges of information. While this exchange of information is beneficial for solving inference tasks, it can become detrimental for network tomography: over a strongly connected network, after a "transient" phase, all agents become mutually correlated.
In order to overcome this difficulty, we exploit knowledge of the diffusion mechanism. To this aim, let us introduce the correlation matrix of the diffusion output vector:

$$R_0(n) \triangleq \mathbb{E}[y(n)\, y(n)^\top], \qquad (8)$$

which, from (6), admits the following closed-form representation:

$$R_0(n) = \mu^2 \sum_{i=0}^{n-1} B^i (B^\top)^i, \qquad (9)$$

where the latter series is guaranteed to converge since all eigenvalues of $B$ are strictly inside the unit disc. The limiting correlation matrix, $R_0 = \lim_{n \to \infty} R_0(n)$, can be interpreted as the (unique) solution to the discrete-time Lyapunov equation [13]:

$$R_0 = B\, R_0\, B^\top + \mu^2 I, \qquad (10)$$

where $I$ denotes the identity matrix. We note that $R_0(n)$ is positive definite for each $n$, and so is $R_0$, due to the stability of $B$ [33]. We also introduce the one-lag correlation matrix, which, in view of (5), takes on the form:

$$R_1(n) \triangleq \mathbb{E}[y(n)\, y(n-1)^\top] = B\, R_0(n-1). \qquad (11)$$

Therefore, we obtain the following relationship:

$$B = R_1(n)\, [R_0(n-1)]^{-1}, \qquad (12)$$

and, at steady-state,

$$B = R_1\, R_0^{-1}, \qquad (13)$$

where $R_1 = \lim_{n \to \infty} R_1(n)$.
In principle, since there exist many ways to estimate $R_0$ and $R_1$ consistently as $n \to \infty$, expression (13) reveals one possible strategy to estimate $B$ from the output of the diffusion process. However, the approach described so far suffers from a critical problem: the network is only partially observed and, hence, not all entries of the matrices $R_0$ and $R_1$ can be estimated. This in turn means that the evaluation of (13) is in general problematic.
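As a numerical sanity check of this strategy under full observation, the sketch below builds a small symmetric combination matrix, forms the steady-state correlation matrices, and recovers the combination matrix as the product of the one-lag correlation and the inverse correlation. The matrix and step-size are arbitrary illustrative choices, and the closed form used for the correlation matrix assumes symmetry.

```python
import numpy as np

# Verify numerically that R1 = B R0, so that B is recovered as R1 R0^{-1}.
mu = 0.2
A = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])        # symmetric, doubly stochastic
B = (1.0 - mu) * A                     # stable: spectral radius 1 - mu < 1

I = np.eye(3)
R0 = mu**2 * np.linalg.inv(I - B @ B)  # steady-state correlation (symmetric case)
R1 = B @ R0                            # one-lag correlation

B_rec = R1 @ np.linalg.inv(R0)         # recovery of the combination matrix
assert np.allclose(B_rec, B)
```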
In order to estimate the combination (sub)matrix corresponding to the observable agents, i.e.,

$$B_{\mathcal{S}} = [B]_{\mathcal{S}}, \qquad (14)$$

one might be tempted to simply replace the matrices involved in (13) by their observable counterparts, $[R_0]_{\mathcal{S}}$ and $[R_1]_{\mathcal{S}}$, obtaining the rough estimate:¹

$$\hat{B}_{\mathcal{S}} = [R_1]_{\mathcal{S}}\, \big([R_0]_{\mathcal{S}}\big)^{-1}. \qquad (15)$$

¹ Actually, the matrices computed by the TC will generally be renumbered versions of $[R_0]_{\mathcal{S}}$ and $[R_1]_{\mathcal{S}}$, since the labeling used by the TC need not correspond to the ordering in $R_0$ and $R_1$. This is because the TC does not know the original labeling of the agents in the network, nor the network size. However, our focus is on evaluating the interaction between pairs of agents, irrespective of their labels, and, hence, we can safely keep the notation used in (15).
Needless to say, the calculation on the right-hand side of (15) does not lead to the true $B_{\mathcal{S}}$, except in some special cases. For this reason, we denote the result of (15) using the hat notation. If we could recover $B_{\mathcal{S}}$ exactly, then we could deduce the desired influence relations. However, given that we only have the estimate $\hat{B}_{\mathcal{S}}$, it is not at all clear whether the mutual influence relationships existing between the observed nodes can be consistently retrieved from $\hat{B}_{\mathcal{S}}$. Answering this nontrivial question in the affirmative is one of the main contributions of this work.
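The discrepancy between the naive submatrix estimate and the true submatrix can be observed directly in simulation. In the sketch below, the random graph, the symmetric weight assignment, and the observed subset are all illustrative choices.

```python
import numpy as np

# Naive partial-observation estimate vs. the true combination submatrix.
rng = np.random.default_rng(1)
N, mu = 30, 0.1

# random symmetric interaction matrix with self-loops
G = rng.random((N, N)) < 0.2
G = np.triu(G, 1)
G = (G | G.T | np.eye(N, dtype=bool)).astype(float)

# a simple symmetric right-stochastic weight assignment (uniform off-diagonal)
deg = G.sum(axis=1) - 1.0                        # number of neighbors, self excluded
dmax = deg.max()
A = np.where(np.eye(N, dtype=bool), 0.0, G / (dmax + 1.0))
A = A + np.diag(1.0 - deg / (dmax + 1.0))

B = (1.0 - mu) * A                               # scaled combination matrix
R0 = mu**2 * np.linalg.inv(np.eye(N) - B @ B)    # steady-state correlation (symmetric case)
R1 = B @ R0                                      # one-lag correlation

S = np.arange(10)                                # observe the first 10 agents only
B_S = B[np.ix_(S, S)]
B_hat = R1[np.ix_(S, S)] @ np.linalg.inv(R0[np.ix_(S, S)])
E = B_hat - B_S                                  # error matrix: nonzero in general
```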
In order to highlight the key ideas without added complexity, we focus in this article on the case of symmetric combination matrices. The symmetry assumption is made because, if $B$ is symmetric, the autocorrelation matrix of the diffusion output takes on the following convenient form — see (9):

$$R_0 = \mu^2\, (I - B^2)^{-1}, \qquad (16)$$

which is exploited in the proof of Theorem 1. The extension of the results to the asymmetric case requires adjustments in the arguments used in the proof of Theorem 1. For example, this can be pursued by appealing instead to the Kronecker representation of the solution of discrete-time Lyapunov equations [34]. We note in passing that, since the matrix $A$ is always right-stochastic, the symmetry assumption makes $A$ doubly stochastic.
III. Error Caused by Partial Observations
Using the estimate (15), we can write:

$$\hat{B}_{\mathcal{S}} = B_{\mathcal{S}} + E, \qquad (17)$$

where $E$ denotes the error matrix. In terms of the individual entries, we can write, for $i, j \in \mathcal{S}$:²

$$\hat{b}_{ij} = b_{ij} + e_{ij}. \qquad (18)$$

² A pair $(i,j)$ refers to agents $i$ and $j$, where $i$ and $j$ span the observable subnet $\mathcal{S}$.
In this work we are interested in establishing whether the estimated values $\hat{b}_{ij}$ allow us to identify the condition $b_{ij} = 0$ or $b_{ij} > 0$, which would reveal whether agents $i$ and $j$ influence each other. In order to enable this determination, we start by characterizing the behavior of the error terms $e_{ij}$ in (18).
Theorem 1 (Concentration of errors)
For a symmetric combination matrix, the entries of the error matrix $E$ defined in (17) are nonnegative, and satisfy, for all $i \in \mathcal{S}$:

(19)
Proof:
See Appendix A.
We have the following corollary, which is particularly tailored to our application.
Corollary 1 (Number of errors exceeding a threshold)
For a symmetric combination matrix, we have, for all $i \in \mathcal{S}$, and for any $\epsilon > 0$:

(20)

where $\mathbb{I}\{\cdot\}$ denotes the indicator function (which is equal to one when its argument is true, and zero otherwise).
Proof:
Theorem 1 and its corollary provide useful information about the concentration of the entries in the error matrix. In particular, Eq. (19) reveals that the row-sum of the entries in the error matrix cannot exceed the bound stated therein, whereas Eq. (20) places an upper bound on the number of entries that exceed any positive threshold $\epsilon$.
Consider now a small threshold $\epsilon > 0$, and the fraction of off-diagonal (because we are interested in inter-agent interactions) entries that exceed $\epsilon$:

$$\frac{1}{|\mathcal{S}|\,(|\mathcal{S}| - 1)} \sum_{\substack{i,j \in \mathcal{S} \\ i \neq j}} \mathbb{I}\{e_{ij} > \epsilon\}. \qquad (22)$$
In the regime dictated by (7), where $|\mathcal{S}| \to \infty$ as $N \to \infty$, we see from (20) that such a fraction vanishes, namely, most entries of the error matrix will be small in the asymptotic regime of large networks. Therefore, in view of definition (18), it will hold for large networks that, for $i, j \in \mathcal{S}$ with $i \neq j$:

$$\hat{b}_{ij} = \begin{cases} b_{ij} + e_{ij}, & \text{if } b_{ij} > 0, \\ e_{ij}, & \text{if } b_{ij} = 0, \end{cases} \qquad \text{with } e_{ij} \leq \epsilon \text{ for most pairs}. \qquad (23)$$
This useful dichotomy suggests that the nonzero entries of $B_{\mathcal{S}}$ will make the estimated entries $\hat{b}_{ij}$ stand out above the error floor as $N$ increases. As a result, we should be able to decide whether $b_{ij} = 0$ or $b_{ij} > 0$ by comparing the estimated value, $\hat{b}_{ij}$, against some threshold.
However, and unfortunately, the behavior of the error matrix alone is not sufficient to conclude that this inferential procedure is feasible, for one crucial reason. For most typical combination matrices, the nonzero entries $b_{ij}$ decrease with $N$ as well, so that the nonzero entries appearing in the first line of (23) vanish for large network sizes, along with the errors. This means that the estimated entries, $\hat{b}_{ij}$, would vanish even when agents $i$ and $j$ are interacting (i.e., even when $b_{ij} > 0$). For this reason, a closer examination of the behavior of the error quantities is necessary before we can conclude that this inference procedure is viable. Specifically, it is necessary to assess how fast the errors $e_{ij}$ decay to zero in relation to the desired entries $b_{ij}$. In order to carry out this analysis, we need to make some assumptions about the network structure. We will select some typical and popular random models, as explained next.
IV. Behavior of Interacting Agents
The interaction profile can be conveniently described in terms of a symmetric interaction matrix $G$, defined by the following conditions:

$$g_{ij} = \begin{cases} 1, & \text{if agents } i \text{ and } j \text{ interact}, \\ 0, & \text{otherwise}. \end{cases} \qquad (24)$$

We shall set $g_{ii} = 1$ for all $i$, which corresponds to the assumption that an agent always uses its own state variable in the combination step.
In our analysis, the combination matrix, $A$, will be constructed through the following two-step procedure. First, an interaction matrix $G$ is generated according to a random graph model [35, 36]. Then, $A$ is determined by a combination policy, $\pi$, which sets the values of the combination weights corresponding to the nonzero entries of $G$. Formally:

$$A = \pi(G). \qquad (25)$$

Note that $\pi$ must always assign positive weights at the locations corresponding to nonzero entries of $G$, otherwise the interaction matrix related to $A$ would be different from $G$. Moreover, since in this article we focus on symmetric combination matrices, we must have $a_{ij} = a_{ji}$ (a condition that is not directly implied by the symmetry of $G$).
IV-A. Random Model for the Interaction Profile
As we have stated, in this work we examine the useful case where the interaction profile of the network is generated according to a random graph model [35, 36]. In particular, we consider the classic Erdős–Rényi model. This model, which we denote by $\mathcal{G}(N, p_N)$, is an undirected (i.e., symmetric) graph, where the presence/absence of the edges is determined by a sequence of independent Bernoulli random variables with success (i.e., interaction) probability $p_N$. Accordingly, the variables $g_{ij}$, for $i,j = 1,2,\dots,N$ and $j > i$, are independent Bernoulli random variables with $\mathbb{P}[g_{ij} = 1] = p_N$, and the matrix $G$ is symmetric.
One meaningful regime for examining the random graph properties is the regime where the probability $p_N$ decreases as $N$ increases [36]. Examining the asymptotic regime where $p_N$ is kept constant while $N$ increases would generally produce networks that are too densely connected with respect to what happens in typical applications. For instance, in the regime of constant $p_N$, the network diameter (i.e., the maximum of the shortest distance between any pair of nodes) is equal to $2$ with high probability [37]. On the other hand, $p_N$ cannot vanish arbitrarily fast, otherwise the number of neighbors of an agent would be too small, and the network would become very scarcely connected. A well-known result that holds true for the Erdős–Rényi graph is that the following scaling law:
$$p_N = \frac{\ln N + c_N}{N}, \qquad (26)$$

with $c_N \to \infty$ (in an arbitrarily slow fashion), ensures that the graph remains connected with probability tending to one as $N$ diverges [36]. In the following, we shall focus on the regime of connected Erdős–Rényi graphs with vanishing $p_N$, namely, on the regime where (26) is satisfied with $c_N \to \infty$ and $p_N \to 0$. Such a regime will be denoted by the symbol $\mathcal{G}(N, p_N)$.
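Sampling an interaction matrix in this regime is straightforward. The sketch below assumes the standard connectivity scaling for the edge probability, with a slowly diverging sequence (here the iterated logarithm, an illustrative choice); the helper name is hypothetical.

```python
import numpy as np

# Sample an Erdos-Renyi interaction matrix in the connected, vanishing-p regime.
def erdos_renyi_interaction(N, rng):
    c_N = np.log(np.log(N))               # slowly diverging sequence (illustrative)
    p_N = (np.log(N) + c_N) / N           # standard connectivity scaling (assumed)
    U = rng.random((N, N))
    G = np.triu(U < p_N, 1)               # independent Bernoulli(p_N) edges
    G = G | G.T                           # undirected: symmetric matrix
    np.fill_diagonal(G, True)             # each agent interacts with itself
    return G.astype(int), p_N

rng = np.random.default_rng(2)
G, p_N = erdos_renyi_interaction(200, rng)
```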
IV-B. Statistical Properties of Fundamental Graph Descriptors
Some useful descriptors of the network graph can be conveniently represented in terms of the matrix $G$ in (24). In particular, the (interaction) neighborhood of agent $i$ (which includes $i$ itself) is defined as:

$$\mathcal{N}_i = \{j : g_{ij} = 1\}, \qquad (27)$$

while the degree of agent $i$ is:

$$d_i = |\mathcal{N}_i| = \sum_{j=1}^{N} g_{ij}. \qquad (28)$$

We define the network maximal degree as:

$$d_{\max} = \max_{1 \leq i \leq N} d_i. \qquad (29)$$
Let us now highlight some useful statistical properties of the degree and maximal degree variables for the Erdős–Rényi model.

We observe from (28) that, for an Erdős–Rényi graph, the random variable $d_i - 1$ is a binomial random variable with parameters $N-1$ and $p_N$, which shall be denoted by $\mathrm{Bin}(N-1, p_N)$. Note also that $d_i$ and $d_j$, for $i \neq j$, are not independent, because of the implied graph symmetry. Now, for a binomial random variable $z \sim \mathrm{Bin}(n, p)$, we have:

$$\mathbb{E}\left[\frac{z}{np}\right] = 1, \qquad \mathbb{V}\left[\frac{z}{np}\right] = \frac{1-p}{np}. \qquad (30)$$

Under the $\mathcal{G}(N, p_N)$ model, we see from (26) that the product $N p_N$ diverges as $N \to \infty$, and, hence, the variance in (30) vanishes as $N \to \infty$, implying in particular the following convergence in probability [38]:

$$\frac{d_i}{N p_N} \xrightarrow{\text{p}} 1, \qquad (31)$$

which reveals that the degrees of the nodes scale as $N p_N$.
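This concentration of the degrees around their mean is easy to observe empirically. The sketch below uses an illustrative vanishing edge probability (not the exact scaling of the text) and checks that the average ratio of degree to its nominal value approaches one.

```python
import numpy as np

# Empirical check that node degrees concentrate around N * p_N.
rng = np.random.default_rng(3)
for N in (200, 2000):
    p_N = 2.0 * np.log(N) / N            # vanishing p_N with N * p_N diverging
    U = np.triu(rng.random((N, N)) < p_N, 1)
    G = U | U.T
    deg = G.sum(axis=1)                  # degrees (self-loops excluded here)
    ratio = (deg / (N * p_N)).mean()     # approaches 1 as N grows
```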
It is also of interest to characterize the asymptotic behavior of $d_{\max}$, which, being the maximum of the $N$ degrees, is expected to grow faster than $N p_N$. However, and interestingly, the following lemma shows that it cannot grow much faster.
Lemma 1 (Behavior of maximal degree)
Under the $\mathcal{G}(N, p_N)$ model we have, for all $i \neq j$, with $g_{ij} = 1$:

$$\mathbb{P}\left[\, d_{\max} > e\, N p_N \,\middle|\, g_{ij} = 1 \,\right] \xrightarrow[N \to \infty]{} 0, \qquad (32)$$

where $e$ is Euler's number.
Proof:
See Appendix B.
According to (32), the maximal degree exceeds the level $e\, N p_N$ with negligible probability, i.e., it cannot grow substantially faster than $N p_N$. We note that the probability in (32) is computed conditionally on the event that two agents interact. This choice is made because, in the following, we need to know the behavior of the maximal degree in relation to interacting agent pairs.
IV-C. Stable Combination Policies
We can now use (32) to characterize the asymptotic behavior of the (off-diagonal) nonzero entries in the combination matrix. In order to illustrate the main idea, we start by examining a popular combination policy, known as the Laplacian rule, which is given by [13]:

$$a_{ij} = \begin{cases} \gamma, & \text{if } i \neq j \text{ and } g_{ij} = 1, \\ 1 - \gamma\,(d_i - 1), & \text{if } i = j, \\ 0, & \text{otherwise}, \end{cases} \qquad (33)$$

for some $0 < \alpha \leq 1$, with $\gamma = \alpha / d_{\max}$. Therefore, from Lemma 1 we can write, for all $i \neq j$ with $g_{ij} = 1$:

$$\mathbb{P}\left[\, N p_N\, a_{ij} \geq \frac{\alpha}{e} \,\middle|\, g_{ij} = 1 \,\right] \xrightarrow[N \to \infty]{} 1. \qquad (34)$$

Equation (34) has the following important implication: for large enough $N$, any nonzero entry of the Laplacian combination matrix, scaled by the factor $N p_N$, stays "almost always" above a certain threshold (namely, the value $\alpha/e$). Therefore, multiplying by the scaling factor $N p_N$ keeps the nonzero entries stable (in the sense that they will not vanish) as $N$ diverges. The same scaling factor is relevant for other combination policies. Therefore, it makes sense to introduce the following general class of combination policies.
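A Laplacian-type assignment is easy to implement. The sketch below is an illustrative variant rather than a verbatim reproduction of the rule above: neighbors receive a common weight proportional to the reciprocal of the maximal degree, and the diagonal absorbs the residual mass so that rows sum to one.

```python
import numpy as np

# Illustrative Laplacian-type combination policy: symmetric, right-stochastic.
def laplacian_rule(G, alpha=1.0):
    G = np.asarray(G, dtype=float)
    deg = G.sum(axis=1) - 1.0                       # neighbors, excluding self
    dmax = deg.max()                                # assumes at least one edge
    A = np.where(np.eye(len(G), dtype=bool), 0.0, alpha * G / dmax)
    return A + np.diag(1.0 - alpha * deg / dmax)    # diagonal absorbs the rest

G = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])
A = laplacian_rule(G)
```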
Combination-policy class $\mathcal{C}_\tau$. A combination policy $\pi$ belongs to class $\mathcal{C}_\tau$ if there exists $\tau > 0$ such that, for all $i \neq j$, with $g_{ij} = 1$:

$$\mathbb{P}\left[\, N p_N\, a_{ij} \geq \tau \,\middle|\, g_{ij} = 1 \,\right] \geq 1 - \delta_N, \qquad (35)$$

where $\delta_N$ goes to zero as $N \to \infty$, and where the probability is evaluated under the $\mathcal{G}(N, p_N)$ model.
Let us now examine the physical meaning of (35) in connection with network tomography applications. For a policy belonging to class $\mathcal{C}_\tau$, we can rephrase (23) as:

$$N p_N\, \hat{b}_{ij} = \begin{cases} \underbrace{N p_N\, b_{ij}}_{\text{not vanishing}} + N p_N\, e_{ij}, & \text{if } b_{ij} > 0, \\ N p_N\, e_{ij}, & \text{if } b_{ij} = 0, \end{cases} \qquad (36)$$

where the qualification of being "not vanishing" is a consequence of (35). In light of (36), if we are able to show that $N p_N\, e_{ij}$ is still a small quantity, then $N p_N\, \hat{b}_{ij}$ would be effectively useful for tomography purposes, because the nonzero entry would stand out from the error floor as $N$ gets large. This heuristic argument will be made rigorous in the proof of Theorem 2, which is presented in the next section.

Before ending this section, we would like to introduce another useful combination policy that belongs to class $\mathcal{C}_\tau$, namely, the Metropolis rule, which is given by [13]:
$$a_{ij} = \begin{cases} \dfrac{1}{\max(d_i, d_j)}, & \text{if } i \neq j \text{ and } g_{ij} = 1, \\[4pt] 1 - \sum_{l \neq i} a_{il}, & \text{if } i = j, \\[2pt] 0, & \text{otherwise}. \end{cases} \qquad (37)$$

Since $\max(d_i, d_j) \leq d_{\max}$, for the Metropolis rule we have the following implication, for all $i \neq j$ with $g_{ij} = 1$:

$$\left\{ d_{\max} \leq e\, N p_N \right\} \;\Longrightarrow\; \left\{ N p_N\, a_{ij} \geq \frac{1}{e} \right\}, \qquad (38)$$

where the latter inequality comes from (37). Now, since for two events $\mathcal{E}_1$ and $\mathcal{E}_2$, the condition $\mathcal{E}_1 \subseteq \mathcal{E}_2$ implies $\mathbb{P}[\mathcal{E}_1] \leq \mathbb{P}[\mathcal{E}_2]$, from (38) we can write:

$$\mathbb{P}\left[\, N p_N\, a_{ij} \geq \frac{1}{e} \,\middle|\, g_{ij} = 1 \,\right] \geq \mathbb{P}\left[\, d_{\max} \leq e\, N p_N \,\middle|\, g_{ij} = 1 \,\right] \xrightarrow[N \to \infty]{} 1, \qquad (39)$$

where the latter inequality follows from Lemma 1. Equation (39) reveals that the Metropolis rule satisfies (35) with the choice $\tau = 1/e$.
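A compact implementation of the Metropolis rule follows. Whether degrees count the self-loop is a convention; in this sketch they do (an assumption of the sketch, not necessarily of the text). The construction yields a symmetric, doubly stochastic matrix.

```python
import numpy as np

# Sketch of the Metropolis combination rule: neighbors i != j receive weight
# 1/max(d_i, d_j); the diagonal absorbs the residual mass.
def metropolis_rule(G):
    G = np.asarray(G, dtype=float)
    deg = G.sum(axis=1)                      # degrees (self-loop counted here)
    W = G / np.maximum.outer(deg, deg)       # 1/max(d_i, d_j) on the edges
    np.fill_diagonal(W, 0.0)
    return W + np.diag(1.0 - W.sum(axis=1))

G = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])
A = metropolis_rule(G)
```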
V. Consistent Tomography
In order to establish whether the estimated matrix, $\hat{B}_{\mathcal{S}}$, can be used to infer the interaction pattern contained in $B_{\mathcal{S}}$, we still need to provide a statistical characterization for the entries of $\hat{B}_{\mathcal{S}}$. We pursue this goal by characterizing the asymptotic behavior of two conditional distributions: the empirical distribution given that two agents do not interact, and the empirical distribution given that two agents interact. In particular, owing to the scaling factor that we obtained in the previous section, we shall focus on the scaled matrix $N p_N\, \hat{B}_{\mathcal{S}}$.
Let us preliminarily introduce the number of non-interacting ($N_0$) and the number of interacting ($N_1$) agent pairs over the observed set, defined respectively as:

$$N_0 = \sum_{\substack{i,j \in \mathcal{S} \\ i \neq j}} \mathbb{I}\{g_{ij} = 0\}, \qquad N_1 = \sum_{\substack{i,j \in \mathcal{S} \\ i \neq j}} \mathbb{I}\{g_{ij} = 1\}, \qquad (40)$$

where $g_{ij}$ is the $(i,j)$-th entry of the interaction submatrix corresponding to the observable agents. Next we introduce the number of entries in $N p_N\, \hat{B}_{\mathcal{S}}$ that stay below some positive level $\theta$, for the case of non-interacting and interacting agent pairs, respectively:

$$N_0(\theta) = \sum_{\substack{i,j \in \mathcal{S} \\ i \neq j}} \mathbb{I}\{N p_N\, \hat{b}_{ij} \leq \theta\}\, \mathbb{I}\{g_{ij} = 0\}, \qquad (41)$$

$$N_1(\theta) = \sum_{\substack{i,j \in \mathcal{S} \\ i \neq j}} \mathbb{I}\{N p_N\, \hat{b}_{ij} \leq \theta\}\, \mathbb{I}\{g_{ij} = 1\}. \qquad (42)$$

Using (40), (41) and (42), we can compute the conditional empirical distributions given that the agents are not interacting and given that they are interacting, defined respectively as:

$$F_0(\theta) = \frac{N_0(\theta)}{N_0}, \qquad F_1(\theta) = \frac{N_1(\theta)}{N_1}, \qquad (43)$$

where $F_0(\theta)$ (resp., $F_1(\theta)$) is conventionally set to $1$ when $N_0 = 0$ (resp., $N_1 = 0$). It is also convenient to introduce the complementary distribution $\bar{F}_1(\theta) = 1 - F_1(\theta)$. Accordingly, the quantity $F_0(\theta)$ represents the fraction of entries in $N p_N\, \hat{B}_{\mathcal{S}}$ that correspond to non-interacting agent pairs and that stay below $\theta$. In contrast, the quantity $\bar{F}_1(\theta)$ represents the fraction of entries in $N p_N\, \hat{B}_{\mathcal{S}}$ that correspond to interacting agent pairs and that stay above $\theta$.
The next theorem establishes achievability of consistent tomography through the asymptotic characterization of the aforementioned empirical distributions.
Theorem 2 (Achievability of consistent tomography)
Let the network interaction profile obey a $\mathcal{G}(N, p_N)$ model, and let the combination policy belong to class $\mathcal{C}_\tau$. Then, for any nonzero fraction of observable agents $\xi$ satisfying (7), we have in the limit as the network size increases ($N \to \infty$): for any $\epsilon > 0$ and for some $\theta > 0$,

$$F_0(\epsilon) \xrightarrow{\text{p}} 1, \qquad \bar{F}_1(\theta) \xrightarrow{\text{p}} 1. \qquad (44)$$
Proof:
See Appendix C.
Theorem 2 reveals that, from the knowledge of the estimated combination matrix, $\hat{B}_{\mathcal{S}}$, it is possible to separate the zero/nonzero entries of the true combination matrix, $B_{\mathcal{S}}$. In fact, we see from Theorem 2 that the following dichotomous behavior is observed, asymptotically with $N$: when agents $i$ and $j$ are not interacting, most of the (scaled) estimated matrix entries, $N p_N\, \hat{b}_{ij}$, stay below any level (and, hence, also below an arbitrarily small value $\epsilon$); when agents $i$ and $j$ are interacting, most of the (scaled) estimated matrix entries, $N p_N\, \hat{b}_{ij}$, stay above a positive value $\theta$. Therefore, a separation between the two classes of non-interacting and interacting agent pairs arises, which translates into the emergence of two separate clusters, one corresponding to the region below $\epsilon$, and the other one corresponding to the region above $\theta$. This situation is illustrated in Fig. 2.
Theorem 2 establishes the separability of the two classes based upon knowledge of $\hat{B}_{\mathcal{S}}$, whose computation requires knowledge of the exact correlation submatrices, $[R_0]_{\mathcal{S}}$ and $[R_1]_{\mathcal{S}}$ — see (15). Since, in principle, such matrices can be estimated with arbitrarily high precision as the steady-state is approached (i.e., as the number of collected diffusion output samples increases), the result in Theorem 2 is an achievability result. In addition, for many of the methods available to estimate correlation matrices, the behavior of the estimation error is known, at least for sufficiently large sample sizes. A useful extension of the present treatment would be examining the interplay between the two sources of error, namely, the error caused by partial observations, and the error caused by estimation of the correlation submatrices.
When some prior knowledge about the value of $\tau$ and the typical number of neighbors, $N p_N$, is available, Theorem 2 provides a constructive recipe for reconstructing the interaction profile across the observed network. In fact, the separation between the two classes of interacting and non-interacting agent pairs can be performed by comparing each entry of $N p_N\, \hat{B}_{\mathcal{S}}$ to an intermediate threshold lying between $\epsilon$ and $\theta$.
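When the thresholds are unknown, the comparison above can be replaced by a universal clustering step. Below is a minimal sketch of one-dimensional 2-means (Lloyd iterations) applied to synthetic stand-ins for the scaled estimated entries; all numerical values are illustrative, not taken from the text.

```python
import numpy as np

# One-dimensional 2-means clustering (Lloyd iterations) on sorted values.
def two_means(values, iters=50):
    v = np.sort(np.asarray(values, dtype=float))
    c = np.array([v[0], v[-1]])                  # initialize at the extremes
    for _ in range(iters):
        labels = np.abs(v[:, None] - c).argmin(axis=1)
        for k in (0, 1):
            if (labels == k).any():
                c[k] = v[labels == k].mean()     # recompute cluster centers
    return labels, c, v

# synthetic example: small error-floor entries vs. entries that stand out
rng = np.random.default_rng(4)
vals = np.concatenate([rng.uniform(0.0, 0.05, 80),   # "non-interacting" pairs
                       rng.uniform(0.3, 0.5, 20)])   # "interacting" pairs
labels, centers, v = two_means(vals)
```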
On the other hand, in many contexts it is unrealistic to assume precise knowledge of the parameters $\tau$ and $p_N$. When such knowledge is not available, the aforementioned reconstruction strategy is not applicable, but the achievability result in Theorem 2 still has a fundamental implication, in that it guarantees the existence of the clustering structure. The existence of two separate thresholds, $\epsilon < \theta$, and the related (asymptotic) separation of the scaled estimated entries into two separate clusters (one below $\epsilon$ and one above $\theta$) opens up the possibility of employing universal and nonparametric pattern recognition strategies to perform cluster separation. In particular, in our numerical experiments, we shall verify the validity of this argument by employing the $k$-means clustering algorithm.

VI. Higher-Order Asymptotic Analysis
As illustrated in the previous section, and as we can infer from Fig. 2, it is undesirable to have $N p_N\, \hat{b}_{ij} > \epsilon$ when $g_{ij} = 0$. For the sake of brevity, let us refer to an agent pair for which such an event occurs as being identified as "mistakenly-interacting". Examining (40), (41) and (43), we deduce that the number of mistakenly-interacting pairs is given by:

$$N_0\, [1 - F_0(\epsilon)] \approx |\mathcal{S}|^2\, [1 - F_0(\epsilon)], \qquad (45)$$

where we used the fact that, under the $\mathcal{G}(N, p_N)$ model, $N_0$ scales as $|\mathcal{S}|^2$, and $p_N$ vanishes as $N \to \infty$.
On the other hand, for the number of truly-interacting agent pairs we have that:

$$N_1 \approx |\mathcal{S}|^2\, p_N. \qquad (46)$$
The fact that vanishes as increases causes the following issue. According to (45)–(46) and also to (84)–(85), without any information about the speed of convergence of (in comparison to ), we cannot exclude that the number of mistakenlyinteracting pairs is larger than the number of trulyinteracting pairs. In order to ward off the presence of such unpleasant feature, we should prove that the quantity goes to zero faster than , formally:
(47) 
a condition for which we will use a compact shorthand. Verification of this requirement is addressed in the forthcoming Proposition 1. In order to prove Proposition 1, we require that the combination policy possess two additional regularity properties, which are again automatically satisfied by the Laplacian and Metropolis rules.
Property P. There exists such that, for all , with :
(48) 
with the inequality being trivially an equality for .
It is useful to compare (35) and (48). Equation (35) means that the nonzero entries in the combination matrix, suitably scaled, do not collapse to zero, i.e., they are stable from below. On the other hand, in view of (31), the upper bound in (48) implies that the same scaled entries are asymptotically stable from above. This is because the two normalizations involved become asymptotically equivalent, so that (48) translates into the desired asymptotic upper bound.
Moreover, it is straightforward to verify that (48) holds for the Laplacian rule, with , as well as for the Metropolis rule, with .
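To make the two rules concrete, here is a minimal sketch of both constructions. It is only illustrative: normalization conventions vary slightly across the literature (e.g., whether the degree counts the agent itself), so the functions below should be read as one common variant rather than the paper's exact definitions.

```python
import numpy as np

def laplacian_rule(G, beta):
    """Laplacian rule on a symmetric 0/1 adjacency matrix G (no self-loops):
    every edge gets the constant weight beta, and the self-weight completes
    each row to one."""
    N = G.shape[0]
    deg = G.sum(axis=1)                       # neighborhood sizes (excluding self)
    C = beta * G.astype(float)
    C[np.arange(N), np.arange(N)] = 1.0 - beta * deg
    return C

def metropolis_rule(G):
    """Metropolis rule: each edge (l, k) gets the weight 1/max(d_l, d_k),
    and the self-weight completes each row to one. The result is symmetric
    and doubly stochastic."""
    N = G.shape[0]
    deg = G.sum(axis=1)
    C = np.zeros((N, N))
    for l in range(N):
        for k in range(N):
            if l != k and G[l, k]:
                C[l, k] = 1.0 / max(deg[l], deg[k])
        C[l, l] = 1.0 - C[l].sum()
    return C
```

Both constructions assign nonzero off-diagonal weights only to interacting pairs, which is the situation addressed by the stability-from-below and stability-from-above discussion around (35) and (48).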
Property P. Consider a certain permutation of the agents, i.e., a renumbering of the rows and columns of the interaction matrix , which leads to a permuted interaction matrix. Such an operation can be represented by means of a permutation matrix (see Appendix E). Property P holds if:
(49) 
namely, if applying the combination policy to the renumbered interaction matrix is equivalent to renumbering (through the same permutation) the initial combination matrix.
The meaning of property P is illustrated in Fig. 3. In the leftmost panel, we represent a network graph, along with the corresponding combination matrix obtained by applying the Metropolis rule. In the rightmost panel, the agents are exchanged as detailed in the figure, using the permutation matrix:
(50) 
Then, the Metropolis rule is applied to the new (i.e., renumbered) interaction matrix. It is seen that the resulting combination matrix corresponds to renumbering the original combination matrix using the permutation (50).
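This equivariance is easy to verify numerically. The sketch below checks condition (49) on a random graph, using a Metropolis construction as the combination policy (a standard variant, restated here so the check is self-contained):

```python
import numpy as np

def metropolis_rule(G):
    """Metropolis combination policy on a symmetric 0/1 adjacency matrix."""
    N = G.shape[0]
    deg = G.sum(axis=1)
    C = np.zeros((N, N))
    for l in range(N):
        for k in range(N):
            if l != k and G[l, k]:
                C[l, k] = 1.0 / max(deg[l], deg[k])
        C[l, l] = 1.0 - C[l].sum()
    return C

rng = np.random.default_rng(0)
N = 8
U = (rng.random((N, N)) < 0.4).astype(int)
G = np.triu(U, 1)
G = G + G.T                          # symmetric interaction profile, no self-loops

perm = rng.permutation(N)
P = np.eye(N)[perm]                  # permutation matrix implementing the renumbering

# Property P, as in (49): applying the policy to the renumbered graph equals
# renumbering the combination matrix obtained from the original graph.
assert np.allclose(metropolis_rule(P @ G @ P.T), P @ metropolis_rule(G) @ P.T)
```

The assertion holds because the Metropolis weights depend only on local degree information, which is preserved under any relabeling of the agents.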
Property P is particularly relevant for the following reasons. First, under the Erdős–Rényi model, the statistical distribution of the interaction matrix is invariant to permutations of the agents. Owing to (49), such invariance is automatically inherited by the combination matrix. Moreover, property P is satisfied by typical combination policies, such as the Laplacian rule and the Metropolis rule. It is also useful to provide a counterexample showing why property P is not always verified. Consider a network with three agents, and with the following interaction matrix:
(51) 
The combination policy of our counterexample works as follows. First, a Metropolis rule is applied. Second, the resulting self-combination weight assigned to one distinguished agent is slightly increased by adding a small extra weight. The extra weight assigned to this agent is then subtracted, in equal parts, from the other nonzero weights of the same agent, in order to preserve the unit row sum. Finally, the other entries of the combination matrix are adjusted so as to make it symmetric and right-stochastic. The final result is (with ):
(52) 
Assume now that agents and are exchanged, which would yield the following interaction matrix:
(53) 
Applying the combination policy described before, we would end up with the following combination matrix:
(54) 
We see that the interaction matrix in (53) is a renumbered version of the interaction matrix in (51), corresponding to exchanging two agents; on the other hand, the combination matrix in (54) is not obtained by applying the same renumbering to the combination matrix in (52). Therefore, property P is violated. While in practice constructing a combination policy that violates property P might look rather artificial, the counterexample shows that, from a purely theoretical viewpoint, property P is not always verified.
The additional properties P and P identify the following class of combination policies.
Combination-policy class. A combination policy belongs to this class if it belongs to the previous class and possesses properties P and P.
Proposition 1, which is stated below, will be proved by resorting to an approximation that we refer to as the independence approximation. More specifically, the proof relies on characterizing the variance of the entries in the error matrix (see Appendix D). Unfortunately, the exact evaluation of the error variance is generally a formidable task. In order to gain insight into the asymptotic behavior of the variance, in the proof of Proposition 1 we make a simplified evaluation by treating the entries of the various matrices involved in the calculations as independent random variables. The accuracy of the results arising from this approximation is tested by means of numerical experiments; see Fig. 8 in Appendix D. A more rigorous analysis would require estimating the order of the error introduced by the independence approximation.
Proposition 1 (Rate of convergence of )
Let the network interaction profile obey a model, and let the combination policy belong to class . Then, the result in (44) holds true because . In addition, for any nonzero fraction of observable agents satisfying (7), and for all , we have:
(55) 
where the approximation in (55) arises from the independence approximation used in the proof.
Proof:
See Appendix D.
VII Illustrative Examples
We now examine the practical significance of the asymptotic results derived in the previous sections, with reference to three combination matrices that are rather popular in the literature on adaptive networks. The first two strategies lead to symmetric combination matrices, which therefore match the hypotheses of our theorems. The third strategy corresponds to an asymmetric combination matrix. Even if the asymmetric case is not contemplated by our theorems, it is relevant in practice and, as we shall see from the forthcoming experiments, consistent tomography works also in this case. The presentation of the examples is organized through the following steps.

We consider first the case where the projections of the correlation matrices onto the observable part of the network are available without error. For this case, we compute the inversion of the observable part, which leads to the estimated matrix, through:
(56) 
We take the off-diagonal entries of the resulting matrix, and apply a k-means clustering algorithm in order to split these entries into two clusters. The cluster with the smallest arithmetic average is labeled as “non-interacting”, while the other cluster is labeled as “interacting”. We remark that such a classification strategy is implemented in a universal, fully nonparametric way.
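Since the task is only to split scalar values into two groups, a one-dimensional 2-means suffices. The following is a minimal self-contained sketch; the experiments could equally rely on a library implementation (e.g., SciPy's k-means routines):

```python
import numpy as np

def two_means_split(values, iters=100):
    """1-D k-means with k = 2. Returns a boolean mask that is True for the
    entries assigned to the cluster with the larger mean, labeled
    "interacting"; the other cluster is labeled "non-interacting"."""
    v = np.asarray(values, dtype=float)
    c = np.array([v.min(), v.max()])          # centroids, initialized at the extremes
    for _ in range(iters):
        labels = np.abs(v[:, None] - c[None, :]).argmin(axis=1)
        for j in (0, 1):
            if np.any(labels == j):
                c[j] = v[labels == j].mean()
    return labels == int(c.argmax())

# Toy illustration: entries concentrated near zero vs. entries bounded away from it.
rng = np.random.default_rng(0)
vals = np.concatenate([0.01 * rng.random(50), 0.5 + 0.05 * rng.random(10)])
mask = two_means_split(vals)
assert mask.sum() == 10                       # the 10 large entries form one cluster
```

Note that the procedure uses no knowledge of the model parameters: only the (asymptotic) existence of two separated clusters is needed.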

Then, we extend the setting to the case where the projections of the correlation matrices are estimated from the diffusion outputs. In the simulations, the observations fed into the diffusion algorithm follow a standard normal distribution. As an estimator, we use the empirical correlation, namely:
(57)
where the rows of the involved matrix are given by:
(58)
with the indices spanning the observable set. The other correlation estimate is computed similarly. We remark that in this work we do not focus on optimizing the performance of the correlation-matrix estimators, since our focus is on ascertaining the fundamental limits of tomography. There is already a considerable body of literature on perfecting correlation estimation from ensemble data. One challenge arising in the network case is the interplay between the matrix size and the number of samples used for the estimation.
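Since the exact expressions in (57)–(58) are not reproduced here, the following sketch shows standard lag-0 and lag-1 empirical correlation matrices restricted to the observable agents; the array shapes and normalizations are illustrative assumptions.

```python
import numpy as np

def empirical_correlations(Y, obs):
    """Empirical correlation matrices over the observable set.

    Y   : array of shape (n_samples, N), one network snapshot per row
    obs : indices of the observable agents

    Returns the lag-0 estimate R0 = (1/n) sum_i y_i y_i^T and the lag-1
    estimate R1 = (1/(n-1)) sum_i y_i y_{i-1}^T, both restricted to obs.
    """
    Yo = np.asarray(Y, dtype=float)[:, obs]
    n = Yo.shape[0]
    R0 = Yo.T @ Yo / n                  # average of outer products at lag 0
    R1 = Yo[1:].T @ Yo[:-1] / (n - 1)   # average of outer products at lag 1
    return R0, R1
```

For stationary diffusion outputs, these sample averages converge to the corresponding ensemble correlations as the number of samples grows.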
Now, using (56) with the true correlation matrices replaced by their estimated counterparts, we get the following estimate:
(59) 
We run the k-means clustering algorithm over the entries of the estimate in (59).
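An end-to-end run of this pipeline can be emulated as follows. Since the precise estimator in (56) is not reproduced here, the sketch substitutes a one-lag (Granger-type) estimate, R̂₁R̂₀⁻¹, for a stable first-order recursion driven by i.i.d. standard normal inputs; the graph parameters, the way the step-size enters (through B = (1 − μ)A), the recursion form, and the full observability of the network are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Erdős–Rényi interaction profile and Metropolis combination matrix.
N, p, mu = 20, 0.3, 0.1
U = (rng.random((N, N)) < p).astype(int)
G = np.triu(U, 1)
G = G + G.T
deg = G.sum(axis=1)
A = np.zeros((N, N))
for l in range(N):
    for k in range(N):
        if l != k and G[l, k]:
            A[l, k] = 1.0 / max(deg[l], deg[k])
    A[l, l] = 1.0 - A[l].sum()

# Assumed stable first-order recursion y_i = B y_{i-1} + x_i, with the
# step-size entering through B = (1 - mu) A so the spectral radius stays
# below one.
B = (1.0 - mu) * A
n = 100_000
Y = np.zeros((n, N))
for i in range(1, n):
    Y[i] = B @ Y[i - 1] + rng.standard_normal(N)

# Empirical correlations and one-lag estimate: for this recursion R1 = B R0,
# hence B_hat = R1 R0^{-1} recovers B as n grows.
R0 = Y.T @ Y / n
R1 = Y[1:].T @ Y[:-1] / (n - 1)
B_hat = R1 @ np.linalg.inv(R0)

# The off-diagonal entries split into two well-separated groups that match
# the true interaction profile.
off = ~np.eye(N, dtype=bool)
edge_vals = B_hat[off & (G == 1)]
null_vals = B_hat[off & (G == 0)]
assert edge_vals.min() > null_vals.max()
```

With this many samples the two groups of off-diagonal entries are perfectly separated, so any two-cluster split recovers the interaction profile exactly.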
VII-A Laplacian Combination Rule
Under the Laplacian combination rule, the off-diagonal combination weights are zero when the agents are not interacting, and are otherwise equal to a constant across the network. Therefore, the weights reflect perfectly the structure of the underlying network graph. In fact, several important properties of the graph are encoded in the properties of the Laplacian matrix [21].
In Fig. 4, leftmost panel, we display the off-diagonal entries of the (scaled) true combination matrix corresponding to the observable network portion. For clarity of presentation, the matrix has been vectorized by means of column-major ordering, and the (vectorized) pairs have been rearranged so that the zero entries appear before the nonzero entries. The same ordering will then be applied to the matrices displayed in the remaining panels, i.e., the horizontal axis is homogeneous across the different panels, so as to allow a fair comparison. If two agents do not interact, the pertinent matrix entry is marked with a blue circle, whereas if they do interact, a red square is used. The observed step-function behavior comes from the fact that, for the Laplacian combination rule, the nonzero weights are constant across the network.
In the middle panel we focus on steps S1) and S2): we display the scaled estimated matrix, computed under perfect knowledge of the correlation matrices, along with the classification performed by the k-means algorithm. In the rightmost panel we focus instead on steps S3) and S4): we display the scaled estimated matrix, computed with the empirical correlation estimates, along with the classification performed by the k-means algorithm. In the latter two panels, matrix entries are marked with different colors and symbols, depending on the results of the k-means clustering algorithm: blue-circle markers if the two agents have been classified as non-interacting, and red-square markers if they have been classified as interacting.
The network considered in Fig. 4 consists of agents, where only agents are observable (). According to the connection properties of the Erdős–Rényi model, see (26), the interaction probability is chosen as . The parameter of the Laplacian combination matrix is . As regards the diffusion algorithm, we choose a typical value for the step-size, i.e., .
Let us begin by examining the output of steps S1) and S2), middle panel. As we can see, the experiments confirm in toto what is expected from the theoretical analysis: the entries of the matrix appear to be well separable, since: (i) the unavoidable error induced by partial observation of the network is relatively small, implying that the zero entries are concentrated around zero; (ii) the nonzero entries of the true combination matrix (leftmost panel) are bounded from below, and, since the error is positive, this fact keeps the nonzero entries of the estimated matrix (middle panel) sufficiently far away from the origin.
Next we move on to steps S3) and S4), namely, to examining the behavior in the presence of imperfect knowledge of the correlation matrices. In particular, we use samples to perform estimation of the correlation matrix from the diffusion output. This situation is examined in the rightmost panel of Fig. 4. By comparison with the middle panel, we see that the estimated clusters are noisier, which makes sense since the procedure is affected by the error in estimating the correlation matrices. This notwithstanding, tomography is still effective, meaning that the number of observations collected for estimating the correlation matrices is sufficiently high. A few classification errors are committed: the red-square markers appearing among the blue-circle markers, and the blue-circle marker appearing among the red-square markers (approximately at position ), represent mistakenly classified agent pairs.
The results arising from the above example are collected, perhaps in a more revealing form, in the inset plots of Fig. 4. The displayed network graphs (corresponding only to the observable subnet) are drawn with the following rules. An edge drawn from to means that agent is influenced (leftmost panel) or is estimated to be influenced (middle and rightmost panels) by agent . When an edge is erroneously detected (i.e., the edge is in fact not present but the tomography algorithm “sees” it), it is marked in magenta. Likewise, when an edge is not detected (i.e., the edge is present but the tomography algorithm misses it), it is marked in cyan. The impact of imperfect knowledge of the correlation matrices can be noticed in the inset plot, where we see that some errors are committed in reconstructing the profile of the observed subnet.
VII-B Metropolis Combination Rule
In Fig. 5, the analysis presented in the previous section is applied to a different combination matrix, namely, the Metropolis combination matrix. This example is useful because, under the Metropolis rule, the nonzero weights are no longer constant, and exhibit a certain variability related to the different neighborhood sizes of the different agents. It is important to examine the impact of such variability on the performance of the tomography algorithm. From the leftmost panel of Fig. 5, we see that the nonzero entries of the true combination matrix indeed vary. This notwithstanding, since the Metropolis rule belongs to the required class, the scaled nonzero entries are still lower bounded in probability. The separability of the clusters is preserved, and considerations similar to those drawn for the Laplacian rule in the previous section apply.
Still, there is an important difference between the two kinds of combination policies. By carefully inspecting the rightmost panel in Fig. 5 and the related inset, we see that, differently from what happened in the rightmost panel of Fig. 4, the network profile is reconstructed perfectly. This suggests that estimation of the correlation matrices is easier for Metropolis rules than for Laplacian rules. This effect can probably be ascribed to the improved convergence properties of Metropolis combination matrices (with respect to Laplacian ones), which should allow a more accurate estimation of the correlation matrices for a given number of samples.
VII-C Uniform Averaging Rule
One of the most important goals of causal analysis is ascertaining whether one agent influences another agent. Such influence is not necessarily symmetric. Although we have obtained an analytical characterization of the error matrix for the case of symmetric combination matrices, it is useful to examine whether the conclusions drawn for the symmetric case find some correspondence in the asymmetric case. To this aim, the network interaction profile is now generated according to a binomial random graph model, , where the variables