## 1 Introduction

Temporal networks are of increasing importance as models for systems across social, biological, and technological domains [1, 2, 3]. Their study is predicated on data, and high-resolution, high-velocity data are becoming more common across fields [4]. In a temporal network, each link turns on and off over time, forming an intricate pattern of dynamic connectivity [5]. In many real-world instances, such as social networks captured from smartphones [6], gene regulatory networks obtained via high-throughput experiments [7], and brain networks measured through neuroimaging [8, 9], these rich dynamics are unavailable. Instead, we can only know the activation patterns of nodes: when a node connects to some other node, but not to which one. In some cases, this is by design: to preserve user privacy, a smartphone maker may prevent an installed application from accessing user communication records [10, 11, 12], but allow that application access to, for example, geolocation [13] or accelerometer signals [14]. In other cases, it may be too costly or otherwise infeasible to monitor the activities of every edge. In many biological systems, we simply lack the experimental tools to directly capture in situ edge activities [7, 15, 9].

While node activities are generally more available in empirical temporal network data, the loss of information relative to edge activities is currently not understood. This leads us to ask the following question: if a researcher loses, or otherwise cannot ascertain, the edge activity data, but does have access to the node activities, how well can they recover the richer edge activity data? Based solely on node activations, this recovery is possible in only a small number of special cases. However, if the network's static structure is available, e.g., measured directly in a separate experiment or inferred via side information, then it may be possible to recover, even if only approximately, the absent edge activity data. Such side information can take the form of additional experiments or joins across datasets. For brain networks, for example, tract-tracing [16] or fluorescence microscopy [17] studies revealing network connectivity can complement functional neuroimaging [9], while for social behavior, data brokers [18] may be able to gather the structure of user social networks by cross-referencing public and private social activity datasets [11, 12].

There is both promise and peril in the recovery of edge activities. In the case of biological systems, this opens up new possibilities for understanding the fundamental dynamics of living cells. In the context of social networks, rich data raises privacy concerns. Data brokers, who may be able to aggregate across both public and nonpublic sources of information, have the potential to circumvent privacy protections and reveal more about individuals than would be possible without such high-resolution data.

In this paper, we make the following contributions. In Sec. 2 we define and motivate the problem of information loss in temporal networks and how to recover it. We apply solution methods to a representative network corpus, including studying how well network analysis tasks can be performed on recovered data (Sec. 3) and better understanding both theoretically and empirically the topological and dynamical sources of recovery error (Sec. 4). We conclude with a discussion providing context for the recovery problem across different scientific domains in Sec. 5, including both the benefits for advancing network measurement and the concerns with protecting privacy raised by this work.

## 2 Information loss and recovery in temporal networks

Suppose a researcher is interested in studying a time-evolving, or temporal, network, fully defined through its edge activations. For many networks, such edge data are not readily available. It may be impossible, costly, or otherwise impractical to monitor the activities of every edge in the network. In some cases, one must even observe every pair of nodes, a costly, quadratic operation. Therefore, in practice one often lacks the ideal edge activity data. We argue that progress on studying the network's time dynamics can still be made, however, by studying the time series of node activities, when available. Node activities, which describe when a node is active but not with whom, are less informative but more readily available, as monitoring a single node is less resource-intensive than monitoring a pair. Further, fewer nodes need to be monitored than pairs, meaning there will be (unless the network is exceptionally sparse) fewer node time series than edge time series. Yet, while node activity data are more practical, the loss of information is typically significant. The amount of information lost depends on network structure, which determines how edge activities are projected down to the node activities. (See Fig. 1A, illustrated using ‘Copenhagen’, the physical proximity component of the Copenhagen Networks Study dataset [19], a smartphone-derived social network that is part of our corpus of temporal networks, described below.)

We now formalize the information loss and recovery problem. Consider a network of $n$ nodes (or vertices) and $m$ edges (or links). Let $X$ be an $m \times T$ matrix of edge activity data, where $X_{(i,j),t}$ is the number of interactions between nodes $i$ and $j$ during timestep $t$ (or during a time window indexed by $t$). Each row of $X$ corresponds to a time series of values for an edge in the network. These time series encode the temporal activities of the network: when edges are active and at what strength. A reduced view of the temporal network is given by the node activity matrix $Y$, an $n \times T$ matrix where each row captures the activity of a corresponding node in the network.

The edge and node activity matrices are related to one another via the graph *incidence matrix* $B$.
The (unoriented) incidence matrix is an $n \times m$ binary matrix that details which edges are incident on each node.
Using $B$, we then have

$$Y = B X. \qquad (1)$$

This relationship shows that, given the network structure, the edge activity matrix $X$ is more fundamental than the node matrix $Y$, as $Y$ can be derived from $X$. Figure 1B illustrates this information loss by the reduction in size as $X$ becomes $Y$ (Eq. 1).
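Equation 1 can be made concrete with a minimal sketch in Python with NumPy (the four-node graph, the activity values, and all variable names here are illustrative, not from the paper's corpus):

```python
import numpy as np

# Hypothetical example graph: 4 nodes and 4 edges.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n, m, T = 4, len(edges), 6

# Unoriented incidence matrix B: B[i, e] = 1 if node i is an endpoint of edge e.
B = np.zeros((n, m))
for e, (i, j) in enumerate(edges):
    B[i, e] = B[j, e] = 1

# Edge activity matrix X (m x T): interaction counts per edge per timestep.
rng = np.random.default_rng(0)
X = rng.poisson(0.5, size=(m, T))

# Node activities follow by projection (Eq. 1): Y = B X.
Y = B @ X

# Each node's time series is the sum of its incident edges' time series.
assert np.array_equal(Y[3], X[3])                # node 3 touches only edge (2, 3)
assert np.array_equal(Y[2], X[1] + X[2] + X[3])  # node 2 touches three edges
```

The assertions make the information loss visible: a node's series aggregates its incident edges' series, discarding which edge contributed each interaction.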

Now, suppose we do not have the edge activities but we do have the node activities. The question becomes: can we recover $X$ from $Y$ using the network structure (Fig. 1B,C)? Equation 1 is a system of linear equations in the standard form $Ax = b$, which can usually be solved by standard methods such as Gaussian elimination. However, $B$ is a rectangular matrix and, except for certain edge cases such as a network with no cycles, Eq. 1 is an underdetermined system. This means there will be either no solutions or an infinite number of solutions $\tilde{X}$ such that $\| B \tilde{X} - Y \|_F = 0$ ($\| \cdot \|_F$ denotes the Frobenius norm). From this, one can conclude that it is generally not possible to fully recover $X$ given $Y$ and $B$. However, while exact recovery of $X$ may not be possible, approximations may still be, and, if accurate (i.e., if $\hat{X} \approx X$), they may be sufficient for subsequent network analysis tasks (see also Sec. 3).

The classical approach to solving the underdetermined system (1) is to find the least-norm solution [20]. While in general $B^{-1}$ does not exist, the pseudoinverse $B^{+}$ can be computed efficiently, using, for instance, the singular value decomposition (SVD), and then used to find the minimum- or least-norm solution $\hat{X}_{\mathrm{ln}} = B^{+} Y$. We show below (Sec. 4) that the accuracy of information recovery using the least-norm solution depends on a combination of topological and dynamical sparsity. Sparsity is also the fundamental drawback of least-norm solutions: they do not promote sparse solutions, meaning that $\hat{X}_{\mathrm{ln}}$ will be a dense matrix, which is unlikely to be a good representation of $X$ unless all edges are active at all times. A period of little network activity will correspond to many zeros within the time series, and sparsity in the time series may be intermixed with sparsity in the network structure.
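A sketch of least-norm recovery on a hypothetical toy graph (names and values illustrative) shows both its appeal and its drawback: the pseudoinverse solution reproduces the node activities exactly, yet differs from the true, sparse edge activities:

```python
import numpy as np

# Hypothetical toy graph: 4 nodes, 5 edges (m > n, so Eq. 1 is underdetermined).
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)]
n, m = 4, len(edges)
B = np.zeros((n, m))
for e, (i, j) in enumerate(edges):
    B[i, e] = B[j, e] = 1

# A sparse "true" edge activity matrix (5 edges x 3 timesteps).
X = np.array([[1., 0., 0.],
              [0., 2., 0.],
              [0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])
Y = B @ X                        # observed node activities

# Least-norm recovery via the Moore-Penrose pseudoinverse (SVD under the hood).
X_ln = np.linalg.pinv(B) @ Y

assert np.allclose(B @ X_ln, Y)  # reproduces the node activities exactly...
assert not np.allclose(X_ln, X)  # ...but is not the true, sparse X
```

Because the system is consistent, the residual is zero by construction; the recovery error hides in the null space of $B$ and cannot be seen from $Y$ alone.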

To address this, sparsity-promoting solution techniques, optimization problems that explicitly reward zero values in solutions, have in recent years become a pillar of statistical learning methods [21, 22, 23, 24]. Thus, to improve on the classical approach, we formulate finding sparse solutions to Eq. 1 as an optimization problem whose objective function consists of a least-squares error term and a regularization or penalty term that promotes zero values in discovered solutions (Supporting Information (SI) LABEL:si:eqn:optimization). This optimization problem can be interpreted as a (multi-target) Lasso regression [21] without an intercept term and with an added constraint enforcing nonnegative regression coefficients, capturing the properties of an edge activity matrix. We use an $\ell_{2,1}$ regularization to find sparse solutions for multiple problems (columns of $X$ and $Y$) jointly by treating each edge as a group over time [25, 26]. Further, this formulation is related to the problem of compressed sensing [27, 28]. In our case, solutions were found using coordinate descent; see SI LABEL:si:sec:findingsparsesolutions for full details, including implementation details and Bayesian selection of the regularization hyperparameter.¹

¹ An implementation is available at github.com/bagrow/recovering-information-temporal-networks.
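As a simplified sketch of sparse recovery (not the paper's exact method: here we fit an independent nonnegative Lasso per timestep with a fixed, hand-picked regularization strength via scikit-learn, rather than the grouped $\ell_{2,1}$ penalty with Bayesian hyperparameter selection described above; the toy graph and values are illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical toy graph (4 nodes, 5 edges) and sparse true activities.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)]
B = np.zeros((4, 5))
for e, (i, j) in enumerate(edges):
    B[i, e] = B[j, e] = 1

X = np.array([[1., 0., 0.],
              [0., 2., 0.],
              [0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])
Y = B @ X

# One nonnegative Lasso per column (timestep): no intercept, coefficients >= 0.
X_sp = np.zeros_like(X)
for t in range(X.shape[1]):
    model = Lasso(alpha=0.05, positive=True, fit_intercept=False, max_iter=10000)
    model.fit(B, Y[:, t])
    X_sp[:, t] = model.coef_

print(np.round(X_sp, 2))  # a nonnegative, sparse approximate solution
```

The value `alpha=0.05` is purely illustrative; in the paper the regularization hyperparameter is selected by a Bayesian procedure (see SI).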

To understand how well temporal network information can be recovered using the least-norm and sparse methods, we assembled a temporal network corpus consisting of five networks representing a range of different systems.
The first, ‘Copenhagen’, used in Fig. 1 to illustrate the information loss and recovery problem, is a social network derived from smartphone Bluetooth proximity data that serve as a proxy for face-to-face interactions [19, 29].
The remaining networks are
‘Hospital’, another proximity-based social network derived from wearable sensors carried by healthcare workers [30];
‘Ant Colony’, a physical interaction network taken from manually-annotated video footage of a *Temnothorax rugatulus* colony [31];
‘Manufacturing Email’, an interaction network derived from internal emails sent between employees of a mid-size manufacturing firm [32];
and ‘College Message’, a social network derived from messages sent between members of an online community of University of California, Irvine college students [33].
These networks cover a span of time scales, from minutes (Ant Colony) to days (Hospital), weeks (Copenhagen) and months (Manufacturing Email, College Message).
Full details for all networks, including data processing steps and network statistics, are given in SI LABEL:si:sec:dataset.

For each network, information loss (replacing $X$ with $Y$) was simulated using Eq. 1. Figure 2 shows heatmaps of the node activity and edge activity matrices for each network. In the figure, rows of the node and edge matrices are drawn to scale to illustrate the extent of information loss across the corpus. For each network, we report $n$ and $m$, the numbers of nodes and edges; their ratio $m/n$ describes the "aspect ratio" of the incidence matrix $B$ and reflects how much information is lost when moving from $X$ to $Y$ and, consequently, how challenging we may expect the recovery problem to be.

Figure 2 also shows the $\hat{X}$ recovered from $Y$ for both the least-norm and sparse solution methods discussed in Sec. 2. (All matrices in a given row in Fig. 2 use the same colorbar.) Comparing these recovered matrices to $X$ shows that both approaches capture the qualitative overall features of $X$. However, the sparse solution is far closer in appearance to the true $X$.

In particular, the sparsity pattern of $X$ (zero entries are illustrated with white in Fig. 2) is almost entirely absent in the least-norm solutions while being captured well in the sparse solutions. Qualitatively, visual comparison of the sparse $\hat{X}$ and $X$ demonstrates the potential for quite good information recovery. Quantitatively, Table 1 measures the recovery accuracy by reporting the correlations between the entries of $X$ and $\hat{X}$. We report both Pearson and Spearman correlations to capture both linear and nonparametric associations. Despite the challenge we would expect when attempting to recover the lost temporal information, many networks show a high correlation between the original $X$ and the recovered $\hat{X}$. One exception is Manufacturing Email, where both solution methods achieved notably lower correlations than on the other networks. Even though it is not the largest network in the corpus, Manufacturing Email is the densest. This relatively high density leads to increased aggregation of the edge time series, and thus we expect that undoing the subsequent information loss will be more difficult.

| Network | Pearson (ln) | Pearson (sparse) | Spearman (ln) | Spearman (sparse) |
|---|---|---|---|---|
| Copenhagen | 0.7660 | 0.5060 | 0.4512 | 0.7308 |
| Hospital | 0.5494 | 0.5752 | 0.4067 | 0.6004 |
| Ant Colony | 0.5698 | 0.8432 | 0.2727 | 0.6844 |
| Manufacturing Email | 0.4275 | 0.2650 | 0.3957 | 0.3879 |
| College Message | 0.7104 | 0.8788 | 0.2508 | 0.7560 |

We further explore the effects of network structure and the intersecting roles of topological and dynamical sparsity in Sec. 4.

## 3 Network tasks after recovery

The recovered edge activity data will only be useful if they support subsequent analysis tasks. Here we examine how well the recovered data perform on two exemplar tasks: estimating tie strength from temporal activity and extracting the network's multiscale backbone [34].

Tie strengths, typically represented with edge weights $W_{ij}$ accounting for the total quantity of interactions between nodes $i$ and $j$, are captured by summing over time periods: $W_{ij} = \sum_t X_{(i,j),t}$. When only $Y$ is available, how well does an estimate $\hat{W}$ approximate $W$? We computed $\hat{W}$ using three methods. The first, a baseline, uses a linear kernel to bypass the computation of $\hat{X}$ and compute $\hat{W}$ directly from $Y$: $\hat{W}_{ij} = \mathbf{y}_i \mathbf{y}_j^{\top}$, where $\mathbf{y}_i$ is the row vector corresponding to node $i$ in $Y$. The second and third methods estimate $\hat{X}$ using the least-norm and sparse solutions, respectively, then sum over time periods to derive $\hat{W}$. After computing each $\hat{W}$ from $Y$, we compare it to $W$ derived directly from $X$.

As shown in Fig. 3, good recovery of tie strength is possible across the network corpus. In particular, the sparse solution infers the underlying tie strength more accurately than either the node-based baseline or the least-norm approach. Three networks show high correlations, and even the most difficult network, Manufacturing Email, performs better than may be expected from Table 1. While all networks show high recovery performance, the sparse solutions are especially accurate for the Copenhagen, Ant Colony, and College Message networks, achieving a 23.6% (Copenhagen) to 45.4% (Ant Colony) improvement in correlation compared to the least-norm approach.

While the high performance of the sparse solution naturally follows from its accuracy when inferring $\hat{X}$, in some ways it is also surprising. The sparsity-promoting penalty induces a bias in the solution, and we might expect that bias to reduce the accuracy of the aggregated quantity, as opposed to the least-norm solution, where over- and under-estimates may cancel out during summation. That correlations remain high despite this bias further underscores the usefulness of sparse estimation.
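The three tie-strength estimators can be sketched on a hypothetical toy graph (all names and values illustrative; NumPy's `pinv` stands in for the full least-norm pipeline, and the sparse route is omitted for brevity):

```python
import numpy as np

# Hypothetical toy graph and random edge activities.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)]
n, m, T = 4, 5, 20
B = np.zeros((n, m))
for e, (i, j) in enumerate(edges):
    B[i, e] = B[j, e] = 1

rng = np.random.default_rng(2)
X = rng.poisson(0.4, size=(m, T)).astype(float)
Y = B @ X

# True tie strengths: each edge's time series summed over time.
w_true = X.sum(axis=1)

# Baseline (linear kernel): estimate tie strength directly from the node
# series, w_ij ~ <y_i, y_j>, bypassing recovery of X entirely.
w_kernel = np.array([Y[i] @ Y[j] for i, j in edges])

# Recovery route: least-norm X-hat, then aggregate over time.
w_ln = (np.linalg.pinv(B) @ Y).sum(axis=1)

# The least-norm estimate conserves total activity: the all-ones edge
# vector lies in the row space of B (take the value 1/2 at every node).
assert np.isclose(w_ln.sum(), w_true.sum())
print(np.corrcoef(w_true, w_kernel)[0, 1], np.corrcoef(w_true, w_ln)[0, 1])
```

The final conservation property is a small structural consequence of Eq. 1, not a claim about accuracy: least-norm recovery preserves the total number of interactions even when it misallocates them across edges.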

Our second example of a post-recovery task is to extract a network’s multiscale backbone [34].
The multiscale backbone represents the central or key set of edges undergirding the network’s topology.
The multiscale backbone is found by examining the local distribution of edge weights [34].
Specifically, an edge $(i, j)$ incident to node $i$ is flagged as belonging to the backbone if $(1 - W_{ij}/s_i)^{k_i - 1} < \alpha$, where $s_i = \sum_j W_{ij}$ is the strength of node $i$, $k_i$ is the degree of $i$, and $\alpha$ is the multiscale backbone strength parameter.
(Technically, edge $(i, j)$ is retained if this inequality is satisfied at either $i$ or $j$; see Serrano *et al.* [34] for details.)
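The backbone test can be sketched as follows (a simplified, hypothetical implementation of the disparity filter; `backbone_edges` is our own naming, and degree-one endpoints are skipped rather than handled as in the original reference):

```python
import numpy as np

def backbone_edges(edges, weights, n, alpha=0.05):
    """Sketch of the multiscale (disparity) backbone test: keep edge (i, j)
    if (1 - W_ij / s_i)^(k_i - 1) < alpha at either endpoint. Degree-one
    endpoints are skipped in this simplified version."""
    k = np.zeros(n)   # node degrees
    s = np.zeros(n)   # node strengths
    for (i, j), w in zip(edges, weights):
        k[i] += 1; k[j] += 1
        s[i] += w; s[j] += w
    keep = []
    for (i, j), w in zip(edges, weights):
        if any(k[u] > 1 and (1 - w / s[u]) ** (k[u] - 1) < alpha for u in (i, j)):
            keep.append((i, j))
    return keep

# A star with one dominant edge: only the heavy edge survives the filter.
star = [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5)]
print(backbone_edges(star, [10, 1, 1, 1, 1], n=6))  # → [(0, 1)]
```

The star example shows the filter's local nature: the heavy edge is significant relative to node 0's strength, while the uniform light edges are consistent with random weight assignment and are dropped.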

For each network, we compare the backbone inferred from the original data with the backbone found on the recovered data. To do so, we interpret backbone discovery as a binary classification or partitioning of edges: each edge is flagged as either a backbone edge or a non-backbone edge, and we compare these classifications using false positive and true positive rates. However, there is an additional layer of complexity: as the multiscale backbone method depends on the parameter $\alpha$, the "true" solution (i.e., the solution found on $X$) will depend on $\alpha$, and an experimenter may not know this value when working with $\hat{X}$. We therefore distinguish between two parameters by denoting $\alpha$ as the value used on $X$ and $\alpha'$ as the value used on $\hat{X}$, and we consider both cases $\alpha = \alpha'$ and $\alpha \neq \alpha'$ when comparing results.
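Comparing an oracle backbone to one found on recovered data then reduces to counting classification outcomes over edges; a minimal sketch (hypothetical helper, assuming both positive and negative classes are nonempty):

```python
def fpr_tpr(true_backbone, found_backbone, all_edges):
    """Backbone detection as binary edge classification: the backbone found
    on the true X defines the positive class; the backbone found on the
    recovered data supplies the predicted labels."""
    tb, fb = set(true_backbone), set(found_backbone)
    tp = len(tb & fb)                      # backbone edges correctly found
    fp = len(fb - tb)                      # spurious backbone edges
    fn = len(tb - fb)                      # backbone edges missed
    tn = len(set(all_edges) - (tb | fb))   # correctly excluded edges
    return fp / (fp + tn), tp / (tp + fn)

all_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
fpr, tpr = fpr_tpr([(0, 1), (1, 2)], [(0, 1), (2, 3)], all_edges)
print(fpr, tpr)  # → 0.3333333333333333 0.5
```

Sweeping the threshold parameter on the recovered data while holding the oracle fixed traces out the ROC curves summarized by AUC below.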

To illustrate the extracted backbones, we begin by drawing the Copenhagen network twice in Fig. 4A, once highlighting the edges of the "oracle" backbone computed using $X$ and again highlighting the backbone found using the sparse $\hat{X}$. While there are some visual discrepancies between the two backbones, overall they agree considerably. Moving to a quantitative assessment, in Fig. 4B we plot the Receiver Operating Characteristic (ROC) curve comparing $X$-backbones and $\hat{X}$-backbones on the Copenhagen network. The dashed line illustrates the expected classifier performance if randomly labeling edges as backbone or non-backbone; we see generally far better performance (as typified by low false positive rates and high true positive rates) regardless of $\alpha$ and $\alpha'$.

Moving beyond the Copenhagen network, Fig. 4C shows classifier performance across the entire corpus, for both the least-norm and sparse recovered $\hat{X}$. We simplify the ROC curve by plotting the AUC (area under the ROC curve) as a function of $\alpha$ (individual ROC curves for all methods and networks are shown in LABEL:si:sec:multiscalebackbone, LABEL:si:fig:compare-ROC-curves-edgeweight). Overall, we observe high AUC values, far from the AUC = 1/2 value for a null classifier. Further, for any given network and value of $\alpha$, the sparse solution performs better at classifying backbone edges than the least-norm solution. Performance degrades as $\alpha$ increases, when edges are less selectively added to the "true" backbone, and improves considerably at lower $\alpha$, when edges are more selectively added. This trend indicates that the strongest, and thus most important, members of the multiscale backbone can be inferred accurately even when the detailed information within $X$ is lost or absent.

## 4 Understanding recovery errors

We now explore the errors made when recovering $X$ using the classic least-norm and modern sparsity-promoting solution methods applied above. Using our network corpus, we also relate information loss to different aspects of network structure and temporal activity.

It is instructive to first consider the classic approach to solving the underdetermined system (Eq. 1). Using the pseudoinverse to compute the least-norm solution will not be perfectly accurate:

$$\hat{X}_{\mathrm{ln}} = B^{+} Y = B^{+} B X \neq X, \qquad (2)$$

because $B^{+} B \neq I$: $B^{+}$ is a right-inverse, not a left-inverse, of $B$. Further, since $\| B \hat{X}_{\mathrm{ln}} - Y \|_F = 0$, we cannot characterize the solution using the residual. Instead, we consider in theory the difference between $\hat{X}_{\mathrm{ln}}$ and $X$. We show (LABEL:si:sec:leastnormerrorbound) that

$$\| X - \hat{X}_{\mathrm{ln}} \|_F \leq \| X \|_F \sqrt{1 - \frac{n}{m}}, \qquad (3)$$

where $\| \cdot \|_F$ is the Frobenius norm. In other words, the error bound Eq. 3 tells us that how near or far our recovered edge activity matrix will be from the actual edge activity matrix depends on the product of two terms: the norm of the true solution and the density of the network, specifically $\sqrt{1 - n/m}$.

Both terms on the right of Eq. 3 relate the difficulty of the recovery problem to the sparsity of the data. Denser networks will have larger $m$ relative to $n$, leading to a wider $B$. Likewise, networks with denser temporal activity will have more edges active at any given time, leading to fewer zero elements in $X$ and potentially larger values for non-zero elements, both of which contribute to a larger $\| X \|_F$. As either form of sparsity decreases, the gap between $\hat{X}_{\mathrm{ln}}$ and $X$ is likely to grow, leading to larger recovery errors.
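The structure of the least-norm error can be verified numerically: since $B^{+} B$ is the orthogonal projector onto the row space of $B$, the error $X - \hat{X}_{\mathrm{ln}}$ is exactly the component of $X$ in the null space of $B$, which is larger for denser networks. A sketch on a hypothetical toy graph:

```python
import numpy as np

# Hypothetical toy graph whose incidence matrix B has a 1-dimensional null space.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)]
B = np.zeros((4, 5))
for e, (i, j) in enumerate(edges):
    B[i, e] = B[j, e] = 1

rng = np.random.default_rng(3)
X = rng.random((5, 3))               # arbitrary "true" edge activities

P = np.linalg.pinv(B) @ B            # orthogonal projector onto row space of B
X_ln = P @ X                         # least-norm recovery of X from Y = B X

E = X - X_ln                         # recovery error
assert np.allclose(B @ E, 0)         # the error lives in the null space of B
# Pythagoras in the Frobenius norm: ||X||^2 = ||X_ln||^2 + ||E||^2.
assert np.allclose(np.linalg.norm(X) ** 2,
                   np.linalg.norm(X_ln) ** 2 + np.linalg.norm(E) ** 2)
```

Because the error is invisible to $B$, no amount of node data can reduce it; only knowledge (or assumptions, such as sparsity) about $X$ itself can.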

Turning from the least-norm solution, we now focus on the sparse solution method. Sparse estimation techniques have a rich history of theoretical study [35, 22, 26, 24]. Here it is key to understand the shape of the optimization problem's loss function. If the loss function is strongly convex, then a single minimum exists and, for optimizations such as ours, theoretical bounds can be placed on the estimation error. If, however, the loss function is not strongly convex but only convex, then there will exist directions along which the loss is flat, and the optimization solution can become arbitrarily far from the minimum. For our optimization, at timestep $t$, i.e., column $t$ of $X$ and $Y$, the least-squares loss is always convex. It is strongly convex when the Hessian matrix for this loss, $B^{\top} B$, has eigenvalues bounded away from zero. Unfortunately, a Gram matrix of the form $B^{\top} B$, where $B$ is $n \times m$, will have rank at most $n$ and so will be rank-deficient in the high-dimensional setting ($m > n$) [24]. Standard practice is then to seek weaker requirements than strong convexity for the loss function, known as restricted strong convexity [36], where strong convexity is required only for a subset of estimation error vectors. For a linear model, this means that we seek to lower bound the restricted eigenvalues (RE) of $B^{\top} B$ [37]. If zero is a restricted eigenvalue of $B^{\top} B$, we will be unable to guarantee a bound on the estimation error, although in practice it is still possible to find good solutions [22].

By combining results from the literature on sparse statistical learning with results from spectral graph theory, we prove (LABEL:si:sec:networkstructureanderrorbounds) that RE holds when the smallest eigenvalue of the adjacency matrix of the network's *line graph* is greater than $-2$. This occurs only when every connected component of the network is a tree or contains exactly one cycle, of odd length [38]. Intuitively, this makes sense: the incidence matrix of a tree, for instance, will be narrower than it is wide ($m = n - 1 < n$) and the system (Eq. 1) will not be underdetermined. Yet this is also a strong requirement: no networks in the corpus are close to being trees. However, what matters for RE to hold is the network's structure at each time step, not the overall structure (see SI), and some networks, Ant Colony and College Message in particular, often meet this requirement (LABEL:si:fig:mineigenvalueBmatTBmat). And, as in many practical situations, even without the theoretical guarantee, we have observed successful use of recovered data in practice (Figs. 3 and 4 above).

Beyond the theoretical analysis, let us examine the estimation error between $X$ and $\hat{X}$ empirically, using our network corpus. In Fig. 5A,B we plot the (relative) estimation error at time $t$ as a function of the active node and active edge fractions, respectively. Both active fractions strongly correlate with estimation errors across the network corpus. Somewhat surprisingly, we find comparable correlations with error for both the active node and active edge fractions. We expand further on this correlation in SI LABEL:si:tab:correlations. Being able to use the node activity to judge the magnitude of errors just as well as the edge activity is valuable because in practice the number of active edges will not be known, but the number of active nodes will be. A strong correlation here allows a researcher to anticipate the magnitude of errors made during recovery, an important step to support their investigations in practice, when true solutions are unknown. In that regard, even a simple measure of overall density, $m/n$, which is also the "aspect ratio" of $B$, is a good predictor of the average estimation error (Fig. 5B inset).
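The line-graph eigenvalue condition from the theoretical analysis above can be checked numerically; a sketch using NetworkX, with a path graph and small cycles as illustrative examples:

```python
import networkx as nx
import numpy as np

def min_line_graph_eigenvalue(G):
    """Smallest adjacency eigenvalue of the line graph of G."""
    A = nx.to_numpy_array(nx.line_graph(G))
    return np.linalg.eigvalsh(A).min()

# A tree satisfies the condition (smallest eigenvalue > -2)...
print(min_line_graph_eigenvalue(nx.path_graph(5)))    # about -1.618

# ...as does a single odd cycle...
print(min_line_graph_eigenvalue(nx.cycle_graph(3)))   # -1.0

# ...but an even cycle reaches -2, violating it.
print(min_line_graph_eigenvalue(nx.cycle_graph(4)))   # -2.0
```

In practice one would apply this check to the active subgraph at each timestep, since, as noted above, it is the per-timestep structure that determines whether the RE condition holds.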

Further examining the scatter plots in Fig. 5A,B we see that the two sparsest networks, Ant Colony and College Message, have lower active fractions and lower recovery errors. Conversely, the densest network, Manufacturing Email, has a very high active node fraction and high estimation errors. Copenhagen occupies an interesting middle ground, with a bimodality in active fraction and estimation error: there are time periods where the network is strongly activated and recovery is difficult, and other periods where the network's activity is sparse and recovery is less error-prone. Examining the distributions of active fractions over time (Fig. 5C,D) we see that only a minority of edges are active at a given time (C) but those edges land upon and activate a larger fraction of the nodes (D). This is most extreme for Manufacturing Email, where typically fewer than 20% of edges being active gives rise to more than 60% of nodes being active.

While the distributions of node and edge activity fractions are useful to study independently, the interaction of the two also matters. In particular, do active edges follow a pattern in their distribution over the network, or are they effectively randomly distributed? In Fig. 5E we plot, for each network, the number of active nodes $n_t$ vs. the number of active edges $m_t$ over time. Along with these scatter plots we include a randomized null model where we selected the same number of distinct edges uniformly at random and then determined how many unique nodes were incident upon that set of edges. For $m_t$ small relative to $m$, we expect $n_t \approx 2 m_t$, as it will be unlikely for two edges chosen at random to fall on a common node (cf. graph matchings [39]). Indeed, this appears to hold for Ant Colony and, to a lesser extent, College Message; both are well described by the randomized null model. In comparison, Copenhagen and Hospital, both proximity-based social networks, stay close to the null model only for the smallest values of $m_t$. At higher values of $m_t$, both deviate significantly, indicating that edge activations cluster around a comparatively smaller set of nodes. This clustering of active edges makes the recovery problem more challenging. Manufacturing Email shows similar clustering but to a lesser extent, while the high number of nodes active at the same time (cf. Fig. 5D) makes recovery a challenge. Notice that transforming from $m_t$ and $n_t$ to the active fractions $m_t/m$ and $n_t/n$ introduces a scale-dependency related to the overall density of the network: $n_t \approx 2 m_t$ becomes $n_t / n \approx \langle k \rangle m_t / m$, where $\langle k \rangle = 2m/n$ is the average degree of the network and, further, is twice the aspect ratio of $B$. In many ways, the steepness of the curves in panel E at small $m_t$ summarizes the topological difficulty of the recovery problem.
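The randomized null model can be sketched as follows (hypothetical function name; the path graph is an illustrative sparse example):

```python
import random

def expected_active_nodes(edges, m_t, trials=2000, seed=0):
    """Null model: draw m_t distinct edges uniformly at random and count the
    unique nodes they touch, averaged over many trials."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        sample = rng.sample(edges, m_t)
        total += len({node for edge in sample for node in edge})
    return total / trials

# On a sparse graph, a few random edges rarely share an endpoint, so the
# expected number of active nodes stays close to 2 * m_t.
path_edges = [(i, i + 1) for i in range(200)]    # long path: 201 nodes
print(expected_active_nodes(path_edges, m_t=3))  # just under 6
```

Comparing a real network's observed $(m_t, n_t)$ pairs against this curve reveals whether active edges cluster on a small set of nodes more than chance would predict.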

Another view of how edge activations distribute over nodes is given in Fig. 5F, where we study the fraction of times a node is active as a function of its degree. As only a single incident active edge is needed to activate a node, we anticipate that high-degree hub nodes will be disproportionately activated compared to low-degree nodes. Indeed, all networks in the corpus exhibit this trend. We also observe that the steepness of this trend correlates well with the difficulty of the recovery problem. For comparison, each scatter plot in Fig. 5F also shows two null models (curved lines) capturing the expected proportion of times active if edges are activated at random. The first null model (dashed lines) preserves the overall number of active edges (the number of nonzero elements in $X$) while the second, stricter model (solid lines) preserves the number of active edges at each time (the number of nonzero elements in each column of $X$). Compared to the actual times active, some networks are well explained by one null model (Manufacturing Email, College Message) or even both (Ant Colony). Other networks (Copenhagen, Hospital) are not explained by either null model, indicating a non-uniformly random distribution in how active edges appear across high- and low-degree nodes. Copenhagen is especially interesting to point out, as the bimodality seen above (Fig. 5A,B) manifests here as two regimes of nodes, one active far less often than expected, and another active as much as or more often than expected from the null models.

Lastly, we examine a number of other dynamical and network properties in Fig. 5G. Figure 5G1 shows distributions of the total activity per timestep, describing the (unnormalized) magnitudes of dynamical activity we can expect on edges in the network; Copenhagen, the largest network, exhibits a few periods of very high activity but otherwise exhibits a typical scale similar to Hospital. Hospital, however, exhibits a wide, nearly uniform spread of low and high activity. Other networks, especially the sparse Ant Colony, show less activity per timestep. Next, supplementing the distributions of active fractions (Fig. 5C,D), Fig. 5G2 and Fig. 5G3 show the distributions of times active for both nodes and links. These follow the active fractions closely, but two networks, Copenhagen and Manufacturing Email, have subsets of frequently active edges not apparent in the active-fraction distributions shown in Fig. 5C. Another measure of dynamical network density, the temporal mean degree, is shown in Fig. 5G4 (excluding times with no activity). Here we can see that Hospital and Manufacturing Email have similarly high densities, but Manufacturing Email is consistently dense while Hospital has periods of high and low density, in line with the spread of estimation errors seen in Fig. 5A,B. Finally, we report in Fig. 5G5 the degree assortativities [40] of each network over time, where $G_t$ is the network of active nodes and edges at time $t$. Most networks tend to be degree disassortative at any given time (with Copenhagen being a notable exception), meaning that links tend to form between high- and low-degree nodes. Often the time-dependent network $G_t$ is more disassortative than the static, cumulative network, with Copenhagen again being an exception. See SI LABEL:si:subsec:comparisonNetworkFeatures, LABEL:si:fig:networkpropertiesanderrors for further comparison between network features and recovery errors. In general, the interplay between static and time-varying network properties (Fig. 5G), coupled with non-random patterns of edge activations (Fig. 5E,F), will influence the difficulty of the recovery problem, as seen here in the per-timestep estimation errors (Fig. 5A,B).

## 5 Discussion

We have studied the problem of information loss in temporal network data. We asked a commonly occurring question: when detailed edge activity data are unavailable but node activities are available, can edge activities be recovered from the node activities and the static network itself? This recovery problem (generally) maps onto an underdetermined linear system governed by the network structure, and we examined how to recover the lost information using both classical and modern approaches to solving underdetermined systems. Using both theoretical analyses and empirical investigations, we showed that the difficulty of information recovery is governed by a combination of the topological and dynamical properties of the network being investigated. We found that temporal networks often have a high degree of dynamic recoverability, with surprisingly good recovery performance for multiple networks, underscoring the importance of knowing the network structure: a good picture of network structure makes it easier to determine when recovery will be accurate and when it will be inaccurate. In particular, the density of the network structure challenges recovery, with denser networks making recovery more error-prone. Conversely, dynamical sparsity, where less network activity tends to occur, can make recovery more accurate. These competing forms of density and sparsity can be exploited, and we found that sparsity-promoting methods applied to the recovery problem find good solutions, sometimes surprisingly so, both for recovery itself and for informing subsequent network analysis tasks that depend on the recovered information.

Our results point towards a number of questions that warrant further study. The recovery problem is an inference problem, not prediction, but it would be interesting to see how well past recovered activities can predict future edge activities. As recovery errors already occur without prediction, we expect compounding errors when trying to predict, unless the system under study happens to exhibit stationarity. Likewise, what are the effects of measurement error? In other words, how much more difficult is recovery when the incidence matrix $B$ is not known perfectly? This is an important direction for future work, and errors-in-variables methods are the natural starting point. Although this problem may be especially challenging, as sparse estimators are known to be unstable under uncertainty, the use of specialized estimators may be fruitful [41]. Local recovery, where only subgraph dynamics are considered, is also worth investigating: local problems are often sufficient around network regions of interest, and those subgraphs may be more tractable than the global network, both from a data collection perspective and from the perspective of the recovery problem.

This work underscores how important it is, across a variety of problem domains, to know the underlying network structure, as the incidence matrix feeds into the solution methods we consider. But this raises an important question: what are the scientific consequences and broader impacts, both positive and negative, that may arise as a result of gathering static network data and then recovering, with perhaps unexpected accuracy, temporality?

We discuss two scenarios. First, consider an application (app) maker distributing their app on smartphones. Smartphones contain many sensors and collect much sensitive data [42, 14], such as location information [13], but also have privacy safeguards in place [43]. While smartphones are a potentially rich source of social network and human behavioral data, outside their own app the maker will not have access to sensitive information such as communication records. Now suppose that app maker requests access to user address books. These address books constitute local, egocentric snapshots of the users’ static social network, and are especially informative after disambiguation and record linkage across datasets [44]. When combined with sensor data or other records, these static network snapshots can therefore be used to infer not just the intrinsic activities of users (the node activities) but potentially even the activities of specific social ties (the edge activities). While inferences along these lines will certainly require significant data, and are of particular interest in certain use cases, such as inferring the illicit communications of criminals, this scenario underscores the privacy concerns: an entity such as a social media platform provider can begin to infer more information than expected about individuals by distributing applications and cross-referencing multiple datasets.

For the second scenario, we consider the implications for network neuroscience [2]. Imaging studies reveal spatiotemporal dynamics, generating time series that act as node activities for an aggregated network whose nodes represent brain regions of interest (ROIs) [45]. More useful would be to capture the edge dynamics governing when different ROIs “talk” to one another, but that information is not directly accessible. However, our results here imply that, as network structure is revealed, for instance with tract- or fiber-tracing methods such as diffusion spectrum imaging [16, 46] or fluorescence microscopy [17], the static structure can be combined with the measured node activities to approximate the missing edge activities. Accurate inference of these hidden dynamics can inform studies that move beyond the structural connectome and explore the brain’s dynamical “dynome” [47].

## Acknowledgments

We are grateful to M. Almassalkhi and H. Ossareh for useful discussions. JPB acknowledges support by Google Open Source under the Open-Source Complex Ecosystems And Networks (OCEAN) project, U.S. Department of Energy’s Advanced Research Projects Agency - Energy (ARPA-E) award DE-AR0000694, NASA under grant 80NSSC20M0213, and by the Broad Agency Announcement Program and Cold Regions Research and Engineering Laboratory (ERDC-CRREL) under Contract No. W913E521C0003. SL thanks the Villum Foundation (Nation Scale Social Networks) for support.

## References

- [1] P. Holme and J. Saramäki, “Temporal networks,” Physics reports, vol. 519, no. 3, pp. 97–125, 2012.
- [2] D. S. Bassett and O. Sporns, “Network neuroscience,” Nature neuroscience, vol. 20, no. 3, pp. 353–364, 2017.
- [3] A. Li, S. P. Cornelius, Y.-Y. Liu, L. Wang, and A.-L. Barabási, “The fundamental advantages of temporal networks,” Science, vol. 358, no. 6366, pp. 1042–1046, 2017.
- [4] P. Holme, “Modern temporal network theory: a colloquium,” The European Physical Journal B, vol. 88, no. 9, pp. 1–30, 2015.
- [5] N. Masuda and R. Lambiotte, A guide to temporal networks. World Scientific, 2016.
- [6] Y.-A. de Montjoye, S. Gambs, V. Blondel, G. Canright, N. de Cordes, S. Deletaille, K. Engø-Monsen, M. Garcia-Herranz, J. Kendall, C. Kerry, G. Krings, E. Letouzé, M. Luengo-Oroz, N. Oliver, L. Rocher, A. Rutherford, Z. Smoreda, J. Steele, E. Wetter, A. S. Pentland, and L. Bengtsson, “On the privacy-conscientious use of mobile phone data,” Scientific Data, vol. 5, no. 1, p. 180286, 2018.
- [7] Z. Bar-Joseph, A. Gitter, and I. Simon, “Studying and modelling dynamic biological processes using time-series gene expression data,” Nature Reviews Genetics, vol. 13, no. 8, pp. 552–564, 2012.
- [8] D. S. Bassett, N. F. Wymbs, M. A. Porter, P. J. Mucha, J. M. Carlson, and S. T. Grafton, “Dynamic reconfiguration of human brain networks during learning,” Proceedings of the National Academy of Sciences, vol. 108, no. 18, pp. 7641–7646, 2011.
- [9] J. Faskowitz, F. Z. Esfahlani, Y. Jo, O. Sporns, and R. F. Betzel, “Edge-centric functional network representations of human cerebral cortex reveal overlapping system-level architecture,” Nature neuroscience, vol. 23, no. 12, pp. 1644–1654, 2020.
- [10] E.-Á. Horvát, M. Hanselmann, F. A. Hamprecht, and K. A. Zweig, “One plus one makes three (for social networks),” PloS one, vol. 7, no. 4, p. e34740, 2012.
- [11] D. Garcia, “Leaking privacy and shadow profiles in online social networks,” Science advances, vol. 3, no. 8, p. e1701172, 2017.
- [12] J. P. Bagrow, X. Liu, and L. Mitchell, “Information flow reveals prediction limits in online social activity,” Nature human behaviour, vol. 3, no. 2, pp. 122–128, 2019.
- [13] Y.-A. De Montjoye, C. A. Hidalgo, M. Verleysen, and V. D. Blondel, “Unique in the crowd: The privacy bounds of human mobility,” Scientific reports, vol. 3, no. 1, pp. 1–5, 2013.
- [14] J. L. Kröger, P. Raschke, and T. R. Bhuiyan, “Privacy implications of accelerometer data: a review of possible inferences,” in Proceedings of the 3rd International Conference on Cryptography, Security and Privacy, pp. 81–87, 2019.
- [15] A. F. M. Altelaar, J. Munoz, and A. J. R. Heck, “Next-generation proteomics: towards an integrative view of proteome dynamics,” Nature Reviews Genetics, vol. 14, no. 1, pp. 35–48, 2013.
- [16] V. J. Wedeen, P. Hagmann, W.-Y. I. Tseng, T. G. Reese, and R. M. Weisskoff, “Mapping complex tissue architecture with diffusion spectrum magnetic resonance imaging,” Magnetic resonance in medicine, vol. 54, no. 6, pp. 1377–1386, 2005.
- [17] J. Livet, T. A. Weissman, H. Kang, R. W. Draft, J. Lu, R. A. Bennis, J. R. Sanes, and J. W. Lichtman, “Transgenic strategies for combinatorial expression of fluorescent proteins in the nervous system,” Nature, vol. 450, no. 7166, pp. 56–62, 2007.
- [18] G. Anthes, “Data brokers are watching you,” Commun. ACM, vol. 58, p. 28–30, Dec. 2014.
- [19] P. Sapiezynski, A. Stopczynski, D. D. Lassen, and S. Lehmann, “Interaction data from the Copenhagen Networks Study,” Scientific Data, vol. 6, no. 1, pp. 1–10, 2019.
- [20] R. Penrose, “On best approximate solutions of linear matrix equations,” Mathematical Proceedings of the Cambridge Philosophical Society, vol. 52, no. 1, pp. 17–19, 1956.
- [21] R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society: Series B (Methodological), vol. 58, no. 1, pp. 267–288, 1996.
- [22] E. Candes and T. Tao, “The Dantzig selector: Statistical estimation when p is much larger than n,” Annals of statistics, vol. 35, no. 6, pp. 2313–2351, 2007.
- [23] W. Xu, E. Mallada, and A. Tang, “Compressive sensing over graphs,” in 2011 Proceedings IEEE INFOCOM, pp. 2087–2095, IEEE, 2011.
- [24] T. Hastie, R. Tibshirani, and M. Wainwright, Statistical learning with sparsity: the Lasso and generalizations. Chapman and Hall/CRC, 2015.
- [25] M. Yuan and Y. Lin, “Model selection and estimation in regression with grouped variables,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 68, no. 1, pp. 49–67, 2006.
- [26] K. Lounici, M. Pontil, S. Van De Geer, and A. B. Tsybakov, “Oracle inequalities and optimal inference under group sparsity,” Annals of statistics, vol. 39, no. 4, pp. 2164–2204, 2011.
- [27] W. Lu, T. Dai, and S.-T. Xia, “Binary matrices for compressed sensing,” IEEE Transactions on Signal Processing, vol. 66, no. 1, pp. 77–85, 2017.
- [28] M. Zhao, M. D. Kaba, R. Vidal, D. P. Robinson, and E. Mallada, “Sparse recovery over graph incidence matrices,” in 2018 IEEE Conference on Decision and Control (CDC), pp. 364–371, IEEE, 2018.
- [29] A. Stopczynski, V. Sekara, P. Sapiezynski, A. Cuttone, M. M. Madsen, J. E. Larsen, and S. Lehmann, “Measuring large-scale social networks with high resolution,” PLOS ONE, vol. 9, no. 4, p. e95978, 2014.
- [30] P. Vanhems, A. Barrat, C. Cattuto, J.-F. Pinton, N. Khanafer, C. Régis, B.-a. Kim, B. Comte, and N. Voirin, “Estimating potential infection transmission routes in hospital wards using wearable proximity sensors,” PLOS ONE, vol. 8, no. 9, pp. 1–9, 2013.
- [31] B. Blonder and A. Dornhaus, “Time-ordered networks reveal limitations to information flow in ant colonies,” PLOS ONE, vol. 6, no. 5, pp. 1–8, 2011.
- [32] R. Michalski, S. Palus, and P. Kazienko, “Matching organizational structure and social network extracted from email communication,” in Business Information Systems (W. Abramowicz, ed.), (Berlin, Heidelberg), pp. 197–206, Springer Berlin Heidelberg, 2011.
- [33] P. Panzarasa, T. Opsahl, and K. M. Carley, “Patterns and dynamics of users’ behavior and interaction: Network analysis of an online community,” Journal of the American Society for Information Science and Technology, vol. 60, no. 5, pp. 911–932, 2009.
- [34] M. Á. Serrano, M. Boguná, and A. Vespignani, “Extracting the multiscale backbone of complex weighted networks,” Proceedings of the national academy of sciences, vol. 106, no. 16, pp. 6483–6488, 2009.
- [35] S. Chen and D. Donoho, “Basis pursuit,” in Proceedings of 1994 28th Asilomar Conference on Signals, Systems and Computers, vol. 1, pp. 41–44, IEEE, 1994.
- [36] S. N. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu, “A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers,” Statistical science, vol. 27, no. 4, pp. 538–557, 2012.
- [37] P. J. Bickel, Y. Ritov, and A. B. Tsybakov, “Simultaneous analysis of Lasso and Dantzig selector,” The Annals of statistics, vol. 37, no. 4, pp. 1705–1732, 2009.
- [38] M. Doob, “An interrelation between line graphs, eigenvalues, and matroids,” Journal of Combinatorial Theory, Series B, vol. 15, no. 1, pp. 40–50, 1973.
- [39] L. Lovász and M. D. Plummer, Matching theory, vol. 367. American Mathematical Soc., 2009.
- [40] M. E. J. Newman, “Mixing patterns in networks,” Physical review E, vol. 67, no. 2, p. 026126, 2003.
- [41] M. Rosenbaum and A. B. Tsybakov, “Sparse recovery under matrix uncertainty,” The Annals of Statistics, vol. 38, no. 5, pp. 2620 – 2651, 2010.
- [42] J. C. Sipior, B. T. Ward, and L. Volonino, “Privacy concerns associated with smartphone use,” Journal of Internet Commerce, vol. 13, no. 3-4, pp. 177–193, 2014.
- [43] I. Mohamed and D. Patel, “Android vs iOS security: A comparative study,” in 2015 12th International Conference on Information Technology - New Generations, pp. 725–730, 2015.
- [44] K. Shu, S. Wang, J. Tang, R. Zafarani, and H. Liu, “User identity linkage across online social networks: A review,” SIGKDD Explor. Newsl., vol. 18, p. 5–17, Mar. 2017.
- [45] D. C. Van Essen, K. Ugurbil, E. Auerbach, D. Barch, T. E. Behrens, R. Bucholz, A. Chang, L. Chen, M. Corbetta, S. W. Curtiss, et al., “The human connectome project: a data acquisition perspective,” Neuroimage, vol. 62, no. 4, pp. 2222–2231, 2012.
- [46] M. I. Menzel, E. T. Tan, K. Khare, J. I. Sperl, K. F. King, X. Tao, C. J. Hardy, and L. Marinelli, “Accelerated diffusion spectrum imaging in the human brain using compressed sensing,” Magnetic Resonance in Medicine, vol. 66, no. 5, pp. 1226–1233, 2011.
- [47] N. J. Kopell, H. J. Gritton, M. A. Whittington, and M. A. Kramer, “Beyond the connectome: the dynome,” Neuron, vol. 83, no. 6, pp. 1319–1328, 2014.
