A complex system is an integrated system of many parts, and the emergent behaviors of the whole system are determined by how these parts interact, i.e., by the network structure. For example, the topology of a social network determines how fast opinions or ideas spread on social media [2, 3]; the structure of the supply chain network between companies influences the safety of the whole market because risk may propagate along its links [4, 5]; and the topology of a cooperation network plays a critical role in scientific innovation and in the individual development of young scientists [6, 7]. However, data on network structure are often incomplete or even unavailable, either because measuring binary links is costly or because data on weak ties are missing [8, 9, 5]. Therefore, it is important to find a way to infer the complete network structure from non-structural information [10, 11].
Link prediction, a traditional task in network science, tries to infer missing links from the linking patterns of existing connections [12, 13]. Although numerous algorithms have been developed to complete the missing links of a large network with high accuracy [14, 15], all of these approaches require complete node information, which is often unavailable in practice [8, 16]. Link prediction therefore cannot solve the inference problem when the network contains unobservable nodes. In real cases, we may have node information for only part of the network, or no information about links at all [17, 18]; as a result, conventional link prediction algorithms cannot work.
Network completion methods have been developed in recent years to tackle the first problem we discuss in this paper, namely inferring the missing connections of unobserved nodes from the linking patterns among observable nodes. These methods can be categorized into traditional statistics-based methods and graph-neural-network-based methods. Among traditional methods, represented by the expectation maximization (EM) algorithm, Myunghwan Kim and Jure Leskovec combined the Kronecker graph model with EM to complete a network from the observed linking patterns. Although their algorithm recovers missing links with relatively high accuracy, there is an implicit requirement: the underlying network structure should follow the self-similar property, which is violated by some networks. Following the same EM approach, in a recent work, Yuankun Xue and Paul Bogdan developed a causal inference method to recover the complete network structure under adversarial interventions. On the other hand, with the booming development of deep learning on graphs [22, 23, 24], researchers have applied graph convolutional network (GCN)-like models to the network completion problem. Da Xu et al. regard the complete network as the outcome of growing the partial network; they train a GCN to learn the growing process from the partially observed network and generalize it to the unknown complete structure. Cong Tran, Won-Yong Shin, and Andreas Spitz et al. instead train a graph generative model to learn the connection patterns of a large set of similar training graphs and apply the trained model to complete the missing information. All of these network completion methods depend on a partially observed network structure, because they discover latent patterns in the observed connections in order to infer the unknown ones.
Nevertheless, in some cases the network structure is entirely hidden and only signals from some of the nodes can be observed, as in biological networks and social networks. How can we infer the whole network structure without any information about connection patterns?
In fact, time series data on nodes, another important information source [27, 28] that is often available in practice, is ignored by the works mentioned above. For example, in an online social network we may only observe discrete retweet events among a large set of users; neither their features, such as sex or education, nor their connection information is available. In a stock market, all the information we can obtain is the prices of different stocks; the connections between the stocks are unknown. Thus, can we develop a method to infer the network structure from time series data representing the nodes' states? A large number of methods have been proposed for reconstructing networks from time series data. One class is based on statistical inference, such as Granger causality [29, 30] and correlation measurements [31, 32, 33]; however, these methods may fail to reveal the structural connections. Another class reconstructs structural connections directly under certain assumptions. For example, methods based on driving response or compressed sensing [14, 35, 36, 37] require either the functional form of the differential equations, target-specific dynamics, or the sparsity of the time series data, and obtaining this information is very difficult. Thus, a general framework is needed for reconstructing network topology, completing missing structures, and learning dynamics from the time series data of various types of dynamics, including discrete and binary ones.
In this paper, we develop a framework for network inference from time series data. We discuss two kinds of network inference problems. The network reconstruction problem is to reconstruct the whole network structure from the observed time series of nodes' states; in this problem, all nodes are observable. The second problem is network completion, in which time series data is available only for a subset of nodes, and we infer the complete network structure from this partial information, under the condition that the connections between observable nodes are either known or unknown. Both problems are formulated as the same kind of optimization problem: find an optimized network structure and an approximator of the network dynamics such that the error between the observed time series and the time series generated by the candidate structure and dynamical rule is minimized. Both problems are solved within the same framework, called the Gumbel Graph Network (GGN), which combines a network generator based on Gumbel softmax sampling and a dynamics learner based on a graph network. Gumbel softmax sampling is a technique that simulates the sampling process with a differentiable computation; equipped with it, we can train the network generator with a gradient descent method. The Graph Neural Network (GNN) is a deep learning architecture on graphs. By learning a set of functions defined on nodes and links that represent propagation and aggregation, GNNs can recover complex dynamical processes defined on graphs, such as rigid body movement, coupled oscillators, traffic flows, and pollution spreading. By deploying the learning capability of graph neural networks, we can simulate the underlying dynamics on the real network.
II Network Inference Methodologies
In this paper, we focus on inferring network structure and dynamics from the state-evolution time series of all or some of the nodes. According to the two application scenarios, we divide the problem into two sub-problems: (1) the network reconstruction problem: inferring the interaction structure and the dynamics of network evolution when all nodes are observable; (2) the network completion problem: recovering the structure and nodes' states of the entire network from a partial network structure and the time series data of the observable nodes. In this section, we first give the formal definitions of the two sub-problems and then describe the specific models that solve each of them.
Suppose our studied system has an interaction structure described by a binary graph G = (V, E) with an adjacency matrix A, where V is the set of nodes, interchangeably referred to as vertices, N = |V| is the total number of nodes, E is the set of edges between the nodes, and A is a binary N × N matrix of which each entry A_ij equals 0 or 1.
The network dynamics f is defined on the graph G, where f is the dynamical rule mapping the states of the nodes at time t, X^t, to the states at time t+1, X^{t+1}, with X^t ∈ R^{N×D}, and D is the dimension of each node's state. Next, we propose the definition of the network reconstruction problem:
II-A Problem Definition
Definition 1 (Network Reconstruction).
The network reconstruction problem is to deduce the unknown network structure and dynamical rule from the observed time series of the nodes' states. From the system dynamics f, we can generate S time series of length T, denoted by X_s = (X_s^1, ..., X_s^T) for s = 1, ..., S, where X_s^1 is the initial state and S is the total number of time series (the number of samples from different initial conditions). The network reconstruction problem is then defined as an optimization problem: find a set of optimal parameters (θ*, α*) that minimizes the error between the estimated states and the ground truth, i.e., the objective function in Equation 1.
Here, f̂_θ is a dynamical rule parameterized by θ that estimates f. Starting from the state X^1, we iteratively apply it to the states of the previous steps to obtain an estimated evolutionary trajectory X̂ similar to X, while Â(α) is the estimate of the adjacency matrix parameterized by α. After minimizing the objective function, we obtain the optimized dynamical rule f̂ from the parameter θ*, and an estimate Â of the network structure by sampling from the parameter α*. We expect Â to be close to the ground truth A.
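As a concrete illustration, the objective of Definition 1 can be sketched in a few lines: roll out a candidate dynamical rule on a candidate adjacency matrix from the observed initial state, then accumulate the error against the observed trajectory. The averaging dynamics and all function names below are hypothetical stand-ins for illustration, not the paper's actual model.

```python
def rollout(f_hat, A_hat, x1, T):
    """Generate an estimated trajectory (X^1, ..., X^T) by iterating f_hat."""
    traj = [x1]
    for _ in range(T - 1):
        traj.append(f_hat(traj[-1], A_hat))
    return traj

def objective(f_hat, A_hat, observed):
    """Sum of squared errors between estimated and observed states."""
    est = rollout(f_hat, A_hat, observed[0], len(observed))
    return sum((e - o) ** 2
               for xe, xo in zip(est, observed)
               for e, o in zip(xe, xo))

# Toy stand-in dynamics: each node averages itself with its neighbors.
def avg_dynamics(x, A):
    n = len(x)
    out = []
    for i in range(n):
        nbrs = [x[j] for j in range(n) if A[i][j]]
        out.append(0.5 * x[i] + 0.5 * (sum(nbrs) / len(nbrs) if nbrs else x[i]))
    return out

A_true = [[0, 1], [1, 0]]
observed = rollout(avg_dynamics, A_true, [1.0, 0.0], T=5)
# The objective vanishes at the true dynamics and structure ...
assert objective(avg_dynamics, A_true, observed) == 0.0
# ... and is positive for a wrong candidate structure.
assert objective(avg_dynamics, [[0, 0], [0, 0]], observed) > 0.0
```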
Definition 2 (Network Completion).
Different from the reconstruction task, the network completion task is defined for the case in which some individuals in the network are unobservable, and only the time series and network structure of the observable individuals can be observed. As Figure 1 shows, under these conditions we still need to infer all the unknown information from the observable individuals, including the dynamical rule, the states of the unknown nodes, and the connections between the unknown nodes and the observed nodes.
Let us formulate the problem as follows: we assume that the graph G can be divided into two parts, an observed structure G_o and an unobservable structure G_u; accordingly, the adjacency matrix and the states of the nodes are also divided into two parts, A = A_o ⊕ A_u and X = X_o ⊕ X_u, respectively, where
⊕ indicates the concatenation of tensors in appropriate ways. The network completion problem is then to find a set of optimal parameter combinations (θ*, α*, β*) such that the estimated and real values of the observed partial time series are as consistent as possible, that is, Equation 3
for t = 1, ..., T, where X̂_o and Â_u are the estimates of X_o and A_u obtained from the parameters θ, α, and β, respectively. And
where X̂_u^1 is an estimate of the unknown nodes' initial states, parameterized by β, and N' is the number of unobserved nodes.
II-B Network Reconstruction with the Gumbel Graph Network Framework
To solve the two problems formulated in the previous sections, we extend our previous work, a general deep learning framework called the Gumbel Graph Network (GGN). As shown in Figure 2, our general idea is to use a graph network, called the dynamics learner, on the generated candidate network to realize the dynamical rule f̂_θ, where θ are the learnable weights of the graph network. The candidate network is generated by a series of Gumbel softmax sampling processes parameterized by a matrix α, that is,
where α_ij is the probability of a connection between node i and node j, and
ξ_ij are i.i.d. random variables drawn from the standard Gumbel distribution, and τ is the temperature parameter. As τ approaches zero, the sampled value converges to 0 or 1. Equation 7 simulates sampling Â_ij with probability α_ij; crucially, it is differentiable, so it can be adjusted by the gradient descent method.
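A minimal sketch of this sampling idea follows, written as a binary-concrete (Gumbel-sigmoid) relaxation; this is an illustrative variant, not necessarily the paper's exact Equation 7.

```python
import math
import random

def gumbel_sigmoid(log_alpha, tau):
    """One relaxed Bernoulli sample: sigmoid((log_alpha + g1 - g2) / tau),
    where g1, g2 are standard Gumbel draws. Differentiable in log_alpha."""
    g1 = -math.log(-math.log(random.random()))
    g2 = -math.log(-math.log(random.random()))
    return 1.0 / (1.0 + math.exp(-(log_alpha + g1 - g2) / tau))

def sample_adjacency(log_alpha, tau):
    """Sample a relaxed adjacency matrix entrywise from edge logits."""
    n = len(log_alpha)
    return [[gumbel_sigmoid(log_alpha[i][j], tau) for j in range(n)]
            for i in range(n)]

random.seed(0)
logits = [[5.0, -5.0], [-5.0, 5.0]]  # strong preference for a 2x2 pattern
A_soft = sample_adjacency(logits, tau=0.5)
# Relaxed samples stay strictly between 0 and 1 ...
assert all(0.0 < a < 1.0 for row in A_soft for a in row)
```

As the temperature is lowered, the samples concentrate near 0 and 1 while the expression remains differentiable with respect to the logits, which is what makes gradient-based training of the network generator possible.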
In the dynamics learner module, we use multilayer perceptrons (MLPs) to simulate one step of the complex nonlinear process of the real dynamics, that is, to complete the mapping from X^t to X^{t+1}. This module can be replaced by a CNN or RNN module.
where θ denotes the weights of the MLPs; readers are referred to the original paper for details. By comparing with the ground-truth states, we calculate the gradients of all parameters and update the adjacency matrix parameters and the module weights accordingly.
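The propagation-and-aggregation pattern that the dynamics learner follows can be sketched as below, with plain callables standing in for the learned edge and node MLPs (all names here are hypothetical):

```python
def gn_step(x, A, edge_fn, node_fn):
    """One step of a graph-network-style dynamics learner: each node
    aggregates messages edge_fn(x_j, x_i) from its neighbors j, then
    updates its state via node_fn(x_i, aggregated_message)."""
    n = len(x)
    out = []
    for i in range(n):
        msgs = [edge_fn(x[j], x[i]) for j in range(n) if A[i][j]]
        out.append(node_fn(x[i], sum(msgs)))
    return out

# Example: the learned MLPs replaced by a hand-coded diffusion rule.
edge_fn = lambda xj, xi: xj - xi          # message along each edge
node_fn = lambda xi, m: xi + 0.1 * m      # state update from the aggregate
x_next = gn_step([1.0, 0.0], [[0, 1], [1, 0]], edge_fn, node_fn)
assert abs(x_next[0] - 0.9) < 1e-12 and abs(x_next[1] - 0.1) < 1e-12
```

In the actual model, `edge_fn` and `node_fn` are trainable MLPs and the step is applied to the relaxed adjacency matrix produced by the network generator.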
We alternately train the network generator and the dynamics learner to obtain the best parameters (θ*, α*).
Algorithm 1: NC-GGN algorithm
Input: observed adjacency A_o; observed states X_o; length of prediction steps P; train steps D, I, K for the dynamics learner, initial states learner, and Gumbel generator
Output: predicted adjacency Â
Initialize dynamics learner parameters θ
Initialize initial states learner parameters β
Initialize Gumbel generator parameters α
Initialize states of missing nodes
# Training the dynamics learner
for d = 1, ..., D do
    for t = 1, ..., P do
        X̂^{t+1} ← DynamicsLearner(X̂^t, Â)
    loss ← ComputeLoss(X̂_o, X_o)
    update θ with the gradient of loss
# Training the states learner
Missing edge info: Â_u ← GumbelGenerator(α)
for i = 1, ..., I do
    for t = 1, ..., P do
        X̂^{t+1} ← DynamicsLearner(X̂^t, Â)
    loss ← ComputeLoss(X̂_o, X_o)
    update β with the gradient of loss
# Training the network generator
for k = 1, ..., K do
    Missing edge info: Â_u ← GumbelGenerator(α)
    for t = 1, ..., P do
        X̂^{t+1} ← DynamicsLearner(X̂^t, Â)
    loss ← ComputeLoss(X̂_o, X_o)
    update α with the gradient of loss
II-C Network Completion with the Gumbel Graph Network
Similar to the previous subsection, we propose the Network Completion Gumbel Graph Network (NC-GGN) to solve the network completion problem. The inputs of our model are the states of the observed nodes and the observed adjacency matrix; the outputs are the complete network structure and the future states of all nodes.
We again use the GGN framework to learn the dynamics and structure. The difference is that we must additionally learn the state variables of the unobserved nodes, because all of their states are missing, which makes the problem harder than network reconstruction.
Thus, we design three modules in our model: (1) the dynamics learner, which predicts the states of the nodes at time t+1 from the states at time t and the structural information of the observable nodes; (2) the initial states learner, which generates the initial states of the missing nodes from a set of learnable parameters β; (3) the network generator, which generates the candidate connections between missing nodes and observable nodes.
In the dynamics learner and network generator modules, we exploit the same techniques as before, namely the graph network and Gumbel softmax sampling. However, since we can only observe the evolution of the known nodes, we use only the observed node states as supervision. For the initial states learning module, the goal is to find the optimal initial states of the missing nodes using the same objective function and the same optimization method, gradient descent. The generation of the initial states of the unobservable nodes is formulated in Equation 9:
where the generation function of the initial states is parameterized by β, and the form of the function depends on the problem. In the simplest case, it is the identity mapping, which means the learnable parameters β themselves are the initial states. We can optimize all the modules with a stochastic gradient descent algorithm using automatic differentiation.
To detail the whole process of network completion, we lay out the pseudo-code of NC-GGN in Algorithm 1.
We separately train the three modules mentioned above within one epoch, each module for multiple rounds. Each module updates only its own parameters; for example, only θ is updated in the dynamics learner module, while α and β are kept unchanged.
In each epoch, we train the dynamics learner for D rounds, the initial states learner for I rounds, and the network generator module for K rounds. We repeat the training process until the model converges or the loss function stops dropping.
III Experimental Results
Our framework and algorithms can work on state time series of any format, such as continuous real vectors, discrete tensors, or binary strings. To show how the framework applies to scenarios in social science, we construct two examples.
The first case is the spreading of opinions. Suppose we can only observe retweet events from user A to user B with time stamps, and we try to reconstruct the social network behind the users. The observed retweet events can be converted into binary state time series. Suppose there are N users, and the users at the heads of the propagation chains are infected by the opinion. We set the initial states of these sources to 1 and of all other users to 0. If a propagation event takes place at time t from user A to user B, the state of user B is converted from 0 to 1; in all other cases, users keep their states unchanged. In this way, we can convert propagation events into binary state time series, and our algorithms can be applied to reconstruct or complete the network and dynamics.
The second example is to predict the prices or volumes of a set of stocks. By using the network reconstruction or completion algorithms, we may also reconstruct the connections between these stocks. These connections may reveal latent information such as joint ownership or economic ties, and facilitate our understanding of the market. Suppose the price or volume can be described by a real value at each time step; then we obtain real-valued vectors as the time series, and our algorithms can be applied to this example. However, due to limited data availability and computing resources, we generate data from artificial simulations to test how our algorithms work.
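The conversion described for the first case can be sketched as follows; the function and argument names are our own, for illustration only.

```python
def events_to_series(events, n_users, seeds, T):
    """Convert time-stamped propagation events (t, src, dst) into a
    binary state time series: seed users start at 1, all others at 0,
    and a user flips from 0 to 1 when a retweet event reaches it."""
    state = [1 if u in seeds else 0 for u in range(n_users)]
    series = [list(state)]
    arrivals = {}
    for t, src, dst in events:
        arrivals.setdefault(t, []).append(dst)
    for t in range(1, T):
        for dst in arrivals.get(t, []):
            state[dst] = 1
        series.append(list(state))
    return series

# User 0 starts the cascade; it reaches user 1 at t=1 and user 2 at t=2.
series = events_to_series([(1, 0, 1), (2, 1, 2)], 3, {0}, T=3)
assert series == [[1, 0, 0], [1, 1, 0], [1, 1, 1]]
```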
To test our framework, we create two data sets by simulation. One data set contains binary states generated by the Voter model, which simulates the information spreading process on social networks; the other contains continuous time series data generated by the Coupled Map Lattice model, which can emulate the fluctuation of stock prices. All simulations are run on small-world networks generated by the Watts-Strogatz (WS) model. We introduce the two simulation models in detail below.
Voter model. The voter model, introduced by Richard A. Holley and Thomas M. Liggett in 1975, simulates the spreading of opinions, ideas, and information on a network. Suppose there are N interacting agents connected to form a network. Initially, each agent holds a discrete "opinion" represented by 0 or 1. At each time step, an agent has a chance to change its opinion, and the probability of adopting opinion 1 is determined by the relative fraction of opinion 1 among all of its neighbors.
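A minimal sketch of the voter dynamics follows. The model is usually updated asynchronously; this simplified synchronous variant is only meant to show how binary training series can be generated.

```python
import random

def voter_step(states, A, rng):
    """One synchronous voter-model update: each agent adopts opinion 1
    with probability equal to the fraction of its neighbors currently
    holding opinion 1 (agents with no neighbors keep their opinion)."""
    n = len(states)
    new = []
    for i in range(n):
        nbrs = [j for j in range(n) if A[i][j]]
        if not nbrs:
            new.append(states[i])
            continue
        p_one = sum(states[j] for j in nbrs) / len(nbrs)
        new.append(1 if rng.random() < p_one else 0)
    return new

rng = random.Random(0)
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]      # triangle network
traj = [[1, 0, 0]]
for _ in range(20):
    traj.append(voter_step(traj[-1], A, rng))
assert all(s in (0, 1) for row in traj for s in row)  # states stay binary
# Full consensus is absorbing: an all-zero state never changes.
assert voter_step([0, 0, 0], A, rng) == [0, 0, 0]
```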
We generated simulated data on Watts-Strogatz networks with a fixed rewiring probability. We simulated networks of size 10, 20, and 30, running 200 simulations (samples) with different initial states, each for 50 steps. Each step of a simulation is one data sample, so there are 10000 data samples in total per size. All 10000 data samples are separated into training, validation, and testing sets at 70%, 15%, and 15% for both the network reconstruction and network completion tasks. For the network completion task, we randomly removed nodes and their edges from the WS networks of size 10 and 20 to obtain the observed incomplete graphs.
Coupled map lattices. A coupled map lattice (CML) model is a dynamical system with discrete time, discrete space, and continuous state variables, proposed by Kaneko in 1992. The CML model can generate chaotic time series that emulate stock price fluctuations. Each element consists of a logistic map coupled to its neighbors, which can be written as
x_{t+1}(i) = (1 − s) f(x_t(i)) + (s / |N(i)|) Σ_{j∈N(i)} f(x_t(j)),
where x_t(i) is the state of node i at time t, N(i) denotes node i's neighbors, and s is the coupling constant, which can tune the system behavior toward chaos. The local map f usually takes the form of the logistic map:
f(x) = λ x (1 − x).
For the network reconstruction task, we generated simulated data on Watts-Strogatz networks with a fixed rewiring probability. We ran the simulation 5000 times (samples) with different initial states on networks of size 10, 20, and 30, respectively. For the network completion task, we generated simulated data on the same network structures, running the simulation 2000 and 6000 times (samples) with different initial states on networks of size 10 and 20, respectively. For both tasks, each simulation runs for 100 steps, and we group every 10 steps into one data record. There are therefore 50000 data records per size for the network reconstruction task, and 20000 and 60000 data records for the 10- and 20-node networks, respectively, for the network completion task. All data samples are separated into training, validation, and testing sets at 70%, 15%, and 15%. For the network completion task, we randomly removed nodes and their edges from the WS networks to obtain the observed incomplete graphs.
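For concreteness, the CML dynamics on a network can be simulated as below. The coupling form follows the standard Kaneko-style update stated above, and the parameter values (s = 0.2, λ = 3.8) are illustrative choices, not necessarily those used in our experiments.

```python
def logistic(x, lam=3.8):
    """Local logistic map f(x) = lam * x * (1 - x)."""
    return lam * x * (1.0 - x)

def cml_step(x, A, s=0.2, lam=3.8):
    """One CML update on a network: each node mixes its own logistic map
    with the average logistic map of its neighbors via coupling s."""
    n = len(x)
    out = []
    for i in range(n):
        nbrs = [j for j in range(n) if A[i][j]]
        local = logistic(x[i], lam)
        coup = (sum(logistic(x[j], lam) for j in nbrs) / len(nbrs)
                if nbrs else local)
        out.append((1.0 - s) * local + s * coup)
    return out

A = [[0, 1], [1, 0]]
x = [0.3, 0.7]
for _ in range(100):
    x = cml_step(x, A)
assert all(0.0 <= v <= 1.0 for v in x)  # the logistic map keeps states in [0, 1]
```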
III-B Performance Metrics
On the network inference tasks, we evaluate the performance of our model mainly by the accuracy of the nodes' state predictions and the accuracy of the network structure prediction. The accuracy of the nodes' states is measured by the mean absolute error (MAE). The accuracy of the network structure is evaluated by comparing the predicted adjacency matrix with the ground truth, which can be regarded as a binary classification problem; therefore, binary classification indices such as AUC, ACC, and TPR are used to measure the performance of network inference. The measures we use are listed below:
MAE (mean absolute error): MAE measures the difference between two vectors; it is the mean of the absolute values of the entry-wise differences.
AUC (area under the ROC curve): AUC evaluates how similar the edge probabilities produced by the network generator are to the real adjacency matrix. It is defined as the area under the ROC curve, a comprehensive evaluation of the TPR (true positive rate) and FPR (false positive rate). The closer the AUC is to 1, the more clearly our model distinguishes edges from non-edges.
ACC(net): We sample an estimated adjacency matrix Â with entries of either 0 or 1 from the estimated edge probabilities. ACC(net) is the proportion of correctly estimated entries of the adjacency matrix, and ACC(net)-missing is the ACC(net) restricted to the missing part of the network structure. The values range from 0 to 1, and the closer to 1, the better the model's result.
ACC(states): For continuous time series data, ACC(states) refers to the MSE (mean squared error) between the predicted states and the ground-truth states. For discrete time series, the MAE is computed between the sampled discrete predicted states and the ground truth, and ACC(states) equals 1 − MAE. In network completion tasks, the nodes are divided into two classes, observed nodes and missing nodes, and the state accuracies of the two classes are called observed ACC(states) and missing ACC(states), respectively.
TPR (true positive rate): TPR measures the proportion of actual positives that are correctly identified in the sampled adjacency matrix Â; the closer the TPR is to 1, the lower the error rate.
FPR (false positive rate): FPR measures the proportion of actual negatives that are wrongly identified as positives in the sampled adjacency matrix Â; the error rate decreases as the FPR approaches 0.
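The structural metrics above reduce to entry-wise binary classification counts; a sketch follows, treating every matrix entry (including the diagonal) as one prediction, which is a simplification of our own.

```python
def confusion_metrics(A_true, A_pred):
    """ACC(net), TPR, and FPR from a ground-truth and a sampled 0/1
    adjacency matrix, treating each entry as one binary prediction."""
    tp = fp = tn = fn = 0
    for row_t, row_p in zip(A_true, A_pred):
        for t, p in zip(row_t, row_p):
            if t == 1 and p == 1:
                tp += 1
            elif t == 0 and p == 1:
                fp += 1
            elif t == 0 and p == 0:
                tn += 1
            else:
                fn += 1
    acc = (tp + tn) / (tp + tn + fp + fn)
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return acc, tpr, fpr

# One true edge missed, no spurious edge predicted.
acc, tpr, fpr = confusion_metrics([[0, 1], [1, 0]], [[0, 1], [0, 0]])
assert (acc, tpr, fpr) == (0.75, 0.5, 0.0)
```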
III-C Node Alignment Problem
In the evaluation of network completion, because the nodes carry no labels, we need to find a matching between the inferred missing nodes and the ground truth before we can evaluate the completion. We use a greedy algorithm to find a node alignment: compare the corresponding rows (or columns) of Â and A and find the most similar vector correspondence, using the Hamming distance to measure similarity. By traversing the missing nodes, we find the matching between the estimated and ground-truth missing nodes. The rearranged estimate of the adjacency matrix is then used for calculating all performance metrics.
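A sketch of this greedy matching follows, comparing full adjacency rows by Hamming distance; the tie-breaking rule and the choice of rows over columns are our own simplifications.

```python
def hamming(u, v):
    """Number of positions at which two binary vectors differ."""
    return sum(a != b for a, b in zip(u, v))

def greedy_align(A_est, A_true, missing):
    """Greedily match each estimated missing node to the unmatched
    ground-truth missing node whose adjacency row is closest in
    Hamming distance (ties broken by node order)."""
    unmatched = list(missing)
    mapping = {}
    for i in missing:
        best = min(unmatched, key=lambda j: hamming(A_est[i], A_true[j]))
        mapping[i] = best
        unmatched.remove(best)
    return mapping

# Toy check: the two hidden nodes' rows are swapped in the estimate.
A_true = [[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]]
A_est  = [[0, 1, 1, 0], [1, 0, 0, 1], [0, 1, 0, 0], [1, 0, 0, 0]]
assert greedy_align(A_est, A_true, [2, 3]) == {2: 3, 3: 2}
```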
III-D Experimental Setup
The parameter settings of our three modules are as follows. (1) In the dynamics learner module, we use a 4-layer MLP as the information aggregation function between nodes, with ReLU activation in each layer and a randomly initialized set of parameters θ. With the hidden layers set to 64, 32, 16, and 8 neurons, respectively, the network reconstruction task achieves good results. For the network completion task, the settings on the Voter dataset are similar; on the CML dataset, the first hidden layer is 32-dimensional and layers 2-4 are 16-, 8-, and 4-dimensional, respectively. Within each epoch, we train the dynamics learner module 30 times. (2) In the initial states learning module of the network completion task, we handle discrete and continuous states differently. In the discrete case, the nodes' states are coded as one-hot vectors indicating which state each node belongs to; we use the sigmoid function to squash the parameters into the (0, 1) interval, indicating the probability that the node belongs to a certain state. In the continuous case, the states have no interval limit, and the function is the identity mapping of the parameters β. On the CML dataset, we find that we can achieve similar completion accuracy without updating the initial states of the unknown nodes, so we simply set the initial states to random values during training. (3) In the network generator module, we randomly generate an initial set of learnable parameters from the normal distribution. The three modules are optimized with the Adam algorithm, each with its own learning rate.
The initial states learning module in the network completion task learns only the initial states of the unknown nodes on the training set, not on the test set. Therefore, during the test phase, we fix the learned dynamics learner and adjacency matrix, generate a set of trainable unknown-node states for the test set, optimize these initial states as in training, and then calculate the accuracy of the dynamics predictor.
III-E Inference Accuracy
III-E1 Accuracy of inference with different network sizes
Our approach is compared with the Neural Relational Inference (NRI) model on the network reconstruction task.
NRI applies a variational auto-encoder to learn the underlying interaction graph and the complex system dynamics from observational dynamical data. We ran the NRI model on the Voter and CML datasets using settings consistent with the original paper by Kipf et al.
As for a comparative model on the network completion task, according to our investigation there are almost no models that can be applied directly to network completion based on time series data, and modifying other types of existing models for network completion would be very complicated. Therefore, we do not conduct comparative experiments on the network completion tasks. We report the accuracy of the network inference model across all datasets at different network sizes.
In Table I, we show the performance of the GGN and NRI models on the network reconstruction task.
(Table columns: Missing AUC, Missing ACC(net), TPR, FPR, Missing ACC(states), Observed ACC(states))
We report the average metric scores of the experiments at the three network sizes. In terms of structural inference accuracy, the AUC reaches above 98% and the ACC above 95% for networks of different sizes, and the TPR and FPR of the reconstructed adjacency matrix are close to their optimal values. The ACC(states) is over 86% on the Voter dataset, and the error (MAE) is close to 0 on the CML datasets, indicating that GGN fits the network dynamics well. In addition, GGN performs better than the NRI model on the network reconstruction tasks.
We carried out experiments with the NC-GGN model on network completion on the same datasets, with the percentage of missing nodes set to 10%. In Table II, the first number in the first column is the total network size and the second is the number of removed nodes. For example, 20-2 indicates a complete dynamical system of 20 nodes of which only 18 are observed; the information on the two removed nodes is completely missing.
For both the inference of the missing structure and of the nodes' states, our NC-GGN model achieves high accuracy on 10-node networks. As the network grows, the number of possible connections increases greatly and completion becomes more difficult, so the accuracy decreases. The accuracy on the Voter datasets is lower than on the CML datasets, owing to the CML datasets containing more samples. We can also see that the accuracy of the network completion task is lower than that of the network reconstruction task.
III-E2 Accuracy with different missing proportions
In this part, we investigate the effect of the observed network's completeness on the network completion problem. We adjust the completeness by tuning the proportion of missing nodes: the higher the proportion, the less complete the system. We create a number of partially observable networks with 10% to 70% missing nodes on the CML datasets, based on 20-node WS networks, and plot the AUC against the missing proportion in Figure 3.
As mentioned before, as the proportion of missing nodes increases, network completion becomes gradually more difficult. When the proportion of missing nodes rises from 0.1 to 0.3, the completion performance decreases significantly, and GGN fails to complete the network once the missing proportion exceeds 0.3, because the AUC values become indistinguishable from random guessing.
In this paper, we use graph neural networks to solve two types of network inference problems: network reconstruction and network completion. First, we formulate network inference from time series data as an optimization problem. Second, we extend our previous GGN framework to the NC-GGN framework for network completion, which can be applied to a variety of dynamical time series data. Third, in the experiments, we demonstrate that the GGN model can accurately reconstruct network structure from time series data without any prior knowledge of the structure, while NC-GGN can infer the hidden nodes' states and structural information given 90% of the network structure and the dynamical data. We also show that the performance of the network completion task is strongly influenced by the proportion of missing nodes.
There are still many aspects of the current work that can be improved. For example, all of the experiments are carried out on small networks with fewer than 30 nodes; in the future, the network scale can be increased either by increasing the computational power or by simplifying the model. Besides, the amount of data needed for network inference is relatively large, and reducing this data requirement without degrading inference performance remains a challenge. Furthermore, the results of network completion can be further improved by increasing the accuracy of the inferred missing initial states.
The research is supported by the National Natural Science Foundation of China (NSFC) under grant number 61673070.
-  R. Albert and A.-L. Barabási, “Statistical mechanics of complex networks,” Reviews of modern physics, vol. 74, no. 1, p. 47, 2002.
-  V. Sood and S. Redner, “Voter model on heterogeneous graphs,” Physical review letters, vol. 94, no. 17, p. 178701, 2005.
-  A. Lu, C. Sun, and Y. Liu, “The impact of community structure on the convergence time of opinion dynamics,” Discrete Dynamics in Nature and Society, vol. 2017, 2017.
-  W. Klibi and A. Martel, “Scenario-based supply chain network risk modeling,” European Journal of Operational Research, vol. 223, no. 3, pp. 644–658, 2012.
-  G. Cimini, T. Squartini, D. Garlaschelli, and A. Gabrielli, “Systemic risk analysis on reconstructed economic and financial networks,” Scientific reports, vol. 5, p. 15758, 2015.
-  M. E. Newman, “Scientific collaboration networks. ii. shortest paths, weighted networks, and centrality,” Physical review E, vol. 64, no. 1, p. 016132, 2001.
-  S. X. Zeng, X. M. Xie, and C. M. Tam, “Relationship between cooperation networks and innovation performance of smes,” Technovation, vol. 30, no. 3, pp. 181–194, 2010.
-  G. Kossinets, “Effects of missing data in social networks,” Social networks, vol. 28, no. 3, pp. 247–268, 2006.
-  K. Anand, I. van Lelyveld, Á. Banai, S. Friedrich, R. Garratt, G. Hałaj, J. Fique, I. Hansen, S. M. Jaramillo, H. Lee et al., “The missing links: A global study on uncovering financial network structures from partial data,” Journal of Financial Stability, vol. 35, pp. 107–119, 2018.
-  T. Squartini, G. Caldarelli, G. Cimini, A. Gabrielli, and D. Garlaschelli, “Reconstruction methods for networks: the case of economic and financial systems,” Physics Reports, 2018.
-  R. Guimerà and M. Sales-Pardo, “Missing and spurious interactions and the reconstruction of complex networks,” Proceedings of the National Academy of Sciences, vol. 106, no. 52, pp. 22073–22078, 2009.
-  J. Kunegis and A. Lommatzsch, “Learning spectral graph transformations for link prediction,” in Proceedings of the 26th Annual International Conference on Machine Learning. ACM, 2009, pp. 561–568.
-  L. Lü, C.-H. Jin, and T. Zhou, “Similarity index based on local paths for link prediction of complex networks,” Physical Review E, vol. 80, no. 4, p. 046122, 2009.
-  D. Wang, D. Pedreschi, C. Song, F. Giannotti, and A.-L. Barabasi, “Human mobility, social ties, and link prediction,” in Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2011, pp. 1100–1108.
-  M. Zhang and Y. Chen, “Link prediction based on graph neural networks,” in Advances in Neural Information Processing Systems, 2018, pp. 5165–5175.
-  C. Tran, W.-Y. Shin, A. Spitz, and M. Gertz, “Deepnc: Deep generative network completion,” arXiv preprint arXiv:1907.07381, 2019.
-  R. Guthke, U. Möller, M. Hoffmann, F. Thies, and S. Töpfer, “Dynamic network reconstruction from gene expression data applied to immune response during bacterial infection,” Bioinformatics, vol. 21, no. 8, pp. 1626–1634, 2004.
-  C. Tran, W.-Y. Shin, and A. Spitz, “Community detection in partially observable social networks,” arXiv preprint arXiv:1801.00132, 2017.
-  M. Kim and J. Leskovec, “The network completion problem: Inferring missing nodes and edges in networks,” in Proceedings of the 2011 SIAM International Conference on Data Mining. SIAM, 2011, pp. 47–58.
-  J. Leskovec, D. Chakrabarti, J. Kleinberg, C. Faloutsos, and Z. Ghahramani, “Kronecker graphs: An approach to modeling networks,” Journal of Machine Learning Research, vol. 11, no. Feb, pp. 985–1042, 2010.
-  Y. Xue and P. Bogdan, “Reconstructing missing complex networks against adversarial interventions,” Nature communications, vol. 10, no. 1, p. 1738, 2019.
-  Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu, “A comprehensive survey on graph neural networks,” arXiv preprint arXiv:1901.00596, 2019.
-  L. Du, Y. Wang, G. Song, Z. Lu, and J. Wang, “Dynamic network embedding: An extended approach for skip-gram based network embedding.” in IJCAI, 2018, pp. 2086–2092.
-  L. Du, Z. Lu, Y. Wang, G. Song, Y. Wang, and W. Chen, “Galaxy network embedding: A hierarchical community structure preserving approach.” in IJCAI, 2018, pp. 2079–2085.
-  D. Xu, C. Ruan, K. Motwani, E. Korpeoglu, S. Kumar, and K. Achan, “Generative graph convolutional network for growing graphs,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019, pp. 3167–3171.
-  F. Geier, J. Timmer, and C. Fleck, “Reconstructing gene-regulatory networks from time series, knock-out data, and prior knowledge,” BMC systems biology, vol. 1, no. 1, p. 11, 2007.
-  R. S. Krajec and A. G. Gounares, “Highlighting of time series data on force directed graph,” U.S. Patent 9,323,863, Apr. 26, 2016.
-  W. Lin, N. Hubacher, and M. E. Khan, “Variational message passing with structured inference networks,” arXiv preprint arXiv:1803.05589, 2018.
-  A. Brovelli, M. Ding, A. Ledberg, Y. Chen, R. Nakamura, and S. L. Bressler, “Beta oscillations in a large-scale sensorimotor cortical network: directional influences revealed by granger causality,” Proceedings of the National Academy of Sciences, vol. 101, no. 26, pp. 9849–9854, 2004.
-  C. J. Quinn, T. P. Coleman, N. Kiyavash, and N. G. Hatsopoulos, “Estimating the directed information to infer causal relationships in ensemble neural spike train recordings,” Journal of computational neuroscience, vol. 30, no. 1, pp. 17–44, 2011.
-  J. M. Stuart, E. Segal, D. Koller, and S. K. Kim, “A gene-coexpression network for global discovery of conserved genetic modules,” Science, vol. 302, no. 5643, pp. 249–255, 2003.
-  V. M. Eguiluz, D. R. Chialvo, G. A. Cecchi, M. Baliki, and A. V. Apkarian, “Scale-free brain functional networks,” Physical review letters, vol. 94, no. 1, p. 018102, 2005.
-  B. Barzel and A.-L. Barabási, “Network link prediction by global silencing of indirect correlations,” Nature biotechnology, vol. 31, no. 8, p. 720, 2013.
-  M. Timme, “Revealing network connectivity from response dynamics,” Physical review letters, vol. 98, no. 22, p. 224101, 2007.
-  W.-X. Wang, R. Yang, Y.-C. Lai, V. Kovanis, and C. Grebogi, “Predicting catastrophes in nonlinear dynamical systems by compressive sensing,” Physical review letters, vol. 106, no. 15, p. 154101, 2011.
-  W.-X. Wang, Y.-C. Lai, C. Grebogi, and J. Ye, “Network reconstruction based on evolutionary-game data via compressive sensing,” Physical Review X, vol. 1, no. 2, p. 021021, 2011.
-  Z. Shen, W.-X. Wang, Y. Fan, Z. Di, and Y.-C. Lai, “Reconstructing propagation networks with natural diversity and identifying hidden sources,” Nature communications, vol. 5, p. 4323, 2014.
-  Z. Zhang, Y. Zhao, J. Liu, S. Wang, R. Tao, R. Xin, and J. Zhang, “A general deep learning framework for network reconstruction and dynamics learning,” Applied Network Science, vol. 4, no. 1, pp. 1–17, 2019.
-  T. Kipf, E. Fetaya, K.-C. Wang, M. Welling, and R. Zemel, “Neural relational inference for interacting systems,” arXiv preprint arXiv:1802.04687, 2018.
-  B. Yu, H. Yin, and Z. Zhu, “Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting,” arXiv preprint arXiv:1709.04875, 2017.
-  R. A. Holley, T. M. Liggett et al., “Ergodic theorems for weakly interacting infinite systems and the voter model,” The annals of probability, vol. 3, no. 4, pp. 643–663, 1975.
-  K. Kaneko, “Overview of coupled map lattices,” Chaos: An Interdisciplinary Journal of Nonlinear Science, vol. 2, no. 3, pp. 279–282, 1992.