1 Introduction
Graph neural networks (GNNs) have emerged in recent years as an effective tool for analyzing graph-structured data [29, 8, 37, 33]. These architectures bring the expressive power of deep learning to non-Euclidean data such as graphs, and have demonstrated convincing performance in several graph mining tasks, including graph classification [21], link prediction [36], and community detection [4, 5]. So far, GNNs have mainly been applied to tasks that involve static graphs. However, most real-world networks are dynamic, i.e., nodes and edges are added and removed over time. Despite the success of GNNs in various applications, it is still not clear whether these models are useful for learning in dynamic scenarios. Although some models have been applied to this type of data, most studies have focused on predicting a low-dimensional representation (i.e., embedding) of the graph for the next time step [16, 17, 22, 9, 30, 27]. These representations can then be used in downstream tasks [16, 9, 20, 27]. However, predicting the topology of the graph (and not its low-dimensional representation) is a task that has not been properly addressed yet.
Graph generation, another important task in graph mining, has attracted a lot of attention from the deep learning community in recent years. The objective of this task is to generate graphs that exhibit specific properties, e.g., degree distribution, node triangle participation, community structure, etc. Traditionally, graphs are generated by some network generation model such as the Erdős–Rényi model. These models focus on modeling one or more network properties and neglect the others. Neural network approaches, on the other hand, can better capture the properties of graphs since they follow a supervised approach [35, 3, 10]. These architectures minimize a loss function such as the reconstruction error of the adjacency matrix or the value of a graph comparison algorithm.
Capitalizing on recent developments in neural networks for graph-structured data and graph generation, we propose in this paper, to the best of our knowledge, the first framework for predicting the evolution of the topology of networks over time. The proposed framework can be viewed as an encoder-predictor-decoder architecture. The “encoder” network takes a sequence of graphs as input and uses a GNN to produce a low-dimensional representation for each one of these graphs. These representations capture structural information about the input graphs. Then, the “predictor” network employs a recurrent architecture which predicts a representation for the future instance of the graph. The “decoder” network corresponds to a graph generation model which utilizes the predicted representation and generates the topology of the graph for the next time step. The proposed model is evaluated over a series of experiments on synthetic and real-world datasets, and is compared against several baseline methods. To measure the effectiveness of the proposed model and the baselines, the generated graphs need to be compared with the ground-truth graph instances using some graph comparison algorithm. To this end, we use the Weisfeiler-Lehman subtree kernel, which scales to very large graphs and has achieved state-of-the-art results on many graph datasets [31]. Results show that the proposed model yields good performance and, in most cases, outperforms the competing methods.
The rest of this paper is organized as follows. Section 2 provides an overview of the related work and elaborates on our contribution. Section 3 introduces some preliminary concepts and definitions related to the graph generation problem, followed by a detailed presentation of the components of the proposed model. Section 4 evaluates the proposed model on several tasks. Finally, Section 5 concludes.
2 Related Work
Our work is related to random graph models, which are very popular in graph theory and network science. The Erdős–Rényi model [7], the preferential attachment model [2], and the Kronecker graph model [14] are some typical examples of such models. To predict how a graph structure will evolve over time, the values of the parameters of these models can be estimated from the observed graph instances, and the fitted models can then be used to generate graphs.
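As a concrete illustration of this fit-then-generate procedure, here is a minimal sketch for the Erdős–Rényi case (pure Python; the function names are ours, not from any of the cited implementations):

```python
import random

def fit_er_probability(n, m):
    """Estimate the Erdos-Renyi edge probability p from an observed
    undirected graph with n nodes and m edges."""
    return m / (n * (n - 1) // 2)

def generate_er(n, p, seed=None):
    """Sample a graph from G(n, p), returned as a set of edges (i, j), i < j."""
    rng = random.Random(seed)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

# Fit p on the last observed snapshot, then sample a candidate
# next snapshot of the same size.
p = fit_er_probability(100, 495)          # 495 of 4950 possible edges
predicted_edges = generate_er(100, p, seed=0)
```

The same recipe applies to the other models: estimate their parameters on past snapshots, then grow a graph under the fitted mechanism.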
Other work along a similar direction includes neural network models which combine GNNs with RNNs [30, 19, 27]. These models use GNNs to extract features from a graph and RNNs for sequence learning from the extracted features. Other similar approaches do not use GNNs, but instead perform random walks or employ deep autoencoders [22, 9]. All these works focus on predicting how the node representations or the graph representations will evolve over time. However, some applications require predicting the topology of the graph, and not just its low-dimensional representation. The proposed model constitutes a first step towards this objective.
3 EvoNet: A Neural Network for Predicting Graph Evolution
In this Section, we first introduce basic concepts from graph theory and define our notation. We then present EvoNet, the proposed framework for predicting the evolution of graphs. Since the proposed model comprises several components, we describe each of these components in detail.
3.1 Preliminaries
Let $G = (V, E)$ be an undirected, unweighted graph, where $V$ is the set of nodes and $E$ is the set of edges. We will denote by $n$ the number of vertices and by $m$ the number of edges. We define a permutation of the nodes of $G$ as a bijective function $\pi : V \rightarrow V$, under which any graph property of $G$ should be invariant. We are interested in the topology of a graph, which is described by its adjacency matrix $A^{\pi}$ under a specific ordering $\pi$ of the nodes (for simplicity, the ordering $\pi$ will be omitted in what follows). Each entry of the adjacency matrix is defined as $A_{ij} = 1$ if $\{v_i, v_j\} \in E$ and $A_{ij} = 0$ otherwise, where $v_i$ denotes the $i$-th node under the ordering. In what follows, we consider the “topology”, “structure” and “adjacency matrix” of a graph equivalent to each other.
In many real-world networks, besides the adjacency matrix that encodes connectivity information, nodes and/or edges are annotated with feature vectors, which we denote as $X \in \mathbb{R}^{n \times d}$ and $L \in \mathbb{R}^{m \times d'}$, respectively. Hence, a graph object can also be written in the form of a triplet $G = (A, X, L)$. In this paper, we use this triplet to represent all graphs. If a graph does not contain node/edge attributes, we assign attributes to it based on local properties (e.g., degree, core number, number of triangles, etc.).
An evolving network is a graph whose topology changes as a function of time. Interestingly, almost all real-world networks evolve over time by adding and removing nodes and/or edges. For instance, in social networks, people make and lose friends over time, while some people join the network and others leave it. An evolving graph is a sequence of graphs $\{G_1, G_2, \ldots, G_T\}$, where $G_t$ represents the state of the evolving graph at time step $t$. It should be noted that not only nodes and edges can evolve over time, but also node and edge attributes. However, in this paper, we keep node and edge attributes fixed, and we allow only the node and edge sets of the graphs to change as a function of time. The sequence can thus be written as $\{(A_t, X, L)\}_{t=1}^{T}$. We are often interested in predicting what “comes next” in a sequence, based on data encountered in previous time steps. In our setting, this is equivalent to predicting $G_{T+1}$ based on the sequence $\{G_1, \ldots, G_T\}$. In sequential modeling, we usually do not take into account the whole sequence, but only the instances within a fixed small window of size $w$ before $G_{T+1}$, which we denote as $\{G_{T-w+1}, \ldots, G_T\}$. We refer to these instances as the graph history. The problem is then to predict the topology of $G_{T+1}$ given its history.
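The notions of snapshot sequence and graph history can be made concrete with a small sketch (plain Python; the tuple representation and helper name are illustrative, not part of the model):

```python
def history(sequence, t, w):
    """Return the w snapshots that precede time step t (the graph
    history used to predict the snapshot at step t)."""
    assert t >= w, "not enough past snapshots"
    return sequence[t - w:t]

# Toy evolving graph: a path that grows by one node per step.
seq = []
for n in range(2, 8):
    nodes = set(range(n))
    edges = {(i, i + 1) for i in range(n - 1)}
    seq.append((nodes, edges))

past = history(seq, t=5, w=3)   # the snapshots at steps 2, 3 and 4
```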
3.2 Proposed Architecture
The proposed architecture is very similar to a typical sequence learning framework. The main difference lies in the fact that, instead of vectors, the elements of the sequence in our setting correspond to graphs. The combinatorial nature of graph-structured data increases the complexity of the problem and calls for more sophisticated architectures than the ones employed in traditional sequence learning tasks. Specifically, the proposed model consists of three components: (1) a graph neural network (GNN) which generates a vector representation for each graph instance, (2) a recurrent neural network (RNN) for sequential learning, and (3) a graph generation model for predicting the graph topology at the next time step. This framework can also be viewed as an encoder-predictor-decoder model. The first two components correspond to an encoder network which maps the sequence of graphs into a sequence of vectors, and a second network that predicts a representation for the next graph in the sequence. The decoder network consists of the last component of the model and transforms the above representation into a graph. Figure 1 illustrates the proposed model. In what follows, we present the above three components of EvoNet.
3.2.1 Encoding Graphs using Graph Neural Networks
Graph Neural Networks (GNNs) have recently emerged as a dominant paradigm for performing machine learning tasks on graphs, and several GNN variants have been proposed in the past years. All these models employ some message passing procedure to update node representations: each node updates its representation by aggregating the representations of its neighbors. After $T$ iterations of the message passing procedure, each node obtains a feature vector which captures the structural information within its $T$-hop neighborhood. Then, GNNs compute a feature vector for the entire graph using some permutation invariant readout function, such as summing the representations of all the nodes of the graph. As described below, the learning process can be divided into three phases: (1) aggregation, (2) update, and (3) readout.
Aggregation.
In this phase, the network computes a message for each node of the graph. To compute the message for a node, the network aggregates the representations of its neighbors. Formally, at iteration $t+1$, a message vector $m_v^{t+1}$ is computed from the representations of the neighbors $\mathcal{N}(v)$ of node $v$:

$$m_v^{t+1} = \text{AGGREGATE}\big(\{h_u^t : u \in \mathcal{N}(v)\}\big) \qquad (1)$$
where AGGREGATE is a permutation invariant function. Furthermore, for the network to be end-to-end trainable, this function needs to be differentiable. In our implementation, AGGREGATE is a multi-layer perceptron (MLP) followed by a sum function.
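A minimal sketch of this aggregation phase (pure Python on toy 2-dimensional features; the identity function stands in for the trained MLP):

```python
def aggregate(neighbor_feats, mlp):
    """Compute a node's message: apply the MLP to each neighbor
    representation, then sum elementwise. The sum makes the result
    independent of the order of the neighbors."""
    transformed = [mlp(h) for h in neighbor_feats]
    return [sum(col) for col in zip(*transformed)]

# Toy 2-dimensional features; the identity stands in for a trained MLP.
msg = aggregate([[1.0, 2.0], [3.0, 4.0]], mlp=lambda h: h)  # [4.0, 6.0]
```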
Update.
The new representation $h_v^{t+1}$ of node $v$ is then computed by combining its current feature vector $h_v^t$ with the message vector $m_v^{t+1}$:

$$h_v^{t+1} = \text{UPDATE}\big(h_v^t, m_v^{t+1}\big) \qquad (2)$$
The UPDATE function also needs to be differentiable. To combine the two feature vectors (i.e., $h_v^t$ and $m_v^{t+1}$), we employ the Gated Recurrent Unit proposed in [18]:

$$h_v^{t+1} = \text{GRU}\big(h_v^t, m_v^{t+1}\big) \qquad (3)$$

Omitting biases for readability, we have:

$$\begin{aligned} z_v^{t+1} &= \sigma\big(W_z\, m_v^{t+1} + U_z\, h_v^t\big) \\ r_v^{t+1} &= \sigma\big(W_r\, m_v^{t+1} + U_r\, h_v^t\big) \\ \tilde{h}_v^{t+1} &= \tanh\big(W\, m_v^{t+1} + U\,(r_v^{t+1} \odot h_v^t)\big) \\ h_v^{t+1} &= (1 - z_v^{t+1}) \odot h_v^t + z_v^{t+1} \odot \tilde{h}_v^{t+1} \end{aligned} \qquad (4)$$

where the $W$ and $U$ matrices are trainable weight matrices, $\sigma(\cdot)$ is the sigmoid function, and $r_v$ and $z_v$ are the parameters of the reset and update gates for a given node.
Readout.
The Aggregation and Update steps are repeated for $T$ time steps. The emerging node representations $\{h_v^T : v \in V\}$ are aggregated into a single vector which corresponds to the representation of the entire graph:

$$h_G = \text{READOUT}\big(\{h_v^T : v \in V\}\big) \qquad (5)$$

where READOUT is a differentiable and permutation invariant function. This vector captures the topology of the input graph. To generate $h_G$, we utilize Set2Set [32]. Other functions, such as the sum function, were also considered, but were found less effective in preliminary experiments.
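To make the Update and Readout phases concrete, here is a toy scalar sketch of the GRU update of Eq. (4) followed by a sum readout; the constant weights are illustrative (not trained parameters), and the sum stands in for the learned Set2Set readout:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_update(h, m, wz=0.5, uz=0.5, wr=0.5, ur=0.5, w=0.5, u=0.5):
    """Eq. (4) on scalar features, biases omitted; the weights are
    illustrative constants, not a trained model."""
    z = sigmoid(wz * m + uz * h)               # update gate
    r = sigmoid(wr * m + ur * h)               # reset gate
    h_cand = math.tanh(w * m + u * (r * h))    # candidate state
    return (1.0 - z) * h + z * h_cand

def readout(node_states):
    """Sum readout: permutation invariant, a simple stand-in for the
    Set2Set function used in the paper."""
    return sum(node_states)

# One update round on a toy 3-node graph, then the graph-level vector.
states = {0: 0.1, 1: 0.4, 2: -0.2}
msgs = {0: 0.4, 1: -0.1, 2: 0.1}   # messages from the aggregation phase
states = {v: gru_update(states[v], msgs[v]) for v in states}
h_graph = readout(states.values())
```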
3.2.2 Predicting Graph Representations using Recurrent Neural Networks
Given an input sequence of graphs, we use the GNN described above to generate a vector representation for each graph in the sequence. Then, to process this sequence of representations, we use a recurrent neural network (RNN). RNNs use their internal state (i.e., memory) to preserve sequential information. These networks exhibit temporal dynamic behavior and can capture correlations between sequential events. Specifically, an RNN processes the input sequence in a series of time steps (i.e., one for each element in the sequence). For a given time step $t$, the hidden state $h_t$ is updated as:

$$h_t = f\big(h_{t-1}, x_t\big) \qquad (6)$$

where $f$ is a nonlinear activation function and $x_t$ is the input at time step $t$. A generative RNN outputs a probability distribution over the next element of the sequence given its current state $h_t$. RNNs can be trained to predict the next element (e.g., graph) in the sequence, i.e., they can learn the conditional distribution $p(G_{T+1} \mid G_1, \ldots, G_T)$. In our implementation, we use a Long Short-Term Memory (LSTM) network that sequentially reads the vectors $h_{G_1}, \ldots, h_{G_T}$ produced by the GNN and generates a vector $h_{G_{T+1}}$ that represents the embedding of $G_{T+1}$. The embedding incorporates topological information and will serve as input to the graph generation module. The GNN component presented above can be seen as a form of encoder network: it takes as input a sequence of graphs and projects them into a low-dimensional space. The RNN component then takes the sequence of graph representations as input and predicts the representation of the graph at the next time step.
3.2.3 Graph Generation
To generate a graph that corresponds to the evolution of the current graph instance, we capitalize on a recently-proposed framework for learning generative models of graphs [35]. This framework models a graph in an autoregressive manner (i.e., as a sequence of additions of new nodes and edges), to capture the complex joint probability of all nodes and edges in the graph. Formally, given a node ordering $\pi$, it considers a graph $G$ as a sequence of vectors:

$$S^{\pi} = \big(S_1^{\pi}, S_2^{\pi}, \ldots, S_n^{\pi}\big) \qquad (7)$$

where $S_i^{\pi}$ is the adjacency vector between node $\pi(i)$ and the nodes preceding it, $\{\pi(1), \ldots, \pi(i-1)\}$. We adapt this framework to our supervised setting.
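A small sketch of the sequence representation of Eq. (7) (pure Python; the helper name is ours):

```python
def adjacency_vectors(n, edges, order):
    """Represent a graph as the adjacency-vector sequence of Eq. (7):
    the i-th vector records which of the previously placed nodes are
    connected to node order[i]."""
    seqs = []
    for i in range(1, n):
        v = order[i]
        s = [1 if (min(order[j], v), max(order[j], v)) in edges else 0
             for j in range(i)]
        seqs.append(s)
    return seqs

# Triangle on nodes 0, 1, 2.
S = adjacency_vectors(3, {(0, 1), (0, 2), (1, 2)}, order=[0, 1, 2])
```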
The objective of the generative model is to maximize the likelihood of the observed graphs of the training set. Since a graph $G$ can be expressed as a sequence of adjacency vectors $S^{\pi}$ (given a node ordering $\pi$), we can consider instead the distribution $p(S^{\pi})$, which can be decomposed in an autoregressive manner into the following product:

$$p(S^{\pi}) = \prod_{i=1}^{n} p\big(S_i^{\pi} \mid S_1^{\pi}, \ldots, S_{i-1}^{\pi}\big) \qquad (8)$$
This product can be parameterized by a neural network. Specifically, following [35], we use a hierarchical RNN consisting of two levels: (1) a graph-level RNN which maintains the state of the graph and generates new nodes, thus learning the distribution $p(S_i^{\pi} \mid S_{<i}^{\pi})$, and (2) an edge-level RNN which generates the links between each generated node and the previously-generated nodes, thus learning the distribution $p(S_{i,j}^{\pi} \mid S_{i,<j}^{\pi})$. More formally, we have:

$$h_i = f_{\text{trans}}\big(h_{i-1}, S_{i-1}^{\pi}\big), \qquad \theta_i = f_{\text{out}}\big(h_i\big) \qquad (9)$$

where $h_i$ is the state vector of the graph-level RNN that encodes the current state of the graph sequence and is initialized by $h_{G_{T+1}}$, the predicted embedding of the graph at the next time step $T+1$. The output of the graph-level RNN serves as the initial state of the edge-level RNN. The resulting values are then squashed by a sigmoid function to produce the probabilities of existence of the candidate edges. In other words, the model learns the probability distribution of the existence of edges, and a graph can then be sampled from this distribution, which serves as the predicted topology for the next time step $T+1$.
To train the model, the cross-entropy loss between the existence of each edge and its predicted probability of existence is minimized:

$$L = -\sum_{i=1}^{n} \sum_{j<i} \Big[ S_{i,j}^{\pi} \log \theta_{i,j} + \big(1 - S_{i,j}^{\pi}\big) \log\big(1 - \theta_{i,j}\big) \Big] \qquad (10)$$
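A loss in the spirit of Eq. (10) can be sketched as follows, assuming the model outputs one logit per candidate edge (illustrative helper with mean reduction, not the paper's implementation):

```python
import math

def bce_loss(edge_logits, true_edges):
    """Binary cross-entropy between predicted edge probabilities
    (sigmoid of each logit) and the ground-truth 0/1 edge labels."""
    loss = 0.0
    for logit, y in zip(edge_logits, true_edges):
        p = 1.0 / (1.0 + math.exp(-logit))   # probability of the edge
        loss -= y * math.log(p) + (1 - y) * math.log(1.0 - p)
    return loss / len(true_edges)

loss = bce_loss([0.0, 2.0, -2.0], [1, 1, 0])
```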
Node ordering.
It should be mentioned that node ordering has a large impact on the efficiency of the above generative model: a good ordering can help us avoid exploring all possible node permutations in the sample space. Different strategies, such as the Breadth-First Search (BFS) ordering scheme, can be employed to improve scalability [35]. However, in our setting, the nodes are distinguishable, i.e., a node of $G_t$ and the corresponding node of $G_{t+1}$ correspond to the same entity. Hence, we can impose an ordering on the nodes of the first instance of our sequence of graphs, and then utilize the same node ordering for the graphs of all subsequent time steps (we place new nodes at the end of the ordering).
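This fixed-ordering scheme can be sketched as follows (hypothetical helper; new nodes are sorted here only to make the example deterministic):

```python
def update_ordering(order, current_nodes):
    """Carry the node ordering over to the next snapshot: surviving
    nodes keep their relative positions, newly appeared nodes are
    appended at the end."""
    kept = [v for v in order if v in current_nodes]
    new = sorted(current_nodes - set(order))
    return kept + new

order_t = ["a", "b", "c"]
order_t1 = update_ordering(order_t, {"a", "c", "d"})  # "b" left, "d" joined
```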
4 Experiments and Results
In this Section, we evaluate the performance of EvoNet on synthetic and real-world datasets for predicting the evolution of graph topology, and we compare it against several baseline methods.
4.1 Datasets
We use both synthetic and real-world datasets. The synthetic datasets consist of sequences of graphs where there is a specific pattern governing how each graph emerges from the previous graph instance, i.e., some graph structure is added or removed at each time step. The real-world datasets correspond to single graphs whose nodes incorporate temporal information. We decompose these graphs into sequences of snapshots based on their timestamps. The size of the graphs in each sequence ranges from tens of nodes to several thousand nodes.
Path graph.
A path graph can be drawn such that all of its vertices and edges lie on a straight line. We denote a path graph on $n$ nodes by $P_n$; in other words, the path graph $P_n$ is a tree with two nodes of degree $1$ and the other $n-2$ nodes of degree $2$. We consider two scenarios; in both cases, the first graph in the sequence is a small path graph. In the first scenario, at each time step, we add one new node to the previous graph instance, together with an edge between the new node and the last node according to the previous ordering. The second scenario follows the same pattern; however, every three steps, instead of adding a new node, we remove the first node according to the previous ordering (along with its edge).
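The first scenario can be reproduced in a few lines of Python (snapshots stored as (nodes, edges) pairs; the helper name is ours):

```python
def path_sequence(start, steps):
    """First scenario: start from a path graph with `start` nodes and
    append one node plus one connecting edge per time step."""
    seq = []
    for n in range(start, start + steps):
        nodes = list(range(n))
        edges = {(i, i + 1) for i in range(n - 1)}
        seq.append((nodes, edges))
    return seq

seq = path_sequence(start=3, steps=5)  # P_3, P_4, ..., P_7
```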
Cycle graph.
A cycle graph $C_n$ is a graph on $n$ nodes containing a single cycle through all the nodes. Note that if we add an edge between the first and the last node of the path graph $P_n$, we obtain $C_n$. Similar to the above case, we use a small cycle graph as the first graph in the sequence, and we again consider two scenarios. In the first scenario, at each time step, we increase the size of the cycle: from $C_n$, we obtain $C_{n+1}$ by adding a new node and two edges, one between the new node and the first node according to the previous ordering, and one between the new node and the last node according to the previous ordering. In the second scenario, every three steps, we remove the first node according to the ordering (along with its edges) and add an edge between the second and the last nodes according to the ordering.
Ladder graph.
The ladder graph $L_n$ is a planar graph with $2n$ vertices and $3n-2$ edges. It is the Cartesian product of two path graphs: $L_n = P_n \times P_2$. As the name indicates, the ladder graph can be drawn as a ladder consisting of two rails and $n$ rungs between them. We consider the following scenario: at each time step, we attach one rung ($P_2$) to the tail of the ladder (the two nodes of the new rung are connected to the two last nodes according to the ordering).
For all graphs, we set the attribute of each node equal to its degree, while we set the attribute of all edges to the same constant value (e.g., $1$).
4.1.1 RealWorld Datasets
Besides synthetic datasets, we also evaluate EvoNet on six real-world datasets (all our datasets are publicly available through the websites of [15] and [28]). They can be divided into three groups based on the nature of their sources.
Bitcoin transaction networks.
Contains graphs derived from the Bitcoin transaction network, a who-trusts-whom network of people who trade using Bitcoin [13, 12]. Due to the anonymity of Bitcoin users, platforms seek to maintain a record of users’ reputation in Bitcoin trades to avoid fraudulent transactions. The nodes of the network represent Bitcoin users, while an edge indicates that a trade has been executed between its two endpoint users. Each edge is annotated with an integer between $-10$ and $10$, which indicates the rating given by one user to the other. The datasets are collected separately from two platforms: Bitcoin OTC and Bitcoin Alpha. For all graphs in these two datasets, we set the attribute of each node equal to the average rating that the user has received from the rest of the community, and the attribute of each edge equal to the rating between its two endpoint users.
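The node-attribute computation described above can be sketched as follows (pure Python; the triple-based input format is an assumption for illustration):

```python
def average_received_rating(trades):
    """Node attribute for the Bitcoin graphs: the mean rating each user
    received from the community. `trades` is a list of
    (rater, ratee, rating) triples (an assumed input format)."""
    totals, counts = {}, {}
    for _, ratee, rating in trades:
        totals[ratee] = totals.get(ratee, 0) + rating
        counts[ratee] = counts.get(ratee, 0) + 1
    return {user: totals[user] / counts[user] for user in totals}

attrs = average_received_rating([("u1", "u2", 4), ("u3", "u2", 2),
                                 ("u2", "u1", -1)])  # u2 -> 3.0, u1 -> -1.0
```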
Social networks.
Contains graphs generated from an online social network at the University of California, Irvine [24, 25]. It comprises two datasets: one derived from the private message exchanges between users, and the other based on the same user community but focusing on its activity in the forum, i.e., public comments on specific topics. The nodes of the networks represent users, and the edges represent a message exchange or a shared interest in a topic. All graphs in these two datasets are unweighted and unlabeled, so we simply set the attribute of each node equal to its degree.
Email exchange networks.
Contains two datasets derived from two sources. The first is generated from the email data of a large European research institution [26], i.e., all incoming and outgoing email between members of the institution. The second is collected from the 2016 Democratic National Committee (DNC) email leak [28], where the links denote email exchanges between DNC members. As with the social network datasets, the graphs in these two datasets are unweighted and unlabeled, so we treat them in the same way.
More details about these datasets are given in Table 1.
  Dataset       Timespan (begin)   Timespan (end)
  BTC-OTC       2010-11-08         2016-01-25
  BTC-Alpha     2010-11-08         2016-01-22
  UCI-Forum     2004-05-15         2004-10-26
  UCI-Message   2004-04-15         2004-10-26
  EU-Core       1970-01-01         1972-03-14
  DNC           2013-09-16         2016-05-25
4.2 Baselines
We compare EvoNet against several random graph models: (1) the Erdős–Rényi model [7], (2) the Stochastic Block Model [11, 1], (3) the Barabási–Albert model [2], and (4) the Kronecker graph model [14]. These are traditional methods for studying the topology evolution of temporal graphs, each positing a driving mechanism behind the evolution. More precisely, these models begin with an initial graph and a rule for connecting newly emerged nodes to existing ones, and then gradually grow the initial graph to the expected size following this rule. For example, in the Barabási–Albert model, we begin with a triangle and follow the preferential attachment rule, in which the probability of an edge between a newly added node and an existing one is proportional to the current degree of the existing node.
4.3 Evaluation Metric and Evaluation Setup
4.3.1 Synthetic Datasets
In general, it is very challenging to measure the performance of a graph generative model, since doing so requires comparing two graphs to each other, a long-standing problem in mathematics and computer science [6]. We propose to use graph kernels to compare graphs to each other, and thus to evaluate the quality of the generated graphs. Graph kernels have emerged as one of the most effective tools for graph comparison in recent years [23]. A graph kernel is a symmetric positive semidefinite function which takes two graphs as input and measures their similarity. In our experiments, we employ the Weisfeiler-Lehman subtree kernel, which counts label-based subtree patterns [31]. Note that we also normalize the kernel values, so the emerging values lie between $0$ and $1$.
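For illustration, a compact, unoptimized version of the normalized Weisfeiler-Lehman subtree kernel can be written as follows (string concatenation stands in for the label-compression step of the actual kernel):

```python
from collections import Counter

def wl_features(adj, labels, h):
    """Histogram of Weisfeiler-Lehman subtree patterns after h rounds of
    label refinement. `adj` maps each node to its set of neighbors and
    `labels` maps each node to an initial string label."""
    hist = Counter(labels.values())
    cur = dict(labels)
    for _ in range(h):
        cur = {v: cur[v] + "|" + ",".join(sorted(cur[u] for u in adj[v]))
               for v in adj}
        hist.update(cur.values())
    return hist

def wl_kernel(g1, g2, h=3):
    """Normalized WL subtree kernel: dot product of the two pattern
    histograms, scaled to lie in [0, 1]."""
    f1, f2 = wl_features(*g1, h), wl_features(*g2, h)
    k12 = sum(f1[p] * f2[p] for p in f1)
    k11 = sum(c * c for c in f1.values())
    k22 = sum(c * c for c in f2.values())
    return k12 / (k11 * k22) ** 0.5

# Degree labels, as used for the unlabeled datasets in the paper.
path = ({0: {1}, 1: {0, 2}, 2: {1}}, {0: "1", 1: "2", 2: "1"})
tri = ({0: {1, 2}, 1: {0, 2}, 2: {0, 1}}, {0: "2", 1: "2", 2: "2"})
sim_same = wl_kernel(path, path)   # identical graphs
sim_diff = wl_kernel(path, tri)
```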
As previously mentioned, each dataset corresponds to a sequence of graphs, where each sequence represents the evolution of the topology of a single graph over time. We use the first part of each sequence of graph instances for training, and the remaining instances serve as our test set. The window size $w$ is fixed, which means that we feed $w$ consecutive graph instances to the model and predict the topology of the instance that directly follows the last of these input instances. Each graph of the test set, along with its corresponding predicted graph, is then passed on to the Weisfeiler-Lehman subtree kernel, which measures their similarity and thus the performance of the model.
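The sliding-window setup can be sketched as follows (integers stand in for graph snapshots; the 80/20 split ratio is illustrative):

```python
def sliding_windows(sequence, w):
    """Build (history, target) pairs: each example feeds w consecutive
    snapshots to the model and asks it to predict the one that follows."""
    return [(sequence[i:i + w], sequence[i + w])
            for i in range(len(sequence) - w)]

seq = list(range(10))                  # stand-ins for graph snapshots
pairs = sliding_windows(seq, w=3)      # ([0, 1, 2], 3), ([1, 2, 3], 4), ...
split = int(0.8 * len(pairs))          # illustrative 80/20 split
train, test = pairs[:split], pairs[split:]
```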
The hyperparameters of EvoNet are chosen based on its performance on a validation set. The parameters of the random graph models are set under the principle that the generated graphs need to share similar properties with the ground-truth graphs. For instance, in the case of the Erdős–Rényi model, the probability of adding an edge between two nodes is set to a value such that the density of the generated graph is identical to that of the ground-truth graph. However, since the model should not have access to such information (e.g., the density of the ground-truth graph), we use an MLP to predict this property based on past data (i.e., the number of nodes and edges of the previous graph instances). This is on par with how the proposed model computes the size of the graphs to be generated (i.e., also using an MLP).
4.4 Results
Table 2: Mean and percentile kernel similarities between predicted and ground-truth graphs for each model (ER, SBM, BA, Power, Kron-Rand, Kron-Fix, and EvoNet) on the BTC-OTC, BTC-Alpha, UCI-Forum, UCI-Message, EU-Core, and DNC datasets.
We next present the experimental results and compare the performance of EvoNet against that of the baselines.
Synthetic datasets.
Figure 2 illustrates the experimental results on the synthetic datasets. Since the graph structures contained in the synthetic datasets are fairly simple, it is easy for the model to generate graphs very similar to the ground-truth graphs (normalized kernel values close to $1$). Hence, instead of reporting the kernel values, we compare the size of the predicted graphs against that of the ground-truth graphs. The figures visualize the growth of the graph size for the real sequence (orange) and the predicted sequence (blue). For path graphs, in spite of a small variance, the predicted graph sizes are accurate. For ladder graphs, we observe a mismatch at the beginning of the sequence for small graphs, but the two curves then coincide for larger graphs. This mismatch on small graphs may be due to the more complex structures in ladder graphs, such as cycles; this is supported by the results for cycle graphs (right figure), where we completely mispredict the size of the graphs. In fact, the model fails to reconstruct the cycle structure in the prediction, with all the predicted graphs being path graphs. This failure could be related to the limitations of GNNs discussed in [34].
Dynamic graph embedding.
It is also important to check whether, in our encoder-decoder framework, the learned code, which we refer to as the “dynamic graph embedding”, is really meaningful (by “meaningful”, we mean that the embedding captures both the structural features of the graph class and the temporal evolution of the series, and can thus be used to predict the graph at a future time step). We design two experiments on synthetic graphs to verify the effectiveness of the embedding. In the first experiment, we take as input two sequences of graphs belonging to the same class but following different evolution dynamics; specifically, we use the path graphs with and without node removal. In the second experiment, we fix the evolution dynamics and vary the structure of the graphs: we use path graphs and ladder graphs, both following the same evolution of increasing size. The dynamic graph embeddings learned in these experiments are visualized in Figure 3, where each point is the projection of the embedding of one graph in the sequence onto a two-dimensional space obtained by Principal Component Analysis (PCA). As the figure shows, the embeddings learned from different datasets, whether differing in dynamics or in structure, are well separated, which suggests that the embeddings are meaningful. Moreover, embeddings from the same dataset form characteristic patterns, such as lines, which suggests a temporal dependency between them, as they are learned from sequential data.
Real-world datasets.
Finally, we analyze the performance of our model on the six real-world datasets. We compute the similarity between each pair of real and predicted graphs in the sequence and plot a histogram of the distribution of these similarities. Due to the page limit, we only show the histograms for the BTC-OTC and UCI-Message datasets in Figure 4. Among the traditional random graph models, the Kronecker graph model (with learnable parameters) performs best; however, on both datasets, the proposed EvoNet (in blue) outperforms all other methods by a large margin. Detailed statistics and results for the other datasets can be found in Table 2, which shows that the proposed model consistently outperforms the traditional methods (interested readers are referred to the supplementary material for a full illustration of the results on all datasets).
Overall, despite failing to capture some specific structures in the synthetic datasets, our experiments demonstrate the advantage of EvoNet over traditional random graph models for predicting the evolution of dynamic graphs, especially for real-world data with complex structures.
5 Conclusion
In this paper, we proposed EvoNet, a model that predicts the evolution of dynamic graphs following an encoder-decoder framework. The proposed model consists of three components: (1) a graph neural network which transforms graphs into vectors, (2) a recurrent architecture which reads the input sequence of graph embeddings and predicts the embedding of the graph at the next time step, and (3) a graph generation model which takes this embedding as input and predicts the topology of the graph. We also proposed an evaluation methodology for this task which capitalizes on the well-established family of graph kernels, and applied it to demonstrate the predictive power of EvoNet. Experiments show that the proposed model outperforms traditional random graph methods on both synthetic and real-world datasets. We should note that there is still room for improvement; improving the efficiency of the proposed model and its scalability to large graphs are potential directions for future work.
References
Mixed membership stochastic blockmodels. Journal of Machine Learning Research 9, pp. 1981–2014.
Statistical mechanics of complex networks. Reviews of Modern Physics 74(1), pp. 47.
NetGAN: generating graphs via random walks. arXiv preprint arXiv:1803.00816.
Community detection with graph neural networks. stat 1050, pp. 27.
Supervised community detection with line graph neural networks. arXiv preprint arXiv:1705.08415.
Thirty years of graph matching in pattern recognition. International Journal of Pattern Recognition and Artificial Intelligence 18(03), pp. 265–298.
On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci. 5(1), pp. 17–60.
Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, pp. 1263–1272.
DynGEM: deep embedding method for dynamic graphs. arXiv preprint arXiv:1805.11273.
Graphite: iterative generative modeling of graphs. arXiv preprint arXiv:1803.10459.
Stochastic blockmodels: first steps. Social Networks 5(2), pp. 109–137.
REV2: fraudulent user prediction in rating platforms. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pp. 333–341.
Edge weight prediction in weighted signed networks. In 2016 IEEE 16th International Conference on Data Mining (ICDM), pp. 221–230.
Kronecker graphs: an approach to modeling networks. Journal of Machine Learning Research 11, pp. 985–1042.
SNAP Datasets: Stanford large network dataset collection. http://snap.stanford.edu/data.
DeepGraph: graph structure predicts network growth. arXiv preprint arXiv:1610.06251.
Attributed network embedding for learning in a dynamic environment. In Proceedings of the 2017 ACM Conference on Information and Knowledge Management, pp. 387–396.
Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493.
Dynamic graph convolutional networks. arXiv preprint arXiv:1704.06199.
Subgraph pattern neural networks for high-order graph evolution prediction. In Thirty-Second AAAI Conference on Artificial Intelligence.
Weisfeiler and Leman go neural: higher-order graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 4602–4609.
Continuous-time dynamic network embeddings. In Companion Proceedings of The Web Conference 2018, pp. 969–976.
Graph kernels: a survey. arXiv preprint arXiv:1904.12218.
Clustering in weighted networks. Social Networks 31, pp. 155–163.
Triadic closure in two-mode networks: redefining the global and local clustering coefficients. Social Networks 35(2), pp. 159–167.
Motifs in temporal networks. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining (WSDM ’17), pp. 601–610.
EvolveGCN: evolving graph convolutional networks for dynamic graphs. arXiv preprint arXiv:1902.10191.
The network data repository with interactive graph analytics and visualization. In AAAI.
The graph neural network model. IEEE Transactions on Neural Networks 20(1), pp. 61–80.
Structured sequence modeling with graph convolutional recurrent networks. In International Conference on Neural Information Processing, pp. 362–373.
Weisfeiler-Lehman graph kernels. Journal of Machine Learning Research 12, pp. 2539–2561.
Order matters: sequence to sequence for sets. arXiv preprint arXiv:1511.06391.
A comprehensive survey on graph neural networks. arXiv preprint arXiv:1901.00596.
How powerful are graph neural networks? arXiv preprint arXiv:1810.00826.
GraphRNN: generating realistic graphs with deep autoregressive models. arXiv preprint arXiv:1802.08773.
Link prediction based on graph neural networks. In Advances in Neural Information Processing Systems, pp. 5165–5175.
Graph neural networks: a review of methods and applications. arXiv preprint arXiv:1812.08434.
Appendix A Extra Experiment Results With Real Datasets
A.1 Histogram of Similarities
See Figure S1.
Appendix B Extra Experiment Results with Synthetic Datasets
B.1 Graph Size Comparison
See Figure S2.
B.2 Histogram of Similarities
See Figure S3.
B.3 Some Examples of Predicted Graphs
See Figures S4, S5, S6, S7, S8 and S9, respectively, for path graphs, small ladder graphs, large ladder graphs, cycle graphs, path graphs with removal, and cycle graphs with added structures.