EvoNet: A Neural Network for Predicting the Evolution of Dynamic Graphs

03/02/2020 ∙ by Changmin Wu, et al. ∙ Ecole Polytechnique

Neural networks for structured data like graphs have been studied extensively in recent years. To date, the bulk of research activity has focused mainly on static graphs. However, most real-world networks are dynamic since their topology tends to change over time. Predicting the evolution of dynamic graphs is a task of high significance in the area of graph mining. Despite its practical importance, the task has not been explored in depth so far, mainly due to its challenging nature. In this paper, we propose a model that predicts the evolution of dynamic graphs. Specifically, we use a graph neural network along with a recurrent architecture to capture the temporal evolution patterns of dynamic graphs. Then, we employ a generative model which predicts the topology of the graph at the next time step and constructs a graph instance that corresponds to that topology. We evaluate the proposed model on several artificial datasets following common network evolving dynamics, as well as on real-world datasets. Results demonstrate the effectiveness of the proposed model.


1 Introduction

Graph neural networks (GNNs) have emerged in recent years as an effective tool for analyzing graph-structured data [29, 8, 37, 33]. These architectures bring the expressive power of deep learning into non-Euclidean data such as graphs, and have demonstrated convincing performance in several graph mining tasks, including graph classification [21], link prediction [36], and community detection [4, 5]. So far, GNNs have been mainly applied to tasks that involve static graphs. However, most real-world networks are dynamic, i.e., nodes and edges are added and removed over time. Despite the success of GNNs in various applications, it is still not clear if these models are useful for learning in dynamic scenarios. Although some models have been applied to this type of data, most studies have focused on predicting a low-dimensional representation (i.e., embedding) of the graph for the next time step [16, 17, 22, 9, 30, 27]. These representations can then be used in downstream tasks [16, 9, 20, 27]. However, predicting the topology of the graph (and not its low-dimensional representation) is a task that has not been properly addressed yet.

Graph generation, another important task in graph mining, has attracted a lot of attention from the deep learning community in recent years. The objective of this task is to generate graphs that exhibit specific properties, e.g., degree distribution, node triangle participation, community structure, etc. Traditionally, graphs are generated based on some network generation model such as the Erdős-Rényi model. These models focus on modeling one or more network properties, and neglect the others. Neural network approaches, on the other hand, can better capture the properties of graphs since they follow a supervised approach [35, 3, 10]. These architectures minimize a loss function such as the reconstruction error of the adjacency matrix or the value of a graph comparison algorithm.

Capitalizing on recent developments in neural networks for graph-structured data and graph generation, we propose in this paper, to the best of our knowledge, the first framework for predicting the evolution of the topology of networks in time. The proposed framework can be viewed as an encoder-predictor-decoder architecture. The “encoder” network takes a sequence of graphs as input and uses a GNN to produce a low-dimensional representation for each one of these graphs. These representations capture structural information about the input graphs. Then, the “predictor” network employs a recurrent architecture which predicts a representation for the future instance of the graph. The “decoder” network corresponds to a graph generation model which utilizes the predicted representation, and generates the topology of the graph for the next time step. The proposed model is evaluated over a series of experiments on synthetic and real-world datasets, and is compared against several baseline methods. To measure the effectiveness of the proposed model and the baselines, the generated graphs need to be compared with the ground-truth graph instances using some graph comparison algorithm. To this end, we use the Weisfeiler-Lehman subtree kernel which scales to very large graphs and has achieved state-of-the-art results on many graph datasets [31]. Results show that the proposed model yields good performance, and in most cases, outperforms the competing methods.

The rest of this paper is organized as follows. Section 2 provides an overview of the related work and elaborates our contribution. Section 3 introduces some preliminary concepts and definitions related to the graph generation problem, followed by a detailed presentation of the components of the proposed model. Section 4 evaluates the proposed model on several tasks. Finally, Section 5 concludes.

2 Related Work

Our work is related to random graph models. These models are very popular in graph theory and network science. The Erdős-Rényi model [7], the preferential attachment model [2], and the Kronecker graph model [14] are some typical examples of such models. To predict how a graph structure will evolve over time, the values of the parameters of these models can be estimated based on the corresponding values of the observed graph instances, and then the estimated values can be passed on to these models to generate graphs.

Other work along a similar direction includes neural network models which combine GNNs with RNNs [30, 19, 27]. These models use GNNs to extract features from a graph and RNNs for sequence learning from the extracted features. Other similar approaches do not use GNNs, but instead perform random walks or employ deep autoencoders [22, 9]. All these works focus on predicting how the node representations or the graph representations will evolve over time. However, some applications require predicting the topology of the graph, and not just its low-dimensional representation. The proposed model constitutes a first step towards this objective.

3 EvoNet: A Neural Network for Predicting Graph Evolution

In this Section, we first introduce basic concepts from graph theory and define our notation. We then present EvoNet, the proposed framework for predicting the evolution of graphs. Since the proposed model comprises several components, we describe each of them in detail.

3.1 Preliminaries

Let $G = (V, E)$ be an undirected, unweighted graph, where $V$ is the set of nodes and $E$ is the set of edges. We will denote by $n = |V|$ the number of vertices and by $m = |E|$ the number of edges. We define a permutation of the nodes of $G$ as a bijective function $\pi : V \rightarrow V$, under which any graph property of $G$ should be invariant. We are interested in the topology of a graph, which is described by its adjacency matrix $A^\pi \in \mathbb{R}^{n \times n}$ with a specific ordering $\pi$ of the nodes (for simplicity, the ordering $\pi$ will be omitted in what follows). Each entry of the adjacency matrix is defined as $A^\pi_{ij} = 1$ if $(\pi(i), \pi(j)) \in E$ and $A^\pi_{ij} = 0$ otherwise. In what follows, we consider the “topology”, “structure” and “adjacency matrix” of a graph equivalent to each other.
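
As a concrete illustration of the notation above, the following minimal NumPy sketch (not part of the original paper) builds the adjacency matrix of a small toy graph under two different node orderings and checks that a permutation-invariant property, the degree sequence, is unchanged.

```python
import numpy as np

# Edges of a small undirected graph on 4 nodes (toy example).
edges = [(0, 1), (1, 2), (2, 3)]
n = 4

def adjacency(edges, n, perm):
    """Adjacency matrix under the node ordering given by `perm`."""
    A = np.zeros((n, n), dtype=int)
    inv = {node: idx for idx, node in enumerate(perm)}
    for u, v in edges:
        i, j = inv[u], inv[v]
        A[i, j] = A[j, i] = 1
    return A

A1 = adjacency(edges, n, perm=[0, 1, 2, 3])
A2 = adjacency(edges, n, perm=[3, 1, 0, 2])  # a different ordering pi

# Graph properties such as the degree sequence are invariant to the ordering.
assert sorted(A1.sum(axis=0)) == sorted(A2.sum(axis=0))
```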

In many real-world networks, besides the adjacency matrix that encodes connectivity information, nodes and/or edges are annotated with feature vectors, which we denote by $x_v$ for a node $v \in V$ and $x_{(u,v)}$ for an edge $(u,v) \in E$, respectively. Hence, a graph object can also be written in the form of a triplet $G = (A, X^v, X^e)$, where $A$ is the adjacency matrix and $X^v$, $X^e$ collect the node and edge features. In this paper, we use this triplet to represent all graphs. If a graph does not contain node/edge attributes, we assign attributes to it based on local properties (e.g., degree, $k$-core number, number of triangles, etc.).

An evolving network is a graph whose topology changes as a function of time. Interestingly, almost all real-world networks evolve over time by adding and removing nodes and/or edges. For instance, in social networks, people make and lose friends over time, while there are people who join the network and others who leave the network. An evolving graph is a sequence of graphs $\{G_1, G_2, \ldots, G_T\}$, where $G_t = (V_t, E_t)$ represents the state of the evolving graph at time step $t$. It should be noted that not only nodes and edges can evolve over time, but also node and edge attributes. However, in this paper, we keep node and edge attributes fixed, and we allow only the node and edge sets of the graphs to change as a function of time. The sequence can thus be written as $\{G_t = (V_t, E_t)\}_{t=1}^{T}$, with fixed attributes attached to the nodes and edges. We are often interested in predicting what “comes next” in a sequence, based on data encountered in previous time steps. In our setting, this is equivalent to predicting $G_{T+1}$ based on the sequence $\{G_1, \ldots, G_T\}$. In sequential modeling, we usually do not take into account the whole sequence, but only those instances within a fixed small window of size $w$ before $G_{T+1}$, which we denote as $\{G_{T-w+1}, \ldots, G_T\}$. We refer to these instances as the graph history. The problem is then to predict the topology of $G_{T+1}$ given its history.
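
To make the notion of graph history concrete, here is a small, hypothetical Python sketch (names and window size are illustrative, not from the paper) that turns a sequence of graph snapshots into (history, target) training pairs.

```python
from typing import List, Sequence, Tuple

def history_pairs(graphs: Sequence, w: int) -> List[Tuple[list, object]]:
    """Build (history, target) pairs from a sequence of graph snapshots.

    Each pair contains the w graphs preceding time step t (the history)
    and the graph at time step t (the prediction target).
    """
    pairs = []
    for t in range(w, len(graphs)):
        history = list(graphs[t - w:t])   # G_{t-w}, ..., G_{t-1}
        target = graphs[t]                # G_t, the graph to predict
        pairs.append((history, target))
    return pairs

# Usage: with a list `snapshots` of graph objects and a window of size 5,
# pairs = history_pairs(snapshots, w=5)
```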

3.2 Proposed Architecture

The proposed architecture is very similar to a typical sequence learning framework. The main difference lies in the fact that instead of vectors, in our setting, the elements of the sequence correspond to graphs. The combinatorial nature of graph-structured data increases the complexity of the problem and calls for more sophisticated architectures than the ones employed in traditional sequence learning tasks. Specifically, the proposed model consists of three components: (1) a graph neural network (GNN) which generates a vector representation for each graph instance, (2) a recurrent neural network (RNN) for sequential learning, and (3) a graph generation model for predicting the graph topology at the next time step. This framework can also be viewed as an encoder-predictor-decoder model. The first two components correspond to an encoder network which maps the sequence of graphs into a sequence of vectors, and a predictor network which predicts a representation for the next graph in the sequence. The decoder network consists of the last component of the model, and transforms the above representation into a graph. Figure 1 illustrates the proposed model. In what follows, we present the above three components of EvoNet.

Figure 1: Illustration of the proposed architecture

3.2.1 Encoding Graphs using Graph Neural Networks

Graph Neural Networks (GNNs) have recently emerged as a dominant paradigm for performing machine learning tasks on graphs. Several GNN variants have been proposed in the past years. All these models employ some message passing procedure to update node representations. Specifically, each node updates its representation by aggregating the representations of its neighbors. After $K$ iterations of the message passing procedure, each node obtains a feature vector which captures the structural information within its $K$-hop neighborhood. Then, GNNs compute a feature vector for the entire graph using some permutation invariant readout function, such as summing the representations of all the nodes of the graph. As described below, the learning process can be divided into three phases: (1) aggregation, (2) update, and (3) readout.

Aggregation.

In this phase, the network computes a message for each node of the graph. To compute that message for a node $v$, the network aggregates the representations of its neighbors $\mathcal{N}(v)$. Formally, at iteration $k$, a message vector $m_v^{(k)}$ is computed from the representations of the neighbors of $v$:

$m_v^{(k)} = \mathrm{AGGREGATE}\big(\{ h_u^{(k-1)} : u \in \mathcal{N}(v) \}\big)$  (1)

where AGGREGATE is a permutation invariant function. Furthermore, for the network to be end-to-end trainable, this function needs to be differentiable. In our implementation, AGGREGATE is a multi-layer perceptron (MLP) followed by a sum function.
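
The following PyTorch sketch (an illustrative reimplementation, not the authors' code) shows one way to realize Equation (1): an MLP applied to each neighbor representation, followed by a sum over the neighborhood.

```python
import torch
import torch.nn as nn

class SumAggregate(nn.Module):
    """AGGREGATE of Eq. (1): MLP on each node state, then a sum over neighbors."""

    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h:   (n, dim) node representations h_u^{(k-1)}
        # adj: (n, n) binary adjacency matrix of the graph
        msgs = self.mlp(h)          # transform every node representation
        return adj @ msgs           # sum the messages over each node's neighbors

# Usage (toy): adj = torch.tensor([[0., 1.], [1., 0.]])
#              m = SumAggregate(16)(torch.randn(2, 16), adj)
```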

Update.

The new representation $h_v^{(k)}$ of $v$ is then computed by combining its current feature vector $h_v^{(k-1)}$ with the message vector $m_v^{(k)}$:

$h_v^{(k)} = \mathrm{UPDATE}\big(h_v^{(k-1)}, m_v^{(k)}\big)$  (2)

The UPDATE function also needs to be differentiable. To combine the two feature vectors (i.e., $h_v^{(k-1)}$ and $m_v^{(k)}$), we employ the Gated Recurrent Unit proposed in [18]:

$h_v^{(k)} = \mathrm{GRU}\big(h_v^{(k-1)}, m_v^{(k)}\big)$  (3)

Omitting biases for readability, we have:

$z_v = \sigma\big(W_z\, m_v^{(k)} + U_z\, h_v^{(k-1)}\big), \quad r_v = \sigma\big(W_r\, m_v^{(k)} + U_r\, h_v^{(k-1)}\big),$
$\tilde{h}_v = \tanh\big(W\, m_v^{(k)} + U\,(r_v \odot h_v^{(k-1)})\big), \quad h_v^{(k)} = (1 - z_v) \odot h_v^{(k-1)} + z_v \odot \tilde{h}_v$  (4)

where the $W$ and $U$ matrices are trainable weight matrices, $\sigma$ is the sigmoid function, and $r_v$ and $z_v$ correspond to the reset and update gates for a given node.
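
A compact sketch of the update step of Equations (2)–(4), using PyTorch's GRUCell so that the message $m_v^{(k)}$ plays the role of the input and $h_v^{(k-1)}$ the role of the previous hidden state (again an illustration under our assumptions, not the authors' code).

```python
import torch
import torch.nn as nn

class GRUUpdate(nn.Module):
    """UPDATE of Eqs. (2)-(4): combine h_v^{(k-1)} with the message m_v^{(k)}."""

    def __init__(self, dim: int):
        super().__init__()
        # GRUCell implements the gated combination of Eq. (4)
        # (reset/update gates, candidate state) internally.
        self.cell = nn.GRUCell(input_size=dim, hidden_size=dim)

    def forward(self, h_prev: torch.Tensor, msg: torch.Tensor) -> torch.Tensor:
        # h_prev, msg: (n, dim) tensors, one row per node.
        return self.cell(msg, h_prev)   # new node states h_v^{(k)}
```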

Readout.

The Aggregation and Update steps are repeated for $K$ iterations. The emerging node representations $h_v^{(K)}$ are aggregated into a single vector $h_G$ which corresponds to the representation of the entire graph, as follows:

$h_G = \mathrm{READOUT}\big(\{ h_v^{(K)} : v \in V \}\big)$  (5)

where READOUT is a differentiable and permutation invariant function. This vector captures the topology of the input graph. To generate $h_G$, we utilize Set2Set [32]. Other functions such as the sum function were also considered, but were found less effective in preliminary experiments.
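
The paper uses Set2Set [32] as the READOUT of Equation (5); the sketch below shows the simpler sum readout mentioned as an alternative, which is permutation invariant and easy to drop into the encoder (illustrative code, not the original implementation).

```python
import torch

def sum_readout(node_states: torch.Tensor) -> torch.Tensor:
    """READOUT of Eq. (5): a permutation-invariant sum over the final
    node representations h_v^{(K)}, yielding the graph embedding h_G."""
    return node_states.sum(dim=0)

# Usage: h_G = sum_readout(h_final)  # h_final has shape (n, dim)
```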

3.2.2 Predicting Graph Representations using Recurrent Neural Networks

Given an input sequence of graphs, we use the GNN described above to generate a vector representation for each graph in the sequence. Then, to process this sequence, we use a recurrent neural network (RNN). RNNs use their internal state (i.e., memory) to preserve sequential information. These networks exhibit temporal dynamic behavior, and can find correlations between sequential events. Specifically, an RNN processes the input sequence in a series of time steps (i.e., one for each element in the sequence). For a given time step $t$, the hidden state $h_t$ at that time step is updated as:

$h_t = f\big(h_{t-1}, x_t\big)$  (6)

where $f$ is a non-linear activation function. A generative RNN outputs a probability distribution over the next element $x_{t+1}$ of the sequence given its current state $h_t$. RNNs can be trained to predict the next element (e.g., graph) in the sequence, i.e., they can learn the conditional distribution $p(x_{t+1} \mid x_1, \ldots, x_t)$. In our implementation, we use a Long Short-Term Memory (LSTM) network that reads sequentially the vectors $h_{G_{T-w+1}}, \ldots, h_{G_T}$ produced by the GNN, and generates a vector $h_{G_{T+1}}$ that represents the embedding of $G_{T+1}$. The embedding incorporates topological information and will serve as input to the graph generation module. The GNN component presented above can be seen as a form of an encoder network: it takes as input a sequence of graphs and projects them into a low-dimensional space. The RNN component then takes the sequence of graph representations as input and predicts the representation of the graph at the next time step.
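
A minimal sketch of the predictor: an LSTM reads the sequence of graph embeddings produced by the GNN, and its last output is projected to the predicted embedding of the next graph (illustrative PyTorch code under our assumptions; dimensions and names are not from the paper).

```python
import torch
import torch.nn as nn

class EmbeddingPredictor(nn.Module):
    """Reads h_{G_{T-w+1}}, ..., h_{G_T} and predicts h_{G_{T+1}} (cf. Eq. (6))."""

    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=dim, hidden_size=hidden, batch_first=True)
        self.proj = nn.Linear(hidden, dim)

    def forward(self, graph_embeddings: torch.Tensor) -> torch.Tensor:
        # graph_embeddings: (batch, w, dim), one embedding per graph in the history.
        out, _ = self.lstm(graph_embeddings)
        return self.proj(out[:, -1, :])   # predicted embedding of the next graph

# Usage: pred = EmbeddingPredictor(dim=64)(torch.randn(2, 5, 64))  # shape (2, 64)
```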

3.2.3 Graph Generation

To generate a graph that corresponds to the evolution of the current graph instance, we capitalize on a recently-proposed framework for learning generative models of graphs [35]. This framework models a graph in an autoregressive manner (i.e., as a sequence of additions of new nodes and edges) to capture the complex joint probability of all nodes and edges in the graph. Formally, given a node ordering $\pi$, it considers a graph $G$ as a sequence of vectors:

$S^\pi = \big(S_1^\pi, S_2^\pi, \ldots, S_n^\pi\big)$  (7)

where $S_i^\pi \in \{0, 1\}^{i-1}$ is the adjacency vector between node $\pi(i)$ and the nodes preceding it, $\{\pi(1), \ldots, \pi(i-1)\}$. We adapt this framework to our supervised setting.

The objective of the generative model is to maximize the likelihood of the observed graphs of the training set. Since a graph can be expressed as a sequence of adjacency vectors $S^\pi$ (given a node ordering $\pi$), we can consider instead the distribution $p(S^\pi)$, which can be decomposed in an autoregressive manner into the following product:

$p(S^\pi) = \prod_{i=1}^{n} p\big(S_i^\pi \mid S_1^\pi, \ldots, S_{i-1}^\pi\big)$  (8)

This product can be parameterized by a neural network. Specifically, following [35], we use a hierarchical RNN consisting of two levels: (1) the graph-level RNN which maintains the state of the graph and generates new nodes, and thus learns the distribution $p(S_i^\pi \mid S_{<i}^\pi)$, and (2) the edge-level RNN which generates the links between each newly-generated node and the previously-generated nodes, and thus learns the distribution $p(S_{i,j}^\pi \mid S_{i,<j}^\pi)$. More formally, we have:

$h_i = f_{\mathrm{trans}}\big(h_{i-1}, S_{i-1}^\pi\big), \qquad \theta_i = f_{\mathrm{out}}\big(h_i\big)$  (9)

where $h_i$ is the state vector of the graph-level RNN (i.e., $f_{\mathrm{trans}}$) that encodes the current state of the graph sequence and is initialized by $h_{G_{T+1}}$, the predicted embedding of the graph at the next time step $T+1$. The output of the graph-level RNN corresponds to the initial state of the edge-level RNN (i.e., $f_{\mathrm{out}}$). Each entry of the resulting value $\theta_i$ is then squashed by a sigmoid function to produce the probability of existence of the corresponding edge. In other words, the model learns the probability distribution of the existence of edges, and a graph can then be sampled from this distribution, which will serve as the predicted topology for the next time step $T+1$.
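
The following sketch illustrates the two-level decoding loop of Equations (7)–(9): a graph-level GRU, initialized with the predicted embedding, emits a state for each new node, and an edge-level GRU turns that state into edge probabilities towards the previously generated nodes. This is a simplified, hypothetical rendering of the GraphRNN-style decoder [35], not the authors' implementation.

```python
import torch
import torch.nn as nn

class GraphDecoder(nn.Module):
    """Simplified two-level RNN decoder in the spirit of Eqs. (7)-(9)."""

    def __init__(self, dim: int, max_prev: int = 32):
        super().__init__()
        self.max_prev = max_prev                   # max #previous nodes fed back
        self.node_rnn = nn.GRUCell(max_prev, dim)  # graph-level RNN (f_trans)
        self.edge_rnn = nn.GRUCell(1, dim)         # edge-level RNN (f_out)
        self.edge_out = nn.Linear(dim, 1)

    def forward(self, h_graph: torch.Tensor, n_nodes: int) -> torch.Tensor:
        """h_graph: predicted embedding h_{G_{T+1}}, shape (dim,).
        Returns an (n_nodes, n_nodes) matrix of edge probabilities."""
        probs = torch.zeros(n_nodes, n_nodes)
        h = h_graph.unsqueeze(0)                   # init graph-level state
        s_prev = torch.zeros(1, self.max_prev)     # previous adjacency vector S_{i-1}
        for i in range(1, n_nodes):
            h = self.node_rnn(s_prev, h)           # state for node i
            e = h.clone()                          # init edge-level state with it
            last_edge = torch.zeros(1, 1)
            for j in range(i):                     # edges towards nodes 0..i-1
                e = self.edge_rnn(last_edge, e)
                p = torch.sigmoid(self.edge_out(e))
                probs[i, j] = probs[j, i] = p.squeeze()
                last_edge = p
            s_prev = torch.zeros(1, self.max_prev)
            k = min(i, self.max_prev)
            s_prev[0, :k] = probs[i, :k].detach()  # feed S_i back (truncated)
        return probs
```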

To train the model, the cross-entropy loss between the existence of each edge and its predicted probability of existence is minimized:

$\mathcal{L} = -\sum_{i=2}^{n} \sum_{j=1}^{i-1} \Big[ S_{i,j}^\pi \log p_{i,j} + \big(1 - S_{i,j}^\pi\big) \log\big(1 - p_{i,j}\big) \Big]$  (10)
Node ordering.

It should be mentioned that node ordering has a large impact on the efficiency of the above generative model. Note that a good ordering can help us avoid the exploration of all possible node permutations in the sample space. Different strategies, such as the Breadth-First-Search ordering scheme, can be employed to improve scalability [35]. However, in our setting, the nodes are distinguishable, i.e., node $v$ of $G_t$ and node $v$ of $G_{t+1}$ correspond to the same entity. Hence, we can impose an ordering onto the nodes of the first instance of our sequence of graphs, and then utilize the same node ordering for the graphs of all subsequent time steps (we place new nodes at the end of the ordering).
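
Since nodes are distinguishable across snapshots, a fixed ordering can be maintained as sketched below: indices are assigned from the first snapshot and new nodes are appended at the end (a hypothetical helper, not from the paper).

```python
def consistent_ordering(snapshots):
    """Assign each node a fixed index: nodes of the first snapshot keep their
    initial order, and nodes appearing later are appended at the end."""
    order = {}
    for graph_nodes in snapshots:          # each element: iterable of node ids
        for node in graph_nodes:
            if node not in order:
                order[node] = len(order)
    return order

# Usage: order = consistent_ordering([g.nodes() for g in graph_sequence])
# order[v] is then used to index row/column v in every adjacency matrix.
```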

4 Experiments and Results

In this Section, we evaluate the performance of EvoNet on synthetic and real-world datasets for predicting the evolution of graph topology, and we compare it against several baseline methods.

4.1 Datasets

We use both synthetic and real-world datasets. The synthetic datasets consist of sequences of graphs where there is a specific pattern on how each graph emerges from the previous graph instance, i.e., some graph structure is added or removed at each time step. The real-world datasets correspond to single graphs whose edges incorporate temporal information. We decompose these graphs into sequences of snapshots based on their timestamps, as sketched below. The size of the graphs in each sequence ranges from tens of nodes to several thousand nodes.
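
For the real-world datasets, a timestamped edge list can be cut into snapshots as in the following sketch (the column names and window length are illustrative assumptions, not taken from the paper).

```python
import networkx as nx
import pandas as pd

def to_snapshots(edges: pd.DataFrame, freq: str = "7D"):
    """Split a timestamped edge list into a sequence of graph snapshots.

    `edges` is assumed to have columns 'src', 'dst' and 'time' (datetime).
    Each snapshot contains the edges observed within one time window.
    """
    snapshots = []
    for _, chunk in edges.groupby(pd.Grouper(key="time", freq=freq)):
        g = nx.Graph()
        g.add_edges_from(zip(chunk["src"], chunk["dst"]))
        snapshots.append(g)
    return snapshots

# Usage: snaps = to_snapshots(pd.read_csv("edges.csv", parse_dates=["time"]))
```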

Path graph.

A path graph can be drawn such that all of its vertices and edges lie on a straight line. We denote a path graph on $n$ nodes by $P_n$. In other words, the path graph $P_n$ is a tree with two nodes of degree 1, and the other $n-2$ nodes of degree 2. We consider two scenarios. In both cases, the first graph in the sequence is a small path graph. In the first scenario, at each time step, we add one new node to the previous graph instance, and we also add an edge between the new node and the last node according to the previous ordering. The second scenario follows the same pattern; however, every three steps, instead of adding a new node, we remove the first node according to the previous ordering (along with its edge).

Cycle graph.

A cycle graph $C_n$ is a graph on $n$ nodes containing a single cycle through all the nodes. Note that if we add an edge between the first and the last node of the path graph $P_n$, we obtain the cycle graph $C_n$. Similar to the above case, we use a small cycle graph as the first graph in the sequence, and we again consider two scenarios. In the first scenario, at each time step, we increase the size of the cycle, i.e., from $C_n$, we obtain $C_{n+1}$ by adding a new node and two edges: the first between the new node and the first node according to the previous ordering, and the second between the new node and the last node according to the previous ordering. In the second scenario, every three steps, we remove the first node according to the ordering (along with its edges), and we add an edge between the second and the last nodes according to the ordering.

Ladder graph.

The ladder graph $L_n$ is a planar graph with $2n$ vertices and $3n-2$ edges. It is the cartesian product of two path graphs: $L_n = P_n \times P_2$. As the name indicates, the ladder graph can be drawn as a ladder consisting of two rails and $n$ rungs between them. We consider the following scenario: at each time step, we attach one rung ($P_2$) to the tail of the ladder (the two nodes of the new rung are connected to the two last nodes according to the ordering).

For all graphs, we set the attribute of each node equal to its degree, while we set the attribute of all edges to the same constant value.
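
The synthetic sequences described above can be generated with networkx, as in this short sketch for the first path-graph scenario (illustrative; the exact initial size and sequence length used in the paper are not specified here).

```python
import networkx as nx

def growing_path_sequence(n_steps: int, start_size: int = 3):
    """First path-graph scenario: at each step, append one node and connect
    it to the last node of the previous snapshot."""
    graphs = [nx.path_graph(start_size)]
    for _ in range(n_steps):
        g = graphs[-1].copy()
        new_node = g.number_of_nodes()
        g.add_edge(new_node - 1, new_node)     # attach the new node to the tail
        graphs.append(g)
    # Node attribute = degree, as described above.
    for g in graphs:
        nx.set_node_attributes(g, dict(g.degree()), "attr")
    return graphs

# Usage: seq = growing_path_sequence(50)
```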

4.1.1 Real-World Datasets

Besides synthetic datasets, we also evaluate EvoNet on six real-world datasets (all datasets are publicly available through the websites of [15] and [28]). They can be divided into three groups based on the nature of their sources.

Bitcoin transaction networks.

Contains graphs derived from the Bitcoin transaction network, a who-trusts-whom network of people who trade using Bitcoin [13, 12]. Due to the anonymity of Bitcoin users, platforms seek to maintain a record of users’ reputation in Bitcoin trades to avoid fraudulent transactions. The nodes of the network represent Bitcoin users, while an edge indicates that a trade has been executed between its two endpoint users. Each edge is annotated with an integer between -10 and 10, which indicates the rating given by one user to the other. The datasets are collected separately from two platforms: Bitcoin OTC and Bitcoin Alpha. For all graphs in these two datasets, we set the attribute of each node equal to the average rating that the user has received from the rest of the community, and the attribute of each edge equal to the rating between its two endpoint users.

Social networks.

Contains graphs generated from an online social network at the University of California, Irvine [24, 25]. It comprises two datasets: one is derived from the private message exchanges between users; the other is based on the same user community, but focuses on their activity in the forum, i.e., public comments on specific topics. The nodes of the networks represent users, and the edges represent a message exchange or a shared interest (in a topic). All graphs in these two datasets are unweighted and unlabeled, thus we simply set the attribute of each node equal to its degree.

Email exchange networks.

Contains two datasets derived from two sources. The first is generated using email data from a large European research institution [26], i.e., all incoming and outgoing email between members of the institution. The second is collected from the 2016 Democratic National Committee (DNC) email leak [28], where the links denote email exchanges between DNC members. Similar to the social network datasets, the graphs in these two datasets are also unweighted and unlabeled, thus we treat them in the same way.

More details about these datasets are given in Table 1.

Dataset       Timespan (Begin)   Timespan (End)
BTC-OTC       2010-11-08         2016-01-25
BTC-Alpha     2010-11-08         2016-01-22
UCI-Forum     2004-05-15         2004-10-26
UCI-Message   2004-04-15         2004-10-26
EU-Core       1970-01-01         1972-03-14
DNC           2013-09-16         2016-05-25
Table 1: Statistics of the 6 real-world datasets used in our experiments.

4.2 Baselines

We compare EvoNet against several random graph models: (1) the Erdős-Rényi model [7], (2) the Stochastic Block model [11, 1], (3) the Barabási–Albert model [2], and (4) the Kronecker Graph model [14]. These models are traditional methods for studying the topology evolution of temporal graphs, each proposing a driving mechanism behind the evolution. More precisely, these models begin with an initial graph and a rule for connecting newly emerging nodes to existing ones, and then gradually grow the initial graph to the expected size by following this rule. For instance, in the Barabási–Albert model, we begin with a triangle and follow the preferential attachment rule, in which the probability of adding an edge between a newly added node and an existing node is proportional to the current degree of the existing node.
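
As an illustration of how such baselines produce a prediction, the sketch below grows Erdős–Rényi and Barabási–Albert graphs to a target size with networkx; in practice the target size and edge probability would be estimated from the previous snapshots (the estimation step is omitted here).

```python
import networkx as nx

def erdos_renyi_prediction(n_nodes: int, density: float) -> nx.Graph:
    """ER baseline: edge probability chosen to match an estimated density."""
    return nx.gnp_random_graph(n_nodes, density)

def barabasi_albert_prediction(n_nodes: int, m: int = 2) -> nx.Graph:
    """BA baseline: preferential attachment, each new node brings m edges."""
    return nx.barabasi_albert_graph(n_nodes, m)

# Usage: g_pred = erdos_renyi_prediction(n_nodes=120, density=0.05)
```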

4.3 Evaluation Metric and Evaluation Setup

4.3.1 Synthetic Datasets

Figure 2: Results on the synthetic datasets. Comparison of graph size (path, ladder and cycle graphs from left to right): predicted size (blue) vs. real size (orange).

In general, it is very challenging to measure the performance of a graph generative model since it requires comparing two graphs to each other, a long-standing problem in mathematics and computer science [6]. We propose to use graph kernels to compare graphs to each other, and thus to evaluate the quality of the generated graphs. Graph kernels have emerged as one of the most effective tools for graph comparison in recent years [23]. A graph kernel is a symmetric positive semidefinite function which takes two graphs as input and measures their similarity. In our experiments, we employ the Weisfeiler-Lehman subtree kernel, which counts label-based subtree patterns [31]. Note that we also normalize the kernel values, and thus the emerging values lie between 0 and 1.
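
In practice, the evaluation can be carried out with an off-the-shelf implementation of the Weisfeiler-Lehman subtree kernel; the sketch below uses the GraKeL library (an assumption on our part; the paper does not state which implementation was used), with normalization enabled so that similarities lie in [0, 1].

```python
import networkx as nx
from grakel.utils import graph_from_networkx
from grakel.kernels import WeisfeilerLehman, VertexHistogram

def wl_similarity(g_true: nx.Graph, g_pred: nx.Graph, n_iter: int = 4) -> float:
    """Normalized WL subtree kernel value between a ground-truth graph
    and a predicted graph (node degree used as the node label)."""
    for g in (g_true, g_pred):
        nx.set_node_attributes(g, dict(g.degree()), "label")
    graphs = list(graph_from_networkx([g_true, g_pred], node_labels_tag="label"))
    wl = WeisfeilerLehman(n_iter=n_iter, base_graph_kernel=VertexHistogram,
                          normalize=True)
    K = wl.fit_transform(graphs)
    return float(K[0, 1])
```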

As previously mentioned, each dataset corresponds to a sequence of graphs, where each sequence represents the evolution of the topology of a single graph over time. We use the first part of each sequence for training, while the remaining graph instances serve as our test set. Given the window size $w$, we feed $w$ consecutive graph instances to the model and predict the topology of the instance that directly follows the last of these input instances. Each graph of the test set, along with its corresponding predicted graph, is then passed on to the Weisfeiler-Lehman subtree kernel, which measures their similarity and thus the performance of the model.

The hyperparameters of EvoNet are chosen based on its performance on a validation set. The parameters of the random graph models are set under the principle that the generated graphs need to share similar properties with the ground-truth graphs. For instance, in the case of the Erdős-Rényi model, the probability of adding an edge between two nodes is set to some value such that the density of the generated graph is identical to that of the ground-truth graph. However, since the model should not have access to such information (e.g., the density of the ground-truth graph), we use an MLP to predict this property based on past data (i.e., the number of nodes and edges of the previous graph instances). This is on par with how the proposed model computes the size of the graphs to be generated (i.e., also using an MLP).
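
One simple way to predict such a property from past data, consistent with the description above, is a small MLP regressor over the node/edge counts of the previous snapshots; this sketch uses scikit-learn and is an illustration of the idea rather than the authors' exact setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_size_predictor(sizes, w: int = 5) -> MLPRegressor:
    """Fit an MLP that predicts the next graph size from the w previous sizes.

    `sizes` is a list of (n_nodes, n_edges) pairs, one per training snapshot.
    """
    X, y = [], []
    arr = np.asarray(sizes, dtype=float)
    for t in range(w, len(arr)):
        X.append(arr[t - w:t].ravel())   # sizes of the w previous snapshots
        y.append(arr[t])                 # size of the snapshot to predict
    model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000)
    model.fit(np.array(X), np.array(y))
    return model

# Usage: model.predict(recent_sizes.ravel()[None, :]) gives the expected node
# and edge counts, from which e.g. an Erdos-Renyi edge probability follows.
```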

4.4 Results

Figure 3: 2D projection of the dynamic embeddings learned from datasets with different structures or different dynamics.
Figure 4: Similarity histograms on the BTC-OTC (left) and UCI-Message (right) datasets. The blue histogram corresponds to EvoNet, which is compared against the random graph models.
Model       BTC-OTC       BTC-Alpha     UCI-Forum     UCI-Mesg      EU-Core       DNC
            Mean   %ile   Mean   %ile   Mean   %ile   Mean   %ile   Mean   %ile   Mean   %ile
ER
SBM
BA
Power
Kron-Rand
Kron-Fix
EvoNet
Table 2: Statistics of the similarity distributions of the different models. ER stands for the Erdős–Rényi model, SBM for the Stochastic Block Model, and BA for the Barabási–Albert model. Power is another model, similar to Barabási–Albert, that grows graphs with a power-law degree distribution. Kron-Rand denotes the Kronecker Graph model with learnable parameters, while Kron-Fix denotes the Kronecker Graph model with fixed parameters.

We next present the experimental results and compare the performance of EvoNet against that of the baselines.

Synthetic datasets.

Figure 2 illustrates the experimental results on the synthetic datasets. Since the graph structures contained in the synthetic datasets are fairly simple, it is easy for the model to generate graphs very similar to the ground-truth graphs (high normalized kernel values). Hence, instead of reporting the kernel values, we compare the size of the predicted graphs against that of the ground-truth graphs. The figures visualize the increase of graph size on the real sequence (orange) and the predicted sequence (blue). For path graphs, in spite of a small variance, the prediction of the graph size is accurate. For ladder graphs, we observe a mismatch at the beginning of the sequence for small graphs, but the two lines then coincide for larger graphs. This mismatch on small graphs may be due to the more complex structures present in ladder graphs, such as cycles. This is supported by the results on cycle graphs (right figure), where the size of the predicted graphs deviates considerably from that of the ground-truth graphs. In fact, the model fails to reconstruct the cycle structure, with all the predicted graphs being path graphs. This failure could be related to the limitations of GNNs discussed in [34].

Dynamic graph embedding.

It is also important to check whether, in our encoder-predictor-decoder framework, the learned code, which we refer to as the “dynamic graph embedding”, is really meaningful; that is, whether the embedding captures both the structural features of the graph class and the temporal evolution of the sequence, so that it can be used to predict the graph at the future time step. We design two experiments to verify the effectiveness of our embeddings, with the help of synthetic graphs. In the first experiment, we take as input two sequences of graphs belonging to the same class but following different evolution dynamics; specifically, we use the path graphs and the path graphs with removal. In the second experiment, we fix the evolution dynamics and vary the structure of the graphs, using path graphs and ladder graphs that both follow the same evolution of increasing size. The dynamic graph embeddings learned in these experiments are recorded and visualized in Figure 3. Each point represents the projection of the embedding of a graph in the sequence into a 2-dimensional space by Principal Component Analysis (PCA). As we can see from the figure, embeddings learned from different datasets, either with different dynamics or with different structures, are well separated, which suggests that the embeddings are meaningful. Moreover, embeddings from the same dataset form specific patterns, such as lines in the space, which suggests a temporal dependency between them, as they are learned from sequential data.
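
The 2D visualization of Figure 3 can be reproduced along the following lines with scikit-learn (an illustrative sketch; `embeddings` is assumed to be the matrix of dynamic graph embeddings, one row per snapshot).

```python
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

def plot_embeddings(embeddings: np.ndarray, labels) -> None:
    """Project dynamic graph embeddings to 2D with PCA and plot them,
    colored by the dataset (or dynamics) they were learned from."""
    coords = PCA(n_components=2).fit_transform(embeddings)
    for label in set(labels):
        idx = [i for i, l in enumerate(labels) if l == label]
        plt.scatter(coords[idx, 0], coords[idx, 1], label=str(label), s=15)
    plt.legend()
    plt.show()
```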

Real-World datasets.

Finally, we analyze the performance of our model on the six real-world datasets. We compute the similarity between each pair of real and predicted graphs in the sequence and draw a histogram to illustrate the distribution of similarities. Due to the page limit, we only show the histogram plots for the BTC-OTC and UCI-Message datasets in Figure 4. Among the traditional random graph models, the Kronecker graph model (with learnable parameters) performs best; however, on both datasets, the proposed EvoNet (in blue) outperforms all other methods by a large margin. Detailed statistics and results for the other datasets can be found in Table 2, which shows that the proposed model performs consistently better than the traditional methods (interested readers are referred to the supplementary material for a full illustration of the results on all datasets).

Overall, despite its failure to capture some of the specific structures present in the synthetic datasets, our experiments demonstrate the advantage of EvoNet over traditional random graph models in predicting the evolution of dynamic graphs, especially on real-world data with complex structures.

5 Conclusion

In this paper, we proposed EvoNet, a model that predicts the evolution of dynamic graphs following an encoder-predictor-decoder framework. The proposed model consists of three components: (1) a graph neural network which transforms graphs into vectors, (2) a recurrent architecture which reads the input sequence of graph embeddings and predicts the embedding of the graph at the next time step, and (3) a graph generation model which takes this embedding as input and predicts the topology of the graph. We also proposed an evaluation methodology for this task which capitalizes on the well-established family of graph kernels. We applied this methodology to demonstrate the predictive power of EvoNet. Experiments show that the proposed model outperforms traditional random graph methods on both synthetic and real-world datasets. We should note that there is still room for improvement. Improving the efficiency of the proposed model and its scalability to large graphs are potential directions for future work.

References

  • E. M. Airoldi, D. M. Blei, S. E. Fienberg, and E. P. Xing (2008) Mixed membership stochastic blockmodels. Journal of machine learning research 9 (Sep), pp. 1981–2014. Cited by: §4.2.
  • R. Albert and A. Barabási (2002) Statistical mechanics of complex networks. Reviews of modern physics 74 (1), pp. 47. Cited by: §2, §4.2.
  • A. Bojchevski, O. Shchur, D. Zügner, and S. Günnemann (2018) Netgan: generating graphs via random walks. arXiv preprint arXiv:1803.00816. Cited by: §1.
  • J. Bruna and X. Li (2017) Community detection with graph neural networks. stat 1050, pp. 27. Cited by: §1.
  • Z. Chen, X. Li, and J. Bruna (2017) Supervised community detection with line graph neural networks. arXiv preprint arXiv:1705.08415. Cited by: §1.
  • D. Conte, P. Foggia, C. Sansone, and M. Vento (2004)

    Thirty years of graph matching in pattern recognition

    .

    International Journal of Pattern Recognition and Artificial Intelligence

    18 (03), pp. 265–298.
    Cited by: §4.3.1.
  • P. Erdős and A. Rényi (1960) On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci 5 (1), pp. 17–60. Cited by: §2, §4.2.
  • J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl (2017) Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1263–1272. Cited by: §1.
  • P. Goyal, N. Kamra, X. He, and Y. Liu (2018) Dyngem: deep embedding method for dynamic graphs. arXiv preprint arXiv:1805.11273. Cited by: §1, §2.
  • A. Grover, A. Zweig, and S. Ermon (2018) Graphite: iterative generative modeling of graphs. arXiv preprint arXiv:1803.10459. Cited by: §1.
  • P. W. Holland, K. B. Laskey, and S. Leinhardt (1983) Stochastic blockmodels: first steps. Social networks 5 (2), pp. 109–137. Cited by: §4.2.
  • S. Kumar, B. Hooi, D. Makhija, M. Kumar, C. Faloutsos, and V. Subrahmanian (2018) Rev2: fraudulent user prediction in rating platforms. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pp. 333–341. Cited by: §4.1.1.
  • S. Kumar, F. Spezzano, V. Subrahmanian, and C. Faloutsos (2016) Edge weight prediction in weighted signed networks. In 2016 IEEE 16th International Conference on Data Mining (ICDM), pp. 221–230. Cited by: §4.1.1.
  • J. Leskovec, D. Chakrabarti, J. Kleinberg, C. Faloutsos, and Z. Ghahramani (2010) Kronecker graphs: an approach to modeling networks. Journal of Machine Learning Research 11 (Feb), pp. 985–1042. Cited by: §2, §4.2.
  • J. Leskovec and A. Krevl (2014) SNAP Datasets: Stanford large network dataset collection. Note: http://snap.stanford.edu/data Cited by: footnote 2.
  • C. Li, X. Guo, and Q. Mei (2016) Deepgraph: graph structure predicts network growth. arXiv preprint arXiv:1610.06251. Cited by: §1.
  • J. Li, H. Dani, X. Hu, J. Tang, Y. Chang, and H. Liu (2017) Attributed network embedding for learning in a dynamic environment. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pp. 387–396. Cited by: §1.
  • Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel (2015) Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493. Cited by: §3.2.1.
  • F. Manessi, A. Rozza, and M. Manzo (2017) Dynamic graph convolutional networks. arXiv preprint arXiv:1704.06199. Cited by: §2.
  • C. Meng, S. C. Mouli, B. Ribeiro, and J. Neville (2018) Subgraph pattern neural networks for high-order graph evolution prediction. In Thirty-Second AAAI Conference on Artificial Intelligence, Cited by: §1.
  • C. Morris, M. Ritzert, M. Fey, W. L. Hamilton, J. E. Lenssen, G. Rattan, and M. Grohe (2019) Weisfeiler and leman go neural: higher-order graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 4602–4609. Cited by: §1.
  • G. H. Nguyen, J. B. Lee, R. A. Rossi, N. K. Ahmed, E. Koh, and S. Kim (2018) Continuous-time dynamic network embeddings. In Companion Proceedings of the The Web Conference 2018, pp. 969–976. Cited by: §1, §2.
  • G. Nikolentzos, G. Siglidis, and M. Vazirgiannis (2019) Graph Kernels: A Survey. arXiv preprint arXiv:1904.12218. Cited by: §4.3.1.
  • T. Opsahl and P. Panzarasa (2009) Clustering in weighted networks. Social Networks 31, pp. 155–163. Cited by: §4.1.1.
  • T. Opsahl (2013) Triadic closure in two-mode networks: redefining the global and local clustering coefficients. Social Networks 35 (2), pp. 159 – 167. External Links: ISSN 0378-8733 Cited by: §4.1.1.
  • A. Paranjape, A. R. Benson, and J. Leskovec (2017) Motifs in temporal networks. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, WSDM ’17, New York, NY, USA, pp. 601–610. External Links: ISBN 9781450346757, Link, Document Cited by: §4.1.1.
  • A. Pareja, G. Domeniconi, J. Chen, T. Ma, T. Suzumura, H. Kanezashi, T. Kaler, and C. E. Leisersen (2019) Evolvegcn: evolving graph convolutional networks for dynamic graphs. arXiv preprint arXiv:1902.10191. Cited by: §1, §2.
  • R. A. Rossi and N. K. Ahmed (2015) The network data repository with interactive graph analytics and visualization. In AAAI, External Links: Link Cited by: §4.1.1, footnote 2.
  • F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini (2008) The Graph Neural Network Model. IEEE Transactions on Neural Networks 20 (1), pp. 61–80. Cited by: §1.
  • Y. Seo, M. Defferrard, P. Vandergheynst, and X. Bresson (2018) Structured sequence modeling with graph convolutional recurrent networks. In International Conference on Neural Information Processing, pp. 362–373. Cited by: §1, §2.
  • N. Shervashidze, P. Schweitzer, E. J. v. Leeuwen, K. Mehlhorn, and K. M. Borgwardt (2011) Weisfeiler-lehman graph kernels. Journal of Machine Learning Research 12 (Sep), pp. 2539–2561. Cited by: §1, §4.3.1.
  • O. Vinyals, S. Bengio, and M. Kudlur (2015) Order matters: sequence to sequence for sets. arXiv preprint arXiv:1511.06391. Cited by: §3.2.1.
  • Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu (2019) A comprehensive survey on graph neural networks. arXiv preprint arXiv:1901.00596. Cited by: §1.
  • K. Xu, W. Hu, J. Leskovec, and S. Jegelka (2018) How powerful are graph neural networks?. arXiv preprint arXiv:1810.00826. Cited by: §4.4.
  • J. You, R. Ying, X. Ren, W. L. Hamilton, and J. Leskovec (2018) Graphrnn: generating realistic graphs with deep auto-regressive models. arXiv preprint arXiv:1802.08773. Cited by: §1, §3.2.3, §3.2.3, §3.2.3.
  • M. Zhang and Y. Chen (2018) Link prediction based on graph neural networks. In Advances in Neural Information Processing Systems, pp. 5165–5175. Cited by: §1.
  • J. Zhou, G. Cui, Z. Zhang, C. Yang, Z. Liu, and M. Sun (2018) Graph neural networks: a review of methods and applications. arXiv preprint arXiv:1812.08434. Cited by: §1.

Appendix A Extra Experiment Results With Real Datasets

A.1 Histogram of Similarities

See Figure S1.

Appendix B Extra Experiment Results with Synthetic Datasets

B.1 Graph Size Comparison

See Figure S2.

B.2 Histogram of Similarities

See Figure S3.

B.3 Some Examples of Predicted Graphs

See Figures S4, S5, S6, S7, S8 and S9, respectively, for path graphs, ladder graphs of small size, ladder graphs of large size, cycle graphs, path graphs with removal, and cycle graphs with added extra structures.

Figure S1: Similarity histograms on the real-world datasets. The blue histogram corresponds to EvoNet, which is compared against the random graph models. (a): BTC-Alpha dataset; (b): UCI-Forum dataset; (c): Emails EU-Core dataset; (d): DNC emails dataset.
Figure S2: Comparison of graph size: predicted size (blue) vs. real size (orange). Left: path graphs with removal; Right: cycle graphs with added extra structures.
Figure S3: Similarity histograms on the synthetic datasets. The blue histogram corresponds to EvoNet, which is compared against the random graph models. (a): Path graphs; (b): Ladder graphs; (c): Cycle graphs; (d): Path graphs with removal; (e): Cycle graphs with added extra structures.
Figure S4: Some examples of predictions on the Path datasets: the left column shows the real graphs and the right column the predicted ones.
Figure S5: Some examples of predictions on the Ladder datasets (small graphs): the left column shows the real graphs and the right column the predicted ones.
Figure S6: Some examples of predictions on the Ladder datasets (large graphs): the left column shows the real graphs and the right column the predicted ones.
Figure S7: Some examples of predictions on the Cycle datasets: the left column shows the real graphs and the right column the predicted ones.
Figure S8: Some examples of predictions on the Path datasets with removal: the left column shows the real graphs and the right column the predicted ones.
Figure S9: Some examples of predictions on the Cycle datasets with added extra structures: the left column shows the real graphs and the right column the predicted ones.