NetGAN: Generating Graphs via Random Walks

03/02/2018 · by Aleksandar Bojchevski et al.

We propose NetGAN – the first implicit generative model for graphs able to mimic real-world networks. We pose the problem of graph generation as learning the distribution of biased random walks over the input graph. The proposed model is based on a stochastic neural network that generates discrete output samples and is trained using the Wasserstein GAN objective. NetGAN is able to produce graphs that exhibit well-known network patterns without explicitly specifying them in the model definition. At the same time, our model exhibits strong generalization properties, as highlighted by its competitive link prediction performance, despite not being trained specifically for this task. Being the first approach to combine both of these desirable properties, NetGAN opens up exciting avenues for further research.


1 Introduction

Generative models for graphs have a long-standing history, with applications including data augmentation, anomaly detection and recommendation (Chakrabarti & Faloutsos, 2006). Explicit probabilistic models such as Barabási–Albert or stochastic blockmodels are the de facto standard in this field (Goldenberg et al., 2010). However, it has been shown on multiple occasions that our intuitions about the structure and behavior of graphs can be misleading. For instance, the heavy-tailed degree distributions found in real graphs were in strong disagreement with the models existing at the time of their discovery (Barabási & Albert, 1999). More recent works such as Dong et al. (2017) and Broido & Clauset (2018) keep bringing up other surprising characteristics of real-world networks that question the validity of the established models. This leads us to the question: "How do we define a model that captures all the essential (potentially still unknown) properties of real graphs?"

Figure 1: (a) Subgraph of the Citeseer network and (b) the corresponding subgraph of the graph generated by NetGAN (44% edge overlap). Both have similar structure but are not identical. (c) The degree distributions of the two graphs are very close.

An increasingly popular way to address this issue in other fields is to switch from explicit (prescribed) models to implicit ones. This transition is especially notable in computer vision, where generative adversarial networks (GANs) (Goodfellow et al., 2014) significantly advanced the state of the art over classic prescribed approaches like mixtures of Gaussians (Blanken et al., 2007). GANs achieve unparalleled results in scenarios such as image and 3D object generation (e.g., Karras et al., 2017; Berthelot et al., 2017; Wu et al., 2016). However, despite their massive success on real-valued data, adapting GANs to handle discrete objects like graphs or text remains an open research problem (Goodfellow, 2016). In fact, discreteness is only one of the obstacles when applying GANs to network data. Large repositories of graphs that all come from the same distribution are not available, which means that in a typical setting one has to learn from a single graph. Additionally, any model operating on a graph necessarily has to be permutation invariant, as graphs are isomorphic under node reordering.

In this work we introduce NetGAN – the first implicit generative model for graphs and networks that tackles all of the above challenges. We formulate the problem of learning the graph topology as learning the distribution of biased random walks over the graph. As in the typical GAN setting, the generator – in our case a stochastic neural network with discrete output samples – learns to generate random walks that are plausible in the real graph, while the discriminator has to distinguish them from the true ones sampled from the original graph.

The main requirement for a graph generative model is the ability to generate realistic graphs. In the experimental section we compare NetGAN to other established prescribed models on this task. We observe that our proposed method consistently reproduces most known patterns inherent to real-world networks without explicitly specifying any of them in the model definition (e.g., degree distribution, as seen in Fig. 1). However, a model that simply replicates the original graph would also trivially fulfill this requirement, which clearly is not our goal. To show that this is not the case, we examine the generalization properties of NetGAN by evaluating its link prediction performance. As our experiments show, our model is competitive in this task and even achieves state-of-the-art results on some datasets. This is especially impressive, since NetGAN is not trained explicitly for link prediction. To summarize, our main contributions are:

  • We introduce NetGAN (code available at: https://www.kdd.in.tum.de/netgan) – the first GAN architecture of its kind that generates graphs via random walks. Our model tackles the associated challenges of staying permutation invariant, learning from a single graph, and generating discrete output.

  • We show that our method preserves important topological properties without having to explicitly specify them in the model definition. Moreover, we demonstrate how latent space interpolation produces graphs with smoothly changing characteristics.

  • We highlight the generalization properties of NetGAN through its link prediction performance, which is competitive with the state of the art on real-world datasets, despite the model not being trained explicitly for this task.

2 Related Work

So far, no GAN architectures applicable to real-world networks have been proposed. Liu et al. (2017) propose a GAN architecture for learning topological features of subgraphs. Tavakoli et al. (2017) apply GANs to graph data by trying to directly generate adjacency matrices. Because their model produces the entire adjacency matrix – including the zero entries – it requires computation and memory quadratic in the number of nodes. Such quadratic complexity is infeasible in practice, so only small graphs can be processed, with a reported runtime of over 60 hours for a graph with only 154 nodes. In contrast, NetGAN operates on random walks – it considers only the nonzero entries of the adjacency matrix, thereby efficiently exploiting the sparsity of real-world graphs – and is readily applicable to graphs with thousands of nodes.

Deep learning methods for graph data have mostly been studied in the context of node embeddings (Perozzi et al., 2014; Grover & Leskovec, 2016; Kipf & Welling, 2016). The main idea behind these approaches is to model the probability of each individual edge's existence as some function f(h_u, h_v) of the respective node embeddings h_u and h_v, where f is represented by a neural network. The recently proposed GraphGAN (Wang et al., 2017) is another instance of such prescribed edge-level probabilistic models, where f is optimized using the GAN objective instead of the traditional cross-entropy. Deep embedding based methods achieve state-of-the-art scores in tasks like link prediction and node classification. Nevertheless, as we show in Sec. 4.1, using such approaches to generate entire graphs produces samples that do not preserve the patterns inherent to real-world networks.

Prescribed generative models for graphs have a long history and are well-studied. For a survey we refer the reader to Chakrabarti & Faloutsos (2006) and Goldenberg et al. (2010). Typically, prescribed generative approaches are designed to capture and reproduce some predefined subset of graph properties (e.g., degree distribution, community structure, clustering coefficient). Notable examples include the configuration model (Bender & Canfield, 1978; Molloy & Reed, 1995), variants of the degree-corrected stochastic blockmodel (Karrer & Newman, 2011; Bojchevski & Günnemann, 2018), Exponential Random Graph Models (Holland & Leinhardt, 1981), the Multiplicative Attribute Graph model (Kim & Leskovec, 2011), and the block two-level Erdős–Rényi random graph model (Seshadhri et al., 2012). In Sec. 4 we compare with some of these prescribed models on the tasks of graph generation and link prediction.

Due to the challenging nature of the problem, only a few approaches exist that can generate discrete data using GANs. Most focus on generating discrete sequences such as text, with some using reinforcement learning techniques to enable backpropagation through the sampling of discrete random variables (Yu et al., 2017; Kusner & Hernández-Lobato, 2016; Li et al., 2017; Liang et al., 2017). Other approaches modify the GAN objective to tackle the same challenge (Che et al., 2017; Hjelm et al., 2017). Focusing on non-sequential discrete data, Choi et al. (2017) generate high-dimensional discrete features (e.g., binary indicators, counts) in patient records. None of these methods consider graph-structured data.

Figure 2: The NetGAN architecture proposed in this work (b) and the generator architecture (a).

3 Model

In this section we introduce NetGAN – a Generative Adversarial Network model for graph/network data. Its core idea lies in capturing the topology of a graph by learning a distribution over random walks. Given an input graph of N nodes, defined by a binary adjacency matrix A ∈ {0,1}^(N×N), we first sample a set of random walks of length T from A. This collection of random walks serves as the training set for our model. We use the biased second-order random walk sampling strategy described in Grover & Leskovec (2016), as it better captures both local and global graph structure. An important advantage of using random walks is their invariance under node reordering. Additionally, random walks only include the nonzero entries of A, thus efficiently exploiting the sparsity of real-world graphs.
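For illustration, a minimal sketch of such a walk sampler (the return parameter p and in-out parameter q follow node2vec; the adjacency-set representation and all values are our own illustrative choices, not taken from the reference implementation):

```python
import random

def biased_second_order_walk(adj, start, length, p=1.0, q=1.0):
    """Sketch of the biased second-order random walk sampling of
    Grover & Leskovec (2016). `adj` maps each node to the set of its
    neighbors; p and q control the local/global trade-off."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        neighbors = list(adj[cur])
        if len(walk) == 1:  # first step has no previous node: unbiased choice
            walk.append(random.choice(neighbors))
            continue
        prev = walk[-2]
        # Second-order bias: 1/p for returning to the previous node,
        # 1 for nodes adjacent to it, 1/q for moving further away.
        weights = [1.0 / p if n == prev
                   else (1.0 if n in adj[prev] else 1.0 / q)
                   for n in neighbors]
        walk.append(random.choices(neighbors, weights=weights, k=1)[0])
    return walk

# Example: a training set of walks over a toy graph given as adjacency sets.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
walks = [biased_second_order_walk(adj, random.randrange(4), 16) for _ in range(8)]
```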

Like any typical GAN architecture, NetGAN consists of two main components – a generator G and a discriminator D. The goal of the generator is to generate synthetic random walks that are plausible in the input graph. At the same time, the discriminator learns to distinguish the synthetic random walks from the real ones that come from the training set. Both G and D are trained end-to-end using backpropagation. At any point of the training process it is possible to use G to generate a set of random walks, which can then be used to produce an adjacency matrix of a new generated graph. In the rest of this section we describe each stage of this process and our design choices in more detail. An overview of the complete architecture can be seen in Fig. 2.

3.1 Architecture

Generator. The generator defines an implicit probabilistic model for generating random walks: (v_1, v_2, ..., v_T) ~ G. We model G as a sequential process based on a neural network f_θ parametrized by θ. At each step t, f_θ produces two values: the probability distribution over the next node to be sampled, parametrized by the logits p_t, and the current memory state of the model, denoted as m_t. The next node v_t, represented as a one-hot vector, is sampled from a categorical distribution v_t ~ Cat(σ(p_t)), where σ(·) denotes the softmax function, and together with m_t is passed into f_θ at the next step t+1. Similarly to the classic GAN setting, a latent code z drawn from a multivariate standard normal distribution is passed through a parametric function g_θ' to initialize m_0. The generative process of G is summarized in the sketch below.
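A minimal PyTorch sketch of this generative process (the reference implementation uses TensorFlow; hidden and latent sizes are illustrative placeholders, not the paper's exact values):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Minimal sketch of the NetGAN generator."""
    def __init__(self, num_nodes, hidden_dim=40, latent_dim=16):
        super().__init__()
        self.N, self.H = num_nodes, hidden_dim
        # g_theta': two streams of two fully connected layers (tanh) that
        # map the latent code z to the initial LSTM state (C_0, h_0).
        def stream():
            return nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.Tanh(),
                                 nn.Linear(hidden_dim, hidden_dim), nn.Tanh())
        self.init_c, self.init_h = stream(), stream()
        self.lstm = nn.LSTMCell(hidden_dim, hidden_dim)   # f_theta
        self.W_up = nn.Linear(hidden_dim, num_nodes)      # o_t -> logits p_t
        self.W_down = nn.Linear(num_nodes, hidden_dim)    # one-hot v_t -> next input

    def forward(self, z, walk_len=16, tau=1.0):
        h, c = self.init_h(z), self.init_c(z)
        x = torch.zeros(z.size(0), self.H, device=z.device)  # empty first input
        walk = []
        for _ in range(walk_len):
            h, c = self.lstm(x, (h, c))
            p_t = self.W_up(h)                    # logits over all N nodes
            # Discrete one-hot sample with a differentiable surrogate
            # (Straight-Through Gumbel; see below).
            v_t = F.gumbel_softmax(p_t, tau=tau, hard=True)
            walk.append(v_t)
            x = self.W_down(v_t)                  # project back to R^H
        return torch.stack(walk, dim=1)           # (batch, T, N) one-hot walk

# Example: sample a batch of 4 synthetic walks on a 100-node graph.
G = Generator(num_nodes=100)
walks = G(torch.randn(4, 16))                     # latent codes z ~ N(0, I)
```

During training these one-hot walks are fed to the discriminator; at generation time the argmax indices give the node sequence.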

In this work we focus our attention on the long short-term memory (LSTM) architecture for f_θ, introduced by Hochreiter & Schmidhuber (1997). The memory state m_t of an LSTM is represented by the cell state C_t and the hidden state h_t. The latent code z goes through two separate streams, each consisting of two fully connected layers with tanh activation, which are then used to initialize C_0 and h_0.

A natural question might arise: "Why use a model with memory and temporal dependencies when random walks are Markov processes?" (second-order Markov for biased random walks). Or, put differently, what is the benefit of using random walks of length greater than 2? In theory, a model with large enough capacity could simply memorize all existing edges in the graph and recreate them. However, for large graphs this is not feasible in practice. More importantly, pure memorization is not the goal of NetGAN; rather, we want the model to generalize and to generate graphs with similar properties, not exact replicas. Longer random walks combined with memory help the model learn the topology and general patterns in the data (e.g., community structure). Our experiments in Sec. 4.2 confirm this, showing that longer random walks are indeed beneficial.

After each time step, to generate the next node in the random walk, the network should output logits of length N. However, operating in such a high-dimensional space leads to unnecessary computational overhead. To tackle this issue, the LSTM instead outputs o_t ∈ R^H, with H ≪ N, which is then up-projected to R^N using the matrix W_up. This enables us to efficiently handle large-scale graphs.

Sampling the next node v_t in the random walk presents another challenge: since sampling from a categorical distribution is a non-differentiable operation, it blocks the flow of gradients and precludes backpropagation. We solve this problem with the Straight-Through Gumbel estimator of Jang et al. (2016). More specifically, we perform the following transformation: first, we let v_t* = σ((p_t + g)/τ), where τ is a temperature parameter and the entries of g are i.i.d. samples from a Gumbel distribution with zero mean and unit scale. Then, the next sample is chosen as v_t = onehot(argmax v_t*). While the one-hot sample v_t is passed as input to the next time step, during the backward pass the gradients flow through the differentiable v_t*. The choice of τ allows us to trade off between better gradient flow (large τ, more uniform v_t*) and more exact calculations (small τ, v_t* ≈ v_t).
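For concreteness, a self-contained sketch of this transformation (PyTorch; this mirrors the description above rather than the reference implementation):

```python
import torch
import torch.nn.functional as F

def straight_through_gumbel(logits, tau=1.0):
    """Sketch of the Straight-Through Gumbel estimator (Jang et al., 2016):
    the forward pass emits a discrete one-hot sample, while gradients flow
    through the continuous relaxation v*."""
    g = -torch.log(-torch.log(torch.rand_like(logits)))   # Gumbel(0, 1) noise
    v_soft = F.softmax((logits + g) / tau, dim=-1)        # v* (differentiable)
    v_hard = F.one_hot(v_soft.argmax(dim=-1),
                       num_classes=logits.size(-1)).to(logits.dtype)
    # Forward value is v_hard; the backward pass sees only v_soft.
    return v_soft + (v_hard - v_soft).detach()
```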

Once a new node is sampled, it needs to be projected back to a lower-dimensional representation before being fed into the LSTM. This is done by means of the down-projection matrix W_down.

Discriminator. The discriminator D is based on the standard LSTM architecture. At every time step t, a one-hot vector v_t, denoting the node at the current position, is fed as input. After processing the entire sequence of nodes, the discriminator outputs a single score that represents the probability of the random walk being real.
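A corresponding sketch of the discriminator (same PyTorch setting as the generator sketch above; layer sizes are illustrative, and the down-projection of the one-hot inputs is an assumption analogous to the generator's dimensionality reduction):

```python
import torch.nn as nn

class Discriminator(nn.Module):
    """Sketch: an LSTM reads the walk node by node and emits one score
    per walk."""
    def __init__(self, num_nodes, hidden_dim=30, proj_dim=32):
        super().__init__()
        self.down = nn.Linear(num_nodes, proj_dim)    # one-hot -> dense input
        self.lstm = nn.LSTM(proj_dim, hidden_dim, batch_first=True)
        self.score = nn.Linear(hidden_dim, 1)         # one score per walk

    def forward(self, walks):                         # (batch, T, N) one-hot
        _, (h_last, _) = self.lstm(self.down(walks))
        return self.score(h_last[-1]).squeeze(-1)     # (batch,)
```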

3.2 Training

Wasserstein GAN. We train our model based on the Wasserstein GAN (WGAN) framework (Arjovsky et al., 2017), as it prevents mode collapse and leads to more stable training overall. To enforce the Lipschitz constraint on the discriminator, we use the gradient penalty of Gulrajani et al. (2017). The model parameters are trained using stochastic gradient descent with Adam (Kingma & Ba, 2014). Weights are regularized with an L2 penalty.

Early stopping. Because we are interested in generalizing beyond the input graph, the "trivial" solution in which the generator has memorized all existing edges is of no interest to us. This means that we need to control how closely the generated graphs resemble the original one. To achieve this, we propose two possible early stopping strategies, either of which can be used depending on the task at hand. The first strategy, named Val-Criterion, is concerned with the generalization properties of NetGAN. During training, we keep a sliding window of the random walks generated in the last 1,000 iterations and use them to construct a matrix of transition counts. This matrix is then used to evaluate the link prediction performance on a validation set (i.e., ROC AUC and AP scores; for more details see Sec. 4.2). We stop training when the validation performance stops improving.
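A sketch of how this validation score can be computed from the sliding window of generated walks (function and variable names are our own; SciPy and scikit-learn are used for brevity):

```python
import numpy as np
from scipy.sparse import coo_matrix
from sklearn.metrics import roc_auc_score, average_precision_score

def validation_score(walks, num_nodes, val_pairs, val_labels):
    """`walks` is the window of generated walks (lists of node indices),
    `val_pairs` a list of held-out (u, v) node pairs and `val_labels`
    their binary edge/non-edge labels."""
    rows, cols = [], []
    for walk in walks:
        rows.extend(walk[:-1])  # transitions (v_t, v_{t+1})
        cols.extend(walk[1:])
    counts = coo_matrix((np.ones(len(rows)), (rows, cols)),
                        shape=(num_nodes, num_nodes)).tocsr()
    counts = counts + counts.T                       # undirected graph
    scores = np.array([counts[u, v] for u, v in val_pairs])
    return (roc_auc_score(val_labels, scores),
            average_precision_score(val_labels, scores))
```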

The second strategy, named EO-Criterion, makes NetGAN very flexible and gives the user control over the graph generation process. We stop training when we reach a user-specified edge overlap between the generated graphs (see next section) and the original one. Based on the end task, the user can choose to generate graphs with either small or large edge overlap with the original while maintaining structural similarity; the generated graphs then either generalize better or are closer replicas, respectively, yet still capture the properties of the original.

| Graph | Max. degree | Assortativity | Triangle count | Power law exp. | Inter-comm. density | Intra-comm. density | Clustering coeff. | Charac. path len. | Average rank |
|---|---|---|---|---|---|---|---|---|---|
| Cora-ML | 240 | -0.075 | 2,814 | 1.860 | 4.3e-4 | 1.7e-3 | 2.73e-3 | 5.61 | – |
| Conf. model (1% EO) | * | -0.030 | 322 | * | 1.6e-3 | 2.8e-4 | 3.00e-4 | 4.38 | 7.50 |
| Conf. model (52% EO) | * | -0.051 | 626 | * | 9.8e-4 | 9.9e-4 | 6.10e-4 | 4.46 | 5.83 |
| DC-SBM (11% EO) | 165 | -0.052 | 1,403 | 1.814 | 6.7e-4 | 1.2e-3 | 3.30e-3 | 5.12 | 3.36 |
| ERGM (56% EO) | 243 | -0.077 | 2,293 | 1.786 | 6.9e-4 | 1.2e-3 | 2.17e-3 | 4.59 | 2.88 |
| BTER (2.2% EO) | 199 | 0.033 | 3,060 | 1.787 | 1.0e-3 | 7.5e-4 | 4.62e-3 | 4.59 | 4.75 |
| VGAE (0.3% EO) | 13 | -0.009 | 14 | 1.674 | 1.4e-3 | 3.2e-4 | 1.17e-3 | 5.28 | 5.88 |
| NetGAN Val (39% EO) | 199 | -0.060 | 1,410 | 1.773 | 6.5e-4 | 1.3e-3 | 2.33e-3 | 5.17 | 3.00 |
| NetGAN EO (52% EO) | 233 | -0.066 | 1,588 | 1.793 | 6.0e-4 | 1.4e-3 | 2.44e-3 | 5.20 | 1.75 |

Table 1: Statistics of Cora-ML and of the graphs generated by NetGAN and the baselines, averaged over 5 trials. NetGAN closely matches the input graph in most properties, while the other methods either deviate significantly in at least one statistic or overfit. * indicates values of the configuration model that by definition exactly match the original.

3.3 Assembling the Adjacency Matrix

After training finishes, we use the generator G to construct a score matrix S of transition counts, i.e., we count how often each edge appears in a set of generated random walks (typically using a much larger number of random walks than for early stopping, e.g., 500K). While the raw counts matrix S is sufficient for link prediction purposes, we need to convert it to a binary adjacency matrix if we wish to reason about the synthetic graph. First, S is symmetrized by setting s_ij = s_ji = max(s_ij, s_ji). Because we cannot explicitly control the starting nodes of the random walks generated by G, some high-degree nodes will likely be overrepresented. Thus, a simple binarization strategy like thresholding or choosing the top-k entries might leave out the low-degree nodes and produce singletons.

To address this issue, we use the following approach: (i) We ensure that every node i has at least one edge by sampling a neighbor j with probability p_ij = s_ij / Σ_v s_iv; if an edge was already sampled before, we repeat the procedure. (ii) We continue sampling edges without replacement, using for each edge (i, j) the probability p_ij = s_ij / Σ_{u,v} s_uv, until we reach the desired number of edges (e.g., as many edges as in the original graph). To obtain an undirected graph, for every edge (i, j) we also include (j, i). Note that this procedure is not guaranteed to produce a fully connected graph.
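The procedure might be sketched as follows (dense NumPy for readability; re-drawing duplicate edges stands in for sampling without replacement, and we assume every node appears in at least one generated walk):

```python
import numpy as np

def assemble_adjacency(counts, num_edges, rng=None):
    """Convert a transition-count matrix into a binary adjacency matrix
    with `num_edges` undirected edges (connectivity not guaranteed)."""
    rng = rng or np.random.default_rng()
    N = counts.shape[0]
    S = np.maximum(counts, counts.T).astype(float)   # s_ij = s_ji = max(...)
    A = np.zeros((N, N), dtype=np.int8)
    # (i) Guarantee at least one edge per node, drawn with
    #     p_ij = s_ij / sum_v s_iv.
    for i in range(N):
        j = rng.choice(N, p=S[i] / S[i].sum())
        A[i, j] = A[j, i] = 1
    # (ii) Fill the remaining budget with p_ij proportional to s_ij.
    probs = np.triu(S, k=1).ravel()
    probs /= probs.sum()
    while A.sum() // 2 < num_edges:
        i, j = divmod(int(rng.choice(probs.size, p=probs)), N)
        A[i, j] = A[j, i] = 1                        # duplicates are re-drawn
    return A
```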

4 Experiments

In this section we evaluate the quality of the graphs generated by NetGAN by computing various graph statistics. We quantify the generalization power of the proposed model by evaluating its link prediction performance. Furthermore, we demonstrate how we can generate graphs with smoothly changing properties via latent space interpolation. Additional experiments are provided in the supp. mat.

Datasets. For the experiments we use five well-known citation datasets and the Political Blogs dataset. For the large Cora dataset and its commonly used subset of machine learning papers denoted with Cora-ML we use the same preprocessing as in Bojchevski & Günnemann (2018). For all the experiments we treat the graphs as undirected and only consider the largest connected component (LCC). Information about the datasets is listed in Table 2.

| Name | N_LCC | E_LCC | Reference |
|---|---|---|---|
| Cora-ML | 2,810 | 7,981 | McCallum et al. (2000) |
| Cora | 18,800 | 64,529 | McCallum et al. (2000) |
| CiteSeer | 2,110 | 3,757 | Sen et al. (2008) |
| Pubmed | 19,717 | 44,324 | Sen et al. (2008) |
| DBLP | 16,191 | 51,913 | Pan et al. (2016) |
| Pol. Blogs | 1,222 | 16,714 | Adamic & Glance (2005) |

Table 2: Dataset statistics. N_LCC and E_LCC denote the number of nodes and edges, respectively, in the largest connected component.

4.1 Graph Generation

Figure 3: Properties of graphs generated by NetGAN trained on Cora-ML: (a) degree distribution, (b) assortativity over training iterations, (c) edge overlap (EO) over training iterations.

Setup. In this task, we fit NetGAN to the Cora-ML and Citeseer citation networks in order to evaluate the quality of the generated graphs. We compare to the following baselines: the configuration model (Molloy & Reed, 1995), the degree-corrected stochastic blockmodel (DC-SBM) (Karrer & Newman, 2011), the exponential random graph model (ERGM) (Holland & Leinhardt, 1981) and the block two-level Erdős–Rényi random graph model (BTER) (Seshadhri et al., 2012). Additionally, we use the variational graph autoencoder (VGAE) (Kipf & Welling, 2016) as a representative of network embedding approaches. We randomly hide 15% of the edges (which are used for the stopping criterion; see Sec. 3.2) and fit all models on the remaining graph. We sample 5 graphs from each of the trained models and report their average statistics in Table 1. Definitions of the statistics, additional metrics, standard deviations and details about the baselines are given in the supplementary material.

Evaluation. The general trend that becomes apparent from the results in Table 1 (and Table 2 in supplementary material) is that prescribed models excel at recovering the statistics that they directly model (e.g., degree sequence for DC-SBM). At the same time, these models struggle when dealing with graph properties that they don’t account for (e.g., assortativity for BTER). On the other hand, NetGAN is able to capture all the graph properties well, although none of them are explicitly specified in its model definition. We also see that VGAE is not able to produce realistic graphs. This is expected, since the main purpose of VGAE is learning node embeddings, and not generating entire graphs.

The final column shows the average rank of each method across all statistics, with NetGAN performing best. ERGM seems to perform surprisingly well; however, it suffers from severe overfitting – using the same fitted ERGM for the link prediction task yields AUC and AP scores close to 0.5 (the worst possible value). In contrast, NetGAN does a good job both at preserving properties in generated graphs and at generalizing, as we see in Sec. 4.2.

Is the good performance of NetGAN in this experiment only due to the overlapping edges (existing in the input graph)? To rule out this possibility we perform the following experiment: We take the graph generated by NetGAN, fix the overlapping edges and rewire the rest according to the configuration model. The properties of the resulting graph (row #3 in Table 1) deviate strongly from the input graph. This confirms that NetGAN does not simply memorize some edges and generates the rest at random, but rather captures the underlying structure of the network.

In line with our intuition, we can see that higher EO leads to generated graphs with statistics closer to the original. Figs. 3(b) and 3(c) show how the graph statistics evolve during the training process. Fig. 3(c) shows that the edge overlap increases smoothly with the number of epochs. We provide plots for other statistics and for Citeseer in the supp. mat.

4.2 Link Prediction

Setup. Link prediction is a common graph mining task where the goal is to predict the existence of unobserved links in a given graph. We use it to evaluate the generalization properties of NetGAN. We hold out 10% of the edges from the graph for validation and 5% as the test set, along with equal numbers of randomly selected non-edges, while ensuring that the training network remains connected. We measure performance with two commonly used metrics: area under the ROC curve (AUC) and average precision (AP). To evaluate NetGAN, we sample a given number of random walks (500K or 100M) from the trained generator and use the observed transition counts between any two nodes as a measure of how likely an edge between them is. We compare with DC-SBM, node2vec and VGAE, as well as Adamic/Adar (Adamic & Adar, 2003).

Evaluation. The results are listed in Table 3. There is no overall dominant method, with different methods achieving best results on different datasets. NetGAN shows competitive performance for all datasets, even achieving state-of-the-art results for some of them (Citeseer and PolBlogs), despite not being explicitly trained for this task.

Interestingly, NetGAN's performance increases with the number of random walks sampled from the generator. This is especially true for the larger networks (Cora, DBLP, Pubmed), since given their size we need more random walks to cover the entire graph. This suggests that at an additional computational cost one can obtain significant gains in link prediction performance. Note that while 100M may seem like a large number, the sampling procedure can be trivially parallelized.

| Method | Cora-ML | Cora | Citeseer | DBLP | Pubmed | PolBlogs |
|---|---|---|---|---|---|---|
| Adamic/Adar | 92.16 / 85.43 | 93.00 / 86.18 | 88.69 / 77.82 | 91.13 / 82.48 | 84.98 / 70.14 | 85.43 / 92.16 |
| DC-SBM | 96.03 / 95.15 | 98.01 / 97.45 | 94.77 / 93.13 | 97.05 / 96.57 | 96.76 / 95.64 | 95.46 / 94.93 |
| node2vec | 92.19 / 91.76 | 98.52 / 98.36 | 95.29 / 94.58 | 96.41 / 96.36 | 96.49 / 95.97 | 85.10 / 83.54 |
| VGAE | 95.79 / 96.30 | 97.59 / 97.93 | 95.11 / 96.31 | 96.38 / 96.93 | 94.50 / 96.00 | 93.73 / 94.12 |
| NetGAN (500K) | 94.00 / 92.32 | 82.31 / 68.47 | 95.18 / 91.93 | 82.45 / 70.28 | 87.39 / 76.55 | 95.06 / 94.61 |
| NetGAN (100M) | 95.19 / 95.24 | 84.82 / 88.04 | 96.30 / 96.89 | 86.61 / 89.21 | 93.41 / 94.59 | 95.51 / 94.83 |

Table 3: Link prediction performance (in %). Each cell reports AUC / AP.
Figure 4: Properties of the random walks ((a) avg. degree of start node, (b) avg. share of nodes in the start community) as well as of the graphs ((c) Gini coefficient, input graph: 0.48; (d) max. degree, input graph: 240) sampled from the bins.

Figure 5: Community histograms of graphs sampled from subsets of the latent space. (a) shows complete community histograms on a grid; (b) and (c) show how the shares of specific communities change along a top-to-bottom and a left-to-right trajectory, respectively. One reference histogram shows the community distribution when sampling from the entire latent space, and (*) marks the community histogram of Cora-ML. Available as an animation at https://goo.gl/bkNcVa.

Sensitivity analysis. Although NetGAN has many hyperparameters – typical for a GAN model – in practice most of them are not critical for performance, as long as they are within a reasonable range.

Figure 6: Effect of the random walk length T on the performance.

One important exception is the random walk length T. To choose the optimal value, we evaluate the change in link prediction performance as we vary T on Cora-ML. We train multiple models with different random walk lengths, and evaluate the scores while ensuring that each model observes an equal number of transitions. Results averaged over 5 runs are given in Fig. 6. We empirically confirm that the model benefits from using longer random walks as opposed to just edges (i.e., T = 2). The performance gain from walks longer than T = 16 is marginal and does not outweigh the additional computational cost, thus we set T = 16 for all experiments.

4.3 Latent Variable Interpolation

Setup. Latent space interpolation is a good way to gain insight into the kind of structure the generator has captured. To be able to visualize the properties of the generated graphs, we train our model using a 2-dimensional noise vector z, drawn as before from a bivariate standard normal distribution. Then, instead of sampling from the entire latent space, we sample from subregions of it and visualize the results. Specifically, we divide the latent space into subregions (bins) of equal probability mass using the standard normal cumulative distribution function. For each bin we generate 62.5K random walks. We evaluate properties both of the generated random walks themselves and of the resulting graphs, obtained by sampling a binary adjacency matrix for each bin.
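A sketch of this binning scheme (SciPy-based; the grid size is an illustrative placeholder, since the exact bin count is not reproduced here):

```python
import numpy as np
from scipy.stats import norm

def sample_latent_bin(i, j, bins=10, n_samples=100, rng=None):
    """Draw 2-D latent codes from cell (i, j) of a bins x bins grid whose
    cells each hold equal probability mass under the bivariate standard
    normal distribution."""
    rng = rng or np.random.default_rng()
    # Uniform quantiles restricted to the cell, pushed through the inverse
    # standard-normal CDF -> equal mass per cell by construction.
    u = rng.uniform(i / bins, (i + 1) / bins, n_samples)
    v = rng.uniform(j / bins, (j + 1) / bins, n_samples)
    eps = 1e-9  # keep quantiles strictly inside (0, 1)
    return norm.ppf(np.clip(np.stack([u, v], axis=1), eps, 1 - eps))
```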

Evaluation. In Figs. 4(a) and 4(b) we see properties of the generated random walks; in Figs. 4(c) and 4(d), we visualize properties of graphs sampled from the random walks in the respective bins. In all four heatmaps we see distinct patterns, e.g., a higher average degree of starting nodes in the bottom-right region of Fig. 4(a), or higher degree distribution inequality in the top-right area of Fig. 4(c). While Figs. 4(c) and 4(d) show that certain regions of the latent space correspond to generated graphs with very different degree distributions, recall that sampling from the entire latent space yields graphs with a degree distribution similar to the original graph (see Fig. 1(c)). The model was trained on Cora-ML. More heatmaps for other metrics (16 in total) and visualizations for Citeseer can be found in the supplementary material.

This experiment clearly demonstrates that by interpolating in the latent space we can obtain graphs with smoothly changing properties. The smooth transitions in the heatmaps provide evidence that our model learns to map specific parts of the latent space to specific properties of the graph.

We can also see this mapping from the latent space to the generated graph properties in the community distribution histograms on a grid in Fig. 5. Marked by (*) we see the community distribution of the input graph, shown next to the distribution of the graph obtained by sampling from the complete latent space. In Figs. 5(b) and 5(c), we see the evolution of selected community shares when following a trajectory from top to bottom, and from left to right, respectively. The community histograms resulting from sampling random walks in opposing regions of the latent space are very different; again, the transitions between these histograms are smooth, as can be seen in the trajectories in Figs. 5(b) and 5(c).

5 Discussion and Future Work

When evaluating different graph generative models in Sec. 4.1, we observed a major limitation of explicit models. While the prescribed approaches excel at recovering the properties directly included in their definition, they perform significantly worse with respect to the rest. This clearly indicates the need for implicit graph generators such as NetGAN. Indeed, we notice that our model is able to consistently capture all of the important graph characteristics (see Table 1). Moreover, NetGAN generalizes beyond the input graph, as can be seen from its strong link prediction performance in Sec. 4.2. Still, being the first model of its kind, NetGAN has certain limitations, and a number of related questions could be addressed in follow-up work:

Scalability. We observed in Sec. 4.2 that it takes a large number of generated random walks to get representative transition counts for large graphs. While sampling random walks from NetGAN is trivially parallelizable, a possible extension of our model is to use a conditional generator, i.e., one that can be provided a desired starting node, thus ensuring more even coverage. On the other hand, the sampling procedure itself can be sped up by incorporating a hierarchical softmax output layer – a method commonly used in natural language processing.

Evaluation. It is nearly impossible to judge whether a graph is realistic by visually inspecting it (unlike images, for example). In this work we already quantitatively evaluate the performance of NetGAN on a large number of standard graph statistics. However, developing new measures applicable to (implicit) graph generative models will deepen our understanding of their behavior, and is an important direction for future work.

Experimental scope. In the current work we focus on the setting of a single large graph. Adaptation to other scenarios, such as a collection of smaller i.i.d. graphs, which frequently occurs in other fields (e.g., chemistry, biology), would be an important extension of our model. Studying the influence of the graph topology (e.g., sparsity, diameter) on NetGAN's performance will shed more light on the model's properties.

Other types of graphs. While plain graphs are ubiquitous, many important applications deal with attributed, k-partite or heterogeneous networks. Adapting the NetGAN model to handle these other modalities of the data is a promising direction for future research. Especially important would be an adaptation to the dynamic/inductive setting, where new nodes are added over time.

6 Conclusion

In this work we introduced NetGAN – an implicit generative model for network data. NetGAN is able to generate graphs that capture important topological properties of complex networks, such as community structure and degree distribution, without having to manually specify any of them. Moreover, our proposed model shows strong generalization properties, as highlighted by its competitive link prediction performance on a number of datasets. NetGAN can also be used for generating graphs with continuously varying characteristics via latent space interpolation. Combined, our results provide strong evidence that implicit generative models for graphs are well-suited for capturing the complex nature of real-world networks.

Acknowledgments

This research was supported by the German Research Foundation, Emmy Noether grant GU 1409/2-1, and by the Technical University of Munich - Institute for Advanced Study, funded by the German Excellence Initiative and the European Union Seventh Framework Programme under grant agreement no 291763, co-funded by the European Union.

References

Appendix A Graph statistics

Appendix B Baselines

  • Configuration model. In addition to randomly rewiring all edges in the input graph, we also use the configuration model to generate random graphs with a similar edge overlap to the graphs generated by NetGAN. For this, we randomly select a share of edges (e.g., 39%) and keep them fixed, then shuffle the remaining edges. This leads to a graph with the specified edge overlap; in Table 2 we show that, at the same edge overlap, NetGAN's generated graphs in general match the input graph better w.r.t. the statistics we measure.

  • Exponential random graph model. We use the R implementation of ERGM from the ergm package (Handcock et al., 2017). We used the following parameter settings: edge count, density, degree correlation, deg1.5, and gwesp. Here, deg1.5 is the sum of all degrees to the power of 1.5, and gwesp refers to the geometrically weighted edgewise shared partner distribution.

  • Degree-corrected stochastic blockmodel. We use the Python implementation from the graph-tool package (Peixoto, 2014) with the recommended hyperparameter settings.

  • Variational graph autoencoder. We use the implementation provided by the authors (https://github.com/tkipf/gae). We construct the graph from the predicted edge probabilities using the same protocol as in Sec. 3.3 of our paper. To ensure a fair comparison we perform early stopping, i.e. select the weights that achieve the best validation set performance.

Appendix C Properties of generated graphs

Appendix D Graph statistics during the training process


Appendix E Latent space interpolation heatmaps

The heatmaps cover 16 statistics: (a) avg. degree of start node; (b) avg. share of nodes in the same community as the starting node; (c) Gini coefficient (input graph: 0.48); (d) max. degree (input graph: 240); (e) assortativity (input graph: -0.075); (f) claw count; (g) relative edge distribution entropy (input graph: 0.94); (h) largest connected component (input graph: 2,810); (i) edge overlap; (j) power law exponent (input graph: 1.86); (k) average precision for link prediction; (l) ROC AUC for link prediction; (m) share of walks in a single community; (n) avg. start node entropy; (o) triangle count (input graph: 2,814); (p) wedge count (input graph: 101,872).

Appendix F Latent space interpolation community histograms – Citeseer

Appendix G Recovering ground-truth edge probabilities

To further investigate the ability of NetGAN to capture the graph structure, we perform an additional experiment with the goal of analyzing how well we can recover the ground-truth edge probabilities given a graph generated from a prescribed generative model. To that end, we first generate a graph from a DC-SBM, then fit NetGAN on this graph, and finally compare the ground-truth edge probabilities to the edge scores inferred by NetGAN – specifically, we compute their rank correlation. We find a correlation of 0.998 (with EO = 0.42), which shows that NetGAN uncovered the underlying generative process without overfitting to the input graph.

Appendix H Hyperparameter configuration

As discussed in Sec. 4.2, NetGAN is not sensitive to the choice of most hyperparameters. For completeness, we report here the sensible defaults used in our experiments. The generator and the discriminator each have a single hidden layer. Both use a down-projection matrix: W_down,G for the generator and W_down,D for the discriminator. The latent code z is drawn from a multivariate standard normal distribution. We anneal the Gumbel temperature τ downwards every 500 iterations with a multiplicative decay. We tune the parameters p and q (used to bias the generated random walks) for each dataset separately using the procedure in Grover & Leskovec (2016).

We use Adam (Kingma & Ba, 2014) to optimize all parameters, regularizing the weights with an L2 penalty. We perform five update steps for the parameters of the discriminator for each single update step of the parameters of the generator, and we set the Wasserstein gradient penalty applied to the discriminator to 10, as suggested by Gulrajani et al. (2017). For early stopping, we evaluate the score every 500 iterations and set the patience to 5 evaluation steps. To calculate the validation score we generate 15M transitions, e.g., for a random walk of length 16 (i.e., 15 transitions per random walk) this equals 1M random walks.

For more details we refer the reader to the provided reference implementation at https://www.kdd.in.tum.de/netgan.