Joint embedding of structure and features via graph convolutional networks

05/21/2019 ∙ by Sébastien Lerique, et al.

The creation of social ties is largely determined by the entangled effects of people's similarities in terms of individual characteristics and common friends. However, the feature and structural characteristics of people usually appear to be correlated, making it difficult to determine which plays the greater role in the formation of the emergent network structure. We propose AN2VEC, a node embedding method which ultimately aims at disentangling the information shared by the structure of a network and the features of its nodes. Building on recent developments in Graph Convolutional Networks (GCN), we develop a multitask GCN Variational Autoencoder where different dimensions of the generated embeddings can be dedicated to encoding feature information, network structure, and shared feature-network information. We explore the interaction between these disentangled representations by comparing the embedding reconstruction performance to a baseline case where no shared information is extracted. We use synthetic datasets with different levels of interdependency between feature and network structure and show (i) that shallow embeddings relying on shared information perform better than the corresponding reference with unshared information, (ii) that this performance gap increases with the correlation between network and feature structure, and (iii) that our embedding is able to capture joint information of structure and features. Our method can be relevant for the analysis and prediction of any featured network structure, ranging from online social systems to network medicine.


1 Introduction

Although it is relatively easy to obtain the proxy social network and various individual features for users of online social platforms, the combined characterisation of these two types of information still challenges our methodology. While current approaches have been able to approximate the observed marginal distributions of node and network features separately, their combined consideration has usually been handled via summary network statistics merged with otherwise independently built node feature sets. However, the entanglement between structural patterns and feature similarities appears to be fundamental to a deeper understanding of network formation and dynamics. The value of this joint information calls for the development of statistical tools that learn combined representations of network and feature information and their dependencies.

The formation of ties and mesoscopic structures in online social networks is arguably determined by several competing factors. Considering only network information, neighbour similarities between people are thought to explain network communities, where triadic closure mechanisms [1, 2] induce ties between peers with larger fractions of common friends [3]. Meanwhile, random bridges [2] are built via focal closure mechanisms, optimising the structure for global connectedness and information dissemination. At the same time, people in the network can be characterised by various individual features, such as their socio-demographic background [4, 5], linguistic characteristics [6, 5, 7], or the distributions of their topics of interest [8, 9], to mention just a few. Such features generate homophilic tie creation preferences [10, 11], which induce links with higher probability between similar individuals, who in turn form feature communities of shared interest, age, gender, socio-economic status, and so on [4, 12]. Though these mechanisms are not independent and lead to correlations between feature and network communities, it is difficult to establish the causal relationship between the two: first, because simultaneously characterising similarities between multiple features and a complex network structure is not an easy task; second, because it is difficult to determine which of the two types of information, features or structure, drives network formation to a greater extent. Indeed, we do not know what fraction of similar people initially get connected through homophilic tie creation, versus the fraction that first get connected due to structural similarities before influencing each other to become more similar [13, 14].

Over the last decade, popular methods have been developed to characterise structural and feature similarities and to identify these two notions of communities. The detection of network communities has been a major challenge in network science, with various concepts proposed [15, 16, 17] to solve it as an unsupervised learning task [18, 19]. Commonly, these algorithms rely solely on network information, and their output is difficult to cross-examine without additional meta-data, which is usually disregarded in their description. On the other hand, methods grouping similar people into feature communities typically ignore network information, relying exclusively on individual features to solve the problem as a data clustering challenge [20, 21]. Some semi-supervised learning tasks, such as link prediction, may take feature and structural information into account simultaneously, but only by enriching individual feature vectors with node-level network characteristics such as degree or local clustering [22, 23, 9]. Methods that take higher-order network correlations and multivariate feature information into account at the same time are yet to be defined. Their development would offer huge potential for understanding the relations between individuals' characteristics, their social relationships, the content they engage with, and the larger communities they belong to. This would not only provide us with deeper insight into social behaviour, it would also give us predictive tools for the emergence of network structure, individual interests, and behavioural patterns.

In this paper we propose a contribution to solving this problem by developing a joint feature-network embedding built on multitask Graph Convolutional Networks [24, 25, 26, 27] and Variational Autoencoders (GCN-VAE) [28, 29, 30, 31, 32], which we call the Attributed Network to Vector method (AN2VEC). In our model, different dimensions of the generated embeddings can be dedicated to encoding feature information, network structure, or shared feature-network information separately. Unlike previous embedding methods dealing with features [33, 34, 35, 36], this interaction model [37] allows us to explore the dependencies between the disentangled network and feature information by comparing the embedding reconstruction performance to a baseline case where no shared information is extracted. Using this method, we can identify an optimal reduced embedding, which indicates whether combined information coming from the structure and features is important, or whether their non-interacting combination is sufficient for reconstructing the featured network.

In practice, as this method solves a reconstruction problem, it may give important insights into the combination of feature- and structure-driven mechanisms which determine the formation of a given network. As an embedding, it is useful for identifying people who share similar individual and structural characteristics. Finally, by measuring the optimal overlap between feature- and network-associated dimensions, it can be used to assess network community detection methods, showing how well they identify communities explained by feature similarities.

In what follows, after summarising the relevant literature, we introduce our method and demonstrate its performance on synthetic featured networks, for which we control the structural and feature communities as well as the correlations between the two. As a result, we will show that our embeddings, when relying on shared information, outperform the corresponding reference without shared information, and that this performance gap increases with the correlation between network and feature structure since the method can capture the increased joint information. Finally, we close our paper with a short summary and a discussion about potential future directions for our method.

2 Related Work

The advent of increasing computational power coupled with the continuous release and ubiquity of large graph-structured datasets has triggered a surge of research in the field of network embeddings. The main motivation behind this trend is to be able to convert a graph into a low-dimensional space where its structural information and properties are maximally preserved [38]. The aim is to extract unseen or hard to obtain properties of the network, either directly or by feeding the learned representations to a downstream inference pipeline.

2.1 Graph embedding survey: from matrix factorisation to deep learning

In early work, low-dimensional node embeddings were learned for graphs constructed from non-relational data by relying on matrix factorisation techniques: assuming that the input data lies on a low-dimensional manifold, such methods sought to reduce the dimensionality of the data while preserving its structure, and did so by factorising the graph Laplacian (as in Laplacian eigenmaps [39]) or node proximity matrices [40].

More recent work has attempted to develop embedding architectures that can use deep learning techniques to compute node representations. DeepWalk [41], for instance, computes node co-occurrence statistics by sampling the input graph via truncated random walks, and adopts a SkipGram neural language model to maximise the probability of observing the neighbourhood of a node given its embedding. By doing so, the learned embedding space preserves second-order proximity in the original graph. However, this technique and the ones that followed [42, 43] present generalisation caveats, as nodes unobserved during training cannot be meaningfully embedded in the representation space, and the embedding space itself does not generalise between graphs. Instead of relying on random-walk-based sampling of graphs to feed deep learning architectures, other approaches have used the whole network as input to autoencoders, learning, at the bottleneck layer, an efficient representation able to recover proximity information [31, 44, 34]. However, these techniques remained limited by the fact that successful deep learning models, such as convolutional neural networks, require an underlying Euclidean structure in order to be applicable.

2.2 Geometric deep learning survey: defining convolutional layers on non-Euclidean domains

This restriction has been gradually overcome by the development of graph convolutions or Graph Convolutional Networks (GCN). By relying on the definition of convolutions in the spectral domain, Bruna et al. [25] defined spectral convolution layers based on the spectrum of the graph Laplacian. Several modifications and additions followed, progressively ensuring the feasibility of learning on large networks as well as the spatial localisation of the learned filters [45, 27]. A key step was made by [46] with the use of Chebyshev polynomials of the Laplacian, in order to avoid having to work in the spectral domain. These polynomials, of order up to $K$, generate localised filters that behave as diffusion operators limited to $K$ hops around each vertex. This construction was then further simplified by Kipf and Welling, by assuming, among other things, that $K = 1$ and $\lambda_{\max} \approx 2$ [24].

Recently, these approaches have been extended into more flexible and scalable frameworks. For instance, Hamilton et al. [26] extended the original GCN framework by enabling the inductive embedding of individual nodes, training a set of functions that learn to aggregate feature information from a node's local neighbourhood. In doing so, every node defines a computational graph whose parameters are shared across all the graph's nodes. More broadly, the combination of GCN with autoencoder architectures has proved fertile for creating new embedding methods. The introduction of probabilistic node embeddings, for instance, has appeared naturally from the application of variational autoencoders to graph data [29, 28, 30], and has since led to explorations of the uncertainty of embeddings [36, 32], of appropriate levels of disentanglement and overlap [47], and of better representation spaces for measuring pairwise embedding distances (see in particular recent applications of the Wasserstein distance between probabilistic embeddings [32, 48]). Such models consistently outperform earlier techniques on different benchmarks and have opened several interesting lines of research in fields ranging from drug design [49] to particle physics [50].

Most of the more recent approaches mentioned above can incorporate node features (either because they rely on them centrally, or as an add-on). However, with the exception of DANE [33], they mostly do so by assuming that node features are an additional source of information which is congruent with the network structure (e.g. multi-task learning with shared weights [34], or fusing both information types together [35]). That assumption may not hold in many complex datasets, and it seems important to explore what type of embeddings can be constructed when we lift it, considering different levels of congruence between a network and the features of its nodes.

3 Methods

In this section we present the architecture of the neural network model we use to generate shared feature-structure node embeddings (the implementation of our model is available online at github.com/ixxi-dante/an2vec). We take a featured network as input, with its structure represented as an adjacency matrix and its node features represented as vectors (see below for a formal definition). Our starting point is a GCN-VAE, and our first goal is a multitask reconstruction of both the node features and the network adjacency matrix. Then, as a second goal, we tune the architecture so as to scale the number of embedding dimensions dedicated to feature-only reconstruction, adjacency-only reconstruction, or shared feature-adjacency information, while keeping the number of trainable parameters in the model constant.

3.1 Multitask graph convolutional autoencoder

We begin with the graph-convolutional variational autoencoder developed by [30], which stacks graph-convolutional (GC) layers [24] in the encoder part of a variational autoencoder [29, 28] to obtain a lower-dimensional embedding of the input structure. This embedding is then used for the reconstruction of the original graph (and, in our case, also of the features) in the decoding part of the model. Similarly to [24], we use two GC layers in our encoder and generate Gaussian-distributed node embeddings at the bottleneck layer of the autoencoder. We now introduce each phase of our embedding method in formal terms.

3.1.1 Encoder

We are given an undirected, unweighted featured graph with $N$ nodes, each node having a $D$-dimensional feature vector. Loosely following the notations of [30], we note $A$ the graph's $N \times N$ adjacency matrix (diagonal elements set to 0), $X$ the $N \times D$ matrix of node features, and $X_i$ the $D$-dimensional feature vector of a node $i$.

The encoder part of our model is where $F$-dimensional node embeddings are generated. It computes $\mu$ and $\log\sigma$, two $N \times F$ matrices, which parametrise a stochastic embedding of each node. Here we use two graph-convolutional layers for each parameter set, with shared weights at the first layer and parameter-specific weights at the second layer:

$$\mu = \hat{A} \,\mathrm{ReLU}(\hat{A} X W_0)\, W_\mu, \qquad \log\sigma = \hat{A} \,\mathrm{ReLU}(\hat{A} X W_0)\, W_\sigma$$

In this equation, $W_0$ and $W_\mu, W_\sigma$ are the weight matrices for the linear transformations of each layer's input; $\mathrm{ReLU}(\cdot) = \max(0, \cdot)$ refers to a rectified linear unit [51]; and, following the formalism introduced in [24], $\hat{A}$ is the standard normalised adjacency matrix with added self-connections, defined as:

$$\hat{A} = \tilde{D}^{-\frac{1}{2}} (A + I_N)\, \tilde{D}^{-\frac{1}{2}}, \qquad \tilde{D}_{ii} = 1 + \sum_j A_{ij}$$

where $I_N$ is the identity matrix.
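As an illustration, here is a minimal numpy sketch of the encoder just described; the weight names and shapes (W0, W_mu, W_logsig) are placeholders of our choosing rather than the actual implementation's, and dense matrices are assumed for readability.

```python
import numpy as np

def normalised_adjacency(A):
    """A_hat = D~^(-1/2) (A + I_N) D~^(-1/2): the normalised adjacency
    with added self-connections used by the GC layers."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def relu(x):
    return np.maximum(0.0, x)

def gc_encoder(A, X, W0, W_mu, W_logsig):
    """Two GC layers with a shared first layer (W0) and parameter-specific
    second layers; returns the mu and log(sigma) embedding parameters."""
    A_hat = normalised_adjacency(A)
    H = relu(A_hat @ X @ W0)          # shared first layer
    mu = A_hat @ H @ W_mu             # second layer for mu
    log_sigma = A_hat @ H @ W_logsig  # second layer for log(sigma)
    return mu, log_sigma
```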

3.1.2 Embedding

The parameters $\mu$ and $\sigma$ produced by the encoder define the distribution of an $F$-dimensional stochastic embedding $\xi_i$ for each node $i$, defined as:

$$\xi_i \sim \mathcal{N}\!\left(\mu_i, \mathrm{diag}(\sigma_i^2)\right)$$

Thus, for all the nodes, we can write a probability density function over a given set of embeddings $\xi$, in the form of an $N \times F$ matrix:

$$q(\xi \,|\, A, X) = \prod_{i=1}^{N} q(\xi_i \,|\, A, X) = \prod_{i=1}^{N} \mathcal{N}\!\left(\xi_i \,\middle|\, \mu_i, \mathrm{diag}(\sigma_i^2)\right)$$
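Sampling from these distributions is typically done with the reparameterisation trick, so that gradients can flow through the sampling step; a sketch under the same assumptions as above:

```python
import numpy as np

def sample_embeddings(mu, log_sigma, rng):
    """Draw xi_i ~ N(mu_i, diag(sigma_i^2)) for every node at once,
    via the reparameterisation trick: xi = mu + sigma * eps."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(log_sigma) * eps

# Example: xi = sample_embeddings(mu, log_sigma, np.random.default_rng(0))
```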

3.1.3 Decoder

The decoder part of our model aims to reconstruct both the input node features and the input adjacency matrix by producing the parameters of a generative model for each of the inputs. On one hand, the adjacency matrix $\tilde{A}$ is modelled as a set of independent Bernoulli random variables, whose parameters come from a bilinear form applied to the output of a single dense layer:

$$\tilde{A}_{ij} \sim \mathrm{Ber}(\gamma_{ij}), \qquad \gamma = \mathrm{sigmoid}\!\left(\mathrm{ReLU}(\xi W_{A,1})\, W_b\, \mathrm{ReLU}(\xi W_{A,1})^T\right)$$

Similarly to above, $W_{A,1}$ is the weight matrix for the first adjacency matrix decoder layer, and $W_b$ is the weight matrix for the bilinear form which follows.

On the other hand, features can be modelled in a variety of ways, depending on whether they are binary or continuous, and on whether their norm is constrained. Features in our experiments are one-hot encodings, so we model the reconstruction of the feature matrix $\tilde{X}$ by using single-draw, $D$-categories multinomial random variables. The parameters of those multinomial variables are computed from the embeddings with a two-layer perceptron:

$$\tilde{X}_i \sim \mathrm{Multinomial}(1, \pi_i), \qquad \pi_i = \mathrm{softmax}\!\left(\mathrm{ReLU}(\xi_i W_{X,1})\, W_{X,2}\right)$$

(Other types of node features are modelled according to their constraints and domain: binary features as independent Bernoulli random variables, and continuous-range features as Gaussian random variables, in a similar way to the embeddings themselves.)

In the above equations, $\mathrm{sigmoid}(\cdot)$ refers to the logistic function applied element-wise on vectors or matrices, and $\mathrm{softmax}(\cdot)$ refers to the normalised exponential function, also applied element-wise, with the normalisation running along the rows of matrices (and along the indices of vectors).

Thus we can write the probability density for a given reconstruction as:

$$p(\tilde{A}, \tilde{X} \,|\, \xi) = p(\tilde{A} \,|\, \xi)\; p(\tilde{X} \,|\, \xi) = \prod_{i,j} p(\tilde{A}_{ij} \,|\, \xi) \prod_{i} p(\tilde{X}_i \,|\, \xi)$$
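A possible numpy sketch of both decoders follows; again, the weight names (W_a1, W_b, W_x1, W_x2) are illustrative placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def decode_adjacency(xi, W_a1, W_b):
    """Bernoulli parameters gamma: a bilinear form applied to the
    output of a single dense layer."""
    H = np.maximum(0.0, xi @ W_a1)
    return sigmoid(H @ W_b @ H.T)

def decode_features(xi, W_x1, W_x2):
    """Multinomial parameters: two-layer perceptron, row-wise softmax."""
    return softmax(np.maximum(0.0, xi @ W_x1) @ W_x2)
```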

3.1.4 Learning

The variational autoencoder is trained by minimising an upper bound to the marginal likelihood-based loss [29], defined as:

$$\mathcal{L} = \mathbb{E}_{q(\xi|A,X)}\!\left[-\log p(\tilde{A}\,|\,\xi) - \log p(\tilde{X}\,|\,\xi)\right] + \mathrm{KL}\!\left(q(\xi\,|\,A,X)\,\|\,\mathcal{N}(0, I)\right) + \lambda \lVert\theta\rVert_2^2$$

Here $\mathrm{KL}$ is the Kullback-Leibler divergence between the distribution of the embeddings and a Gaussian prior, and $\theta$ is the vector of decoder parameters, whose associated $L_2$ loss acts as a regulariser for the decoder layers (indeed, following [29], we assume a Gaussian prior on the decoder parameters, such that $-\log p(\theta) \propto \lVert\theta\rVert_2^2$ up to a constant).

Computing the adjacency and feature reconstruction losses by using their exact formulas is computationally not tractable, and the standard practice is instead to estimate those losses by using an empirical mean. We generate $K$ samples of the embeddings by using the distribution given by the encoder, and average the losses of each of those samples (in practice, $K = 1$ is often enough [29, 28]):

$$\mathbb{E}_{q(\xi|A,X)}\!\left[-\log p(\tilde{A}\,|\,\xi) - \log p(\tilde{X}\,|\,\xi)\right] \simeq \frac{1}{K} \sum_{k=1}^{K} \left(-\log p(\tilde{A}\,|\,\xi^{(k)}) - \log p(\tilde{X}\,|\,\xi^{(k)})\right)$$

Finally, for diagonal Gaussian embeddings such as the ones we use, the Kullback-Leibler term can be expressed directly [28]:

$$\mathrm{KL}\!\left(q(\xi\,|\,A,X)\,\|\,\mathcal{N}(0, I)\right) = \frac{1}{2} \sum_{i=1}^{N} \sum_{l=1}^{F} \left(\mu_{il}^2 + \sigma_{il}^2 - 1 - 2\log\sigma_{il}\right)$$
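Putting the two previous formulas together, a sketch of the estimated loss terms (reconstruction by Monte Carlo averaging, KL in closed form) could look as follows; the function names are ours, and numerical safeguards such as clipping the log arguments are omitted for brevity.

```python
import numpy as np

def kl_diagonal_gaussian(mu, log_sigma):
    """Closed-form KL(q || N(0, I)), summed over nodes and dimensions."""
    return 0.5 * np.sum(np.exp(2 * log_sigma) + mu**2 - 1 - 2 * log_sigma)

def estimated_reconstruction_loss(A, X, mu, log_sigma, decode_adj, decode_feat,
                                  n_samples=1, rng=None):
    """Monte Carlo estimate of the reconstruction losses, averaged over
    n_samples embedding draws (n_samples=1 is often enough in practice)."""
    rng = rng or np.random.default_rng(0)
    loss_adj = loss_feat = 0.0
    for _ in range(n_samples):
        xi = mu + np.exp(log_sigma) * rng.standard_normal(mu.shape)
        gamma = decode_adj(xi)   # Bernoulli parameters for the adjacency
        pi = decode_feat(xi)     # multinomial parameters for the features
        loss_adj -= np.sum(A * np.log(gamma) + (1 - A) * np.log(1 - gamma))
        loss_feat -= np.sum(X * np.log(pi))
    return loss_adj / n_samples, loss_feat / n_samples
```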

3.1.5 Loss adjustments

In practice, to obtain useful results, a few adjustments to this loss function are necessary. First, given the high sparsity of real-world graphs, the $A_{ij} = 1$ and $A_{ij} = 0$ terms in the adjacency loss must be scaled respectively up and down in order to avoid globally near-zero link reconstruction probabilities. Instead of penalising reconstruction proportionally to the overall number of errors in edge prediction, we want false negatives ($A_{ij} = 1$ terms) and false positives ($A_{ij} = 0$ terms) to contribute equally to the reconstruction loss, independently of graph sparsity. Formally, let $d$ denote the density of the graph's adjacency matrix ($d = \frac{1}{N^2}\sum_{ij} A_{ij}$); then we replace the estimated adjacency loss $\hat{\mathcal{L}}_A$ by the following re-scaled version (the so-called "balanced cross-entropy"):

$$\hat{\mathcal{L}}_A = -\frac{1}{K} \sum_{k=1}^{K} \sum_{i,j} \left[\frac{A_{ij}}{2d} \log\gamma_{ij}^{(k)} + \frac{1 - A_{ij}}{2(1 - d)} \log\!\left(1 - \gamma_{ij}^{(k)}\right)\right]$$

Second, we correct each component of the loss for its change of scale when the shapes of the inputs and the model parameters change: the Kullback-Leibler term is linear in $N$ and $F$, the adjacency loss is quadratic in $N$, and the feature loss is linear in $N$ (but not in $D$: remember that each $\tilde{X}_i$ is a single-draw multinomial).

Beyond dimension scaling, we also wish to keep the values of the adjacency and feature losses comparable and, in doing so, maintain a certain balance between the difficulty of each task. As a first approximation, and in order to avoid more elaborate schemes which would increase the complexity of our architecture (such as [52]), we divide both loss components by their values at maximum uncertainty: $\log 2$ for the adjacency loss (corresponding to $\gamma_{ij} = 1/2$ for all $i, j$) and $\log D$ for the feature loss (corresponding to uniform multinomial parameters).
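A sketch of the balanced cross-entropy for one embedding sample is shown below; note that the 1/(2d) and 1/(2(1-d)) weighting constants are our assumption, chosen so that the mean loss at maximum uncertainty equals log 2, and may differ from the original implementation.

```python
import numpy as np

def balanced_adjacency_loss(A, gamma):
    """Balanced cross-entropy: re-weight the A_ij = 1 and A_ij = 0 terms
    by the graph density d, so that false negatives and false positives
    contribute equally regardless of sparsity."""
    d = A.sum() / A.size                                  # adjacency density
    pos = (A / (2 * d)) * np.log(gamma)                   # A_ij = 1, scaled up
    neg = ((1 - A) / (2 * (1 - d))) * np.log(1 - gamma)   # A_ij = 0, scaled down
    return -np.mean(pos + neg)                            # log(2) at gamma = 1/2
```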

Finally, we make sure that the regulariser terms in the loss do not overpower the actual learning terms (which are now scaled down close to 1) by adjusting $\lambda$ and an additional factor, $\kappa$, which scales the Kullback-Leibler term (both are kept fixed across all our experiments). These adjustments lead us to the final total loss the model is trained for, where $\hat{\mathcal{L}}_X$ denotes the estimated feature reconstruction loss and the dimension scalings of each term are left implicit:

$$\mathcal{L}_{total} = \frac{\hat{\mathcal{L}}_A}{\log 2} + \frac{\hat{\mathcal{L}}_X}{\log D} + \kappa\, \mathrm{KL}\!\left(q(\xi\,|\,A,X)\,\|\,\mathcal{N}(0, I)\right) + \lambda \lVert\theta\rVert_2^2$$

where we have removed constant terms with respect to trainable model parameters.

3.2 Scaling shared information allocation

The model we just presented uses all the dimensions of the embeddings indiscriminately to reconstruct both the adjacency matrix and the node features. While this can be useful in some cases, it cannot adapt to different interdependencies between graph structure and node features; in cases where the two are not strongly correlated, the embeddings would lose information by conflating features and graph structure. Our second aim is therefore to control how many embedding dimensions are used exclusively for feature reconstruction, exclusively for adjacency reconstruction, or shared by both.

In a first step, we restrict which part of a node's embedding is used for each task. Let $F_A$ be the number of embedding dimensions we allocate to adjacency matrix reconstruction only, $F_X$ the number of dimensions allocated to feature reconstruction only, and $F_{AX}$ the number of dimensions allocated to both. We have $F = F_A + F_{AX} + F_X$. We further introduce the following notation for the restriction of the embedding of node $i$ to a set of dedicated dimensions (note that the order of the indices does not change the training results, as the model has no notion of ordering inside its layers; what follows is valid for any permutation of the dimensions, and the actual indices only matter for downstream interpretation of the embeddings after training):

$$\xi_i^A = \left(\xi_{i,1}, \ldots, \xi_{i,F_A + F_{AX}}\right), \qquad \xi_i^X = \left(\xi_{i,F_A + 1}, \ldots, \xi_{i,F}\right)$$

This extends to the full matrix of embeddings similarly: $\xi^A$ and $\xi^X$ denote the corresponding column restrictions of $\xi$. Using these notations, we adapt the decoder to reconstruct adjacency and features as follows:

$$\tilde{A}_{ij} \sim \mathrm{Ber}\!\left(\gamma_{ij}(\xi^A)\right), \qquad \tilde{X}_i \sim \mathrm{Multinomial}\!\left(1, \pi_i(\xi_i^X)\right)$$

In other words, adjacency matrix reconstruction relies on $F_A + F_{AX}$ embedding dimensions, feature reconstruction relies on $F_X + F_{AX}$ dimensions, and $F_{AX}$ overlapping dimensions are shared between the two. Our reasoning is that, for datasets where the dependency between features and network structure is strong, shallow models with a higher overlap value will perform better than models with the same total number of embedding dimensions and less overlap, or will perform on par with models that have more total embedding dimensions and less overlap. Indeed, the overlapping model should be able to extract the information shared between features and network structure and store it in the overlapping dimensions, while keeping the feature-specific and structure-specific information in their respective embedding dimensions. Compare this to the non-overlapping case, where shared network-feature information is stored redundantly, both in feature- and structure-specific dimensions, at the expense of a larger number of distinct dimensions.
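Concretely, the restriction amounts to column slicing of the embedding matrix, as in this sketch (we assume the conventional ordering used above: adjacency-only dimensions first, shared dimensions in the middle, feature-only dimensions last):

```python
import numpy as np

def split_embedding(xi, F_A, F_AX, F_X):
    """Restrict embeddings to task-specific dimensions: the adjacency
    decoder sees the first F_A + F_AX columns, the feature decoder sees
    the last F_AX + F_X columns; the middle F_AX columns are shared."""
    assert xi.shape[1] == F_A + F_AX + F_X
    xi_adj = xi[:, :F_A + F_AX]    # input to the adjacency decoder
    xi_feat = xi[:, F_A:]          # input to the feature decoder
    return xi_adj, xi_feat
```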

Therefore, to evaluate the performance gains of this architecture, one of our measures compares the final loss for different hyperparameter sets, keeping $F_A + F_{AX}$ and $F_X + F_{AX}$ fixed and varying the overlap size $F_{AX}$. Now, to make sure the training losses for different hyperparameter sets are comparable, we must keep the overall number of trainable parameters in the model fixed. The decoder already has a constant number of trainable parameters, since it only depends on the number of dimensions used for decoding features ($F_X + F_{AX}$) and the adjacency matrix ($F_A + F_{AX}$), which are themselves fixed.

The encoder, on the other hand, requires an additional change. We maintain the dimensions of the encoder-generated $\mu$ and $\log\sigma$ parameters fixed at $F_A + 2 F_{AX} + F_X$ (independently of $F_{AX}$, given the constraints above), and reduce those outputs to $F = F_A + F_{AX} + F_X$ dimensions by averaging the two blocks of $F_{AX}$ overlapping dimensions together. Formally, for $\mu$ (and identically for $\log\sigma$), letting $\mu'$ denote the raw encoder output:

$$\mu_i = \left(\mu'_{i,1..F_A} \;\middle|\; \tfrac{1}{2}\!\left(\mu'_{i,F_A+1..F_A+F_{AX}} + \mu'_{i,F_A+F_{AX}+1..F_A+2F_{AX}}\right) \;\middle|\; \mu'_{i,F_A+2F_{AX}+1..F_A+2F_{AX}+F_X}\right)$$

where $|$ denotes concatenation along the columns of the matrices. In turn, this model maintains a constant number of trainable parameters, while allowing us to adjust the number of dimensions shared by feature and adjacency reconstruction (keeping $F_A + F_{AX}$ and $F_X + F_{AX}$ constant). Figure 1 schematically represents this architecture.
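The averaging reduction can be sketched as follows, assuming the same contiguous ordering of dimensions as above (mu_full stands for the raw encoder output with F_A + 2*F_AX + F_X columns):

```python
import numpy as np

def reduce_encoder_output(mu_full, F_A, F_AX, F_X):
    """Reduce the raw encoder output to F = F_A + F_AX + F_X columns by
    averaging the two blocks of F_AX overlapping dimensions, keeping the
    number of trainable encoder parameters independent of the overlap."""
    assert mu_full.shape[1] == F_A + 2 * F_AX + F_X
    adj_only = mu_full[:, :F_A]
    shared_1 = mu_full[:, F_A:F_A + F_AX]
    shared_2 = mu_full[:, F_A + F_AX:F_A + 2 * F_AX]
    feat_only = mu_full[:, F_A + 2 * F_AX:]
    return np.concatenate([adj_only, (shared_1 + shared_2) / 2, feat_only],
                          axis=1)
```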

Figure 1: Schema of the overlapping embedding model we propose. Grey blocks represent model layers; coloured areas correspond to information dedicated to adjacency (red) or features (blue). The first GC layer on the left receives the node features as input; the top dense layer on the right produces the feature reconstruction, and the bottom bilinear layer produces the adjacency reconstruction.

4 Results

We are interested in measuring two main effects: first, the variation in model performance as we increase the overlap in the embeddings, and second, the capacity of the embeddings with overlap (versus no overlap) to capture and benefit from dependencies between graph structure and node features. To that end, we train overlapping and non-overlapping models on synthetic data with different degrees of correlation between network structure and node features.

4.1 Synthetic featured networks

We use a Stochastic Block Model [53] to generate synthetic featured networks, each with $M$ communities of equal size, with an intra-cluster connection probability $p_{in}$ and an inter-cluster connection probability $p_{out} \ll p_{in}$. Each node is initially assigned a colour which encodes its feature community; we then shuffle the colours of a randomly sampled fraction $1 - \alpha$ of the nodes. This procedure keeps the overall count of each colour constant, and lets us control the correlation between the graph structure and node features by moving $\alpha$ from 0 (no correlation) to 1 (full correlation).

Node features are represented by a one-hot encoding of their colour (therefore, in all our scenarios, the node features have dimension $D = M$). However, since in this case all the nodes inside a community have exactly the same feature value, the model can have difficulties differentiating nodes from one another. We therefore add a small Gaussian noise to the feature vectors, to make sure that nodes in the same community can be distinguished from one another.

Note that the feature matrix has fewer degrees of freedom than the adjacency matrix in this setup, a fact that will be reflected in the plots below. However, opting for this minimal generative model lets us avoid the parameter exploration of more complex feature generation schemes, while still demonstrating the effectiveness of our model.
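A minimal generator for such synthetic featured networks might look like the sketch below; the connection probabilities and the noise scale in the example are illustrative placeholders, not the exact experimental values.

```python
import numpy as np

def synthetic_featured_network(n_communities, community_size, p_in, p_out,
                               alpha, rng):
    """SBM adjacency plus one-hot colour features; the colours of a
    fraction 1 - alpha of the nodes are shuffled, so alpha controls the
    network-feature correlation (0: none, 1: full)."""
    n = n_communities * community_size
    labels = np.repeat(np.arange(n_communities), community_size)
    # Edges: Bernoulli(p_in) inside communities, Bernoulli(p_out) across.
    probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    A = (upper | upper.T).astype(float)
    # Colours start congruent with communities; shuffling a random sample
    # keeps the overall count of each colour constant.
    colours = labels.copy()
    chosen = rng.choice(n, size=int(round((1 - alpha) * n)), replace=False)
    colours[chosen] = rng.permutation(colours[chosen])
    X = np.eye(n_communities)[colours]        # one-hot feature matrix
    X += 0.1 * rng.standard_normal(X.shape)   # small noise (assumed scale)
    return A, X

# Example (placeholder parameters):
# A, X = synthetic_featured_network(100, 10, 0.25, 0.01, 0.8,
#                                   np.random.default_rng(0))
```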

4.2 Comparison setup

To evaluate the efficiency of our model in terms of capturing meaningful correlations between network and features, we compare overlapping and non-overlapping models as follows. For a given maximum number of embedding dimensions $F_{max}$, the overlapping models keep constant the number of dimensions used for adjacency matrix reconstruction and the number of dimensions used for feature reconstruction, with the same amount allocated to each task: $F_A + F_{AX} = F_X + F_{AX} = F_{max}/2$. However, they vary the overlap $F_{AX}$ from 0 to $F_{max}/2$ in steps of 2. Thus the total number of embedding dimensions $F$ varies from $F_{max}$ down to $F_{max}/2$, and as $F$ decreases, $F_{AX}$ increases.

Now, for a given overlapping model, we define a reference model which has the same total number of embedding dimensions, but without overlap: $F_{AX}^{ref} = 0$ and $F_A^{ref} = F_X^{ref} = F/2$ (which is why we vary $F_{AX}$ in steps of 2, keeping $F$ even). Note that while the reference model has the same information bottleneck as the overlapping model, it has fewer trainable parameters in the decoder, since the number of dimensions feeding each decoder decreases as $F$ decreases. Nevertheless, this is not a problem for our measures, since we mainly look at the behaviour of a given model for different values of $\alpha$ (i.e. the feature-network correlation parameter).

For our calculations (if not noted otherwise) we use synthetic networks made of 100 clusters, and set the maximum number of embedding dimensions $F_{max}$ to 20. For all models, we set the intermediate layer in the encoder and the two intermediate layers in the decoder to a common output dimension, and fix the internal number of samples used for loss estimation. We train our models for 1000 epochs using the Adam optimiser [54] with a learning rate of 0.01 (following [30]), after initialising weights following [55]. For each combination of $\alpha$ and $F_{AX}$, the training of the overlapping and reference models is repeated 20 times on independent featured networks.

Since the size of our synthetic data is constant, and we average training results over independently sampled data sets, we can meaningfully compare the averaged training losses of models with different parameters. We therefore take the average best training loss of a model to be our main measure, indicating the capacity to reconstruct an input data set for a given information bottleneck and embedding overlap.

4.3 Advantages of overlap

Figure 2: Absolute training loss values of overlapping and reference models. The curve colours represent the total number of embedding dimensions $F$, and the x axis corresponds to the feature-network correlation $\alpha$. The top row shows the total loss, the middle row the adjacency matrix reconstruction loss, and the bottom row the feature reconstruction loss. The left column shows overlapping models, and the right column shows the reference non-overlapping models.

4.3.1 Absolute loss values

Figure 2 shows the variation of the best training loss (total loss, adjacency reconstruction loss, and feature reconstruction loss) for both overlapping and reference models, with $\alpha$ ranging from 0 to 1 and $F$ decreasing from 20 to 10 in steps of 2. One curve in these plots represents the variation in losses of a model with fixed $F$ for datasets with increasing correlation between network and features; each point aggregates 20 independent trainings, used to bootstrap 95% confidence intervals.

We first see that all losses, whether for overlapping or reference models, decrease as we move from the uncorrelated scenario ($\alpha = 0$) to the correlated scenario ($\alpha = 1$). This is true despite the fact that the total loss is dominated by the adjacency reconstruction loss, feature reconstruction being the easier task overall. Second, recall that the decoder in a reference model has fewer parameters than its corresponding overlapping model of the same dimensions (except for zero overlap), such that the reference is less powerful and produces higher training losses. The absolute values of the losses for overlapping and reference models are therefore not directly comparable. However, the changes in slopes are meaningful. Indeed, we note that the curve slopes are steeper for models with higher overlap (lower $F$) than for lower overlap (higher $F$), whereas they seem relatively independent of $F$ for the reference models. In other words, as we increase the overlap, our models benefit more from an increase in network-feature correlation than the reference models do.

4.3.2 Relative loss disadvantage

In order to assess this trend more reliably, we examine losses relative to the maximum-embedding models. Figure 3 plots the loss disadvantage that overlapping and reference models have compared to their corresponding model with $F = F_{max} = 20$, that is, $(\mathcal{L}_F - \mathcal{L}_{F_{max}}) / \mathcal{L}_{F_{max}}$. We call this the relative loss disadvantage of a model. In this plot, the height of a curve thus represents the magnitude of the decrease in performance of a model relative to the model with maximum embedding size. Note that, for both the overlapping and the reference models, moving along one of the curves does not change the number of trainable parameters in the model.

As the correlation between network and features increases, we see that the relative loss disadvantage decreases in overlapping models, and that the effect is stronger for higher overlaps. In other words, when the network and features are correlated, the overlap captures this joint information and compensates for the lower total number of dimensions (compared to $F_{max}$): the model achieves a better performance than when network and features are more independent. Strikingly, these curves are flat for the reference models, indicating no variation in relative loss disadvantage with varying network-feature correlation. This confirms that the new measure successfully controls for the baseline decrease of absolute loss values observed in Figure 2 when the network-feature correlation increases. Our architecture is therefore capable of capturing and taking advantage of some of the correlation by leveraging the overlap dimensions of the embeddings.

Finally, note that for high overlaps the feature reconstruction loss actually increases a little as $\alpha$ grows. This behaviour is consistent with the fact that the total loss is dominated by the adjacency matrix loss (the harder task). In this case, it seems that the total loss is improved more by exploiting the gain of optimising for adjacency matrix reconstruction, at the small cost of a slightly worse feature reconstruction, than by decreasing both adjacency matrix and feature losses together. If desired, this trade-off could be controlled using a gradient normalisation scheme such as [52].

Figure 3: Relative loss disadvantage for overlapping and reference models. The curve colours represent the total number of embedding dimensions $F$, and the x axis corresponds to the feature-network correlation $\alpha$. The top row shows the total loss, the middle row the adjacency matrix reconstruction loss, and the bottom row the feature reconstruction loss. The left column shows overlapping models, and the right column shows the reference non-overlapping models. See the main text for a discussion.

4.4 Standard benchmarks

Finally, we compare the performance of our architecture to other well-known embedding methods, namely spectral clustering (SC) [56], DeepWalk (DW) [41], and the vanilla non-variational and variational Graph Auto-Encoders (GAE and VGAE) [30], on the link prediction task introduced by [30] for the Cora, CiteSeer and PubMed datasets [57, 58]. Note that neither SC nor DW supports feature information as an input.

The Cora and CiteSeer datasets are citation networks made of respectively 2708 and 3312 machine learning articles, each assigned to a small number of document classes (7 for Cora, 6 for CiteSeer), with a bag-of-words feature vector for each article (respectively 1433 and 3703 words). The PubMed network is made of 19717 diabetes-related articles from the PubMed database, each assigned to one of three classes, with article feature vectors containing term frequency-inverse document frequency (TF/IDF) scores for 500 words. The link prediction task consists in training a model on a version of the datasets where part of the edges has been removed, while node features are left intact. A test set is formed by randomly sampling 15% of the edges, combined with the same number of random disconnected pairs (non-edges), and the model is trained on the remaining dataset, where 15% of the real edges are missing.
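The edge-removal protocol can be sketched as follows (a simplified version: the published benchmark also separates out a validation set, which we omit here):

```python
import numpy as np

def link_prediction_split(A, test_frac=0.15, rng=None):
    """Hide a fraction of the edges for testing and sample an equal number
    of disconnected pairs (non-edges); node features stay untouched."""
    rng = rng or np.random.default_rng(0)
    n = A.shape[0]
    edges = np.argwhere(np.triu(A, k=1) > 0)
    rng.shuffle(edges)
    n_test = int(test_frac * len(edges))
    test_edges = edges[:n_test]
    A_train = A.copy()
    A_train[test_edges[:, 0], test_edges[:, 1]] = 0
    A_train[test_edges[:, 1], test_edges[:, 0]] = 0
    # Sample as many disconnected pairs as hidden edges.
    non_edges = []
    while len(non_edges) < n_test:
        i, j = rng.integers(0, n, size=2)
        if i != j and A[i, j] == 0:
            non_edges.append((i, j))
    return A_train, test_edges, np.array(non_edges)
```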

Method        Cora AUC      Cora AP       CiteSeer AUC  CiteSeer AP   PubMed AUC    PubMed AP
SC            84.6 ± 0.01   88.5 ± 0.00   80.5 ± 0.01   85.0 ± 0.01   84.2 ± 0.02   87.8 ± 0.01
DW            83.1 ± 0.01   85.0 ± 0.00   80.5 ± 0.02   83.6 ± 0.01   84.4 ± 0.00   84.1 ± 0.00
GAE           91.0 ± 0.02   92.0 ± 0.03   89.5 ± 0.04   89.9 ± 0.05   96.4 ± 0.00   96.5 ± 0.00
VGAE          91.4 ± 0.01   92.6 ± 0.01   90.8 ± 0.02   92.0 ± 0.02   94.4 ± 0.02   94.7 ± 0.02
AN2VEC-0      89.5 ± 0.01   90.6 ± 0.01   91.2 ± 0.01   91.5 ± 0.02   91.8 ± 0.01   93.2 ± 0.01
AN2VEC-16     89.4 ± 0.01   90.2 ± 0.01   91.1 ± 0.01   91.3 ± 0.02   92.1 ± 0.01   92.8 ± 0.01
AN2VEC-S-0    92.9 ± 0.01   93.4 ± 0.01   94.3 ± 0.01   94.8 ± 0.01   95.1 ± 0.01   95.4 ± 0.01
AN2VEC-S-16   93.0 ± 0.01   93.5 ± 0.00   94.9 ± 0.00   95.1 ± 0.00   93.1 ± 0.01   93.1 ± 0.01

Table 1: Link prediction task in citation networks. SC, DW, GAE and VGAE values are from [30]. Error values indicate the sample standard deviation.

We pick hyperparameters such that the restriction of our model to VGAE matches the hyperparameters used by [30]: that is, a 32-dimension intermediate layer in the encoder and in the two intermediate layers of the decoder, and 16 embedding dimensions for each reconstruction task ($F_A + F_{AX} = F_X + F_{AX} = 16$). We call the zero-overlap and the full-overlap versions of this model AN2VEC-0 and AN2VEC-16 respectively. In addition, we test a variant of these models with a shallow adjacency matrix decoder, consisting of a direct inner product between node embeddings, while keeping the two dense layers for feature decoding. Formally: $\tilde{A}_{ij} \sim \mathrm{Ber}\!\left(\mathrm{sigmoid}(\xi_i^A \cdot \xi_j^A)\right)$. This modified overlapping architecture can be seen as simply adding the feature decoding and embedding overlap mechanics to the vanilla VGAE. Consistently, we call the zero-overlap and full-overlap versions AN2VEC-S-0 and AN2VEC-S-16.
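For reference, the shallow adjacency decoder reduces to a direct inner product between the (adjacency-restricted) embeddings, as in this short sketch:

```python
import numpy as np

def shallow_adjacency_decoder(xi_adj):
    """Link probabilities from a direct inner product between node
    embeddings, as in the vanilla VGAE decoder."""
    return 1.0 / (1.0 + np.exp(-(xi_adj @ xi_adj.T)))
```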

We follow the test procedure laid out by [30]: we train for 200 epochs using the Adam optimiser [54] with a learning rate of 0.01, initialise weights following [55], and repeat each condition 10 times. The $\mu$ parameter of each node's embedding is then used for link prediction (i.e. $\mu$ is fed through the decoder directly, without sampling), for which we report the area under the ROC curve and the average precision scores in Table 1.

Figure 4: Cora embeddings created by AN2VEC-S-16, downscaled to 2D using multidimensional scaling. Node colours correspond to document classes, while network links are shown in grey.

We reasoned that AN2VEC-0 and AN2VEC-16 should perform somewhat worse than VGAE, as these models are required to reconstruct an additional output which is not directly useful to the link prediction task at hand. Results confirmed our intuition. However, we found that the shallow decoder models AN2VEC-S-0 and AN2VEC-S-16 perform consistently better than the vanilla VGAE (for Cora and CiteSeer) and than their deep counterparts AN2VEC-0 and AN2VEC-16 (for all datasets). As neither AN2VEC-0 nor AN2VEC-16 exhibited over-fitting, this behaviour is surprising and warrants further exploration, which is beyond the scope of this paper (in particular, it may be specific to the link prediction task). Nonetheless, the higher performance of AN2VEC-S-0 and AN2VEC-S-16 over the vanilla VGAE on Cora and CiteSeer confirms that including feature reconstruction in the constraints of node embeddings can increase link prediction performance when features and structure are not independent, consistently with [33, 35, 34]. An illustration of the embeddings produced by AN2VEC-S-16 on Cora is shown in Figure 4.

On the other hand, the performance of AN2VEC-S-0 on PubMed is on par with GAE and VGAE, while AN2VEC-S-16 performs slightly worse. The fact that lower-overlap models perform better on this dataset indicates that features and structure are less congruent here than in Cora or CiteSeer (again consistent with the comparisons found in [34]). Despite this, the embeddings produced by our AN2VEC-S-16 method encode both the network structure and the node features, and can therefore be used for downstream tasks involving both types of information.

5 Conclusions

In this work, we proposed an attributed network embedding method based on the combination of Graph Convolutional Networks and Variational Autoencoders. Beyond its novelty, this architecture is able to jointly consider network information and node attributes for the embedding of nodes. We further introduced a control parameter able to regulate the amount of information allocated to the reconstruction of the network, of the features, or of both. We showed how shallow versions of the proposed model outperform the corresponding non-interacting reference embeddings on given benchmarks, and demonstrated how this overlap parameter consistently captures joint network-feature information when the two are correlated.

Our method therefore opens several new lines of research and applications in fields where attributed networks are relevant. Examples range from the coevolution of social networks and the dynamical processes taking place on them, such as language change or the spread of trending information, to potential applications in network medicine or systems biology. We expect our method to help yield a deeper understanding of the relationship between node features and structure, in order to better predict network evolution and link creation mechanisms. In particular, it should help identify nodes with special roles in the network by clarifying whether their importance has a structural or a feature origin.

In this paper, our aim was to ground our method and demonstrate its usefulness on small but controllable featured networks. Its evaluation on more complex synthetic datasets, in particular with richer feature generation schemes, as well as its application to real datasets, are therefore our immediate future goals.

List of abbreviations

VAE
Variational Autoencoder

GCN
Graph Convolutional Network

GCN-VAE
Graph Convolutional Variational Autoencoder

GC
Graph Convolutional layer

MLP
Multi-layer perceptron

Ber
Bernoulli random variable

ReLU
Rectified Linear Unit

KL
Kullback-Leibler divergence

AN2VEC
Attributed Network to Vector model:

  • AN2VEC-0: zero overlap model

  • AN2VEC-16: 16-dimensions overlap model

  • AN2VEC-S-0: zero overlap model with shallow adjacency decoder

  • AN2VEC-S-16: 16-dimensions overlap model with shallow adjacency decoder

SC
Spectral Clustering

DW
DeepWalk embedding model

DANE
Deep Attributed Network Embedding

GAE
Graph Autoencoder

VGAE
Variational Graph Autoencoder

TF/IDF
Term-frequency-inverse-document-frequency

AUC
Area under the ROC curve

AP
Average precision

ROC
Receiver operating characteristic

Availability of data and material

The synthetic datasets generated for this work are stochastically created by our implementation, available at github.com/ixxi-dante/an2vec.

The datasets used for standard benchmarking (Cora, CiteSeer, and PubMed) are available at linqs.soe.ucsc.edu/data.

Competing interests

The authors declare that they have no competing interests.

Funding

This project was supported by the LIAISON Inria-PRE project, the SoSweet ANR project (ANR-15-CE38-0011), and the ACADEMICS project financed by IDEX LYON.

Authors’ contributions

MK, JLA and SL participated equally in designing and developing the project, and in writing the paper. SL implemented the model and experiments. SL and JLA developed and implemented the analysis of the results.

Acknowledgements

We thank E. Fleury, J-Ph. Magué, D. Seddah, and E. De La Clergerie for constructive discussions and for their advice on data management and analysis. Computations for this work were made using the experimental GPU platform at the Centre Blaise Pascal of ENS Lyon, relying on the SIDUS infrastructure provided by E. Quemener.

References