Attribute2vec: Deep Network Embedding Through Multi-Filtering GCN

We present a multi-filtering Graph Convolutional Network (GCN) framework for the network embedding task. It uses multiple local GCN filters for feature extraction in every propagation layer. We show that this approach can capture different important aspects of node features compared with existing attribute-embedding-based methods. We also show that the multi-filtering GCN approach achieves significant improvement over baseline methods when training data is limited. Finally, we perform extensive empirical experiments and demonstrate the benefit of using multiple filters rather than a single filter, as well as over most existing network embedding methods, on both the link prediction and node classification tasks.




1 Introduction

Network embedding aims to embed network information into a low-dimensional vector space so that standard machine learning algorithms can be applied directly to investigate latent features of networks. To ensure the quality of various network mining tasks, the embedded vector representation should be representative: it needs to encode as much information about a node or an edge as possible, including its context as well as its structural information. Such a vector should also be concise: its dimension should not be too large, for both memory and computational efficiency. Obtaining an appropriate embedding has become more challenging as both network sizes and the variety of node attributes grow rapidly. It is therefore crucial to learn a representation that embeds diverse node attributes as well as the structural properties of a network.

Among different network embedding methods, pioneering works such as DeepWalk [Perozzi et al.2014] and node2vec [Grover and Leskovec2016] laid the foundation for embedding frameworks that incorporate deep learning and the skip-gram architecture [Mikolov et al.2013]. Such works use random walks to simulate the context of a center node in the network. Following this architecture, subsequent techniques such as Metapath2vec [Dong et al.2017] also address the heterogeneous network embedding problem. In most of these methods, various types of nodes are used to generate random walks and are incorporated into the objective function. Methods of this kind have flexible strategies for generating random walks, and perform well in capturing neighborhood similarities and detecting community membership. However, most of them focus only on encoding network structure, taking the whole network structure as input to generate embedding vectors, while paying little attention to node attributes that also carry important information. Moreover, most of these techniques are transductive: they operate on a fixed graph and thus do not generalize to unseen data.

Node attributes in network embedding have been proven to be important [Huang et al.2017]; however, the embedding of networks whose nodes are affiliated with various attributes is still in its early stage due to the complexity of attributes [Zhang et al.2018]. A few works combine the network structure and node attributes to build their embeddings, such as LINE [Tang et al.2015] and AANE [Huang et al.2017], and different methods have been adopted to extract node attribute information. Most of these models are limited by their transductive nature: the input to the system has to be the whole fixed network with node attributes, which causes the model to generalize poorly from the training data to unseen data. Another limitation is scalability: such methods perform computations on matrices that scale linearly with the graph size.

To address the aforementioned issues, the recently proposed GraphSAGE model [Hamilton et al.2017a] uses local aggregation functions for embedding computation, which largely simplifies the whole-graph propagation procedure mentioned above and also achieves inductivity. Inspired by this framework, we build MF-GCN, which adopts a multi-filtering local GCN model as the aggregation function and incorporates a skip-gram with negative sampling model [Levy and Goldberg2014] together with a supervised loss on the node labels as the objective function. We demonstrate that with the multi-filtering GCN architecture, the system can capture diverse aspects of node attributes. Two network learning tasks, node classification and link prediction, are conducted on four benchmark datasets to evaluate the performance of our algorithm against several other state-of-the-art approaches. The contributions of this research are summarized below:

  1. We introduce a novel approach called MF-GCN based on multi-filter GCN aggregator for extracting different aspects of node attributes. MF-GCN outperforms state-of-the-art methods in capturing diverse node attributes, along with the network structure information.

  2. We show that MF-GCN can achieve significantly better performance than other baseline methods when the training data is limited.

  3. We empirically demonstrate the impact of using different numbers of GCN filters and suggest optimal numbers of filters for different learning tasks.

  4. We conduct various experiments and show that our MF-GCN model has superior performance over most of the baseline methods on link prediction and node classification tasks.

2 Literature Review

Representation learning on network data has mainly been about performing an embedding operation, which involves an encoding and a decoding part. The encoder generates an embedding, and the decoder serves as the objective function used to update the model's learnable parameters. Techniques for this problem usually follow three tracks: skip-gram-based models, matrix factorization models, and graph neural networks. Below we review related research in these three tracks.

Skip-gram-based models define a framework for embedding by incorporating the information of relevant nodes that appear within a certain number of steps of a random walk from the target node. Many existing methods adopt this model. For example, DeepWalk [Perozzi et al.2014] uses a uniform sampling strategy to perform random walks, then uses the skip-gram model with negative sampling to compute the objective function; however, the uniform sampling strategy and loosely defined objective function cannot preserve the network structure well. Node2vec [Grover and Leskovec2016] combines breadth-first and depth-first search strategies, so it preserves more first- and second-order proximity, but it still lacks a clear objective function for preserving global network structure. LINE [Tang et al.2015] defines an objective function that minimizes the KL-divergence between the empirical distribution and the objective distribution; it thus preserves first- and second-order proximity well, but its local sampling strategy still misses global network structure. HARP [Chen et al.2018a] shows the improvement of applying a graph coarsening strategy to different skip-gram approaches.

Matrix factorization methods perform embedding by factorizing a similarity matrix and using the factorized vectors as the embedding representations. This stream of research has mostly focused on designing the similarity matrix to factorize; the operation is typically an eigendecomposition or a singular value decomposition. Representative work in this area includes HOPE [Ou et al.2016] and NetSMF [Qiu et al.2019].

Graph neural networks define a framework that uses graph convolution to incorporate neighborhood information into the encoding layer. This line of work can be categorized into two branches: spectral-based filtering and spatial-based filtering. Spectral-based filtering methods perform graph convolution and define the propagation function in the spectral domain. ChebNet [Defferrard et al.2016] follows spectral graph theory [Hammond et al.2011] to define a spectral graph convolution using a Chebyshev polynomial approximation of the convolution kernel. The Graph Convolutional Network (GCN) [Kipf and Welling2016] later simplifies it by using only a first-order approximation. FastGCN [Chen et al.2018b] uses a sampling technique to avoid whole-graph convolution. Following the GCN idea, AGCN [Li et al.2018] adopts its structure but learns different metrics to approximate a Gaussian kernel. Spatial-based filtering methods perform graph convolution in the spatial domain. The original work [Scarselli et al.2008] defines a framework that recursively applies the propagation function until an equilibrium point is reached. GGNN [Lin et al.2015] improves this model by introducing a gated recurrent unit. Various techniques have since been proposed, including efficient sampling strategies [Chen et al.2018b], improved inductive ability [Hamilton et al.2017b], and convolution operations defined directly on the spatial domain [Niepert et al.2016]. The graph attention mechanism [Veličković et al.2017] further improves these models by assigning attention coefficients that capture the relative importance of neighborhood nodes.

3 Preliminary Requirement and Optimization Model

In this section, we present the prerequisite models and techniques for our method. We first define the problem, then review the Graph Convolutional Neural Network model and its local extended version that is crucial to our approach. Later the optimization model used in our method to update the system parameters is derived.

3.1 Problem Definition

Generally, a network is defined as $G = (V, E, X, A)$, where $V$ is a finite set of vertices ($|V| = n$, the number of nodes); $E$ is the set of edges; $X$ is the set of attributes associated with each node; and $A$ is the weighted adjacency matrix, where each entry $A_{ij}$ represents the weight of an edge. In this work, we focus on unweighted homogeneous network embedding, so $A_{ij} = 1$ if there is an edge connecting vertices $v_i$ and $v_j$, and $A_{ij} = 0$ otherwise. We define $f: V \to \mathbb{R}^d$ as an embedding function that maps each node to a latent representation. For every center node $v$, let $C(v)$ denote its context nodes generated from a random walk sampling strategy, and let $N(v)$ denote the one-hop neighborhood of $v$. The problem we aim to solve is to find the best $f$ to generate embedding vectors for nodes that can support various machine learning tasks.

3.2 GCN and Its Local Version

In GCN, the graph convolution operation is used as the layer propagation function, which can be written as:

$$H^{(l+1)} = \sigma\left(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\right) \qquad (1)$$

where $H^{(l)}$ is the convolved embedding feature matrix (the intermediate hidden representation), $\tilde{A} = A + I$ is the graph adjacency matrix plus an identity matrix, $\tilde{D}$ is the diagonal degree matrix with $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$, and $H^{(0)} = X \in \mathbb{R}^{n \times C}$ is the signal with $C$ input channels (a $C$-dimensional attribute vector for each node). $W^{(l)}$ is the learnable matrix of filter parameters, $n$ is the total number of nodes in the graph, and $d$ is the dimension of the embedding vectors.
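As an illustration, the propagation rule of Equation 1 can be sketched in a few lines of numpy; the function name `gcn_layer` and the toy graph are our own, not part of the paper:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: relu(D^{-1/2} (A+I) D^{-1/2} H W).

    A: (n, n) adjacency matrix, H: (n, c) node features,
    W: (c, d) learnable filter parameters.
    """
    n = A.shape[0]
    A_tilde = A + np.eye(n)                     # add self-loops
    d = A_tilde.sum(axis=1)                     # degrees of the self-looped graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # D^{-1/2}
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # symmetric normalization
    return np.maximum(A_hat @ H @ W, 0.0)       # ReLU non-linearity

# Toy 3-node path graph with 2-dimensional features and an identity filter.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
H = np.array([[1, 0], [0, 1], [1, 1]], float)
W = np.eye(2)
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 2)
```

Each output row mixes a node's features with those of its neighbors, weighted by the symmetric normalization.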

Local GCN [Hamilton et al.2017a] is an extended local version of GCN. Since $\tilde{A}$ in Equation 1 is the adjacency matrix and $\tilde{D}$ is the diagonal degree matrix, the formula can be treated as a normalization function in which each entry of the output represents a node's state propagated from all of its neighbor nodes. Equation 2 expresses this per-node operation:

$$h_v^{(l+1)} = \sigma\left(W^{(l)} \cdot \operatorname{mean}\left(\{h_v^{(l)}\} \cup \{h_u^{(l)}, \forall u \in N(v)\}\right)\right) \qquad (2)$$

where $h_v^{(l)}$ is the hidden representation of node $v$ in layer $l$, $W^{(l)}$ contains the weight parameters of layer $l$, $d$ is the dimension of the embedding vector, and $N(v)$ is the neighborhood of the center node $v$. This is a per-neighborhood normalization of the parameterized propagation layer. In the current work, we extend this to use multiple local GCN filters as the propagation function.
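A minimal per-node sketch of this local aggregator (the mean-based variant used by GraphSAGE-GCN; the function name and toy inputs are illustrative assumptions):

```python
import numpy as np

def local_gcn_aggregate(h_v, neighbor_hs, W):
    """Per-node local GCN aggregator: mean over the node and its one-hop
    neighbours, followed by a linear map and ReLU.

    h_v: (c,) centre-node representation, neighbor_hs: (k, c) neighbour
    representations, W: (c, d) layer weights.
    """
    stacked = np.vstack([h_v[None, :], neighbor_hs])  # include the node itself
    mean = stacked.mean(axis=0)                       # per-neighborhood normalization
    return np.maximum(mean @ W, 0.0)

h_v = np.array([1.0, 2.0])
nbrs = np.array([[3.0, 0.0], [0.0, 4.0]])
W = np.eye(2)
print(local_gcn_aggregate(h_v, nbrs, W))  # ReLU(mean of rows) = [4/3, 2.0]
```

Unlike the matrix form of Equation 1, this version only ever touches a node's own neighborhood, which is what makes the model local and inductive.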

3.3 Optimization Model

We follow the transductive optimization model from [Yang et al.2016], aiming to preserve the network structure information as well as predict the correct node labels. To preserve structure information, we apply the skip-gram model, which seeks to maximize the probability of observing the context nodes $C(v)$ of a center node $v$ given its embedding representation:

$$\max_{f} \sum_{v \in V} \log \Pr\left(C(v) \mid f(v)\right) \qquad (3)$$
We follow the conditional independence assumption [Grover and Leskovec2016], under which the probability of observing one context node is independent of observing the other context nodes given the center node:

$$\Pr\left(C(v) \mid f(v)\right) = \prod_{c_i \in C(v)} \Pr\left(c_i \mid f(v)\right) \qquad (4)$$
where $c_i$ is a context node of the center node $v$. We maximize the objective function (Equation 3) by minimizing its negative log form:

$$L_{SGNN} = -\sum_{v \in V} \sum_{c_i \in C(v)} \log \Pr\left(c_i \mid f(v)\right) \qquad (5)$$
where $\Pr(c_i \mid f(v))$ is the prediction probability. Let $z_{c_i}$ and $z_v$ denote the embedding vector representations of the context node and the associated center node mapped by $f$; we model this probability as follows:

$$\Pr\left(c_i \mid f(v)\right) = \frac{\exp(z_{c_i} \cdot z_v)}{Z_v}, \qquad Z_v = \sum_{u \in V} \exp(z_u \cdot z_v) \qquad (6)$$
where $Z_v$ is the normalization factor that sums over all nodes, and $z_{c_i} \cdot z_v$ is the inner product between $z_{c_i}$ and $z_v$ that represents the similarity of the two embedding representations. With this assumption, Equation 5 can be simplified to:

$$L_{SGNN} = \sum_{v \in V} \sum_{c_i \in C(v)} \left(\log Z_v - z_{c_i} \cdot z_v\right) \qquad (7)$$
Numerical computation of $Z_v$ is prohibitive for large graphs with millions of nodes, since the computation grows linearly with the graph size. We therefore adopt negative sampling to approximate the normalization factor, and the objective function becomes:

$$L_{SGNN} = -\sum_{v \in V} \sum_{c_i \in C(v)} \left[\log \sigma(z_{c_i} \cdot z_v) + \sum_{m=1}^{M} \mathbb{E}_{u_m \sim P_n(u)} \log \sigma(-z_{u_m} \cdot z_v)\right] \qquad (8)$$

where $\sigma(x) = 1/(1+e^{-x})$, $M$ is the number of negative samples, and $P_n(u)$ is the negative sampling distribution.
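The negative-sampling objective can be sketched for a single (center, context) pair as follows; the function name `sgnn_loss` is our own, and the L2 normalization reflects the paper's later note that cosine similarity is used:

```python
import numpy as np

def sgnn_loss(z_v, z_c, z_negs):
    """Skip-gram with negative sampling for one (centre, context) pair.

    z_v: centre embedding, z_c: context embedding, z_negs: (M, d) negative
    samples. Vectors are L2-normalized first so inner products are cosines.
    """
    norm = lambda x: x / np.linalg.norm(x, axis=-1, keepdims=True)
    z_v, z_c, z_negs = norm(z_v), norm(z_c), norm(z_negs)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    pos = -np.log(sigmoid(z_c @ z_v))              # pull the context node closer
    neg = -np.log(sigmoid(-(z_negs @ z_v))).sum()  # push negative samples away
    return pos + neg

loss = sgnn_loss(np.array([1.0, 0.0]),
                 np.array([1.0, 0.1]),
                 np.array([[-1.0, 0.0], [0.0, 1.0]]))
print(loss > 0)  # True
```

Minimizing this quantity increases the similarity of center-context pairs while decreasing the similarity to the sampled negatives.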

For the supervised loss of predicting the correct label, we adopt a cross-entropy loss on a softmax layer:

$$L_{sup} = -\sum_{v} \sum_{j=1}^{c} y_{vj} \log \hat{y}_{vj} \qquad (9)$$

where $c$ is the number of classes, $y_{vj}$ is the ground-truth node label, and $\hat{y}_{vj}$ is the predicted label of center node $v$ from the softmax layer.
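For completeness, a numerically stable sketch of the per-node softmax cross-entropy (the helper name is illustrative):

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Cross-entropy on a softmax layer for one node: -log softmax(logits)[label]."""
    z = logits - logits.max()                # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())  # log softmax
    return -log_probs[label]

# With two equal logits the softmax probability is 0.5, so the loss is log 2.
print(float(softmax_cross_entropy(np.array([0.0, 0.0]), 0)))
```

Averaging this over the labeled nodes gives the supervised term of the final objective.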

4 Methodology

In this section, we present the framework of our MF-GCN and the whole system architecture. We first filter node attributes with multiple local GCN filters, then concatenate the filtered hidden representation vectors to form one hidden layer; each succeeding hidden layer follows the same procedure.

4.1 Multiple-Filtering GCN Aggregator

Since local GCN can propagate information from layer to layer, it is intuitive to assume that multiple such local GCN filters can propagate information about different aspects of the node attributes. We therefore propose a multi-filtering GCN architecture: each single local GCN filter is one attribute extractor, and multiple such filters form one layer of feature extraction. Figure 1 visualizes this operation. For each hidden layer, we use multiple distinct local GCN filters (each with its own parameters) to operate on the output of the previous hidden layer; for the first layer, the operation is performed on the input node attribute channels. This operation is shown below:

$$a_{v,i}^{(k)} = \sigma\left(W_i^{(k)} \cdot \operatorname{mean}\left(\{h_v^{(k-1)}\} \cup \{h_u^{(k-1)}, \forall u \in N(v)\}\right)\right) \qquad (10)$$

where $h_v^{(0)} = x_v$ is the input attribute channel vector for node $v$, $a_{v,i}^{(k)}$ is the aggregation output from the $i$-th local GCN aggregator, $W_i^{(k)}$ contains the learnable parameters of the $i$-th local GCN aggregation function, and $\sigma$ is the non-linear activation function. Concatenating these aggregated representation vectors from the different local GCNs to form the final aggregated feature vector, we define our MF-GCN aggregation function as below:

$$a_v^{(k)} = \operatorname{CONCAT}\left(a_{v,1}^{(k)}, a_{v,2}^{(k)}, \ldots, a_{v,F}^{(k)}\right) \qquad (11)$$

where $a_v^{(k)}$ represents the aggregation output for node $v$ in the $k$-th layer, and $F$ is the total number of local GCN filters. Following the forward propagation algorithm in [Hamilton et al.2017a], the final generated embedding vector is derived as:

$$h_v^{(k)} = \sigma\left(W^{(k)} \cdot \operatorname{CONCAT}\left(h_v^{(k-1)}, a_v^{(k)}\right)\right) \qquad (12)$$

where $h_v^{(k)}$ represents the hidden embedding representation in the $k$-th layer, $W^{(k)}$ is the parameter of the fully connected encoding layer, and $\sigma$ is the non-linear activation function.
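The multi-filter aggregation step can be sketched as follows; the function name and the 25-filters-of-size-16 configuration mirror the architecture described in Section 4.4, while the random inputs are illustrative:

```python
import numpy as np

def mf_gcn_layer(h_v, neighbor_hs, filters):
    """MF-GCN aggregation: run F distinct local GCN filters over the same
    neighbourhood and concatenate their outputs.

    h_v: (c,) centre features, neighbor_hs: (k, c) neighbour features,
    filters: list of (c, d) weight matrices, one per local GCN filter.
    """
    stacked = np.vstack([h_v[None, :], neighbor_hs])
    mean = stacked.mean(axis=0)                           # shared normalized propagation
    parts = [np.maximum(mean @ W, 0.0) for W in filters]  # one output per filter
    return np.concatenate(parts)                          # (F * d,) aggregated vector

rng = np.random.default_rng(0)
filters = [rng.standard_normal((8, 16)) for _ in range(25)]  # 25 filters of size 16
out = mf_gcn_layer(rng.standard_normal(8), rng.standard_normal((5, 8)), filters)
print(out.shape)  # (400,)
```

Because every filter has its own parameters, each slice of the concatenated vector can specialize in a different aspect of the node attributes.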

Figure 1: MF-GCN model architecture ('+' denotes concatenation).

4.2 Network Embedding Framework

We design our network embedding system according to the Graph Auto Encoding framework. Algorithm 1 shows the whole procedure of the MF-GCN embedding generation algorithm we propose. Figure 1 visually shows the whole system structure. The system can be divided into the encoding and decoding parts.

Encoding: In the encoding phase, the input is the node attribute vector $x_v$. The MF-GCN embedding generation procedure of Algorithm 1 is performed on the input vector, finally yielding the embedding vector $z_v$.
Decoding: The decoding phase includes two parts: a supervised loss and a skip-gram model with negative sampling (SGNN) loss. These two loss functions are combined to form the final loss function:

$$L = L_{sup} + L_{SGNN} \qquad (13)$$

The supervised part is constructed from a fully connected layer followed by a softmax layer for classifying node labels, and is separate from the SGNN part. The input of both parts is the embedding output representation $z_v$.

4.3 Training procedure

Mini-batch gradient descent is applied in the training process. For each training batch, we randomly select a batch-sized number of nodes as the training center nodes. A random walk is performed to sample the context nodes for each center node. We use the random walk strategy of node2vec, and the random walk length is set to be . For the negative sampling part, we uniformly sample nodes that have no connection to the one-hop neighbors of the center node, nor to the previously sampled context nodes. The number of negative samples is set to 100 for each center node.
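The exclusion rule for negative sampling can be sketched as below; the function name and the dict-of-neighbor-sets graph representation are our own assumptions:

```python
import random

def sample_negatives(graph, center, context, k):
    """Uniformly sample k negatives for a centre node, excluding the node
    itself, its one-hop neighbours, and its sampled context nodes.

    `graph` maps each node to the set of its neighbours.
    """
    forbidden = {center} | set(graph[center]) | set(context)
    candidates = [n for n in graph if n not in forbidden]
    return random.sample(candidates, min(k, len(candidates)))

g = {0: {1, 2}, 1: {0}, 2: {0}, 3: set(), 4: set(), 5: set()}
print(sorted(sample_negatives(g, 0, [1], 3)))  # [3, 4, 5]
```

Excluding neighbors and context nodes keeps the negatives from accidentally penalizing genuinely similar node pairs.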

Training is performed jointly over the two loss functions $L_{sup}$ and $L_{SGNN}$. For the supervised loss, training is straightforward, using a softmax layer with cross-entropy loss. For the SGNN loss, at each training epoch we first propagate the center node and derive its embedding representation $z_v$. After sampling its context nodes, we feed them into the system and derive the embedding representations of all context nodes. We then perform negative sampling and feed the negative samples into the system to obtain their embedding representations. Finally, we calculate the SGNN loss according to Equation 8. Note that we normalize all embedding vectors before taking inner products, since we are measuring their cosine similarities.

Input: Graph $G = (V, E)$; input node attributes $\{x_v, \forall v \in V\}$; network layer depth K; number F of local GCN filters in each layer; initialized weight matrices; non-linear activation function $\sigma$.
Output: Embedding vector representation $z_v$ for all $v \in V$

1:  $h_v^{(0)} \leftarrow x_v$ for all $v \in V$.
2:  while not converged do
3:     sample a mini-batch of center nodes.
4:     sample the corresponding context nodes of these center nodes through the chosen random walk strategy.
5:     perform negative sampling on these center nodes according to the negative sampling distribution $P_n(u)$.
6:     for k=1…K do
7:        compute each local GCN filter output $a_{v,i}^{(k)}$ for every sampled node $v$ (Section 4.1).
8:        concatenate the F filter outputs into the aggregated vector $a_v^{(k)}$.
9:        $h_v^{(k)} \leftarrow \sigma(W^{(k)} \cdot \operatorname{CONCAT}(h_v^{(k-1)}, a_v^{(k)}))$.
10:    end for
11:    compute the loss function $L = L_{sup} + L_{SGNN}$.
12:    compute the gradient of the loss function and update the weight matrices.
13:  end while
14:  return the embedded representations $z_v$
Algorithm 1 MF-GCN embedding generation

4.4 System Architecture

The system architecture is shown in Figure 1. The input to our MF-GCN aggregator is the center node attribute vector together with the attributes of all its neighborhood nodes. We choose a network layer depth of K = 1, since we find that one layer of multi-filtering GCN already achieves good performance. We set the number of local GCN filters to 25 and the filter size to 16. We concatenate these filter outputs to form a 400-dimensional aggregated representation, concatenate again with the center node attributes, and feed the result into a fully connected encoding layer to generate the embedding vector. We set the dimension of the embedding vector to 100. This embedding vector is used as the final feature representation for nodes.

5 Experiments

5.1 Experimental Setup

To empirically evaluate the effectiveness and efficiency of our proposed model, we apply MF-GCN to four public benchmark datasets: Citeseer, Cora, PubMed, and Wiki. Citeseer, Cora, and PubMed are paper citation networks, where each node represents a paper and each edge a citation relationship. Wiki is a Wikipedia hyperlink network, where each node represents a web page and each edge indicates a hyperlink from one page to another. The node attributes of these networks are extracted as bag-of-words representations. Table 1 presents the statistics of the four datasets.

Citeseer Cora PubMed Wiki
# Nodes 3,312 2,708 19,717 2,405
# Edges 4,660 5,278 44,327 17,981
Feature Dimension 3,703 1,433 500 4,973
Node classes 6 7 3 17
Table 1: Statistics of datasets

Baselines We compare our method against several well-known baselines: DeepWalk, node2vec, LINE, role2vec, Raw feature (raw attributes used directly as the embedding representation), GraphSAGE-GCN, GraphSAGE-mean, GraphSAGE-pool, and DANE (Deep Attribute Network Embedding). For sampling context nodes of a center node, we use the random walk strategy of node2vec, and set , . For LINE, we adopt the second-order proximity. Role2vec uses logarithmic binning and node attribute concatenation for its role mapping function. GraphSAGE-GCN is a special case of our MF-GCN model with filter number 1. For GraphSAGE-mean and GraphSAGE-pool, we implement the algorithms according to the original paper and set the fixed neighborhood sampling number to 20. We set the latent representation dimension to 100 for all compared methods. The architecture of our MF-GCN model is the same as described in Section 4.4.

5.2 Link Prediction

Our first evaluation task is link prediction, on which we test the models on all four benchmark datasets. Specifically, for each dataset, we randomly remove of the edges while keeping the graph connected, and train the model on the remaining graph. For training, since we cannot access node labels, we use only the SGNN loss as the cost function. For testing, we use all removed edges as positive testing edges, and select an equal number of node pairs with no connecting edge as negative testing edges; we then test the models on predicting the positive testing edges as well as detecting the non-existent edges. The AUC scores of the different methods, shown in Table 2, reflect how well each model performs. From the experimental results, MF-GCN generally performs best among all methods.
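The AUC criterion used here can be sketched directly from its pairwise-comparison definition; the helper name and the cosine-scoring assumption are our own:

```python
import numpy as np

def auc_from_scores(pos_scores, neg_scores):
    """AUC for link prediction: the probability that a held-out (positive)
    edge scores higher than a sampled non-edge, with ties counting 0.5.
    A natural edge score is the cosine similarity of the endpoint embeddings.
    """
    pos, neg = np.asarray(pos_scores, float), np.asarray(neg_scores, float)
    wins = (pos[:, None] > neg[None, :]).sum()          # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()         # ties count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

print(auc_from_scores([0.9, 0.8], [0.1, 0.2]))  # 1.0
```

An AUC of 0.5 corresponds to random scoring, while 1.0 means every held-out edge outranks every non-edge.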

Model Citeseer Cora PubMed Wiki
DeepWalk 0.771 0.734 0.857 0.747
Node2vec 0.721 0.726 0.868 0.761
LINE 0.772 0.723 0.871 0.728
Role2vec 0.921 0.896 0.898 0.848
GraphSAGE-mean 0.917 0.859 0.936 0.869
GraphSAGE-GCN 0.907 0.892 0.901 0.880
GraphSAGE-pool 0.937 0.891 0.932 0.875
DANE 0.914 0.844 0.927 0.821
MF-GCN 0.924 0.913 0.944 0.893
Table 2: AUC scores for link prediction on the four datasets

5.3 Node Classification

Our second evaluation task is node classification. We separately select 10%, 30%, and 50% of the nodes from the network as three training sets, then test the classification accuracy on the remaining nodes. The micro-F1 score is used as the performance measurement. The results are shown in Tables 3, 4, 5, and 6, respectively. MF-GCN shows significant improvement over all the baselines; especially when the training data is limited, our MF-GCN greatly outperforms the others.
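For single-label multi-class prediction, the micro-F1 used here reduces to a simple count; the helper below is an illustrative sketch, not the paper's code:

```python
def micro_f1(y_true, y_pred):
    """Micro-averaged F1 for single-label multi-class predictions.

    In this setting every wrong prediction adds one false positive (for the
    predicted class) and one false negative (for the true class), so
    micro-F1 coincides with plain accuracy.
    """
    tp = sum(t == p for t, p in zip(y_true, y_pred))
    fp = fn = len(y_true) - tp
    return 2 * tp / (2 * tp + fp + fn)

print(micro_f1([0, 1, 2, 2], [0, 1, 2, 1]))  # 0.75
```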

Model 10% 30% 50%
DeepWalk 0.368 0.508 0.575
Node2vec 0.373 0.507 0.604
LINE 0.389 0.517 0.574
Role2vec 0.518 0.659 0.699
Raw Feature 0.551 0.693 0.705
GraphSAGE-mean 0.627 0.698 0.699
GraphSAGE-GCN 0.647 0.696 0.703
GraphSAGE-pool 0.623 0.706 0.716
DANE 0.546 0.703 0.722
MF-GCN 0.686 0.713 0.746
Table 3: F1 score for node classification on Citeseer
Model 10% 30% 50%
DeepWalk 0.317 0.694 0.773
Node2vec 0.338 0.683 0.794
LINE 0.309 0.697 0.784
Role2vec 0.420 0.635 0.715
Raw Feature 0.369 0.652 0.734
GraphSAGE-mean 0.379 0.793 0.796
GraphSAGE-GCN 0.406 0.772 0.798
GraphSAGE-pool 0.439 0.778 0.788
DANE 0.502 0.787 0.804
MF-GCN 0.578 0.814 0.815
Table 4: F1 score for node classification on Cora
Model 10% 30% 50%
DeepWalk 0.582 0.709 0.725
Node2vec 0.593 0.705 0.727
LINE 0.566 0.719 0.731
Role2vec 0.616 0.716 0.733
Raw Feature 0.637 0.709 0.724
GraphSAGE-mean 0.739 0.768 0.778
GraphSAGE-GCN 0.679 0.742 0.773
GraphSAGE-pool 0.671 0.749 0.764
DANE 0.711 0.758 0.801
MF-GCN 0.758 0.771 0.813
Table 5: F1 score for node classification on PubMed
Model 10% 30% 50%
DeepWalk 0.148 0.428 0.506
Node2vec 0.156 0.413 0.527
LINE 0.158 0.424 0.522
Role2vec 0.397 0.562 0.625
Raw Feature 0.211 0.423 0.532
GraphSAGE-mean 0.414 0.466 0.568
GraphSAGE-GCN 0.425 0.552 0.601
GraphSAGE-pool 0.432 0.558 0.560
DANE 0.434 0.567 0.604
MF-GCN 0.466 0.597 0.644
Table 6: F1 score for node classification on Wiki

5.4 Optimized Filter Size

To show the effect of the number of local GCN filters on the node classification task, we measure the F1 score on Cora test nodes while varying the filter number from 1 to 100 during training. We also record the training time of one iteration under each filter number. Training is performed on a workstation with two RTX 2080 Ti GPUs and an Intel Core i9-9820X CPU. Figure 2 shows the results. The F1 score increases steadily as the filter number grows from 1 to 25; it then drops to 0.78 and remains roughly constant as the filter number keeps increasing. We believe the reason is that, while the filter number is small, adding filters helps capture different aspects of node features, which raises the F1 score; but once the filter number passes a saturation point, the system starts to overfit as the filter number keeps rising. We therefore pick 25 filters for our MF-GCN model to achieve the best performance.

Figure 2: Evaluation of effectiveness on different filter number
Figure 3: 2 dimensional embedding visualizations on Citeseer
Figure 4: 2 dimensional embedding visualizations on Cora
Figure 5: 2 dimensional embedding visualizations on PubMed
Figure 6: 2 dimensional embedding visualizations on Wiki

5.5 2D visualization

To visually present the advantage of our MF-GCN model, we plot the 2-dimensional embedding visualizations of the different methods using t-SNE [Maaten and Hinton2008], as shown in Figures 3, 4, 5, and 6. These plots are generated by performing node embedding on the test nodes from Cora, where part of the nodes are used for training. From the plots, MF-GCN clearly separates the different clusters of nodes better than the other baselines.

6 Conclusion

In this paper, we propose MF-GCN, a novel network embedding approach that extracts different aspects of node features by using multiple local GCN filters. To show the effectiveness of our model, we conduct various experiments. First, we show that MF-GCN improves the AUC score on the link prediction task over many baseline methods. Second, we conduct node classification experiments on four public benchmark datasets and show that the MF-GCN model yields a significant improvement in F1 score over the baseline methods, especially when the training data is limited.

To show the effect of the number of filters, we measure the performance of different filter counts on the node classification task and provide suggestions for choosing the optimal number of filters.

Finally, the embedding results of MF-GCN and the other baseline methods are visualized, demonstrating that our MF-GCN model has superior performance over the other baselines.

For future work, we would like to continue our research on incorporating an attention mechanism that selectively weights important filters.


  • [Chen et al.2018a] Haochen Chen, Bryan Perozzi, Yifan Hu, and Steven Skiena. Harp: Hierarchical representation learning for networks. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
  • [Chen et al.2018b] Jie Chen, Tengfei Ma, and Cao Xiao. Fastgcn: fast learning with graph convolutional networks via importance sampling. arXiv preprint arXiv:1801.10247, 2018.
  • [Defferrard et al.2016] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in neural information processing systems, pages 3844–3852, 2016.
  • [Dong et al.2017] Yuxiao Dong, Nitesh V Chawla, and Ananthram Swami. metapath2vec: Scalable representation learning for heterogeneous networks. In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, pages 135–144. ACM, 2017.
  • [Grover and Leskovec2016] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 855–864. ACM, 2016.
  • [Hamilton et al.2017a] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pages 1024–1034, 2017.
  • [Hamilton et al.2017b] William L Hamilton, Rex Ying, and Jure Leskovec. Representation learning on graphs: Methods and applications. IEEE Data(base) Engineering Bulletin, 40:52–74, 2017.
  • [Hammond et al.2011] David K Hammond, Pierre Vandergheynst, and Rémi Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129–150, 2011.
  • [Huang et al.2017] Xiao Huang, Jundong Li, and Xia Hu. Accelerated attributed network embedding. In Proceedings of the 2017 SIAM international conference on data mining, pages 633–641. SIAM, 2017.
  • [Kipf and Welling2016] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
  • [Levy and Goldberg2014] Omer Levy and Yoav Goldberg. Neural word embedding as implicit matrix factorization. In Advances in neural information processing systems, pages 2177–2185, 2014.
  • [Li et al.2018] Ruoyu Li, Sheng Wang, Feiyun Zhu, and Junzhou Huang. Adaptive graph convolutional neural networks. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
  • [Lin et al.2015] Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. Learning entity and relation embeddings for knowledge graph completion. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
  • [Maaten and Hinton2008] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605, 2008.
  • [Mikolov et al.2013] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
  • [Niepert et al.2016] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In International conference on machine learning, pages 2014–2023, 2016.
  • [Ou et al.2016] Mingdong Ou, Peng Cui, Jian Pei, Ziwei Zhang, and Wenwu Zhu. Asymmetric transitivity preserving graph embedding. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1105–1114. ACM, 2016.
  • [Perozzi et al.2014] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701–710. ACM, 2014.
  • [Qiu et al.2019] Jiezhong Qiu, Yuxiao Dong, Hao Ma, Jian Li, Chi Wang, Kuansan Wang, and Jie Tang. Netsmf: Large-scale network embedding as sparse matrix factorization. In The World Wide Web Conference, pages 1509–1520. ACM, 2019.
  • [Scarselli et al.2008] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2008.
  • [Tang et al.2015] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scale information network embedding. In Proceedings of the 24th international conference on world wide web, pages 1067–1077. International World Wide Web Conferences Steering Committee, 2015.
  • [Veličković et al.2017] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
  • [Yang et al.2016] Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. arXiv preprint arXiv:1603.08861, 2016.
  • [Zhang et al.2018] Zhen Zhang, Hongxia Yang, Jiajun Bu, Sheng Zhou, Pinggang Yu, Jianwei Zhang, Martin Ester, and Can Wang. Anrl: Attributed network representation learning via deep neural networks. In IJCAI, volume 18, pages 3155–3161, 2018.