1 Introduction
Network embedding aims to map network information into a low-dimensional vector space so that standard machine learning algorithms can be applied directly to investigate latent features of networks. To ensure the quality of downstream network mining tasks, the embedded vector representation should be representative: it needs to encode as much information about a node or an edge as possible, including both its context and its structural information. Such a vector should also be concise: its dimension should not be too large, in consideration of both memory efficiency and computational efficiency. Obtaining an appropriate embedding has become more challenging as both network sizes and the variety of node attributes grow rapidly. It is therefore crucial to learn a representation that embeds diverse node attributes as well as the structural properties of a network.
Among the various network embedding methods, pioneering works such as DeepWalk [Perozzi et al.2014] and node2vec [Grover and Leskovec2016] laid the foundation for an embedding framework that combines deep learning with the skip-gram architecture [Mikolov et al.2013]. These works use random walks to simulate the context of a center node in the network. Following this architecture, subsequent techniques such as metapath2vec [Dong et al.2017] address the heterogeneous network embedding problem. In most of these methods, various types of nodes are used to generate random walks and are incorporated into the objective function. Methods of this kind offer flexible strategies for generating random walks, and they perform well at capturing neighborhood similarities and detecting community membership. However, most of them focus solely on encoding network structure, taking the whole network structure as input to generate embedding vectors, and pay little attention to node attributes, which also carry important information. Moreover, most of these techniques are transductive: they operate on a fixed graph and therefore do not generalize to unseen data.
Node attributes have been shown to be important for network embedding [Huang et al.2017]; however, embedding networks whose nodes carry various attributes is still in its early stage due to the complexity of those attributes [Zhang et al.2018]. A few works combine network structure and node attributes to build their embeddings, such as LINE [Tang et al.2015] and AANE [Huang et al.2017], adopting different mechanisms to extract attribute information. Most of these models are limited by their transductive nature: the input to the system has to be the whole fixed network with node attributes, so the model generalizes poorly from the training data to unseen data. Another limitation is scalability: such methods must perform computations on matrices that scale linearly with the graph size.
To address the aforementioned issues, the recently proposed GraphSAGE model [Hamilton et al.2017a] uses local aggregation functions for embedding computation, which greatly simplifies the whole-graph propagation procedure mentioned above and also achieves the inductive property. Inspired by this framework, we build MFGCN, which adopts a multiple-filtering local GCN model as the aggregation function and combines a skip-gram with negative sampling model [Levy and Goldberg2014] with a quadratic mean-square error on the node attributes as the objective function. We demonstrate that with the multiple-filtering GCN architecture, the system can capture diverse aspects of node attributes. Two network learning tasks, node classification and link prediction, are conducted on four benchmark datasets to evaluate the performance of our algorithm against several state-of-the-art approaches. The contributions of this research are summarized below:

We introduce a novel approach called MFGCN, based on a multi-filter GCN aggregator, for extracting different aspects of node attributes. MFGCN outperforms state-of-the-art methods in capturing diverse node attributes along with network structure information.

We show that MFGCN can achieve significantly better performance than other baseline methods when the training data is limited.

We empirically demonstrate the impact of using different numbers of GCN filters and suggest optimal numbers of filters for different learning tasks.

We conduct various experiments and show that our MFGCN model has superior performance over most of the baseline methods on link prediction and node classification tasks.
2 Literature Review
Representation learning on network data has mainly centered on the embedding operation, which involves an encoding part and a decoding part: encoding generates an embedding, and decoding serves as the objective function used to update the model's learnable parameters. Current techniques usually follow three tracks: skip-gram-based models, matrix factorization models, and graph neural networks. Below we review related research in these three tracks.
Skip-gram-based models define an embedding framework that incorporates information from the nodes reached within a certain number of random-walk steps from the target node. Many existing methods adopt this model. For example, DeepWalk [Perozzi et al.2014] uses a uniform sampling strategy to perform random walks and then applies the skip-gram model with negative sampling to compute the objective function; however, the uniform sampling strategy and loosely defined objective function do not preserve the network structure well. Node2vec [Grover and Leskovec2016] combines breadth-first and depth-first search strategies, so it preserves more first- and second-order proximity, but it still lacks a clear objective for preserving global network structure. LINE [Tang et al.2015] defines an objective function that minimizes the KL-divergence between the empirical and objective distributions; it thus preserves first- and second-order proximity well, but its local sampling strategy still lacks information about the global network structure. HARP [Chen et al.2018a] shows the improvement gained by applying a graph coarsening strategy to different skip-gram approaches.
Matrix factorization methods perform embedding by factorizing a similarity matrix and using the factorized vectors as the embedding representations. This stream of research has mostly focused on designing a similarity matrix to factorize, typically via eigendecomposition or singular value decomposition. Representative work in this area includes HOPE [Ou et al.2016] and NetSMF [Qiu et al.2019].
Graph neural networks define a framework that uses graph convolution to incorporate neighborhood information into the encoding layer. This line of work can be categorized into two branches: spectral-based filtering and spatial-based filtering. Spectral-based filtering performs graph convolution and forms the propagation function in the spectral domain. ChebNet [Defferrard et al.2016] follows spectral graph theory [Hammond et al.2011] to define a spectral graph convolution using the full Chebyshev polynomial approximation of the convolution kernel; the Graph Convolutional Network (GCN) [Kipf and Welling2016] later simplifies it using only the first-order approximation, and FastGCN [Chen et al.2018b] uses a sampling technique to avoid whole-graph convolution. Following the GCN idea, AGCN [Li et al.2018] adopts its structure but learns different metrics to approximate a Gaussian kernel. Spatial-based filtering performs graph convolution in the spatial domain. The original work [Scarselli et al.2008] defines a framework that recursively updates the propagation function until an equilibrium point is reached; GGNN [Lin et al.2015] improves this model by introducing a gated recurrent unit. Various techniques have since been proposed, including efficient sampling strategies [Chen et al.2018b], improved inductive ability [Hamilton et al.2017b], and convolution operations defined directly in the spatial domain [Niepert et al.2016]. The graph attention mechanism [Veličković et al.2017] further improves the model by assigning attention coefficients that capture the relative importance of neighborhood nodes.
3 Preliminary Requirement and Optimization Model
In this section, we present the prerequisite models and techniques for our method. We first define the problem, then review the graph convolutional network model and the local extension that is crucial to our approach. Finally, we derive the optimization model used in our method to update the system parameters.
3.1 Problem Definition
Generally, a network is defined as $G = (V, E, X, A)$, where $V$ is a finite set of vertices ($|V| = n$, the number of nodes); $E$ is the set of edges; $X$ is the set of attributes associated with each node; and $A \in \mathbb{R}^{n \times n}$ is the weighted adjacency matrix, where each entry $A_{ij}$ represents the weight of the corresponding edge. In this work, we focus on unweighted homogeneous network embedding, so $A_{ij} = 1$ if there is an edge connecting vertices $v_i$ and $v_j$, and $A_{ij} = 0$ otherwise. We define $f$ as an embedding function that maps each node to a latent representation. For every center node $u$, let $C(u)$ denote its context nodes generated from a random-walk sampling strategy, and let $N(u)$ denote the one-hop neighborhood of $u$. The problem we aim to solve is to find the best $f$ to generate node embedding vectors that can support various machine learning tasks.
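As a concrete illustration of this setup, the sketch below builds an unweighted adjacency matrix and reads off one-hop neighborhoods; the function names and the toy edge list are ours, not part of the paper.

```python
import numpy as np

def build_adjacency(num_nodes, edges):
    """Unweighted symmetric adjacency matrix: entry (i, j) is 1 iff an edge connects i and j."""
    A = np.zeros((num_nodes, num_nodes))
    for u, v in edges:
        A[u, v] = 1.0
        A[v, u] = 1.0
    return A

def one_hop_neighbors(A, v):
    """One-hop neighborhood of node v: indices of nodes adjacent to v."""
    return np.flatnonzero(A[v])

A = build_adjacency(4, [(0, 1), (1, 2), (2, 3)])
print(one_hop_neighbors(A, 1))  # [0 2]
```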
3.2 GCN and Its Local Version
In GCN, the graph convolution operation is used as the layer propagation function, which can be written as:

$H^{(l+1)} = \sigma\big(\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)}\big)$ (1)

where $H^{(l)} \in \mathbb{R}^{n \times d}$ is the convolved embedding feature matrix (the intermediate hidden representation of layer $l$), $\tilde{A} = A + I$ is the graph adjacency matrix plus the identity matrix, and $\tilde{D}$ is the diagonal degree matrix with $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$. $H^{(0)} = X \in \mathbb{R}^{n \times C}$ is the input signal with $C$ channels (a $C$-dimensional attribute vector for each node), $W^{(l)}$ is the learnable matrix of filter parameters, $n$ is the total number of nodes in the graph, and $d$ is the dimension of the embedding vectors.
Local GCN [Hamilton et al.2017a] is a local extension of GCN. Since $\tilde{A}$ in Equation 1 is the adjacency matrix (with self-loops) and $\tilde{D}$ is the diagonal degree matrix, the formula can be viewed as a normalization in which each row of the output represents a node's status propagated from all of its neighbor nodes. Equation 2 expresses this operation per node:
$h_v^{(l+1)} = \sigma\big(W^{(l)} \cdot \frac{1}{|N(v)|} \sum_{u \in N(v)} h_u^{(l)}\big)$ (2)

where $h_v^{(l)} \in \mathbb{R}^{d}$ is the hidden representation of node $v$ in layer $l$, $W^{(l)}$ contains the weight parameters of layer $l$, $d$ is the dimension of the embedding vector, and $N(v)$ is the neighborhood of the center node $v$. This is a per-neighborhood normalization of the parameterized propagation layer. In the current work, we extend this to use multiple local GCN filters as the propagation function.
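A minimal NumPy sketch of this per-neighborhood aggregation (Equation 2) may help; the names, toy graph, and choice of ReLU activation are our illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def local_gcn_aggregate(H, neighbors, W):
    """Per-neighborhood normalized propagation (Equation 2):
    h_v <- activation(W @ mean of the neighbor representations of v)."""
    out = np.empty((H.shape[0], W.shape[0]))
    for v, nbrs in enumerate(neighbors):
        mean_h = H[nbrs].mean(axis=0)   # (1/|N(v)|) * sum of h_u over N(v)
        out[v] = relu(W @ mean_h)
    return out

# toy example: 3 nodes with 2-d features, one 4-d hidden layer
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
neighbors = [[1, 2], [0, 2], [0, 1]]
W = np.full((4, 2), 0.5)
H1 = local_gcn_aggregate(H, neighbors, W)
print(H1.shape)  # (3, 4)
```

Each row of `H1` is the activated, weighted mean of the node's neighbor representations, which is exactly the per-neighborhood normalization described above.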
3.3 Optimization Model
We follow the transductive optimization model of [Yang et al.2016], aiming to preserve the network structure information as well as to predict the correct node labels. To preserve structure information, we apply the skip-gram model, which seeks to maximize the probability of observing the context nodes $C(u)$ of a center node $u$ given its embedding representation:

$\max_f \sum_{u \in V} \log \Pr(C(u) \mid f(u))$ (3)
We follow the conditional independence assumption [Grover and Leskovec2016]: given the center node, observing one context node is independent of observing any other context node:

$\Pr(C(u) \mid f(u)) = \prod_{c \in C(u)} \Pr(c \mid f(u))$ (4)
where $c$ is a context node of the center node $u$. We maximize the objective function (Equation 3) by minimizing its negative log form:

$L = -\sum_{u \in V} \sum_{c \in C(u)} \log \Pr(c \mid f(u))$ (5)
where $\Pr(c \mid f(u))$ is the prediction probability. Let $z_c$ and $z_u$ denote the embedding vector representations of a context node $c$ and the associated center node $u$ as mapped by $f$; we model this probability as:

$\Pr(c \mid f(u)) = \exp(z_c \cdot z_u) / Z_u$, with $Z_u = \sum_{v \in V} \exp(z_v \cdot z_u)$ (6)
where $Z_u$ is the normalization factor that sums over all nodes, and $z_c \cdot z_u$ is the inner product between $z_c$ and $z_u$, representing the similarity of the two embedding representations. With this assumption, Equation 5 simplifies to:

$L = \sum_{u \in V} \sum_{c \in C(u)} (\log Z_u - z_c \cdot z_u)$ (7)
Numerically computing $Z_u$ is expensive for large graphs with millions of nodes, since the computation grows linearly with the graph size. We therefore adopt negative sampling to approximate the normalization factor, and the objective function becomes:

$L = -\sum_{u \in V} \sum_{c \in C(u)} \big[ \log \sigma(z_c \cdot z_u) + \sum_{i=1}^{K} \mathbb{E}_{v_i \sim P_n(v)} \log \sigma(-z_{v_i} \cdot z_u) \big]$ (8)

where $\sigma(x) = 1/(1 + e^{-x})$, $K$ is the number of negative samples, and $P_n(v)$ is the negative sampling distribution.
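For a single center node, the negative-sampling objective (Equation 8) can be sketched directly; the variable names and toy vectors below are ours.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgnn_loss(z_u, z_c, z_neg):
    """Skip-gram with negative sampling for one (center, context) pair
    (Equation 8): -log sigmoid(z_c . z_u) - sum_i log sigmoid(-z_vi . z_u)."""
    pos = -np.log(sigmoid(z_c @ z_u))
    neg = -np.sum(np.log(sigmoid(-(z_neg @ z_u))))
    return pos + neg

z_u = np.array([1.0, 0.0])                    # center embedding
z_c = np.array([0.8, 0.2])                    # context embedding (should score high)
z_neg = np.array([[-0.5, 0.1], [-0.9, 0.3]])  # two negative samples
loss = sgnn_loss(z_u, z_c, z_neg)
```

A well-trained embedding drives the context inner product up and the negative-sample inner products down, which decreases this loss.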
For the supervised loss of predicting the correct label, we adopt a cross-entropy loss on a softmax layer:

$L_{sup} = -\sum_{k=1}^{K_c} y_k \log \hat{y}_k$ (9)

where $K_c$ is the number of classes, $y$ is the ground-truth node label, and $\hat{y}$ is the predicted label of the center node from the softmax layer.
4 Methodology
In this section, we present the framework of our MFGCN and the overall system architecture. We first filter the node attributes with multiple local GCN filters, then concatenate the filtered hidden representation vectors to form one hidden layer; each succeeding hidden layer follows the same procedure.
4.1 MultipleFiltering GCN Aggregator
Since a local GCN can propagate information from layer to layer, it is intuitive to assume that multiple such local GCN filters can propagate information from different aspects of the node attributes. We therefore propose a multi-filtering GCN architecture: each single local GCN filter is one attribute extractor, so multiple such filters form one layer of feature extraction. Figure 1 visualizes this operation. For each hidden layer, we use multiple distinct local GCN filters (each with its own parameters) to operate on the output of the previous hidden layer; for the first layer, the operation is performed on the input node attribute channels. This operation is shown below:
$h_v^{(l,m)} = \sigma\big(W_m^{(l)} \cdot \frac{1}{|N(v)|} \sum_{u \in N(v)} h_u^{(l-1)}\big), \quad m = 1, \ldots, M$ (10)

where $h_u^{(0)} = x_u$ is the input attribute channel vector of node $u$, $h_v^{(l,m)}$ is the aggregation output of the $m$-th local GCN aggregator in layer $l$, $W_m^{(l)}$ contains the learnable parameters of the $m$-th local GCN aggregation function, and $\sigma$ is the nonlinear activation function. After concatenating these aggregated representation vectors from the different local GCNs to form the final aggregated feature vector, we define our MFGCN aggregation function as:

$h_v^{(l)} = \mathrm{CONCAT}\big(h_v^{(l,1)}, h_v^{(l,2)}, \ldots, h_v^{(l,M)}\big)$ (11)
where $h_v^{(l)}$ represents the aggregation output for node $v$ in the $l$-th layer and $M$ is the total number of local GCN filters. Following the forward propagation algorithm in [Hamilton et al.2017a], the final generated embedding vector is derived as:

$z_v = \sigma\big(W_{enc} \cdot \mathrm{CONCAT}\big(h_v^{(L)}, x_v\big)\big)$ (12)

where $h_v^{(L)}$ represents the hidden embedding representation of the last layer $L$, $W_{enc}$ is the parameter matrix of the fully connected encoding layer, and $\sigma$ is the nonlinear activation function.
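The three steps above (filter, concatenate, encode) can be sketched for a single MFGCN layer as follows, assuming mean neighborhood aggregation and ReLU activations as in the earlier sketch; all names and toy shapes are illustrative.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def mfgcn_layer(X, neighbors, filters, W_enc):
    """One MFGCN layer: M local GCN filters (Equation 10), concatenation
    of the filter outputs (Equation 11), then a fully connected encoding
    of [aggregated features ; own attributes] (Equation 12)."""
    Z = np.empty((X.shape[0], W_enc.shape[0]))
    for v, nbrs in enumerate(neighbors):
        mean_x = X[nbrs].mean(axis=0)                 # neighborhood mean
        h_v = np.concatenate([relu(W_m @ mean_x) for W_m in filters])
        Z[v] = relu(W_enc @ np.concatenate([h_v, X[v]]))
    return Z

rng = np.random.default_rng(0)
n_nodes, attr_dim, n_filters, filt_dim, emb_dim = 4, 8, 3, 5, 6
X = rng.normal(size=(n_nodes, attr_dim))
neighbors = [[1], [0, 2], [1, 3], [2]]
filters = [rng.normal(size=(filt_dim, attr_dim)) for _ in range(n_filters)]
W_enc = rng.normal(size=(emb_dim, n_filters * filt_dim + attr_dim))
Z = mfgcn_layer(X, neighbors, filters, W_enc)
print(Z.shape)  # (4, 6)
```

Because each filter has its own parameter matrix, each of the `n_filters` slices of the concatenated vector can specialize to a different aspect of the node attributes.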
4.2 Network Embedding Framework
We design our network embedding system according to the graph auto-encoding framework. Algorithm 1 shows the whole procedure of the proposed MFGCN embedding generation algorithm, and Figure 1 shows the whole system structure. The system can be divided into an encoding part and a decoding part.
Encoding: In the encoding phase, the input is the node attribute vector $x_v$. The MFGCN embedding generation procedure of Algorithm 1 is performed on the input vector, finally yielding the embedding vector $z_v$.
Decoding: The decoding phase includes two parts, a supervised loss $L_{sup}$ and a skip-gram with negative sampling (SGNN) loss $L_{SGNN}$. The two loss functions are integrated to form the final loss:

$L = L_{SGNN} + L_{sup}$ (13)

The supervised part is constructed as a fully connected layer followed by a softmax layer for classifying node labels, and it is separate from the SGNN part. The input to both parts is the embedding output representation $z_v$.
4.3 Training Procedure
Mini-batch gradient descent is applied during training. For each training batch, we randomly select a batch-sized set of nodes as the training center nodes. A random walk is performed to sample the context nodes $C(u)$ for each center node $u$; we use the random walk strategy of node2vec with a fixed walk length. For the negative sampling part, we uniformly sample nodes that have no connection to the one-hop neighbors of the center node, nor to the previously sampled context nodes. The number of negative samples is set to 100 for each center node.
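A simplified sketch of this sampling stage is given below; for brevity the walk is uniform (node2vec's p/q biasing is omitted), and the adjacency-list graph is a toy example of ours.

```python
import random

def sample_context(adj, center, walk_len, num_walks=1):
    """Uniform random walks from `center` (node2vec's p/q walk biasing
    is omitted for brevity); returns the set of visited context nodes."""
    context = set()
    for _ in range(num_walks):
        node = center
        for _ in range(walk_len):
            node = random.choice(adj[node])
            context.add(node)
    return context

def sample_negatives(adj, center, context, k):
    """Uniformly sample up to k nodes that have no edge to the one-hop
    neighbors of the center nor to the sampled context nodes."""
    blocked = set(adj[center]) | context
    candidates = [v for v in adj
                  if v != center and v not in blocked
                  and not set(adj[v]) & blocked]
    return random.sample(candidates, min(k, len(candidates)))

random.seed(7)
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4],
       4: [3, 5], 5: [4, 6], 6: [5, 7], 7: [6]}
ctx = sample_context(adj, 0, walk_len=3)
neg = sample_negatives(adj, 0, ctx, k=2)
```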
Training is performed jointly over the two loss functions $L_{sup}$ and $L_{SGNN}$. For the supervised loss $L_{sup}$, training is straightforward: a softmax layer with cross-entropy loss. For the SGNN loss, at each training epoch we first propagate the center node and derive its embedding representation $z_u$. After sampling its context nodes, we feed them into the system to derive the embedding representations $z_c$ of all context nodes. We then perform negative sampling and feed the negative samples into the system to obtain their embedding representations $z_{v_i}$. Finally, we compute the SGNN loss according to Equation 8. Note that we use the normalized forms of these embedding vectors when taking inner products, since we are measuring their cosine similarities.
4.4 System Architecture
The system architecture is shown in Figure 1. The input to the MFGCN aggregator is the center node's attribute vector together with the attributes of all its neighborhood nodes. We choose a network depth of one layer, since we find that a single layer of multi-filtering GCN already achieves good performance. We use 25 local GCN filters with a filter size of 16, concatenating them into a 400-dimensional aggregation representation; we then concatenate this with the center node attributes and feed the result into a fully connected encoding layer to generate the embedding vector. We set the dimension of the embedding vector to 100. This embedding vector is used as the final feature representation of a node.
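The stated dimensions can be sanity-checked in a few lines; the zero-filled weight matrix is a placeholder of ours, not the trained parameters.

```python
import numpy as np

n_filters, filt_dim = 25, 16     # 25 local GCN filters, each of size 16
attr_dim = 1433                  # e.g. Cora's bag-of-words attribute dimension
emb_dim = 100                    # final embedding dimension

agg_dim = n_filters * filt_dim   # concatenated filter outputs
# fully connected encoding layer: [aggregation ; center attributes] -> embedding
W_enc = np.zeros((emb_dim, agg_dim + attr_dim))
z = W_enc @ np.zeros(agg_dim + attr_dim)
print(agg_dim, z.shape)  # 400 (100,)
```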
5 Experiments
5.1 Experimental Setup
To empirically evaluate the effectiveness and efficiency of the proposed model, we apply MFGCN to four public benchmark datasets: Citeseer, Cora, PubMed, and Wiki. Citeseer, Cora, and PubMed are paper citation networks, where each node represents a paper and each edge represents a citation relationship. Wiki is a Wikipedia hyperlink network, where each node represents a web page and each edge indicates a hyperlink from one page to another. The node attributes of these networks are bag-of-words representations. Table 1 presents the statistics of the four datasets.
Citeseer  Cora  PubMed  Wiki  
# Nodes  3,312  2,708  19,717  2,405 
# Edges  4,660  5,278  44,327  17,981 
Feature Dimension  3,703  1,433  500  4,973 
Node classes  6  7  3  17 
Baselines. We compare our method against several well-known baselines: DeepWalk, node2vec, LINE, role2vec, raw features (raw attributes used directly as the embedding representation), GraphSAGE-GCN, GraphSAGE-mean, GraphSAGE-pool, and DANE (Deep Attribute Network Embedding). For sampling the context nodes of a center node, we use the random walk strategy of node2vec with fixed return and in-out parameters $p$ and $q$. For LINE, we adopt the second-order proximity. Role2vec uses logarithmic binning and node attribute concatenation for its role mapping functions. GraphSAGE-GCN is the special case of our MFGCN model in which the filter number is 1. For GraphSAGE-mean and GraphSAGE-pool, we implement the algorithms according to the original paper and set the fixed GraphSAGE neighborhood sampling number to 20. We set the latent representation dimension to 100 for all compared methods. The architecture of our MFGCN model is the same as described in Section 4.4.
5.2 Link Prediction
Our first evaluation task is link prediction, on which we test the models on all four benchmark datasets. For each dataset, we randomly remove a fraction of the edges while keeping the graph connected, and train the model on the remaining graph. Since node labels are not used in this task, we train with the SGNN loss alone as the cost function. For testing, we use all removed edges as positive test edges and select an equal number of node pairs with no connecting edge as negative test edges; we then evaluate each model on predicting the positive test edges and rejecting the non-existent ones. The AUC scores of the different methods, reported in Table 2, reflect how well each model performs. From the experimental results, MFGCN generally performs best among all methods.
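The evaluation protocol can be sketched as follows, scoring a candidate edge by the inner product of its endpoint embeddings and computing the AUC as a rank statistic over positive and negative pairs; the toy embeddings are ours.

```python
import numpy as np

def edge_score(Z, u, v):
    """Score a candidate edge (u, v) by the inner product of the node embeddings."""
    return Z[u] @ Z[v]

def auc(pos_scores, neg_scores):
    """AUC as the probability that a random positive (removed) edge
    outscores a random negative (non-existent) pair, counting ties as half."""
    pos = np.asarray(pos_scores)[:, None]
    neg = np.asarray(neg_scores)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

# toy 2-d embeddings for four nodes
Z = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
pos = [edge_score(Z, 0, 1), edge_score(Z, 2, 3)]   # held-out true edges
neg = [edge_score(Z, 0, 2), edge_score(Z, 1, 3)]   # sampled non-edges
print(auc(pos, neg))  # 1.0
```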
Model  Citeseer  Cora  PubMed  Wiki 

DeepWalk  0.771  0.734  0.857  0.747 
Node2vec  0.721  0.726  0.868  0.761 
LINE  0.772  0.723  0.871  0.728 
Role2vec  0.921  0.896  0.898  0.848 
GraphSAGEmean  0.917  0.859  0.936  0.869 
GraphSAGEGCN  0.907  0.892  0.901  0.880 
GraphSAGEpool  0.937  0.891  0.932  0.875 
DANE  0.914  0.844  0.927  0.821 
MFGCN  0.924  0.913  0.944  0.893 
5.3 Node Classification
Our second evaluation task is node classification. We separately select 10%, 30%, and 50% of the nodes from each network as three training sets, then test the classification accuracy on the remaining nodes. The micro-F1 score is used as the performance measure. The results are shown in Tables 3, 4, 5, and 6, respectively. MFGCN shows significant improvement over all the baselines; especially when the training data is limited, MFGCN greatly outperforms the others.
Model  10%  30%  50% 

DeepWalk  0.368  0.508  0.575 
Node2vec  0.373  0.507  0.604 
LINE  0.389  0.517  0.574 
Role2vec  0.518  0.659  0.699 
Raw Feature  0.551  0.693  0.705 
GraphSAGEmean  0.627  0.698  0.699 
GraphSAGEGCN  0.647  0.696  0.703 
GraphSAGEpool  0.623  0.706  0.716 
DANE  0.546  0.703  0.722 
MFGCN  0.686  0.713  0.746 
Model  10%  30%  50% 

DeepWalk  0.317  0.694  0.773 
Node2vec  0.338  0.683  0.794 
LINE  0.309  0.697  0.784 
Role2vec  0.420  0.635  0.715 
Raw Feature  0.369  0.652  0.734 
GraphSAGEmean  0.379  0.793  0.796 
GraphSAGEGCN  0.406  0.772  0.798 
GraphSAGEpool  0.439  0.778  0.788 
DANE  0.502  0.787  0.804 
MFGCN  0.578  0.814  0.815 
Model  10%  30%  50% 

DeepWalk  0.582  0.709  0.725 
Node2vec  0.593  0.705  0.727 
LINE  0.566  0.719  0.731 
Role2vec  0.616  0.716  0.733 
Raw Feature  0.637  0.709  0.724 
GraphSAGEmean  0.739  0.768  0.778 
GraphSAGEGCN  0.679  0.742  0.773 
GraphSAGEpool  0.671  0.749  0.764 
DANE  0.711  0.758  0.801 
MFGCN  0.758  0.771  0.813 
Model  10%  30%  50% 

DeepWalk  0.148  0.428  0.506 
Node2vec  0.156  0.413  0.527 
LINE  0.158  0.424  0.522 
Role2vec  0.397  0.562  0.625 
Raw Feature  0.211  0.423  0.532 
GraphSAGEmean  0.414  0.466  0.568 
GraphSAGEGCN  0.425  0.552  0.601 
GraphSAGEpool  0.432  0.558  0.560 
DANE  0.434  0.567  0.604 
MFGCN  0.466  0.597  0.644 
5.4 Optimized Filter Size
To show the effect of different numbers of local GCN filters on the node classification task, we measure the F1 score on held-out test nodes from Cora while varying the number of filters from 1 to 100 during training. We also record the training time of one iteration for each filter count. Training is performed on a workstation with two RTX 2080 Ti GPUs and an Intel Core i9-9820X. Figure 2 shows the results. The F1 score steadily increases as the filter number grows from 1 to 25, then drops to 0.78 and remains roughly constant as the filter number keeps increasing. We believe the reason is that, while the filter count is small, adding more filters helps capture different aspects of the node features, raising the F1 score; once the filter number passes a saturation point, the system starts to overfit as more filters are added. We therefore pick 25 filters for our MFGCN model to achieve the best performance.
5.5 2D visualization
To visually present the advantage of our MFGCN model, we plot two-dimensional embedding visualizations of the different methods using t-SNE [Maaten and Hinton2008], shown in Figures 3, 4, 5, and 6. The plots are generated by embedding the test nodes from Cora, with a subset of nodes used for training. MFGCN clearly distinguishes the different clusters of nodes better than the other baselines.
6 Conclusion
In this paper, we propose MFGCN, a novel network embedding approach that extracts different aspects of node features by using multiple local GCN filters. To show the effectiveness of our model, we conduct extensive experiments. First, we show that MFGCN improves the AUC score on the link prediction task over many baseline methods. Second, on the node classification task across four public benchmark datasets, MFGCN yields a significant improvement in F1 score over the baseline methods, especially when the training data is limited.
We also measure node classification performance under different numbers of filters and provide suggestions on choosing the optimal number of filters.
Finally, the embedding results of MFGCN are visually compared against the other baseline methods, demonstrating that our MFGCN model outperforms them.
For future directions, we would like to continue our research on incorporating an attention mechanism that selectively picks important filters.
References

[Chen et al.2018a] Haochen Chen, Bryan Perozzi, Yifan Hu, and Steven Skiena. HARP: Hierarchical representation learning for networks. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
 [Chen et al.2018b] Jie Chen, Tengfei Ma, and Cao Xiao. FastGCN: Fast learning with graph convolutional networks via importance sampling. arXiv preprint arXiv:1801.10247, 2018.
 [Defferrard et al.2016] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in neural information processing systems, pages 3844–3852, 2016.
 [Dong et al.2017] Yuxiao Dong, Nitesh V Chawla, and Ananthram Swami. metapath2vec: Scalable representation learning for heterogeneous networks. In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, pages 135–144. ACM, 2017.
 [Grover and Leskovec2016] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 855–864. ACM, 2016.
 [Hamilton et al.2017a] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pages 1024–1034, 2017.
 [Hamilton et al.2017b] William L Hamilton, Rex Ying, and Jure Leskovec. Representation learning on graphs: Methods and applications. IEEE Data(base) Engineering Bulletin, 40:52–74, 2017.
 [Hammond et al.2011] David K Hammond, Pierre Vandergheynst, and Rémi Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129–150, 2011.
 [Huang et al.2017] Xiao Huang, Jundong Li, and Xia Hu. Accelerated attributed network embedding. In Proceedings of the 2017 SIAM international conference on data mining, pages 633–641. SIAM, 2017.
 [Kipf and Welling2016] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
 [Levy and Goldberg2014] Omer Levy and Yoav Goldberg. Neural word embedding as implicit matrix factorization. In Advances in neural information processing systems, pages 2177–2185, 2014.
 [Li et al.2018] Ruoyu Li, Sheng Wang, Feiyun Zhu, and Junzhou Huang. Adaptive graph convolutional neural networks. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.

 [Lin et al.2015] Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. Learning entity and relation embeddings for knowledge graph completion. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
 [Maaten and Hinton2008] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.
 [Mikolov et al.2013] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
 [Niepert et al.2016] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In International conference on machine learning, pages 2014–2023, 2016.
 [Ou et al.2016] Mingdong Ou, Peng Cui, Jian Pei, Ziwei Zhang, and Wenwu Zhu. Asymmetric transitivity preserving graph embedding. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1105–1114. ACM, 2016.
 [Perozzi et al.2014] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701–710. ACM, 2014.
 [Qiu et al.2019] Jiezhong Qiu, Yuxiao Dong, Hao Ma, Jian Li, Chi Wang, Kuansan Wang, and Jie Tang. NetSMF: Large-scale network embedding as sparse matrix factorization. In The World Wide Web Conference, pages 1509–1520. ACM, 2019.
 [Scarselli et al.2008] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2008.
 [Tang et al.2015] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Largescale information network embedding. In Proceedings of the 24th international conference on world wide web, pages 1067–1077. International World Wide Web Conferences Steering Committee, 2015.
 [Veličković et al.2017] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
 [Yang et al.2016] Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. arXiv preprint arXiv:1603.08861, 2016.
 [Zhang et al.2018] Zhen Zhang, Hongxia Yang, Jiajun Bu, Sheng Zhou, Pinggang Yu, Jianwei Zhang, Martin Ester, and Can Wang. ANRL: Attributed network representation learning via deep neural networks. In IJCAI, volume 18, pages 3155–3161, 2018.