1 Introduction
As an efficient representation of data distribution, a graph plays an important role in describing the intrinsic structure of data. Many existing works in pattern recognition therefore build theory and methods on the graph structure of data, such as graph cuts that build energy functions for the semantic segmentation task veksler2019efficient , graph-based learning systems that construct accurate recommendations from the interactions of different objects monti2017geometric ying2018graph , graphs that model molecular bioactivity for drug discovery defferrard2016convolutional gilmer2017neural , and graphs that simulate the link connections of citation networks for group classification defferrard2016convolutional gilmer2017neural khan2019multi . In fact, we usually observe objects and their relationships (these relationships define the structure of the objects, which can often be described by a graph) from multiple views, which provide richer and more complete information for object recognition. Learning on a multigraph (multiple observed structures) can effectively mine these multiple relationships to discriminate between different objects. Existing learning methods on multigraphs tend toward two approaches. One is structure fusion Lin20131286 Lin2014146 7268821 7301305 Lin20161 Lin2017275 Lin2017Dynamic Lin2018structure lin2018class LINGF2018 lin2019transfer
or diffusion on a tensor product graph
yang2011affinity yang2012affinity bai2019automatic li2019semi bai2017regularized bai2017ensemble , which is based on complete data that include the observations of every view. The other is graph convolutional networks that preserve the salient graph structure khan2019multi , which are based on incomplete data that have lost some view observations. For example, link relationships can be extracted as needed in citation networks, but they cannot be derived from the corresponding observation data; in other words, these link relationships exist while the supporting data are lost. Methods based on graph convolutional networks therefore usually ignore the complete complementarity of the different observed structures when the multiview data are incomplete. To address this issue, we construct structure fusion based on graph convolutional networks for classification. Figure 1 shows the overall flow diagram of structure fusion based on graph convolutional networks (SFGCN). SFGCN is inspired by MultiGCN khan2019multi , but differs from it in two respects. First, SFGCN considers the unequal importance of the multiple structures, whereas MultiGCN treats them equally. Second, SFGCN accounts for the contributions of all node structures in the fused structure, whereas MultiGCN emphasizes only the salient structure of part of the nodes. From a classification standpoint, considering both the strong and the weak links between nodes when complementing structures better fits the intrinsic structure of the data. Our contributions can be summarized as follows. (a) We present a novel structure fusion based on graph convolutional networks (SFGCN) that discriminates between classes by optimizing the linear relationship of multiple observed structures while balancing the specificity loss and the commonality loss.
(b) On three citation datasets with sparse document features and document link relationships, the proposed SFGCN outperforms the state of the art for semi-supervised classification. (c) Our model generalizes to different multigraph fusion methods, which we use to evaluate the performance of the proposed SFGCN.
2 Related Works
In this section, we mainly review recent related works on structure fusion and graph neural networks.
2.1 Structure fusion
Structure fusion, initially proposed in Lin20131286 , merges multiple structures for shape classification. In follow-up works, the extended methods can be divided into three categories according to how the fusion is performed. The first kind of method tries to find the optimized linear relationship of multiple observed structures based on different manifold learning methods Lin2014146 or statistical model analysis 7268821 . The second kind attempts to mine the nonlinear relationship of heterogeneous feature structures based on global features 7301305 Lin20161 or local feature encoding Lin2017275 . The third kind captures the dynamic changes of multiple structures for semi-supervised classification Lin2017Dynamic , or uses structure propagation for zero-shot learning Lin2018structure lin2018class LINGF2018 lin2019transfer .
As mentioned above, existing methods emphasize the completeness of data and their relationships based on data projection, whereas graph convolutional networks focus on the transformation and evolution of data structure through deep learning frameworks. We therefore draw on structure fusion, based on a structure metric, together with graph convolutional networks to process incomplete view data, and seek the evolution law of the fused structure while considering the specificity and commonality of the structures.
2.2 Graph neural network
Graph neural networks can discover potential data relationships through computations on graph nodes and links. In particular, when this computation is defined as a convolution on graph data, graph convolutional networks (GCNs) become a promising direction in pattern recognition. In terms of node representation, graph convolutional networks can be divided into spectral-based and spatial-based GCNs. Spectral-based GCNs define a graph Fourier transform based on the graph Laplacian matrix to project the graph signal into an orthonormal space. These methods differ in their choice of filter, which may be a set of learned parameters bruna2013spectral , a Chebyshev polynomial defferrard2016convolutional , or a first-order Chebyshev polynomial kipf2016semi chen2018fastgcn . Spatial-based GCNs regard images as a special graph in which each pixel describes a node. To avoid storing all states, these methods introduce improved training strategies, such as subgraph training hamilton2017inductive or stochastic asynchronous training dai2018learning . Furthermore, some complex network architectures use gating units to control the selection of node neighborhoods liu2018geniepath , design two graph convolutional networks that consider local and global consistency on the graph zhuang2018dual , or adjust the receptive field of a node on the graph with hyperparameters van2018filter . Because spectral-based GCNs explicitly construct the learning model on the graph structure, the structure can easily be separated from the GCN architecture. This provides a way to process multiple structures, which may be incremental. In this paper, we focus on the important role of the graph (structure) obtained from multiview data, and attempt to mine the plentiful information in multiple structures as input to a spectral-based GCN.
3 Structure fusion based on graph convolutional networks
To the best of our knowledge, existing structure fusion methods usually construct an optimization function for feature projection, in which the feature data and the corresponding structure jointly participate in the computation. Because multiview data may be partially lost while their structure is preserved, we build a novel structure fusion on a structure metric, in which the optimization function involves only the multiple structures, avoiding the negative effect of lost data. At the same time, the multiple structures have both their own specificity and a shared commonality, so we also constrain the structure fusion with these characteristics. Figure 2 illustrates the internal mechanism of structure fusion in SFGCN. First, we construct the specificity loss with a spectral embedding method that accounts for the linear relationship among the multiple structures. Second, we measure the commonality loss between the multiple structures with a distance metric on the Grassmann manifold. Finally, we jointly exploit the structure fusion based on the two losses, and feed the fused structure into a GCN for classification.
3.1 Specificity loss of multiple structures
Given an object set observed from multiple views, we can use a graph to describe the observed data distribution in each view. The graph is therefore the representation of the observed structure, and the multiple graphs indicate the multiple structures of the data across the multiview observations. Because the multiple structures describe the same object set, the graphs $G^{(v)} = (V, E^{(v)})$, $v = 1, \dots, m$, share the same vertex set $V$ but possibly different edge sets $E^{(v)}$. Let $A^{(v)}$ be the adjacency matrix of $G^{(v)}$, which is the numerical expression of the structure in the $v$-th view. In terms of spectral embedding, we can obtain the following optimization function for the embedding matrix $U^{(v)} \in \mathbb{R}^{n \times k}$ ($n$ is the number of samples and $k$ is the dimension of the embedding space) of each view.

$$\min_{U^{(v)}}\ \operatorname{tr}\big((U^{(v)})^{\top} L^{(v)} U^{(v)}\big), \quad \text{s.t.}\ (U^{(v)})^{\top} U^{(v)} = I, \qquad (1)$$

where $L^{(v)} = D^{(v)} - A^{(v)}$ is the Laplacian matrix of $G^{(v)}$ and $D^{(v)}$ is the degree matrix of $A^{(v)}$. Therefore, $L^{(v)}$ still describes the characteristics of the structure of graph $G^{(v)}$. We can compute the embedding matrix $U^{(v)}$ by optimizing equation (1), which is equivalent to an eigenvalue problem. When all eigenvalues are solved, the eigenvectors corresponding to the $k$ smallest eigenvalues build the embedding matrix $U^{(v)}$, which projects the original nodes into a low-dimensional spectral space xia2010multiview . We can regard $\operatorname{tr}((U^{(v)})^{\top} L^{(v)} U^{(v)})$ as the specificity loss of the structure of graph $G^{(v)}$, and then reformulate the specificity loss of the multiple structures as follows.

$$\mathcal{L}_{s} = \sum_{v=1}^{m} w_{v}\,\operatorname{tr}\big(U^{\top} L^{(v)} U\big), \quad \text{s.t.}\ U^{\top} U = I, \qquad (2)$$

where $U$ is the embedding matrix of the multiple structures (the fused graph) and closely approximates each $U^{(v)}$. Suppose the fused structure is the linear combination $A = \sum_{v=1}^{m} w_{v} A^{(v)}$ of the individual structures; then the Laplacians have the same linear relationship $L = \sum_{v=1}^{m} w_{v} L^{(v)}$, in which the coefficients $w_{v}$ encode the importance of the multiple structures.
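The per-view embedding of equation (1) reduces to an eigenvalue problem on the Laplacian. A minimal sketch, assuming a dense adjacency matrix and the unnormalized Laplacian (the function name is ours):

```python
import numpy as np

def spectral_embedding(A, k):
    """Solve Eq. (1): the k eigenvectors of L = D - A with smallest eigenvalues."""
    D = np.diag(A.sum(axis=1))
    L = D - A
    # eigh returns eigenvalues in ascending order with orthonormal eigenvectors
    _, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, :k]  # n x k embedding matrix U^(v)

# toy 4-node graph with two 2-node components
A = np.array([[0., 1., 0., 0.],
              [1., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 1., 0.]])
U = spectral_embedding(A, 2)
```

Because `eigh` returns orthonormal eigenvectors, the constraint $(U^{(v)})^{\top} U^{(v)} = I$ is satisfied automatically.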
3.2 Commonality loss of multiple structures
To measure the commonality loss of the multiple structures, we need to measure the distance between the Laplacian matrices $L^{(v)}$ and $L$. From the solution of equation (1), we obtain equation (3), which describes the internal connection between the embedding matrix $U^{(v)}$ and the corresponding Laplacian matrix $L^{(v)}$.

$$L^{(v)} U^{(v)} = U^{(v)} \Lambda^{(v)}, \qquad (3)$$

where $\Lambda^{(v)}$ is the diagonal matrix whose diagonal entries are the $k$ smallest eigenvalues of $L^{(v)}$. The columns of $U^{(v)}$ span a subspace that preserves the directions of smallest variance with respect to $L^{(v)}$, that is, the directions of largest variance of the structure $A^{(v)}$; in other words, $U^{(v)}$ keeps most of the discriminative information of the data. Similarly, $U$ plays the same role for the multiple observed structures. Therefore, we can replace the distance between the Laplacian matrices $L^{(v)}$ and $L$ by the distance between $U^{(v)}$ and $U$ to compute the commonality loss of the multiple structures indirectly. This is consistent with the assumption underlying the specificity loss, namely that $U$ approximates each $U^{(v)}$. In terms of Grassmann manifold theory lin2012multi turaga2011statistical , the columns of the orthonormal matrix $U^{(v)}$ span a unique subspace, which can be projected to a unique point on the Grassmann manifold; similarly, $U$ can also be mapped to a unique point on this manifold. Therefore, the principal angles between these subspaces represent the distance between $U^{(v)}$ and $U$, which can be reformulated as follows dong2013clustering .

$$d_{\mathrm{proj}}^{2}\big(U, U^{(v)}\big) = k - \operatorname{tr}\big(U U^{\top} U^{(v)} (U^{(v)})^{\top}\big). \qquad (4)$$

For the multiple structures, we define the commonality loss as the sum of the distances between $U$ and each $U^{(v)}$ as follows.

$$\mathcal{L}_{c} = \sum_{v=1}^{m} \Big(k - \operatorname{tr}\big(U U^{\top} U^{(v)} (U^{(v)})^{\top}\big)\Big). \qquad (5)$$
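The projection distance of equation (4) can be sketched directly, assuming orthonormal $n \times k$ bases (the function name is ours):

```python
import numpy as np

def grassmann_dist_sq(U, Uv):
    """Squared projection distance of Eq. (4): k - tr(U U^T Uv Uv^T),
    for orthonormal n x k bases U and Uv."""
    k = U.shape[1]
    return k - np.trace(U @ U.T @ Uv @ Uv.T)
```

Identical subspaces give distance 0, while fully orthogonal $k$-dimensional subspaces give the maximum value $k$, which is the constant removed later in equation (8).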
3.3 Structure fusion by structure metric losses
As two structure metric losses, the specificity loss balances the contribution of the structure in each view, while the commonality loss accounts for the similarity of the multiple structures across views. Both structure metric losses constrain the linear relationship of the multiple structures. Therefore, we combine them into a total loss for encoding the importance of the multiple structures, which can be formulated as follows.

$$\mathcal{L} = \sum_{v=1}^{m} w_{v}\,\operatorname{tr}\big(U^{\top} L^{(v)} U\big) + \lambda \sum_{v=1}^{m} \Big(k - \operatorname{tr}\big(U U^{\top} U^{(v)} (U^{(v)})^{\top}\big)\Big) + \mu \lVert \mathbf{w} \rVert_{2}^{2}, \qquad (6)$$

where $\lambda$ and $\mu$ are regularization parameters. From equation (6), we construct the objective optimization function as follows.

$$\min_{U,\mathbf{w}}\ \mathcal{L}, \quad \text{s.t.}\ U^{\top} U = I,\ \sum_{v=1}^{m} w_{v} = 1,\ w_{v} \ge 0. \qquad (7)$$

In the commonality loss, the constant term $\lambda m k$ does not influence the trend of the loss, so we remove it for convenience of computation. Equation (7) is then reformulated as equation (8), with $\lambda$ balancing $\mathcal{L}_{s}$ and $\mathcal{L}_{c}$.

$$\min_{U,\mathbf{w}}\ \sum_{v=1}^{m} w_{v}\,\operatorname{tr}\big(U^{\top} L^{(v)} U\big) - \lambda \sum_{v=1}^{m} \operatorname{tr}\big(U U^{\top} U^{(v)} (U^{(v)})^{\top}\big) + \mu \lVert \mathbf{w} \rVert_{2}^{2}, \quad \text{s.t.}\ U^{\top} U = I,\ \sum_{v=1}^{m} w_{v} = 1,\ w_{v} \ge 0. \qquad (8)$$

Equation (8) is a nonconvex optimization problem, which we solve by alternately optimizing $U$ and $\mathbf{w}$. If $\mathbf{w}$ is fixed, equation (8) can be transformed into the following problem.

$$\min_{U}\ \operatorname{tr}\Big(U^{\top}\Big(\sum_{v=1}^{m} w_{v} L^{(v)} - \lambda \sum_{v=1}^{m} U^{(v)} (U^{(v)})^{\top}\Big) U\Big), \quad \text{s.t.}\ U^{\top} U = I. \qquad (9)$$

Equation (9) is equivalent to an eigenvalue problem: when all eigenvalues of the modified Laplacian $\sum_{v} w_{v} L^{(v)} - \lambda \sum_{v} U^{(v)} (U^{(v)})^{\top}$ are solved, the eigenvectors corresponding to the $k$ smallest eigenvalues build the fused embedding matrix $U$. If $U$ is fixed, equation (8) can be converted into the following quadratic programming problem.

$$\min_{\mathbf{w}}\ \sum_{v=1}^{m} w_{v}\,\operatorname{tr}\big(U^{\top} L^{(v)} U\big) + \mu \lVert \mathbf{w} \rVert_{2}^{2}, \quad \text{s.t.}\ \sum_{v=1}^{m} w_{v} = 1,\ w_{v} \ge 0. \qquad (10)$$

By alternately solving equations (9) and (10), we obtain the fused embedding matrix $U$ and the linear relationship $\mathbf{w}$ of the multiple structures. Furthermore, the fused structure (fused adjacency matrix) can be computed as $A = \sum_{v=1}^{m} w_{v} A^{(v)}$.
Algorithm 1 shows the pseudocode for fusing the multiple structures. The algorithm has four steps. The first step (line 1) initializes the linear relationship of the multiple structures. The second step (lines 2-3) computes the Laplacian matrix and the spectral embedding in each view. The third step (lines 4-6) alternately optimizes the fused spectral embedding and the linear relationship of the multiple structures. The last step (line 8) calculates the fused structure as the linear combination of the individual structures. The complexity of this algorithm is governed by the number of views $m$, the number of samples $n$, the dimension $k$ of the selected eigenvectors and the number of optimization iterations, since each iteration performs an eigendecomposition and a quadratic program over the view weights.
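The alternating scheme described above can be sketched as follows. This is a simplified sketch, not the paper's exact procedure: the parameter names (`lam`, `mu`, `iters`) and the use of a Euclidean simplex projection to solve the w-step in closed form are our assumptions.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def fuse_structures(adjs, k, lam=1.0, mu=1.0, iters=10):
    """Alternating optimization sketch of Eqs. (9)-(10):
    fix w -> eigen-solve for the fusion embedding U;
    fix U -> quadratic program over the simplex for the view weights w."""
    m = len(adjs)
    Ls = [np.diag(A.sum(axis=1)) - A for A in adjs]
    Us = [np.linalg.eigh(L)[1][:, :k] for L in Ls]   # per-view embeddings, Eq. (1)
    w = np.full(m, 1.0 / m)
    for _ in range(iters):
        # U-step (Eq. (9)): k smallest eigenvectors of the modified Laplacian
        L_mod = sum(w[v] * Ls[v] for v in range(m)) \
              - lam * sum(Us[v] @ Us[v].T for v in range(m))
        U = np.linalg.eigh(L_mod)[1][:, :k]
        # w-step (Eq. (10)): min_w w^T s + mu ||w||^2 over the simplex,
        # i.e. the projection of -s / (2 mu) onto the simplex
        s = np.array([np.trace(U.T @ Ls[v] @ U) for v in range(m)])
        w = project_simplex(-s / (2.0 * mu))
    A_fused = sum(w[v] * adjs[v] for v in range(m))  # fused adjacency
    return A_fused, U, w
```

Views with a smaller specificity loss receive larger weights, matching the intent of equation (10).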
3.4 Graph convolutional networks
In terms of convolution becoming multiplication in the Fourier domain, a graph convolution is defined as the multiplication between a signal $x$ and a filter $g_{\theta}$ in the spectral domain bruna2013spectral . Further, the graph convolution can be approximated by $K$-th order Chebyshev polynomials kipf2016semi as follows.

$$g_{\theta'} \star x \approx \sum_{k=0}^{K} \theta'_{k}\, T_{k}(\tilde{L})\, x, \qquad (11)$$

where $L = I_N - D^{-1/2} A D^{-1/2}$ is the normalized Laplacian, whose eigendecomposition defines the graph Fourier basis ($I_N$ is the identity matrix and $D$ is the degree matrix of the graph $G$); $\tilde{D}$ and $\tilde{A}$ are, respectively, the degree and adjacency matrices rescaled by the renormalization trick $\tilde{A} = A + I_N$, $\tilde{D}_{ii} = \sum_{j} \tilde{A}_{ij}$; $T_{k}(\cdot)$ expresses the Chebyshev polynomials; and $\tilde{L} = \frac{2}{\lambda_{\max}} L - I_N$. The fused structure can be directly input into the above graph convolutional networks. The forward propagation of a two-layer graph convolutional network can be written as follows.

$$Z = \operatorname{softmax}\big(\hat{A}\,\operatorname{ReLU}\big(\hat{A} X W^{(0)}\big)\, W^{(1)}\big), \quad \hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}, \qquad (12)$$

where $Z$ is the output of the network; $X$ is the representation matrix of the nodes; $W^{(0)}$ and $W^{(1)}$ are, respectively, the first- and second-layer filter parameters; and $\operatorname{ReLU}$ and $\operatorname{softmax}$ are the activation functions of the two layers.
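A minimal numpy sketch of the two-layer forward pass of equation (12), assuming dense matrices and the renormalization trick of kipf2016semi (the function names are ours):

```python
import numpy as np

def normalize_adj(A):
    """Renormalization trick: A^ = D~^{-1/2} (A + I) D~^{-1/2}."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_forward(A, X, W0, W1):
    """Two-layer forward pass of Eq. (12): Z = softmax(A^ ReLU(A^ X W0) W1)."""
    A_hat = normalize_adj(A)
    H = np.maximum(A_hat @ X @ W0, 0)          # ReLU
    logits = A_hat @ H @ W1
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)    # row-wise softmax
```

In SFGCN, the fused adjacency $A = \sum_{v} w_{v} A^{(v)}$ simply replaces the single-view `A` here; nothing else in the network changes, which is why the fusion is separable from the GCN.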
4 Experiments
To evaluate the proposed SFGCN, we carry out experiments from four aspects. First, we compare the proposed SFGCN with the baseline methods, namely graph convolutional networks (GCN) kipf2016semi with the combined view and MultiGCN khan2019multi . Second, we use different multigraph fusion methods to analyze the intrinsic mechanism of the proposed SFGCN. Third, we compare the proposed SFGCN with state-of-the-art methods for node classification in citation networks. Finally, we apply the proposed SFGCN to data with lost structure to demonstrate the importance of complete structure.
4.1 Datasets
We use the paper-citation networks of the citation datasets in the experiments. The three popular datasets usually used in node classification are Cora, Citeseer and Pubmed. The Cora dataset consists of machine-learning publications grouped into classes, together with their undirected graph, and the Citeseer dataset contains classes of scientific papers and their undirected graph. In these two datasets, each publication stands for a node of the related graph and is represented by a one-hot vector, each element of which indicates the presence or absence of a word in the learned dictionary. The Pubmed dataset contains classes of diabetes-related publications and their undirected graph; in this dataset, each paper (each node of the related graph) is described by a term frequency-inverse document frequency (TF-IDF) vector wu2019comprehensive . Table 1 shows the statistics of these datasets (Cora, Citeseer and Pubmed). To obtain the structure of the second view from the publication descriptions, we normalize the cosine similarity between publications; if the similarity is greater than a threshold, we produce an edge between the corresponding nodes in the citation network. This configuration is the same as in khan2019multi .
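The second-view construction can be sketched as follows, assuming row-wise feature vectors; the threshold parameter `tau` is hypothetical, since the value used in the experiments is not specified here:

```python
import numpy as np

def cosine_view_graph(X, tau):
    """Second-view adjacency: connect nodes whose cosine similarity
    exceeds a threshold tau (hypothetical parameter)."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    S = (X / norms) @ (X / norms).T    # pairwise cosine similarity
    A = (S > tau).astype(float)
    np.fill_diagonal(A, 0.0)           # no self-loops
    return A
```

The resulting adjacency is symmetric by construction, matching the undirected graphs used for the first view.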
4.2 Experimental configuration
In the experiments, we follow the configuration of GCN kipf2016semi : we train a two-layer GCN for a maximum number of epochs and test the model on labeled samples. Moreover, we use the same validation set of labeled samples for hyperparameter optimization (the dropout rate for all layers, the number of hidden units and the learning rate).
For the proposed SFGCN, we initialize the linear relationship of the multiple structures and the regularization parameters, and then update these parameters during the iterative optimization. The number of iterations of the algorithm is set according to its convergence in practice.
4.3 Comparison with the baseline methods
The proposed method (SFGCN) is built on GCN kipf2016semi and attempts to mine the different structure information to complete the intrinsic structure of multiview data. Therefore, two baseline methods that find and capture different structure information in different ways, GCN and MultiGCN, are used for processing multiview data based on GCN. GCN for multiview kipf2016semi concatenates the different structures into a sparse block-diagonal matrix in which each block corresponds to a different structure (the adjacency matrix of a different graph). MultiGCN khan2019multi preserves the significant structure among the different structures by manifold ranking. In contrast to these baselines, the proposed SFGCN not only enhances the common structure but also retains the specific structure through structure fusion.
Table 2 shows that the classification performance of SFGCN outperforms that of the baseline methods on Cora, Citeseer and PubMed. However, GCN for multiview is not superior to GCN for a single view, which demonstrates that mining the information of multiview data is a key point for node classification. SFGCN mines the structure information of multiview data for this purpose and obtains better performance.
Method  Cora  Citeseer  PubMed 

GCN kipf2016semi for view1  
GCN kipf2016semi for view2  
GCN kipf2016semi for multiview  
MultiGCN khan2019multi  NA  
SFGCN  83.3  73.4  79.3 
4.4 Structure fusion generalization
Structure fusion (SF), defined in Section 3.3, focuses on the complementation of the distribution structure obtained from the different view data. However, the diffusion yang2012affinity bai2017regularized and propagation LINGF2018 Lin2018structure of the different structures can also describe the complex relationships among the various structures, and form an important part of structure fusion. Therefore, we can define a fused structure by the propagation fusion (PF) of the different structures as follows.
$$A_{\mathrm{PF}} = \sum_{v=1}^{m} \sum_{u \ne v} w_{v} w_{u}\, A^{(v)} A^{(u)}. \qquad (13)$$
The propagation fusion exchanges and interacts the relationship information between the various structures, and mines the neighborhood relationships of the multiple structures. However, high-order iterative multiplication in this propagation can degrade the clustering performance of the original structure. Therefore, we consider only the zero-order (SF) and first-order (PF) multiplications, which gives structure propagation fusion (SPF) as follows.
$$A_{\mathrm{SPF}} = \sum_{v=1}^{m} w_{v} A^{(v)} + \sum_{v=1}^{m} \sum_{u \ne v} w_{v} w_{u}\, A^{(v)} A^{(u)}. \qquad (14)$$
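One plausible reading of the SF, PF and SPF combinations in equations (13)-(14) can be sketched as follows; the exact form of the cross-product terms is an assumption, not the paper's definitive definition:

```python
import numpy as np

def spf_fuse(adjs, w):
    """Structure propagation fusion sketch: zero-order weighted sum (SF)
    plus first-order cross-products between views (PF)."""
    m = len(adjs)
    A_sf = sum(w[v] * adjs[v] for v in range(m))
    A_pf = sum(w[v] * w[u] * adjs[v] @ adjs[u]
               for v in range(m) for u in range(m) if u != v)
    return A_sf + A_pf
```

Because the PF term sums over both orderings (v, u) and (u, v), the result stays symmetric whenever each view adjacency is symmetric.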
To evaluate structure fusion generalization, we compare structure fusion based graph convolutional networks (SFGCN), propagation fusion based graph convolutional networks (PFGCN) and structure propagation fusion based graph convolutional networks (SPFGCN). In Table 3, we observe that the performance of SPFGCN is better than that of the other methods on Cora, Citeseer and PubMed, while the performance of SFGCN is superior to that of PFGCN. Therefore, PF and SF both benefit the further mining of structure information, and the role of SF is more important than that of PF.
Method  Cora  Citeseer  PubMed 

SFGCN  
PFGCN  
SPFGCN  83.5  73.5  80.0 
4.5 Comparison with the state-of-the-art methods
Because graph convolutional networks and structure fusion are the basic ideas behind the proposed SPFGCN, we analyze six related state-of-the-art methods for evaluating SPFGCN. These methods fall into two categories: one exploits node neighborhood information for GCN, and the other fuses node information based on GCN.
Node neighborhood information exploitation attempts to capture the distribution structure of a node's neighborhood to obtain a stable graph structure representation. For example, graph attention networks (GAT) assign different weights to different nodes in a neighborhood Veli2017Graph ; stochastic training of graph convolutional networks (StoGCN) allows sampling an arbitrarily small neighborhood size 2017arXiv171010568C ; and deep graph infomax (DGI) maximizes the mutual information between subgraph representations at different levels centered around nodes of interest, which is a different way of considering neighborhood information Veli2018Deep .
Node information fusion tries to mine the information in multiview node descriptions or multiple structures to complement the differences of multiview data. For instance, large-scale learnable graph convolutional networks (LGCN) fuse neighboring node features by ranking selection to transform graph data into grid-like structures in 1-D format 2018arXiv180803965G ; dual graph convolutional networks (DGCN) consider local and global consistency to fuse graphs of the raw data from different views zhuang2018dual ; and MultiGCN extracts and selects the significant structure from the multiview structures by manifold ranking khan2019multi .
The proposed SPFGCN belongs to the node information fusion category; compared with the above methods, it focuses on the complementarity of multiple structures by mining their commonality, specificity and interactive propagation. Table 4 shows that SPFGCN outperforms the other state-of-the-art methods except DGCN on the Cora and PubMed datasets. Although SPFGCN and DGCN reach the same performance on Cora and PubMed, SPFGCN preserves the high computational efficiency of the original GCN because the structure fusion computation is separable from the GCN.
Method  Cora  Citeseer  PubMed 

GATVeli2017Graph  
StoGCN2017arXiv171010568C  
DGIVeli2018Deep  
LGCN2018arXiv180803965G  
DGCNzhuang2018dual  83.5  80.0  
MultiGCNkhan2019multi  NA  
SFGCN  
SPFGCN  83.5  73.5  80.0 
4.6 Incomplete structure influence
Structure fusion can capture the complementary information of multiple structures, and this complementary information offers an efficient way to handle the influence of incomplete structure. The main causes of incomplete structure are noise and data loss in practical situations. To evaluate the performance of the proposed methods under the condition of incomplete structure, we design an experiment on all datasets. In semi-supervised classification, the distribution structure of the test set is more important than that of the training set, and assures classification performance through the transfer of structure between the training and test sets. Therefore, we delete some of the structure of the test set to destroy this transfer relation and simulate an incomplete structure.
In detail, we proportionally set the elements of the adjacency matrix (the graph structure from the original dataset) corresponding to the test set to zero, and then implement the GCN for multiview kipf2016semi , DGCN zhuang2018dual , SFGCN and SPFGCN methods on all datasets. In Figure 3, we select several degrees of structure loss and construct a classification model for each to evaluate the performance of the compared methods. In particular, the classification accuracy of SPFGCN descends only slightly as the structure loss increases on Cora, Citeseer and PubMed. We observe that the proposed SFGCN and SPFGCN are more stable and robust to an increasing degree of structure incompleteness than GCN for multiview and DGCN. In this setting, the performance of SPFGCN is better than that of SFGCN, while GCN outperforms DGCN on the Cora dataset and DGCN is superior to GCN on the Citeseer and PubMed datasets. The reasons are analyzed in Section 4.7.
4.7 Experimental results analysis
In our experiments, we compare the proposed method with the following methods: the baseline methods (MultiGCN khan2019multi ; GCN kipf2016semi for multiview and for view 1 and view 2; Section 4.3), the structure fusion generalization methods (PFGCN and SFGCN; Section 4.4), and six state-of-the-art methods (GAT Veli2017Graph , StoGCN 2017arXiv171010568C , DGI Veli2018Deep , LGCN 2018arXiv180803965G , DGCN zhuang2018dual and MultiGCN khan2019multi ; Section 4.5). These methods all exploit graph structure mining with graph convolutional networks for semi-supervised classification, in different ways. In contrast to the other methods, the proposed SFGCN and SPFGCN focus on the complementary relationship of multiple structures by considering their commonality and specificity. Moreover, the proposed SPFGCN not only captures the optimized distribution of the fused structure but also emphasizes the interactive propagation between the different structures. From the above experiments, we can make the following observations.

The performance of SFGCN is superior to that of the baseline methods (MultiGCN khan2019multi ; GCN kipf2016semi for multiview and for view 1 and view 2; Section 4.3). GCN kipf2016semi constructs a general graph convolutional architecture via a first-order approximation of spectral graph convolutions, greatly improving the computational efficiency of graph convolutional networks, and provides a feasible deep mining framework for effective semi-supervised classification. To use multiple structures, GCN for multiview takes as input a sparse block-diagonal matrix, each block of which corresponds to a different structure. The relationships between the blocks (the different structures) are therefore ignored, which leads to the poor performance of GCN for multiview (sometimes worse than GCN for view 1). In contrast, MultiGCN khan2019multi captures the relationships of the different structures to preserve the significant structure of the merged subspace. However, MultiGCN neglects the optimized fusion relationship of the different structures, while the proposed SFGCN finds these relationships by jointly considering the commonality and specificity losses of the multiple structures, thereby obtaining better semi-supervised classification performance.

SPFGCN shows the best performance in the structure fusion generalization experiments, and the performance of SFGCN is better than that of PFGCN. The main reason is that SFGCN emphasizes complementary information through the optimized fusion relationship of the different structures, while PFGCN tends toward interactive propagation through the diffusion influence between the different structures. Complementary fusion plays a more important role than interactive propagation because of the specific structure of each individual view, but both fusion and propagation contribute to mining the multiple structures and enhancing semi-supervised classification performance.

The performance improvement of SPFGCN over the six state-of-the-art methods varies. SPFGCN shows performance similar to LGCN and DGCN on Cora and to DGCN on PubMed; in the other cases, SPFGCN achieves larger improvements. The main reason is that LGCN emphasizes neighboring node feature fusion for a stable node representation and DGCN correlates local and global consistency to complement the different structures. The proposed SPFGCN expects not only to capture the structure commonality for complementing the different information, but also to preserve the structure specificity for mining discriminative information. Therefore, SPFGCN improves the classification performance in most experiments, and at least matches the best performance of the other methods in all experiments. In addition, since SPFGCN is based on the GCN framework, it has an efficient implementation like GCN; in our experiments, the computational efficiency of SPFGCN is the highest among the state-of-the-art methods (details of the computational efficiency are in Section 3.2).

Structure reflects the distribution of the data and is very important for learning a GCN model; incomplete structure can evaluate the robustness of the related GCN models. We select the classical GCN, the state-of-the-art DGCN, SFGCN and SPFGCN for the robustness test. The proposed SPFGCN shows the best performance on the three datasets. On Cora, GCN performs better than DGCN, while on Citeseer and PubMed GCN performs worse than DGCN. This shows that the local and global consistency used to fuse graph information in DGCN tends to be unstable because of the tight consistency constraint under incomplete structure, while the loose constraint of GCN on incomplete structure correlation also leads to worse performance. The proposed SPFGCN compromises between these constraints by optimizing the weights of the multiple structures to balance the incomplete structure information, and also connects the different structures to complement the different information. Therefore, the proposed SPFGCN obtains the best performance in the experiments.

The proposed SPFGCN aims to mine the commonality and the specificity of multiple structures. The commonality describes the similarity of the structures via the Grassmann manifold metric, while the specificity describes the differences of the structures via spectral embedding. In the proposed method, the specificity is constructed on top of the commonality. Therefore, we execute an ablation experiment that preserves the commonality loss by deleting the specificity loss from the total loss. The resulting performance on Cora, Citeseer and PubMed is clearly worse than that of the proposed SFGCN and SPFGCN, which balance the commonality and specificity to mine suitable weights for the multiple structures.
5 Conclusion
We have proposed structure fusion based on graph convolutional networks (SFGCN) to address the diversity and complexity of multiview data for semi-supervised classification. SFGCN not only adapts spectral embedding to preserve the specificity of each structure, but also models the relationships among the different structures to find the commonality of the multiple structures via a manifold metric. Furthermore, the proposed structure propagation fusion based graph convolutional networks (SPFGCN) combine the structure fusion framework with structure propagation to generate a more complete structure graph and improve semi-supervised classification. Finally, the optimization learning of SFGCN obtains both suitable weights for the different structures and the merged embedding space. To evaluate the proposed SFGCN and SPFGCN, we carry out comparison experiments with the baseline methods, different multigraph fusion methods and state-of-the-art methods, as well as a lost-structure analysis, on the Cora, Citeseer and Pubmed datasets. The experimental results demonstrate that SFGCN and SPFGCN obtain promising results in semi-supervised classification.
6 Acknowledgements
The authors would like to thank the anonymous reviewers for their insightful comments, which helped improve the quality of this paper. This work was supported by NSFC (Program Nos. 61771386, 61671376 and 61671374) and the Natural Science Basic Research Plan in Shaanxi Province of China (Program No. 2017JZ020).
References
 (1) O. Veksler, Efficient graph cut optimization for full crfs with quantized edges, IEEE Transactions on Pattern Analysis and Machine Intelligence, doi:10.1109/TPAMI.2019.2906204.
 (2) F. Monti, M. Bronstein, X. Bresson, Geometric matrix completion with recurrent multigraph neural networks, in: Advances in Neural Information Processing Systems, 2017, pp. 3697–3707.
 (3) R. Ying, R. He, K. Chen, P. Eksombatchai, W. L. Hamilton, J. Leskovec, Graph convolutional neural networks for web-scale recommender systems, in: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2018, pp. 974–983.
 (4) M. Defferrard, X. Bresson, P. Vandergheynst, Convolutional neural networks on graphs with fast localized spectral filtering, in: Advances in Neural Information Processing Systems, 2016, pp. 3844–3852.
 (5) J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, G. E. Dahl, Neural message passing for quantum chemistry, in: Proceedings of the 34th International Conference on Machine Learning, Volume 70, JMLR.org, 2017, pp. 1263–1272.
 (6) M. R. Khan, J. E. Blumenstock, Multi-GCN: Graph convolutional networks for multi-view networks, with applications to global poverty, arXiv preprint arXiv:1901.11213.
 (7) G. Lin, H. Zhu, X. Kang, C. Fan, E. Zhang, Multi-feature structure fusion of contours for unsupervised shape classification, Pattern Recognition Letters 34 (11) (2013) 1286–1290.
 (8) G. Lin, H. Zhu, X. Kang, C. Fan, E. Zhang, Feature structure fusion and its application, Information Fusion 20 (2014) 146 – 154.
 (9) G. Lin, H. Zhu, X. Kang, Y. Miu, E. Zhang, Feature structure fusion modelling for classification, IET Image Processing 9 (10) (2015) 883–888.
 (10) G. Lin, G. Fan, L. Yu, X. Kang, E. Zhang, Heterogeneous structure fusion for target recognition in infrared imagery, in: 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2015, pp. 118–125.
 (11) G. Lin, G. Fan, X. Kang, E. Zhang, L. Yu, Heterogeneous feature structure fusion for classification, Pattern Recognition 53 (2016) 1 – 11.
 (12) G. Lin, C. Fan, H. Zhu, Y. Miu, X. Kang, Visual feature coding based on heterogeneous structure fusion for image classification, Information Fusion 36 (2017) 275 – 283.
 (13) G. Lin, K. Liao, B. Sun, Y. Chen, F. Zhao, Dynamic graph fusion label propagation for semi-supervised multi-modality classification, Pattern Recognition 68 (2017) 14–23.
 (14) G. Lin, Y. Chen, F. Zhao, Structure propagation for zero-shot learning, arXiv preprint arXiv:1711.09513.
 (15) G. Lin, C. Fan, W. Chen, Y. Chen, F. Zhao, Class label autoencoder for zero-shot learning, arXiv preprint arXiv:1801.08301.
 (16) G. Lin, Y. Chen, F. Zhao, Structure fusion and propagation for zero-shot learning, in: Chinese Conference on Pattern Recognition and Computer Vision (PRCV), Springer, 2018, pp. 465–477.
 (17) G. Lin, W. Chen, K. Liao, X. Kang, C. Fan, Transfer feature generating networks with semantic classes structure for zero-shot learning, arXiv preprint arXiv:1903.02204.
 (18) X. Yang, L. J. Latecki, Affinity learning on a tensor product graph with applications to shape and image retrieval, in: CVPR 2011, IEEE, 2011, pp. 2369–2376.
 (19) X. Yang, L. Prasad, L. J. Latecki, Affinity learning with diffusion on tensor product graph, IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (1) (2012) 28–38.
 (20) S. Bai, Z. Zhou, J. Wang, X. Bai, L. J. Latecki, Q. Tian, Automatic ensemble diffusion for 3d shape and image retrieval, IEEE Transactions on Image Processing 28 (1) (2019) 88–101.
 (21) Q. Li, S. An, L. Li, W. Liu, Semisupervised learning on graph with an alternating diffusion process, arXiv preprint arXiv:1902.06105.
 (22) S. Bai, X. Bai, Q. Tian, L. J. Latecki, Regularized diffusion process for visual retrieval, in: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI Press, 2017, pp. 3967–3973.
 (23) S. Bai, Z. Zhou, J. Wang, X. Bai, L. Jan Latecki, Q. Tian, Ensemble diffusion for retrieval, in: Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 774–783.
 (24) J. Bruna, W. Zaremba, A. Szlam, Y. LeCun, Spectral networks and locally connected networks on graphs, arXiv preprint arXiv:1312.6203.
 (25) T. N. Kipf, M. Welling, Semi-supervised classification with graph convolutional networks, arXiv preprint arXiv:1609.02907.
 (26) J. Chen, T. Ma, C. Xiao, FastGCN: Fast learning with graph convolutional networks via importance sampling, arXiv preprint arXiv:1801.10247.
 (27) W. Hamilton, Z. Ying, J. Leskovec, Inductive representation learning on large graphs, in: Advances in Neural Information Processing Systems, 2017, pp. 1024–1034.
 (28) H. Dai, Z. Kozareva, B. Dai, A. Smola, L. Song, Learning steady-states of iterative algorithms over graphs, in: International Conference on Machine Learning, 2018, pp. 1114–1122.
 (29) Z. Liu, C. Chen, L. Li, J. Zhou, X. Li, L. Song, Y. Qi, Geniepath: Graph neural networks with adaptive receptive paths, arXiv preprint arXiv:1802.00910.
 (30) C. Zhuang, Q. Ma, Dual graph convolutional networks for graph-based semi-supervised classification, in: Proceedings of the 2018 World Wide Web Conference on World Wide Web, International World Wide Web Conferences Steering Committee, 2018, pp. 499–508.
 (31) D. Van Tran, N. Navarin, A. Sperduti, On filter size in graph convolutional networks, arXiv preprint arXiv:1811.10435.
 (32) T. Xia, D. Tao, T. Mei, Y. Zhang, Multiview spectral embedding, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 40 (6) (2010) 1438–1446.
 (33) G.-F. Lin, H. Zhu, C.-X. Fan, E.-H. Zhang, L. Luo, Multi-cluster feature selection based on Grassmann manifold, Jisuanji Gongcheng / Computer Engineering 38 (16) (2012) 178–181.
 (34) P. Turaga, A. Veeraraghavan, A. Srivastava, R. Chellappa, Statistical computations on Grassmann and Stiefel manifolds for image and video-based recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence 33 (11) (2011) 2273–2286.
 (35) X. Dong, P. Frossard, P. Vandergheynst, N. Nefedov, Clustering on multi-layer graphs via subspace analysis on Grassmann manifolds, IEEE Transactions on Signal Processing 62 (4) (2013) 905–918.
 (36) Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, P. S. Yu, A comprehensive survey on graph neural networks, arXiv preprint arXiv:1901.00596.
 (37) P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, Y. Bengio, Graph attention networks, arXiv preprint arXiv:1710.10903.
 (38) J. Chen, J. Zhu, L. Song, Stochastic training of graph convolutional networks with variance reduction, arXiv preprint arXiv:1710.10568.
 (39) P. Veličković, W. Fedus, W. L. Hamilton, P. Liò, Y. Bengio, R. D. Hjelm, Deep graph infomax, arXiv preprint arXiv:1809.10341.
 (40) H. Gao, Z. Wang, S. Ji, Large-scale learnable graph convolutional networks, arXiv preprint arXiv:1808.03965.